Open multiple files using ex command :edit and a wildcard Is it possible to load multiple files at the same time using :edit? I tried: :e ~/dev/myproject/*.c and it did not work. (Note this is different from "How to open multiple files matching a wildcard expression?" because in that question he is not using :edit) But the answer is the same -- :edit doesn't take more than one argument, so use one of the commands that does. @jamessan I do not want to give multiple args, I want to use ONE argument which is a wildcard. Yes, and that one argument is expanded to multiple filenames, which :edit doesn't accept. :edit only accepts a single filename, so use a command which supports multiple filenames. If your wildcard only expands to a single filename, you can simply use the command you gave, e.g. :e *.txt; however, Vim will complain if this expands to more than one file. So you can e.g. use :argadd *.txt|:next or similar. Another related question: How can I open multiple tabs at once?
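For reference, the argument-list commands the answer points to look like this (the path is the asker's example):

```vim
" :edit takes a single file, so use the argument-list commands instead:
:args ~/dev/myproject/*.c     " replace the argument list with the matches
:argadd ~/dev/myproject/*.c   " or append the matches to the current list
:next                         " move to the next file in the argument list
```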
common-pile/stackexchange_filtered
Location of windows command prompt command files In Unix, the terminal commands are in the /etc folder. Similarly, I'd like to know where the command files of Windows are located, e.g., mkdir, cd, etc. Thanks in advance. c:\windows\system32. also see this link: http://superuser.com/questions/229945/where-are-the-standard-windows-prompt-commands-files @morpheus Thanks! Could you provide it as an answer? You can use where to find where the executables are located. Some, as @sb9 said, are not separate exes; they are built in. Using where you can find out if they have their own exe file or not. where ftp where at where cd In this case cd will error as it is built in. Some commands are located in windows\system32, and some others (like mkdir and cd) are built in internally to the shell cmd.exe, so you won't find them on the hard disk.
common-pile/stackexchange_filtered
asp:checkbox color change when checked or unchecked I have a checkbox inside a GridView and I want to change its color to red when unchecked and to green when checked. How can I do this? <ItemTemplate> <asp:CheckBox ID="checkboxAttendanceStatus" BackColor="Red" runat="server" AutoPostBack="true" /> </ItemTemplate> Is your checkbox posting back when checked? Yes, it is set to autopostback. You can set the OnCheckedChanged server-side event for the check box as in the markup below. Then, all you need is to write code in the server-side event to change the BackColor. Markup <ItemTemplate> <asp:CheckBox ID="checkboxAttendanceStatus" BackColor="Red" runat="server" AutoPostBack="true" OnCheckedChanged="checkboxAttendanceStatus_CheckedChanged"/> </ItemTemplate> C# code for server-side event protected void checkboxAttendanceStatus_CheckedChanged(object sender, EventArgs e) { CheckBox chk = sender as CheckBox; if (chk.Checked) { chk.BackColor = System.Drawing.Color.Green; } else { chk.BackColor = System.Drawing.Color.Red; } } Another approach using client-side code without any C# If you wanted to implement this requirement completely on the client side, then you could add the following JavaScript to your aspx page. You do not need to subscribe to the CheckedChanged event of the checkbox nor write any C# code. This approach needs only JavaScript/jQuery code. <script type="text/javascript"> function pageLoad() { var checkboxes = $("[id*='checkboxAttendanceStatus']"); checkboxes.each(function (i) { if (this.checked) { $(this).parent().css("background-color", "green"); } else { $(this).parent().css("background-color", "red"); } }); } </script> Client-side approach is not working. How do I invoke the pageLoad()? In ASP.NET, you don't have to invoke the pageLoad method since the ASP.NET framework will automatically invoke it once the page scripts have loaded and all objects have been created. 
You can read more about it at this MSDN link: https://msdn.microsoft.com/en-us/library/bb383829.aspx Also, make sure that jquery is included in the aspx page, else client-side approach will not work. The easiest way is to use CSS with an adjacent sibling selector for the label since the generated html will look like this: <input id="id" type="checkbox" name="id" /><label for="id">Label</label> So set the color of the label based on the checked status of the CheckBox. <style> input[type="checkbox"]:checked + label { color: blue; background-color: yellow; } input[type="checkbox"] + label { color: red; background-color: green; } </style> If you want to style the actual checkbox itself, you're gonna need a jQuery plugin or more complex CSS tricks. See this answer here
common-pile/stackexchange_filtered
How to try out a product you do not intend to buy? An organization I'm in offered a "test drive" event of a car manufacturer to its members. Since I have an interest in cars, I opted to go. The purpose of this event is obviously to promote their cars. However, this is a high-end car manufacturer, and I do not foresee that I would be able to afford their cars in the near future. I, therefore, have no intention to even consider buying one; I just want to know how it drives. Considering that the car manufacturer needs to allocate resources (cars and staff) for the event, I feel it would be rude to take their offer knowing I would never turn out to be a customer of theirs. If the salesperson asks me "why are you here", how can I reply without being seen as rude? "Is it rude" is an opinion based question that we can't really answer in this format. @apaul hmm I see quite a number of "is it rude" questions on this stack. Apologies I'm new here. How can I make it not opinion-based? @apaul: I think this might be ok, as the goal of such events is marketing-wise usually well defined, and hence I could see this being not too opinion based. But I will wait a bit before answering to see if others share your view. @kevin According to the help center this question is very much on topic if you reword it to ask for "the written and unwritten - but well-established and expected - rules or conventions of behavior in a specific setting (also called etiquette)." Trim off the "is it rude" and focus on how to communicate with the sales people. I'm certainly not unquestionable here, just my experience on this stack. I would dispute the comment saying that it is opinion based. It is something that was offered to you, and if you want to do it then go for it. It is not rude, and no organisation putting on any event has any expectation that its attendees will go into business with them. 
Usually these types of events, whether for cars or otherwise, are about networking and getting that company's name out there, so if you are ever in the position to make a purchase, you will think of them. There is no expectation to buy anything and therefore it is not rude. As for what to say if asked why you are there: be honest, not brutally, but what you said here is fine: I like cars, and I wanted to see how [insert car brand] felt to drive. This is exactly the response expected from this kind of event and obviously if it turns into a sale on the day, they would be over the moon, but it is not expected of anyone. All of the above comes from personal experience at these kinds of events, both cars (Ferrari) and many other industries. Precisely - the car company knows what is going on, so they won't be insulted in the least. Last weekend I was at an event where Ford had a bunch of new 4x4 pickups and would let you drive them around the 4x4 track. They knew that everyone there already had a 4x4 vehicle, knew that nobody was buying one on the spot, but were there for the awareness and marketing.
common-pile/stackexchange_filtered
How to pre-load a fragment before showing? In my activity I have several fullscreen fragments, each of them downloads some data from the web (using an AsyncTask) and shows it to the user. The fragments are shown one at a time. To be more specific, each of the fragments reads some URLs from a SQLite database and fetches the content before showing it in a list, if that matters. The data loading tasks can be done in the onCreate() function. I would like to preload all the fragments (or at least start the downloading) while I show a splash screen, pretty much like a ViewPager preloads its fragments. I am wondering how to achieve this? I tried initializing/creating all the fragments in the onCreate() function of my activity, hoping the onCreate() of the fragments could be called earlier, but the onCreate() and onCreateView() functions of the fragments are not called until a fragment is about to be shown to the user. It sounds like you need to separate your model (the data which is downloaded) from your view (the fragments). One way to do this is to start the downloading AsyncTasks in your activity, rather than starting them in each fragment. Then when the fragments are eventually displayed they can show the data which has been downloaded (or a spinner or some other indication that the download process is still executing). That is true, and I thought about it. The reason I am not doing it now is that I have several fragments attached to the activity, each of them hosting some data. If I put all the data download/update tasks into the activity, it ends up a cumbersome activity, and it also creates very strong coupling between my hosting activity and fragments, which defeats the purpose of fragments to some degree. A Fragment's onActivityCreated(Bundle) tells the fragment that its activity has completed its own Activity.onCreate(). 
So your solution to this problem is to initialize, create, or do the stuff which you want to preload before the fragments are created, inside your Fragment's onActivityCreated(Bundle); see the documents for the fragment lifecycle. The earliest place you can start loading is either in a static singleton or in the Application class. What I ended up doing is the following: (1) add all the fragments into the container, so they (and their views) will be created and initialized; (2) hide those not in use and only show the one I would like the user to see; (3) use FragmentTransaction.show()/FragmentTransaction.hide() to manipulate the visibility instead of FragmentTransaction.add() or FragmentTransaction.replace(). If you follow this approach, be warned that all the fragments will be cached in memory. But the benefit is that the switch between fragments will be fast and efficient. "Fast" yes, "efficient" no @MartinKonecny, I know what you are saying. But in my app, the user will flip between all fragments fairly frequently. I don't want them to wait for the data to load. Do you have a more efficient way to implement the logic? Can you elaborate on the first step? What do you mean by 'add all the fragments into the container'? Thanks. I was facing the same problem and then I used this method. Suppose we have an EditText in the fragment; then we can use code like this @Override public void onViewCreated(View view, Bundle savedInstanceState) { //this method allows you to initialize or instantiate fragment views before showing them in an activity, considering the id is "editTextEditProfileFirstName" EditText firstName = (EditText) getActivity().findViewById(R.id.editTextEditProfileFirstName); firstName.setText("This is my first name", TextView.BufferType.EDITABLE); super.onViewCreated(view, savedInstanceState); }
common-pile/stackexchange_filtered
Support of $x$ also belonging to the set of best responses when $x$ is a best response If $\mathbf x$ belongs to the set of best responses to $\mathbf y$, i.e. to $BR(\mathbf y) = \arg\max_{\mathbf x} \mathbf x \cdot A \mathbf y$, why do all of the pure strategies in the support of $\mathbf x$ also belong to $BR(\mathbf y)$?
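Since the question went unanswered in the thread, here is a sketch of the standard argument: the expected payoff of a mixed strategy is a convex combination of the pure-strategy payoffs, so the maximum can only be attained if every strategy carrying positive weight attains it.

```latex
Let $v = \max_{\mathbf z} \mathbf z \cdot A \mathbf y$ and write $(A\mathbf y)_i$ for the
payoff of the $i$-th pure strategy against $\mathbf y$. For any mixed strategy $\mathbf x$,
\[
  \mathbf x \cdot A \mathbf y = \sum_i x_i \,(A\mathbf y)_i \le \max_i \,(A\mathbf y)_i = v ,
\]
since the left-hand side is a convex combination of the $(A\mathbf y)_i$. Now suppose
$\mathbf x \in BR(\mathbf y)$, so $\mathbf x \cdot A \mathbf y = v$, but some $k$ with
$x_k > 0$ has $(A\mathbf y)_k < v$. Then
\[
  v = \sum_i x_i \,(A\mathbf y)_i < \sum_i x_i \, v = v ,
\]
a contradiction. Hence $(A\mathbf y)_k = v$ for every $k$ in the support of
$\mathbf x$, i.e. every such pure strategy is itself a best response to $\mathbf y$.
```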
common-pile/stackexchange_filtered
Dynamically change options of a jQuery plugin with callbacks There is a plugin that is instantiated with default settings, with the possibility of overriding them, for example: $('#test').Plugin({ option1: 'blahblah', option2: 'test' }); It also has the possibility of adding callbacks like this: funcCallback = function(t){ //Do something with t to change option1 }; $('#test').Plugin({ option1: 'blahblah', option2: 'test', callback: funcCallback }); Inside the plugin I can see that the callbacks are invoked like this: if (this.settings.callback) { this.settings.callback.call(this); } Is there a way to change option1 and option2 with that callback? I know I can create a function inside the plugin to specifically change the settings, but I would like to use the plugin unchanged. Short answer is no. You would need to amend the plugin to expose its internal settings object for you to be able to amend it from within the callback. If it doesn't do that by default, then you cannot achieve what you require here. Wouldn't the context be the plugin itself, as it is being bound via the call invocation? Thereby this.settings.option1 and this.settings.option2 should work. The answer really depends on how the plugin is constructed. What plugin is it? The context will be the plugin itself, as it is being bound via the call invocation. funcCallback = function(t){ this.settings.option1 = 'newblahblah'; }; So simple... I kept insisting on doing t.settings.option1 It depends on the design of the plugin. For example, the jQuery UI plugins generally allow you to change options dynamically by using the option method, like this: $("#test").dialog("option", { autoOpen: true, disabled: false }); If the plugin is written using the jQuery Widget Factory I think it gets this automatically. But I've seen many simple plugins that don't provide any way to modify the settings dynamically. Instead, you may have to create a new element and initialize the plugin with the new options.
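The `call(this)` pattern the accepted comment relies on can be seen in a framework-free sketch (no jQuery; `Plugin` here is a hypothetical stand-in for the real plugin, not its actual code):

```javascript
// Minimal stand-in for a plugin that stores settings and invokes a
// callback with `call(this)`, as the question's snippet shows.
function Plugin(options) {
  this.settings = Object.assign({ option1: 'default', option2: 'default' }, options);
  if (this.settings.callback) {
    // `this` inside the callback is the plugin instance, so the
    // callback can reach and mutate this.settings directly.
    this.settings.callback.call(this);
  }
}

const instance = new Plugin({
  option1: 'blahblah',
  option2: 'test',
  callback: function () {
    this.settings.option1 = 'newblahblah'; // mutate via the bound context
  }
});

console.log(instance.settings.option1); // "newblahblah"
```

Note the callback must be a regular `function`, not an arrow function, or `this` would not be rebound by `call`.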
common-pile/stackexchange_filtered
Exchange 2010 Block Internet Email, with exceptions Background: We are in the process of our Exchange 2003 to Exchange 2010 migration (SBS2003 to Win2008/Exchange2010). All the mailboxes have been transferred to the Exchange 2010 server, but we are still using the SMTP Connector of Exchange 2003 to send external emails. The company has a policy that not all users are allowed to send/receive external emails. This 'rule' has been applied to the Exchange 2003 SMTP connector for a couple of years, based on this article: "Restricting Users from Sending Internet Based Email". A quick overview: Basically create an AD security group called "No Internet Email" and assign this group to the Connector's Delivery Restrictions - Reject Message From field. All one now has to do is to add users to the "No Internet Email" security group in order to block those users from sending emails. The problem: I've been instructed to keep the email restrictions for the "No Internet Email" group in place, but I must allow the restricted users to be able to send/receive internet emails to/from a select few domains, i.e. certain customers, etc. How would I go about doing this? If I need to change the way the users are blocked from sending/receiving emails on Exchange 2010 instead of using the Connector route as described in the above mentioned article, then so be it. Any help would be greatly appreciated. This shouldn't be too difficult using transport rules. I am on Exchange 2007 but the process is extremely similar... Restricting outbound internet mail for some users Create a Distribution Group and add the recipients you want to prevent from sending internet email as members of the group. Create a Transport Rule 1) Fire up the Exchange console | Organization Configuration | Hub Transport | Transport Rules tab | click New Transport Rule 2) Enter a name for the rule – e.g. 
Rule-NoInternetMail 3) On the Conditions page, select “From a member of a distribution list“ 4) In the rule description, click the link for distribution list (underlined) 5) Click Add | Select the distribution list “DG-NoInternetMail” 6) Under Conditions, select a second condition “Sent to users inside or outside the organization“ 7) In the rule description, click Inside (underlined) | change scope to Outside 8) Click Next 9) On the Actions page, select “send bounce message to sender with enhanced status code“ 10) If you want to modify the text of the bounced message (optional): In the description, click “Delivery not authorized, message refused” | enter new message text 11) Click Next | verify the rule conditions and action in the summary 12) Click New | click Finish Restricting inbound internet mail for some users Using the Exchange console: Expand Recipient Configuration > select recipient > recipient Properties | Mail Flow Settings page | Message Delivery Restrictions | Properties Select “require that senders are authenticated“ (source: http://exchangepedia.com/2007/07/how-to-prevent-a-user-from-sending-and-receiving-internet-mail.html) Ok, and if I want those users that are restricted from sending internet email to be allowed to send/receive only from a select few email domains? Any idea on how to go about it? Again, using the transport rule, you could use the 'when the From address contains specific words' condition to filter out entire domains. If I set the delivery restrictions, will that stop delivery from the specified "allowed domains" as well, or will it allow delivery from those domains? If a user sends email to an address in a specified domain, and the recipient replies to the email, will the user still receive it when inbound email is restricted as mentioned above?
common-pile/stackexchange_filtered
Error 500 Rails + Passenger + Apache I've configured Apache 2.4.1 + Ruby on Rails using Passenger on Ubuntu 14.04, but when I try to access my application I get a 500 error... the last log of Apache and my VirtualHost config are here: https://gist.github.com/anonymous/f27e04d8ed014554c7b64bb54321a9d7 Which version of Passenger are you running? Could be related to this issue which has been fixed as of Passenger 4.0.58 I am using the latest version of Passenger
common-pile/stackexchange_filtered
Modifying state & the correct way to update my reducer? (React / Redux) I believe that I have been modifying my state for quite some time now. I wanted to do the following and was wondering why it did not work: case "SAVE_DATA_TO_PERSON" : let newState = {...state, data: {start: action.payload.start, end: action.payload.end}}; return newState; I am here creating a new object, taking the old state, and adding my new data. While it does seem to make a difference, it does not keep the data for long. After firing other actions this is just gone. I wonder why? This is how I used to do it, and it seems to work: case "SAVE_DATA_TO_PERSON" : let newState = state; newState.audio = {start: action.payload.start, end: action.payload.end}; return newState; But here, it seems, I am modifying state. I would just like to know whether my first solution is the correct one, and my second solution here is indeed modifying state. Of course the second solution is modifying state directly--how would it not be? I guess I thought I had copied the state into newState (as a new object), and was not working with a reference. Nope, they refer to the same state object. In the second option you are modifying the state directly. The reducer in a React-Redux application should change state without mutating the existing state. Immutability is a requirement of Redux because Redux represents your application state as "frozen object snapshots". With these discrete snapshots, you can save your state, or reverse state, and generally have more "accounting" for all state changes. Ref. You could consider using an immutable utility/library such as dot-prop-immutable or Immutable.js Related article from the redux docs: http://redux.js.org/docs/recipes/reducers/ImmutableUpdatePatterns.html
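The difference between the two reducers can be demonstrated without Redux itself (the state shape below is a made-up example, not the asker's actual store):

```javascript
const state = { person: 'Ann', data: null };

// Immutable update: the spread produces a brand-new object,
// and the old state object is left untouched.
function reducerImmutable(state, action) {
  return { ...state, data: { start: action.payload.start, end: action.payload.end } };
}

// Mutating update: `newState` is just another name for the same object,
// so assigning to it changes the original state in place.
function reducerMutating(state, action) {
  const newState = state;
  newState.data = { start: action.payload.start, end: action.payload.end };
  return newState;
}

const action = { type: 'SAVE_DATA_TO_PERSON', payload: { start: 1, end: 2 } };

const next = reducerImmutable(state, action);
console.log(next === state); // false: a genuinely new object
console.log(state.data);     // null: old state untouched

const mutated = reducerMutating(state, action);
console.log(mutated === state); // true: same reference, state was mutated
```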
common-pile/stackexchange_filtered
I need to find a fixed left-side 100% height menu I'm Matthew and I'm new here. I need to find a menu which looks like the one here: http://noblebank.pl/ (left menu) I have searched for half a day and I have no more ideas how to find something like that :) I will be grateful for any help :) EDIT: The most important thing in this menu is the sliding submenu after clicking its parent, and the submenu should also be made in 100% height. This is a fixed sidebar. Look at this jsfiddle http://jsfiddle.net/U8HGz/1/ Just inspect the sidebar element with Firebug or equivalent and see how they did it. I made a little example for you; basically you need a width, position and 100% height position:fixed; height:100%; width:200px; To position it you can set the following options left:0; bottom:0; top:0; http://jsfiddle.net/XSxqA/ This is cool. I've just even tried to do something by myself, but the most important thing in this menu is the sliding submenu after clicking its parent, and the submenu should also be made in 100% height. You'll need to use jQuery for that; have a look at animate - http://api.jquery.com/animate/
common-pile/stackexchange_filtered
Rails+Backbone+Faye messaging, how to instantiate model and remove its element from DOM for all subscribers? I am using Rails + Backbone + Faye to make a sample chat application. I'm currently able to use Faye's messaging capabilities to write to the DOM on a create event, though I'm not actually instantiating a backbone model. Ala Ryan Bates' tutorial I'm just calling inside of create.js.erb <% broadcast "/messages/new" do %> $("#chats-table").append("<%= escape_javascript render :partial => "chat", :locals => { :chat => @chat } %>"); <% end %> And publishing it in another javascript: faye.subscribe("/messages/new", function(data) { eval(data); }); I'd like to refactor this a bit and leverage backbone's models. A good use case would be the delete method. My chat model is bound to a click event, delete which calls: model.destroy(); this.remove(); Backbone will call the delete method and put a delete request to /entity/id That also dispatches rails' /views/delete.js.erb'. In there I call a helper method which publishes a message with Ruby code. <% broadcast "/messages/delete" do %> <%= @chat.to_json.html_safe; %> <% end %> listener var faye = new Faye.Client('http://<IP_ADDRESS>:9292/faye'); faye.subscribe("/messages/delete", function(data) { }); Here, I was wondering if i could instantiate the deleted backbone model somehow so I could push that event onto everybody's screen and remove it from the DOM. Basically, I would like to call this.remove(); inside the faye client instead of in the chat model. Is this even possible? Well, you should do remove on the model and let the UI listen for the event and refresh itself. Once the UI reflects the model changes you are golden. The problem you have here is that Backbone collections/models are not an identity map. So the model object you are dealing with in the view is not the same you will instantiate and remove from the faye callback. 
If your messages collection is globally accessible, then I suggest you get the instance from there and remove it. Thanks. Yes that's what I had to do. I passed the id of the item, got the model from the collection, and removed its element from the DOM using the model.
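The accepted approach can be sketched framework-free: keep a globally accessible collection, and when a delete message arrives over the pub/sub channel, look the model up by id and remove it. The `chats` map and the payload shape below are illustrative stand-ins, not Backbone's or Faye's actual API.

```javascript
// A globally accessible "collection" of chat models, keyed by id.
const chats = new Map([
  [1, { id: 1, text: 'hello' }],
  [2, { id: 2, text: 'world' }],
]);

// Handler for a "/messages/delete" broadcast carrying the chat as JSON
// (e.g. the @chat.to_json payload from the Rails helper).
function onDeleteMessage(data) {
  const payload = JSON.parse(data);
  const model = chats.get(payload.id);
  if (model) {
    chats.delete(payload.id);
    // In Backbone, collection.remove(model) here would trigger the
    // view's event listener, which calls this.remove() on its element.
  }
}

onDeleteMessage('{"id": 2}');
console.log(chats.has(2)); // false: the model is gone for this subscriber
```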
common-pile/stackexchange_filtered
Web based workflow designer that can be used with ASP.Net application? I have been diving heavily into Workflow Foundation 4 and slowly realized we cannot host the designer in ASP.Net. What’s more, the designer is not suited to a non-developer. We are looking for a web based solution we can tie into an ASP.Net application. Workflows will not be incredibly complicated. As an example, say we have a Request for Information (RFI) document that is created in our system. The flow for that RFI is as follows: When the RFI is created UserA and UserB need to be notified. UserA is responsible for approving the RFI. UserA needs to respond within in 3 days. UserA will be notified after the 3rd day. If after 6 days close the RFI and notify UserA and UserB. Workflows will only serve as the communication flow between users and nothing more. So, the designer will allow users to define who gets notified when and no expressions will need to be compiled (like in WF4 for more complicated flows). All we need is something where a user drags predefined boxes onto a workspace and can draw lines from one box to another and pick from a list of users and timeframes. And your solution was? @PrimeByDesign, please see my edits to my answer. After a very long hunt for something, I did not find what I was looking for. We decided to come up with our own solution. We used a combination of jsPLumb and Stateless
common-pile/stackexchange_filtered
Soundcloud API - Playback_count is wrong Hey SoundCloud guys, the value of all playback_counts delivered by your API has been wrong for over 1 month now. Please fix it. We can't continue working with these wrong stats... Try the API V2 as a workaround http://stackoverflow.com/questions/36976285/soundcloud-api-playback-count-different-than-on-website/36985629#36985629
common-pile/stackexchange_filtered
Quill: author-advanced class does not fire on "keyup" event Using Quill, I am trying to store the text within a span element (the "span .author-advanced" class) upon releasing a key, but the keyup event does not get fired. I realized that in pure JavaScript I can solve this problem by changing the contenteditable using a nested span, but that implies I have to modify Quill. Is there a way to solve this problem without modifying Quill? There are some details missing from your question. What are you trying to fire a keyup event on? What text are you trying to store in the span, or where are you getting "the text" from? I'm trying to fire a keyup event on a span element of class author-advanced that I am currently typing in, so I can store the span in a variable. The text is just regular keyboard input, like "jpaugh is such a nice guy" More information on the pure JavaScript solution with nested spans would be helpful. Otherwise I don't believe you can attach key listeners to just any DOM element, such as ordinary spans: http://jsfiddle.net/vqpp1hr5/ http://jsfiddle.net/aBYpt/14/ here is one example in JavaScript, but we cannot figure out a way to do so using Quill :(
common-pile/stackexchange_filtered
Determine if time falls in designated hour range I am new at C#. I'd like to check whether a time is between 2 given hours, and if so then do something. Can anyone give me an example? pseudocode example: int starthour = 17; int endhour = 2; if ( hour between starthour and endhour){ dosomething(); } How do I write a check on whether hour is between starthour and endhour? In C#, the time is returned in AM/PM format so I don't know if it will understand the 17 number as "5 PM". what do you mean by "hours between 2 times is true"? true in what sense? @hunter, I believe what we want is that the current time is between two specified times. The pseudo-code says: if ( hour between start hour and end hour)... i mean , on everyday, when currenthour > 17(5pm) and hour < 2am , dosomething(). else donothing Assuming you're talking about the current time, I'd do something like this: // Only take the current time once; otherwise you could get into a mess if it // changes day between samples. DateTime now = DateTime.Now; DateTime today = now.Date; DateTime start = today.AddHours(startHour); DateTime end = today.AddHours(endHour); // Cope with a start hour later than an end hour - we just // want to invert the normal result. bool invertResult = end < start; // Now check for the current time within the time period bool inRange = (start <= now && now <= end) ^ invertResult; if (inRange) { DoSomething(); } Adjust the <= in the final condition to suit whether you want the boundaries to be inclusive/exclusive. If you're talking about whether a time specified from elsewhere is within a certain boundary, just change "now" for the other time. Looks good to me. Only change I'd make is to use DateTime.Today instead of DateTime.Now.Date. @whatispunk: No, you can't do that - because you have to fetch the current time exactly once for consistency. You need the date and time later on (the final comparison) so you can't use DateTime.Today. 
@JonSkeet I'm trying this piece of code but it fails on this case: Suppose times are between 8pm and 5am, and it's currently 4 am. when comparing, it uses 8pm today and 5am tomorrow, so it falses out. Actually, if we're dealing with pure hours here like a Abelian Ring from 0 to 23 and 0 again, I believe the following is actually a working solution: (start <= end && start <= t && t <= end) or (start > end && (start <= t || t <= end)) Complex though this is, it is essentially an if-else where you have a different algorithm depending on whether start <= end or not, where t is the time you wish to test. In the first case, start and end are normal order, so t must be both greater than start and less than end. In the case where start is greater than end, the times outside the opposite range are what we want: NOT(end < t and t < start) Using DeMorgan's theorem: NOT(end < t) or NOT(t < start) NOT(t < start) or NOT(end < t) t >= start or end >= t start <= t or t <= end This should solve your and my problems. @JonSkeet The thing is, looking at your algorithm, let's assume for a moment the time is 1am, day 1. Now holds 1am Day 1 Today holds midnight Day 1 Start holds 5pm Day 1 (given the original example) End holds 2am Day 1 (again from the example) End holds 2am Day 2 (since start > end) Now, unless I'm mistaken, start ≰ now since start is 5pm Day 1 and now is 1am Day 1 which is before now, therefore the test fails but the original question wanted 1am included in the range since 1am is between 5pm and 2am. Did I miss something? @Brian Also, looking at your code, I think you can detect 1am but now you would have a problem with 10pm (22:00) since your times become: Start is 17 End is 26 Now is 22 + 24 = 46! so you will fail in the less-than test. Clearly, the general case is very tricky! More so when you're restricted to Google Spreadsheets as I am. Because this is not a comment, Jon and Brian will never get notified of your post here. 
When you earn enough reputation to post comments, you should start using comments for this sort of thing. Thank you Harvey; as you can see, I don't have enough reputation to comment, and I wish I could because I'm pretty sure my point is valid; I did, however, add a solution which I believe does work for all cases so please consider the question now officially once again answered. Thanks! Dang it, I had the right answer for over a year and someone who could make comments informed the top answer of what I just spelled out and now the top poster is right and no credit for me. Dang it. When subtracting DateTimes, you get a TimeSpan struct that you can query for things like the total number of hours (the TotalHours property): TimeSpan ts = starttime - endtime; if(ts.TotalHours > 2) { dosomething(); } If you want to see if the times are identical, then you can use TotalMilliseconds - for identical DateTimes, this will be equal to 0. I can see why you misunderstood, but I'm pretty sure the question is how to find out whether a given hour is between two other times... If you want to compare minutes also like I do you can use this snippet of code in java. 
//Initialize now, sleepStart, and sleepEnd Calendars Calendar now = Calendar.getInstance(); Calendar sleepStart = Calendar.getInstance(); Calendar sleepEnd = Calendar.getInstance(); //Assign start and end calendars to user specified start and end times long startSleep = settings.getLong("startTime", 0); long endSleep = settings.getLong("endTime", 0); sleepStart.setTimeInMillis(startSleep); sleepEnd.setTimeInMillis(endSleep); //Extract hours and minutes from times int endHour = sleepEnd.get(Calendar.HOUR_OF_DAY); int startHour = sleepStart.get(Calendar.HOUR_OF_DAY); int nowHour = now.get(Calendar.HOUR_OF_DAY); int endMinute = sleepEnd.get(Calendar.MINUTE); int startMinute = sleepStart.get(Calendar.MINUTE); int nowMinute = now.get(Calendar.MINUTE); //get our times in all minutes int endTime = (endHour * 60) + endMinute; int startTime = (startHour * 60) + startMinute; int nowTime = (nowHour * 60) + nowMinute; /*****************What makes this 100% effective***************************/ //Test if end time is the next day if(endTime < startTime){ if(nowTime > 0 && nowTime < endTime) nowTime += 1440; endTime += 1440; } /**************************************************************************/ //nowTime in range? 
boolean inRange = (startTime <= nowTime && nowTime <= endTime); //in range so calculate time from now until end if(inRange){ int timeDifference = (endTime - nowTime); now.setTimeInMillis(0); now.add(Calendar.MINUTE, timeDifference); sleepInterval = now.getTimeInMillis() / 1000; editor.putBoolean("isSleeping", true); editor.commit(); Log.i(TAG, "Sleep Mode Detected"); returned = true; } bool CheckHour(DateTime check, DateTime start, DateTime end) { if (check.TimeOfDay < start.TimeOfDay) return false; else if (check.TimeOfDay > end.TimeOfDay) return false; else return true; } int starthour = 17; int endhour = 2; int nowhour = DateTime.Now.Hour; if (endhour < starthour) { endhour+=24; nowhour+=24; } if (starthour <= nowhour && nowhour <= endhour) { dosomething(); } I'm not sure which I prefer between this code and Jon Skeet's code. Using Jon Skeet's solution above I added a fix where if start time is after beginning time eg You start the job after 6pm at night and end it the next morning at 5am. then you need to check this and apply another day to the end time. Hope it helps, I personally have spent too much time on this piece of work. have a great day :) if (stopHour < startHour) { end = today.AddHours(stopHour+24); } Full Code is below. private static bool IsInRunHours() { try { ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None).Save(ConfigurationSaveMode.Modified); ConfigurationManager.RefreshSection("appSettings"); // after = 18 before = 5 // Only take the current time once; otherwise you could get into a mess if it // changes day between samples. 
DateTime now = DateTime.Now; DateTime today = now.Date; Int32 startHour = ConfigurationManager.AppSettings["UpdateRunAfter"].ToInt(); Int32 stopHour = ConfigurationManager.AppSettings["UpdateRunBefore"].ToInt(); DateTime start = today.AddHours(startHour); DateTime end = today.AddHours(stopHour); if (stopHour < startHour) { end = today.AddHours(stopHour+24); } //ConfigurationManager.AppSettings["UpdateRunBefore"].ToInt() //ConfigurationManager.AppSettings["UpdateRunAfter"].ToInt() // Cope with a start hour later than an end hour - we just // want to invert the normal result. bool invertResult = end < start; // Now check for the current time within the time period bool inRange = (start <= now && now <= end) ^ invertResult; if (inRange) { return true; } else { return false; } } catch { return false; } }
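To make the midnight-wraparound logic from the C# answers above easier to test in isolation, here is a minimal Python sketch of the same idea (the function name and hour-only granularity are my own simplifications, not from the thread):

```python
def in_range(now_hour, start_hour, end_hour):
    """True if now_hour lies in [start_hour, end_hour], where the range
    may cross midnight (e.g. start 17:00, end 02:00 the next day)."""
    if end_hour < start_hour:       # the range wraps past midnight
        end_hour += 24
        if now_hour < start_hour:   # e.g. 01:00 belongs to the next day
            now_hour += 24
    return start_hour <= now_hour <= end_hour
```

The key step is the same as in the answers above: when the end hour is smaller than the start hour, push both the end hour and (if needed) the current hour into the next day before comparing.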
common-pile/stackexchange_filtered
I need to sum based on certain criteria. I believe I need to groupby but I get an error when applying the method I have a table that has the following type of data. id start_time approach movement value 1 11/12/2020 12:00:00 AM Southbound Right 2 2 11/12/2020 12:00:00 AM Northbound Right 2 3 11/12/2020 12:00:00 AM Eastbound Right 3 1 11/12/2020 12:00:00 AM Southbound Thru 3 2 11/12/2020 12:00:00 AM Northbound Thru 6 3 11/12/2020 12:00:00 AM Eastbound Thru 7 1 11/12/2020 12:00:00 AM Southbound Left 4 2 11/12/2020 12:00:00 AM Northbound Left 8 3 11/12/2020 12:00:00 AM Eastbound Left 9 It then repeats itself but the time moves up by 15 minutes. I would like to create a table that sums the values of Right,Thru,Left by grouping the id, time, and approach together. I tried the following code, but I get an error. TypeError: list indices must be integers or slices, not str df2['combinedValue'] = df1.groupby(['id','approach','start_time'], as_index=False)['value'].sum() Any thoughts? Could you post the error message Related issue with count vs transform('count') -> adding column to df that calculates count of different column using groupby Use transform if you want to add a new column to your original dataframe. df1['combinedValue'] = df1.groupby(['id','approach','start_time'], as_index=False)['value'].transform("sum") >>> df1 id start_time approach movement value combinedValue 0 1 11/12/2020 12:00:00 AM Southbound Right 2 9 1 2 11/12/2020 12:00:00 AM Northbound Right 2 16 2 3 11/12/2020 12:00:00 AM Eastbound Right 3 19 3 1 11/12/2020 12:00:00 AM Southbound Thru 3 9 4 2 11/12/2020 12:00:00 AM Northbound Thru 6 16 5 3 11/12/2020 12:00:00 AM Eastbound Thru 7 19 6 1 11/12/2020 12:00:00 AM Southbound Left 4 9 7 2 11/12/2020 12:00:00 AM Northbound Left 8 16 8 3 11/12/2020 12:00:00 AM Eastbound Left 9 19 Use sum without transform to give you one row for each unique combination of ['id','approach','start_time']. 
df2 = df1.groupby(['id','approach','start_time'], as_index=False)['value'].sum() >>> df2 id approach start_time value 0 1 Southbound 11/12/2020 12:00:00 AM 9 1 2 Northbound 11/12/2020 12:00:00 AM 16 2 3 Eastbound 11/12/2020 12:00:00 AM 19
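A runnable distillation of the difference between transform("sum") and plain sum() on a tiny made-up frame (not the asker's data):

```python
import pandas as pd

df = pd.DataFrame({
    "id":       [1, 1, 2],
    "approach": ["Southbound", "Southbound", "Northbound"],
    "value":    [2, 3, 8],
})

# transform("sum") returns one value per input row (aligned to df's
# index), so it can be assigned straight back as a new column
df["combinedValue"] = df.groupby(["id", "approach"])["value"].transform("sum")

# plain .sum() collapses each group to a single row; the result is a
# shorter frame, which is why assigning it as a column of df fails
totals = df.groupby(["id", "approach"], as_index=False)["value"].sum()
```

The original TypeError came from assigning the collapsed result to a column of the full-length frame; transform avoids the length mismatch.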
common-pile/stackexchange_filtered
The requested package robmorgan/phinx No version set (parsed as 1.0.0) is satisfiable by robmorgan/phinx[No version set (parsed as 1.0.0)] When trying to install phinx, I executed the command below and it gave the error shown: php composer.phar require robmorgan/phinx Loading composer repositories with package information Updating dependencies (including require-dev) Your requirements could not be resolved to an installable set of packages. Problem 1 - The requested package robmorgan/phinx No version set (parsed as 1.0.0) is satisfiable by robmorgan/phinx[No version set (parsed as 1.0.0)] but these conflict with your requirements or minimum-stability. There is no robmorgan/phinx tagged version >= 1.0.0, so you have to add the version constraint too, i.e.: php composer.phar require robmorgan/phinx:^0.9.1 If that does not work please provide your composer.json Thanks for the response. The actual mistake I made was that the suggested slash in the phinx path did not work on Windows; it was resolved by using a forward slash.
common-pile/stackexchange_filtered
ImportError: conv_relu_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN2at15checkAllSameGPUEPKcN3c108ArrayRefINS_9TensorArgEEE I am getting error, when I try to run 03_train from https://github.com/ahendriksen/noise2inverse. It is giving me this output when I tried to run # Option a) Use MSD network. Error output: --------------------------------------------------------------------------- ImportError Traceback (most recent call last) /tmp/ipykernel_15770/3472637506.py in <module> 1 # Option a) Use MSD network 2 if network == "msd": ----> 3 from msd_pytorch import MSDRegressionModel 4 model = MSDRegressionModel(1, 1, 100, 1, parallel=multi_gpu) 5 net = model.net ~/anaconda3/envs/noise2inverse/lib/python3.7/site-packages/msd_pytorch/__init__.py in <module> 22 import msd_pytorch.errors 23 from .image_dataset import ImageDataset ---> 24 from .msd_regression_model import MSDRegressionModel 25 from .msd_segmentation_model import MSDSegmentationModel ~/anaconda3/envs/noise2inverse/lib/python3.7/site-packages/msd_pytorch/msd_regression_model.py in <module> ----> 1 from msd_pytorch.msd_model import MSDModel 2 import torch.nn as nn 3 4 5 loss_functions = {"L1": nn.L1Loss(), "L2": nn.MSELoss()} ~/anaconda3/envs/noise2inverse/lib/python3.7/site-packages/msd_pytorch/msd_model.py in <module> ----> 1 from msd_pytorch.msd_block import MSDModule2d 2 from torch.autograd import Variable 3 import numpy as np 4 import torch as t 5 import torch.nn as nn ~/anaconda3/envs/noise2inverse/lib/python3.7/site-packages/msd_pytorch/msd_block.py in <module> 1 import torch ----> 2 import conv_relu_cuda as cr_cuda 3 from msd_pytorch.msd_module import MSDFinalLayer, init_convolution_weights 4 import numpy as np 5 ImportError: /home/aknahin/anaconda3/envs/noise2inverse/lib/python3.7/site-packages/conv_relu_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN2at15checkAllSameGPUEPKcN3c108ArrayRefINS_9TensorArgEEE Anyone please help me to solve this error. 
You can find out about this error by just searching for the error message. Concerning the cause, it would help if you provided a [mcve]. However, I'd also search for corresponding bug tickets; maybe there are existing fixes available. @UlrichEckhardt I tried to find a solution on Google, but it seems nobody gets this exact error. It did give me the idea that my msd_pytorch version is not compatible with my CUDA version. When I tried to install msd_pytorch using the method mentioned in their git repo, it took forever.
common-pile/stackexchange_filtered
when reloading vuejs component 'props' is updating but reference to 'props' via the 'this' scope still referencing previously loaded props I have a vuejs component which renders out a post and its replies. When the user wants to view the post, they click on the post title and it opens the post in a modal window by calling the component, which gets the post data and renders it out. I have this strange reactivity problem which is a bit difficult to explain. Best explained with a use case. Click to open Post 1 (all is OK) Close that modal. Click to open Post 2 (strange things happen). If I reference post in the template (e.g. {{post.comments}}), it is displaying all the information for Post 2 just fine, BUT if I reference this.post.comments in the script block it is rendering information from Post 1. So it seems the post prop itself is updating, but this.post seems to reference the first post opened regardless of whether I pass a new post in as a prop. If I go further and pass Post 3 into the component, post references Post 3, but this.post is still populated with Post 1. So the prop is updating, but the this scope is not updating whenever I recall the component with a new prop being passed in. Here is my vuejs code (or the top relevant part anyway). module.exports = { props: ['post'], data: function(){ return { userHasVoted: this.post.userHasVoted, votes: this.post.voters.length, showReply: false, replyPlaceholder: "Post reply here", reply: { comment_of: this.post._id, channel: this.post.channel, post: null, tags: [] }, tagify: null, showByIndex: null } }, methods: { ..... Thanks in advance for any assistance in unraveling this. Can you share a link to a reproduction? Found the problem / solution. It appears that although I was binding the component to a property in the parent, reloading or changing the parent property does not remount or change the local scope in the component. So to get around this I needed to use the :key when calling the component.
So this was my original call to the component (passing in a post object) <post-item-full v-bind:post="post"></post-item-full> All I needed to do was add the key, and pass in a unique key (in this case the _id of the object being passed in, but it could be anything unique to the post). <post-item-full v-bind:post="post" :key="post._id"></post-item-full> And now it's working fine. Not sure if this is the right solution, but it works without too much fuss. Thanks to this post for the solution: https://michaelnthiessen.com/force-re-render/
common-pile/stackexchange_filtered
How to play a non-rtmp stream I have an issue streaming a video conference to many mobile devices (Android, iPhone, etc.) and I am having trouble with the streaming. An RTMP stream uses Flash and doesn't work on these devices. What stream can I use, and how can I convert RTMP to that supported stream? For mobile devices you would need to convert your video to HTTP Live Streaming (HLS). If it's truly a live stream, you could use something like Zencoder live streaming to convert it. If the RTMP stream is just reading from an MP4, you could use Handbrake or Zencoder to transmux it to HLS. Thank you! I will use nginx-rtmp-module. It can convert RTMP to HLS. And HLS is supported on most Android devices.
common-pile/stackexchange_filtered
Problem with saving files to S3 and Saving Entry to DB I have a job application form that takes some inputs: FirstName LastName Phone (Unique Index) Email (Unique Index) Resume ProfilePhoto I want to save the Resume and ProfilePhoto to AWS S3 (as they are files). The problem with saving these 2 files: if one file fails with an error, the other will occupy extra space, which is problematic. If the save to the DB fails after saving both files, I have the same problem. If I save the entry into the DB first, what if the files then fail to save? I face these kinds of issues a lot. I tried saving the files first and then saving the entry to the DB, but the issue still remains. These are common issues in software development. This is usually solved using transactions; you can "roll back" any changes if the transaction doesn't complete each step successfully. I suggest looking into transaction options for S3 and how to implement this in Node. Please provide enough code so others can better understand or reproduce the problem. @James There is no support for transactions in S3. Plus, let's say I want to go with the first solution of rolling back the changes: what should I do if rolling back fails?
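Since S3 has no transactions, one common workaround is a compensating-action pattern: upload the files first, and if the DB insert fails, delete whatever was already uploaded. Here is a hedged Python sketch; the upload/delete/save_record callables are placeholders for, e.g., boto3's upload_fileobj/delete_object and your DB insert, not real AWS calls:

```python
def save_application(files, save_record, upload, delete):
    """Upload every file, then save the DB record; on any failure,
    compensate by deleting the files that were already uploaded.

    files       -- dict mapping storage key -> bytes
    save_record -- callable(keys) performing the DB insert
    upload      -- callable(key, data), e.g. wrapping S3 upload_fileobj
    delete      -- callable(key), e.g. wrapping S3 delete_object
    """
    uploaded = []
    try:
        for key, data in files.items():
            upload(key, data)
            uploaded.append(key)
        save_record(list(files))
    except Exception:
        for key in uploaded:
            try:
                delete(key)          # best-effort rollback of the S3 side
            except Exception:
                pass                 # record the key for a later cleanup job
        raise
```

If the rollback itself fails, you cannot do much better than logging the orphaned keys and sweeping them later (for example with a periodic cleanup job or an S3 lifecycle rule on a "pending" prefix); that is the usual answer to "what if rolling back fails".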
common-pile/stackexchange_filtered
Jquery AJAX POST not passing anything to PHP I'm having a problem passing a variable using the POST method with AJAX/jQuery. Here is my code: ajaxtest.php <?php $dir = $_POST['dir']; $scaned = glob($dir."*",GLOB_ONLYDIR); echo json_encode($scaned); ?> ajaxtest.html <html> <head> <script type="text/javascript" src="js/jquery.js"></script> </head> <script> $(document).ready(function(){ $('button[type="button"]').click(function(){ var dir = 'gals/'; $.ajax({ url: "ajaxtest.php", type: "POST", data: dir, success: function(results){ data = jQuery.parseJSON(results); for (var i = 0; i < data.length ; i++) { $('#buttonA').after('<br />'+data[i]+'<br />'); }; } }) }) }) </script> <body> <br /> <button id="buttonA" type="button">Test button</button> </body> </html> This code is not working. But this one does: (but not with JSON) $.post("ajaxtest.php", {dir:dir}, function(results){ var data = $.parseJSON(results); for (var i = 0; i < data.length ; i++) { $('#buttonA').after('<br />'+data[i]+'<br />'); } }) Why so?! What is wrong in my code? Please advise! Thanks a lot. data should have the following format: data: { 'dir': dir } It doesn't work with JSON because the success parameter name is wrong; it doesn't match the code inside the callback. Change it from results to data. var dir = 'gals/'; $.ajax({ url: "ajaxtest.php", type: "POST", data: {'dir': dir}, success: function(data){ data = jQuery.parseJSON(data); for (var i = 0; i < data.length ; i++) { $('#buttonA').after('<br />'+data[i]+'<br />'); }; } }); you're sending a string, and it doesn't have to be called data. That's how parameters work; you can call them anything within the parens as long as the function references it correctly. @thescientist I know that; he asked why it doesn't work, and I answered: because the parameter name doesn't match his code. Huh? He passes the response in as results, then JSON-parses it and assigns it to data. Then uses data. Not sure I see what you mean.
You didn't edit data in the second example (otherwise my problem remains). The difference being that in the non-working example, you are sending a string, and in the working example you are sending an object. So, send that same object in your non-working example. $.ajax({ url: "ajaxtest.php", type: "POST", data: {dir : 'gals/'}, success: function(results){ data = jQuery.parseJSON(results); for (var i = 0; i < data.length ; i++) { $('#buttonA').after('<br />'+data[i]+'<br />'); }; } }) the url field has a trailing slash that can make it not work, so: url:'mypath/' had to be: url:'mypath'
common-pile/stackexchange_filtered
What are the black pads stuck to the underside of a sink? I found some hard black pads with a honeycomb pattern stuck to the bottom of my sink. Some of them are starting to disintegrate and fall apart. Wondering what the purpose of these are and if I need to be concerned about their deterioration and if this is something that needs to be replaced? These are sound-dampening pads: (Image from https://www.instructables.com/Easily-Soundproof-a-Stainless-Steel-Sink/) They aren't strictly necessary, but as the name suggests, they do help reduce how much sound the water makes as it hits the sink. If you remove the old ones and find the change in sound annoying, you might want to replace them. You can buy new ones in any home improvement store or online, either sold specifically for sinks, or any rubber mat with an adhesive backing. Thanks Abe. Could the sink have come with these already attached or is this only an after-market modification that someone would make? @Guy Of course it can come with it attached, because someone would just rip a piece from their stock and stick it on instead of sending it out separate. @Guy - every sink I've ever owned/worked on has had this. Here in Eastern Europe we got them some ~25 years ago. Older homes don't have them. Quite a pleasant difference when one glues a single 10x10cm patch (the more is better, but not decisively better, the first 10x10cm remove 95% of the noise). For one reason or another, one also gets less splashing. That's "damping", not "dampening"; the words have quite different meanings. It might not matter much for a sink, but consider the difference between a toilet seat that is damped (slow close) and one that is dampened (bad aim). @RayButterworth "Dampening" can refer to either (see the first sense in https://www.merriam-webster.com/dictionary/dampen), and is by far the more common term when referring to sound-reducing materials. 
Do you know whether the reduction in vibration affects the amount/type of splash when the water hits it? It seems plausible that this might make a difference, but I don't have any evidence either way. Seconding Abe's reply, these are sound dampeners. When we got our stainless steel sink it had them attached. One fell off pretty soon and I noticed more noise so I glued it back in place using regular Elmer's type glue. I think they're mostly used for stainless sinks, our previous porcelain sink didn't have pads. Performance won't be harmed by removing them or not reattaching any that fall off other than more noise when you're running water in the sink.
common-pile/stackexchange_filtered
Use d3 to link nodes with multiple lines for different relations? Using target and source, I can link nodes with one line easily, but how can I link nodes with multiple lines? like: or: Don't use lines, use paths: https://stackoverflow.com/q/11368339/5768908 And how do I add text in the middle of the line? Use a text path: https://developer.mozilla.org/en-US/docs/Web/SVG/Element/textPath
common-pile/stackexchange_filtered
FFmpeg Media Source Extension Examples and Adaptive VOD and Live Streaming I have a quick question. Does anybody have examples or links for the use of FFmpeg with the Media Source Extension for pure JavaScript adaptive VOD or live streaming? I mean really pure, not HLS or DASH. I prefer the chunkless, so to say inline-container-segmenting:). So that I have full files like so: video_1080p_5000k.webm video_720p_5000k.webm video_480p_5000k.webm audio_128k.opus audio_256k.opus and so on ... What I'm searching for are JavaScript snippets and FFmpeg examples for a pure mse-js-adaptive-vod-player;). Thx Edit: A deeper explanation of why I don't use DASH or HLS? I'm trying to understand the deeper mechanics of having JavaScript chunk the videos on the fly, to prevent a low-memory error and make very simple and basic adaptive streaming. I can load small video and music files into a blob/string. Cool thing, but when I try to use the chunking functions I have many problems with some things. That is why I'm searching for compact FFmpeg encoder params and JavaScript examples, preferably links to sites with that, preferably for all free codecs that I can use. Chunkless and adaptive is a contradiction. You can concatenate chunks and request them with a byte range, however. Both HLS and DASH support that mode. Hello, okay, my question is then: how can I make video files with concatenated chunks with ffmpeg, and how can I use this with JavaScript/MSE/video tag, or does anybody know a site with examples? But I mean a pure JavaScript/MSE solution without HLS or DASH. Why? HLS is not free and DASH is the answer, but I specifically want to use MSE without ready-made standards. I know: no DASH without MSE, and if MSE then why not DASH;), but I'm interested in the pure MSE technology with an ffmpeg/pure-JavaScript solution:). Pure adaptive streaming:). What you are saying doesn't make much sense. First of all, HLS is not "free" in what sense? If you're talking about MPEG-2 patents, they have all expired. It is no less free than DASH.
"Pure" MSE/JS is just fMP4: the same fMP4 used by DASH, and optionally by HLS. DASH just adds an XML manifest file, and HLS an m3u8 manifest file. Hello, thanks for the info, but one objection to fMP4: when I use Ogg/WebM as the container it is not fMP4. HLS is the one that is not codec agnostic and DASH is, I know. This is why I asked this question. I'm searching for ffmpeg params with which I can encode every AV codec to use in DASH with the right codec params, and that is why I'm searching for examples or good links to sites that explain pure MSE/JS. One small thing is the overhead compared to the DASH XML and the JavaScript to use it (dash.js is very big); another thing is learning by doing and understanding MSE:). But you are right; in the end the answer to my question could be to use DASH. DASH is codec agnostic in theory, but in reality browsers only support AVC and VP9 and very few support HEVC. HLS supports AVC and HEVC. So practically, the only difference is VP9. Just because you use "DASH-like" segments (which are also HLS-like, and CMAF-like) doesn't mean you need dash.js. You can make your own manifest and JS. Hello, yes, cool, but how do I make my own manifest and the JavaScript? Do you know any good sites or a code example for MSE? It would be great. Tell you how to make your own manifest? Just put a list of video segments into a text file. Just look at an m3u8 to see an example. Or just make the segment names predictable and don't use a manifest. As for code, W3C or MDN are good. These are all not the problems; the problem is that I cannot generate WebM video files that MSE accepts. Some videos from the web work and some don't, and this is an encoder problem. And I'm searching for help with encoding. I cannot put a wrongly encoded WebM in a manifest, as the core of MSE says. Oh, you are a coder, but you don't have knowledge about encoding a video.
I can correctly encode a one-file WebM or multiple-bitrate video and audio, but I want to chunk with JS for MSE and don't use pr
common-pile/stackexchange_filtered
Convert eastern arabic numerals to western arabic in bigquery I have a problem where an eastern-arabic numeral has entered my table as a timestamp, and BigQuery doesn't recognise this as a timestamp and will not execute my queries. I wish to be able to convert this: '٢٠١٨-١٠-١١T١٦:٠١:٤١.٠٤١Z' into this: '2018-10-11T16:01:41.041Z' in BigQuery. Is this possible? How about this SQL UDF: CREATE TEMP FUNCTION arabicConvert(input STRING) AS (( SELECT STRING_AGG(COALESCE(FORMAT('%i', i), letter), '') FROM (SELECT SPLIT(input, '') x), UNNEST(x) letter LEFT JOIN (SELECT letter_dict,i FROM ( SELECT SPLIT('٠١٢٣٤٥٦٧٨٩', '') l), UNNEST(l) letter_dict WITH OFFSET i ) ON letter=letter_dict )); SELECT arabicConvert('٢٠١٨-١٠-١١T١٦:٠١:٤١.٠٤١Z') converted 2018-10-11T16:01:41.041Z There is an alternative, lighter option :o) CREATE TEMP FUNCTION arabicNumeralsConvert(input STRING) AS (( CODE_POINTS_TO_STRING(ARRAY( SELECT IF(code > 1600, code - 1584, code) FROM UNNEST(TO_CODE_POINTS(input)) code )) )); WITH t AS ( SELECT '٢٠١٨-١٠-١١T١٦:٠١:٤١.٠٤١Z' str UNION ALL SELECT '2018-10-12T20:34:57.546Z' ) SELECT str, arabicNumeralsConvert(str) converted FROM t the result is: str converted ٢٠١٨-١٠-١١T١٦:٠١:٤١.٠٤١Z 2018-10-11T16:01:41.041Z 2018-10-12T20:34:57.546Z 2018-10-12T20:34:57.546Z
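The second UDF works because the Arabic-Indic digits occupy a contiguous code-point block (U+0660 to U+0669), exactly 1584 above ASCII '0' to '9' (0x0660 - 0x0030 = 1584). The same shift in Python, handy for checking results outside BigQuery (the function name is mine):

```python
def convert_arabic_numerals(s):
    """Shift Arabic-Indic digits (U+0660-U+0669) down to ASCII 0-9,
    leaving every other character untouched."""
    return "".join(
        chr(ord(c) - 1584) if "\u0660" <= c <= "\u0669" else c
        for c in s
    )
```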
common-pile/stackexchange_filtered
Android How To Get The Return Value String From SQLite Cursor Query And Display It In Edittext? When I run the SQLite Cursor Query it fails to display the result string in the edittext. It fails with the Logcat message: java.lang.RuntimeException: Unable to start activity android.database.CursorIndexOutOfBoundsException: Index -1 requested, with a size of 1 How can I get the query result value string from the cursor and display it in the edittext? public class Books extends Activity implements OnClickListener { private BooksData books; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.books); books = new BooksData(this); } SQLiteDatabase db1 = books.getReadableDatabase(); SEARCH_TERM = "math"; EditText edittext = (EditText) findViewById(R.id.edittext1); // Get the value from TITLE when the TOPIC value is specified, ie such as "math". Cursor cursor_2 = db1.query(TABLE_NAME, new String[] {"_id", "TOPIC", "TITLE"}, "TOPIC = ?", new String[]{ ""+SEARCH_TERM+"" }, null, null, null, null); String TEMP_DATA = cursor_2.getString(2); edittext.setText(TEMP_DATA); books.close(); cursor_2.close(); } You need to move to the first record in the cursor before you can pull data from it. Cursor cursor_2 = db1.query(TABLE_NAME, new String[] {"_id", "TOPIC", "TITLE"}, "TOPIC = ?", new String[]{ ""+SEARCH_TERM+"" }, null, null, null, null); if (cursor_2 == null) return; try{ if (cursor_2.moveToFirst()) // Here we try to move to the first record edittext.setText(cursor_2.getString(2)); // Only assign string value if we moved to first record }finally { cursor_2.close(); } dymmeh, your solution works great! Thank you for providing the answer. @user2308699 - No problem. Make sure to mark it as the answer if you feel it was the proper solution.
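The same pitfall exists outside Android: a freshly returned cursor is positioned before the first row, so you must advance it before reading columns. Here is the equivalent in Python's sqlite3 (an in-memory stand-in for the books table, not the asker's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (_id INTEGER, topic TEXT, title TEXT)")
conn.execute("INSERT INTO books VALUES (1, 'math', 'Algebra Basics')")

cur = conn.execute(
    "SELECT _id, topic, title FROM books WHERE topic = ?", ("math",)
)
row = cur.fetchone()   # analogous to moveToFirst(): position on a real row
title = row[2] if row is not None else None   # column index 2 = title
```

Reading a column without first fetching/moving to a row is exactly the "Index -1 requested" situation in the Android error.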
common-pile/stackexchange_filtered
Filtering user access based on tabled responses I am building a "Survey" type application. The user answers a set of questions with pre-vetted answers. Question: Where do you live? Answers: England, Finland, Spain, France, Monrovia The answers in this case would be in a DropDownList. Once the user has completed the basic responses (location, age, sex etc) I would like to be able to prevent them accessing the rest of the survey based on their answers. So for example, if they live anywhere but England I want to direct them to a page which says "Thanks, but Monrovians can't complete this survey". I need my filtering to be user configurable (Table based) and I need to be able to have ANDs and ORs. So one filter being the user MUST earn 100k+ a year. Another being they must either live in Spain, or be female AND like model trains - "100k+ && (Spain || (Female && Trains))" I would usually use Enums and bitmasking for this, but as my country list is 200+ items long, I can't think of a sensible way to store the filtering. Hopefully I have made some sense and someone has a decent solution :) I don't know if I can answer your question completely, but I'll try... So, we have a bunch of Views that are only visible to the user if she previously chose some answers, like, she will see view#3 if she is older than 30, and view#4 if she is younger than 30 AND from China, and view#5 if she is older than 40 AND from Spain OR Italy, and so on... I also want to introduce the notion of **step**, and for each step we could have 1, 2, or more corresponding views. Each view should have a set of rules (like the ones above) that define if it is displayed or not. How to create these rules? These rules could be simple instances of a Filter class/interface that, when asked, should return true/false. Like this: interface Filter { boolean apply(); } Then you can create Filters like 'older than 30', 'from Spain', whatever.
Remember that each view is configured with a set of rules (Filters) so it can answer yes/no if asked if it can display itself. Next, how to apply these filters? We could have a controller object that only knows about **steps** (each step can have one or more corresponding views, as I said), and, after the user pressed **next** at the current step, it should collect the answers and apply them against the rules attached to each view. Like, take the answers from step one, and apply them to all views from step two, and see which one matches (returns true). For example, at step two, you can have two separate views, one for young people, other for old people, and you apply the rules from each view to decide if you show the old or young view. I could give you one code example, and you could also do research on your own, since I know nothing about your technical environment. I have used Google Guava's predicates on a similar problem and here it is: suppose we are dealing with Witch objects, and each of them has name(string), age(int) and spells(collection) attributes. If I have a list of witches and I need to sort them based on specific criteria, I can do: // first I want to sort witches by age(natural ordering) then by spells, // and then by name lexicographically Ordering.natural() .compound(new BySpellsWitchOrdering()) .compound(new ByNameWitchOrdering()) .sortedCopy(witchList); The above line of code is going to take the witch list and return a list of sorted witches according to the criteria. Your situation is pretty similar. Next, how to create the answers? For each view(page), you have possible answers, like, for view#1, you can have : age, sex, race, country. You can construct some answers, in the form of strings, ints, enums, and pass them to the controller, which in turn is going to apply them to each view corresponding to the next step. 
As for how to store the rules in the database, as an example, you could have a column defining rule name (like, OLDER THAN) and one column for value, say, 30. Again, I do not know that much about your environment, and it is a really general issue, so I will stop here... SQL Server 2012 - I don't want hundreds of columns/rows for each possible filter combination, and I can't use masking (more than 64 items). I guess I could store a gigantic xml field, but it all seems messy. Thanks for your answer, although not too helpful in pointing me to a solution, it was interesting, and I'm sure it will be useful in the future :-) Also, I'm not concerned with the frontend implementation. I'm only concerned with how you would store sets of rules in the DB which would allow for dynamic extensions and not look like "smelly"... :-)
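To make the Filter idea concrete without Guava, here is a Python sketch of composable predicates. Each leaf predicate could be built from a database row (rule name, field, value), which keeps the rules user-configurable; the example rule encodes the asker's "100k+ && (Spain || (Female && Trains))". All names here are illustrative, not from the answer:

```python
def AND(*preds):
    return lambda answers: all(p(answers) for p in preds)

def OR(*preds):
    return lambda answers: any(p(answers) for p in preds)

def equals(field, value):
    return lambda answers: answers.get(field) == value

def at_least(field, value):
    return lambda answers: answers.get(field, 0) >= value

# "100k+ && (Spain || (Female && Trains))"
rule = AND(
    at_least("income", 100_000),
    OR(
        equals("country", "Spain"),
        AND(equals("sex", "female"), equals("likes_trains", True)),
    ),
)
```

A loader would walk a rules table (operator, field, value, parent node) and build this tree once per survey, so adding a country never means adding a column, which also sidesteps the 64-item bitmask limit.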
common-pile/stackexchange_filtered
Strange glyph when moving commandline cursor I recently tried vim for the (still in-dev!) Bash for Ubuntu on Windows. I noticed this odd glyph appearing on the commandline in vim - but only when moving across it. It flashes for one screen update when moving the cursor... Sometimes. Does anyone know of this? From my testing it happens only in vim. (I did try it on bash and nano as well). If anyone's curious, I'm fairly certain that's an uppercase O - my font has slashed zeroes. If you need to see more, I uploaded a 105kb webm video of the issue here: https://webmshare.com/play/NELvx, where I also show off the lack of a .vimrc Is it vim? Is it the terminal? Does anyone have any ideas at all? Edit: I know Stackexchange isn't a bug tracker and it most likely is a bug - but if there's some arcane vim setting that causes this I'm interested to know. If not, I'll just write it down as a bug in conhost.exe. if there's some arcane vim setting that causes this I'm interested to know: since the behavior appears with an empty vimrc it doesn't look like an option is causing it. I think you should report it as a bug. (Also a little tip: you can use vim -u NONE to start Vim without a vimrc and you don't need to delete the content of your vimrc) Ah, no, i LITERALLY do not have a vimrc on this install, haha. This is vim from inside Bash for Ubuntu on Windows (WSL), I usually work in a cygwin enviroment since WSL is still very young (it only recently got more than 16-color support on the insiders build) I saw on your gif that you showed that your vimrc was empty and I deduced that you didn't know the -u option but I was wrong. And yes this environment is still pretty young so there are good chances that it contains some bugs. :set term=builtin_ansi changed the displayed O to a [ so it's most likely a bug with the terminal.
Publish and Subscribe (data transfer) with permanently offline nodes. Are message queues a good fit? The general question is what kind of mechanism can I use to transfer data to and from publishers and subscribers where publishers or subscribers can be permanently offline? Can message queues be used for this? Possible approach I am thinking a message queue style approach where the online service (publisher) publishes messages to a queue online, copy the queue offline, and have the offline service (subscriber) then process all incoming messages. I need to design a solution where the same applies with the roles reversed, where the offline service is the publisher and the online service is the subscriber The data is physically transported between the networks by way of storage medium. Once the storage medium is plugged into a network a data transfer service reads the storage medium and moves all of the data from the storage medium to the queues. Current approach The way I currently do it is a simple table copy where The database has triggers that listen for inserts, updates, or deletes, makes note of the event in a database event table (db_publish_events) Later a data transfer service reads all published events from db_publish_events and then copies the entire row to a JSON file with a tag of INSERT, UPDATE, or DELETE Then the JSON file is manually transported to the subscribing system and then processed by a data transfer service. Each record transferred in is marked in a database event receiving table (db_received_events). A JSON file is downloaded from the subscriber of acknowledgment receipts of all events received by the subscriber. The JSON file with receipts is sent back to the original publishing database and changes the state of the db_publish_event to mark it as received so the publisher will stop sending it. Pros Simple table copy across a network Cons No data integrity across tables or business event boundaries since one record is transferred at a time. 
Solution I am thinking of a solution where the entire event is transferred as one message, so before complex business rule processing splits an event into multiple tables. Is there a way for commercial or open source messaging software (RabbitMQ) to do a simple USB copy of messages for publishing in an offline queue? The general question is what kind of mechanism can I use to transfer data to and from publishers and subscribers where publishers or subscribers can be permanently offline? Can message queues be used for this? There is a concept of durable queues in some messaging systems, e.g., in RabbitMQ. But if your subscriber is offline for quite a long time, these queues can obviously get overloaded. And if your application is data-intensive, it might happen pretty soon. Besides that, queues are slower when they are overwhelmed (I guess it's a common trait of many messaging systems, not only RabbitMQ). But the better option for you is Kafka. Kafka has the concept of topic, which roughly maps to the concept of queue in RabbitMQ. And it's perfectly fine to store data there. Here is a good intro in Kafka. Cited from there: What makes Kafka unique is that Kafka treats each topic partition as a log (an ordered set of messages). Each message in a partition is assigned a unique offset. Kafka does not attempt to track which messages were read by each consumer and only retain unread messages; rather, Kafka retains all messages for a set amount of time, and consumers are responsible to track their location in each log. Consequently, Kafka can support a large number of consumers and retain large amounts of data with very little overhead. I'm not aware of the size and load of your system, but probably the simpler option could be simple http callbacks. So if the subscriber is down -- it's ok, the publisher gets http code 500 and goes on in a couple of minutes. That is sort of what I have been thinking. 
I'll probably end up going the Kafka route, which would be simpler for historical modeling as well.
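The "consumers track their own offsets" property quoted above is exactly what makes Kafka fit the offline case. As a minimal, hedged illustration (an in-memory toy, not a real broker API), the core idea can be sketched like this:

```python
# Kafka-style idea in miniature: the publisher appends to an ordered log,
# and each consumer tracks its own offset, so a subscriber that was offline
# simply resumes from where it left off. In-memory toy, not a real broker.

class Topic:
    def __init__(self):
        self.log = []            # ordered, append-only list of messages

    def publish(self, message):
        self.log.append(message)

    def read(self, offset):
        """Return (new_messages, new_offset) for a consumer at `offset`."""
        return self.log[offset:], len(self.log)

topic = Topic()
topic.publish("event-1")
topic.publish("event-2")

offset = 0                              # consumer is offline at this point
topic.publish("event-3")                # published while consumer is away

missed, offset = topic.read(offset)     # consumer comes back online
print(missed)   # ['event-1', 'event-2', 'event-3']
```

In the offline-transport scenario, the log would live on the storage medium and the consumer's offset is all the state it needs to carry.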
continuous mapping is determined by its values on a dense subset of its domain Question: If f and g are continuous mappings of a metric space X into a metric space Y, let E be a dense subset of X. If g(p) = f(p) for all p $\in$ E, prove that g(p) = f(p) for all p $\in$ X. Answer: Now, suppose f(p) = g(p) for all p ∈ E. Let x ∈ X\E. Since E is dense in X we have a sequence q$_n$ ∈ E such that q$_n$ → x. So, f(x) = f(lim q$_n$) = lim f(q$_n$) = lim g(q$_n$) = g(lim q$_n$) = g(x). Thus, f(x) = g(x) for all x ∈ X. Hi, I found the above answer and I'm not sure whether the bold line is true and, if it is, why. Could you help me understand the equalities in the bold line? Thank you! Yes, it's correct. What parts do you have difficulties with? $$f(\lim q_n) = \lim f(q_n) = \lim g(q_n) = g(\lim q_n)$$ We can switch limit and function (equalities 1 and 3) by continuity of $f$ and $g$. Equality 2 is true because $f(q_n)$ and $g(q_n)$ are two sequences which are equal in every entry. By continuity of $f$ and $g$, they both have limits; since they are equal, their limits must be the same. The bold line is true because of continuity of $f$ and $g$. This is because sequential continuity is equivalent to metric continuity. That is, the $\epsilon-\delta$ definition of pointwise continuity of a function $f$ is the same as $\lim_{n\to\infty}f(x_{n})=f(x)$ for all sequences $(x_{n})_{n\in\mathbb{N}}$ that converge to $x$.
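For completeness, here is a sketch of the same fact argued directly from the $\epsilon$-$\delta$ definition, without sequences (a standard alternative argument, not part of the quoted answer):

```latex
% Fix x \in X and \epsilon > 0. By continuity of f and g at x, there is a
% \delta > 0 such that d_X(x,q) < \delta implies both
% d_Y(f(x),f(q)) < \epsilon/2 and d_Y(g(x),g(q)) < \epsilon/2.
% Density of E gives some q \in E with d_X(x,q) < \delta, and f(q) = g(q), so
\[
  d_Y\bigl(f(x),g(x)\bigr)
  \le d_Y\bigl(f(x),f(q)\bigr) + d_Y\bigl(f(q),g(q)\bigr) + d_Y\bigl(g(q),g(x)\bigr)
  < \tfrac{\epsilon}{2} + 0 + \tfrac{\epsilon}{2} = \epsilon.
\]
% Since \epsilon was arbitrary, d_Y(f(x),g(x)) = 0, i.e. f(x) = g(x).
```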
Java cloning method does not work I am copying the current best solution of my program as follows:
public Object clone() {
    MySolution copy = (MySolution)super.clone();
    copy.wtour = (int[])this.wtour.clone();
    copy.w = (int[][][][])this.w.clone();
    return copy;
}
When I request the overall best solution in the end, the program always gives me the current best solution but never the one with the best objective value. Also, nothing changes with the solution if I exclude the stated part from my program. Edit: It is part of a tabu search optimizer that generates solutions and saves new current best solutions (as here) The clone method saves the tours of a routing problem where w[][][][] is a binary decision variable and wtour is a copy of that but consists of the customer numbers in the visiting sequence, i.e. [0, 5, 3, 2, 1, 4]. Edit: I changed my program according to Robby Cornelissen as follows:
public Object clone() {
    MySolution copy = (MySolution)super.clone();
    copy.w = copy2(w);
    return copy;
}
public static int[][][][] copy2(int[][][][] source) {
    int[][][][] target = source.clone();
    for (int i = 0; i < source.length; i++) {
        target[i] = source[i].clone();
        for (int j = 0; j < source[i].length; j++) {
            target[i][j] = source[i][j].clone();
            for (int q = 0; q < source[i][j][q].length; q++) {
                target[i][j][q] = source[i][j][q].clone();
            }
        }
    }
    return target;
}
As a result, I get a clone as follows:
w[0][5][1][0]=1 w[4][2][2][0]=1 w[2][5][3][0]=1 w[5][0][4][0]=1 w[0][4][1][1]=1 w[6][1][2][1]=1 w[1][3][3][1]=1 w[3][0][4][1]=1
The problem now is that only the first element of this belongs to the very best solution (w[0][5][1][0]). Why do the other ones not get copied?
SOLUTION: I changed my program as the provided link suggested to the following: public Object clone() { MySolution copy = (MySolution)super.clone(); copy.w = deepCopyOf(w); return copy; } // end clone @SuppressWarnings("unchecked") public static <T> T[] deepCopyOf(T[] array) { if (0 >= array.length) return array; return (T[]) deepCopyOf( array, Array.newInstance(array[0].getClass(), array.length), 0); } private static Object deepCopyOf(Object array, Object copiedArray, int index) { if (index >= Array.getLength(array)) return copiedArray; Object element = Array.get(array, index); if (element.getClass().isArray()) { Array.set(copiedArray, index, deepCopyOf( element, Array.newInstance( element.getClass().getComponentType(), Array.getLength(element)), 0)); } else { Array.set(copiedArray, index, element); } return deepCopyOf(array, copiedArray, ++index); } Could you please clarify what the problem is when you run your program? What do you expect to happen, and what is actually happening? What signs indicate a problem? Can you provide details of the class you are trying to clone Keep in mind also that clone() is essentially "broken", so you might want to just roll your own copy solution. The problem is the following: When you clone an array of primitives, the array object and its values are cloned. When you clone an array of objects, the array object is cloned, but the cloned array will contain references to the same objects that were contained in the original array. Now what does that mean for your case? Your wtour value seems to be an array containing primitive ints, so the first of the two cases above applies. I.e. both the array and it's contents are effectively copied. Your w value however seems to be a multidimensional array of ints. In practice this means that it's actually an array containing array objects, hence the second case applies. 
Although your top-level array object is copied, the copied array contains references to the same second-level array objects as the original array. A similar issue and possible solutions are discussed here. Update As requested in the comments, a straightforward implementation could look like this. Note that this is completely untested:
public static int[][][] copy(int[][][] source) {
    int[][][] target = source.clone();
    for (int i = 0; i < source.length; i++) {
        target[i] = source[i].clone();
        for (int j = 0; j < source[i].length; j++) {
            target[i][j] = source[i][j].clone();
        }
    }
    return target;
}
Indeed, your link refers to a very similar problem. Would I have to create a new "deepCopy()" method as suggested there or could I just change mine and iterate over w[][][][] to make a proper clone of w[][][][]? If you decide to iterate, keep in mind that you have to iterate 2 levels deep. What do you mean with 2 levels? Clone the array first and then each position, OR clone 4 levels deep as it's a 4-dimensional array? I thought it was only 3-dimensional, so I meant that you have to not only iterate over the 2nd level, but also the 3rd level. Could you provide an example for that? I am trying to do that but it does not work for me :-/ Thanks, that's much appreciated. I updated my post as well.
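The shallow-vs-deep distinction discussed above is not Java-specific. As a quick cross-language illustration (a Python sketch, not the poster's Java code), a shallow copy of a nested structure shares its inner arrays in exactly the same way:

```python
import copy

# Analogue of the Java pitfall: clone() copies the outer array but shares
# the inner ones. With nested Python lists, copy.copy is the shallow clone
# and copy.deepcopy copies every level.
w = [[1, 2], [3, 4]]

shallow = copy.copy(w)       # like array.clone(): new outer list only
deep = copy.deepcopy(w)      # copies every level

w[0][0] = 99
print(shallow[0][0])  # 99 -- inner list is shared with the original
print(deep[0][0])     # 1  -- fully independent copy
```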
Proof Taylor's theorem: "Let $M$ be the unique solution"; is this always possible? I have a question concerning the proof of the following theorem: I don't see how they can state that we're looking for a unique solution $M$; $$ f(x)=\sum_{k=0}^{n-1}\frac{f^{(k)}(c)}{k!}(x-c)^k+\frac{M(x-c)^n}{n!}. $$ How do we know the solution is unique? How do we know it exists at all? I'm a little bit confused by this. Why don't they word it like this: we are looking for a solution $M$, and we're hoping that it exists and is unique.
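The worry in the question dissolves once $x$ is held fixed: the displayed equation is then linear in $M$ with the nonzero coefficient $(x-c)^n/n!$ (since $x \neq c$), so $M$ exists and is unique. A sketch of that reading (this is the standard interpretation, not quoted from the book):

```latex
% For fixed x \neq c, solve the defining equation for M:
\[
  M \;=\; \frac{n!}{(x-c)^n}
  \left( f(x) \;-\; \sum_{k=0}^{n-1} \frac{f^{(k)}(c)}{k!}\,(x-c)^k \right).
\]
% Existence and uniqueness are immediate: this is a linear equation in M
% whose coefficient (x-c)^n/n! is nonzero. The substance of the theorem is
% then showing that this M is attained as f^{(n)}(\xi) for some \xi
% between c and x.
```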
create columns representing 4 week period from weekly data in pandas I have a dataframe of weekly data. The date are the first day of each week. The dataframe look like below: import pandas as pd df = pd.DataFrame( {'date': ['2019-12-22', '2019-12-15', '2019-12-08', '2019-12-01', '2019-11-24', '2019-11-17', '2019-11-10', '2019-11-03', '2019-10-27', '2019-10-20', '2019-10-13'], 'p': list((df.index+4)//4) }) date p 0 2019-12-22 1 1 2019-12-15 1 2 2019-12-08 1 3 2019-12-01 1 4 2019-11-24 2 5 2019-11-17 2 6 2019-11-10 2 7 2019-11-03 2 8 2019-10-27 3 9 2019-10-20 3 10 2019-10-13 3 I need to create a column p2 as last week of every 4 week period. And also another column showing date range for each period. Look like below: date p p1 p2 0 2019-12-22 1 2019-12-22 2019-11-24: 2019-12-22 1 2019-12-15 1 2019-12-22 2019-11-24: 2019-12-22 2 2019-12-08 1 2019-12-22 2019-11-24: 2019-12-22 3 2019-12-01 1 2019-12-22 2019-11-24: 2019-12-22 4 2019-11-24 2 2019-11-24 2019-10-27: 2019-11-24 5 2019-11-17 2 2019-11-24 2019-10-27: 2019-11-24 6 2019-11-10 2 2019-11-24 2019-10-27: 2019-11-24 7 2019-11-03 2 2019-11-24 2019-10-27: 2019-11-24 8 2019-10-27 3 2019-10-27 2019-10-13: 2019-10-27 9 2019-10-20 3 2019-10-27 2019-10-13: 2019-10-27 10 2019-10-13 3 2019-10-27 2019-10-13: 2019-10-27 Does anyone knows how to achieve that? you are creating df, and referencing it in column 'p'? could you have a look at your original dataframe and fix it? does p column indicate the week group? I think you might be able to use some kind of groupby to groupby column p. Another way is to use modulus operations and slicing by column p. This only applies if the data is given in the format as you described in your question. 
However, I think using Pandas Time Series native/datetime operations would probably be the most fruitful way to approach this problem Based on what I understand, you can give the below a try: df['date']=pd.to_datetime(df['date']) #convert to datetime g = df.groupby('p') # group on column p df['p1'] =g['date'].transform('max') # gets the last date for the group df['p2'] = (df['p'].map(g['date'].max().shift(-1)).fillna(g['date'].transform('last')) .astype(str).add(' : ' + df['p1'].astype(str))) print(df) date p p1 p2 0 2019-12-22 1 2019-12-22 2019-11-24 : 2019-12-22 1 2019-12-15 1 2019-12-22 2019-11-24 : 2019-12-22 2 2019-12-08 1 2019-12-22 2019-11-24 : 2019-12-22 3 2019-12-01 1 2019-12-22 2019-11-24 : 2019-12-22 4 2019-11-24 2 2019-11-24 2019-10-27 : 2019-11-24 5 2019-11-17 2 2019-11-24 2019-10-27 : 2019-11-24 6 2019-11-10 2 2019-11-24 2019-10-27 : 2019-11-24 7 2019-11-03 2 2019-11-24 2019-10-27 : 2019-11-24 8 2019-10-27 3 2019-10-27 2019-10-13 : 2019-10-27 9 2019-10-20 3 2019-10-27 2019-10-13 : 2019-10-27 10 2019-10-13 3 2019-10-27 2019-10-13 : 2019-10-27
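For readers without pandas at hand, the same grouping can be sketched with plain datetime arithmetic. Note one deliberate simplification: here p2 spans only the weeks inside the period itself, whereas the pandas answer above anchors the start of the range at the following period's last week:

```python
from datetime import date

# Dependency-free sketch: given weekly dates in descending order, number each
# block of 4 weeks as a period, take the newest date in the block as p1, and
# format the block's own range as p2.
dates = [date(2019, 12, 22), date(2019, 12, 15), date(2019, 12, 8),
         date(2019, 12, 1), date(2019, 11, 24), date(2019, 11, 17)]

rows = []
for i, d in enumerate(dates):
    p = i // 4 + 1                       # 4-week period number
    block = dates[(p - 1) * 4 : p * 4]   # the weeks in this period
    p1 = block[0]                        # last (newest) week of the period
    p2 = f"{block[-1]} : {p1}"           # oldest-present week : newest week
    rows.append((d, p, p1, p2))

print(rows[0][3])  # 2019-12-01 : 2019-12-22
```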
Click event on fullcalendar.js plugin not working I am trying to display a calendar in VueJS. I need the following things: Month view Week view Day view when I click on a cell To display data in each cell of my calendar I can't use the Vuetify plugin because I am already using Bootstrap, that's why I am using the fullcalendar.js plugin (https://fullcalendar.io/docs/vue). However, it doesn't seem the API behind the component is working properly:
<template>
    <div>
        <FullCalendar
                :plugins="calendarPlugins"
                :weekends="false"
                @dateClick="handleDateClick"
        />
    </div>
</template>
<script>
    import FullCalendar from '@fullcalendar/vue'
    import dayGridPlugin from '@fullcalendar/daygrid'

    export default {
        name: "Calendar",
        components: {
            FullCalendar // make the <FullCalendar> tag available
        },
        data() {
            return {
                calendarPlugins: [ dayGridPlugin ]
            }
        },
        methods: {
            handleDateClick(arg) {
                alert(arg.date)
            }
        }
    }
</script>
<style lang='scss' scoped>
    @import<EMAIL_ADDRESS>
    @import<EMAIL_ADDRESS>
</style>
When I click on a date no event is fired, and I don't have any error that appears in the console. What am I doing wrong here? Thanks for your help! According to the FullCalendar documentation: In order for this callback to fire, you must load the interaction plugin. Thanks for the reply, do you know how I can do that with vueJS? This is what I've tried so far:
data() {
    return {
        calendarPlugins: [ dayGridPlugin ],
        plugins: [ interactionPlugin ],
    }
},
Solved it by adding the plugin to the calendarPlugins list. Sorry for the stupid question and thanks for your help man!
Creating an IR Spectrum using Gaussian ADMP Calculation I have a Gaussian 16 trajectory for a small cluster of water molecules carried out with ADMP technique. I was wondering if anyone knows how to compute the IR spectrum from the information in the trajectory. Thank you Dear Victor, if you have an XYZ File, you can use Travis (by Brehm and Kirchner) to calculate power spectra. For an IR spectrum, you need to compute the dipoles - normally done using Wannier centers, the analysis to make the spectrum is also possible in Travis: http://www.travis-analyzer.de/
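For background on what such an analysis computes: the usual route from a trajectory to an IR spectrum is the Fourier transform of the dipole autocorrelation function. A toy, dependency-free Python sketch of that pipeline — the "dipole" here is a synthetic cosine, not real ADMP output, and this is not Travis's actual implementation:

```python
import math

# Toy pipeline: an IR-like spectrum as the discrete Fourier transform of the
# autocorrelation of a dipole time series. The "dipole" is a synthetic cosine
# at 6 cycles per 64-step window, standing in for real ADMP dipole data.
N = 64
dipole = [math.cos(2 * math.pi * 6 * t / N) for t in range(N)]

def autocorrelation(x):
    """Normalized autocorrelation for lags up to half the signal length."""
    n = len(x)
    return [sum(x[i] * x[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(n // 2)]

def power_spectrum(acf):
    """Magnitude of a naive DFT of the autocorrelation function."""
    n = len(acf)
    return [abs(sum(acf[t] * complex(math.cos(2 * math.pi * k * t / n),
                                     -math.sin(2 * math.pi * k * t / n))
                    for t in range(n)))
            for k in range(n // 2)]

spectrum = power_spectrum(autocorrelation(dipole))
peak = max(range(len(spectrum)), key=lambda k: spectrum[k])
print(peak)  # peak bin corresponds to the driving frequency (3 cycles per 32 lags)
```

With real data the dipole would come from the Wannier-center analysis mentioned above, and an FFT would replace the naive DFT.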
Get the current speed of internet (mobile & Wifi) using Android I have an app that has to work in offline and online mode. The application has to make requests to the app server based on the internet speed of the current network connection (Wifi or Data). If Wifi is connected, I can get the Wifi signal strength of the current connection using this code.
WifiManager wifiManager = (WifiManager) getApplicationContext().getSystemService(Context.WIFI_SERVICE);
WifiInfo wifiInfo = wifiManager.getConnectionInfo();
int level = WifiManager.calculateSignalLevel(wifiInfo.getRssi(), 5);
OR
wifiInfo.getLinkSpeed();
If the mobile internet is connected, I was able to get the signal strength using the listener class
@Override
public void onSignalStrengthsChanged(SignalStrength signalStrength) {
    super.onSignalStrengthsChanged(signalStrength);
    int obj = signalStrength.getGsmSignalStrength();
    int dBm = (2 * obj) -113;
}
The above code gets the signal strength, not the speed. So, how do I get the current speed of the connected network? Normally, the speed test apps found in the stores or websites check the speed by sending or downloading a specific file or pinging a remote server and then calculating the speed using the time elapsed. This upload/download or pinging feature is not supported by the app server with which I am communicating, so I am not able to use that. So, is there any alternative way to check the speed of the current internet connection that can be done in real time? Any leads could be helpful. PS: This link has the code for checking the connectivity type of the current network. But there are cases in which I have an LTE signal with full strength but no/very slow internet connection. So, this too, won't be an accurate solution for me. https://gist.github.com/emil2k/5130324 http://stackoverflow.com/a/4430334/4824159 Maybe this will help you
A link may support up to 100 Mbps, but you may only be able to get 2 due to congestion. The only way to know that is to calculate. It also isn't helpful to know that your link supports 100Mbps to foo.com, if you want to hit bar.com which is congested and only able to support 10. Any other method will be an approximation. found a useful medium article. have a look https://android.jlelse.eu/designing-android-apps-to-handle-slow-network-speed-dedc04119aac Did you try this sample : https://github.com/bertrandmartel/speed-test-lib @Ankit Mehta No. I haven't tried the sample. Without any pings to any server, in my understanding, there is no way to check for connectivity speed. You can not get Download/Upload speed without pinging any server. As Your server doesn't support ping you can use third party pinging site. With JSpeedTest library you can do it easily. Some of your desired functionalities are available in this library. Such as speed test download speed test upload download / upload progress monitoring configurable hostname / port / uri (username & password for FTP) configurable socket timeout and chunk size configure upload file storage Gradle: compile 'fr.bmartel:jspeedtest:1.32.1' Example Code: SpeedTestSocket speedTestSocket = new SpeedTestSocket(); // add a listener to wait for speedtest completion and progress speedTestSocket.addSpeedTestListener(new ISpeedTestListener() { @Override public void onCompletion(SpeedTestReport report) { // called when download/upload is complete System.out.println("[COMPLETED] rate in octet/s : " + report.getTransferRateOctet()); System.out.println("[COMPLETED] rate in bit/s : " + report.getTransferRateBit()); } @Override public void onError(SpeedTestError speedTestError, String errorMessage) { // called when a download/upload error occur } @Override public void onProgress(float percent, SpeedTestReport report) { // called to notify download/upload progress System.out.println("[PROGRESS] progress : " + percent + "%"); 
System.out.println("[PROGRESS] rate in octet/s : " + report.getTransferRateOctet());
System.out.println("[PROGRESS] rate in bit/s : " + report.getTransferRateBit());
}
});
I want to check internet speed and set image or video url accordingly, for example if current internet speed is between 50kbps - 150 kbps link1, 150kbps - 500kbps link2 and >500kbps link3. So how do I achieve that? This library does not work on vpn. I tried speedTestSocket.setProxyServer("http://<IP_ADDRESS>:9000"); it is throwing the errors INVALID_HTTP_RESPONSE : Error occurred while parsing http frame and INVALID_HTTP_RESPONSE : Error status code -1 You can do it simply, just add this class named TrafficUtils to your project and call the following method - TrafficUtils.getNetworkSpeed() But first of all, add the two following permissions on your AndroidManifest.xml -
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
This class uses TrafficStats.getMobileRxBytes which returns how much data has been received across mobile networks in total, then waits a second to call it again to find a difference between the two numbers. This is not an accurate representation of the current speed of the user's network. If the user's device is currently not doing much networking, then this number will be lower than if a user was downloading something in the background while this method was called, even while they are on the same network.
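Stripped of the Android specifics, the TrafficStats approach described above is just "sample a byte counter twice and divide by the elapsed time". A platform-neutral sketch of that rate calculation (the counter values here are made up):

```python
# Core of the byte-counter approach: two samples of a cumulative counter
# plus the elapsed time give an average throughput over the interval.
def throughput_kbps(bytes_before, bytes_after, elapsed_seconds):
    """Average throughput in kilobits per second over the interval."""
    bits = (bytes_after - bytes_before) * 8
    return bits / elapsed_seconds / 1000

# e.g. 250 000 bytes received over 2 seconds -> 1000 kbps
print(throughput_kbps(1_000_000, 1_250_000, 2.0))  # 1000.0
```

The caveat from the last paragraph applies unchanged: this measures how much traffic actually flowed, not how much the link could carry.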
Do WHERE clauses exist in Lucene(.Net)? I'm building an ASP.NET MVC site where I want to use Lucene.Net for full-text search. My site will be divided into certain categories, and I want to allow users to search inside a specific category or inside all categories. To accomplish this, I plan to create a term in all documents in my index that contains the name of the category that they're in. When querying the index, I would need to execute a query that contains a WHERE clause if the user only wants results from one category. Does such WHERE clause functionality exist in Lucene/Lucene.Net? How do I restrict searches to only return results from a limited subset of documents in the index (e.g. for privacy reasons)? What is the best way to approach this? Thanks for the link. The FAQ there says: "Just before calling IndexSearcher.search() add a clause to the query to exclude documents in categories not permitted for this search." How do I add a clause to the query? Take a look here to see how to use the QueryFilter class - http://stackoverflow.com/questions/1307782/lucene-net-combine-multiple-filters-and-no-search-terms To implement a custom filter: http://stackoverflow.com/questions/1079934/how-do-you-implement-a-custom-filter-with-lucene-net
Remove rows from a CSV file when any of 4 columns have null values I have a CSV file and I need to filter out some rows that do not contain certain values. Because of this I don't care about those rows and want to remove them or put the results of the command in a new csv file. This is the format of my CSV file:
employeeid,time,homephone,workphone,ssn,insurance,address,state,salary,position,rank,boss,hiredate
Now there are some rows that have no information in some of these fields. How would I perform an awk or sed command to read all the lines in the csv file and only put the lines in which no fields are null into another file? Or would it be possible to replace every ,, with a word like notthere? I have some word replacing going on here but this is not 100% working. So far I have something like this:
sed -e 's/^,/notthere,/' old.csv > new.csv
This pretty much does nothing that I am looking for. I would greatly appreciate it if someone could help me out. I am not that experienced with using Linux commands at all. Thank you! Seems like you could also grep the file for connected commas:
grep -v ',,' somefile.csv > newfile.csv
EDIT: Just realized you have fields at the beginning and end you want to check for as well. We can include those with regex, like so:
grep -vE ',,|^,|,$' somefile.csv > newfile.csv
grep -v means 'inverse', in other words: print all lines that don't match these patterns: two commas together, a comma at the start of the line, a comma at the end of the line. The | here means "or". Thank you so much! The inverse way of doing things is interesting and works at the same time. Now I can carry on with my other code I need to do. Thank you!
This should work: sed -e 's/,,/,notthere,/' old.csv > new.csv should add -e 's/^,/notthere,/' -e 's/,$/,notthere/' to check first and last field for emptiness Some sample data would have helped, but try this to skip lines with empty fields: awk -F , '{n=0; for (i=1;i<=NF;i++) if ($i=="") n++} n==0' filename More readably awk -F , '{ empty=0 for (i=1; i<=NF; i++) { if ($i == "") { empty++ } } if (empty == 0) { print } }' filename It's worth noting that the above examples are "grepping" across the entire row. Another approach is to search specific columns for non-existence using awk as shown below. Given a comma delimited file, the below script only prints lines that have empty values in column 2 signified by $2. The print $0 portion means print the entire line. Print all lines where column 2 is empty, redirect to new.csv awk -F "," '$2 !~ /./ {print $0}' old.csv > new.csv Another related example, print column 3 when only it matches regular expression [0-9] awk -F "," '$3 ~ /[0-9]/ {print $3}' old.csv > new.csv
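A hedged alternative to the grep/awk one-liners: Python's csv module does the same filtering and, unlike the regex approaches, correctly handles quoted fields that contain commas. This sketch uses a shortened version of the header above:

```python
import csv
import io

# Keep only rows where every field is non-empty. The data is inline here;
# with a real file you would pass an open file object instead of StringIO.
data = """employeeid,time,homephone
1,09:00,555-0100
2,,555-0101
3,10:30,
"""

reader = csv.reader(io.StringIO(data))
header = next(reader)
complete = [row for row in reader if all(field.strip() for field in row)]
print(complete)  # [['1', '09:00', '555-0100']]
```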
Generating an HTML link to a file once it is created I have the following bit of code which pops open a jquery mobile dialog box and updates the contents of the dialog box with a link to a file when it detects the file exists on the server. It checks for the existence of the file every second and tries up to 10 times. The code works fine, however, when the user clicks on the actual link in the dialog box to download the file, nothing happens. However, if the browser is refreshed, the file does get downloaded. Obviously, I'd like the user to just click once on the link to begin the download. I'm sure the way I'm using setTimeout() is what's gumming things up but I'm not sure what else to try. Javascript/jquery is not my strong suit. Thanks! function download_notify(grp_name, token) { $('#download_dialog_open_button').click(); exists(0); function exists (try_count) { try_count = try_count + 1; $.ajax ( { type: 'HEAD', url: '/files/' + grp_name + '_' + token + '.csv', async: true, error: function (try_count) { if (try_count < 10) {setTimeout(exists, 1000); } else { return; } }, success: function () { $('#download_dialog h1.ui-title').html('File ready'); $('#download_dialog .ui-content div').html('<a href="/files/' + grp_name + '_' + token + '.csv">Download</a>'); } }) } } why are you using \$ instead of $ ? I was generating it with perl code and had to escape them. Fixed. setTimeout(exists(), 1000); is wrong. It should be setTimeout(exists, 1000); Thanks @epascarello. My perl habits leaking into the code there. Edited. Have you tried rel="external" ?, i.e.: $('#download_dialog .ui-content div').html('<a rel="external" href="/files/' + grp_name + '_' + token + '.csv">Download</a>'); That worked! However, I still have a suspicion I'm not using the setTimeout properly.
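The retry loop in the question — poll for a condition up to 10 times with a delay between attempts — is a generic pattern. A language-agnostic sketch in Python (the "file appears" predicate is simulated; the real code does an HTTP HEAD request, and the 0.01 s delay here stands in for the original 1 s):

```python
import time

# Core of the AJAX retry loop: poll a predicate up to max_tries times
# with a delay between attempts.
def wait_for(predicate, max_tries=10, delay=0.01):
    for _ in range(max_tries):
        if predicate():
            return True
        time.sleep(delay)
    return False

# Simulated "file appears after 3 checks":
state = {"checks": 0}
def file_exists():
    state["checks"] += 1
    return state["checks"] >= 3

print(wait_for(file_exists))  # True (succeeded on the 3rd attempt)
```

In the jQuery version, the predicate is the success of the HEAD request and the delay is the `setTimeout(exists, 1000)` call.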
Is coffee a drug that you could give to your dog? Sometimes questions are very short and answers very long. We all like coffee, but, would you give coffee to your dog? According to dictionary.com, one of the definitions of the word drug is: a chemical substance used in the treatment, cure, prevention, or diagnosis of disease or used to otherwise enhance physical or mental well-being As caffeine, which coffee contains, is an active ingredient in some medicines (such as Anadin) it certainly fits that definition so it would be correct to say that coffee contains a drug (putting aside decaf). If you take the first sentence of the Wikipedia page that describes drug: A drug (/drɑːɡ/) is any substance that causes a change in an organism's physiology or psychology when consumed. Caffeine also fits that definition as there's a body of evidence that supports that description when talking about caffeine. Would you give coffee to your dog? Absolutely not. There is a page on petmd.com titled Caffeine and Pets: Safety Tips and Considerations which states: It turns out our pets react in much the same way we do. Caffeine makes them restless. They get jittery and their hearts start to race. That doesn't sound.... ideal! It turns out that it's not, and the next part of the page makes it pretty clear why: But because our pets weigh so much less than we do, it only takes a relatively small amount of caffeine to cause a big problem, potentially leading to expensive hospitalization or even death. Further on in the article it's explained that 23-27mg caffeine per lb of body weight can lead to cardio-toxicity. For a 3.5kg cat, such as mine, that equates to between 177mg and 207mg of caffeine. Based on the table in this answer, that puts a double espresso or a 200mg caffeine tablet squarely in that range. That's absolutely not to say that giving your dog, or other pet, coffee (even decaf!) is something you should consider doing. 
The petmd page states it much better than I ever could (my emphasis): “Cats and dogs should not ingest any caffeine,” says Dr. Elisa Mazzaferro, adjunct associate clinical professor of emergency-critical care at Cornell University College of Veterinary Medicine in Ithaca, New York.
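The 23-27 mg/lb figures quoted above translate into the cat example by simple unit arithmetic. A small sketch of that calculation (the function name and rounding are mine; the dose range is from the petmd quote):

```python
# Dose range arithmetic from the quoted 23-27 mg caffeine per lb of body
# weight. The kg-to-lb conversion factor is standard.
LB_PER_KG = 2.20462

def cardiotoxic_range_mg(weight_kg, low=23, high=27):
    """Approximate cardiotoxic caffeine range (mg) for a pet's body weight."""
    weight_lb = weight_kg * LB_PER_KG
    return round(weight_lb * low), round(weight_lb * high)

print(cardiotoxic_range_mg(3.5))  # (177, 208), close to the 177-207 mg in the text
```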
Observable on one parent node vs multiple sub nodes Firebase angular 5 I am confused about Firebase observables in Angular; here is the structure. I want to watch all chats of a particular user who is logged in to my app, and I am seeing two options for achieving this. Watch the /chats node, which means the logged-in user will get notifications for all users' chats and I have to filter out the logged-in user's chats. Watch multiple nodes like /chats/chatId , /chats/chatId .... and so on. In this case the logged-in user will not get notifications for all users' chats; they will only get notifications for their own chats. So which one will be good, or if there is something better than this, please let me know. I would say, it's better to listen to chat node, chat.author equals . @rijin But it would be bad for the logged-in user to get all events from other users' chats in the case of watching the full node /chat. you will be querying for a specific user, so how will you get all users' messages? @rijin I am talking about the top-level /chats node, not about /user/id/chats . and as you can see in users/id/chats there is a mapping of the logged-in user's chats userId=>chatId, so I have to watch all these chatIds in the top-level /chats. Are you creating an app where multiple users can have chats with one person or many? ( https://stackoverflow.com/q/52218336/5675325 )
Why can I not insert a substring into an array using splice()? Excuse the petty question, but this is really nagging me. I'm following the mozilla example: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice Can someone explain why this doesn't work: <body> <p id="test"> </p> </body> var url = "teststring"; document.getElementById("test").innerHTML = (url.split('').splice(2,0,"teststring").join('')); jsFiddle: https://jsfiddle.net/uyk2p437/1/ what do you expect? @MattBurland I think you meant splice not split. (5min already passed)! @ibrahimmahrir: Whoops. Thanks. because splice() doesn't return the Array you've spliced, but the array you've removed. In your case url.split('').slice(2,2).join('') would do exactly the same https://jsfiddle.net/uyk2p437/2/ The Array#splice method returns an array containing removed elements, in your case, it would be empty and you are applying Array#join method which generates an empty string. Use String#slice ( or String#substring) method instead : url.slice(0, 2) + "teststring" + url.slice(2) var url = "teststring"; document.getElementById("test").innerHTML = url.slice(0, 2) + "1" + url.slice(2); <body> <p id="test"> </p> </body> substr will be better since it's a string (I don't mean in performance or anything I just meant it would make sense)! But good answer anyway! Because the return value from splice is the removed items. Not the modified array. It modifies in place As per MDN Return value An array containing the deleted elements. If only one element is removed, an array of one element is returned. If no elements are removed, an empty array is returned. 
var url = "teststring"; var split = url.split(''); split.splice(2,0,"teststring"); // this returns an empty array because you aren't removing anything // but the value in split is now (the inserted string stays a single element): // ['t','e','teststring','s','t','s','t','r','i','n','g'] console.log(split.join('')); // gives you the expected result, since join('') flattens it back into one string Or you can use slice as in Pranav's answer, or substr or substring.
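The thread's gotcha — a mutator whose return value is not the updated collection — is not unique to JavaScript. As an illustration only, here is the analogous trap sketched in Python, with `list.insert` standing in for `splice`:

```python
# Python analogue of the splice() pitfall: mutating list methods
# (insert, sort, reverse) modify the list in place and return None,
# so chaining them the way the question chained splice() fails too.
url = "teststring"
chars = list(url)

result = chars.insert(2, "X")   # mutates chars, returns None
assert result is None

# The mutation happened on the list itself:
assert "".join(chars) == "teXststring"

# The working pattern mirrors the accepted answer: build the new
# string from slices instead of relying on the mutator's return value.
patched = url[:2] + "X" + url[2:]
assert patched == "teXststring"
```

In both languages the fix is the same: use the mutated object afterwards, or build the result from slices.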
Translating user id when creating image I want to create an image (for example an ext3 image) from a given directory hierarchy. This directory hierarchy is owned by a given user, but on the final image I want the files to be owned by the superuser. Is there a solution for translating the user uid/gid to 0? Here is a pseudo workflow: Create the file hierarchy, as user, in /home/user/rootfs Create an ext3 img on a file Loop-mount the empty img (as user or root) in /somewhere/loopmount cp -R --magic_uid_options 500=0 /home/user/rootfs /somewhere/loopmount Or step 4 could be a normal copy and step 5 a clever find that would chown and chgrp recursively. Is there a well-known solution to this kind of problem? I don't know any "--magic_uid_options" for the cp command, and you don't need to be clever to do a recursive chown or chgrp. In fact I would: keep as many file/directory properties as possible when doing the copy (timestamps, ...): cp -pR /home/users/rootfs /somewhere/loopmount/ then chown everything to root:root: chown -R root:root /somewhere/loopmount You're absolutely right. Just after asking the question I googled for recursive chown... -R is not that clever indeed ;) man is also a good thing to start with ;) You could make step 4, as root: cd /home/user/rootfs pax -rw -pp . /somewhere/loopmount -pp preserves modes and times but creates files owned by the user and group running pax.
On periodic orbits of a discrete dynamic system Let $f: D \subset \mathbb{R} \rightarrow \mathbb{R}$ be continuous with $f(D) \subset D$ and let $x_0$ be an $n$-periodic attractor point of $f$. Show that every point of the cycle $\{ x_0, f(x_0),\dots,f^{n-1}(x_0) \}$ is also an $n$-periodic attractor point of $f$. I tried using the definition of continuity in $\mathbb{R}$, but I always end up using 3 points of the cycle and I don't know how to solve this. Can anyone help me? A periodic point is attractive only if the product of the derivatives at every point on the cycle has an absolute value less than 1. But you get the same product regardless of what point on the cycle you start on. It follows that if one point in the cycle is attractive, then so are the rest. I know that, but I can only assume continuity, not differentiability. Our professor explicitly said that we couldn't use the derivative criterion. Thanks anyway! What is your definition of attractor? An attractor of a function $g$ is a point $x$ such that there exists an $\varepsilon > 0$ so that for every $y \in (x-\varepsilon, x+\varepsilon)$, you have $\lim_{k \to \infty} g^k(y) = x$. So essentially, you have per assumption continuous functions $F,G$, here more precisely $F=f^k$, $G=f^{n-k}$, with a cycle $F(x_*)=y_*$ and $G(y_*)=x_*$ and a neighborhood $U$ of $x_*$ so that $$\lim_{m\to\infty}(G\circ F)^{\circ m}(x)=x_* ~~\text{ for all }~~ x\in U. $$ Now take $V=G^{-1}(U)$ and observe $$(F\circ G)^{\circ m}=F\circ(G\circ F)^{\circ(m-1)}\circ G$$ to find a similar limit property for all $y\in V$ being propagated towards $y_*$. This redistribution of the compositions means that a $y=y_0\in V$ gets mapped to some $x=x_0\in U$, and the sequences $x_m$, $y_m$ are connected by $x_m=G(y_m)$ and $y_{m+1}=F(x_m)$. Thus convergence of one sequence is equivalent to the convergence of the other.
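Since the derivative criterion is off the table for the proof, here is only a numerical illustration of the statement (not a proof; the logistic map with parameter 3.2 is an assumed example known to have an attracting 2-cycle):

```python
# Numerical sanity check of the statement, using the logistic map
# f(x) = 3.2 * x * (1 - x), which has an attracting 2-cycle.
# This is an illustration only, not a proof.

def f(x):
    return 3.2 * x * (1 - x)

# Iterate long enough for transients to die out.
x = 0.3
for _ in range(2000):
    x = f(x)

# x is now (numerically) on the 2-cycle: applying f twice returns to x ...
assert abs(f(f(x)) - x) < 1e-9

# ... and the other cycle point f(x) is attracting as well: a small
# perturbation of it is pulled back onto the cycle.
y = f(x) + 1e-4
for _ in range(2000):
    y = f(y)
assert abs(f(f(y)) - y) < 1e-9
```

Both cycle points behave as attractors for $f^{\circ 2}$, which is exactly what the exercise asks to establish in general.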
python eve gracefully exit from callback I'm wondering if it's possible to update an item without fully processing the PATCH request. What I'm trying to do is randomly generate and insert a value into the db when a user sends a PATCH request to the accounts/ endpoint. If I don't exit from the PATCH request I will get an error, because it expects a value that I cannot give in advance, since it will be randomly generated. def pre_accounts_patch_callback(request, lookup): if not my_func(): abort(401) else: return HTTP 201 OK What can I do? Not sure I get what you want to achieve, however keep in mind that you can actually update lookup within your callback, so the API will get back and process the updated version, with validation and all. from eve import Eve import random def pre_accounts_patch_callback(request, lookup): lookup['random_field'] = random.randint(0, 10) app = Eve() app.on_pre_PATCH_accounts += pre_accounts_patch_callback if __name__ == '__main__': app.run()
kubernetes Extended resource for pod Kubernetes has a feature named extended resources, but I do not know what effects it brings. In other words, what are the differences between using it and not using it? Besides, is the feature similar to the default limit or default request? Extended resources are custom resources your nodes can advertise to the cluster, to make sure pods scheduled on them can get enough of them at scheduling time. The default limit/request allows you to assign these resources implicitly, even if the pod spec did not explicitly specify them. So, what is included in the custom resources? It's up to you; as you can see in the docs, they use a dongle as an example, but it can be anything you think of. Including the CPU limit and memory limit? CPU and memory are the default resources k8s supports, and are provided by the node exporter. @zhangqichuan, if you feel my answer was helpful for you, please consider accepting it as the correct answer.
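The docs' dongle example can be made concrete with a sketch of a pod spec (the resource name `example.com/dongle` is the illustrative one from the Kubernetes docs; the pod and container names here are made up): once some node advertises the resource in its capacity, a pod requests it like cpu or memory, and the scheduler only places the pod on a node with enough of it left.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dongle-user
spec:
  containers:
  - name: worker
    image: busybox
    resources:
      requests:
        example.com/dongle: "1"  # only schedulable on a node advertising this resource
      limits:
        example.com/dongle: "1"  # for extended resources, requests and limits must be equal
```

Unlike a LimitRange default (which implicitly fills in requests/limits for standard resources), an extended resource is an opaque counter the node itself advertises; the scheduler just does the bookkeeping.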
Two characters seem identical but UTF-8 encodings are not identical I need to filter some illegal strings like "Password", but I found someone bypassed my check program. They input a string that seems to be exactly "Password" but it's not equal. I checked the Unicode of it and, for example, the "a" is 8e61, while a normal "a" is 61 (hex). My PHP files' encoding, HTML meta Content-Type and MySQL encoding are utf-8. How does this happen? Why are there visually identical characters with different codes? I want to know how I can filter these characters. I put the weird string here, please copy it for research: Password For some reason when I copied the "Password" with the problem here, it actually displayed as the ASCII one. I used the PHP function bin2hex() on "Password", and got the following: 50c28e61c28e73c28e73c28e776fc28e72c28e64c28e while a normal one is: 50617373776f7264. To make it simpler, the hexadecimal representation for "a" is: c28e61 while the normal one is: 61 Welcome to Stack Overflow. Please read the [About] page soon. Welcome to the wonderful world of Unicode, too. There are a lot of characters with multiple representations. For a semi-exotic example, the Arabic digit one is encoded twice, once for western Arabic U+0660 and once for eastern Arabic U+06F0, but the symbol is the same; it is some of the other digits that differ. See In Unicode, why are there two representations for the Arabic digits. You'll have to decide whether you're going to treat U+8E61 the same as U+0061 [...continued...] [...continuation...] Hold on; U+8E61 is a Unified Han symbol. Which code page are you using? 0x8E61 is not valid UTF-8; the 0x8E is a continuation byte, and the 0x61 is LATIN SMALL LETTER A, which can't be followed by a continuation byte. You've not given all the information we need; what is the entire byte sequence you're dealing with? The comments above are still accurate and more or less relevant, but you are unlikely to be treating U+8E61 as if it was U+0061.
I copied your string and it is identified as containing: 0x0000: 50 61 73 73 77 6F 72 64 Password. That's the regular ASCII representation of Password. So either your copy/paste didn't preserve the odd characters, or mine didn't. I'm working on a Mac. Can you identify the bytes you think you have in hex? (Oops: U+0660 and U+06F0 are Arabic zeroes, not ones; U+0661 and U+06F1 are the ones.) @Jonathan Leffler, hex string provided, thanks Given the hex string 50c28e61c28e73c28e73c28e776fc28e72c28e64c28e, you have an encoding of a valid UTF-8 string:

0x50 = U+0050 = P
0xC2 0x8E = U+008E = SS2
0x61 = U+0061 = a
0xC2 0x8E = U+008E = SS2
0x73 = U+0073 = s
0xC2 0x8E = U+008E = SS2
0x73 = U+0073 = s
0xC2 0x8E = U+008E = SS2
0x77 = U+0077 = w
0x6F = U+006F = o
0xC2 0x8E = U+008E = SS2
0x72 = U+0072 = r
0xC2 0x8E = U+008E = SS2
0x64 = U+0064 = d
0xC2 0x8E = U+008E = SS2

The 0xC2 0x8E sequence maps to ISO 8859-1 0x8E, which is a control character SS2, or Single Shift 2 (see the Unicode Code Charts). SS2 doesn't have a defined visible representation. The string is clearly different from plain 'Password'. As long as you don't strip out control characters, you should be able to spot the difference, as a string comparison should not treat that as identical to plain 'Password'. Thank you! How do I remove this character, or this kind of character, in PHP? I searched some, like this http://stackoverflow.com/questions/1176904/php-how-to-remove-all-non-printable-characters-in-a-string, but they can't remove this character. I've found the solution to remove it here: http://stackoverflow.com/questions/3295125/preg-replace-to-strip-out-non-printing-characters-seems-to-remove-all-foreign-ch What you might be seeing (I can't tell exactly because parts of your question don't make sense or are inconsistent) are so-called homoglyphs. Those are characters that look identical or very similar and thus can be mistaken at first glance.
To circumvent your check people can use a Cyrillic a and get away with it. But frankly, this isn't actually a problem because I know no password cracker that will actually try mixing scripts, as most passwords are ASCII-only. As for the why, you can take a look at Why are there duplicate characters in Unicode?.
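The byte-level analysis in the answer can be reproduced in a few lines — shown here in Python rather than PHP, as a language-neutral sketch: decode the reported hex, and the visually hidden characters turn out to be the C1 control U+008E, which a control-character filter removes.

```python
import unicodedata

# The hex dump the asker posted for the fake "Password"
raw = bytes.fromhex("50c28e61c28e73c28e73c28e776fc28e72c28e64c28e")
text = raw.decode("utf-8")

# The string is NOT equal to the plain ASCII one ...
assert text != "Password"
assert len(text) == 15           # 8 letters + 7 hidden controls

# ... because the hidden characters are U+008E (SS2), a C1 control:
assert text[1] == "\u008e"
assert unicodedata.category("\u008e") == "Cc"

# Stripping control characters recovers the plain string, which is one
# way to harden the filter against this trick:
cleaned = "".join(ch for ch in text if unicodedata.category(ch) != "Cc")
assert cleaned == "Password"
```

The same idea works in PHP with a regex over Unicode categories, as the linked preg_replace answer does.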
"Different than" followed by nominative case? I'm going to try to explain my question as clearly as I can: "Different" usually takes a preposition, either "from" (standard English regardless of region), "to" (British English), or "than" (American English.) But "than" differs from the other options because, unlike conventional prepositions, it can also be used as a conjunction and take the nominative case: grammatically conservative dialects and styles of English prefer "He is taller than I." Even though "different" normally takes a preposition followed by objective case, do any English speakers (or usage authorities?) also consider a sentence like "My parents look different than I" acceptable? The existing questions have some information: this and this. The former quotes AUE: "Different than" is sometimes used to avoid the cumbersome "different from that which", etc. (e.g., "a very different Pamela than I used to leave all company and pleasure for" -- Samuel Richardson). And this is the canonical post on "than him" vs "than he". The usage for different differs from that for comparatives. My late aunt was an English teacher in the UK and she would be spinning in her grave if she heard me use anything other than "different from". As far as she was concerned that is the only grammatically correct construct, @PeterJennings As stated in the question, "different than" is primarily an American construction. I think this is a strong argument for using 'different from' and the objective case.
React PWA check if app is installed on mobile I used create-react-app to create my first PWA. Everything worked great, but I encountered this issue: I want to display a button to the user that will install the app. I managed to display the button, but now I want to hide it if the user has the application installed on his phone. How can I achieve this? I tried to use getInstalledRelatedApps() but without any luck. Any help will be much appreciated! Does this answer your question? Javascript to check if PWA or Mobile Web Not really, but thanks for the resource. If for example the user has the app installed on his phone but enters the app from his browser, he will still see the install button. I want to display the button inside the browser only if the user hasn't already installed the app. You can check if the display mode is standalone (to know if the app is installed as a PWA) like this: { !window.matchMedia('(display-mode: standalone)').matches && <button>...</button> } This will partly solve the issue. If for example the user has the app installed on his phone but enters the app from his browser, he will still see the button. I want to display the button inside the browser only if the user hasn't already installed the app. I have this problem too
Map from $\mathbb {R}$ to $\mathbb {R^2}$ Is there a way to construct an injective function that maps from $\mathbb{R}$ to $\mathbb{R^2}$? If yes, please give me an example. Thank you! $x \mapsto (x,0)$ is injective from $\mathbb{R}$ to $\mathbb{R}^2$. I guess you forgot something in your question. I think you mean surjective. If you meant surjective, there still is one. There is a bijection between $\mathbb{R}^2$ and $\mathbb{R}$. See http://en.wikipedia.org/wiki/Cardinal_number#Cardinal_multiplication Injective means that for $(x,y) \in \mathbb R^2$ and $a,b\in\mathbb R$ we have $f(a)=(x,y) = f(b) \Rightarrow a=b$, thus all we need is for every element of $\mathbb R$ to go to a different point in $\mathbb R^2$. Any line through $\mathbb R^2$ will satisfy this (amongst others). Thus $$f(a) = \left(2a-0.5,1.7-a\right)$$ is injective, for example. $f(x) = (x, 2){}$ is injective.
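A throwaway numerical check of the last example (an illustration only — the real argument is simply that the first coordinate $2a-0.5$ already determines $a$):

```python
# The map f(a) = (2a - 0.5, 1.7 - a) from the answer: its first
# coordinate alone determines a, so f is injective. Quick spot check:

def f(a):
    return (2 * a - 0.5, 1.7 - a)

points = [i / 10 for i in range(-50, 51)]   # 101 distinct sample inputs
images = {f(a) for a in points}
assert len(images) == len(points)           # no two inputs collide

# Recovering a from the first coordinate inverts f on its image:
for a in points:
    x, _ = f(a)
    assert abs((x + 0.5) / 2 - a) < 1e-12
```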
Composite Primary Key Cascading Two fields, Building and Room, make up a unique primary composite key in my rooms table. The key validates and saves, etc. I have a BLANK Objects table which has three fields which will make it unique (again a composite primary key). The tables are as follows:

ROOM TABLE
[Building] [Room]
01         101A
01         102
02         101A

OBJECT TABLE
[Building] [Room] [Number]
01         101A   1
01         101A   2
01         102    1
02         101A   1

How do I enforce referential integrity? When editing the relationships in MS Access' relationship tool, I get the following error: No unique index found for the referenced field of the primary table. I know (by trying non-unique values) that the composite key for the primary (Object) table is correct. What am I doing wrong? How do I set up the proper relationships and maintain integrity (as updates will be a gruelling challenge without them)? You need to set up your key like so: Note that the primary key for rooms is set to Building + Room and for Objects it is Building + Room + Numb (Number is a reserved word AFAIR) Oh wow.... (you also replied to a previous, somewhat related question today because I couldn't get this method to work). Turns out that it matters which table is the "Related" table. I was dragging Building from objects to rooms instead of from rooms to objects. It makes sense.. but it didn't click with me that the Rooms table should be the "primary" table instead until I saw your screenshot. Solved!
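Outside Access, the same composite-key referential integrity can be reproduced in any SQL engine. Here is a sketch using Python's built-in sqlite3 module (table and column names taken from the question; SQLite is of course only standing in for Access):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite needs this per connection

conn.execute("""
    CREATE TABLE rooms (
        building TEXT NOT NULL,
        room     TEXT NOT NULL,
        PRIMARY KEY (building, room)
    )""")
conn.execute("""
    CREATE TABLE objects (
        building TEXT NOT NULL,
        room     TEXT NOT NULL,
        number   INTEGER NOT NULL,
        PRIMARY KEY (building, room, number),
        -- the child's two columns reference the parent's composite key
        FOREIGN KEY (building, room) REFERENCES rooms (building, room)
    )""")

conn.execute("INSERT INTO rooms VALUES ('01', '101A')")
conn.execute("INSERT INTO objects VALUES ('01', '101A', 1)")   # ok

# Referential integrity now rejects an object in a nonexistent room:
try:
    conn.execute("INSERT INTO objects VALUES ('99', '999Z', 1)")
    raise AssertionError("expected a foreign key violation")
except sqlite3.IntegrityError:
    pass
```

As in the Access dialog, the direction matters: the FOREIGN KEY lives on the child (Objects) side and points at the parent (Rooms) key.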
Jenkins: cannot see test report tab I cannot see the report tab here: Publish JUnit test report: I have configured "Publish JUnit test report", but I still cannot see the test report in the left area. Make sure you have some .xml files in **/target/surefire-reports/*.xml yes, there are some xml files in the surefire-reports folder. As the OP joan.li adds in the comments, you need to make sure your Jenkins installation knows about Maven (as mentioned in "Jenkins executing maven from incorrect path"): Add the default Maven installation under (Jenkins -> configuration) Go to the failing job and make sure you choose the default Maven installation from the dropdown And you need to make sure your pom.xml does include a surefire build step <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.16</version> <configuration> <suiteXmlFiles> <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile> </suiteXmlFiles> </configuration> </plugin> See also this example. Check if those xml files are generated: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <version>2.4</version> <configuration> <formats> <format>xml</format> <format>html</format> </formats> </configuration> </plugin> @joan.li Don't hesitate to edit your question to add new information. there are some xml files in the surefire-reports folder. @joan.li you still need to check that the build step in your maven build does include a surefire step, as configured in my answer. So that would be what is missing in your case. Check the first link in my answer for a complete pom.xml example. I have solved the problem. Just scroll down about halfway to the Maven section and click Maven installations. Give the path to your Maven install in MAVEN_HOME. @joan.li Right! I have added the relevant step and illustrations to the answer for more visibility.
Accessing jQuery Variable on form My data looks something like this: I have a parent class which has some attributes and a list of child objects. The child class has attributes like ID and Name. What I want to achieve is: on load, display all the parent items; on click of a parent item, list all the child items below it. I could achieve this using the following jQuery function: $(document).ready(function () { $(".parent").click(function () { $("#child").show(); var v1 = "parent_" + $(this).attr('id'); $("#" + v1).slideToggle("slow"); $("#" + v1).siblings().hide(); }); }); This function is performing well as far as hiding or displaying the div containing child items is concerned. I have provided a button which helps me add new parent items. Similarly, I want to provide a button which will help me add a child item under the selected parent item. I'm not sure how to provide the parent id when the button in the child item div is clicked. Is there a way I can create a variable in the above-mentioned function and use it on click of the 'Add Child' button? Is document.getElementById("Child") working? Post your html code as well. Try if $(this).parent() can help? Or perhaps $(this).closest(".parent"). Thanks, this worked. You can try $(this).parent() or $(this).closest(".parent") to get the id of the parent and store your child under the respective parent
How to check whether ear is deployed correctly in weblogic 12c? I have an application ear which I am trying to deploy into an Oracle WebLogic 12c server. I followed the instructions given on this site. After following those steps, I tried to hit the url "http://localhost/myApp/servlet" where myApp is specified in the application.xml file and servlet is specified in the web.xml file. This web.xml file is located with the .war file, and this war file is within the .ear file. My application.xml looks like below : <application xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/application_1_4.xsd" version="1.4"> <description>This is my application</description> <display-name>myApp</display-name> <module> <web> <web-uri>myApp.war</web-uri> <context-root>/myApp</context-root> </web> </module> </application> and my web.xml looks as below : <servlet> <servlet-name>MyServlet</servlet-name> <servlet-class>com.foo.bar.MyServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>MyServlet</servlet-name> <url-pattern>servlet</url-pattern> </servlet-mapping> When I hit the url in the browser, I get a 404 (Not Found) error. I am trying to deploy an ear/war on WebLogic 12c for the first time, so I do not know what is going wrong. I checked the server logs too, but there's no stack trace or error. Please help! Open your Admin Console and you can see if the application is deployed correctly. In NetBeans IDE you go to Services > Servers > Oracle Weblogic Server > right click on Oracle Weblogic and select View Admin Console. Then you have to log in to your session, and in the menu on the left side of your screen you have Deployments. In this part you can see if the deployment is correct. I hope this will help you.
Appending to next row the result of math operation between three columns So, I have the following Pandas DataFrame where all values in the third column (Ratio) are the same:

import pandas as pd
df = pd.DataFrame([[2, 10, 0.5], [float('NaN'), 10, 0.5], [float('NaN'), 5, 0.5]], columns=['Col1', 'Col2', 'Ratio'])

╔══════╦══════╦═══════╗
║ Col1 ║ Col2 ║ Ratio ║
╠══════╬══════╬═══════╣
║ 2    ║ 10   ║ 0.5   ║
║ NaN  ║ 10   ║ 0.5   ║
║ NaN  ║ 5    ║ 0.5   ║
╚══════╩══════╩═══════╝

I want to know if there is a way to multiply Col1 * Ratio, then add the output of that product to Col2, and append the value to the next row's Col1, using a function provided by pandas. Output example:

╔══════╦══════╦═══════╗
║ Col1 ║ Col2 ║ Ratio ║
╠══════╬══════╬═══════╣
║ 2    ║ 10   ║ 0.5   ║
║ 11   ║ 10   ║ 0.5   ║
║ 15.5 ║ 5    ║ 0.5   ║
╚══════╩══════╩═══════╝

better use a for loop @YOBEN_S that's what I want to avoid, if possible. I'm not sure you can, as in your operation each row depends on the result of the previous row... so they must be executed in order. (maybe you can avoid an explicit loop using apply or something, but that just loops under the hood...) and does ratio vary across the rows? @QuangHoang Nope, ratio stays the same in all rows. I think you're looking for window functions in pandas. I think this Stack Overflow question might point you in the right direction @Adam.Er8 What you said is very similar to ewm, which is vectorizable. However, it might not be worth implementing such a solution here. I think numba is the way to work with loops here if performance is important:

from numba import jit
@jit(nopython=True)
def f(a, b, c):
    for i in range(1, a.shape[0]):
        a[i] = a[i-1] * c[i-1] + b[i-1]
    return a

df['Col1'] = f(df['Col1'].to_numpy(), df['Col2'].to_numpy(), df['Ratio'].to_numpy())
print (df)

   Col1  Col2  Ratio
0   2.0    10    0.5
1  11.0    10    0.5
2  15.5     5    0.5

So, as it seems there is no pandas built-in function to achieve the goal of the question, I've marked this as the correct answer because at least it is focused on performance.
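For readers without numba, the same recurrence (next Col1 = previous Col1 × Ratio + previous Col2) can be sketched with a plain Python loop over lists — dependency-free, at the cost of speed on large frames:

```python
# Plain-Python version of the recurrence used in the numba answer:
#   col1[i] = col1[i-1] * ratio[i-1] + col2[i-1]
# Works on ordinary lists, so it is easy to verify by hand.

def fill_col1(col1_start, col2, ratio):
    col1 = [col1_start]
    for i in range(1, len(col2)):
        col1.append(col1[i - 1] * ratio[i - 1] + col2[i - 1])
    return col1

col2 = [10, 10, 5]
ratio = [0.5, 0.5, 0.5]
result = fill_col1(2.0, col2, ratio)
assert result == [2.0, 11.0, 15.5]   # matches the question's expected output
```

The resulting list could then be assigned back with df['Col1'] = result.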
Input as List of dictionary python str_arr = input() d = "}, " for line in str_arr: new_str = [e + '},' for e in str_arr.split(d) if e] print(len(new_str)) for i in new_str: print(i) I want my code to accept a list of dictionaries, but the input is a string; I tried to convert the input string to a list, but my code adds }, at the end of the console output. How can I remove it? In the for loop, should you simply do: new_str = [eval(e.strip()) for e in str_arr.split(",")] I assume that str_arr is an input like {"A":1},{"B":2,"C":3}, in other words, a string of comma-separated dictionaries eval is evil, avoid using it; instead you could use json.loads can you show me how to use the json.loads implementation? I tried but my code had an error yes, Lemmi try it new_str = [eval(e.strip()) for e in str_arr.split(",")] got an error: new_str = [eval(e.strip()) for e in str_arr.split(",")] File "<string>", line 1 [{"bus_id" : 128 ^ SyntaxError: unexpected EOF while parsing
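To spell out the json.loads suggestion from the comments (assuming the input really is a comma-separated sequence of JSON objects with double-quoted keys, as the traceback's {"bus_id" : 128 fragment suggests): wrap the whole line in brackets and parse it once, instead of splitting on "}, " by hand.

```python
import json

# Hedged sketch: treat the input line as comma-separated JSON objects,
# wrap it in [...] and let the parser do the splitting. The example
# field names below are taken from the asker's traceback / made up.
def parse_dicts(line):
    return json.loads("[" + line + "]")

line = '{"bus_id": 128, "stop": "A"}, {"bus_id": 256, "stop": "B"}'
dicts = parse_dicts(line)

assert len(dicts) == 2
assert dicts[0]["bus_id"] == 128
assert dicts[1] == {"bus_id": 256, "stop": "B"}
```

This also sidesteps eval entirely, which one comment in the thread rightly warns against, and it handles commas inside the dictionaries — the naive split(",") in the eval attempt is exactly what produced the SyntaxError.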
Citrix webform file uploading, gives access denied For some reason my form won't submit if a user is in a Citrix environment with Internet Explorer as the browser. The code for the form is: <form method="POST" action="http://mywebsite.com/photo/upload" accept-charset="UTF-8" enctype="multipart/form-data"> <input name="photo" type="file"> <input type="submit" value="Save"> </form> The console gives an error, pointing to the line of the form, with the error message: "Access denied". Just that. The website is written in PHP and hosted on a Linux server. Is this something that isn't possible because of some rights users have on Citrix? Because it does work when a user uses Chrome as the browser. Is http://mywebsite.com/photo/upload on a separate domain than the form? If so, the access-denied error message is stemming from the security issue of sending a POST message cross-domain opening the door to cross-domain-scripting attacks. To make it work you may need to add something like: header('Access-Control-Allow-Origin: *'); header('Access-Control-Allow-Methods: GET, POST'); To the beginning of the http://mywebsite.com/photo/upload PHP code. You would want to review that and set the actual origins, as the above code allows ANY domain to send GET and POST messages. It is on the same domain. I already had the following in the .htaccess: <IfModule mod_headers.c> Header add Access-Control-Allow-Origin "*" Header add Access-Control-Allow-Headers "origin, x-requested-with, content-type" Header add Access-Control-Allow-Methods "PUT, GET, POST, DELETE, OPTIONS" </IfModule> So that is not the solution. Can you use a relative path then instead of the full URL? Did not help. All forms are working in this way, except when a file should be uploaded. Citrix may be filtering full paths instead of a relative one. Try changing it to a relative path instead. So, instead of http://mywebsite.com/photo/upload try just upload, or upload.php.
Straight from their website: Attempts to upload a 10 MB or larger file may fail when Cross-Site Scripting (XSS) and SQL Injection checks are enabled. Found here: http://support.citrix.com/proddocs/topic/netscaler-release-notes-93/ns-rn-common-kwn-issues-wrk-arnds-93cl-con.html My answer is a viable answer, you need to make sure that the URL is a relative path and something that is on the same domain. I would even try using a file that includes an extension such as upload.php instead of just upload Yea ok, but it is not working. Tried different paths, tried adding *.php, tried adding .html. It's not the solution. Sorry. :) I'm not sure then, but it may be an Internet Explorer issue if you are using Citrix on Windows Server.
Qsqlite duplicate connection warning in Qt C++ I am creating an application in Qt which uses an SQLite database. I have written a class to open the database connection. The constructor for the class is given below: currencydb::currencydb() { currency = QSqlDatabase::addDatabase("QSQLITE"); currency.setDatabaseName("currency.sqlite"); if(!currency.isOpen()) { if (!currency.open()) { qDebug() << "Error: connection with database fail"; } else { qDebug() << "Database currency: connection ok"; } } } Since I use this constructor, when I create an object of the database class, I get the following warning: QSqlDatabasePrivate::addDatabase: duplicate connection name 'qt_sql_default_connection', old connection removed. Is there a way to check whether the database is already open? That warning doesn't mean that your database is already open but that you already have a connection to the database with a default name. The connection provides access to the database via a database driver (in your case SQLite v3). You create a default connection to the database when you don't pass the connection-name argument when you call the static public method QSqlDatabase::addDatabase(). You can use QSqlDatabase::contains() to check if you already have the default connection. CurrencyDb::CurrencyDb() { currency = openDb("QSQLITE", "currency.sqlite"); } QSqlDatabase CurrencyDb::openDb(const QString &driver, const QString &name) const { QSqlDatabase db; // contains() default argument is initialized to default connection if (QSqlDatabase::contains()) { db = QSqlDatabase::database(QLatin1String(QSqlDatabase::defaultConnection), false); } else { db = QSqlDatabase::addDatabase(driver.toUpper()); } db.setDatabaseName(name); if (!db.isValid()) { // Log error (last error: db.lastError().text()) and throw exception } if (!db.open()) { // Log error (last error: db.lastError().text()) and throw exception } return db; }
Should I really test controllers? I'm trying to get the best code-coverage/development-time trade-off. Currently I use rspec+shoulda to test my models and rspec+capybara to write my acceptance tests. I tried writing a controller test for a simple CRUD but it kinda took too long and I got a confusing test in the end (my bad, probably). What's the best practice on controller testing with rspec? Here is a gist of my test and my controller (one test does not pass yet): https://gist.github.com/991687 https://gist.github.com/991685 The way I view this is that acceptance tests (i.e. Cucumber / Capybara) test the interactions that a user would normally perform on the application. This usually includes things like can a user create a specific resource with valid data and then do they see errors if they enter invalid data. A controller test is more for things that a user shouldn't be able to normally do or extreme edge cases that would be too (cu)cumbersome to test with Cucumber. Usually when people write controller tests, they are effectively testing the same thing. The only reason to test a controller's method in a controller test is for edge cases. Edge cases such as if a user enters an invalid ID to a show page they should be shown a 404 page. This is a very simple kind of thing to test with a controller test, and I would recommend doing that. You want to make sure that when they hit the action that they receive a 404 response, boom, simple. Making sure that your new action responds successfully and doesn't syntax error? Please. That's what your Cucumber features would tell you. If the action suddenly develops a Case of the Whoops, your feature will break and then you will fix that. Another way of thinking about it is: do you want to test that a specific action responds in a certain way (i.e. controller tests), or do you care more that a user can go to that new action and actually go through the whole motions of creating that resource (i.e. acceptance tests)?
it's been a while since you posted it. Has anything changed in your opinion since then? Or is this post still 100% up to date with your views? :) Despite the age of this reply, I still very strongly agree with it. Maybe not. Sure you can write tests for your controller. It might help write better controllers. But if the logic in your controllers is simple, as it should be, then your controller tests are not where the battle is won. Personally I prefer well-tested models and a thorough set of integration (acceptance) tests over controller tests any time. That said, if you have trouble writing tests for controllers, then by all means do test them. At least until you get the hang of it. Then decide whether you want to continue or not. Same goes for every kind of test: try it until you understand it, decide afterwards. I do acceptance with rspec/capybara and unit only for models. What I like about acceptance is that you don't test the implementation. You test the feature: the implementation could change and the feature remain, and the test would still pass. BTW I find cucumber specs really painful to maintain, I prefer pure Ruby. Writing controller tests gives your application permission to lie to you. Some reasons: controller tests are not executed in the environment the application runs in, i.e. they are not at the end of a rack middleware stack, so things like users are not available when using devise (as a single, simple example). As Rails moves more to a rack-based setup, more rack middlewares are used, and your environment deviates increasingly from the 'unit' behaviour. You're not testing the behaviour of your application, you're testing the implementation. By mocking and stubbing your way through, you're re-implementing implementation in spec form. One easy way to tell if you're doing this: if you don't change the expected behaviour of the url response, but do change the implementation of the controller (maybe even map to a different controller), do your tests break?
If they do, you're testing implementation not behaviour. You're also setting yourself up to be lied to. When you stub and mock, there are no assurances that the mocks or stubs you've set up do what you think they do, or even that the methods they're pretending to be exist after refactoring occurs. Calling controller methods is impossible via your application's 'public' api. The only way to get to a controller is via the stack, and the route. If you can't break it from a request via a url, is it really broken? I use my tests as an assurance that my application is not going to break when I deploy it. Controller tests add nothing to my confidence that my application is indeed functional, and actually their presence decreases my confidence. One other example: when testing the behaviour of your application, do you care that a particular file template was rendered, or that a certain exception was raised, or instead is the behaviour of your application to return some stuff to the client with a particular status code? Testing controllers (or views) increases the burden of tests that you impose on yourself, and means that the cost of refactoring is higher than it needs to be because of the potential to break tests. I like to have a test on every controller method, at least just to eliminate stupid syntax errors that may cause the page to blow up. Should you test? Yes. There are gems that make testing controllers faster http://blog.carbonfive.com/2010/12/10/speedy-test-iterations-for-rails-3-with-spork-and-guard/ But why?....... A lot of people seem to be moving towards the approach of using Cucumber for integration testing in place of writing controller and routing tests.
common-pile/stackexchange_filtered
Selenium Python trying to click on Radio button error ElementNotVisibleException I have a webpage with some radio buttons. I am trying to click on a radio button, but it is not clicking and it throws an ElementNotVisibleException. The full error is: Traceback (most recent call last): File "C:\Webdriver\ClearCore\TestCases\DataPreviewsPage_TestCase.py", line 48, in test_add_Lademo_CRM_DataPreviews data_previews_page.click_from_file_radio_button_from_options_tab() File "C:\Webdriver\ClearCore\Pages\data_previews.py", line 112, in click_from_file_radio_button_from_options_tab fromfile_radiobutton.click() File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webelement.py", line 69, in click self._execute(Command.CLICK_ELEMENT) File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webelement.py", line 448, in _execute return self._parent.execute(command, params) File "C:\Python27\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 196, in execute self.error_handler.check_response(response) File "C:\Python27\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 181, in check_response raise exception_class(message, screen, stacktrace) ElementNotVisibleException: Message: Cannot click on element My method to click the button is: def click_from_file_radio_button_from_options_tab(self): fromfile_radiobutton = self.driver.find_element(*MainPageLocators.data_previews_fields_from_File_radioButton_from_options_tab_xpath) fromfile_radiobutton.click() return self The XPATH for the locator for the button from MainPageLocators is: data_previews_fields_from_File_radioButton_from_options_tab_xpath = (By.XPATH, '//span[@class="gwt-RadioButton block"]/label[contains(text(), "From file")]/../input') The HTML is: <table class="gwt-DisclosurePanel gwt-DisclosurePanel-open" cellspacing="0" cellpadding="0"> <tbody> <tr> <tr> <td align="left" style="vertical-align: top;"> <div style="padding: 0px; overflow: hidden;" aria-hidden="false"> <div class="content" aria-hidden="false"> <span 
class="gwt-RadioButton block"> <input id="gwt-uid-163" type="radio" name="fields" value="on" tabindex="0" checked=""/> <label for="gwt-uid-163">From file</label> </span> <span class="gwt-RadioButton block"> <span class="gwt-RadioButton GPI5XK1CET GPI5XK1CFT"> <input class="gwt-IntegerBox" type="text" disabled="" size="3"/> <span class="gwt-RadioButton block"> </div> </div> </td> </tr> </tbody> </table> It does not show hidden=True in the HTML, so it shouldn't say the element is not visible. I do not know why I cannot click this button. It is visible on the webpage. How can I click this radio button? Some help appreciated. Thanks, Riaz I don't believe your XPath is correct. I would try something like this. I can't tell from the limited HTML if this will be unique enough or not. fromfile_radiobutton = self.driver.find_element_by_css_selector("input[type='radio'][name='fields']") fromfile_radiobutton.click() Thank you for the suggestion. It is not unique as there are other radio buttons. I have asked the developer to put an ID on the table. I can then build the xpath starting from the table id. Just for fun... try this self.driver.find_element_by_css_selector("table.gwt-DisclosurePanel.gwt-DisclosurePanel-open input[type='radio'][name='fields']"). I added a reference to the TABLE that might be unique enough to find the radio.
common-pile/stackexchange_filtered
Selenium WebDriver Java - Trying to log out of Facebook but presented with a pop up when clicking on Log out link I am doing some testing on Facebook using Selenium and Java. I am trying to log out of Facebook, which I am able to do by clicking on the Log Out link; however I am presented with a pop up which I am unable to bypass. I wrote the following code but I get the 'cannot find element' error. public static void logout() { WebElement listitem = driver.findElement(By.id("userNavigationLabel")); listitem.click(); driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); driver.findElement(By.partialLinkText("Log Out")).click(); driver.findElement(By.className(" hidden_elem" )).findElement(By.linkText("Log out")).click(); } Which findElement() method causes the errors? Would you please post the stacktrace? And secondly, what would you like to achieve? You are already clicking the logout button as far as I understood. Or do you need something else? Scraping Facebook is against the ToS 3.2 and you are liable to be questioned and may even land up in Facebook Jail. Use the Facebook API instead. Hi, thanks for the quick response. I can click on the log out button; however when I click on the log out link I am presented with a pop up window shown in the figure above. I wrote the line of code below "driver.findElement(By.className(" hidden_elem" )).findElement(By.linkText("Log out")).click();". I am however presented with this error 'no such element: Unable to locate element: {"method":"link text","selector":"Log Out"}' Thank you @Aneesa could you please post the screenshot with UI Inspector, maybe I can give a solution! I fixed the error, I was using the wrong class element. Should have been driver.findElement(By.className("_4t2a" )).findElement(By.linkText("Log out")).click();
common-pile/stackexchange_filtered
Regular Expression - How to find the <%@ %> line in the file? I found there is a bug in this highlight editor: http://cshe.ds4a.com/ The following ASP.Net code can't be highlighted correctly <%@ Page Title="<%$ Resources: XXX %>" Language="C#" ContentType="text/html" ResponseEncoding="utf-8" %> The problem is about the regular expression; how can I find this whole line by regular expression? I am using the RegExp from ActionScript3 The main challenges are: 1. The <%@ %> instruction may contain another <%$ %> instruction in its attribute, just like the one above 2. The <%@ %> instruction may have a line break in it, just like the following. <%@ Page Title="<%$ Resources: XXX %>" Language="C#" ContentType="text/html" ResponseEncoding="utf-8" %> 3. The <%@ %> instruction may be followed by another <%@ %> without any space / line-break <%@ Page Title="<%$ Resources: XXX %>" Language="C#" ContentType="text/html" ResponseEncoding="utf-8" %><%@ Import Namespace="System" %> Thank you What language are you using? RegEx flavors behave differently and have different features, so knowing this will help. Please edit and tag the question with the language you are using. This is a classic example of where RegExs fall down - they're not designed to handle nested expressions like this. Really you'd need a proper ASP.Net parser to do this properly. See also http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags Can you try and post the original regex? Otherwise this is just guesswork. Try this: /<%@[^%"']++(?:(?:%(?!>)|"[^"]*+"|'[^']*+')[^%"']++)*+%>/ Anything that's enclosed in double-quotes or single-quotes is treated as generic string content, so a %> in an attribute value won't prematurely close the tag for matching purposes. I'm not sure all these escapes are necessary, but I kept them for good measure. This found your line in notepad++ find <EMAIL_ADDRESS> EDIT For multiple lines, set the multiline and dotall flags. 
Those inform that the expression should span several lines, and that the . wildcard should match newline (\n). <EMAIL_ADDRESS> or <EMAIL_ADDRESS> With s and m flags. Thanks, but actually this line can have a line-break in it, just like <%@ Page Title="<%$ Resources: XXX %>" Language="C#" ContentType="text/html" ResponseEncoding="utf-8" %><%@ Import Namespace="System.Data" %>, I don't think your expression will work :) Then you have to set the multiline flag. Depending on your software, usually that's done by adding "n" behind the end delimiter. That is not just about the multiline switch. I am using the RegExp from ActionScript3: the <%@ %> instruction may contain a line break; the instruction attribute may contain another <%$ %>; and <%@ %> may be followed by another <%@ %> without any space / line-break Then you should consider trying a programmatic solution, using more than one regex if necessary. @Codemonkey: The multiline flag is irrelevant; all it does is change the behavior of the ^ and $ anchors, which aren't being used here. Based on the headline I created a little RegEx which also takes care of whitespace at the start or end of the file. However I cannot say whether this fits into your project. <EMAIL_ADDRESS> I tested this with the PHP function preg_match_all() Update: Use this pattern for a match across multiple lines. Your RegEx library has to support the parameter "s" (which allows newlines to be matched as characters) though <EMAIL_ADDRESS>
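Since the question targets ActionScript3 and the answers tested their patterns in Notepad++ and PHP, one quick way to sanity-check the quoted-string idea is to port it; here is a sketch in Python's re (an adaptation, not the answerers' exact patterns — Python's re lacks the possessive quantifiers used above). The negated character class already matches newlines, which covers the line-break case without extra flags:

```python
import re

# Match a whole <%@ ... %> directive while treating quoted attribute
# values as opaque, so a nested <%$ ... %> inside Title="..." cannot
# close the match early.
DIRECTIVE = re.compile(
    r'<%@'          # directive opener
    r'(?:'
    r'"[^"]*"'      # double-quoted string (may hide %> or <%$ %>)
    r"|'[^']*'"     # single-quoted string
    r'|%(?!>)'      # a % that does not close the directive
    r'|[^%"\']'     # anything else, including newlines
    r')*?'
    r'%>'           # directive closer
)

sample = ('<%@ Page Title="<%$ Resources: XXX %>" Language="C#"\n'
          'ContentType="text/html" ResponseEncoding="utf-8" %>'
          '<%@ Import Namespace="System" %>')

matches = DIRECTIVE.findall(sample)
```

This handles all three challenges from the question on the sample above: the nested <%$ %> stays inside the first match, the line break is consumed by the character class, and the back-to-back directives come out as two separate matches.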
common-pile/stackexchange_filtered
Regression function in terms of a set of basis functions A regression function in terms of a set of basis functions is given as $$f(x) = \sum_{m=1}^M \beta_m h_m(x) + \beta_0.$$ To estimate $\beta$ and $\beta_0$ we minimize the following expression. $$H(\beta, \beta_0) = \sum_{i=1}^N V(y_i - f(x_i)) + \frac \lambda 2 \sum _{m=1}^M \beta_m^2$$ (The constant is not penalized.) Find $\hat \beta, \hat \beta_0$ and $\hat f$. I know that if $\beta_0$ were penalized then the solution to this problem would be (they set $\beta_0 = 0$) $$\hat f = \sum_{i=1}^N \hat \alpha_i K(x, x_i)$$ where $\alpha = (HH^T + \lambda I)^{-1}y$ and $K(x,y) = \sum_{m=1}^M h_m(x)h_m(y)$. But I do not understand what changes if the constant is not penalized. Would you mind sharing the source of the equations above (if there is one)? p. 436. https://web.stanford.edu/~hastie/Papers/ESLII.pdf
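A sketch of what changes when the constant is left unpenalized, under the extra assumption of squared-error loss $V(r) = \tfrac{1}{2}r^2$ (the question leaves $V$ generic, so this is only the cleanest special case). The intercept now has its own first-order condition instead of being forced to zero:

$$\frac{\partial H}{\partial \beta_0} = -\sum_{i=1}^N \Big(y_i - \beta_0 - \sum_{m=1}^M \beta_m h_m(x_i)\Big) = 0 \;\Longrightarrow\; \hat\beta_0 = \bar y - \bar h^T \hat\beta,$$

where $\bar y$ and $\bar h$ are the sample means of the responses and of the basis-function vectors $h(x_i)$. Substituting this back, the penalized coefficients solve the same ridge problem as before, but with $y$ and the rows of $H$ replaced by their centered versions $y_c = y - \bar y \mathbf{1}$ and $H_c$ (rows $h(x_i) - \bar h$). The kernel representation then keeps its form with the mean as the leading term:

$$\hat f(x) = \bar y + \sum_{i=1}^N \hat\alpha_i K_c(x, x_i), \qquad \hat\alpha = (H_c H_c^T + \lambda I)^{-1} y_c, \qquad K_c(x,y) = \sum_{m=1}^M \tilde h_m(x)\,\tilde h_m(y),$$

with $\tilde h_m = h_m - \bar h_m$ the centered basis functions. So relative to the solution quoted in the question, the changes are: center $y$, center the basis functions inside the kernel, and add $\bar y$ back. The centering step is the part worth verifying against ESL p. 436.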
common-pile/stackexchange_filtered
Facing null pointer exception while merging documents using iText 5.5.3 I'm passing 2 documents in the form of streams and trying to merge them into a single pdf. In the first iteration of the loop everything works fine, but miraculously during the second iteration it throws a null pointer exception. Both the streams passed are actually the same, being used as two separate streams. Code : public synchronized void doMerge(InputStream[] streams,OutputStream out) throws Exception { System.out.println("Starting To Merge Files..."); System.out.println("Total Number Of Files To Be Merged..." + streams.length + "\n"); try { int fileIndex = 0; Document document = null; PdfCopy writer = null; PdfReader reader = null; for (fileIndex = 0; fileIndex < streams.length; fileIndex++) { System.out.println("Loop No : "+ (fileIndex+1)); reader = new PdfReader(streams[fileIndex]); System.out.println("Reader "+ (fileIndex+1) +" created successfully"); if (fileIndex == 0) { document = new Document(); System.out.println("Document created successfully"); writer = new PdfSmartCopy(document, out); System.out.println("Reader created successfully"); document.open(); } writer.addDocument(reader); System.out.println("Document "+ (fileIndex+1)+" merged" ); reader.close(); System.out.println("Reader "+ (fileIndex+1)+" closed" ); } document.close(); System.out.println("Document successfully closed"); System.out.println("File has been merged and written to-" + out); } catch (Exception e) { System.out.println("in exception block"); System.out.println(e.toString()); } } This is the output I get when I send 2 streams in the input: Loop No : 1 Reader 1 created successfully Document created successfully Reader created successfully Document 1 merged Reader 1 closed Loop No : 2 in exception block java.lang.NullPointerException Where (in which line) does the exception occur exactly and what is the complete stack trace? 
I assume that the exception is being thrown when the reader is trying to read the pdf at index 1 (i.e. the 2nd stream), as the code is printing the value "Loop : 2". Here is the stack trace I see : com.itextpdf.text.exceptions.InvalidPdfException: PDF header signature not found. What happens when you upgrade to iText 5.5.6? I know that we had a problem with NullPointerExceptions when merging forms and documents with a structure tree. This was fixed. Also: aren't you closing your reader too early? And how come you create a new Document inside the loop over the different documents? Your code looks awkward. Where did you find it? Bruno - if you can help me with better code it would be really great. To my utter surprise this code works absolutely fine when we send 3 or more streams. http://stackoverflow.com/users/1622493/bruno-lowagie Will the same code behave differently when I use 5.5.6? I have modified the basic merge pdf code from itext_so-sample.pdf, by creating a document only once, when it encounters the first stream. This has to move to production, so your inputs would be of great help. Both the streams passed are actually the same, being used as two separate streams. - Do you mean the same instance? E.g. for some InputStream is and some OutputStream os you do doMerge(new InputStream[]{is,is}, os)? That would explain your issue: In the first iteration the contents of is are read and in the second there is no content anymore to read. mkl - that is not the case you are thinking of. It's like I have copied the same pdf to two different locations, but the files (in content) are actually the same. The method works perfectly fine when I use streams placed at 3+ locations (with the same body).
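mkl's same-instance scenario would produce exactly the "PDF header signature not found" symptom, because the first reader drains the stream and the second starts at end-of-file. The effect is language-agnostic; here is a Python sketch purely for illustration (the question itself is Java, and the bytes below are a fake stand-in for a PDF):

```python
import io

# Simulate passing the same stream instance twice, as in
# doMerge(new InputStream[]{is, is}, os): the first reader drains the
# stream, so the second starts at end-of-file and finds no PDF header.
shared = io.BytesIO(b"%PDF-1.4 ...rest of the file...")
streams = [shared, shared]            # same instance in both slots

first = streams[0].read()             # gets the whole "file"
second = streams[1].read()            # gets nothing at all

# Two independent streams over the same bytes behave fine, matching
# the asker's "same content, different copies" description.
data = b"%PDF-1.4 ...rest of the file..."
copies = [io.BytesIO(data), io.BytesIO(data)]
reads = [s.read() for s in copies]
```

If the asker really does open two separate files, this particular failure mode is ruled out, which is what makes the 3+ streams observation so puzzling.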
common-pile/stackexchange_filtered
Problems with basic proof in modal logic (event based) I am having trouble deriving the following basic result: $\ast$) For every $\omega \in \Omega, \omega \in P (\omega),$ from the following axioms: A1) $K (\Omega) = \Omega$, A2) $K (A) \cap K (B) = K ( A \cap B)$, where $\Omega$ is a state space, $K: 2^{\Omega} \to 2^{\Omega}$, and $P (\omega):= \bigcap \{ A \subseteq \Omega \ | \ \omega \in K (A) \}$. My attempt was to prove it by contradiction, starting from the definition of $P$, which corresponds to $$ \forall A \subseteq \Omega, \omega \in K (A) \Longrightarrow \omega \in P(\omega),$$ and having that, if $\{ \omega \} \in K (\{ \omega \})$, then $K (\{\omega\} ) = \varnothing$. However I am not getting anywhere. Any feedback is welcome. Thank you for your time. Is this modal logic? I don't see any modal operators anywhere. And what is the $P_i$ that appears in the desired conclusion? Your definitions define only a $P$ without subscript. Should the $P:=\cdots$ be $P(\omega):=\cdots$? A counterexample to the claim is: $$ K(A) = \Omega \text{ for all } A$$ Then $P(\omega_0)=\varnothing$ for all $\omega_0$ because it is the intersection of a class that includes $\varnothing$. At first I read the claim as being $\omega\in K(P(\omega))$. But that is not true either; here's a counterexample to that: $\Omega$ is infinite. $\displaystyle K(A) = \begin{cases} \Omega & \text{if }\Omega\setminus A\text{ is finite} \\ \varnothing & \text{otherwise} \end{cases} $ Then $P(\omega_0)=\varnothing$ for all $\omega_0$ because it is the intersection of a class that includes $\Omega\setminus\{\omega\}$ for every $\omega$. $\omega\in K(P(\omega))$ does hold under the additional assumption that $\Omega$ is finite. Thanks. Of course, you were right about the many typos (I corrected them). Actually, my problem was with the finite case, but I got it. @Kolmin: Um, it seems that I misread the claim. It fails even for finite $\Omega$; see edited answer. Thanks for the interest. 
Actually the claim is not $\omega \in K (P (\omega))$, but $\omega \in P (\omega)$. @Kolmin: Yes, I noticed that afterwards. The first part of the edited answer is a counterexample for $\omega\in P(\omega)$.
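The first counterexample above is small enough to machine-check on a finite state space. An illustrative Python sketch (the infinite-$\Omega$ counterexample of course cannot be enumerated this way):

```python
from itertools import combinations

def subsets(omega):
    """All subsets of a finite state space, as frozensets."""
    elems = list(omega)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

def P(K, omega, w):
    """P(w) = intersection of all A with w in K(A)."""
    acc = frozenset(omega)  # neutral element for intersection
    for A in subsets(omega):
        if w in K(A):
            acc &= A
    return acc

Omega = frozenset({1, 2, 3})

# The answer's first counterexample: K(A) = Omega for every A. Both
# axioms hold (K(Omega) = Omega, and K(A) & K(B) = Omega = K(A & B)),
# yet P(w) is empty for every w, because even A = {} has w in K(A).
def K_const(A):
    return Omega
```

Running the assertions below confirms the answer's reading: A1 and A2 alone do not force $\omega \in P(\omega)$.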
common-pile/stackexchange_filtered
Programmatically create an <ui:include src="..."/> in backing bean In a JSF page I have: <h:form id="form"> <h:panelGroup id="content" layout="block"/> </h:form> In a Java SessionScoped bean I have a method: public void fillContent() { UIComponent content = FacesContext.getCurrentInstance().getViewRoot().findComponent("form:content"); content.getChildren().clear(); content.getChildren().add(/*New <ui:include src="Page.xhtml"/>*/); } What is the Java code to insert the <ui:include src="Page.xhtml"/> as the content's children? Where can I find the list for the mapping of all the JSF Java names? Thank you Unfortunately ui:include is implemented as a tag handler. This means it is evaluated and executed when the component tree is built, and there is no corresponding UIComponent class. To achieve your goal you would have to use the Facelets API, like javax.faces.view.facelets.FaceletContext#includeFacelet, after preserving a reference to the faceletContext, which is accessible during tree construction. This is not a straightforward approach and I would strongly recommend rephrasing your problem and looking for another solution. I don't know any official guide with the tag-component/handler mapping; I am sure some books like "Core JavaServer Faces" will help with this though. You can try to do this in facelets to begin with, something like: <h:form id="form"> <ui:include src="#{content.path}"/> </h:form> I don't think primefaces offers anything extra to solve this. Why do you have to alter your view in a programmatic way? Doing this in facelets to begin with should be much easier. How? Can you explain it to me, please? I used it, and it works "just a bit". I mean it loads the page (all in Ajax, no refreshes) but the page content doesn't work, and when I reload the page it throws a ClassCastException: Cannot Cast primefaces.AjaxBehavior to java.util.List. Don't know how to get it working properly :( Your question does not contain enough detail to answer it properly I'm afraid. 
If all included variants are known up front, the safest bet would be to create a ui:fragment section for each and control its "rendered" attribute through an EL expression.
common-pile/stackexchange_filtered
Enable Virtualization for Windows 10 Pro running inside VirtualBox My ultimate goal is to run Docker for Windows inside a Windows 10 Pro (evaluation). To do that, I: Downloaded a Windows 10 Pro evaluation image from the Microsoft website, Mounted it with VirtualBox, Installed Docker for Windows The installation failed, since it required "virtualization" to be enabled, as described in https://github.com/docker/for-win/issues/74 I have already configured the "hardware virtualization" settings for the VM, as you can see below... But still it is not enabled in the guest Windows OS Any clues on how to enable it? "But still it is not enabled in guest windows OS" - This is because you're attempting to run Docker within a VirtualBox VM. You will have to use Hyper-V or VMware or some other virtualization software if you want to run Docker from within a VM. VirtualBox can't do nested virtualization. Note that your image is showing VirtualBox's settings for the Host, not the guests. It's saying "VirtualBox will use VT-x/AMD-V exposed by the host". It is NOT stating that it will expose those capabilities to the guests. The problem is with VirtualBox. It doesn't support nested virtualization (yet), and Docker for Windows uses Hyper-V. However, if you create a VM running Windows 10 Anniversary edition inside VMware Player, Docker for Windows will work. During the installation it will activate Hyper-V and after a reboot, everything will work. You could do it using Docker for Windows Beta. https://beta.docker.com/ By default it uses Hyper-V instead of VirtualBox for its hypervisor. Not sure.... Docker for Windows needs Hyper-V AND Virtualization enabled... (Virtualization != VirtualBox) @guilhermecgs - "Virtualization" in that context is VT-x. Docker requires Hyper-V and (hardware virtualization), i.e. VT-x VirtualBox does not expose the Intel VT extensions to the virtual machines. Thus, you cannot use these extensions in VirtualBox or in a hypervisor running in a Windows guest in VirtualBox. 
When you activate the VT extensions in the host VirtualBox, that hypervisor uses these extensions to support the virtualization. However, although you have activated the extension, the guest OS running in VirtualBox cannot use these extensions. Nowadays, Docker for Windows uses Hyper-V (the hypervisor provided by Microsoft). Hyper-V supports "nested virtualization", i.e. you can run Hyper-V in a guest OS accessing the Intel VT extensions exposed by the host Hyper-V. If you are not interested in using Hyper-V on the guest and the host at the same time, you can consider VMware Workstation. This hypervisor supports Intel VT emulation. You can run an operating system that uses these extensions in a virtual machine in VMware. This error is due to something related to the AMD processor. Uncheck ACTIVATE VT-x/AMD nested in the docker_windows - Settings - System - Processor tab. Seems counterintuitive to disable nested virtualization in order to use hardware virtualization within a VM.
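A practical way to see the difference the answers describe is to look at the CPU flags the guest actually advertises. On a Linux guest that is the flags line of /proc/cpuinfo (vmx for Intel VT-x, svm for AMD-V); on Windows one would check systeminfo instead. A small parsing sketch with made-up sample lines (the samples are assumptions for illustration, not real machine output):

```python
def hw_virt_flags(cpuinfo_text):
    """Collect hardware-virtualization flags ('vmx' = Intel VT-x,
    'svm' = AMD-V) from /proc/cpuinfo-style text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found |= {"vmx", "svm"} & set(line.split())
    return found

# A host CPU usually lists vmx or svm, while a VirtualBox guest
# (no nested virtualization) shows neither.
host_sample = "flags\t\t: fpu vme de pse tsc msr vmx sse sse2"
guest_sample = "flags\t\t: fpu vme de pse tsc msr sse sse2"
```

On a real Linux machine you would pass `open("/proc/cpuinfo").read()` to the helper; an empty result inside the VM is exactly the "not enabled in guest" symptom from the question.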
common-pile/stackexchange_filtered
Download multiple csv files in a zipped folder in Shiny Can someone please point out how I can make this download zip function work in server.R? When I run this, I get the following error: [1] "/var/folders/00/1dk1r000h01000cxqpysvccm005p87/T//Rtmps3T6Ua" Warning in write.csv(datasetInput()$rock, file = "rock.csv", sep = ",") : attempt to set 'sep' ignored Warning in write.csv(datasetInput()$pressure, file = "pressure.csv", sep = ",") : attempt to set 'sep' ignored Warning in write.csv(datasetInput()$cars, file = "cars.csv", sep = ",") : attempt to set 'sep' ignored [1] "rock.csv" "pressure.csv" "cars.csv" adding: rock.csv (deflated 54%) adding: pressure.csv (deflated 42%) adding: cars.csv (deflated 57%) Error opening file: 2 Error reading: 9 library(shiny) # server.R server <- function(input, output) { datasetInput <- reactive({ return(list(rock=rock, pressure=pressure, cars=cars)) }) output$downloadData <- downloadHandler( filename = 'pdfs.zip', content = function(fname) { tmpdir <- tempdir() setwd(tempdir()) print(tempdir()) fs <- c("rock.csv", "pressure.csv", "cars.csv") write.csv(datasetInput()$rock, file = "rock.csv", sep =",") write.csv(datasetInput()$pressure, file = "pressure.csv", sep =",") write.csv(datasetInput()$cars, file = "cars.csv", sep =",") print (fs) zip(zipfile=fname, files=fs) }, contentType = "application/zip" ) } # ui.R ui <- shinyUI(fluidPage( titlePanel('Downloading Data'), sidebarLayout( sidebarPanel( downloadButton('downloadData', 'Download') ), mainPanel() ) ) ) shinyApp(ui = ui, server = server) Solution: Include 'if(file.exists(paste0(fname, ".zip"))) {file.rename(paste0(fname, ".zip"), fname)}' after zip() call. Please post your solution as answer. The top solution still wasn't working for me. I was working in RStudio Server on a Linux server. The problem was that RStudio couldn't automatically locate the path to the zip executable. I had to manually specify it. In the command line, which zip revealed to me /usr/bin/zip. 
So, I just had to set the R_ZIPCMD environment variable at the top of my code. Sys.setenv(R_ZIPCMD="/usr/bin/zip") Source: The help file for zip() mentions R_ZIPCMD. Solution: Include if(file.exists(paste0(fname, ".zip"))) {file.rename(paste0(fname, ".zip"), fname)} after zip() call.
common-pile/stackexchange_filtered
LibGit2Sharp CheckoutPaths() I did a commit (49916.....) and now I want to check out one file of the commit into the working dir. The file is named NEW.txt. If I type git checkout 49916 NEW.txt into Git Bash, it creates the NEW.txt file with the content in my working dir. But my LibGit2Sharp command does not want to work. What am I doing wrong? var repo = new Repository(repopath); var checkoutPaths = new[] { "NEW.txt"}; repo.CheckoutPaths("49916", checkoutPaths); I read every article I could find about the CheckoutPaths function, but I can't get it working. I got the function from the LibGit2Sharp checkout test file. repo.CheckoutPaths(checkoutFrom, new[] { path }); Besides @jamill's question, what is the state of your workdir before the call to CheckoutPaths()? Does the file exist? I tested both, with the file existing and without. Now both cases work. :) What happens when you run that code? Does it run to completion but there are no changes in the working directory? What happens if you try to checkout with the CheckoutModifiers.Force option? CheckoutOptions options = new CheckoutOptions { CheckoutModifiers = CheckoutModifiers.Force }; repo.CheckoutPaths("49916", checkoutPaths, options); It ran to completion but without changes in the working dir. Your answer solves my problem. I was missing the options. Thank you so much!!! I had a similar issue. The code ran to completion, but there were no changes in the working directory. The reason was my checkoutPaths collection. I had passed a file path relative to the app directory instead of a path relative to the repository itself. E.g. Wrong path: Repositories/MyRepo/MyFile.txt Correct path: MyFile.txt
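The repo-relative path pitfall in the last comment can be sketched as a plain path computation (illustrative Python only; the thread's actual API is LibGit2Sharp in C#, and the layout below is the hypothetical one from the comment):

```python
import os.path

# The application sees the file at "Repositories/MyRepo/MyFile.txt",
# but the repository root is "Repositories/MyRepo", so the path
# CheckoutPaths() needs is just "MyFile.txt" -- relative to the
# repository, not to the app's working directory.
app_relative = "Repositories/MyRepo/MyFile.txt"
repo_root = "Repositories/MyRepo"

repo_relative = os.path.relpath(app_relative, repo_root)
```

Computing the repo-relative form up front (rather than reusing whatever path the app already holds) avoids the silent "ran to completion, nothing changed" behavior described above.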
common-pile/stackexchange_filtered
Apache conf on Ubuntu causing url to repeat itself instead of redirect Ubuntu 18.04 Apache2 Certbot I'm trying to get cerbot and non-www to www redirects set up on this site and I am copying the conf file from another one of my sites that is working just fine, but for some reason 443 is forbidden to the user on this new site and non-www.domain.url redirects to domain.url/www.domain.urlwww.domain.urlwww.domain.url etc. main.conf <VirtualHost *:80> ServerName domain.url Redirect permanent / https://www.domain.url/ </VirtualHost> <VirtualHost *:80> ServerName www.domain.url ServerAdmin<EMAIL_ADDRESS> DocumentRoot /var/www/html ErrorLog ${APACHE_LOG_DIR}/error.log RewriteEngine on RewriteCond %{SERVER_NAME} =www.domain.url RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent] </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet main-le-ssl.conf <IfModule mod_ssl.c> <VirtualHost *:443> ServerName www.domain.url ServerAdmin<EMAIL_ADDRESS> ServerAlias domain.url DocumentRoot /var/www/html RewriteEngine On RewriteCond %{HTTP_HOST} !^www\. [NC] RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L] ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /static /home/user/project/static <Directory /home/user/project/static> Require all granted </Directory> <Directory /home/user/project/media> Require all granted </Directory> <Directory /home/user/project> <Files wsgi.py> Require all granted </Files> </Directory> WSGIScriptAlias / /home/user/project/project/wsgi.py WSGIDaemonProcess theprocess python-path=/home/user/prject python-home=/home/user/project/wow WSGIProcessGroup theprocess Include /etc/letsencrypt/options-ssl-apache.conf SSLCertificateFile /etc/letsencrypt/live/www.domain.url/fullchain.pem SSLCertificateKeyFile /etc/letsencrypt/live/www.domain.url/privkey.pem </VirtualHost> </IfModule> ssh, http, https are all enabled with ufw. 
I might have a permissions issue with users being able to view my site, but it was working fine until certbot was installed. Now everything (except non-https non-www, which causes the repeating domain issue) redirects to https://www.domain.url and I get a forbidden message. When I installed certbot I missed the non-www domain. When I went back and renewed to get both www and non-www, the redirect setup failed but it said I had my certs. Is this causing the issue? I thought I could just build the redirect myself in the config file... This post ended up being the answer: https://serverfault.com/questions/957788/forbidden-after-enabling-ssl I had a case error in my WSGIScriptAlias file path.
common-pile/stackexchange_filtered
Does the "discussion board" application send an invitation to every team member? I recently had an unfortunate experience when adding Yammer to our company Intranet in SharePoint Online. After creating a new discussion, an invitation was sent to everyone in the company. I want to prevent this at any cost. Now I again need a discussion platform, but this time with the old school application "discussion board" from the app list. Does anyone know if this application will send an invitation mail to every employee again? The short answer to your question is No. If you are creating a classic Discussion Board app in SharePoint, it will not send any invitation/email to site users. Thanks, sadly I just saw they removed the discussion board from the classic store. Last week it still existed.
common-pile/stackexchange_filtered
Not able to view existing H2 database using H2 console I am new to the H2 database. I have installed the H2 console. I am trying to connect to an existing H2 schema, which was created via my java application using the URL below. <property name="connection.url">jdbc:h2:./mydb;INIT=create schema IF NOT EXISTS datamagic;AUTO_SERVER=true</property> <property name="connection.username">sa</property> <property name="hibernate.hbm2ddl.auto">update</property> <property name="connection.password">password@1</property> <property name="hibernate.default_schema">mydb</property> The above configuration creates a mydb.mv.db file on my disk. Now I have the dilemma below. I would like to access the above database using the H2 console. I have tried a couple of options, but every time it creates a new database, e.g. mydb.h2.db. I used the URL below jdbc:h2:file:<mydir_pathof_mv_db_file>\mydb I am sure that something silly is missing. Can anyone help me resolve this issue? It is highly appreciated. Thanks. If you want to connect to an H2 database started by an application, you have to start the server. The official documentation offers the steps to start a TCP server and connect to the TCP server. For a web application, you can also configure the H2 Console Servlet. The servlet allows you to access the database via a web browser. The detailed steps are under the section Using the H2 Console Servlet.
common-pile/stackexchange_filtered
I am getting a "'tuple' object has no attribute 'set_width" error and I don't know what to do First post on Stack Overflow, apologies if this is not written correctly. I am pretty new to Python and very new to Manim. I am a big fan of 3b1b, and was using one of his files from GitHub as a baseline for my text animation (For Reference: https://github.com/3b1b/videos/blob/3dc2bc24006a6d032eb3e7502914492d6f24bcdc/_2016/eola/chapter0.py) I am just playing around and trying to get a text animation to run, but I am getting an error and I cannot seem to find a solution for it. I don't know if I am missing a library or if I am just simply not understanding what the function is asking me to fix. Below is my current code. (I am just trying to make my friends laugh, so ignore the quote.) from manimlib.animation.creation import Write from manimlib.animation.fading import FadeIn from manimlib.constants import * from manimlib.scene.scene import Scene from manimlib.utils.rate_functions import linear from manim import RED, GREEN, BLUE, PURPLE from manim import * from manimlib import * from manimlib.mobject.svg.tex_mobject import * from manimpango import * from manimlib.mobject.svg.tex_mobject import * class OpeningQuote(Scene): def construct(self): words= ( """ ``Every PooPoo time is PP time but not every PP time is Poo Poo time :(`` """, ) words.set_width(FRAME_WIDTH-1) words.to_edge(UP) for mob in words.submobjects[48:49+13]: mob.set_color(PURPLE) author = ("--Taylor Woodard\\'e") author.set_color(GREEN) author.next_to(words, DOWN) self.play(FadeIn(words)) self.wait(2) self.play(Write(author, run_time = 4)) self.wait() Again, pretty new to python, so I apologize if my code looks chaotic. I tried adding new/different libraries from manim, I tried filling in the 'set_width' argument with something that was predefined (basically anything that would be autofilled), and I just don't know what I am missing. The error I am getting is: "AttributeError: 'tuple' object has no attribute 
'set_width'" Again, apologies if this post is rough. I will do better next time. Anything will help. Please feel free to tear apart my code as well. I am willing to learn Thanks! Look at how you initialize words. Look at how the code in your link initializes words. Notice anything different? The code in the link has words = OldTexText(..., but your code just has words = (... What's Happening Interpreting FRAME_WIDTH to be an integer, a simplified example of the same issue would be: FRAME_WIDTH = 5 words= ( """ ``Every PooPoo time is PP time but not every PP time is Poo Poo time :(`` """, ) words.set_width(FRAME_WIDTH-1) Simplifying further, because the content of the str doesn't matter to reproduce the error: words = ("foo",) words.set_width(2) I can tell the words.set_width() line is the issue because with the above saved as the file, bar.py and running python bar.py, I see: ❯ python bar.py Traceback (most recent call last): File "bar.py", line 2, in <module> words.set_width(2) AttributeError: 'tuple' object has no attribute 'set_width' Which says on line 2, set_width() is being called on a tuple-typed object, and tuples don't have a method set_width(). Either a different method should be called on the tuple, or set_width() should be called on a different type of object. You'll notice in your linked example: def construct(self): words = OldTexText( """ ``There is hardly any theory which is more elementary than linear algebra, in spite of the fact that generations of professors and textbook writers have obscured its simplicity by preposterous calculations with matrices.'' """, organize_left_to_right = False ) words.set_width(2*(FRAME_X_RADIUS-1)) words is set to an ToldTexText-typed object and set_width() called on it, which isn't what you're doing. What You Should Do To be frank, it looks like your goal is just a bit beyond your current skill level with Python & programming. 
For this particular issue, skills/knowledge in the following would have helped:
how to debug simple syntax errors and exceptions
types/classes and methods, i.e. the basics of Object-Oriented Programming
Python's built-in types like tuple
Notable keywords in bold should help when searching for relevant tutorials & other material.
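To make the answer's point concrete, here is a minimal, self-contained sketch of the same mistake and its repair. `FakeTexText` is a made-up stand-in class for illustration only; the real fix is to construct an `OldTexText` (or the equivalent text mobject in your Manim version) before calling `set_width`:

```python
# The mistake: a tuple of strings carries no Manim methods.
words = ("``Every PooPoo time is PP time...``",)
try:
    words.set_width(2)
except AttributeError as e:
    print(e)  # 'tuple' object has no attribute 'set_width'

# The fix: build the proper object type first, then call its methods.
# FakeTexText is a hypothetical stand-in for manim's OldTexText / TexText.
class FakeTexText:
    def __init__(self, text):
        self.text = text
        self.width = None

    def set_width(self, width):
        self.width = width
        return self  # Manim mobject setters return self for chaining

words = FakeTexText("``Every PooPoo time is PP time...``")
words.set_width(2)
print(words.width)  # 2
```

The parentheses with a trailing comma in the original code are what silently created the tuple; wrapping the string in the right class restores the method.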
common-pile/stackexchange_filtered
Get the max of a nested dictionary I have the following data structure: {'923874rksd9293': {'animated': (1, 5.0),'romance':(1, 4.0),'superhero':(1,3.0)}} and I'd like to get the category with the maximum of the floating point value, here animated with 5.0. Is there a pythonic way to do this? There may be more than one id and it would be put into an array. Thanks so the return value would be: [{'id':'923874rksd9293','genre':'animated'}] What have you tried so far? Rik as in "Currents of Space"? Haven't tried anything, I know how to do it in a loop but was hoping I could use a lambda in a list comprehension or something No rik's my actual name LOL In case you're an Asimov fan: https://en.m.wikipedia.org/wiki/The_Currents_of_Space Could you please add a plain python tag? Possible duplicate of Nested Dict Python getting Keys and Max of value oh ok. For me it short for hendrik that's why you can use max with a custom key function, to choose the max genre based on the value of the tuple mapped by it. try this: d = {'1111': {'animated': (1, 5.0),'romance':(1, 4.0),'superhero':(1,3.0)}, '2222': {'genreone': (1, 3.5),'genretwo':(1, 4.8),'superhero':(1,4.0)}} result = [{"id":key, "genre":max(inner.keys(), key=lambda k:inner[k][1])} for key,inner in d.items()] print(result) Output: [{'id': '1111', 'genre': 'animated'}, {'id': '2222', 'genre': 'genretwo'}] you can try code below: data = {'923874rksd9293': {'animated': (1, 5.0),'romance':(1, 4.0),'superhero':(1,3.0)}} for id, val in data.items(): maxType = max(val, key=lambda x:max(val[x])) print(f"id:{id}, genre:{maxType}") The output is id:923874rksd9293, genre:animated
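One caveat worth adding (my observation, not from either answerer): `max(val[x])` in the second snippet takes the maximum over the whole `(count, rating)` tuple, which only happens to pick the rating here because the rating exceeds the count. Indexing the tuple explicitly is safer:

```python
# Same shape of data, but with counts larger than the ratings to show
# why indexing the tuple matters.
data = {'923874rksd9293': {'animated': (7, 5.0),
                           'romance': (9, 4.0),
                           'superhero': (1, 3.0)}}

result = [
    # [1] selects the rating component of the (count, rating) tuple.
    {"id": movie_id, "genre": max(genres, key=lambda g: genres[g][1])}
    for movie_id, genres in data.items()
]
print(result)  # [{'id': '923874rksd9293', 'genre': 'animated'}]
```

With `max(val[x])` instead of `val[x][1]`, the romance count of 9 would beat the animated rating of 5.0 and the wrong genre would be returned.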
How to keep the prompt after running some commands? I run a script using Launchy which opens an HTML document in the internet browser. The browser becomes the active window and I lose the prompt in irb. How can I keep the prompt? What command does Launchy use to open the html document in the browser? Launchy.open ('URL') Use Kernel#spawn : http://ruby-doc.org/core-2.2.2/Kernel.html#method-i-spawn spawn('./launch_url.rb') Where the file ./launch_url.rb contains (and you have executable rights on it) : #!/usr/bin/env ruby require 'launchy' Launchy.open ('URL')
Trying to identify possible causes of flakey postgis error on postgres We have a postgres trigger that is set to run when eastings and northings values are inserted or updated on a row. The trigger function uses PostGIS to transform the eastings and northings, which are floats, to a point value which is stored and cached so that we can perform geospatial queries. On our CI tests we experience flakey test failures with respect to that trigger; we have not been able to recreate the issues locally. Sometimes when CI runs it will fail and sometimes it won't, and it won't be the same test that causes the failure. I am looking to see if anyone has experienced this problem before or has ideas for investigation. Our local dev postgres is SELECT version(); 'PostgreSQL 15.8 (Debian 15.8-1.pgdg120+1) on aarch64-unknown-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit' SELECT PostGIS_full_version(); 'POSTGIS="3.4.2 c19ce56" [EXTENSION] PGSQL="150" GEOS="3.11.1-CAPI-1.17.1" PROJ="9.1.1 NETWORK_ENABLED=OFF URL_ENDPOINT=https://cdn.proj.org USER_WRITABLE_DIRECTORY=/var/lib/postgresql/.local/share/proj DATABASE_PATH=/usr/share/proj/proj.db" GDAL="GDAL 3.6.2, released 2023/01/02" LIBXML="2.9.14" LIBJSON="0.16" LIBPROTOBUF="1.4.1" WAGYU="0.5.0 (Internal)" TOPOLOGY RASTER' CI postgres SELECT version(); 'PostgreSQL 15.8 (Debian 15.8-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit', SELECT PostGIS_full_version(); 'POSTGIS="3.4.3 e365945" [EXTENSION] PGSQL="150" GEOS="3.11.1-CAPI-1.17.1" PROJ="9.1.1 NETWORK_ENABLED=OFF URL_ENDPOINT=https://cdn.proj.org USER_WRITABLE_DIRECTORY=/var/lib/postgresql/.local/share/proj DATABASE_PATH=/usr/share/proj/proj.db" LIBXML="2.9.14" LIBJSON="0.16" LIBPROTOBUF="1.4.1" WAGYU="0.5.0 (Internal)" TOPOLOGY', Trigger function, which has been modified to include some more verbose exception messages. 
DECLARE SRID INT; LOCATION GEOMETRY(POINT, 4326); v_state TEXT; v_msg TEXT; v_detail TEXT; v_hint TEXT; v_context TEXT; BEGIN SELECT project.utm_zone INTO SRID FROM projects_project project WHERE project.id = NEW.project_id; LOCATION := ( CASE WHEN NEW.planned_location_point_x IS NOT NULL AND NEW.planned_location_point_y IS NOT NULL THEN CASE WHEN SRID = 28348 THEN ST_InverseTransformPipeline( ST_POINT( NEW.planned_location_point_x, NEW.planned_location_point_y, SRID), '+proj=pipeline +step +proj=unitconvert +xy_in=deg +xy_out=rad +step +proj=utm +zone=48 +south +ellps=GRS80', 4326 ) ELSE ST_Transform( ST_POINT( NEW.planned_location_point_x, NEW.planned_location_point_y, SRID ), 4326 ) END ELSE NULL END ); NEW.planned_location_point = LOCATION; RETURN NEW; EXCEPTION WHEN OTHERS THEN get stacked diagnostics v_state = returned_sqlstate, v_msg = message_text, v_detail = pg_exception_detail, v_hint = pg_exception_hint, v_context = pg_exception_context; raise log E'Got exception: state : % message: % detail : % hint : % context: %', v_state, v_msg, v_detail, v_hint, v_context; raise log E'Got exception: SQLSTATE: % SQLERRM: %', SQLSTATE, SQLERRM; RAISE EXCEPTION 'SRID: %, X: %, Y: %, SQLERRM: %, SQLSTATE %', SRID, NEW.planned_location_point_x, NEW.planned_location_point_y, SQLERRM, SQLSTATE; END; Error messages when it fails on CI Got exception: state: XX000 message: transform: Unknown error (code 4096) (4096) detail: hint: context: PL/pgSQL function pgtrigger_set_planned_location_point_gps_cd099() line 24 at assignment psycopg2.errors.RaiseException: SRID: SRID: 20349, X:<PHONE_NUMBER>552063, Y: 6416413.62531117, SQLERRM: transform: Unknown error (code 4096) CONTEXT: PL/pgSQL function pgtrigger_set_planned_location_point_gps_cd099() line 162 at RAISE (4096), SQLSTATE XX000 I have modified the trigger function to try to provide more context as seen in the above code but since it is a flakey test that only appears on CI it is really hard to figure out the issue. 
Before adding the exception capture to the trigger, the error message was the one below, which means the error was on assignment to LOCATION. Following the code path it would suggest the error is caused by ST_Transform. psycopg2.errors.InternalError_: transform: Unknown error (code 4096) (4096) CONTEXT: PL/pgSQL function pgtrigger_set_planned_location_point_gps_cd099() line 19 at assignment But running the same function on dev that would be called doesn't raise an error: SELECT ST_Transform( ST_POINT(<PHONE_NUMBER>552063, 6416413.62531117, 20349 ), 4326 ); See this bug report, which demonstrates that a call to st_transform could be (have been, this particular bug is fixed) impacted by a prior malformed proj string. So maybe you want to record the parameters of ALL tests that you run, and if you are running tests in parallel it could be an unrelated test playing with the proj string that makes this one fail. @JGH I did see that bug report as well and tried to recreate it but looks like our version has already been patched. I will try to add more logging but the challenge is that it is hard to isolate (even just running the few tests before the failing test in order doesn't recreate the issue) and the full test suite takes ~40 minutes to run. Hoping to hear from someone else that might have already encountered and solved it.
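Not an answer to the root cause, but while the investigation continues, a bounded retry around the failing statement can keep CI green. Below is a sketch of just the retry idea; the exception class and the flaky call are simulated stand-ins (the real failure surfaces as psycopg2.errors.InternalError_ with SQLSTATE XX000), since a live database isn't needed to show the pattern:

```python
class FakeInternalError(Exception):
    """Stand-in for psycopg2.errors.InternalError_ (SQLSTATE XX000)."""

def make_flaky_insert(failures):
    # Simulates a statement that raises the intermittent
    # "transform: Unknown error (code 4096)" a fixed number of times.
    state = {"left": failures}
    def insert():
        if state["left"] > 0:
            state["left"] -= 1
            raise FakeInternalError("transform: Unknown error (code 4096)")
        return "inserted"
    return insert

def with_retries(fn, attempts=3):
    # Retry only the specific internal error; re-raise on the last attempt.
    for attempt in range(attempts):
        try:
            return fn()
        except FakeInternalError:
            if attempt == attempts - 1:
                raise

print(with_retries(make_flaky_insert(2)))  # inserted
```

In a test suite this would wrap the fixture setup that fires the trigger; it masks the symptom rather than fixing it, so the extra logging added to the trigger should stay in place.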
Clock speed vs. number of cores for parallelized computer simulation My question is similar to some "speed vs. cores"-questions that have already been asked on this website. However, I am interested in some very specific and technical points. So I hope that my question will be eligible for an "answer" instead of being solely opinion-based. At my workplace, we frequently approach certain statistical problems using computer simulation. The software we use is largely meant for single cores, but we run it in parallel using multiple instances of these programs. The simulations are computationally intensive, and one iteration may take up to one hour to complete. To enhance the speed of these calculations, I have been asked to propose a few models that would be best suited. However, I am unsure if, at this point, the computations would benefit from higher clock speed more than they would from more parallelized processes. The computers we currently use are server-scale solutions that contain multiple CPUs at relatively high speed (16 physical cores, 2.9GHz each, and no GPU). So the decision boils down to two options: investing in similar machines with slightly higher clock speed (e.g., 3.2GHz) and the same number of cores (say 16), or alternatively... stepping down on the clock speed (e.g., 2.6GHz) and going for a larger number of cores (say 20 or 24). I am unsure if increased clock speed would pay off even in computationally intensive applications because I assume that the performance does not increase linearly with clock speed. Strictly speaking, I could simply approach the problem like this: 3.2GHz * 16 cores = 51.2GHz, or alternatively... 2.5GHz * 24 cores = 60.0GHz However, I am pretty sure this calculation is flawed in a number of ways. But in what way exactly? Money is not really an issue in this particular case, and computation using GPUs must be ruled out, unfortunately. 
The machines will run Windows Server 2012 R2 and will be used exclusively for this kind of calculation. All programs involved are optimized for 64bit, but occasionally 32bit software programs may be involved as well. Memory and HDD should not be a huge factor to consider. Cheers! As in the dupe: it depends on the processes and how they utilize the system. So (IMO) there is no single answer to give you, or a simple calculation you can do to determine which is better. Benchmark and profile your code on multiple platforms/configurations and then buy whichever setup it runs best on. If you don't have the time to perform profiling, then get a setup with the most cores, running at the fastest speed that you can afford. If the software only uses a single core then you want that single core to be as fast as possible so that particular instance of the software completes its task as quickly as possible. Why are you multiplying the number of cores and the clock frequency? It doesn't work like that. You are indeed correct. The performance increase will not be linear but you can then identify the next bottleneck. I personally would just use computational software that was NOT single-threaded. @Ramhound: I realize it doesn't work like that, but I am interested in how it works, that is, what alternating factors I have to take into account. Indeed the programs are single core, but we still run them in parallel. This is done using a "main" process that starts multiple "worker" instances of the same program. When the calculation is as CPU intensive as above, the performance actually multiplies with the number of cores on a given machine. However, this does not allow a conclusion as to what brings the larger gain in the scenario described above: clock speed or physical cores. There isn't enough information to conclude what would be best. 
If you have 5 tasks, and each task takes 1 hour to complete on a single core processor running at 3 GHz then you could complete 5 tasks in one hour if that processor had 5 cores running at 3 GHz. If you want the best performance then maximize the number of cores AND the frequency; that performance WILL be linear. @SimonG, All answers in this topic (and linked topics) are largely incorrect. You need to buy computers with the biggest on-chip cache available, see http://superuser.com/questions/543702/why-are-newer-generations-of-processors-faster-at-the-same-clock-speed/1142639#1142639 The calculations are not precise. They are from a mathematical point of view, but in computing you actually need to multiply by 0.9 to 0.75 to get the real "power" And more cores/processors mean a lower number. This happens because of the compute power needed to parallelize the tasks and then to build the final result from the different threads. This remark already helps my purpose. However, I think in the given scenario, the effort it takes to parallelize the process is rather small because the computations take quite some time, so that the communication between the main and the worker processes does not happen too frequently.
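The naive cores x GHz product assumes perfect scaling. A common first correction is Amdahl's law; the sketch below compares the two candidate machines under an assumed (hypothetical) parallel fraction — the 0.95 is illustrative, not a measured property of the workload:

```python
def effective_throughput(clock_ghz, cores, parallel_fraction):
    # Amdahl's law: speedup over a single core at the same clock,
    # given the fraction of the work that parallelizes.
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    return clock_ghz * speedup

for p in (1.0, 0.95):
    a = effective_throughput(3.2, 16, p)
    b = effective_throughput(2.5, 24, p)
    print(f"p={p}: 16 x 3.2GHz -> {a:.1f}, 24 x 2.5GHz -> {b:.1f}")
```

At p=1.0 the naive multiplication holds and the 24-core box wins (51.2 vs 60.0), but already at p=0.95 the higher-clocked 16-core box pulls ahead — which is why measuring the actual serial fraction of the simulation matters more than either spec sheet.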
Modify a line in a file after finding it with wildcard re.match I am trying to rewrite a super simple html page dynamically after using socket to retrieve a value. Essentially this is pulling a track name from my squeezebox and trying to write it to html. The first part of the line is always the same, but the track title needs to change. I'm sure it's super simple but I've spent hours trawling different sites and looking at different methods, so time to ask for help. HTML has a line in it as follows, among more: <p class="GeneratedText">Someone Like You</p> I am then trying to run the following to find that line. It's always the same line number but I tried with read lines, and I read it reads everything in anyway: import socket import urllib import fileinput import re # connect to my squeezebox - retrieve the track name and clean up ready for insertion clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) clientsocket.connect(("<IP_ADDRESS>", 9090)) clientsocket.send("00:04:00:00:00:00 title ?\n") str = clientsocket.recv(100) title=str.strip( '00%3A00%3A00%3A00%3A00%3A00 title' ); result = urllib.unquote(title) #try and overwrite the line in we.html so it looks like <p class="GeneratedText">Now playing track</p> with open('we.html', 'r+') as f: for line in f: if re.match("(.*)p class(.*)",line): data=line print data f.write( line.replace(data,'<p class="GeneratedText">'title'</p>')) I think this might be what you're going for. You'd still be rewriting the entire file, though. I don't really understand your use case. Wouldn't it be better for user experience to bold the currently playing song title or mark it with a * than replace the title? Anyway, a program edits any kind of text file by writing out a new copy of the entire file and then using mv to replace the old file. 
Also, instead of parsing html with regex, which is ugly and problematic and can lead to insanity, in python you can use beautiful soup, also known as bs4, an HTML parser and utility library for python. A quick solution might be to use the fileinput module you tried importing. Thus your code would look something like this: for line in fileinput.input('we.html', inplace=True): if re.match("(.*)p class(.*)",line): print line.replace(line, '<p class="GeneratedText">' + title + '</p>') else: print line Where you'd have to replace your with block with the one above. However, if you would like a cleaner solution, you should check out Beautiful Soup, which is a python library for manipulating structured documents. You'll still need to install the module through pip, and import BeautifulSoup, but this code should get you running afterwards: with open('we.html', 'r') as html: soup = BeautifulSoup(html) for paragraph in soup.find_all('p', class_='GeneratedText'): paragraph.string = title with open('we.html', 'w') as html: html.write(soup.prettify('utf-8')) If you have a single occurrence of this in the whole page, you can simply do: new_html = re.sub('(?<=<p class="GeneratedText">)(.*)(?=<\/p>)', "WhateverYouWantGoesHere", html_file_as_string) It will replace everything between the 2 tags by whatever you want. with open('output.html', 'w') as o: with open('we.html', 'r') as f: for line in f: o.write(re.sub("(?:p\sclass=\"GeneratedText\">)(\w+\s?)+(:?</p>)", newTitle, line))
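Pulling the regex pieces together, here is a self-contained version of the replacement that can be tested without the socket or the file. Note the original snippet's '<p class="GeneratedText">'title'</p>' also needs + concatenation (or a format string) around title:

```python
import re

html = '<p class="GeneratedText">Someone Like You</p>'
title = "Now Playing"  # would come from the squeezebox socket in the real script

# Replace only the text between the tags, keeping the tags themselves.
new_html = re.sub(
    r'(<p class="GeneratedText">).*?(</p>)',
    lambda m: m.group(1) + title + m.group(2),
    html,
)
print(new_html)  # <p class="GeneratedText">Now Playing</p>
```

Using a function for the replacement means the track title is inserted literally, so a title containing backslashes or group references like \1 can't corrupt the substitution.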
Secure Connection Failed : The page you are trying to view cannot be shown because the authenticity of the received data could not be verified Secure Connection Failed : The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. The Firefox browser responds like this; what should we do? Firefox version: 89.0 (64-bit). This link has the issue: click here Can you include more details like if that is the error when you visited a site that you are responsible for? Secure Connection Failed: An error occurred during a connection to www.example.com. PR_CONNECT_RESET_ERROR ->The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. -> Please contact the website owners to inform them of this problem. I did not even change anything; it was working fine before.
Was my use of downvote and upvote warranted? My Understanding of Downvotes and Upvotes It was my understanding that using downvotes and upvotes was the right way to: increase visibility of valid and high-quality answers, decrease visibility of invalid or low-quality answers. I also see downvotes as a temporary measure to encourage improvements. This attempts to decrease their visibility until then and forces their author to rework them. Ultimately, it "buries" them if the crowd so decides, and: reduces noise (by pushing them down the thread until they get improved), or even suppresses noise (by encouraging their original author to delete them if they know they won't rework them). In the extreme case where that's not appropriate or not efficient, and where it would be warranted, flagging for moderator attention would be the other approach (extremely bad answers, spam, offensive content and other things that require immediate and radical action). So, I occasionally add a -1 to a "useless" answer and notify its owner of my reasoning and request an improvement, sometimes even mentioning that I'd gladly cancel or invert this downvote afterwards once they do improve their answer. By useless, I mean an answer of poor quality, or, most likely, not adding any value or any new information beyond what is already listed by prior answers. The Current Case Study I just did this on this SO question, to what seemed, at least to me, good effect. Hopefully this is something you can see if you have high privileges or mod rights. A good answer was posted by Joachim Pileborg, and a few seconds later a nearly identical one appeared by Carl Norum. However it didn't provide, at the time, any added value. So I upvoted the good and first one and downvoted the bad one (with an explanation for the downvote in a comment). Of course, this created a shift between the +1-rated answer and the -1-rated answer. As you can imagine, the +1 quickly became a +4. 
I had explained my logic to the one I downvoted, and he kindly removed his answer. This, to me, seemed a very good use of upvotes and downvotes, and the ideal resolution: answers without value get deleted, both downvoter and downvotee get their "downvotes" points back (I think? not even sure) as the answer gets deleted, more votes for top answerer, overall quality improves, yay! And it seems to me like a very good approach, though it's mostly effective if done right at the time when questions are posted and first answers appear. The window of opportunity is very, very small. Afterwards, it is a bit harsh to request a deletion if there's a loss of points involved or an emotional attachment to an answer. Which brings me to another point, and sometimes makes me wonder if voting shouldn't be disallowed - or maybe allowed, but without showing the sum of the votes - for the 10 minutes following the posting. Thus early answers can be fleshed out and improved and it's not a game of "I was first, so the timeshift is in my favor" but of "my answer is actually better, even though I posted it shortly after". Which seems more fitting. Unfortunately... Apparently, using the above approach on that question I listed may have pissed off the person whose answer I downvoted (I hope not, and I cannot know for sure as Carl didn't reply afterwards), but it surely looks like I may have pissed off others who feel like I was an *ss for doing so. So... I hereby ask for confirmation of my behavior... Mostly... Was what I did good or evil? Was I, essentially, really being an *ass? I don't mind looking like one to others, if I was right in the eyes of the sites' maintainers and mods. Also: 
Would that "no total votes" or "no votes at all" "grace period" be a good idea to encourage this, and make it more fair for good questions to be upvoted and accepted, while sometimes the time shift plays in favor of slightly worse answers, just because first votes came upon them first and it biased next voters? See also this meta-answer on P.SE (to a different question though) where I originally mentioned the same approach, which I rephrased above for clarity to get more feedback. Update: So, the general consensus was that I was being an ass or misusing downvotes. Won't do that again and will leave more time before downvoting. Thanks to all for the interesting and constructive discussions. If the second answer had appeared a long time after the first I would agree, but since it was a few seconds, we can reasonably expect that the second answerer was typing it simultaneously. In that case, there's no reason to punish a well intentioned poster just because they took a couple of seconds longer than somebody else. Upvote the first one by all means, but I would say you should just leave the other at 0, unless there is actually something wrong with it — it will still sink to the bottom. As I understand it, the point of downvotes is to flag a bad or incomplete answer, not a late one. A few seconds later? Jeez, tough crowd @Dave: yes, but I did say why I did the downvote, and I always revert them if they then later improve their answers with valuable additional info. However if they don't, then these answers are bad according to the site's rules, as they merely are duplicates. Isn't that so? And you say the other would sink to the bottom, but it's not necessarily true. Sometimes, by sheer luck of refreshes, the second posted answer gets the votes even though it was a few secs/mins later, and that doesn't feel right for the one who was first with the right answer. @LBT: :) tough crowd, tough love. 
In the end, if I look at that question now, what I see is a question with a good answer that's clearly accepted by both the OP and the community. If that hadn't been done, you'd have 2 similar answers. I think the reason why we react badly (me included when that happens to me) is that we're not used to seeing downvotes as a hint to rework an answer. We have too much ego invested in our involvement. @haylem it's still, from the poster's point of view, a punishment. At best you'll end up "training" them to write quick, sloppy answers to be the first, at worst you'll discourage them from writing answers at all. @Dave: I don't know. On the other hand, the current system trained me and others to write quick sloppy answers and then rework them into better ones, because otherwise the first hitter catches the wave with a worse answer. Which doesn't seem good for the overall quality in a thread, and is then hard to overturn. @Dave: And, as you said, that would be a matter of training. There might be a cultural thing at play here as well, but I just perceive the downvote as a "non voiced" criticism (but I prefer it to come with an explanation and often request one). I far prefer that (downvote for non-additive answer) to a downvote because someone just didn't like what I said (which happens way more often). @Dave fair enough, and I appreciate and thank you for your input. You down-vote if the answer is wrong, period, not if it "doesn't add any value". If it doesn't add value, then don't up-vote it, end of story. @HovercraftFullOfEels: maybe you should make that an answer, so that others can weigh in on it. Same for Dave and others. I also find it acceptable to downvote and delete answers that: 1) Duplicate an existing answer. 2) Add nothing new. 3) Were posted long after the original. ("long" here means long enough to where it's clearly not FGITW) Too often I see users posting dupe answers on popular questions just to farm votes. 
I've seen too many questions with 5 or 6 identical answers (sometimes outright plagiarized) all posted months after the question was asked. @Mysticial: good point, but this doesn't apply to the original poster's "current case study". Likely the down-voted answer was being created simultaneously with the accepted answer. @HovercraftFullOfEels Correct. The OP's case here fails at #3. So we're not disagreeing. @HovercraftFullOfEels: Yes, it was. I even mention that in my question here. But still, I don't make it petty and do state clearly to the answerer why I downvote and that I'd revert it. @Mysticial: Yes. Though the system isn't too bad for that case, as I mention in the Meta P.SE thread: when that happens, "old" threads will usually promptly get "protected" to avoid such future mishaps. But still, "value-less" answers that were sent shortly (as opposed to your "long after") will linger in the thread forever, and that seems rather useless. It's not all bad, as the SO model allows viewing by relevance, or activity, etc... But I find threads with dupes, old or new, rather annoying. On a different note, I would have thought this (as in this meta question) was a good and valid question. So I'm surprised to see it downvoted, as it seems to reflect what you think of my behavior on the case I describe (thoughts which should be stated as answers below and votes/upvotes on them) more than the value of the question... That's disturbing, somehow, no? Too lazy to find the link, but "votes on meta are different". They often express (dis)agreement, and I guess the downvotes here express disagreement with your stance that downvoting correct answers merely because somebody else typed a little faster is a good thing. @DanielFischer: ok, didn't know that. Thanks. I reworked my question a bit as I realized my phrasing may have been too oriented, thanks to HovercraftFullOfEels. I don't think that would change the current opinions, but it seems better that way. 
@haylem I downvoted this because I disagree with the idea that a quick duplicate answer deserves a downvote (so much so that I would downvote multiple times if possible). 2 or 3 answers within a close time (a few seconds to a couple of minutes) could easily have been typed at the same time. Others have said it, but this practice only encourages people to post quick sloppy answers, just to get in first. 2 similar answers don't harm the site at all, so there is no reason to be so hard on someone who is just a bit slower... Don't forget that deletes can contribute to a ban, so encouraging people to delete quick duplicates by downvoting might unintentionally push them towards an answer ban. @psubsee2003: Didn't know, never came across the situation. I'm not even sure I understand your scenario here: people would downvote, authors would delete their own answer, and... we'd ban them from answering for that? Even though there's a badge for having the integrity to delete your own answers? I must have misunderstood what you meant. @haylem it is not likely for someone posting good content, but there is an algorithm that will impose a ban on someone who is continually posting bad content (either questions or answers). The criteria are secret, but it is widely assumed that deleted and downvoted content contributes to the ban. @psubsee2003: ok, I agree with that, but then I don't think that would consider self-deleted answers... Or that'd seem rather odd! @haylem Since no one outside the devs really knows the criteria, no one knows, but based on observation, it does consider self deletes. The theory behind it is that people spend time reading or voting on posts, so someone who is deleting their own content is wasting the time of the community. @psubsee2003: Interesting. Thanks. 
The general idea in these situations is to just plain use common sense and to assume that people here posting answers to questions are doing so because they intrinsically care about providing value to others by sharing their knowledge and experience in solving problems. If we take the Theory X style management approach and assume that people are here for bad reasons, everyone loses. But the Theory Y style approach assumes that knowledge workers are intrinsically motivated to do the best they can. Sure, we all love the badges and the reputation and the ability to unlock privileges that give us the ability to help in other ways on the site, but for many, especially users with reputation in the top percentages, the reputation is of little importance. Thus, when Carl posts his answer it's because he loves programming and software development so much that he chooses to volunteer his time to help others solve problems. Sometimes we end up posting duplicate answers by accident, due to the fact that there are 5.5 million visitors per day on Stack Overflow and the chances that two people post very similar content is quite high. While maintaining high standards is important, we must also consider the human factor, as that is what drives people like Carl -- the feeling of satisfaction one derives from helping others. An extra answer on a Stack Overflow post is hardly an exceptional circumstance that would bring this community to a grinding halt; thus, as Hovercraft mentions, just ignore the post and move on. With that said, we don't want people to abuse the system. If it's clear that someone has a track record of posting duplicate answers for the sake of increasing their score, then perhaps that may be worth a second look. But we should never assume people are doing this. 
It will be obvious if they are because most people who do this, first, aren't very smart about it and, second, don't usually put a lot of effort into their answers, which means you'll see a string of low quality one-liner posts. Just keep in mind what the tooltips say on the upvote and downvote buttons. Up: This answer is useful Down: This answer is not useful. And keep in mind that, although two posts may effectively have the same meaning, it's quite possible that the way the other is worded may help a future visitor more quickly understand whatever it is they were confused about. That's why we don't downvote correct answers. :) Hope this helps! As an aside, I oftentimes volunteer to expand my answers if I see that they repeat something someone else said, regardless of who posted first. This is just something I personally like to do to be more helpful, but I'd hardly say it's required. Just lead by example, and the users with the most potential to be awesome will follow your lead. Maybe the tooltip should be "this answer is not (not) useful on its own". Though I'm not sure the distinction would be clear for most people... Thanks for your feedback. I understand the importance of the human factor. I was under the impression that we were trying to train people a bit to behave in a certain way on SE sites though, and to control some of these human aspects (like ego) and to encourage others (constructive criticism). But maybe it's a bit too much to expect, especially of first-timers or new users, and that'd be too much of a deterrent. Seems good indeed. And in all honesty, even though I asked this particular question, I don't systematically use the approach I mentioned above. For one thing it'd take too much effort, but also I think things work fine as they are. But when I see the potential for this early arbitrage, I've never seen it fail. I was just afraid of rubbing people the wrong way though (and apparently I did), hence my request for comments and confirmation. 
I think if you had left a comment, that would have been fine. I personally have no problem with people encouraging others to do the best they can, but if their target is already doing good -- or trying to do good -- and he/she did provide value, we shouldn't use downvotes. As you mentioned in your original post, downvotes can be used as a tool to spring others into action, but they should only be used when content is actively harmful if it's not improved. It's good you want to improve things, and I hope this helps. :) As mentioned, I did use a comment and mention my intent to cancel or even invert the downvote (and I'm not sure the answer's author took it badly, but for sure someone else took offense). But I understand your (and others') point: just making it clear the question needs improvement with a comment is less aggressive, and less demotivational. Though it still leaves the door open for the later answer to grab the lead unfairly... I still think a delay in the visibility of total votes during the early life of a question would solve all of these issues at once. A delay would introduce more problems, like it not being immediately clear to an asker that a post is harmful. Also, as for answers grabbing the lead unfairly, I don't think that's really a problem that matters all that much, at least not in the grand scheme of things. Sure, I've occasionally posted an answer first that isn't the most upvoted, but in many cases I do have the top answer simply because I did take the time to expand my answer. I don't lose much sleep over the answers that didn't get upvoted to the top, and neither should anyone else. :) Just move on and keep providing value. I do, don't worry (move on, not lose sleep). I've been here a while :) But I don't want to use the tools given to me in the wrong way, so that's why I ask. You down-vote if the answer is wrong, and particularly if it may be misleading, but not if it "doesn't add any value". 
If it doesn't add value, then simply don't up-vote it. I don't like to think of down-votes as for "punishing bad ones", but rather for alerting the original poster that this particular answer should not be followed. For a much better written answer to a similar question, please read Jon Skeet's answer here. In this case it's not even to "punish bad ones" though, more to alert them as well. But I see your point. Thanks. Right, I just saw that I actually wrote "punish bad ones" in my question! Sorry about that, that's not really how I meant it. That may have colored my post a bit too much right from the start... I reworked the question a bit to address that. That Jon Skeet answer is indeed a pretty good one. Though in the end, I guess that as rulings are done by mods and admins, it's their views that are the most relevant (until the next round of them are elected or hired, as with governments). And the sad thing is that, in the end, and despite Jon's amazing post, his open-ended request for ruling at the end wasn't answered by a definitive authority either... There are more links in his answer's comments, and the one that was converted to a question in its own right doesn't have a definite answer (http://meta.stackexchange.com/questions/2451/why-do-you-cast-downvotes-on-answers). 
In the end, only you can decide what criteria you will use to judge posts, and it is entirely your right to vote however you see fit. Is the answer clearly wrong - please downvote. Does the answer encourage bad practices? Do you dislike someone's name, their coding style, or think they smell bad, go ahead and downvote them as you desire. But I strongly encourage you to consider the impact of your votes beyond just yourself and the person you downvoted. The simple fact is voting (both upvotes and downvotes) is only designed to relate how useful the community finds a specific answer, nothing more. The other benefits (reputation and the associated privileges) are just fringe benefits to provide incentive to the community to help contribute. Imagine being a new programmer who comes to the site and sees 2 almost identical answers, one has a few upvotes and one has a single downvote. How is this supposed to help me? Is this solution good or bad? He/she doesn't care about rep-whoring or how far apart the answers were posted, he/she just wants an answer to his problem. The point is you need to consider others who will read the question in the future and use the upvotes/downvotes to help decide if an answer will solve their problem. stop thinking of downvotes as punitive measures to punish community members: I don't really. I just badly phrased that and reworked it afterwards. Do you dislike someone's name, their coding style, or think they smell bad, go ahead and downvote them. I'm not that petty :) But I strongly encourage you to consider the impact of your votes beyond just yourself and the person you downvoted. I did and do. I apparently was wrong in my conclusion on the impact, but don't assume that wasn't my approach. I was considering them in terms of pushing for better quality. This wasn't ill-intentioned. Ill-advised, maybe so, and I'm sorry for that. 
Imagine being a new programmer who comes to the site and sees 2 almost identical answers, one has a few upvotes and one has a single downvote. That's my point. They shouldn't see that, as the answer should be deleted if it doesn't have added value. That seems even better to me. Of course they don't care about repwhoring (which never was a point in this debate) or about timing (which they wouldn't really look at). @haylem - Just keep in perspective that the problem Stack Overflow solves is that we -- as people looking for answers to problems we face -- don't need to search an entire forum thread just to find the answer buried on page 14 of 29, 3/4 of the way down the page, after painstakingly reading each and every 'me too' post beforehand. Could the second SO answer be deleted, sure. But is it worth alienating our volunteers when the correct answer is still right there at the top of the page? Probably not. ;) @jmort253: yes, and it surely already works pretty well that way. But while "perfect is the enemy of good" and there may not be a need to fuss too much on getting each thread to look perfect and canonically edited and "curated", I think it would be be bad to just tell ourselves "we do it so much better than other systems, our job here is over". It's true that there are already way too many things that can put people off when they visit SO though, so maybe there's no reason to add alcohol to the fire...
common-pile/stackexchange_filtered
Vehicle GPS tracking for Atmel AVR I have made the circuitry for Vehicle tracking device now I'm stuck in the code of Atmel AVRmega2561, which is supporting with flash memory. Please advise from where should I start. What have you tried? You'll have to explain where you're stuck and what your thinking is so far. Questions like this hardly get answered, because there's not really a specific question to answer. Yes. "where do I start" is very general, and you haven't given much specifics on what exactly you've tried so far. Mostly questions like this might not even get an answer - for future questions, show us the code you've tried, and the problems you've encountered (eg the error messages) and ask for specific help with each error you have received. Start by learning a bit about the Kalman Filter, which will allow you to filter the GPS data. To help more, we would need to see the circuits and know what you have tried and/or what your experience is. Modern GPS chips will do that filtering for you. You can pretty much take their output at face value and throw it directly at your business logic. They have all the gyros etc in them? Eh, no, but gyro's are unrelated to Kalman filters (or position in general, usually - the normal use is heading/bearing). Accelerometers are the one dubious bit, but considering modern GPS accuracy you ignore the accelerometer as long as the GPS fix is good. My mind went blank on accelerometers. GPS is still only accurate to 100 metres though, right? Standard deviation is ~2 meters. Galileo (European GPS) will push it to 1 meter. (It's good enough for my employer to count how many lanes a highway has: the GPS traces of cars on different lanes are sufficiently distinct)
common-pile/stackexchange_filtered
Get missed element of array I have a method which receives an array public static int getMissed(int[] array) {...} For example it will be {1,3,5,7,11} As you can see 4d element is missed, it should be 9. For example array's size is unknown and step is unknown, how can i find missed element? I have no idea. It was the question from my test task and i could not solve it. please show your effort? In this case we know it's 9. What other rules can be applied to elements? Take up to 4 elements to find the step, then either iterate, or use binary search I don't get why is 4d element missing, and why it should be 9... Could you try to explain your problem some other way? @sp00m Because each element is +2 of previous one.. sp00m i meant 4 index of array, and Maroun is rught tod, i wrote, i have no idea @user3673623 is my answer correct&good for you? I'm writing this just to give a solution. I'm sure there will be better answers. public static int getMissed(int[] array) { int difference, difference_2; if (array.length < 3) // we need at least 3 elements return -1; difference = array[1] - array[0]; difference_2 = array[2] - array[1]; if (difference == difference_2) { for (int i = 3; i < array.length; i++) { if (array[i] - array[i - 1] != difference) { return array[i] - difference; } } } else { if (difference > difference_2) return array[1] - difference_2; else return array[2] - difference; } return -1; }
common-pile/stackexchange_filtered
Ignore first parskip inside a save box I have the following definition (MWE below) of a lrbox. If I use the normal setting parskip=off the interaction \mdf@restoreparams works as expected. Inside the lrbox I can use parskip and parindent. But if I use the option parskip=half I get an extra skip at the beginning of the save box. How can I avoid this. Picture MWE: \documentclass[parskip=half]{scrreprt} \usepackage{kantlipsum} \catcode`\@11\relax \def\mdf@lrbox#1{% \edef\mdf@restoreparams{% \parindent=\the\parindent \parskip=\the\parskip}% \setbox#1\vbox\bgroup% \color@begingroup% % \mdf@horizontalmargin@equation% \columnwidth=\hsize% \textwidth=\hsize% \let\if@nobreak\iffalse% \let\if@noskipsec\iffalse% \let\par\@@par% \let\-\@dischyph% \let\'\@acci\let\`\@accii\let\=\@acciii% \parindent\z@ \parskip\z@skip% \linewidth\hsize% \@totalleftmargin\z@% \leftskip\z@skip \rightskip\z@skip \@rightskip\z@skip% \parfillskip\@flushglue \lineskip\normallineskip% \baselineskip\normalbaselineskip% %% \sloppy% \let\\\@normalcr% \hrule \@height\z@ \@width\hsize\relax \mdf@restoreparams\relax \@afterindentfalse% \@afterheading% } \def\endmdf@lrbox{\color@endgroup\egroup} \newbox\MyTestBox \begin{document} \mdf@lrbox\MyTestBox \kant[1] \kant[1] \endmdf@lrbox \fbox{\box\MyTestBox} \end{document} I didn't tag this question with KOMA because the same result is reached by the header with a standard class \documentclass[]{report} \usepackage{parskip} or with memoir \documentclass[]{memoir} \setlength{\parindent}{0pt} \nonzeroparskip If you have any improvements I won't be disappointed ;-) Well.... If you don't do \hrule \@height\z@ \@width\hsize\relax Then you don't get the parskip at the top. 
Then in your end code, if the width of the box is not what you expect (because no paragraph material has been added) do \vbox{ hrule \@height\z@ \@width\hsize\relax \unvbox the box you had } \documentclass[parskip=half]{scrreprt} \usepackage{kantlipsum} \catcode`\@11\relax \def\mdf@lrbox#1{% \edef\mdf@restoreparams{% \parindent=\the\parindent \parskip=\the\parskip}% \def\tmp{#1}% \setbox#1\vbox\bgroup% \color@begingroup% % \mdf@horizontalmargin@equation% \columnwidth=\hsize% \textwidth=\hsize% \let\if@nobreak\iffalse% \let\if@noskipsec\iffalse% \let\par\@@par% \let\-\@dischyph% \let\'\@acci\let\`\@accii\let\=\@acciii% \parindent\z@ \parskip\z@skip% \linewidth\hsize% \@totalleftmargin\z@% \leftskip\z@skip \rightskip\z@skip \@rightskip\z@skip% \parfillskip\@flushglue \lineskip\normallineskip% \baselineskip\normalbaselineskip% %% \sloppy% \let\\\@normalcr% % \hrule \@height\z@ \@width\hsize\relax \mdf@restoreparams\relax \@afterindentfalse% \@afterheading% } \def\endmdf@lrbox{\color@endgroup\egroup \ifdim\wd\tmp<\hsize \typeout{making box fill width} \setbox\tmp\vbox{% \hrule \@height\z@ \@width\hsize\relax \unvbox\tmp}% \fi } \newbox\MyTestBox \begin{document} \mdf@lrbox\MyTestBox \kant[1] \kant[1] \endmdf@lrbox \fbox{\box\MyTestBox} \mdf@lrbox\MyTestBox \begin{tabbing}aaa\end{tabbing} \endmdf@lrbox \fbox{\box\MyTestBox} \end{document} The aim of the rule is to get the correct with of the vbox ;-) -- I tried \setbox\MyTestBox=\vbox{\unvbox\MyTestBox} but it didn't remains the first skip. If a paragraph is started (ie you get the parskip) the box will have the correct width without that rule (as the paragraph lines will be hboxes of that width) you only need the rule when there is no paragraph, I'll fill out a patched MWE. Exactly what I searched You can add \vskip-\parskip just at the end of the definition of \mdf@lrbox, that is between \@afterheading and the closing brace. 
This will cancel out the \parskip glue automatically inserted when the first paragraph is about to be started. So long as the paragraph does get started. the reason why the rule is there in Marco's code is to catch the cases where no paragraph is started, to ensure the final box has full width.
common-pile/stackexchange_filtered
"Kinetic matrices" I've been wondering for a very long time what properties are know on what I will call "kinetic" matrices, for lack of a proper name. These matrices $k_{ij}$ have the following properties: $\forall j\neq i, k_{ij} \geq 0$ $ \forall i, \sum_{j} k_{ij} = 0$ These kind of matrices are very common in enzyme kinetics. For instance the Michaelis-Menten equations, arising for instance from the chemical reaction: $$\mathrm{E + S} \mathop{\rightleftharpoons}^{k_\mathrm{f}}_{k_\mathrm{b}} \mathrm{ES} \mathop{\to}^{k_\mathrm{cat}} \mathrm{E + P}$$ For this system, the evolution over time of the concentrations of $\mathrm{E}$ and $\mathrm{ES}$ can be deduced from the differential equation: $$ \dot{\mathbf{x}} = k \mathbf{x} $$ With $\mathbf{x} = (E, ES)$ and $k$ the following matrix: $$ k = \left( \begin{array}{cc} -k_f s & k_b + k_{cat}\\ k_f s & -k_b - k_{cat}\\ \end{array} \right) $$ There are a few (more or less) obvious properties, such as: they are not invertible; their eigenvalues have negative real parts; the eigenvalues that are not $0$ have eigenvectors with sums of coefficients equal to $0$. I'm sure these matrices have been heavily studied before, I just don't seem to find how they are called, and where to learn more about them. What is known about them ? @ Rahul Thanks ! I've edited the post that is now correct. Any good pointer about their properties ?
common-pile/stackexchange_filtered
Printing an HTML page on preprinted paper I need to design a page for a web application that makes sense for user input. Sadly, the simplest and most logical manner for the user to interact with the data is completely unlike the form it needs to be printed in. I'm aware of the @media @print @screen styles that allow me to style up the page differently for different media. What I'm trying to figure out is - is there a way of displaying the labels according to the location on the paper which they must be printed rather than laying out the screen and hoping it prints out correctly? s there a way of displaying the labels according to the location on the paper which they must be printed rather than laying out the screen and hoping it prints out correctly? I don't think there is, as all browsers will add their headers / footers to the document, plus there may be the printer's margins to consider (but that problem you will have whatever format you choose). I think the only half-way reliable way to build a document with exactly positioned elements is generating a PDF document. The problem with this is [AFAIK] I can't display elements that show on the printed paper that don't print in PDF, and to have the user print the PDF, it's going to generate and return to their local machine for printing on their local printer. If I return an HTML, I can at least display it as it would be printed so they see on screen exactly as it looks on paper... or I don't return anything, I just print to paper using a custom CSS. I'm wondering how I can achieve absolute positioning on paper using CSS though. @Bob my practical experience with this doesn't reach that deep but it should be possible to specify cm/inch measures when positioning elements: left: 1.5cm; top: 9cm. You could try and see what the browsers make of it. @BobTheBuilder let me know how it works out. 
What you still have to deal with though, is that the browsers add their own stuff to the top and bottom, and are very likely to not take that into consideration when calculating the positions. Yeah, you can switch them off in the browser. It's not a huge headache. I'm wondering whether I can remove those with JavaScript, haven't got that far yet. The only thing I'm having problems with now is the printer's margin which seems to be fixed at 0.5cm and doesn't appear to be changeable. @Bob I'm quite sure you can't influence the browser headers through JS. As for the margin, are you sure that is the browser and not your printer? It should be possible to print ignoring the margin, but that may depend on the driver. @Pekka - it seems to be the printer, thankfully it doesn't seem to upset anything. The preprinted paper doesn't require printing outside that margin and all measurements seem to be set from that margin. i.e. left:7cm = 7.5cm from the edge of the paper. What's troubling is that a div with width:10cm actually prints as 10.2cm when a border of even 1px is specified. Maybe I need to change the units for the border... @Bob that sounds better than I expected! Re the border, if you're on Firefox you could try outline, it should not change the layout. @Bob plus you could insert a nested additional <div> inside the <div style='width: 10cm'> and give the inner one the border. I think I'd move away from HTML if I needed that. What about giving the user one convenient way to interact with the data and then returning it to the user in a more print-suitable format --- say, PDF. If you can't achieve the layout you want with css, why add a link to a separate page for a printable version. If you absolutely have to have control over the positioning on a page, your only decent option is generating a pdf.
common-pile/stackexchange_filtered
Spring SAML + Wildfly 8 + IBM Jdk (1.7) - java.lang.RuntimeException: org.w3c.dom.ls.LSException: [ERR 0462] An unsupported encoding is encountered I am getting a error when deploying spring-saml application in Wildfly 8 with IBM JDK 1.7. Interestingly Googling got me no answers. The error stacktrace is Caused by: org.w3c.dom.ls.LSException: [ERR 0462] An unsupported encoding is encountered. at org.apache.xml.serializer.dom3.LSSerializerImpl.write(Unknown Source) [xml.jar:] at org.opensaml.xml.util.XMLHelper.writeNode(XMLHelper.java:892) at org.opensaml.xml.util.XMLHelper.writeNode(XMLHelper.java:872) at org.opensaml.xml.util.XMLHelper.nodeToString(XMLHelper.java:834) at org.opensaml.xml.XMLConfigurator.load(XMLConfigurator.java:159) at org.opensaml.xml.XMLConfigurator.load(XMLConfigurator.java:143) at org.opensaml.DefaultBootstrap.initializeXMLTooling(DefaultBootstrap.java:203) at org.opensaml.DefaultBootstrap.initializeXMLTooling(DefaultBootstrap.java:186) at org.opensaml.DefaultBootstrap.bootstrap(DefaultBootstrap.java:92) at org.opensaml.PaosBootstrap.bootstrap(PaosBootstrap.java:27) at org.springframework.security.saml.SAMLBootstrap.postProcessBeanFactory(SAMLBootstrap.java:42) at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:696) at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:686) at org.springframework.context.support.AbstractApplicationContext.__refresh(AbstractApplicationContext.java:461) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java) at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:410) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:306) at 
org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:112) at io.undertow.servlet.core.ApplicationListeners.contextInitialized(ApplicationListeners.java:173) at io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:194) Any ideas? I think the problem is the dependency between opensaml.jar and serializer.jar. As you used Spring Security SAML plugin, you may be using the following libraries (as they are in the demo application): opensaml-2.6.1.jar spring-security-config-3.1.2.jar spring-security-core-3.2.1.jar spring-security-web-3.2.1.jar spring-security-saml2-core-1.0.2.jar Wildfly 8 (and 9 too) both are packed with serializer-2.7.1.jbossorg-1.jar (/modules/system/layers/base/org/apache/xalan/main) which seems to have some differences with the "standard" serializer-2.7.1.jar (https://mvnrepository.com/artifact/xalan/serializer/2.7.1). I compared both jars and they contain the same classes and resources, however their sizes differ. The "standard" one was compiled with IBM JDK 1.3.1, JBoss version with Oracle JDK 1.6_20 - This is based on both MANIFEST.MF files. Note opensaml.jar was compiled with the "standard" serializer library based on the Maven repository information. I was searching for the source code of the "standard" serializer-2.7.1.jar but I could not find it. However, I found another 2.7.1 build, this time from Spring and compared with the one JBoss source, and they have some differences. I assume that if I compare the "standard" source code I will find the same. It is not a simple file renaming as I have seen before. The next thing I did was a little bit of experimentation. Firstly just dropped serializer-2.7.1.jar into my application.war/WEB-INF/lib, started Wildfly (v 8.2.1) and the application started without any issues. Removed the JAR and back to the exception. 
Next, I changed WF’s xalan module to use serializer-2.7.1.jar instead serializer-2.7.1.jbossorg-1.jar (removed the jar file and edited module.xml). Started WF, and surprise, I got the same exception. I even tried changing to serializer-2.7.1.jbossorg-2.jar (packed with Wildfly 10) and serializer-2.7.2.jar (https://mvnrepository.com/artifact/xalan/serializer/2.7.2) and the exception still occurred. Finally, I rolled back the module changes to the original state and created a custom WF module including serializer-2.7.1.jar and added the module dependency in the application “jboss-deployment-structure.xml”. It is important to highlight the application starts without any issue (or additional serializer.jar) under WAS 8.5.5 and WAS Liberty Profile 8.5.5. (target production AS and another development AS respectively). Custom modules in Wildfly are handy to avoid modifying the application WAR on this kind of scenarios. As an additional note, as part of my test to solve this issue I upgraded to the latest Spring-Security (3.2.9) and Open-SAML (2.6.4) and I needed to upgrade to serialize.jar to 2.7.2. P.S. I worked with Mahe in the same project, but he left few months ago and I got the task to work out this issue.
common-pile/stackexchange_filtered
Time travel short story where past is blank and future is unformed matter I'm looking for the title and author of this short story. The premise is that the main character invents a time machine, but when he travels to the past, there is nothing but void and when he travels to the future there is nothing but chaotic, unformed matter. The conclusion of the time traveller is that the present is like a train on a single track using the unformed matter of the future to create existence in the present, and once the matter has been used it leaves nothing behind, hence the void of the past. The story was in an anthology and was probably written in the 50s, 60s or 70s, possibly 80s, but not later. I can't remember if it was an anthology from just one author or several. I have checked the Wikipedia page List of time travel works of fiction but the premise doesn't appear to be there. I've also tried searching The Internet Speculative Fiction Database but with not much information to go on, my advanced searches were fruitless. As always your help will be much appreciated. Edit 2018-01-20 I don't want to send people off on the wrong track but I have a vague feeling the story is either one of Michael Moorcock's or in an anthology he edited. Edit 2018-02-10 I have been through most of the titles in this advanced ISFDB search, but am no nearer to finding the answer. Unfortunately, a lot of titles do not contain a synopsis. I have also asked this question at the Science Fiction and Fantasy Community This sounds like the Langoliers. Except that nobody time travels in The Langoliers @MrLister - The entire premise of The Langoliers is time-travel. The people on the plane end up a few minutes or hours. Unlike classical time-travel stories the past isn't a place you can visit, it's just a slowly decaying still moment It's "Escape From Evening", a direct sequel to the aforementioned short story "The Time Dweller" in the eponymous collection by Michael Moorcock. 
Review summaries (in dialect): The Time Dweller The title storie is n interestin wan aboot the major character, Scar-Faced Brooder (sums up maist Moorcock protagonists tae be fair), leavin unwelcom hame tae leern aboot the wurld of a deeing Earth nae langer meant fir us humes. He gaes tae a toon n learns sumhing of the nature ah time. N interstin tale n the source ah the bangin cover art fir ma edition it his sum commentarie on how traditons kin be stupid n tim is a construct. Escape From Evening Set in the same wurld is 'The Time Dweller' tis carreed on the thems aboot tim n hoo yeah cannae gae back r firwurd, only liv noo. Deals wae a phenominon tha is so prevalent, the idealised past. Guid read. Well I found the story online at Sribd but just as it was about to confirm my memories it ended! Is that how it ends in the story, or is there more? There is more. Shame the story in the link is incomplete. I'm going on decades-old memory, but I recall there's a lot more after that, including a concluding conversation about the nature of time. You must have a good memory then! A friend of mine has got a copy of The Time Dweller and he's confirmed it's the story I was thinking of, so thank you. You're welcome. My memory is great when it comes to decades-old stories about time travel. When it comes to important things... Not so great! I might have to reread Feersum Endjinn now. This is a bit of a long shot, but how about Time Is the Simplest Thing by Clifford D. Simak? Time travel isn't the main point of the novel (in spite of the title!), but the main character does discover that he can move into the past and future. Only things that are dead in the present appear in the past -- so it is a desolate place with no plants, no animals, no people. Furthermore, while non-alive things people built -- bridges, houses, machines -- are visible, they are ghosts that he can see but not touch. The future is a gray formless chaos/void. 
https://en.wikipedia.org/wiki/Time_is_the_Simplest_Thing Written in 1961 and published by Doubleday and in Astounding in a shorter form, a novella, "The Fisherman." @user14111 Quite right! Project Fishhook, was the paranormal space travel organization. I've edited it. It's still not clear from what you've written that "The Fisherman" was the title given to the ASF serial. (By the way Astounding had officially changed its name to Analog by that time.) http://www.philsp.com/visco/Magazines/ASF/ASF_0365.jpg @MarkOlson this is a contender, but I do seem to recall some sort of machine was involved. The problem, of course, is the passage of time (mine) and my memory perhaps confusing things. @M. Robert Gibson: Yes and no. In Time Is the Simplest Thing, they have what are called the "Star Machines" which assist the telepath to send his or her mind to the stars, but unless I'm misremembering, the hero doesn't need them any more. In fact, his time travelling is done first to escape a mob, but second to remove a Star Machine from the hands of a bad guy.I don't know if that helps any! (And I know what you mean about memories getting melded. I remembered the interesting images of the past and future, but until I re-skimmed the book, had forgotten entirely why he time traveled.) This could be "Flux" by Michael Moorcock (actually this is a collaboration with Barrington J Bayley, but often the author attribution solely given to Moorcock). Originally published in 1963. The story has been reprinted multiple times. As shown in this entry on the Internet Speculative Fiction Database. This makes it likely that it could have been encountered at any time from the 1960s to the early 21st century. ADDENDUM: I have read "Flux" and while it has some concepts similar to the story I thought I remembered It's not exactly what I recalled. That story was basically about a fellow taking a time machine into either the past or future and only finding a formless void. 
Essentially the past and future do not exist beyond the present. There is a second possibility. A Moorcock short story published in New Worlds in 1964, so it is in the same ball park, called "The Time Dweller". I regret don't have access to a copy of this story, so I can't any comparisons. Hopefully someone with a more extensive collection of Moorcock's works may have better luck at confirming or refuting the possibility of it being 'The Time Dweller." On the positive side, so far everyone seems to think it's a Moorcock story. The OP's description reminded me of "Flux" but the premise seems different to me. The OP's "train on a single track" analogy does not really seem to describe the "Flux" world. If the OP wishes to check out "Flux" it's available at the Internet Archive. @user14111 I'm not entirely sure myself. This is the problem with memory. I definitely remember a Moorcock story where only the present seems to exist & not the past or future. So this may not be it. Thanks for the link to the story. The OP can decide for himself if it's right. In the back of my mind I had an idea it was Moorcock, but I could only find reference to Behold The Man in relation to time travel. It could be the one I'm looking for but my memory has twisted how I remember it. Interesting that @a4android remembers a similar story, so we might be onto something. I'll just have to re-read all my Moorcocks! @user14111 I gave "Flux" a quick read through. There are distinct similarities, but enough differences to suspect it might not be the correct story. I have a second suggestion for a possible candidate. I edited my answer accordingly. Good luck with the hunt. I've found a list of Moorcock's short stories one of which is called Time Drop, but I can't find any synopsis anywhere. @M.RobertGibson Do you happen to know where "Time Drop" was published? The page you linked to seems to have no bibliographic data for "Time Drop", and the ISFDB has no story at all by that title. 
@user14111 According to this page it was published in the 1965 edition of Boy's World Annual. However, I have no recollection of ever having seen that publication, so it might be a false lead. Gotta love Barrington J. Bayley. Dude was trippin'. @RossPresser Too true! Bayley outweirded the best of them. His writing deserves more love for its sheer audacity and verve.
common-pile/stackexchange_filtered
Iterating through fields and generating formatted string output I want to iterate through all integer type fields in a layer and generate new "display" fields marked with a 'd' at the end of the name using the appropriate field calculator expression. The script basically works, it generates the correct columns and even populates them with correctly formatted strings, but it runs extremely slowly on my reasonably fast machine and after it has run some time, it leaves most of the cells in the new columns empty (NULL), so it might have run into some sort of timeout. If I perform the task manually via the Field calculator, it is done in a second. Perhaps a more experienced QGIS user could give me a hint about what I'm doing wrong here, it would be great: (QGIS Version used: 3.20.1 on Windows 10) from qgis.core import QgsProject, QgsField, QgsExpression, QgsFeature import processing from PyQt5.QtCore import QVariant layer = iface.activeLayer() # Get name and type of attributes in layer layerfields = [ {"name": i.name(), "type": i.typeName()} for i in layer.fields() ] # Filter for integer type fields intfields = list(filter(lambda x: x['type'].startswith('Integer'), layerfields)) intfieldsnames = list([x['name'] for x in intfields]) # Generate formatted string type field for display purposes via QgsExpression layer.startEditing() prov = layer.dataProvider() for a in intfieldsnames: fld = QgsField(a+'d', QVariant.String) prov.addAttributes([fld]) layer.updateFields() idx = layer.fields().lookupField(a+'d') e = QgsExpression('format_number('+a+',0,\'de\')') c = QgsExpressionContext() s = QgsExpressionContextScope() s.setFields(layer.fields()) c.appendScope(s) e.prepare(c) x = layer.getFeatures() for f in x: c.setFeature(f) value = e.evaluate(c) atts = {idx: value} layer.dataProvider().changeAttributeValues({f.id():atts}) x = layer.getFeatures() layer.commitChanges() Is this code faster (tested on a 300-feature memory layer): layer = iface.activeLayer() # Get name and type 
of attributes in layer intfieldsnames = [i.name() for i in layer.fields() if i.type() in [QVariant.LongLong, QVariant.Int]] with edit(layer): prov = layer.dataProvider() new_fields = [QgsField(f"{a}d", QVariant.String) for a in intfieldsnames] prov.addAttributes(new_fields) layer.updateFields() for feat in layer.getFeatures(): for field_name in intfieldsnames: feat[f"{field_name}d"] = f"{feat[field_name]:,}".replace(",", ".") layer.updateFeature(feat) Thank you! This works fine and gives me a string version of the integer field, but not a formatted one like with the field calculator expression format_number, which would give me points as thousand separators like: 1163235 -> 1.163.235. I don't know where to fit the field calculator expression. @gisnogud: see my edit, it works for me. Did the trick! Thank you! (So the takeaway is: Don't try to use field calculator expressions in pyqgis code...) You can also use Refactor fields. Takes ~15 s for a feature class with 44.000 features with 15 integer fields: lyr = QgsProject.instance().mapLayersByName('sksNaturvardsavtal')[0] #List all fields current mapping (a list of dictionaries, one dict for each field) default_mapping = [{'expression':f.name(), 'length':f.length(), 'name':f.name(), 'precision':f.precision(), 'type': f.type()} for f in lyr.fields()] #Find the integer fields intfields = [f for f in lyr.fields() if 'int' in f.typeName().lower()] #Create field mapping to add them as string fields, note the expression newfields = [{'expression': 'format_number("{0}", 3)'.format(f.name()), 'length': f.length(),'name': '{0}_d'.format(f.name()), 'precision': 0,'type': QVariant.String} for f in intfields] newmap = default_mapping+newfields #Use refactor field to add and calculate the new string fields processing.runAndLoadResults("native:refactorfields", {'INPUT':lyr,'FIELDS_MAPPING':newmap,'OUTPUT':'TEMPORARY_OUTPUT'}) (I ran the tool manually, Ctrl+Alt+H, copied the command syntax and adapted it using List Comprehension) 
Seems to be really fast! Wow. Thank you for your help! It's greatly appreciated.
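The accepted answer's thousands-separator trick can be isolated from QGIS entirely. A minimal sketch (not from the thread, plain Python): the built-in format spec groups with commas, and a replace swaps in the dot separator that format_number(..., 'de') would produce for integers.

```python
def format_de(value):
    """Format an integer with '.' as the thousands separator (German style),
    e.g. 1163235 -> '1.163.235'."""
    return f"{value:,}".replace(",", ".")

print(format_de(1163235))  # 1.163.235
```

For locale-correct formatting beyond integers, Python's locale module (or the babel package) would be the more general route.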
common-pile/stackexchange_filtered
Fill-In validation with N format I have a fill-in with the following code, made using the AppBuilder DEFINE VARIABLE fichNoBuktiTransfer AS CHARACTER FORMAT "N(18)":U LABEL "No.Bukti Transfer" VIEW-AS FILL-IN NATIVE SIZE 37.2 BY 1 NO-UNDO. Since the format is N, it blocks the user from entering non-alphanumeric entries. However, it does not prevent the user from copypasting such entries into the fill-in. I have an error checking like thusly to prevent such entries using the on leave trigger: IF LENGTH(SELF:Screen-value) > 18 THEN DO: SELF:SCREEN-VALUE = ''. RETURN NO-APPLY. END. vch-list = "!,*, ,@,#,$,%,^,&,*,(,),-,+,_,=". REPEAT vinl-entry = 1 TO NUM-ENTRIES(vch-list): IF INDEX(SELF:SCREEN-VALUE,ENTRY(vinl-entry,vch-list) ) > 0 THEN DO: SELF:SCREEN-VALUE = ''. RETURN NO-APPLY. END. END. However, after the error handling kicked in, when the user inputs any string and triggers on leave, error 632 occurs. Is there any way to disable the error message? Or should I approach the error handling in a different way? EDIT: Forgot to mention, I am running on Openedge version 10.2B You didn't mention the version, but I'll assume you have a version in which the CLIPBOARD system handle already exists. I've simulated your program and I believe it shouldn't behave that way. It seems to me the error flag is raised anyway. My guess is even though those symbols can't be displayed, they are assigned to the screen value somehow. Conjectures put aside, I've managed to suppress it by adding the following code: ON CTRL-V OF FILL-IN-1 IN FRAME DEFAULT-FRAME DO: if index(clipboard:value, vch-list) > 0 then return no-apply. END. Of course this means vch-list can't be scoped to your trigger anymore, in case it is, because you'll need the value before the leave. So I assigned the special characters list as an INIT value to the variable. After doing this, I didn't get the error anymore. Hope it helps. 
That probably won't work when the user right clicks and selects paste. Ah, I forgot to mention I am running on version 10.2B. As Mike has stated, this doesn't seem to work if the user right clicks and selects paste, which also needs to be handled, otherwise the error will pop up again. But I think this is a great start, thanks! To track changes in a fill-in, I always start with this code: ON VALUE-CHANGED OF FILL-IN-1 IN FRAME DEFAULT-FRAME DO: /* proofing part */ if ( index( clipboard:value, vch-list ) > 0 ) then do: return no-apply. end. END. You could add some mouse or developer events via AppBuilder to track changes in a fill-in.
common-pile/stackexchange_filtered
Malformed XML in JSF 2.1 redirect I have a login page that's getting this error in only one client machine. The same application deployed in the same JBoss EAP version is working in every other server we try. After you input your credentials, the application redirects to a home page. We are getting this weird error I haven't been able to find in any forum. Internet explorer: Error [status: emptyResponse code: 200]: An empty response was received from the server. Check server error log. XML5635: XML declarations are only permitted at the beginning of the file. Chrome: Error [status: emptyResponse code: 200]: An empty response was received from the server. Check server error log. Firefox: [window] Error [status: malformedXML code:200] XML Parsing Error: junk after document element. The server log has nothing on this error but I managed to get the response from that request, it seems that the request is being sent twice to the server and in return we are getting a duplicated error xml response: <?xml version='1.0' encoding='UTF-8'?> <partial-response> <redirect url="/saraweb/inicio.jsf"></redirect> <changes> <extension aceCallbackParam="validationFailed">{"validationFailed":false}</extension> </changes> </partial-response> <?xml version='1.0' encoding='UTF-8'?> <partial-response> <redirect url="/saraweb/inicio.jsf"></redirect> </partial-response> Extra oddities: The application works fine ONCE, after server restart. The application works fine ONCE, after clearing browser cache. The same war works fine in the same JBoss version in another machine. Technologies used: java 1.6, jsf 2.1, icefaces 3.3. Redirect is done through navigation handler configured in faces-config.xml. I just need help to figure out what's wrong. Thank you Likely related to session expiration/renewal and something going wrong in IceFaces ajax response writer: http://www.icesoft.org/JForum/posts/list/17542.page No session expiration is involved here. 
@BalusC you're my only hope, I can't believe you haven't seen anything like this!! I haven't really used IceFaces. If I were you, I would start looking around in its PartialViewContext implementation, if any.
common-pile/stackexchange_filtered
Snakemake rules that are dependent on checkpoint outputs skipped I currently have a Snakefile with the following checkpoint & rules: checkpoint a: input: "some_bam.bam" output: info="some_info.info", zip="some_zip.gz" rule sub: input: info = lambda wc: checkpoints.a.get(**wc).output.info, file = "some_file.txt" output: "sub_out.out" checkpoint b: input: zip = lambda wc: checkpoints.a.get(**wc).output.zip, some_other_file = "some_other_file.txt" output: "cbout.out" (shell/scripts seemed irrelevant to this problem so I skipped those) When I run the snakefile, for some reason the sub rule is not run and goes straight from checkpoint a to checkpoint b. Dry-running the pipeline with the command snakemake --forceall -n --until sub built DAG of 0 jobs. Seems like Snakemake is ignoring the rule altogether and I'm not sure why. Any help would be appreciated. Thanks! I'm currently using Snakemake version 8.25.3.
common-pile/stackexchange_filtered
Parsing JSON Data Sent from Calendly.com I am trying to forward data sent by calendly.com on to my webserver. I am connecting to that webserver and can post data to it just fine using $ch = curl_init($url); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $myvars); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($ch, CURLOPT_HEADER, 0); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); $response = curl_exec($ch); $myvars = 'first_name=' . $full_name . '&email=' . $thisemail; However, I am having difficulty getting data into $myvars. I have tried this $data = $_REQUEST['payload']; $unescaped_data = stripslashes($data); $obj = json_decode($unescaped_data); $thisemail = $obj->invitee->email; $full_name = $obj->invitee->name; But that variable doesn't populate. I have also changed the $data = $_REQUEST['payload'] to $data = $_POST['payload'] I have also tried a totally different approach $json = file_get_contents('php://input'); $request = json_decode($json, true); $thisemail = $request["payload"]["invitee"]["email"]; When I just put dummy strings in $thisemail and $full_name it works as expected. I also read that this data is gzip encoded. So I also tried changing this $request = json_decode($json, true); to this $request = gzdecode($json); but also with no effect. 
Here is a snippet of what the data is supposed to be looking like { "event":"invitee.created", "time":"2016-08-23T19:16:01Z", "payload":{ "event_type":{ "kind":"1-on-1", "slug":"event_type_name", "name":"Event Type Name", "duration":15 }, "event":{ "uuid":"BBBBBBBBBBBBBBBB", "assigned_to":[ "Jane Sample Data" ], "extended_assigned_to": [ "name": "Jane Sample Data", "email"<EMAIL_ADDRESS> "primary": false ], "start_time":"2016-08-23T12:00:00Z", "start_time_pretty":"12:00pm - Tuesday, August 23, 2016", "invitee_start_time":"2016-08-23T12:00:00Z", "invitee_start_time_pretty":"12:00pm - Tuesday, August 23, 2016", "end_time":"2016-08-23T12:15:00Z", "end_time_pretty":"12:15pm - Tuesday, August 23, 2016", "invitee_end_time":"2016-08-23T12:15:00Z", "invitee_end_time_pretty":"12:15pm - Tuesday, August 23, 2016", "created_at":"2016-08-23T00:00:00Z", "location":"The Coffee Shop", "canceled":false, "canceler_name":null, "cancel_reason":null, "canceled_at":null }, "invitee":{ "uuid":"AAAAAAAAAAAAAAAA", "first_name":"Joe", "last_name":"Sample Data", "name":"Joe Sample Data", <EMAIL_ADDRESS> ... @blewherself's answer brings to light the important fact that the JSON in the example found here is not valid JSON and I have upvoted their answer for that reason. After further investigation, however, I do believe that this is an error of that page alone and that the actual JSON passed by them is not invalid. However, I am happy to add here my answer that I found after much persistence as I believe that the documentation at calendly.com is too brief to be very useful. The JSON file can be retrieved with $HTTP_RAW_POST_DATA and otherwise you can access the individual components more or less according to the example in the docs, at least the part I was interested in. 
Like so $data = json_decode($HTTP_RAW_POST_DATA, true); $full_name = $data["payload"]["invitee"]["name"]; $thisemail = $data["payload"]["invitee"]["email"]; $first_name = explode(" ", $full_name)[0]; $last_name = explode(" ", $full_name)[1]; You'll notice a glaring omission of something their form asks customers for: phone number. It appears that even though the client is asked for it, it is not passed to your url in the JSON file. Also, though first and last name are given as a passed item in the JSON example in the docs, neither it nor many other items in the example are asked for in their form. I am not sure how they expect that data to be passed in their JSON. I sincerely hope the folks at calendly address these issues soon. "In the case of POST requests, it is preferable to use php://input instead of $HTTP_RAW_POST_DATA as it does not depend on special php.ini directives." Source: http://php.net/manual/en/wrappers.php.php#wrappers.php.input JSON is not valid. Square brackets are used for arrays ex. ["Jane", "John", "Jack"] Braces are used for objects ex. {"name": "John", "surname" : "Wick"} Try "extended_assigned_to": { "name": "Jane Sample Data", "email"<EMAIL_ADDRESS> "primary": false }, Instead of "extended_assigned_to": [ "name": "Jane Sample Data", "email"<EMAIL_ADDRESS> "primary": false ], In this case you can write a function that corrects mistakes in encoded JSON before feeding it to JSON decode. I've written a function that would handle it for you using a regular expression. It searches for all cases where a colon is placed inside square brackets and replaces them with braces. Should work with any number of such cases in one response. Hope it will help. 
function correctJSON ($string) { //Get an array of all cases with invalid brackets preg_match_all('/\[".*":.*\]/', $string, $matches); //Create an array with braces with inside content to replace with for($i=0; $i<count($matches[0]); $i++) { $replace[$i] = str_replace(array('[',']'), array('{','}'), $matches[0][$i]); } //Replace return $string = str_replace($matches[0], $replace, $string); } P.S. One can't just replace square brackets with braces straight away in the source. It will replace arrays ["Jane", "John", "Jack"] with {"Jane", "John", "Jack"}, which is not valid JSON either. I don't have any control over the JSON. It is posted to my page by the folks at calendly.com. I've verified your JSON snippet at jsonlint.com and it says JSON is not valid. As soon as it's not valid you'll get NULL after using json_decode. I believe that's why you can't get data from your request. It's not the complete JSON. I wonder if it would be valid if you put in the entire JSON. For the complete JSON, you can go here. In any case, if calendly.com isn't sending me valid JSON, then I really am not sure what to do about that. It's valid except for the "extended_assigned_to" object that should be placed in { } instead of [ ]. I believe you'd have to write a regular expression to replace brackets before using json_decode. I'll try to help in a couple of hours OK. Thanks. I think I need to report this issue to calendly.com. Edited my answer. Please check. Thanks for this. Another issue with the same problem has crept up, so I will have to check on it later. I upvoted it for no other reason than the fact that you pointed out that the API provides some invalid JSON. Can you get a JSON response to check if it's invalid again? I also doubt that your $_REQUEST['payload'] is working correctly. JSON is sent in response as a string by default. So if the server sends you an object with 'event', 'time' and 'payload' you probably can't get it straight away from the string in the response. 
If that is the case, you should get the request as a whole, decode it, and only then try to get 'payload'. $obj->payload->invitee->email is working as intended in my tests. By the way, calendly.com seems to have already fixed the mistake in the JSON. At least they have changed the Sample Webhook Data page in their Docs you had linked to.
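The bracket problem blewherself identified is easy to confirm with any strict JSON parser, not just jsonlint. A quick check, not from the thread, shown in Python because PHP's json_decode silently returns NULL and hides the error:

```python
import json

# Key:value pairs inside [ ] -- the shape used in the docs' sample -- are invalid.
invalid = '{"extended_assigned_to": ["name": "Jane", "primary": false]}'
# The same content inside { } parses fine.
valid = '{"extended_assigned_to": {"name": "Jane", "primary": false}}'

try:
    json.loads(invalid)
    invalid_parsed = True
except json.JSONDecodeError:
    invalid_parsed = False

data = json.loads(valid)
print(invalid_parsed)                        # False
print(data["extended_assigned_to"]["name"])  # Jane
```

This mirrors the correctJSON fix above: turning the offending [ ] into { } is exactly what makes the document parse.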
common-pile/stackexchange_filtered
How to email query output I have a basic query, and I'd like to email the results. How can I do this at the query level? So if my query is: SELECT Store_Id, Paid_Out_Amount, Paid_Out_Comment, Paid_Out_Datetime, Update_UserName FROM Paid_Out_Tb WHERE (Store_Id = 1929) OR (Paid_Out_Amount > 50) AND (Paid_Out_Datetime BETWEEN CONVERT(DATETIME, '2012-06-01 00:00:00', 102) AND CONVERT(DATETIME, '2012-06-30 00:00:00', 102)) How would I email the output? I have a procedure to send email via SMTP and the parameters are @From, @To, @Subject and @body... which works... How would I make the body the outcome of the query? SET QUOTED_IDENTIFIER OFF GO SET ANSI_NULLS ON GO CREATE PROCEDURE [dbo].[sp_SQLNotify] @From varchar(100) , @To varchar(100) , @Subject varchar(100)=" ", @Body varchar(4000) = "Test" /********************************************************************* This stored procedure takes the above parameters and sends an e-mail. All of the mail configurations are hard-coded in the stored procedure. Comments are added to the stored procedure where necessary. Reference to the CDOSYS objects are at the following MSDN Web site: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cdosys/html/_cdosys_messaging.asp ***********************************************************************/ AS Declare @iMsg int Declare @hr int Declare @source varchar(255) Declare @description varchar(500) Declare @output varchar(1000) --************* Create the CDO.Message Object ************************ EXEC @hr = sp_OACreate 'CDO.Message', @iMsg OUT --***************Configuring the Message Object ****************** -- This is to configure a remote SMTP server. -- http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cdosys/html/_cdosys_schema_configuration_sendusing.asp EXEC @hr = sp_OASetProperty @iMsg, 'Configuration.fields("http://schemas.microsoft.com/cdo/configuration/sendusing").Value','2' -- This is to configure the Server Name or IP address. 
-- Replace MailServerName by the name or IP of your SMTP Server. EXEC @hr = sp_OASetProperty @iMsg, 'Configuration.fields("http://schemas.microsoft.com/cdo/configuration/smtpserver").Value', '<IP_ADDRESS>' -- Save the configurations to the message object. EXEC @hr = sp_OAMethod @iMsg, 'Configuration.Fields.Update', null -- Set the e-mail parameters. EXEC @hr = sp_OASetProperty @iMsg, 'To', @To EXEC @hr = sp_OASetProperty @iMsg, 'From', @From EXEC @hr = sp_OASetProperty @iMsg, 'Subject', @Subject -- If you are using HTML e-mail, use 'HTMLBody' instead of 'TextBody'. EXEC @hr = sp_OASetProperty @iMsg, 'TextBody', @Body EXEC @hr = sp_OAMethod @iMsg, 'Send', NULL -- Sample error handling. IF @hr <>0 select @hr BEGIN EXEC @hr = sp_OAGetErrorInfo NULL, @source OUT, @description OUT IF @hr = 0 BEGIN SELECT @output = ' Source: ' + @source PRINT @output SELECT @output = ' Description: ' + @description PRINT @output END ELSE BEGIN PRINT ' sp_OAGetErrorInfo failed.' RETURN END END -- Clean up the objects created. EXEC @hr = sp_OADestroy @iMsg PRINT 'Mail Sent!' GO SET QUOTED_IDENTIFIER OFF GO SET ANSI_NULLS ON GO This is the procedure I'm using to send the mail... which works... I just want to add a spot in it to send the results of the query above it... Can this be done easily within the procedure? Do you want to send the entire query output to a single person, or do you want to send each line to someone different? I would like to send the entire output to a single person... It's usually only 5 lines of output... The problem is it's Server 2000... which means the sp_send_dbmail procedure will not work... Not sure how to accomplish it. Use the SQL Server Powershell pack. An example (with detailed explanation) of using it to obtain output is here. (The above is taken from this SO answer, but to clarify something s/he says: SQL Server 2008 client components are required (Express should be fine), but it can work with SQL Server 2000 databases (source).) 
This seems more complicated than I thought it would be. My initial thought was that there had to be some sort of backwards compatible send mail procedure... Or at least a way to get the query results and email them from the procedure... right now I have a procedure that emails fine, just does not use the results... You can also use a variable for a direct loop concatenation. See https://stackoverflow.com/a/4447564/1180926 (although you would use tab and newline delimiters instead of HTML code). You would then just change your query accordingly, store it in @Body, and you're done! Can I just call my procedure and assign @body to the query? No, you have to use that string trick. You can only assign the last line of a query to a string - the variable concatenation is a hack to get around that.
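The tab/newline concatenation the last answer points to can be sketched outside T-SQL. A hypothetical illustration (Python, with invented rows) of the shape the @Body string ends up with:

```python
# Hypothetical result rows from the Paid_Out_Tb query above (invented data).
rows = [
    (1929, 75.00, "register shortage", "2012-06-03", "jsmith"),
    (1929, 52.50, "supplier refund", "2012-06-10", "adoe"),
]
header = ("Store_Id", "Paid_Out_Amount", "Paid_Out_Comment",
          "Paid_Out_Datetime", "Update_UserName")

# Tab-separated columns, newline-separated rows -> one string for @Body.
body = "\n".join("\t".join(str(col) for col in line)
                 for line in (header, *rows))
print(body)
```

In T-SQL the same accumulation is done by repeatedly appending to a variable over the result set (SELECT @Body = @Body + ...), as the linked answer shows.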
common-pile/stackexchange_filtered
Symfony 4 - Impossible to submit my form modal (error with path) In my Symfony 4 project I have a button that I use to open a modal popup, which has a form, like this: But the problem is that when I submit the form, I have this error: No route found for "POST /validation/%7B%7BdeleteLink%7D%7D" (from "http://<IP_ADDRESS>:8000/validation/absences") In my Controller, I have my index function, which passes my form: /** * @Route("/validation/absences", name="validation_index") */ public function index(PaginatorInterface $paginator, Request $request, AbsenceService $absenceService) { $refusAbsence = new Absence(); $formRefus = $this->createForm(RefusAbsenceType::class, $refusAbsence); $formRefus->handleRequest($request); //.... render, etc } And in my Twig, I display my form like this: {% import 'macro/macro.html.twig' as macro %} {% block body %} //... <a class="btn btn-secondary ml-1" href="#" data-target="#refuserAbsenceModal{{demande.id}}" data-toggle="modal"> <i class="fas fa-times"></i> </a> {{ macro.create_form_modal( 'refuserAbsenceModal'~demande.id, "Refuser l'absence ?", formRefus, path('validation_refuser',{'id': demande.id}) ) }} And in my macro.html.twig: {%- macro create_form_modal(id, title, form, deleteLink) -%} {% filter spaceless %} <div id="{{ id }}" class="modal fade" role="dialog"> <div class="modal-dialog"> <div class="modal-content"> {{form_start(form, {'action': "{{deleteLink}}"})}} <div class="modal-header"> <h4 class="modal-title">{{title}}</h4> <button class="close" data-dismiss="modal" type="button">&times;</button> </div> <div class="modal-body"> {{form_widget(form)}} </div> <div class="modal-footer"> <button type="submit" class="btn btn-primary">Valider</button> <button class="btn btn-secondary" data-dismiss="modal" type="button">Annuler</button> </div> {{form_end(form)}} </div> </div> </div> {% endfilter %} {%- endmacro -%} It has to call my controller function: /** * Refuser une demande d'absence * * 
@Route("validation/absences/refuser/{id}", name="validation_refuser") * @Method({"POST"}) * * @param Absence $absence * @return void */ public function refuser(Absence $absence) { dd($absence); } Is the problem coming from the fact that I declare my form in a function, and that when I submit it, I ask it to point to another function? Will I have to do everything in the same function? Error is here: {{form_start(form, {'action': deleteLink})}} deleteLink is a variable and there's no need to enclose it with quotes. And I suppose we should close this as offtopic. Oh okay, thank you! But another problem: how can I access my form data now? Because in my "refuser" function, I don't have the form :s Your form data is in Request. Provide the request as the first variable to the refuser() method. Or probably $absence has what you need. Dump variables and see what you have in them. Okay, I used $request->request->get('refus_absence'). Thanks for your help! :)
common-pile/stackexchange_filtered
"Wire" multiple Deployments to one Service in Kubernetes I have several different Deployments. Deployment A: export port 3333 Deployment B: export port 4444 I want to use a single Service (with LoadBalancer type) to export them. Service Main: export port 4545 -> Route to Deployment A's port 3333 export port 5555 -> Route to Deployment B's port 4444 The documentation says that you can expose multiple ports on one Service, but it doesn't say whether it works for multiple Deployments. Since Services use a selector to select Deployments, in my case more than one Deployment would come from the selection result. I don't think that is possible today, but it seems like a potentially useful feature. I filed a feature request.
common-pile/stackexchange_filtered
How to convert jiffies to percentages from virt plugin I have installed collectd on our compute node and enabled the virt plugin inside it. We push the result to InfluxDB. We try to get the instance CPU utilization displayed inside Grafana; however, we can see the output is using jiffies. Can anyone advise how we can correctly display the graph in percentages instead of jiffies? Thanks. Just use a visualisation which does a derivative
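The one-line answer can be unpacked a little. The plugin reports a cumulative CPU-time counter, so percentage utilisation is the rate of change of that counter over wall-clock time, which is what a derivative transform in Grafana/InfluxDB computes. A hedged sketch of the arithmetic (plain Python, assuming the conventional 100 jiffies per second; check your system's USER_HZ):

```python
USER_HZ = 100  # jiffies per second -- an assumption, not from the thread

def cpu_percent(jiffies_prev, jiffies_now, seconds_elapsed):
    """Utilisation of one CPU over a sampling window, computed from two
    readings of a cumulative jiffy counter."""
    used_seconds = (jiffies_now - jiffies_prev) / USER_HZ
    return 100.0 * used_seconds / seconds_elapsed

# 500 jiffies consumed over a 10 s window -> 50% of one CPU
print(cpu_percent(1000, 1500, 10.0))  # 50.0
```

In an InfluxDB-backed Grafana panel, the analogous step is applying non_negative_derivative() to the jiffies series and scaling by the same constant.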
common-pile/stackexchange_filtered
Error loading openNLP Spanish model POS tagger in R I am trying to run a POS tagger function for Spanish text using R's openNLP package. I previously ran the same function using a model for English text, but it seems there is no official model for Spanish POS tagging in the openNLP page (http://opennlp.sourceforge.net/models-1.5/). I found a previous question that pointed to a Spanish POS tagging model (Java OpenNLP version 1.5.3. Spanish models), which I tried to use, but I got the following error message when I tried to use any of the models available there: word_token_annotator <- Maxent_Word_Token_Annotator(model = 'opennlp-es-perceptron-pos-es.bin') However: Error in .jnew("opennlp.tools.tokenize.TokenizerModel", .jcast(.jnew("java.io.FileInputStream", : java.lang.IllegalArgumentException: opennlp.tools.util.InvalidFormatException: The TokenizerME cannot load a model for the POSTaggerME! I suppose that the binaries available in that github repo are not in the format that is expected by "Maxent_Word_Token_Annotator". Do you know how I could solve this issue, or are you aware of any other Spanish POS tagging model that I can plug into my code? Thank you very much in advance for your help.
common-pile/stackexchange_filtered
How can I make a histogram for each item I'm trying to build a histogram for each item, but when I try to do that, I have only one histogram. This is the code I have for indicateur in indicator["Series Code"]: filtre_indic=data_sc['Indicator Code']==indicateur data_test = data_sc.loc[filtre_indic]['2000'] data_test.hist() Can someone help me Does this answer your question? python plot multiple histograms No, when I try it, the solution from that post is not working How does your data look? 
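One common reason a loop like the one above yields a single histogram is that every hist() call draws into the same active figure. A hedged sketch (not the asker's data; a small invented frame) that creates one figure per indicator:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Invented stand-in for data_sc: two indicator codes with values for 2000.
data_sc = pd.DataFrame({
    "Indicator Code": ["A", "A", "B", "B", "B"],
    "2000": [1.0, 2.0, 3.0, 4.0, 5.0],
})

figures = []
for indicateur in data_sc["Indicator Code"].unique():
    fig, ax = plt.subplots()  # a fresh figure per indicator...
    subset = data_sc.loc[data_sc["Indicator Code"] == indicateur, "2000"]
    subset.hist(ax=ax)        # ...so each histogram lands on its own axes
    ax.set_title(str(indicateur))
    figures.append(fig)

print(len(figures))  # one figure per distinct indicator code
```

Alternatively, pandas can do the splitting itself: data_sc['2000'].hist(by=data_sc['Indicator Code']) produces one subplot per group in a single call.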
common-pile/stackexchange_filtered