Using VBA to Loop Through Sheets with a Specific Name I am trying to build a macro to run through a file and open a userform for every sheet that has the name "File-n" where n is an integer. I've tried this code, but it doesn't work: Dim WS As Worksheet Dim indx As Integer For Each WS In ThisWorkbook.Worksheets WS.Activate If WS.Name = "File-" & indx Then WS.Select userform1.show End If Next WS This macro doesn't recognize any sheets. The code below works, but I was hoping to clean it up if possible. I don't like listing out the potential number of sheets up to thirty. If WS.Name = "File-0" or WS.Name = "File-1" or WS.Name = "File-2" or ... Then Possibly an easier approach - do you have any sheets with names starting with "File-" that you don't want to consider? @BigBen No, all sheets with the name "File-" would be relevant in this case. Then just use InStr (see the posted answer), or Left even. Try to avoid .Activate and .Select if not expressly needed for user communication - cf. How to avoid using Select in Excel VBA? You can use Like as @BigBen mentioned in a comment, or use InStr like below: Option Explicit Sub test() Dim WS As Worksheet Dim indx As Integer For Each WS In ThisWorkbook.Worksheets WS.Activate If InStr(1, LCase(WS.Name), "file") Then WS.Select UserForm1.Show End If Next WS End Sub See if the code below helps Dim WS As Worksheet For Each WS In ThisWorkbook.Worksheets WS.Activate If WS.Name Like "File-*" Then WS.Select userform1.show End If Next WS This would compare each Worksheet against an incremental indx number, so I doubt it would work as expected... @Xabier thank you for pointing this out. I see where the mistake is! I have corrected the code. 
Add one more loop for indx Dim WS As Worksheet Dim indx As Integer For i = 1 to 30 indx = i For Each WS In ThisWorkbook.Worksheets WS.Activate If WS.Name = "File-" & i Then userform1.show End If Next WS Next i If you were to do this approach, the outer loop should be the worksheet loop, and the inner loop should be i. But... it's really inefficient and there are way better approaches.
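For readers outside VBA, the root cause above (comparing each name against "File-" & indx while indx is never assigned) and the wildcard fix can be sketched in Python; the sheet names below are made up for illustration:

```python
import re

def file_sheets(sheet_names):
    """Keep only names of the form 'File-n' where n is an integer,
    mirroring the VBA test WS.Name Like "File-*" (tightened to digits)."""
    pattern = re.compile(r"^File-\d+$")
    return [name for name in sheet_names if pattern.match(name)]

names = ["File-0", "File-12", "Summary", "File-x", "file-3"]
print(file_sheets(names))  # ['File-0', 'File-12']
```

The broken loop is equivalent to testing `name == "File-" + str(indx)` with `indx` stuck at its default value, which matches at most one sheet; matching a pattern instead covers every `File-n` without listing thirty alternatives or adding a second loop.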
common-pile/stackexchange_filtered
select multiple option then display option related value on next select box I have a multi-select drop-down box which is filled with a list of States, and I want that if I select multiple States, the cities of the selected States are displayed in another drop-down box. For example, if I select DELHI and UTTAR PRADESH, then all cities of both States are displayed in the next drop-down box. Thanks in advance. Is it possible to show what you have already tried to solve your problem? Try searching for chained selects - this is SUCH a FAQ. look to the right ---> possible duplicate of Chained Select with NO jQuery or Ajax possibly a duplicate of Dynamic chained select box Simple question: if I select 2 or 3 or more States in the multiple-select drop-down box, then the cities of the selected States should be displayed in another drop-down box. possible duplicate of Unable to populate chained dropdown list with Ajax and Javascript
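None of the comments above show code, so here is a minimal sketch of the chained-select logic in plain JavaScript; the state-to-city table and the element wiring are hypothetical:

```javascript
// Hypothetical lookup table: state name -> its cities
const citiesByState = {
  "Delhi": ["New Delhi", "Dwarka"],
  "Uttar Pradesh": ["Lucknow", "Kanpur", "Agra"],
  "Goa": ["Panaji"],
};

// Merge the city lists of every selected state, ignoring unknown names
function citiesFor(selectedStates) {
  return selectedStates.flatMap((state) => citiesByState[state] || []);
}

// In a browser you would read the multi-select and rebuild the city box:
//   const chosen = [...stateSelect.selectedOptions].map((o) => o.value);
//   citySelect.replaceChildren(...citiesFor(chosen).map((c) => new Option(c)));

console.log(citiesFor(["Delhi", "Uttar Pradesh"]));
```

The merge step is the whole trick: one `change` handler on the state box re-runs `citiesFor` and repopulates the second select, with no jQuery or Ajax required as long as the lookup table ships with the page.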
common-pile/stackexchange_filtered
SQL rolling period of 20 days I was wondering what is the best way to find, if over any continuous period of 20 days, the number of events exceeded 10. I am trying to write an exception report but cannot figure out a logic besides using a loop. I have included the schema for the table. ID |StartDate |EndDate |Area ---------+---------+---------+--------- 12 |01-02-2013 |05-02-2013 |A12 14 |06-02-2013 |12-02-2013 |A13 15 |11-02-2013 |19-02-2013 |A14 19 |16-02-2013 |26-02-2013 |A15 21 |21-02-2013 |05-03-2013 |A16 24 |01-02-2013 |05-02-2013 |A17 26 |06-02-2013 |12-02-2013 |A18 28 |11-02-2013 |19-02-2013 |A19 30 |16-02-2013 |26-02-2013 |A20 33 |21-02-2013 |05-03-2013 |A21 I have got a partial solution: ' 'DECLARE @START AS DATE DECLARE @END AS DATE SET @START= '20130201' SET @END= '20130204' SELECT n.EVENT_DATE, (SELECT SUM(SESSIONS) AS Total_Sessionsn FROM dbo.[Session] WHERE (EVENT_DATE BETWEEN DATEADD(d,-20,n.EVENT_DATE) AND n.EVENT_DATE) GROUP BY AREA_CODE, SITE_CODE) AS Sessions FROM [dbo].[Session] AS n WHERE EVENT_DATE BETWEEN @START AND DATEADD(d,20+dbo.HolidayCount(@END,DATEADD(d,20,@END)) ,@END )' Got IT. For anyone interested in finding the date 20 working days from now, I have added the solution below. :) (I feel stupid!) /****** Script for SelectTopNRows command from SSMS ******/ SELECT TOP 1 [SK_CAL] ,[CAL_DATE] ,[CAL_CODE] ,[CAL_VALUE] ,[CAL_OPEN] ,[CAL_CLOSE] FROM (SELECT TOP 20 [SK_CAL] ,[CAL_DATE] ,[CAL_CODE] ,[CAL_VALUE] ,[CAL_OPEN] ,[CAL_CLOSE] FROM [tempdb].[dbo].[tbCalendar] WHERE (CAL_DATE>'20130201' AND CAL_VALUE=1) ORDER BY SK_CAL ASC) n ORDER BY SK_CAL DESC Does each ID number identify an event? How do you define start date of "continious period": today or event date? Start of any event -19 days would be one rolling 20 day period and similarly last day + 19 days would be another 20 day period. Still not sure what you're asking. These are multiple events, potentially concurrent, right? 
Are you looking for the number of simultaneously occurring events at any given time? My problem currently is how to do this for 20 working days instead of 20 days. The HolidayCount function idea was to count total holidays between start and end and add it to the end date. That doesn't work. :( See if this is what you're after. I start with a CTE that gets the earliest StartDate and latest EndDate to define the range. Then another CTE extrapolates all the dates between. Then for each date in the range I see how many events were active in the 20 days prior to and including that date. I determine that by checking if the event start or end dates occur within the rolling 20 day period or if the event start and end dates encapsulate the entire rolling 20 day period. ;with StartEnd as ( select min(StartDate) StartRange, max(EndDate) EndRange from @Events ), DatesInRange AS ( SELECT StartRange AS RangeDate, dateadd(d,-19,StartRange) Rolling20Start from StartEnd UNION ALL SELECT DATEADD(DAY, 1, RangeDate), DATEADD(DAY, -18, RangeDate) FROM DatesInRange, StartEnd WHERE RangeDate < EndRange ) select RangeDate, count(*) from DatesInRange left join @Events e on e.StartDate between Rolling20Start and RangeDate or e.EndDate between Rolling20Start and RangeDate or (e.StartDate < Rolling20Start and e.EndDate > RangeDate) group by RangeDate OPTION (MAXRECURSION 0)
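The rolling check itself is independent of SQL; the sketch below restates it in Python over (start, end) pairs mirroring the StartDate/EndDate columns. The dates and the exception threshold are illustrative only:

```python
from datetime import date, timedelta

def active_counts(events, window_days=20):
    """For each day between the earliest start and latest end, count the
    events overlapping the window of `window_days` days ending that day."""
    lo = min(start for start, _ in events)
    hi = max(end for _, end in events)
    counts = {}
    day = lo
    while day <= hi:
        window_start = day - timedelta(days=window_days - 1)
        counts[day] = sum(1 for s, e in events if s <= day and e >= window_start)
        day += timedelta(days=1)
    return counts

events = [
    (date(2013, 2, 1), date(2013, 2, 5)),
    (date(2013, 2, 6), date(2013, 2, 12)),
    (date(2013, 2, 11), date(2013, 2, 19)),
]
# Days whose trailing 20-day window holds more than N events (N=10 in the question)
flagged = [d for d, n in active_counts(events).items() if n > 2]
```

Restricting this to working days would mean stepping through a calendar table (like the tbCalendar query in the question) instead of a fixed timedelta, which is the same substitution the SQL answer would need.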
common-pile/stackexchange_filtered
Is there a way to set an Enumerated Property in a class to All available enums? First things first, I had no idea what to title this question - I'm even confused how to state it. Now for the question. Let's take the System.IO.FileSystemWatcher class where you set its NotifyFilter property: this.FileSystemWatcher1.NotifyFilter = NotifyFilters.Attributes | NotifyFilters.CreationTime | NotifyFilters.FileName | NotifyFilters.LastAccess | NotifyFilters.LastWrite | NotifyFilters.Security | NotifyFilters.Size; That's quite a bit of code to set a single property. Inspecting NotifyFilter, it is an enumeration. Is there a 'lazy' or 'shortcut' way to set all these properties at once? I know it's not necessarily needed, but my curiosity is piqued. this.FileSystemWatcher1.NotifyFilter = <NotifyFilters.All> ? I prefer to define this sort of thing explicitly, e.g. in a field called All. You could then exclude specific enum values by using ~. @GregRos Can you somehow demonstrate what you mean? You could always do something like this, NotifyFilter ret = 0; foreach(NotifyFilter v in Enum.GetValues(typeof(NotifyFilter))) { ret |= v; } I don't know of a better way, unfortunately. But you could always throw that in a generic utility method. private static T GetAll<T>() where T : struct, IConvertible { if (!typeof(T).IsEnum) { throw new NotSupportedException(); // You'd want something better here, of course. } long ret = 0; // you could determine the type with reflection, but it might be easier just to use multiple methods, depending on how often you tend to use it. foreach(long v in Enum.GetValues(typeof(T))) { ret |= v; } return (T)ret; } Just an FYI, you could say foreach(NotifyFilter v in Enum.GetValues(typeof(NotifyFilter))) - it saves the cast inside the foreach body. Good one. I am working on some major personal projects, and would like to keep my application fast. It didn't occur to me to use generic methods to nail this problem. 
Constrain your T to where T: struct, IConvertible and check if (!typeof(T).IsEnum) { throw ... } (T)ret doesn't compile for me -- but I'm stuck on c# 3.0. Does it work in 4.0/4.5? @dbc That is disallowed in any version of C#. There is no conversion from int to a type argument T (no matter what constraints T carries). If you assume that the underlying type of the enum T is int, maybe you should check that where you check typeof(T).IsEnum. Otherwise you will get unexpected behavior if some enum uses long as its underlying type. @Noobgrammer Maybe it is OK to set all flags, even those not used or named. In that case you can just do this.FileSystemWatcher1.NotifyFilter = (NotifyFilters)(~0); or (equivalently) this.FileSystemWatcher1.NotifyFilter = (NotifyFilters)(-1);. But be sure to check if the underlying type of NotifyFilters is int (it is). @JeppeStigNielsen thanks, good point. I realized that as I had to leave. I was going to add in the support for long, but I figured this gives the gist to people so I left it up. I assume ret |= (v; is a typo? @Blorgbeard that's what I get for answering in a hurry. Thanks for pointing that out! Fixed. There isn't a way without writing your own method - as correctly answered elsewhere. If the enum was yours to change you could add a new value All = ~0 If the enumeration has the Flags attribute applied on it you can add the numbers equivalent to the desired enum members and then assign it. In the example in your question, sum up 1+4+8+16+32+64+256 = 381, then you can write this.FileSystemWatcher1.NotifyFilter = (NotifyFilters) 381; Of course this takes away the readability of an enum and causes serious maintenance issues, but as you said, it's for the lazy: public class Program { [Flags] enum Enum:int { a = 1, b = 2 } static void Main(string[] args) { Enum x = (Enum) 3; Console.WriteLine(x.ToString()); } } Output: a,b UPDATE: I almost forgot. 
You can of course pass this string to Enum.Parse and get the desired value: this.FileSystemWatcher1.NotifyFilter = (NotifyFilters) Enum.Parse(typeof(NotifyFilters), "Attributes,CreationTime,FileName,LastAccess,LastWrite,Security,Size"); That's pretty freaky. Didn't know you could do this It still freaks me out, but as you said: curiosity and of course laziness. I can imagine this landing in spaghetti code, and the next developer working on this jumping from a 27-floor building Exactly. I myself hate this, but it comes in handy in some tight spots. Writing a generic utility to mask together all the values in a c# enum turns out to be much more difficult than one would imagine, because: the underlying type of an enum can be byte, short, int or long, signed or unsigned; there is no way to cast an object directly to a generic enum, perhaps because there's no enum constraint built into c#, so one must box instead, then unbox; and all the enum utilities date from c# 1.0 and so are rather crufty. This is the best I could do. I made use of the following: Cast Int to Generic Enum in C# Enum type constraints in C# Would this be easier in c++/CLI? /// <summary> /// Contains generic utilities for enums, constrained for enums only. /// </summary> public sealed class EnumHelper : Enums<Enum> { private EnumHelper() { } } /// <summary> /// For use by EnumHelper, not for direct use. 
/// </summary> public abstract class Enums<TEnumBase> where TEnumBase : class, IConvertible { private static void ThrowOnEnumWithoutFlags<TEnum>() where TEnum : struct, TEnumBase { var attributes = typeof(TEnum).GetCustomAttributes(typeof(FlagsAttribute), false); if (attributes.Length == 0) { throw (new ArgumentException("The generic argument [<T>] must be an enumeration with the [FlagsAttribute] applied.", "T: " + typeof(TEnum).FullName)); } } public static TEnum GetAll<TEnum>() where TEnum : struct, TEnumBase { ThrowOnEnumWithoutFlags<TEnum>(); var underlyingType = Enum.GetUnderlyingType(typeof(TEnum)); if (underlyingType == typeof(ulong)) { ulong value = 0; foreach (var v in Enum.GetValues(typeof(TEnum))) // Not sure I need the culture but Microsoft passes it in Enum.ToUInt64(Object value) - http://referencesource.microsoft.com/#mscorlib/system/enum.cs value |= Convert.ToUInt64(v, CultureInfo.InvariantCulture); return (TEnum)Enum.ToObject(typeof(TEnum), value); } else { long value = 0; foreach (var v in Enum.GetValues(typeof(TEnum))) // Not sure I need the culture but Microsoft passes it in Enum.ToUInt64(Object value) - http://referencesource.microsoft.com/#mscorlib/system/enum.cs value |= Convert.ToInt64(v, CultureInfo.InvariantCulture); return (TEnum)Enum.ToObject(typeof(TEnum), value); } } I have tested on byte, sbyte, short, int, long, ushort, uint and ulong types. Update simplified as per hvd's suggestion. +1 You do need separate handlers for long and ulong to handle values that are out of long's range, but for all other types, you can use the long version: simply use Convert.ToInt64 and Enum.ToObject to perform the conversions. (And for consistency, you can use Convert.ToUInt64 and Enum.ToObject for the ulong version.)
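The same "no built-in All" gap exists in Python's enum module, and the OR-everything idiom from the accepted answer carries over directly; the flag names below are invented for illustration:

```python
from enum import Flag, auto
from functools import reduce
from operator import or_

class Notify(Flag):  # hypothetical stand-in for .NET's NotifyFilters
    FILE_NAME = auto()
    ATTRIBUTES = auto()
    SIZE = auto()
    LAST_WRITE = auto()

def all_flags(flag_cls):
    """OR together every named member of a Flag enum."""
    return reduce(or_, flag_cls)

ALL = all_flags(Notify)
print(ALL.value)  # 15, i.e. all four defined bits set
```

Unlike the C# `(NotifyFilters)(-1)` trick, this sets only the named bits, so unused or undefined bit positions stay clear, which matches the "define All explicitly" preference voiced in the comments.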
common-pile/stackexchange_filtered
How can I fix or reset a broken plasma-x11 login? I was a happy Plasma-X11 user on ubuntu 22.04 until logging in started crashing the UI. Scenario that breaks every time: On the login screen, I select my username, select the plasma-x11 session and log in with my password. The screen goes black, no mouse cursor, stays black and borks the UI. I can recover by ssh and sudo systemctl restart gdm. But Plasma still works for other logins on the same system, i.e. selecting the plasma-x11 session before logging in. And I can log in with my broken user id using other sessions, e.g. "ubuntu". I've gone through the syslog and am not finding any glaring differences between events when I log in with my borked id and a working id. Is there a way to reset my plasma config for my user id? Is it trying to recover network mounts or open applications that are hanging up? EDIT: included syslog of the point where the failed login occurs Please include your logfiles By any chance, can you log in to the guest session? I don't have guest sessions enabled, but I do have two other user names that both work for logging into plasma-x11. Only my main id fails. Looking at the xorg and syslog logs, I can't see where the problem is. All the logs show some problems, but those so far look innocent. I haven't found a place to increase logging verbosity -- yet.
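The "reset my plasma config" question is never answered in the thread. One commonly suggested approach (unverified here, and the file names are the usual KDE ones rather than anything confirmed above) is to move the per-user Plasma config files aside from an ssh session so the next login starts from defaults:

```shell
#!/bin/sh
# Move likely Plasma config files out of a config dir into a dated backup,
# so they can be restored if resetting doesn't help.
backup_plasma_config() {
    conf_dir="$1"
    backup_dir="$conf_dir/plasma-backup-$(date +%Y%m%d)"
    mkdir -p "$backup_dir"
    for f in plasmarc plasmashellrc plasma-org.kde.plasma.desktop-appletsrc kwinrc; do
        if [ -e "$conf_dir/$f" ]; then
            mv "$conf_dir/$f" "$backup_dir/"
        fi
    done
}

# Typical use, from ssh while logged out of the Plasma session:
#   backup_plasma_config "$HOME/.config"
```

Because only the one user's login breaks while other accounts work, per-user state under ~/.config is the natural suspect; backing the files up rather than deleting them keeps the experiment reversible.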
common-pile/stackexchange_filtered
Unable to make resizeable div with default width inside webkit-box flex container I want to get a div container with a default width (400px). If the screen size is smaller than that (smaller than 400px in width), then the div must shrink. To do this, I'm using the max-width css property: <div class="parent_2"> Text </div> .parent_2 { max-width: 400px; border: 1px solid gray; } And it works fine. However, in reality, my div is inside another third-party flex component. So, in practice I have code which looks like this: .parent_1 { display: -webkit-box; display: -ms-flexbox; display: flex; } .parent_2 { max-width: 400px; border: 1px solid gray; } <div class="parent_1"> <div class="parent_2"> Text </div> </div> And it stops working - my own inner div shrinks just to the width of the text. This is not what I want. I should add that I'm not able to fix this third-party container and I am responsible just for this inner div. So, how can I fix it? I want it to look just like this, but without tweaking or removing the parent (class=parent_1) element: .parent_2 { max-width: 400px; border: 1px solid gray; } <div class="parent_2"> Text </div> min-width:0 to parent_2 min-width:0 does not work You mark it as duplicate, but the solution you provide does not work do you want your content to shrink past the text size, or do you want it to also be more than the text size and start shrinking at 400px? add width:100% ... (will update the duplicate) where should I add it? you mark it as duplicate or try to close before even checking I checked and wrongly understood your question, but I am still here to reply to your comment and will rectify my mistake by providing the correct duplicate (we all make mistakes, no?) .. you have to add it to parent_2 in addition to width:100% you can also use flex-grow: 1;
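Collecting the scattered comment advice into one place, the suggested fix (a sketch, since the thread never shows it assembled) is to let the inner div claim the parent's width while capping it:

```css
.parent_2 {
  width: 100%;      /* grow to fill the flex parent instead of shrinking to the text */
  max-width: 400px; /* but never exceed the default width */
  border: 1px solid gray;
}
```

flex-grow: 1 on .parent_2 is the alternative mentioned at the end of the thread; min-width: 0 matters only when the content itself is wider than the container, since flex items default to an automatic minimum size based on their content.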
common-pile/stackexchange_filtered
Sub-setting data table using & and grepl I've got a data set with column names like: names(d) [1] "Code" "LX(RI)" "LX(VO)" "LX(MV)" "LX(WC189)" "LX(WC035)" [7] "NX(RI)" "NX(VO)" "NX(MV)" "NX(WC189)" "NX(WC035)" "AX(RI)" [13] "AX(VO)" "AX(MV)" "AX(WC189)" "AX(WC035)" "SX3I(RI)" "SXI(VO)" [19] "SXI(MV)" "TX(RI)" "TX(VO)" "TX(MV)" "TX(WC189)" "TX(WC035)" Each column has several thousand rows associated with it. What I want to do is use grepl to subset the data table's columns based on those ending with RI AND retaining the Code column. Currently I've worked out how to subset all the RI columns into a new data.table, but I can't figure out how to include the Code column. I have currently: RI <- d[, grepl("\\(RI", names(d)), with = FALSE] Which gives me what I want: names(RI) [1] "LX(RI)" "NX(RI)" "AX(RI)" "SX3I(RI)" "TX(RI)" I've been trying (note that I have included &Code): RI <- d[, grepl("\\(RI&Code", names(d)), with = FALSE] Which I want to return a data table with the following columns: [1] "LX(RI)" "NX(RI)" "AX(RI)" "SX3I(RI)" "TX(RI)" "Code" The above is my desired output. The code however does nothing and returns an empty data table. A couple of questions: Can I use & in grepl? If so, is my example using & incorrect? If not, are there any suggestions on how to subset for both RI columns and the Code? Try this ab <- c("Code","LX(RI)","LX(VO)","LX(MV)","TX(RI)","NX(RI)","NX(RI)") ab[grepl("Code|RI",ab)] [1] "Code" "LX(RI)" "TX(RI)" "NX(RI)" "NX(RI)" names(df) <- c("Code","LX(VO)","TX(RI)","NX(RI)") df <- df[,grepl("Code|RI",names(df))]
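For readers more at home outside R, the two patterns behave the same way in any regex engine; here is the comparison in Python, with the column names copied from the question:

```python
import re

names = ["Code", "LX(RI)", "LX(VO)", "NX(RI)", "SX3I(RI)", "TX(MV)"]

# The broken attempt looks for the literal text "(RI&Code" inside one name,
# so nothing matches: '&' is an ordinary character, not an AND operator.
broken = [n for n in names if re.search(r"\(RI&Code", n)]

# Alternation with '|' means "either pattern", which keeps Code plus the RI columns.
keep = [n for n in names if re.search(r"Code|\(RI", n)]

print(broken)  # []
print(keep)    # ['Code', 'LX(RI)', 'NX(RI)', 'SX3I(RI)']
```

The "AND" the question wanted is really an OR over column names: each name only has to match one of the two alternatives for the column to be kept, which is exactly what `grepl("Code|RI", names(df))` does in the answer.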
common-pile/stackexchange_filtered
Nuxt site on GitLab Pages blocked due to MIME type I've seen other similar issues here, but as I am using GitLab pages for the first time, I wasn't able to find any setting to make changes there. The problem I am having is the errors on Firefox: The resource from “https://MYNAME.gitlab.io/_nuxt/08d802b84f0f5505a72c.js” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff). and on Chrome: The resource from “https://MYNAME.gitlab.io/_nuxt/08d802b84f0f5505a72c.js” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff). This is returning a blank page as the Nuxt app cannot run. I read about Access-Control-Allow-Origin: * but no idea how to set this up on GitLab page. Thanks for your help! as an update, it looks like the page works without error when reached from a custom domain, while the error persists if accessed directly via MYNAME.gitlab.io
common-pile/stackexchange_filtered
Change in executable (*.exe) size while transferring file from one PC to another I am encountering a strange issue with an executable file. When I transfer this executable from PC A to PC B using IP messenger its size changes. Functionally, it still behaves in the same manner. Again, when I further transfer the file from PC B to PC C, the executable reverts back to the original size. I tried comparing both these executable files of different sizes using HEX compare and there are quite a lot of bytes that have changed. What could be the reason for this? NOTE: All these systems are using the Windows operating system. Have you updated your virus definitions lately? Also, could you confirm that the hash (like CRC32) changes as well? Differing block sizes could cause a minor change in actual file size, but the contents of the file shouldn't change. @Gleno: Just FYI, but CRC32 is not a real "hash" and is easy to collide. For file integrity, MD5 or SHA are better. @grawity, the probability that two different files output the same CRC32 is low enough. You are right in that there exist better hashes, but CRC32 is a very common check for file integrity. @Gleno: Yes, but only against accidental corruption, not malware. (Unrelated, but interesting: many Microsoft-distributed ISOs have FFFFFFFF as their CRC32.) In the past I have experienced issues with CR/LF -> CR conversion, or ASCII/binary transfer mode conflicts in numerous file transfer contexts. If the virus payload theory doesn't pan out, you might want to go down this path to see if this is happening in your case. If transferring an executable from system A to system B changes it in some way, and transferring it back to system A apparently changes it back, then I'd say it's a common sign of a virus infection. That is, the EXE file is infected. However, on the original system (A) this virus is active, and makes the file size be reported as it was originally. 
However, checking the copied file on a "clean" system (B) you can see the difference. My advice is, upload the EXE file from system B (where the file appears to be bigger) to VirusTotal, which will have it checked with many antiviruses simultaneously, in a matter of minutes. If the file is infected, you'll most probably know it. Can you elaborate as to why a virus would make a file appear larger on a system? Theoretically, I guess I could see how it could affect what a utility might display as a size. Am I missing another possibility? When a virus infects a file, it adds its payload to it, so the file grows. To avoid detection, on an infected system (where the virus is resident and active), it shows the filesize as it was before infection. But if you copy that file to a clean system, you see its true size, which is what it was before it got infected plus the size of the virus payload. Curiosity: Are you aware of how a virus is able to control what size is reported for a file? Yes. I worked in an antivirus company many years ago. Basically you write a filesystem filter driver... Then you can do all kinds of "funny" stuff, e.g. return different file size depending on who's asking, etc. Thanks for the insight! +1, interesting stuff. Sorry for the tangent :-)
common-pile/stackexchange_filtered
/usr/bin/gem:8:in `require': no such file to load -- rubygems I installed and uninstalled ruby 1.8 and 1.9, and installed again, and now I am lost... The 'gem' command doesn't work... This is the only message I get: /usr/bin/gem:8:in `require': no such file to load -- rubygems (LoadError) from /usr/bin/gem:8 gem -v - gives back the same result Description: Ubuntu 10.04.4 LTS which ruby: /usr/bin/ruby which gem /usr/bin/gem ruby -v ruby 1.8.7 (2010-01-10 patchlevel 249) [x86_64-linux] apt-get install rubygems rubygems is already the newest version. rails -v getopt: invalid option -- 'v' Terminating... What do you get when you do this: gem list --local The same thing: /usr/bin/gem:8:in `require': no such file ... have you tried running gem update --system? The same thing: /usr/bin/gem:8 ... something is not good with the command itself, but I can't find how to reinstall it You will have to use apt-get to uninstall, then reinstall ruby. You say you have installed two different versions of ruby (1.8.x, and 1.9.x), so both will have to go. sudo apt-get uninstall ruby1.8 sudo apt-get uninstall ruby sudo apt-get purge ruby1.8 sudo apt-get purge ruby sudo apt-get autoremove --purge Do not worry if the purge and autoremove commands tell you nothing was done. Now do: sudo apt-get install ruby This will get you ruby version 1.9.3 --that's the latest on my box using apt-get. Once that is done, then do: gem list --local and you should get a short list. To get rails do: sudo apt-get install rails Thank you for the answer. Now I get /usr/lib/ruby/1.8/rubygems.rb:9:in require': no such file to load -- thread (LoadError) from /usr/lib/ruby/1.8/rubygems.rb:9 from /usr/bin/gem:8:in require' from /usr/bin/gem:8 apt-get uninstall ruby1.8 - E: Invalid operation uninstall apt-get uninstall ruby - E: Invalid operation uninstall I use the root user. In my case it is something with the installation that goes wrong; it doesn't install ruby properly.
common-pile/stackexchange_filtered
Where are the default methods of interface stored in memory? I have gone through a number of posts, but all seem to answer where the static methods of an interface are stored. However, an interface can have abstract, static and default methods. I know about static and abstract methods. But I am not able to find anything related to default method storage in memory. I might be wrong, but I am of the idea that default methods will be stored in static heap space just like instance methods are stored with the class. But, on top of this as well, I am confused whether default methods are also allocated to stack frames once they are called, considering that the implementing class does not override the implementation of the default method in the interface and there is no diamond problem. I have referred to the links below: Where are methods stored in memory? Where are static methods and static variables stored in Java? There is nothing special for default methods as far as storage in the JVM memory is concerned. Like other class methods, they are part of the method area. I am confused if default methods are also allocated to stack frames once they are called considering that the implementing class does not override the implementation of default method in interface and there is no diamond problem. The stack frames are allocated when methods are invoked, again regardless of the kind of method (static, default, etc). Don't confuse their use during runtime invocation with where the method code (and other class metadata) is stored.
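The answer's point, that a method body lives once with the class metadata and only a stack frame is created per call, has a loose analogue that can be inspected directly in Python (an analogy only, not the JVM):

```python
class Greeter:
    # One function object, stored once on the class, like code in the method area
    def greet(self):
        return "hi"

a, b = Greeter(), Greeter()

# Both instances resolve greet to the same underlying function on the class;
# nothing method-related is copied per instance, only the call creates a frame.
assert a.greet.__func__ is b.greet.__func__ is Greeter.greet
print(a.greet(), b.greet())
```

The same holds for a JVM default method: the bytecode sits with the interface's class metadata, every implementer that doesn't override it resolves to that one copy, and each invocation gets its own stack frame like any other instance method call.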
common-pile/stackexchange_filtered
How to open Chrome Custom Tab in a Dialog in Android Application In my current Android application I have a requirement to open a DialogFragment to display a list of results. Each Result has an associated url that explains the item in more detail. I can open a Chrome Custom Tab with this url, however it closes the DialogFragment and the user has a poor experience. Is it possible that I could open the Crome Custom Tab within its own Dialog? That way my Results DialogFragment should not be closed, and the user can return directly to the results list. I'd be interested in more details for your use-case, Hector. Would you be able to add details to your question (maybe a mock of what you'd like to dialog to look like?). Also, feel free to DM on Twitter (@andreban) Currently, a Custom Tab uses the entire screen, so it's not possible to use it in a similar way to a DialogFragment (which doesn't use the entire width/height of the screen).
common-pile/stackexchange_filtered
Why is my implementation of the Sieve of Atkin overlooking numbers close to the specified limit? My implementation of the Sieve of Atkin either overlooks primes near the limit or includes composites near the limit, while some limits work and others don't. I'm completely confused as to what is wrong. def AtkinSieve (limit): results = [2,3,5] sieve = [False]*limit factor = int(math.sqrt(lim)) for i in range(1,factor): for j in range(1, factor): n = 4*i**2+j**2 if (n <= lim) and (n % 12 == 1 or n % 12 == 5): sieve[n] = not sieve[n] n = 3*i**2+j**2 if (n <= lim) and (n % 12 == 7): sieve[n] = not sieve[n] if i>j: n = 3*i**2-j**2 if (n <= lim) and (n % 12 == 11): sieve[n] = not sieve[n] for index in range(5,factor): if sieve[index]: for jndex in range(index**2, limit, index**2): sieve[jndex] = False for index in range(7,limit): if sieve[index]: results.append(index) return results For example, when I generate primes up to a limit of 1000, the Atkin sieve misses the prime 997, but includes the composite 965. But if I generate up to a limit of 5000, the list it returns is completely correct. Change lim to limit. Of course you must have known that. Since sieve = [False]*limit, the largest index allowed is limit-1. However, on this line if (n <= limit) and (n % 12 == 1 or n % 12 == 5): you are checking if n<=limit. If n==limit then sieve[n] raises an IndexError. Try your algorithm with a small value of limit (e.g. n=50). You'll see this error come up. An easy fix is to use sieve = [False]*(limit+1) The easy fix is a bit wasteful since sieve[0] is never used. So you might think a better fix is to keep sieve = [False]*limit, but fix all your other code by stepping the index on sieve down by one. (E.g., change sieve[n] to sieve[n-1] everywhere, etc.) However, this will force you to do a number of extra subtractions which will not be good for speed. So the easy/wasteful solution is actually probably the better option. 
According to http://en.wikipedia.org/wiki/Sieve_of_Atkin, x should be an integer in [1,sqrt(limit)], inclusive of the endpoints. In your code factor = int(math.sqrt(limit)) and int takes the floor of math.sqrt(limit). Furthermore, range(1,factor) goes from 1 to factor-1. So you are off by 1. So you need to change this to factor = int(math.sqrt(limit))+1 See Fastest way to list all primes below N for an alternative (and faster) implementation of the Sieve of Atkin, due to Steve Krenzel. def AtkinSieve (limit): results = [2,3,5] sieve = [False]*(limit+1) factor = int(math.sqrt(limit))+1 for i in range(1,factor): for j in range(1, factor): n = 4*i**2+j**2 if (n <= limit) and (n % 12 == 1 or n % 12 == 5): sieve[n] = not sieve[n] n = 3*i**2+j**2 if (n <= limit) and (n % 12 == 7): sieve[n] = not sieve[n] if i>j: n = 3*i**2-j**2 if (n <= limit) and (n % 12 == 11): sieve[n] = not sieve[n] for index in range(5,factor): if sieve[index]: for jndex in range(index**2, limit, index**2): sieve[jndex] = False for index in range(7,limit): if sieve[index]: results.append(index) return results Yeah after programming in Java i noticed all these mistakes... I will definitely check out that faster implementation though.
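Putting the two fixes together (sieve sized limit+1, factor rounded up) and additionally making the final loops inclusive of limit itself, here is a self-contained version that can be cross-checked against plain trial division:

```python
import math

def atkin_sieve(limit):
    """Sieve of Atkin: primes up to and including limit."""
    results = [p for p in (2, 3) if p <= limit]
    sieve = [False] * (limit + 1)
    factor = int(math.sqrt(limit)) + 1  # inclusive sqrt bound for x and y
    for i in range(1, factor):
        for j in range(1, factor):
            n = 4 * i**2 + j**2
            if n <= limit and n % 12 in (1, 5):
                sieve[n] = not sieve[n]
            n = 3 * i**2 + j**2
            if n <= limit and n % 12 == 7:
                sieve[n] = not sieve[n]
            if i > j:
                n = 3 * i**2 - j**2
                if n <= limit and n % 12 == 11:
                    sieve[n] = not sieve[n]
    # Eliminate multiples of squares of the remaining candidates
    for index in range(5, factor):
        if sieve[index]:
            for jndex in range(index**2, limit + 1, index**2):
                sieve[jndex] = False
    results.extend(i for i in range(5, limit + 1) if sieve[i])
    return results
```

With these bounds, atkin_sieve(1000) includes 997 and excludes 965, the exact pair that exposed the original off-by-one: 997 = 4·3² + 31² needs j = 31, which `int(math.sqrt(1000))` as an exclusive range bound cut off.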
common-pile/stackexchange_filtered
Date issues with quantmod getSymbols.csv? Im uploading files to R using the quantmod function getSymbols.csv. however, once i have uploaded the files, the dates seem to get lost and all the dates are the same. I use the following code to upload the 100 stock symbols: getSymbols.csv(symbols, env=parent.frame(), dir="E:/data/CData_Files_NB/", return.class = "xts", extension="csv") the do all get uploaded, but for each stock symbol, the dates for the entire history are the same?? [1] "SHFJ" "FSRJ" "RDFJ" "GRTJ" "MTNJ" "SLMJ" "SBKJ" "WHLJ" [9] "NTCJ" "LHCJ" "MRFJ" "SACJ" "PPCJ" "SGLJ" "MMIJ" "IMPJ" [17] "TRUJ" "GFIJ" "SOLJ" "TSHJ" "KAPJ" "VODJ" "NPKJ" "TKGJ" [25] "HARJ" "RMIJ" "SHPJ" "MDCJ" "ZEDJ" "SAPJ" "DSYJ" "NPNJN" [33] "RMHJ" "MURJ" "BGAJ" "ANGJ" "GNDJ" "TFGJ" "AEGJ" "BATJ" [41] "EMIJ" "APNJ" "EQSJ" "NHMJ" "ATTJ" "REMJ" "FPTJ" "STXDIVJ" [49] "CMLJ" "CVHJ" "PIKJ" "MPCJ" "AVIJ" "BLUJ" "CLSJ" "IPLJ" [57] "BVTJ" "INLJ" "RESJ" "VKEJ" "NEDJ" "GPLJ" "STX40J" "HYPJ" [65] "TBSJ" "EXXJ" "BAWJ" "PGRJ" "AWAJ" "KIOJ" "SPPJ" "PNCJ" [73] "ARIJ" "LEWJ" "DLTJ" "ACLJ" "PGLJ" "AIPJ" "RLOJ" "MNDJ" [81] "REBJ" "LBHJ" "GRFJ" "NGPLTJ" "GLDJ" "CLRJ" "DTCJ" "ADHJ" [89] "SPGJ" "HLMJ" "TORJ" "MSMJ" "DBXWDJ" "INGJ" "MIXJ" "FGLJ" [97] "APFJ" "ILVJ" "DBXUSJ" "WEZJ" here is an example of what i mean. the dates all get loaded as the same (last) date? > head(SHFJ) SHFJ.Open SHFJ.High SHFJ.Low SHFJ.Close SHFJ.Volume SHFJ.Adjusted 2015-07-07 5.62 5.62 5.58 5.58 719.1 5.58 2015-07-07 5.53 5.71 5.53 5.71 1021.6 5.71 2015-07-07 5.71 5.80 5.71 5.80 381.4 5.80 2015-07-07 5.80 5.80 5.75 5.75 52.8 5.75 2015-07-07 5.75 5.75 5.62 5.64 33.5 5.64 2015-07-07 5.67 5.75 5.67 5.75 189.2 5.75 what am i getting wrong? could it be something with formatting in the CSV file or is it something to do with the time series xts or zoo classes? Any help would be much apreciated. 
http://stackoverflow.com/questions/8970823/how-to-load-csv-data-file-into-r-for-use-with-quantmod I'm not quite sure how to apply the suggestions there to what I am doing. I can see the similarity, but the difference is that I need to upload 100 symbols. I have also checked that the format of the csv file is correct by uploading it via: FormatCheck="SHFJ.csv" Stock<-data.frame Stock<-read.zoo(FormatCheck,sep=",",header=TRUE, format="%Y-%m-%d") and this seems to sort out the date issue, but I'd like to do this via getSymbols. This is a bug that has been fixed. Pull the latest from the 'develop' branch on GitHub. Thanks Joshua, how do I go about that? Is that your GitHub repository?
common-pile/stackexchange_filtered
Add square brackets around tag name This post was inspired by a discussion on meta.ruSO and was initially only for localization purposes on ruSO. Now I think that this idea could be useful (or at least should be discussed) for the entire Stack Exchange network. The text in the description for the tag badge on the user profile page doesn't contain any highlighting (only my own freehand circles). E.g.: The same string (I know it because of using Transifex for ruSO localization) is used on the badge's list page: and on the badge description page: But for the last two cases the [tags] tag is highlighted as an ordinary clickable link for a tag, so no extra highlighting is required there. My request is to add some kind of highlighting every time a tag name is used. If highlighting as a clickable link is not possible, then add at least square brackets (consistent with the search pattern) around the tag name. This could also affect the recently changed "Favorite/Watched tags". Current version: Suggested view: This definitely should be done for reasons like readability and consistency. First, at least have a link attached to the name of the tag like so: Earn at least {number here} total score for at least {other number here} community wiki answers in the name of tag tag. Then insert some characters to delimit the name of the tag so "tags tag" doesn't seem confusing, like quotation marks or square brackets as you suggested: Earn at least {number here} total score for at least {other number here} community wiki answers in the "name of tag" tag. Earn at least {number here} total score for at least {other number here} community wiki answers in the [name of tag] tag. The final step that makes the most sense and is the most consistent is simply using [tag:tag-name] for the tag: Earn at least {number here} total score for at least {other number here} community wiki answers in the name-of-tag tag.
When inspecting the HTML, the code is literally: <p>Earn at least {number here} total score for at least {other number here} community wiki answers in the [tag:name-of-tag] tag.</p> Instead, we could change that to: <p> Earn at least {number here} total score for at least {other number here} community wiki answers in the <a href="link-to-tag.here">"name-of-tag"</a> tag. </p> The mentioned raw string is Earn at least $score$ total score for at least $answerScoreStr$ non-community wiki answers in the $tagLink$ tag for all cases. So the developers just need to add brackets around $tagLink$ while parsing the raw string for the text-only versions, as is already done for the pages with clickable links. @alexolut See my edit
Why doesn't the whole volcanic cone appear black? Cooled lava looks black, but why doesn't the whole volcano, even near the crater, always appear black like cooled lava? Please [edit] your question and insert a pic to show what you mean. If you don't have enough rep yet to insert pictures here, put it on a generic hosting site like Imgur and insert the link here. Why do you think cooled lava always looks black? There are a number of volcanic rocks hereabouts (though I'll grant that they erupted some time ago), and they run from near white (pumice) through browns and reds. Not all lava is black; the mineral content of the lava has an effect. Pumice, for instance, can be stark white. @John The cooled lava might be covered by ashes. So depending on the amount of ash and the wind, you might have a black volcano or a gray volcano. Many volcanoes are formed by layers of lava and ash. https://en.wikipedia.org/wiki/Volcano#/media/File:Volcano_scheme.svg An active volcanic crater is generally a highly toxic environment full of highly acidic, sulphur-, chloride- and fluoride-rich gasses emanating from, and reacting with, hot rock. Weave into the mix sudden exposure to oxygen and meteoric water (which may become reactive steam) within the rock, and you have a cocktail of extreme weathering, geochemical reaction and rock corrosion. So the original black rock doesn't stay black for long. One bizarre and very rare exception is carbonatite volcanoes, such as occur in remote parts of the East African Rift Valley. These 'lavas' start off as white magma, and almost immediately turn black upon exposure to air. Not all lava is black; the mineral content and cooling speed of the lava have an effect. Cooled lava can have a variety of colors, including stark white pumice. https://hvo.wr.usgs.gov/volcanowatch/archive/2000/00_10_19.html
Jesse's real bad luck! Why does Jesse always coincidentally come into contact with people who are related to the drug business or are drug addicts? Even Jane, who seemed like a really nice girl, was a heroin addict. If Jane had been a clean girl, Jesse would have had some chance of reconstructing his life. They even planned to run away to New Zealand and start a new life. Is this his bad luck, or a real example of what goes around comes around and life is seriously a b**** (pun intended)? It's part of the story; it was written like this so we have something to watch! Out of universe: Because Walter White could have moved to Canada, but then we would have no Breaking Bad. Same goes here: if Jesse settles... well, there is no more Jesse to follow. In universe: Jesse was always going to drug-related places, due to his activity as a meth cook and dealer. Even if he meets a nice girl, it's safe to assume that he would get along with someone who comes from the same places as him, so either a dealer, a cook or an addict like Jane. So in some way he meets those drug-related people "coincidentally", but in fact it's the only kind of people he can meet, and possibly the only kind he can get along with. You can also see that Walter White always pushes him not to quit the drug business, which consequently reduces his chances of giving up and "living a normal life". "Because Walter White could have moved to Canada but then we would have no Breaking Bad." I like the joke, but it doesn't really hold true. Walt wasn't trying to pay for his medication; he was trying to provide a stable income for his family in his absence after his death. Canada would not guarantee his survival (unless they have a program that supplies income to widow(er)s?) @Flater Well, as I'm not a Canadian I can't tell whether life insurance is well handled there or not.
Bibtex saying I couldn't open style file .bst When using bibtex (from Texmaker) I get the error "I couldn't open style file spbasic.bst". With spmpsci instead of spbasic it works, but the situation for both is the same: spmpsci.bst is in the folder /usr/share/texlive/texmf-dist/bibtex/bst/spmpsci and spbasic.bst is in /usr/share/texlive/texmf-dist/bibtex/bst/spbasic MWE \documentclass{article} \begin{document} \cite{Bernoulli1713}. \bibliographystyle{spbasic} \bibliography{Bibliographie} \end{document} Solution: update by sudo texhash /usr/share/texlive/texmf-dist Check whether it was successful with kpsewhich spbasic.bst This should result in it giving the path. How do I sort that out if this error occurs in Overleaf? @PWillms Sorry, I don't know. I am not using Overleaf but Texmaker.
Where can I get challenging exercises for Naive Set Theory by Halmos? The title says it all. I am just filling in the 30-character required amount. Here? Or here? Or here? Thanks. I am once again filling in the required amount. There is a book by L. E. Sigler, Exercises in Set Theory, that is explicitly designed to follow Halmos. The exercises range from the routine to some that are more challenging. If this is too easy for you, then I recommend checking out Komjáth & Totik's Problems and Theorems in Classical Set Theory, which is considerably more challenging. Both books contain answers to the exercises.
CyberAgent/android-gpuimage: How to implement a parallel filter pipeline? Using GPUImage on iOS I can apply multiple filters like this: a two-parallel-filter pipeline in code: [videoCamera addTarget:filter1]; [videoCamera addTarget:filter2]; [filter1 addTarget:filter3]; [filter2 addTarget:filter3 atTextureLocation:1]; [filter3 addTarget:videoView]; However, when moving to Android, I cannot implement this filter pipeline using CyberAgent/android-gpuimage. Has anyone experience with it, or can give me an idea? Can't you do something like this?: mGPUImage.setFilter(new GPUImageSobelEdgeDetection()); mGPUImage.setFilter(new GPUImage3x3ConvolutionFilter());
Can I pet dogs? In Zelda: Breath of the Wild you find dogs around stables and various other locations, and when you feed them, they bring you to a chest. I know that I can give it food and play with it, but I want to show my appreciation for this by petting it. How does he know that he is a good boy if I can't pet him? Possible duplicate of How to befriend a dog? If you can't pet the dog to let him know he was a good boy, just try shouting, "YOU'RE A GOOD BOY!" at your screen. I tried that but my roommates got mad at me last night around 5 A.M. I feel like I could do more. They're good dogs bront You can't actually pet a dog from what I've found. However, there is a sort of glitch (if you will) that kind of looks like it. See this linked video. First, you have to unequip any items you are holding. Then, use the attack button to attempt to draw a weapon out. If you angle it right, it can appear that you are petting the dog. In the video, the affection level actually increases as shown by the bubble effect produced by the dog when doing this. There is actually a rather humorous guide on how to do this as well. Love that guide! I think this answers my question. Thanks! Note that you don't have to do anything to get the hearts from "petting" the dog; just standing near it/its head (not sure if that's important) long enough will cause the hearts to appear once. I always assumed this was the equivalent of letting the dog sniff you or petting the dog. This isn't a glitch, and the "bubble effect" is simply a coincidence, as it only appeared because Link was standing next to it for long enough.
How to scroll the page left to right in selenium for different div tags I am trying to scrape all the apps' URLs from the target page: https://play.google.com/store/apps?device= using the below code: from selenium import webdriver from selenium.webdriver.chrome.service import Service as ChromeService from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager import time from tqdm import tqdm options = webdriver.ChromeOptions() options.add_argument('--no-sandbox') options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option("useAutomationExtension", False) driver = webdriver.Chrome(ChromeDriverManager().install(), options=options) driver.maximize_window() items = ['phone','tablet','tv','chromebook','watch','car'] target_url = "https://play.google.com/store/apps?device=" all_apps = [] for cat in tqdm(items): driver.get(target_url+cat) time.sleep(2) new_height = 0 last_height = 0 while True: # Scroll down to bottom driver.execute_script("window.scrollTo(0, document.body.scrollHeight)") # Wait to load page time.sleep(4) # Calculate new scroll height and compare with last scroll height new_height = driver.execute_script("return document.body.scrollHeight") # break condition if new_height == last_height: break last_height = new_height for i in driver.find_elements(By.XPATH,"//a[contains(@href,'/store/apps/details')]"): all_apps.append(i.get_attribute('href')) The above code scrolls the page from top to bottom and gives me the URLs of all the apps available on the page.
However, I tried to click the element using the below code but am getting an error: driver.find_element(By.XPATH,"//i[contains(text(),'chevron_right')]").click() error:- ElementNotInteractableException: Message: element not interactable (Session info: chrome=110.0.5481.77) I tried using the below code:- element = driver.find_element(By.XPATH,"//div[@class='bewvKb']") #any icon, may be that whatsapp icon here hover = ActionChains(driver).move_to_element(element) hover.perform() element = driver.find_element(By.XPATH,"//i[text()='chevron_right']") element.click() There is no option to click on the highlighted button as shown in the image. Can anyone help me with this, i.e. how to scroll the page sideways so that all the contents can be scraped from the page? This is the element <i class="google-material-icons B1yxdb" aria-hidden="true">chevron_right</i> Try coding to click it and it would work. Thanks for the answer, but I already tried that and forgot to mention it in the question. It's giving an error: ElementNotInteractableException. @LalitJoshi, what is your use case for getting the URLs of popular apps and Editor's choice apps with a given search, e.g. phone? @KunduK I need to scrape all the apps mentioned on that particular page to further scrape their release date and update date. I just need to make sure that I am scraping all the apps' URLs. All the apps under popular apps and Editor's choice apps, right? No, actually all the apps present under this URL.
So first hover over any of the icons, so that the right arrow appears, and then issue the click, most likely like this: element = driver.find_element_by_css_selector("#yDmH0d > c-wiz.SSPGKf.glB9Ve > div > div > div.N4FjMb.Z97G4e > c-wiz > div > c-wiz > c-wiz:nth-child(1) > c-wiz > section > div > div > div > div > div > div.aoJE7e.b0ZfVe > div:nth-child(1) > div > div > a > div.TjRVLb > img") #any icon, may be that whatsapp icon here hover = ActionChains(driver).move_to_element(element) hover.perform() element = driver.find_element_by_xpath("//i[text()='chevron_right']") element.click() Thanks, I understand, but using this I can only do it for the first row, not for the others. Also, I tried using the code you shared, but to no avail. I have edited the main code.
R formula paste condition I want to paste together a formula in R out of two different vectors, where the formula has a conditional part. Pasting a formula out of a vector containing the coefficients is OK, but I don't have a clue how to add the conditional terms. I have tried to manage the problem with paste and paste0. f1 <- c("x1", "x2", "x3") f2 <- c("x3", "x4", "x5") The result should be y ~ x1 + x2 + x3 | x3 + x4 + x5 I have to manage a big dataset with > 100 coefficients, so typing it manually is no real option. Thank you in advance! I think this might be what you are looking for: f1 <- c("x1", "x2", "x3") f2 <- c("x3", "x4", "x5") paste("y ~ ",paste(f1, collapse = " + "),"|",paste(f2, collapse = " + ")) #output #[1] "y ~ x1 + x2 + x3 | x3 + x4 + x5" Thank you for your help @LouisMP! I tried to paste the conditional part onto an existing formula... your solution works just fine. We could put the terms into a list, terms <- list(f1, f2) and use reformulate(). fo <- reformulate(paste(sapply(terms, paste, collapse=" + "), collapse=" | "), response="y") fo # y ~ x1 + x2 + x3 | x3 + x4 + x5 The benefit is this: class(fo) # [1] "formula" Data f1 <- c("x1", "x2", "x3") f2 <- c("x3", "x4", "x5") Hi @jay.sf! Thank you for your tip - I didn't know about the reformulate function till now! Thx!
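For readers coming from other languages, the collapse step above is just a string join over each coefficient vector. A rough Python sketch of the same construction (the resulting formula string is, of course, only meaningful to R):

```python
f1 = ["x1", "x2", "x3"]
f2 = ["x3", "x4", "x5"]

# paste(f, collapse = " + ") in R corresponds to " + ".join(f) in Python
formula = "y ~ {} | {}".format(" + ".join(f1), " + ".join(f2))
# formula == "y ~ x1 + x2 + x3 | x3 + x4 + x5"
```

The same join scales to any number of coefficients, which is the point of the accepted answer.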
How does a hashing algorithm convert a hash to an array index? I am trying to understand how I can create my own hash table. If I understand how to implement a hashtable, I would need something like this: public class HashBucket { public object Key { get; set; } public ICollection<object> Values { get; set; } } Then the hashtable would be like so: public class Hashtable { private HashBucket[] _buckets = new HashBucket[12]; public void Insert(object value) { // here I somehow need to hash the value then insert it in the buckets array. var hash = GetHash(value); _buckets[hash] = new HashBucket(value); // here I'll need to update or insert } } What I am struggling with is how to get a hash code to use as an array index, where I can retrieve the value by index as well. What is the algorithm of the hash function that would create a value to be a key of the buckets array? Most if not all hash maps just clamp the hash value by using modulo. where I can retrieve the hash by index as well. You can't. Sorry, I still don't understand how I would clamp the hash value using modulo. I should be able to use the same value to directly locate the value by index. var index = hash % _buckets.Length. https://referencesource.microsoft.com/#mscorlib/system/collections/generic/dictionary.cs,327
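To make the modulo idea from the comments concrete, here is a minimal sketch (in Python rather than C#, purely for illustration): the bucket index is the hash clamped by the array length, and keys that collide share a bucket. Lookup recomputes the same index from the key, which is why you never need to "retrieve the hash by index" — the key itself leads you back to the right bucket.

```python
# Minimal hash table sketch: clamp the hash into an index with modulo,
# and chain colliding entries inside the same bucket.
class HashTable:
    def __init__(self, size=12):
        # one list ("bucket") per slot
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        # hash() can be any int; modulo clamps it into 0..size-1
        return hash(key) % len(self.buckets)

    def insert(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: update in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # otherwise: insert a new entry

    def get(self, key):
        # the same modulo computation finds the bucket again on lookup
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)
```

This is the same scheme the linked .NET Dictionary source uses, just stripped down: modulo for the index, plus a collision strategy (here, chaining).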
What is the significance of the cat? In Way of the Dragon, Chuck Norris and Bruce Lee fight. This scene is not comedic; yet there are numerous cuts to a cat. At one point at the end, the camera actually zooms in and out on the cat! What is the importance of the cat? I'd love an answer to this one! I always crack up when the cat shows up. That scene is incredibly comedic. According to Bruce Lee: Incomparable Fighter (M. Uyehara, 1988, p. 87), Bruce had the film edited so the cat's actions would "interrelate" to his (his toying with his opponent, for example, like the cat toys with the ball of paper), but it "didn't succeed in the finished product." At the time, this movie was the most expensive produced in a Chinese studio, at $150,000.
Change card's title without affecting the other elements I want to change the card's title, which is Main Title, into 2nd Title every time the user selects an option in the dropdown. Here's my code. HTML: <h3 id="title" class="card-title">Main Title <span class="badge badge-primary" id="counts">0</span> <span class="float-right"> <a href="" style="color: #c0c6cc;"><i class="mdi mdi-trash-can"></i> </a> </span> </h3> <select class="form-control" id="select" onchange="optionCheck(this);" required> <option value="yor1"> Option 1 </option> <option value="yor2"> Option 2</option> <option value="yor3"> Option 3</option> </select> If the user selects Option 3, the card's title changes. JS: function optionCheck(that) { if (that.value == "yor3") { alert("Title changed!"); document.getElementById("title").innerHTML = "2nd Title"; } else { document.getElementById("title").innerHTML = "Main title"; } } Now the problem is that if I select Option 1 or 2, only the Main title text remains — the span badge and the icon disappear. Can you put Main Title inside its own span or div? Here is working code: https://jsfiddle.net/usmanmunir/c9uqktn3/19/ HTML <h3 id="title" class="card-title">Main Title</h3> <span class="badge badge-primary" id="counts">0</span> <span class="float-right"> <a href="" style="color: #c0c6cc;"> <i class="mdi mdi-trash-can"></i></a> </span> <br> <select class="form-control" id="select" onchange="optionCheck(this);" required> <option disabled selected></option> <option value="yor1">Option 1</option> <option value="yor2">Option 2</option> <option value="yor3">Option 3</option> </select> JS function optionCheck(that) { if (that.value === "yor1" || that.value === "yor2" || that.value === "yor3") { document.getElementById("title").innerHTML = "2nd Title"; } else { document.getElementById("title").innerHTML = "Main title"; } } my only problem is when selecting option 1 the span class of badge and the mdi icon disappear.
They must not disappear, because I only need to change the title text. I have updated the answer; it should be working now. Thanks for clarifying. You can put your content inside a span and give that span tag the id title, so that only the content inside the span gets changed. Demo code: function optionCheck(that) { if (that.value == "yor3") { alert("Title changed!"); document.getElementById("title").innerHTML = "2nd Title"; } else { document.getElementById("title").innerHTML = "Main title"; } } <!--added id to span tag--> <h3 class="card-title"><span id="title">Main Title</span> <span class="badge badge-primary" id="counts">0</span> <span class="float-right"> <a href="" style="color: #c0c6cc;"><i class="mdi mdi-trash-can"></i> </a> </span> </h3> <select class="form-control" id="select" onchange="optionCheck(this);" required> <option value="yor1"> Option 1 </option> <option value="yor2"> Option 2</option> <option value="yor3"> Option 3</option> </select>
How to rotate annotations when converting HDF5 files to dm3? I am working on converting a Velox file (HDF5) to a .dm3 file, using Tore Niermann's plugin (gms_plugin_hdf5) to read strings. Annotations in the HDF5 file also need to be transferred to the .dm3 file. The HDF5 image may be rotated by any angle, but the position coordinates of the annotations read from the HDF5 file correspond to the unrotated image. I found that the annotations don't move along with the rotated image, so I had to re-calculate the position coordinates for every annotation. This isn't convenient for annotations such as boxes or ovals. And I need to extract the maximum area when rotating images, so the image size will change with the rotation angle. So, are there better solutions for rotating the annotations? Thanks. Here is a sample function from my script. I didn't attach all of it because it's quite long. image GetAnnotations(Taggroup names, string filename, string name, Taggroup Annotations, Image VeloxImg, number Angle) { number i, j, imagex, imagey, xscale, yscale String Displaypath, AnnotationStr, DisplayStr, units taggroup attr = NewTagList() getsize(VeloxImg, imagex, imagey) number centerx=imagex/2 number centery=imagey/2 getscale(veloximg, xscale, yscale) units=getunitstring(veloximg) component imgdisp=imagegetimagedisplay(VeloxImg, 0) For (j=0; j<TagGroupCountTags(Annotations); ++j) { TagGroupGetIndexedTagAsString(Annotations, j, AnnotationStr) string AnnotPath=h5_read_string_dataset(filename, AnnotationStr) string AnnotDataPath=GetValueFromLongStr(AnnotPath, "dataPath\": \"", "\"") AnnotDataPath=ReplaceStr(AnnotDataPath, "\\/", "\/") string AnnotLabel=GetValueFromLongStr(AnnotPath, "label\": \"", "\"") string AnnotDrawPath=h5_read_string_dataset(filename, AnnotDataPath) image img := RealImage( "", 4, 1, 1 ) TagGroup AnnoTag=alloc(MetaStr2TagGroup).ParseText2ImageTag(AnnotDrawPath, img ) deleteimage(img) string AnnotDrawType=TagGroupGetTagLabel(AnnoTag,0) //AnnoTag.TagGroupOpenBrowserWindow( "AnnotationsTag", 0 ) if
(AnnotDrawType=="arrow") { number p1_x,p1_y,p2_x,p2_y TagGroupGetTagAsNumber(AnnoTag, "arrow:p1:x", p1_x) TagGroupGetTagAsNumber(AnnoTag, "arrow:p1:y", p1_y) TagGroupGetTagAsNumber(AnnoTag, "arrow:p2:x", p2_x) TagGroupGetTagAsNumber(AnnoTag, "arrow:p2:y", p2_y) //VeloxImg.CreateArrowAnnotation( p1y, p1x, p2y, p2x ) number p1_x_new=(p1_x-0.5)*cos(Angle)+(p1_y-0.5)*sin(Angle)+0.5 number p1_y_new=-(p1_x-0.5)*sin(Angle)+(p1_y-0.5)*cos(Angle)+0.5 number p2_x_new=(p2_x-0.5)*cos(Angle)+(p2_y-0.5)*sin(Angle)+0.5 number p2_y_new=-(p2_x-0.5)*sin(Angle)+(p2_y-0.5)*cos(Angle)+0.5 result(p1_x+" "+p1_y+" new "+p1_x_new+" "+p2_y_new+"\n") component arrowAnno=newarrowannotation(p1_y_new*imagey, p1_x_new*imagex, p2_y_new*imagey, p2_x_new*imagex) arrowAnno.ComponentSetForegroundColor( 1, 0 , 0 ) arrowAnno.ComponentSetDrawingMode( 2 ) imgdisp.ComponentAddChildAtEnd( arrowAnno ) } HDF5 is a directly supported file-format in GMS 3.5 - but I don't think this solves your immediate problem. (As GMS does not support "rotation" of image-objects without resampling them to a new, screen-axis aligned grid.) Not directly answering your question, but maybe nevertheless of interest to you: While GMS does not support the rotation of annotations (Rect, Oval, Text, ImageDisplay...) it does support a rotation property for ROIs. So maybe you can just use rect-ROIs and oval-ROIs instead of annotations in your application. 
Example (never mind that I did the shift-computation wrongly): image test := realImage("Test",4,512,512) test= abs(sin(6*PI()*icol/iwidth))*abs(cos(4*PI()*irow/iheight*iradius/150)) test.showimage() imageDisplay disp = test.ImageGetImageDisplay(0) ROI box = NewROI() box.RoiSetRectangle(46,88,338,343) box.RoiSetVolatile(0) ROI oval = NewROI() oval.ROISetOval(221,226,287,254) oval.RoiSetVolatile(0) disp.ImageDisplayAddROI(box) disp.ImageDisplayAddROI(oval) number rot_deg = 8 image rot := test.rotate( pi()/180*rot_deg) rot.ShowImage() imageDisplay disp_rot = rot.ImageGetImageDisplay(0) ROI box_rot = box.ROIClone() ROI oval_rot = oval.ROIClone() disp_rot.ImageDisplayAddROI(box_rot) disp_rot.ImageDisplayAddROI(oval_rot) number shift_x = (rot.ImageGetDimensionSize(0)-test.ImageGetDimensionSize(0)) / 2 number shift_y = (rot.ImageGetDimensionSize(1)-test.ImageGetDimensionSize(1)) / 2 number t,l,b,r box_rot.ROIGetRectangle(t,l,b,r) box_rot.ROISetRectangle(t+shift_y,l+shift_x,b+shift_y,r+shift_x) oval_rot.ROIGetOval(t,l,b,r) oval_rot.ROISetOval(t+shift_y,l+shift_x,b+shift_y,r+shift_x) box_rot.ROISetRotationAngle( rot_deg ) oval_rot.ROISetRotationAngle( rot_deg ) Thanks for your help. This is what I need. Although the rotation only supports Rect-ROI and Oval-ROI, I will use them instead of annotations, and I will re-compute the coordinate system for the other annotations. Thanks so much. If I understood you correctly, then your source data (HDF5) stores the image (2D array?) plus a rotation angle, but the annotations in the coordinate system of the (not rotated) image? How is the source data displayed in the original software then? (Is it showing a rotated rectangle image?) GMS does not support rotating imageDisplays (as objects) and consequently also not rotations of annotations. The coordinate systems are always screen-axis aligned orthogonal. Hence the need for interpolation when "rotating" images. The data values are re-computed for the new grid.
If you don't need the annotations to be adjustable after your input, one potential thing you could do would be to create an "as displayed" image after import, prior to rotation, and then rotate the image with the annotations "burnt in". This is obviously only good for creating "final display images" though. image before := realImage("Test",4,512,512) before = abs(sin(6*PI()*icol/iwidth))*abs(cos(4*PI()*irow/iheight*iradius/150)) before.showimage() before.ImageGetImageDisplay(0).ComponentAddChildAtEnd(NewArrowAnnotation(30,80,60,430)) before.ImageGetImageDisplay(0).ComponentAddChildAtEnd(NewArrowAnnotation(30,80,206,240)) before.ImageGetImageDisplay(0).ComponentAddChildAtEnd(NewOvalAnnotation(206,220,300,256)) // Create as-displayed image Number t,l,b,r,ofx,ofy,scx,scy before.ImageGetOrCreateImageDocument().ImageDocumentGetViewExtent(t,l,b,r) before.ImageGetOrCreateImageDocument().ImageDocumentGetViewToWindowTransform(ofx,ofy,scx,scy) image asShown := before.ImageGetOrCreateImageDocument().ImageDocumentCreateRGBImageFromDocument(round((r-l)*scx),round((b-t)*scy),0,0) asShown.ShowImage() number angle_deg = 8 image rotated := asShown.Rotate( angle_deg/180*PI() ) rotated.ShowImage() Thanks for the answer. You clearly understand my problems, and sorry for the unclear description. This is a brilliant way. But the rotation angle may be 180° in real cases, so the text will not show correctly in those cases. So coordinate system transformations are more suitable for me. Thanks a lot.
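The point transform in the question's GetAnnotations function is an ordinary 2-D rotation about the image centre in normalised (0..1) coordinates. A quick check of that formula in Python (same sign convention as the DM-script snippet; the angle is in radians):

```python
import math

def rotate_about_center(x, y, angle):
    """Rotate a normalised (0..1) point about the image centre (0.5, 0.5).

    Mirrors the DM-script lines:
      x' =  (x-0.5)*cos(a) + (y-0.5)*sin(a) + 0.5
      y' = -(x-0.5)*sin(a) + (y-0.5)*cos(a) + 0.5
    """
    dx, dy = x - 0.5, y - 0.5
    return (dx * math.cos(angle) + dy * math.sin(angle) + 0.5,
            -dx * math.sin(angle) + dy * math.cos(angle) + 0.5)
```

For example, the right-edge midpoint (1, 0.5) rotated by 90° lands on (0.5, 0), the top-edge midpoint (with y growing downward, as in image coordinates). Note that if the rotated canvas grows because the maximum area is kept, the normalised coordinates additionally need the half-difference shift shown in the ROI answer above.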
Populating AWS ALB Ingress Annotations from ConfigMap I am creating an 'alb.ingress' resource as part of my Helm chart. apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: alb.ingress.kubernetes.io/certificate-arn: $cert_arn alb.ingress.kubernetes.io/security-group: $sg ... The values required in the 'alb.ingress' resource annotations section are available in my ConfigMap. env: - name: cert_arn valueFrom: configMapKeyRef: name: environmental-variables key: certification_arn - name: sg valueFrom: configMapKeyRef: name: environmental-variables key: security-groups ... Is there a way to populate the annotations using the ConfigMap? The way I solved this challenge was to create the ingress resource using Helm and the variables I had prior to creating the resource, such as the name of the application, namespaces etc. apiVersion: extensions/v1beta1 kind: Ingress metadata: name: "{{ .Values.application.name }}-ingress" namespace: "{{ .Values.env.name }}" labels: app: "{{ .Values.application.name }}" spec: rules: - host: "{{ .Values.environment.name }}.{{ .Values.application.name }}.{{ .Values.domain.name }}" http: .... I then used a pod (a job is also an option) to annotate the newly created ingress resource using the environment values from the ConfigMap. apiVersion: v1 kind: Pod metadata: name: annotate-ingress-alb spec: serviceAccountName: internal-kubectl containers: - name: modify-alb-ingress-controller image: "{{ .Values.images.varion }}" command: ["sh", "-c"] args: - '... kubectl annotate ingress -n {{ .Values.env.name }} {{ .Values.application.name }}-ingress alb.ingress.kubernetes.io/certificate-arn=$CERT_ARN; env: - name: cert_arn valueFrom: configMapKeyRef: name: environmental-variables key: certification_arn Note that the pod should have the right service account with the right permission roles attached to it.
For instance, in this case, for the pod to be able to annotate the ALB ingress, it had to have the extensions apiGroup and the ingresses resource in its list of permissions (I have not restricted the verbs yet). apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: service-account-role rules: - apiGroups: - "" - extensions resources: - ingresses verbs: ["*"] Hope this helps someone in the future.
How can I expose an API from my Corvid site? I recently built a Corvid site that shows some inventory listings for a client using a dataset. The client wants to access that data from a third-party app, and for that I need an API that returns the inventory. How can I expose a REST / HTTP API from a Corvid site? You need to use Corvid's HTTP functions. Take a look at the MyAPI and MyApiClient examples provided by the Corvid team.
Force VSCode CMake Tools extension to use "Unix Makefiles" for the generator By default, CMake Tools picks a generator of its own choosing. On my system, by default, CMake Tools is picking Ninja as the generator instead of the desired Unix Makefiles (stored in CMakeCache.txt as CMAKE_GENERATOR:INTERNAL=Ninja). Supposedly you can force CMake Tools' hand when picking the generator by using the cmake.generator setting in settings.json. When running cmake -B build by hand on my system, CMakeCache.txt contains CMAKE_GENERATOR:INTERNAL=Unix Makefiles Thus I set my current settings.json in $workspace/.vscode to { "C_Cpp.default.configurationProvider": "ms-vscode.cmake-tools", "cmake.generator": "Unix Makefiles", } but when configuring, it still picks Ninja. What am I doing wrong? There seems to be a caching issue somewhere. After restarting VSCode it worked as expected. If you alter the "cmake.generator" entry, delete the build folder, and rerun CMake's configuration step [proc] Executing command: /usr/local/bin/cmake --no-warn-unused-cli -DCMAKE_EXPORT_COMPILE_COMMANDS:BOOL=TRUE -DCMAKE_BUILD_TYPE:STRING=Release -DCMAKE_C_COMPILER:FILEPATH=/usr/bin/gcc-7 -DCMAKE_CXX_COMPILER:FILEPATH=/usr/bin/g++-7 -H/home/dario/temp/CMakeToolsTest -B/home/dario/temp/CMakeToolsTest/build -G "Unix Makefiles" the value after the -G flag does not change accordingly
Convert multiple year and month pay data into a monthly sum in R In R: I have two files that I have joined. Both contain commission pay data; I had two files because the pay-structure periods of the job codes were different. For example, all the job codes in file one are paid commission monthly, and all the job codes in file two are paid commission bi-monthly. In order to accurately and fairly analyze the data I need to aggregate (sum) pay into a new field (let's call it "monthlypay") by month for each employee id (currently a factor). My problem is that I appear to successfully sum the pay by month for each employee, but currently it ignores the differing years. I'm not opposed to spreading year and month OR dummy coding from 6/2015-6/2017 as pay months 1-24, but I'm wondering if there is a way to do this all at once? Current: Check_DT EMPLID DEPTID JOBCODE PAY_FREQUENCY MAX._TTL.GROSS 2015-12-18 99999999 23231606 100880 W 1203 2015-12-24 99999999 23231606 100880 W 597 2015-12-31 99999999 23231606 100880 W 625 2016-01-08 99999999 23231606 100880 W 245 2016-01-13 99999999 23231606 100880 W 480 2016-01-15 99999999 23231606 100880 W 758 2016-01-22 99999999 23231606 100880 W 599 2016-01-29 99999999 23231606 100880 W 551 2016-02-05 99999999 23231606 100880 W 767 2016-02-12 99999999 23231606 100880 W 880 2016-02-19 99999999 23231606 100880 W 557 2016-02-26 99999999 20441606 100880 W 909 2016-03-04 99999999 20441606 100880 W 989 2016-03-11 99999999 20441606 100880 W 751 2016-03-18 99999999 20441606 100880 W 776 2016-03-25 99999999 20441606 100880 W 770 2016-04-01 99999999 20441606 100880 W 712 2016-04-08 99999999 20441606 100880 W 602 2016-04-15 99999999 20441606 100880 W 798 2016-04-22 99999999 20441606 100880 W 527 What I want (need actually, since I am going to be running cluster analysis): >Check_DT EMPLID DEPTID JOBCODE PAY_FREQUENCY MAX._TTL.GROSS Year Month Pay >2015-12-18 99999999 23231606 100880 W 1203 2015 12 2425 >2015-12-24 99999999 23231606 100880 W 597 >2015-12-31
99999999 23231606 100880 W 625 >2016-01-08 99999999 23231606 100880 W 245 2016 01 2633 >2016-01-13 99999999 23231606 100880 W 480 >2016-01-15 99999999 23231606 100880 W 758 >2016-01-22 99999999 23231606 100880 W 599 >2016-01-29 99999999 23231606 100880 W 551 >2016-02-05 99999999 23231606 100880 W 767 >2016-02-12 99999999 23231606 100880 W 880 >2016-02-19 99999999 23231606 100880 W 557 >2016-02-26 99999999 20441606 100880 W 909 >2016-03-04 99999999 20441606 100880 W 989 >2016-03-11 99999999 20441606 100880 W 751 >2016-03-18 99999999 20441606 100880 W 776 >2016-03-25 99999999 20441606 100880 W 770 >2016-04-01 99999999 20441606 100880 W 712 >2016-04-08 99999999 20441606 100880 W 602 >2016-04-15 99999999 20441606 100880 W 798 >2016-04-22 99999999 20441606 100880 W 527 etc...I am not even opposed to the year month and date for every year and month combo to repeated, I can get rid of duplicates. As a reminder some people in the file are paid weekly and others a paid bi-monthly. Here is what I've done: #Convert weekly/bimonthly pay to monthly sum of pay paydat_all$monthlypay <- month(paydat_all$Check_DT) aggregate(MAX._TTL.GROSS~monthlypay+EMPLID, FUN = sum, data = paydat_all) This should get you the results you're looking for library(lubridate) library(dplyr) data = 'Check_DT EMPLID DEPTID JOBCODE PAY_FREQUENCY MAX._TTL.GROSS "2015-12-18" 99999999 23231606 100880 W 1203 "2015-12-24" 99999999 23231606 100880 W 597 "2015-12-31" 99999999 23231606 100880 W 625 "2016-01-08" 99999999 23231606 100880 W 245 "2016-01-13" 99999999 23231606 100880 W 480 "2016-01-15" 99999999 23231606 100880 W 758 "2016-01-22" 99999999 23231606 100880 W 599 "2016-01-29" 99999999 23231606 100880 W 551 "2016-02-05" 99999999 23231606 100880 W 767 "2016-02-12" 99999999 23231606 100880 W 880 "2016-02-19" 99999999 23231606 100880 W 557 "2016-02-26" 99999999 20441606 100880 W 909 "2016-03-04" 99999999 20441606 100880 W 989 "2016-03-11" 99999999 20441606 100880 W 751 "2016-03-18" 99999999 20441606 100880 W 
776 "2016-03-25" 99999999 20441606 100880 W 770 "2016-04-01" 99999999 20441606 100880 W 712 "2016-04-08" 99999999 20441606 100880 W 602 "2016-04-15" 99999999 20441606 100880 W 798 "2016-04-22" 99999999 20441606 100880 W 527' paydat_all <- read.table(text=data, header=TRUE, colClasses=c("Date", "character", "character", "character", "factor", "integer")) paydat_all <- paydat_all %>% mutate(Year = year(Check_DT), Month = month(Check_DT)) %>% group_by(EMPLID, DEPTID, JOBCODE, Year, Month) %>% summarise(sum(MAX._TTL.GROSS)) I'm attempting to use the second version and getting the following error, can mutate only be used with an integer?: "Error in mutate_impl(.data, dots) : invalid subscript type 'integer" Hey Matt, I appreciate you taking time to update your response, but I have an interesting phenomenon occurring, most of the time everything is accurate but for some employees the file will produce two rows for a certain year and month and split the pay into from two months in one and 3 in another. But, it doesn't do it for every 5 pay month....What do you think is occurring here? NVM, it's becasue the Job Code was different Anyway to generate the same flatten versioned even more into an average tenure pay for each EMPID? I'm trying the following but I suck at this. I'm more of a Data Analyst than a Data Scientist so I've been taking courses but haven't been able to apply my own successful code yet. I enjoy R for statistical purpose and want to learn the language over SAS. Here is what I have...avgpaydat_alltechs <- paydat_alltechs %>% mutate(AllPay = MonthlyPay) %>% group_by(CO, EMPLID, DEPTID, JOBCODE, PAY_FREQUENCY, Year, Month) %>% summarise(mean(MonthlyPay)) I just used aggregate to create a new field Consider base R's ave for inline aggregation where: first arg is a column to be aggregated one or more comma-separated args afterwards are factor levels to group by with explicit named FUN argument for aggregate type. 
R script data = 'Check_DT EMPLID DEPTID JOBCODE PAY_FREQUENCY MAX._TTL.GROSS "2015-12-18" 99999999 23231606 100880 W 1203 "2015-12-24" 99999999 23231606 100880 W 597 "2015-12-31" 99999999 23231606 100880 W 625 "2016-01-08" 99999999 23231606 100880 W 245 "2016-01-13" 99999999 23231606 100880 W 480 "2016-01-15" 99999999 23231606 100880 W 758 "2016-01-22" 99999999 23231606 100880 W 599 "2016-01-29" 99999999 23231606 100880 W 551 "2016-02-05" 99999999 23231606 100880 W 767 "2016-02-12" 99999999 23231606 100880 W 880 "2016-02-19" 99999999 23231606 100880 W 557 "2016-02-26" 99999999 20441606 100880 W 909 "2016-03-04" 99999999 20441606 100880 W 989 "2016-03-11" 99999999 20441606 100880 W 751 "2016-03-18" 99999999 20441606 100880 W 776 "2016-03-25" 99999999 20441606 100880 W 770 "2016-04-01" 99999999 20441606 100880 W 712 "2016-04-08" 99999999 20441606 100880 W 602 "2016-04-15" 99999999 20441606 100880 W 798 "2016-04-22" 99999999 20441606 100880 W 527' paydat_all <- read.table(text=data, header=TRUE, colClasses=c("Date", "character", "character", "character", "factor", "integer")) # MONTH AND YEAR paydat_all[c("Month", "Year")] <- sapply(c("%m", "%y"), function(d) format(paydat_all$Check_DT, d)) # THREE GROUP BY VARS WITH FORMAT() TO EXTRACT DATE TYPES paydat_all$PaySum <- ave(paydat_all$`MAX._TTL.GROSS`, paydat_all$Month, paydat_all$Year, paydat_all$EMPLID, FUN=sum) head(paydat_all) # Check_DT EMPLID DEPTID JOBCODE PAY_FREQUENCY MAX._TTL.GROSS Month Year PaySum # 1 2015-12-18 99999999 23231606 100880 W 1203 12 15 2425 # 2 2015-12-24 99999999 23231606 100880 W 597 12 15 2425 # 3 2015-12-31 99999999 23231606 100880 W 625 12 15 2425 # 4 2016-01-08 99999999 23231606 100880 W 245 01 16 2633 # 5 2016-01-13 99999999 23231606 100880 W 480 01 16 2633 # 6 2016-01-15 99999999 23231606 100880 W 758 01 16 2633 Great that transformed it to look almost exactly how I wanted, just had to flip the "%m", "%y" ;) Whoops! Definitely a typo. Edited accordingly. I hope solution helped.
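The root cause of the original attempt is that it groups by month alone; both answers fix this by grouping on the (year, month) pair so that, say, December 2015 and December 2016 can never be merged. The same idea, sketched in standard-library Python for illustration (the rows are a hypothetical subset of the data above, not part of the original answers):

```python
from collections import defaultdict
from datetime import date

# A hypothetical subset of the pay data: (check date, employee id, gross pay).
rows = [
    (date(2015, 12, 18), "99999999", 1203),
    (date(2015, 12, 24), "99999999", 597),
    (date(2015, 12, 31), "99999999", 625),
    (date(2016, 1, 8), "99999999", 245),
    (date(2016, 1, 13), "99999999", 480),
]

# Keying the totals on (employee, year, month) -- not month alone -- is what
# keeps the differing years apart.
monthly = defaultdict(int)
for check_dt, emplid, gross in rows:
    monthly[(emplid, check_dt.year, check_dt.month)] += gross

# monthly[("99999999", 2015, 12)] == 2425
```

The dplyr `group_by(EMPLID, ..., Year, Month)` and the `ave(..., Month, Year, EMPLID)` calls above are exactly this composite key, expressed in R.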
Trying to use KeyPress with arguments but don't know how to pass args correctly

I want the program to exit when I press Escape; the way it is right now, it closes whenever I press any button. Here is my code:

    game.KeyPress += (sender, e) =>
    {
        game.Exit();
    };

I am using https://github.com/ppy/osuTK as a reference in my project. Both KeyPress and KeyPressEventArgs inherit from osuTK.Input. There is also this code below:

    Key.Escape

Key also inherits from osuTK.Input.

    game.KeyPress<KeyPressEventArgs<Key.Escape>> += (sender, e) => { game.Exit(); };

This code above doesn't work, but something close to that would be perfect.

Answer: You can try this code, according to KeyPressEventArgs.KeyChar:

    game.KeyPress += (sender, eventArgs) =>
    {
        if (eventArgs.KeyChar == (char)Keys.Escape)
        {
            // TODO
        }
    };

Answer: KeyPressEventArgs has the KeyChar property. Use that to test which key was pressed:

    if (e.KeyChar == (char)Keys.Return)
    {
        e.Handled = true;
    }

Comment: Thank you, that helped me a lot.
undefined method `[]' for nil:NilClass when using Warden after authentication

I have the following super-basic Rack application using Warden for authentication:

    require 'rack/router'
    require 'warden'
    require 'ostruct'
    require 'mustache'

    class BadAuthenticationEndsUpHere
      def call(env)
        Rack::Response.new(
          Mustache.render('<form method="post"><input name="username" /><input type="password" name="password" /><input type="submit" />'),
          200,
          {'Content-Type' => 'text/html'}
        )
      end
    end

    module PowerNineStore
      class Routes
        def routes
          Rack::Builder.new do
            use Rack::Session::Cookie, :key => 'rack.session'

            use Warden::Manager do |manager|
              manager.default_strategies :password
              manager.failure_app = BadAuthenticationEndsUpHere.new
            end

            Warden::Strategies.add(:password) do
              def valid?
                params['username'] && params['password']
              end

              def authenticate!
                if params['username'] == 'foo' && params['password'] == 'bar'
                  success!(OpenStruct.new(:username => 'foo'))
                else
                  fail!('could not login')
                end
              end
            end

            Warden::Manager.serialize_from_session do |id|
              OpenStruct.new(:username => 'foo')
            end

            router = Rack::Router.new
            router.post('/session' => lambda { |env|
              env['warden'].authenticate!
              Rack::Response.new('authenticated!', 200, {'Location' => '/session'})
            })
            router.get('/session' => lambda { |env|
              env['warden'].authenticate!
              Rack::Response.new('authenticated!')
            })
            run router
          end
        end
      end
    end

When I try to authenticate, I get this error:

    undefined method `[]' for nil:NilClass (NoMethodError)
    /home/vagrant/.gem/ruby/2.1.1/gems/rack-1.5.2/lib/rack/utils.rb:287:in `set_cookie_header!'
    /home/vagrant/.gem/ruby/2.1.1/gems/rack-1.5.2/lib/rack/session/abstract/id.rb:362:in `set_cookie'
    /home/vagrant/.gem/ruby/2.1.1/gems/rack-1.5.2/lib/rack/session/abstract/id.rb:350:in `commit_session'
    /home/vagrant/.gem/ruby/2.1.1/gems/rack-1.5.2/lib/rack/session/abstract/id.rb:226:in `context'
    /home/vagrant/.gem/ruby/2.1.1/gems/rack-1.5.2/lib/rack/session/abstract/id.rb:220:in `call'
    /home/vagrant/.gem/ruby/2.1.1/gems/rack-1.5.2/lib/rack/builder.rb:138:in `call'

It seems headers is nil, but I'm not sure why. I think I set everything up properly according to the documentation, but I must have missed something. Does anyone know what it is? I see the login page from the BadAuthenticationEndsUpHere class; I get this error after posting the form.

Answer (self): This was because I was using Rack::Response as the response from the Rack application, and not a normal Rack response array. I've filed a bug with Warden to address it.
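The self-answered fix can be illustrated with a dependency-free sketch (the Warden and session plumbing is stubbed away here; in the real app you would keep the handlers as they are and either call .finish on each Rack::Response or return the plain triple directly):

```ruby
# A Rack endpoint must return a [status, headers, body] triple; returning a
# Rack::Response object itself is what left `headers` nil further down the
# middleware stack. Rack::Response#finish produces that triple; here the
# triple is built by hand so the sketch needs no gems.
session_endpoint = lambda do |env|
  [200, { 'Content-Type' => 'text/plain' }, ['authenticated!']]
end

status, headers, body = session_endpoint.call({})
# status == 200, body.join == 'authenticated!'
```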
Question about Q-analogs

I am trying to prove the following. Given $n \in \mathbb{N}$ we define $[n]_{q} = (1-q^{n})/(1-q)$. We also define $[n]_{q} ! = [n-1]_{q} ! \cdot [n]_{q}$, with $[1]_{q} ! =1$.

Given compositions $\lambda$, $\mu$ we say $\mu \geq \lambda$ if
$$ \lambda_{1} = \mu_{1} + \ldots + \mu_{i_{1}} $$
$$ \lambda_{2} = \mu_{i_{1}+1} + \mu_{i_{1}+2} + \ldots + \mu_{i_{2}} $$
etc.; in other words, if one sees $\lambda$ as a division of the interval $[1, \ldots , | \lambda | ]$, then $\mu \geq \lambda$ if $\mu$ has the same divisions as $\lambda$ and possibly more. For example, $\lambda = (3,2,1)$ can be seen as $123|45|6$ and $\mu=(2,1,2,1)$ can be seen as $12|3|45|6$. In this case $\mu \geq \lambda$.

Given a composition $\lambda$, we define
$$ f_{\lambda} (q) = \prod_{i=1}^{ \ell (\lambda) } [ \lambda_{i} ]_{q} !^{-1} $$
I want to prove that
$$ f_{\lambda} \left( \dfrac{1}{q} \right) = \sum_{\mu \geq \lambda} f_{\mu} (q) (-1)^{\ell (\mu )- | \lambda |} $$
As the function is defined multiplicatively, I know that this follows if I can prove it for $\lambda = (\lambda_{1})$. Does anyone know a reference in which I could look for theory that could help me come up with a proof?

Comment: I don't think it follows from the single-summand case, because while $f_\lambda$ is a product, the second formula is a sum. The sum reminds me of some lattice/sieve theory, but at a single glance I can't guess how the transformation $q \mapsto q^{-1}$ fits into the picture.

Comment: I am not 100% sure, but I think that you can always take a common factor $m$ times, where $m = \ell ( \lambda )$. But even if I am wrong, I guess that understanding at least the simplest instance of this equality would be the first step to solving it.

Answer (self): After trying induction again, this time using all the previous terms, I think I have succeeded. We want to prove
\begin{equation}
f_{m} \left( \frac{1}{q} \right) = \sum_{ | \lambda | = m} f_{\lambda} (q) (-1)^{\ell ( \lambda ) + | \lambda | } \qquad (1)
\end{equation}
First, we split the compositions of $m$ according to the size of their last block: for $1 \leq k < m$ there are $2^{m-k-1}$ compositions of $m$ ending with a block of size $k$ (in particular only one ending with a block of size $m-1$), and there is exactly one ending with a block of size $m$. Collecting, for each $0 \leq n < m$, the compositions whose last block has size $m-n$ (they correspond to compositions $\mu$ of $n$ with the block $m-n$ appended), we get
$$ \sum_{ | \lambda | = m} f_{\lambda} (q) (-1)^{\ell ( \lambda ) + | \lambda | } = \sum_{0 \leq n < m} \sum_{ | \mu | = n} f_{\mu} (q) (-1)^{\ell ( \mu ) +1 + | \mu | +(m-n) } \dfrac{1}{ [m - n ]_{q} !} $$
If we assume by strong induction that $(1)$ holds for $n<m$, then this is equal to
$$ \sum_{0 \leq n < m} (-1)^{m-n+1} f_{n} \left( \frac{1}{q} \right) \dfrac{1}{ [m - n ]_{q} !} = (-1)^{1+m} \sum_{0 \leq n < m} (-1)^{n} f_{n} \left( \frac{1}{q} \right) \dfrac{1}{ [m - n ]_{q} !} $$
We add $(-1)^{2m+1} f_{m}(1/q) = -f_{m}(1/q)$ to this (so that proving $(1)$ amounts to showing the resulting full sum vanishes), and obtain
$$ (-1)^{1+m} \sum_{0 \leq n < m} (-1)^{n} f_{n} \left( \frac{1}{q} \right) \dfrac{1}{ [m - n ]_{q} !} +(-1)^{2m+1} f_{m} \left( \frac{1}{q} \right) = (-1)^{1+m} \sum_{0 \leq n \leq m} (-1)^{n} f_{n} \left( \frac{1}{q} \right) \dfrac{1}{ [m - n ]_{q} !} $$
A straightforward computation shows that $f_{n} (1/q) = q^{n(n-1)/2} f_{n}(q)$. Substituting, the sum becomes
$$ (-1)^{1+m} \sum_{0 \leq n \leq m} (-1)^{n} q^{n(n-1)/2} f_{n} \left( q \right) \dfrac{1}{ [m - n ]_{q} !} $$
By definition $f_{n}(q) = [n]_{q}!^{-1}$; we replace this in the expression and obtain
$$ (-1)^{1+m} \sum_{0 \leq n \leq m} (-1)^{n} q^{(n-1)n/2} \dfrac{1}{ [n]_{q}! [m - n ]_{q} !} $$
If we multiply this by $(-1)^{1+m} [m]_{q}!$ (a nonzero factor, so vanishing is unaffected), then we get
$$ \sum_{0 \leq n \leq m} (-1)^{n} q^{(n-1)n/2} \dfrac{ [m]_{q}! }{ [n]_{q}! [m - n ]_{q} !} $$
which by the Gauss binomial formula (found on page 15, equation (5.5) of Quantum Calculus, Victor Kac) with $a=-1$ and $x=1$ is $0$.

So $ f_{m}(1/q) = \sum_{| \mu | = m} f_{\mu} (q) (-1)^{\ell ( \mu ) + | \mu |}$.
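As a sanity check (my addition, not part of the original argument), both sides of $(1)$ can be evaluated with exact rational arithmetic at a sample value of $q$. The script below does this for small $m$; a spot check at one rational point is of course not a proof.

```python
from fractions import Fraction

def q_int(n, x):
    """[n]_x = 1 + x + ... + x**(n-1)."""
    return sum(x**k for k in range(n))

def q_fact(n, x):
    """[n]_x! = [1]_x [2]_x ... [n]_x (empty product for n = 0)."""
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= q_int(k, x)
    return out

def compositions(m):
    """All compositions (ordered sequences of positive parts) of m."""
    if m == 0:
        yield ()
        return
    for first in range(1, m + 1):
        for rest in compositions(m - first):
            yield (first,) + rest

q = Fraction(3, 2)  # any rational q other than 0 and 1 works here
for m in range(1, 6):
    lhs = 1 / q_fact(m, 1 / q)          # f_m(1/q)
    rhs = Fraction(0)
    for lam in compositions(m):
        term = Fraction(1)
        for part in lam:
            term /= q_fact(part, q)     # f_lambda(q)
        rhs += (-1) ** (len(lam) + m) * term
    assert lhs == rhs, (m, lhs, rhs)
```

The intermediate lemma $f_n(1/q) = q^{n(n-1)/2} f_n(q)$ can be spot-checked the same way.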
How do I embed Google Analytics in a dashboard?

There seem to be 3 methods, listed below. The third is superior for what I'm after. How does it work? Where is this process documented?

1. Using the Google Embed API as shown in the "Basic Dashboard" example of the Embed API documentation. It uses a one-click login method and requires you to know and enter user credentials for every session. This is what I use right now.

2. The Google Embed API documentation describes "Server-side Authorization". It uses a key in a JSON file made via the Google Developer Console. It can be authorized indefinitely to access any kind of data source enabled in the Google Developer Console.

3. There seems to be a third method. It is used, for example, in Google Analytics for WordPress. It uses a one-time approval which leads to a key string (as shown in this video) by which the plugin can be authorized indefinitely. The Google Developer Console is not accessed and no JSON file is uploaded anywhere. This offers the best UX.

Answer: Have you thought about Google Data Studio 360 as a fourth option? It is a super easy way of constructing dashboards. More info here: https://www.google.com/analytics/data-studio/
What is this four-petal lavender to white flower?

This 4-petaled flower is growing in partial sun, slightly on the moist side. The stalk is 2 to 5 feet high, blooming in May in Oregon. Flowers range from white to medium lavender and have a slight, pleasant fragrance. I don't know if it is a self-seeding annual or a perennial that dies back in winter.

Answer: Actually, I think this is Dame's Rocket, Hesperis matronalis. It has a four-petaled flower that is white or lilac and fragrant, blooms May-June in most places, is 1-3 feet tall, with oblong, toothed leaves that alternate up the stem. This is an escaped ornamental garden plant and is listed as a noxious weed in most (if not all) of the US. It reseeds very easily and spreads this way. I know in our area it is hard to get rid of, because people love it and are hesitant to pull it. I have it in my garden but deadhead at every opportunity.

Answer: Well, it appears to be Lunaria, common name Honesty or Moneyplant (the latter name coming about because of the coin-like silver seed cases), most likely Lunaria annua, which is naturalized in your region and does come in white and this deep purplish pink, and lighter pink. However, the leaves up the flowering stem are a little longer and narrower than those we see here in the UK, where this plant is indigenous, which gives me a little doubt. Images below of various colours and of the seed cases: http://www.alamy.com/stock-photo/honesty-lunaria-annua-brassicaceae.html And you may be interested in the following: http://www.landscapeofus.com/garden/lunaria-annua-perennials/ There are cultivars of this plant available for purchase, and the flowers on those may be more prolific and in stronger colours, but always in the white through pink to purple range.

UPDATE: Now that I've seen the other answer, mine's wrong; that one (Hesperis matronalis) is right. The flower is very similar, but the leaves differ some, being generally heart-shaped, and last season I did not notice the seed pods, which I think I would have.

Comment: Yep, the other ID of Hesperis matronalis is more accurate! Just about to vote that answer up.
Android: first tab Intent's onCreate always called, regardless of setting tab2 as the default tab

The following is an example of tabs with Intent content. While debugging I found that whenever the first tab is added to the tab host, in our case the following tab:

    tabHost.addTab(tabHost.newTabSpec("tab1")
            .setIndicator("list")
            .setContent(new Intent(this, List1.class)));

the onCreate method of the List1 activity gets called, regardless of whether it is the current tab or not, even if I define tab2 as the current tab. How do I fix this?

    public class Tabs3 extends TabActivity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            final TabHost tabHost = getTabHost();

            tabHost.addTab(tabHost.newTabSpec("tab1")
                    .setIndicator("list")
                    .setContent(new Intent(this, List1.class)));

            tabHost.addTab(tabHost.newTabSpec("tab2")
                    .setIndicator("photo list")
                    .setContent(new Intent(this, List8.class).addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP)));

            // This tab sets the intent flag so that it is recreated each time
            // the tab is clicked.
            tabHost.addTab(tabHost.newTabSpec("tab3")
                    .setIndicator("destroy")
                    .setContent(new Intent(this, Controls2.class)
                            .addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP)));
        }
    }

Comment: So, you want to add all the tabs, but have something other than the first tab be selected by default, and you don't want the List1 activity to be created until you click on its tab?

Answer: setDefaultTab(1); seems not to work in TabActivity when separate Activities are used as tab content. Use the following instead:

    tabHost.setCurrentTab(1);

This will set "photo list" (i.e. the second tab) as the selected or default tab.

Comment: I have the same issue, and no, this doesn't work. It does set the selected tab, but the onCreate of the first tab gets called no matter what.

Comment: Same error here; the current tab works, but the first tab's onCreate is still also called.

Answer: I have found this same behavior as well, and I do not have a specific fix, but I do know of a work-around. Instead of attaching Activities to each tab, attach a View to each tab. You can then handle the data passing very easily, as each view will be in the same Activity. This also eliminates the need to pass information using Intents. Furthermore, you can create (or inflate) your Views as you need them and with more control. Good luck, -scott
Saving Excel VBA userforms as PDFs

I have created a userform in Excel which works pretty well. Unfortunately, I'm having trouble with saving the forms. The idea is that several colleagues can use this form, so having a standard saving folder or drive does not work, because not all colleagues have access to all drives. I would like a window to appear which asks the user where to save the document as a PDF. I know there is the command SaveCopyAs, but I can't get it running with PDF as the format.

Comment: What specifically do you want saving as PDF? The picture of the form? Its text code? Do you know that the form can be exported (as a .frm file) and then imported in another workbook? Should I show you how you can do that in VBA? But if you really need it as PDF, please clarify the first questions.

Comment: Thanks for your answer! The userform inserts information into a sheet, and I'm trying to save that specific sheet as PDF.

Comment: So your question is not about the userform but simply about saving an Excel sheet as PDF? Can you please reword your question?

Comment: You are probably looking for Workbook.ExportAsFixedFormat (which took me approx. 30s to figure out using the search engine of my choice).

Comment: Hey, yes, the wording of my question is a bit unfortunate. Let me try again: I have a userform that inserts information into an Excel sheet. Within that userform, I want a button which inserts the information into the sheet (which already works), but I also want that button to automatically save that specific sheet as PDF (and preferably let the user choose the place where it should be saved) without the user clicking around in the Excel sheet itself. So basically everything should happen within that userform. Thank you.

Comment: Then try the code I posted; it exports the active sheet as PDF.

Comment: Consider making an [edit] to clarify your post instead of putting all the important information in comments.

Comment: You are right. Sorry, I'm new to this website.

Answer: Try the next simple code, please. Run your code to export the form in the way you do, activate the sheet where this action happens, and run the next code, or insert these code lines inside the procedure making the export:

    Sub exportShAsPDF()
        Dim strPath As String
        strPath = folderPicker 'the ending "\" included...
        ActiveSheet.ExportAsFixedFormat Type:=xlTypePDF, fileName:= _
            strPath & "Form.pdf"
    End Sub

Use the next function to choose the folder where to save the PDF file:

    Function folderPicker() As String
        Dim tempFileDialog As FileDialog, initialFolder As String, strN As String
        initialFolder = "C:\"
        Set tempFileDialog = Application.FileDialog(msoFileDialogFilePicker)
        With tempFileDialog
            .AllowMultiSelect = False
            .InitialFileName = initialFolder
            If Not .Show = -1 Then Exit Function
        End With
        strN = tempFileDialog.SelectedItems(1)
        folderPicker = Left(strN, InStrRev(strN, "\"))
    End Function

Comment: @MarkAKE: Did you find some time to check the above code? If tested, didn't it solve your problem?
Unable to use an upload script that requires an older version of jQuery

I'm using Bootstrap, which has its own jQuery file, but I need to use the code below in my webpage to get the uploaded file's URL. The problem is, as you can see, it uses the 1.7 version of jQuery, and won't work at all with Bootstrap's version. I'm now facing a dilemma: if I use the 1.7 version of jQuery, the rest of the page won't appear correctly, but the script will work. If I use the latest version, I'll get the exact opposite. What should I do? Is there a way to "translate" the code? Thank you in advance!

    <script>
    jQuery(document).ready(function() {
        var options = {
            beforeSend: function() {
                $("#progress").show();
                // clear everything
                $("#bar").width('0%');
                $("#alerte").html("");
                $("#percent").html("0%");
            },
            uploadProgress: function(event, position, total, percentComplete) {
                $("#bar").width(percentComplete + '%');
                $("#percent").html(percentComplete + '%');
            },
            success: function() {
                $("#bar").width('100%');
                $("#percent").html('100%');
            },
            complete: function(response) {
                $("#alerte").html("<font color='green'>" + response.responseText + "</font>");
            },
            error: function() {
                $("#alerte").html("<font color='red'> ERROR: Unable to upload the files</font>");
            }
        };
        $("#myForm").ajaxForm(options);
    });
    </script>

Answer: How about (and this isn't great) using both versions and setting one to a noConflict alias? If you must include multiple versions of jQuery, for instance when using a plugin which depends on an earlier version, you'll want to use $.noConflict(). Take a look at the docs here, and a bit of a tutorial here.
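The mechanics of $.noConflict() can be shown with a dependency-free stub (version numbers and load order here are hypothetical; in a real page the two script tags for the two jQuery files play the role of the two makeJQuery calls, and the upload plugin must be loaded while its required jQuery owns the global $):

```javascript
// Stand-in for loading a jQuery build from a <script> tag: each "load"
// remembers what the global $ pointed to before it, which is the state
// jQuery's own noConflict machinery restores.
function makeJQuery(version) {
  const previous = globalThis.$;          // whatever was loaded earlier
  const jq = { fn: { jquery: version } };
  jq.noConflict = function () {
    globalThis.$ = previous;              // hand $ back to the older library
    return jq;                            // keep the newer one under an alias
  };
  globalThis.$ = jq;
  return jq;
}

makeJQuery('1.7.2');                      // first script: legacy jQuery + plugin
makeJQuery('3.6.0');                      // second script: jQuery for Bootstrap
const modern$ = globalThis.$.noConflict();

// Now $ is 1.7.2 again (for the upload plugin) and modern$ is 3.6.0
// (for any code that addresses the newer build explicitly).
```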
MySQL UPDATE from PHP form

As a novice MySQL user I tried to INSERT, but I just read in the MySQL documentation that you can only insert into blank rows. My UPDATE statement needs work, though, and I'm not sure that I have the syntax correct:

    $query3 = "UPDATE `offices` SET `scash`="$total" WHERE `officename`="$office"";

offices is the table name; scash is the column to be updated; $total is a variable pulled from a POST; $office is a variable pulled from the same database. I only want to set scash to $total where the officename is $office. The error I'm getting is:

    Parse error: syntax error, unexpected T_VARIABLE

Comment: Please learn how to use proper SQL escaping before you hurt yourself. This query is extremely dangerous.

Answer:

    $query3 = "UPDATE `offices` SET `scash`='$total' WHERE `officename`='$office'";

Replace the inner double quotes with single quotes, since double quotes are the string delimiters and can't be used unescaped inside the string. And, as Marc B mentioned, your code might be vulnerable to SQL injection. See this post for how you can avoid that.

Comment: ...and for the average PHP novice, better mention something about SQL injection.

Comment: I really appreciate your efforts. I don't know what I'd do without you guys.

Comment: The best way to say thanks is to learn PDO or mysqli so you don't fall into this trap again in the future.

Comment: I'm on my way to learning mysqli; unfortunately, I'm under a time crunch right now, so this is the route I have to take. I can update my code as of next week.

Answer: You are going wrong at the quotes:

    $query3 = "UPDATE `offices` SET `scash`='$total' WHERE `officename`='$office'";

Also always use LIMIT 1 if you want to update just a single row... And sanitize your inputs before updating your row; at least use mysqli_real_escape_string().

Comment: Using LIMIT 1 on an UPDATE is bad advice. Your WHERE clause should be more specific if you're having limit issues.

Comment: Yes, there is harm. It makes no sense, for one, and secondly it leads to a false sense of security. Randomly updating one row is crazy.

Comment: The problem with LIMIT 1 is you don't get to say which row. MySQL will just pick one for you, which leads to unpredictable behavior. Additionally, if you're using mysqli and somehow involve mysqli_real_escape_string in your code, you're doing it wrong. Please use placeholders for all data escaping.

Comment: The WHERE clause doesn't have a limit; the UPDATE does. It's generally wrong to do this. You really don't have a leg to stand on here.

Answer: If you still want to use double quotes inside double quotes, escape them. Your query can be modified as follows:

    $query3 = "UPDATE `offices` SET `scash`=\"$total\" WHERE `officename`=\"$office\"";
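The placeholder advice in the comments is worth a concrete sketch. PHP's mysqli and PDO both support prepared statements; the mechanics are easiest to show in a self-contained way with Python's built-in sqlite3 driver (the table and values are hypothetical stand-ins for the offices data, not the asker's actual schema):

```python
import sqlite3

# Hypothetical stand-ins for the offices table and the POSTed values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE offices (officename TEXT, scash REAL)")
conn.execute("INSERT INTO offices VALUES ('Main', 0)")

total, office = 1234.5, "Main"
# Placeholders: the values never become part of the SQL text itself.
conn.execute("UPDATE offices SET scash = ? WHERE officename = ?",
             (total, office))

# A hostile "office name" is now just a string that matches no row,
# instead of rewriting the WHERE clause.
conn.execute("UPDATE offices SET scash = ? WHERE officename = ?",
             (0, "Main' OR '1'='1"))

row = conn.execute("SELECT scash FROM offices").fetchone()
# row == (1234.5,)
```

With mysqli the same pattern is a prepare/bind/execute sequence; the point is identical: data travels separately from the SQL string, so no quoting or escaping of user input is ever needed.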
Meteor: composite publish to get data from two collections

I am building a web application using Meteor, and I am trying to use reywood:publish-composite. But I'm doing something wrong, as I want to load the article data and the assigned literature data. I think my main problem is setting up the route correctly to get the data into variables which I could use in the template. Presume I would open /article/BpsCfbhZuoXfEvREG:

publications.js

    Meteor.publishComposite('articles', function() {
        return {
            find: function() {
                return Articles.find();
            },
            children: [{
                find: function(article) {
                    return Literature.find({'article.detail.reference': article._id});
                }
            }]
        };
    });

router.js

    Router.route('/article/:_id', {
        name: 'article',
        waitOn: function() {
            return Meteor.subscribe('articles'); /* correct? */
        },
        data: function() {
            return {
                article: Articles.findOne({ _id: this.params._id }),
                references: Literature.find({}) /* guess, this is wrong */
            };
        }
    });

Literature has this structure, and I'm looking for reference:

    {
        "_id": "YAEYvJ7tvXxTvnFtv",
        "article": [
            {
                "title": "Article 1",
                "detail": [
                    {
                        "reference": "BpsCfbhZuoXfEvREG",
                        "year": 2000,
                    }
                ]
            }
        ]
    }

Answer: You want to load an individual article? That looks like what you are doing, but you are subscribing to all articles. You could pass the parameter to your subscription:

    waitOn: function() {
        return Meteor.subscribe('article', this.params._id);
    }

Secondly, since your Literature is an array of objects, you should use $elemMatch:

    Meteor.publishComposite('article', function(articleId) {
        check(articleId, String);
        return {
            find: function() {
                return Articles.find(articleId);
            },
            children: [{
                find: function(article) {
                    return Literature.find({
                        'article.detail': {
                            $elemMatch: { 'reference': article._id }
                        }
                    });
                }
            }]
        };
    });

Comment: Thanks. And what do I have to do to get the data into variables for use in the template?

Comment: Yes, I want to get one single article and multiple assigned pieces of literature.

Comment: What you have looks fine. Is it not reacting as expected? If so, what seems wrong about it?
graphicsmagick: composite and crop in the same command

I need to take a specific crop of an image and put it over another image at a certain position, resized. I can crop the first image and save it to a file in one command, and then composite the 2 images in another command. However, I would like to do it in a single command. Is this possible with GraphicsMagick, and how? Here are the 2 commands I am using at the moment:

    gm convert -crop 1457x973+254+413 amber.jpg tmp.jpg
    gm composite -geometry 6000x4000+600+600 tmp.jpg lux_bg.png out.jpg

The reason for wanting this is to avoid writing to disk and then reading again, when all this could be done in memory. With ImageMagick, for example, the same 2 commands would be written as a single command like this:

    convert lux_bg.png \( amber.jpg -crop 1457x973+254+413 \) -geometry 6000x4000+600+600 -composite out.jpg

I am doing this with ImageMagick for now but would love to do it with GraphicsMagick.

Answer: If your reason is simply to avoid creating a temporary file, you can still do it with two commands by constructing 'pipelines' (a great concept invented by, afaik, Douglas McIlroy around 1964):

    gm convert -crop 1457x973+254+413 amber.jpg - | gm composite -geometry 6000x4000+600+600 - lux_bg.png out.jpg

Hint: note the two - dashes in the two commands, and the | pipe, since - can be used to mean standard output and standard input in the two commands respectively. This means that no file is created; it should all happen in memory. You can find this in the help (gm -help convert | grep -i -e out -B 1):

    Specify 'file' as '-' for standard input or output.

The use of - is common in unix-likes and must have been inspired by, or by something related to, the POSIX standard's Utility Syntax Guidelines.

Answer: Have you tried the && operator? Your command would become:

    gm convert -crop 1457x973+254+413 amber.jpg tmp.jpg && gm composite -geometry 6000x4000+600+600 tmp.jpg lux_bg.png out.jpg

Comment: They're still technically 2 commands.

Comment: The reason for wanting a single gm command is to avoid creating a temporary file (tmp.jpg in my example), which involves unnecessary reads/writes from/to the hard disk.

Comment: I'm using gm in node; how would I translate this command to it? Currently this errors out if I uncomment the crop line:

    gm(img)
        .resize(1280)
        .composite(__dirname + '/../public/images/header.jpg')
        .geometry('+0+0')
        // .crop(1280,720,0,0)
Multiple items with the same ID in the view - ZF2

I am building a web application using ZF2 and Doctrine. I have a view containing a base form to which the user can add multiple instances of a fieldset; the fieldsets are added via an HTML template and JS cloning. We are making use of the Doctrine hydrator and cascade=persist to write to the DB. It is all working, but I am concerned that when the fieldsets are added it results in multiple items with the same ID, which breaks W3C standards. Does anyone have a solution or workaround for this? Or would it be considered acceptable in this instance? An example of one fieldset element:

    $this->add(array(
        'name' => 'glassAssemblyID',
        'attributes' => array(
            'type' => 'hidden',
            'id' => 'glassAssemblyID',
        ),
    ));

Many thanks, James

Comment: uuid() is your friend

Answer: This is an easy one. Just change your code to:

    $this->add(array(
        //'name' => 'glassAssemblyID',
        'attributes' => array(
            'type' => 'hidden',
            //'id' => 'glassAssemblyID',
        ),
    ));

There is no point in putting out an element id which is obviously not being used. If you really feel you do need ids for some reason, then put out something like EntityType-id for your ids.

Comment: I think (can't test it right now) this won't work, since ZF2 automatically attaches an ID to every input if the user hasn't defined one.

Comment: Sure, but it will be the element name and will be unique.

Answer: You should set the ID in JavaScript after cloning the element.

Comment: Problem is that he is using the ZF2 server-side framework, not JavaScript.

Comment: See what the OP wrote: "the fieldsets are added via HTML template and js cloning".
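Following the suggestion about setting the ID in JavaScript after cloning, one simple client-side approach is to rewrite the ids in the fieldset template before each insertion. A DOM-free sketch (the helper name and attribute handling are my own illustration, not part of ZF2):

```javascript
// Hypothetical helper: before inserting each cloned fieldset, suffix every
// id (and matching label "for") attribute in the template markup with a
// running index, so the document never contains duplicate ids.
function uniquifyIds(templateHtml, index) {
  return templateHtml.replace(
    /\b(id|for)="([^"]+)"/g,
    (match, attr, value) => `${attr}="${value}-${index}"`
  );
}

// e.g. the hidden input from the question, for the third clone:
const html = uniquifyIds('<input type="hidden" id="glassAssemblyID">', 3);
// html === '<input type="hidden" id="glassAssemblyID-3">'
```

Any server-side code that reads the POSTed fieldsets by name is unaffected, since only id/for attributes are rewritten.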
Complement of a knotted torus in $S^3$ Let $T$ be a solid torus and $K$ be a knot. Suppose $i: T \rightarrow S^3$ is an embedding of $T$ such that its image is a regular neighbourhood of the knot $K$ (i.e., a thickened knot). Now, what does the complement of this embedding $i(T)$ look like? I think it will also be a solid torus, say $T^\prime$, such that the meridian and longitude of $T^\prime$ are the longitude and meridian of $T$ respectively. The only time the complement of a (thickened) knot is another solid torus is when the knot is trivial, i.e. the unknot. So if the knot is just a circle, you are right. But for every other knot, there is a lot going on. By the way, this space you describe is called the knot complement or the exterior of the knot. What exactly is going on is a current area of research and has many interesting questions. In particular, these manifolds are often hyperbolic and methods from geometry and topology are used together to study them. See wikipedia for some more information. I understand that gluing two solid tori along the boundary, such that the meridian of one goes to the longitude of the other, would yield $S^3$. So, can't we just take homeomorphic images of these two tori and say the same? @AjayKumarNair No. The defining feature of a solid torus is a compressible disc, which you can still find in the inside of the torus that follows the knot. But if the exterior were a solid torus, there would be a compressible disk there too, and there is not one. Try reading "The Knot Book" by Colin Adams, which you can find online for free. Page 84. A Seifert surface $\Sigma$ of a knot is an orientable surface in the complement of $i(T)$ whose boundary lies in $\partial i(T)$ and where $\partial\Sigma$ is a loop that is homologically trivial (it is a longitude). A non-trivial knot's Seifert surface has genus at least $1$, and only the unknot has a genus-$0$ Seifert surface. Using Kneser's lemma, one may assume $\pi_1(\Sigma)\to\pi_1(S^3-i(T)^\circ)$ is injective.
If the complement of $i(T)$ were a solid torus, then $\pi_1(S^3-i(T)^\circ)$ would be $\mathbb{Z}$, so $\pi_1(\Sigma)=1$, hence $\Sigma$ is a disk, implying the knot is trivial. If the complement were a solid torus, then you could reassemble the space using Dehn filling. To get $S^3$, you will find that, up to Dehn twists, the meridian disk of one solid torus has to be glued along the longitude of the other solid torus, again implying the knot was the unknot because that disk is a Seifert surface. If the complement were a solid torus, then the fundamental group of the knot complement (the knot group) would be $\mathbb{Z}$, however the trefoil knot has a knot group isomorphic to the braid group on three strands, for example.
What type of series is $A_1 + A_2 n + A_3 \frac{n(n+1)}{2}$ I am solving a coding problem and I break it down to a point where I get a series like this: $$A_1 + A_2 n + A_3 \frac{n(n+1)}{2} + A_4 \frac{n(n+1)(n+2)}{2\cdot 3} + A_5 \frac{n(n+1)(n+2)(n+3)}{2\cdot 3 \cdot 4}$$ Now, can this series be further broken down into some formula that is faster to calculate? As n can be greater than $10^5$, the calculation of factorials is going to take a lot of time. I figured out that each of the variable parts is one of the figurate numbers. So I think there might be some formula to calculate it faster. Any help on how to solve it or where I can learn more? Could you make your question more specific? Would you like a closed form, or a name? Also, what have you tried so far? done with my question now Do you mean just those $5$ terms or an arbitrary number of them? I think you accidentally wrote A3 two times and wrote A4 instead of A5. an arbitrary number of them... I tried adding \ldots but it added dots from a new line so I skipped it Hint: If $c_k$ is the coefficient of $A_k$, then $c_1 = 1$ and $c_{k+1} = something \times c_k$. so some sort of recurrence relation? Not quite a recurrence, because the $something$ depends on $k$. You can write it as a sum (in your example, $N=4$) $$S = \sum_{k=0}^{N} \frac{(n+k-1)!}{(n-1)!\,k!} A_{k+1}.$$ If I don't know what the $A$'s are more specifically, it is hard to simplify this. However, the sum without the $A$'s can be evaluated: $$\sum_{k=0}^{N} \frac{(n+k-1)!}{(n-1)!\,k!}=\frac{(N+n)!}{N!\,n!},$$ if that is of any help. A's can be any number as they come from an input array. If there is no systematic relationship between the A's, then I'm afraid you can't skip the computations.
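The hint above already gives an $O(N)$ evaluation with no factorials at all: the coefficients satisfy $c_1 = 1$ and $c_{k+1} = c_k \cdot (n+k-1)/k$. A minimal sketch in Python (the function name and input-array format are my own, just for illustration):

```python
def series_sum(coeffs, n):
    """Evaluate A_1*c_1 + A_2*c_2 + ... for coeffs = [A_1, A_2, ...],
    where c_1 = 1 and c_{k+1} = c_k * (n + k - 1) / k."""
    total = 0
    c = 1
    for k, a in enumerate(coeffs, start=1):
        total += a * c
        # the division is exact: c * (n + k - 1) / k is the binomial C(n+k-1, k)
        c = c * (n + k - 1) // k
    return total

# n = 2: the coefficients are 1, 2, 3, 4, 5
print(series_sum([1, 1, 1, 1, 1], 2))  # → 15
```

Each step costs one multiplication and one exact integer division, so $n$ of order $10^5$ (or far beyond) is no problem, and Python's big integers avoid overflow.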
Does an interaction of entangled particles with each other cause decoherence? I'll apologize in advance if this is not an appropriate place for my question. My background is not in physics, and my understanding of quantum mechanics is extremely rudimentary at best, so I hope you'll be forgiving of my newbish question. Given a system of entangled particles (e.g., 2 or more electrons), possibly in a superposition state: if the particles interact with each other, what effect does this have on their quantum state? Is their state now determined (but perhaps unknown until observed)? Interactions within the system could lead to entangled or disentangled states, as @Lagerbaer said in his answer. However, this has nothing to do with decoherence (mentioned in the title), which can only be caused by interaction with another (external) system. That depends on the interaction. Consider two spins interacting with a Heisenberg type interaction $$H = -J \vec{S}_1 \cdot \vec{S}_2$$ which basically means that the spins want to be parallel if $J > 0$ and anti-parallel if $J < 0$. For anti-ferromagnetic coupling, $J < 0$, the ground state is a singlet, which for spin-1/2 will look like $$\frac{1}{\sqrt{2}} \left[ |\uparrow \downarrow\rangle - |\downarrow \uparrow\rangle \right]$$ which is one of the famous Bell-states, i.e., it's entangled. So here we have interaction and the ground-state is entangled. What if we had ferromagnetic coupling? Then the ground-state is degenerate. It could be the state above but with a + sign in the superposition, which would again be an entangled state, or it could be either $|\uparrow \uparrow\rangle$ or $|\downarrow \downarrow\rangle$ and these two states are not entangled. So interactions "with each other" can either lead to an entangled or disentangled state.
It's the other way around: interactions within the system will generally result in an entangled state, while interactions with the environment are associated with measurement and decoherence/collapse. Although it's possible in principle for an interaction within the system to disentangle it, it won't generally happen because unentangled/separable states are very special (they form a measure-zero subset of the space of multiparticle states). On the other hand, independently measuring parts of the system always breaks the entanglement and leaves you with a separable state (a partially separable state if it's a partial measurement).
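The singlet-vs-product distinction in the answer above can be checked numerically: tracing out one spin of the singlet leaves a maximally mixed reduced state (purity $1/2$), while a product state stays pure (purity $1$). A small sketch in Python with NumPy (not part of the thread, purely illustrative):

```python
import numpy as np

# Basis states |up> and |down> for a single spin-1/2
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Singlet (Bell) state: (|ud> - |du>) / sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Product state |uu> for comparison
product = np.kron(up, up)

def purity_of_reduced_state(psi):
    """Tr(rho_A^2) after tracing out the second spin:
    1.0 for a product state, 0.5 for a maximally entangled pair of qubits."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_a = np.trace(rho, axis1=1, axis2=3)  # partial trace over spin 2
    return float(np.trace(rho_a @ rho_a).real)

print(purity_of_reduced_state(singlet))  # → 0.5 (entangled)
print(purity_of_reduced_state(product))  # → 1.0 (separable)
```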
Why doesn't LinkedHashSet implement List? To my understanding, a List is an ordered collection of items. And a Set is a collection of unique items. Now my question is, why does LinkedHashSet, which describes an ordered collection of unique items, implement the Set interface (=> unique), but not the List interface (=> ordered)? One possible argument is that List is intended for random access datastructures, but that would be invalidated by the fact that LinkedList doesn't have "true" random access either. In fact, LinkedHashSet is backed by an internal linked list. Also the documentation for List says otherwise: Note that these [positional index] operations may execute in time proportional to the index value for some implementations. Because a List allows duplicates. Because, as the name suggests, it is a SET In fact, LinkedHashSet is backed by a LinkedList - wrong. It is backed by an internal implementation of a linked list, not by java.util.LinkedList. @Eran thanks, I updated my question @Tom the list interface doesn't specify that it has to allow duplicates. It even says: "It is not inconceivable that someone might wish to implement a list that prohibits duplicates" @ΦXocę 웃 Пepeúpa ツ that's just the class' name, but the behaviour suggests that it's both a set and an (immutable) list I don't know the answer, but I have a suggestion for how you could find out: write a class that implements List and delegates all its operations to an internal LinkedHashSet. Find out where the implementation gets messy. This is not opinion based. This is a question about what were the reasons of LinkedHashSet not implementing the List interface when it was created by Sun/Oracle. @KevinKrumwiede I went ahead and implemented a version of LinkedHashSet that implements List: https://gist.github.com/Felk/185fec05c0d081908ec668a3fb208723 looks fine to me @JaroslawPawlak - Exactly, the class was created by Sun/Oracle.
It is vanishingly unlikely anyone here is close enough to the designers of the Java class library to be able to accurately explain why the class is the way it is. Hence answers are going to share our opinions on why we might have done it this way were we in their place. Which we're not. Furthermore this question is not asking anybody to solve a problem the OP is having - it's not like we can change the Java standard library for them. If it implemented a List you would be able to use it as a List: List list = new LinkedHashSet(); This might lead to issues with duplicates which don't appear in Set but are allowed in List. In other words, you shouldn't declare that something is a List when it doesn't allow duplicates even if it holds the order and allows adding, getting, removing and checking the size. Unlike sets, lists typically allow duplicate elements --List documentation From that link: "It is not inconceivable that someone might wish to implement a list that prohibits duplicates, by throwing runtime exceptions when the user attempts to insert them . . ." @KevinKrumwiede "...but we expect this to be rare" Rare, not prohibited. Accepting duplicate elements is not a characteristic of List. It's a characteristic of common implementations of List. @KevinKrumwiede And that's the issue. It would be quite confusing when you have a list which doesn't allow dupe and you don't know the actual type, because you only work with List. Although the interface itself might allow it, it isn't what the developer would expect and that counts more (at least on my count, since code readability is a big plus). List.add(E e) documentation says "Appends the specified element to the end of this list (optional operation).". The method returns boolean saying whether collection has been changed by the call, as specified in Collection.add(E e) documentation: "Returns true if this collection changed as a result of the call. 
(Returns false if this collection does not permit duplicates and already contains the specified element.)". Hence, having List list = new LinkedHashSet(); and list.add(1); list.add(1); returning first true and then false looks like a logical scenario to me. Because LinkedHashSet is a class which implements the Set interface. List and Set each have their own functionality: List allows duplicates while Set does not. But if you want insertion order preserved in a HashSet, then LinkedHashSet is used, with no duplicates in it. Set s = new LinkedHashSet(); is the implementation of a set in which insertion order is preserved and duplicates are not allowed. List does not necessarily allow duplicates, please see the other answers and comments. https://www.google.co.in/search?source=hp&q=list+allow+duplicate+values%3F&oq=list+allow+duplicate+values%3F&gs_l=psy-ab.3..0i22i30k1l3.588.5222.0.55<IP_ADDRESS>.<IP_ADDRESS>0.2066.2-6j2.8.0....0...1.1.64.psy-ab..5.8.2059.0..0j35i39k1j0i67k1j0i20k1.utT8YhyMEcY please gather the knowledge then comment
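As a cross-language aside (not from the thread): Python has no LinkedHashSet, but dict.fromkeys gives the same "insertion-ordered, unique" behaviour, and it makes the List-contract problem obvious: duplicates are silently dropped, which a List must never do.

```python
items = ["a", "b", "a", "c", "b"]

# insertion order preserved, duplicates dropped - LinkedHashSet-like behaviour
ordered_unique = list(dict.fromkeys(items))
print(ordered_unique)  # → ['a', 'b', 'c']

# a List contract would require all five elements with stable indices;
# silently swallowing duplicates is Set behaviour, hence the Set interface
assert len(ordered_unique) != len(items)
```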
Problems with JavaScript parseInt and WSH I'm having trouble with a Windows Scripting Host script. Here is an example of problem input code: WScript.Echo(typeof(parseInt('woot'))) WScript.Echo(parseInt('woot')) The output is: number 1.#QNAN Shouldn't 'woot' be evaluating as a string? How can I get around this limitation? I found a solution here: Validate decimal numbers in JavaScript - IsNumeric() Can a mod please close this request? You can check if parseInt returns NaN (not a number): isNaN(parseInt('woot', 10)) typeof returns number because NaN is a number in JavaScript. But remember that isNaN is a little bit broken; read more #Examples Anything from parseInt is a number, since even NaN is treated as a number by JS. Therefore, you'd need to check the type of 'woot' before you parseInt it.
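The same gotcha exists outside JScript: NaN is a value of the numeric type in essentially every IEEE-754 language, so a type check alone never catches a failed parse. A quick illustration in Python (for comparison only, not WSH code):

```python
import math

x = float("nan")             # analogous to parseInt('woot') giving NaN / 1.#QNAN
assert isinstance(x, float)  # still a "number", like typeof NaN === 'number' in JS
assert math.isnan(x)         # an explicit NaN check is needed instead
assert x != x                # NaN is the only value unequal to itself
print(type(x).__name__)      # → float
```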
Is it bad to allow my team to develop more knowledge and skills than I? Can there be any kind of negative implications if I am the manager (formerly the employee with most experience on the product, but also the most senior in the company within the team), but over time I allow my team to grow so fast and with so much knowledge and experience that all of them have their specialities in which they are much better than I? What I mean by negative implications includes: My reputation within the company My ability to advance to higher ranks within the company The main reason I ask this is that I simply have no more time to train myself and work on improving my own skills and knowledge in the area. I am virtually full-time focused on higher-level issues rather than learning technical updates. No, it is far from bad. In fact, as a manager it is your job to encourage your team to develop both as individuals and as members of the team. Your performance will be judged on your management skills now rather than your technical knowledge. You should still find time to keep your knowledge as up to date as you can, but you should expect some members of your team to have a deeper, more specialised knowledge of their areas of responsibility than you have. It is normal for new managers to worry about this kind of thing, but you have different responsibilities now and you will be measured against a different yardstick. Your reputation in the company and your chances of further promotion will now depend on how you manage your team and on their performance under your guidance. In my experience only inept, paranoid managers worry about members of their team developing more knowledge than them. I have worked for a few people like that and never enjoyed the experience. On the other hand, managers who acknowledged my strengths and asked my opinion/advice on matters I had knowledge in were a delight to work for. As a result I worked harder, learned more and was happier in my work.
Great answer, especially "only inept, paranoid managers worry about members of their team developing more knowledge than them." If any member of your team didn't grow professionally, that would be a bad sign for your management style and skills. If anything, push them to learn! The main reason I ask this is that I simply have no more time to train myself and work on improving my own skills and knowledge in the area. I am virtually full time focused on more higher level issues rather than learning technical updates. Why would you learn technical updates? You have such an amazingly wide field to learn in how to be a good manager that it can easily fill a lifetime. Is it still part of your job to perform the technical tasks? If yes, you are in a very uncomfortable in-between place, not really management – in a good organisation, you are only ever doing one of these jobs at a time. If not, why would you worry about training yourself there? You need enough expertise to judge the results of your people so you can appraise who needs help (not from you, but from more experienced peers) and who can mentor them – and then you can do management’s primary job, namely removing all the obstacles that stop your people from doing theirs. Especially in smaller companies, it is quite normal for someone to mix management/project leader roles with technical work. Also, many senior engineer roles include some management. @Paulhiemstra Sure, but the question sounds like there is more than one level of management. I did think about the small ten-people company and if what I wrote was fair to them – I think it is, but I fully agree that some people may need to mix between two or more different jobs. I just think they should try and keep them separated mentally. Management is all about coordinating and motivating a group of people. A manager's job isn't to know all the ins and outs of the field they are in. That is why you have technical experts whose job it is to know the technology.
Trust them; if for some reason you can't, then you need to find the people you can. Your job is all about making sure that your people have the right overall direction and everything necessary so they can do their job. In other words, you tell them where to go and make sure nothing is going to stand in their way. This means listening to them, taking care of any issues that crop up and possibly even changing direction if it's necessary. Your performance is now based on how well the team performs. If the team is successful, it looks good on you. If the team is struggling then this looks bad and you need to fix that. As a manager the fixes are rarely technical ones and almost always due to personalities and/or environment, which you now have control over. So, let go of trying to stay on top of the latest/greatest tech items; that's what your people are for.
Does Ansible's systemd module run a `daemon-reload` before starting a service? I have a playbook where I first copy a new service file to /etc/systemd/system/ and then start the service. Normally, I'd have to run sudo systemctl daemon-reload before starting the service. There is a daemon_reload parameter to the systemd module, but the description is not clear. It says "When set to true, runs daemon-reload even if the module does not start or stop anything." It sounds like it usually runs daemon-reload before starting or stopping services, and that this switch just makes it run daemon-reload always, even when there's no state change. Example of what I'm doing: - name: Install Foo hosts: all tasks: - name: Install SystemD service become: true copy: src: ./foo.service dest: /etc/systemd/system/ - name: Ensure the service is running become: true systemd: name: foo.service enabled: true state: started Ansible does not implicitly run systemctl daemon-reload. It only runs it when you set daemon_reload: true, but in this case it will run the daemon-reload command regardless of whether or not it needs to start or stop any services. When in doubt about the documentation you can always refer to the source.
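So the common pattern (a sketch, not taken from the thread) is to request the reload explicitly on the task that starts the service after copying a new unit file:

```yaml
- name: Ensure the service is running
  become: true
  systemd:
    name: foo.service
    daemon_reload: true   # run `systemctl daemon-reload` before acting
    enabled: true
    state: started
```

Alternatively, notify a handler from the copy task and do the daemon-reload there, so it only happens when the unit file actually changed.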
Delete the beginning of a CString I receive file paths in the form of a CString. For example: C:\Program Files\Program\Maps\World\North-America I need to remove everything before Maps, i.e. C:\Program Files\Program\ but this file path could be different. I tried: CString noPath = fullPath; fullPath.Truncate(fullPath.ReverseFind('Maps')); noPath.Replace(_T(fullPath),_T("")); which doesn't work consistently. It's cutting some file paths in the wrong place. The solution doesn't need to use Truncate/Replace but I'm not sure how else to do this. 'Maps' should be "Maps" Please provide a succinct problem description. What you have stated so far is subject to truncation on either side. That problem cannot be solved given the information we have. You may get better luck if you change the tags from C++ to c++/cli, or at least include the tag. I've found that dealing with paths with windows stuff is kludgy, much less doing it with CString. Once I started using Boost Filesystem, I've never looked back. The CString I'm familiar with doesn't have a Truncate member and ReverseFind only works with single characters, not substrings; so fullPath's type is a mystery to me. One thing I noticed: _T(fullPath) appears in your code, but the _T macro only works for literals (quoted strings or characters). Anyway, here is a CString-only solution. CString TruncatePath(CString path, CString subdir) { CString sub = path; const int index = sub.MakeReverse().Find(subdir.MakeReverse()); return index == -1 ? path : path.Right(index + subdir.GetLength()); } ... CString path = _T("C:\\Program Files\\Program\\Maps\\World\\North-America"); CString sub_path = TruncatePath(path, _T("Maps\\")); Gives you sub_path: Maps\World\North-America I just needed to add 5 to the index so it would print Maps\World\North-America but this worked perfectly. Thank you I updated the function so that subdir is now kept at the beginning of the returned truncated path. You can use the Delete function for this purpose.
for example: CString path(_T("C:\\Program Files\\Program\\Maps\\World\\North-America")); path.Delete(0, path.Find(_T("Maps"))); // pass the first index and the number of characters to delete Now the variable path has the value Maps\\World\\North-America This will fail in undefined ways in case the substring is not found. And the CString::Find call will match substrings of directory names, too. This works for the sample input, but not for much more.
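The same find-then-cut idea, including the guard for a missing substring that the last comment warns about, can be sketched in a few lines (Python here purely to show the logic; note that find, like CString::Find, will also match "Maps" inside a longer directory name):

```python
full_path = r"C:\Program Files\Program\Maps\World\North-America"

idx = full_path.find("Maps")
# keep the original path when the marker is absent instead of failing
no_path = full_path[idx:] if idx != -1 else full_path
print(no_path)  # → Maps\World\North-America
```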
Add custom column at custom posts list I want to have a column (e.g. Send Email) in my custom posts (books) list. In each row there should be a button (Send) and when I click on it I want to send an email. I have seen there is a hook manage_posts_custom_column to add a custom column, but this hook only adds post meta as columns, like the featured image etc. How to do this, please help me. Screenshot Adding A New Column To The books Post Table Here we can use the filters manage_{$post->post_type}_posts_custom_column manage_{$post->post_type}_posts_columns or for the books post type: manage_books_posts_custom_column manage_books_posts_columns Here's an example of how we could display a button, for each row in the send_email column: /** * Books Post Table: Display a button in each row in the 'send_email' column */ add_action( 'manage_books_posts_custom_column', function ( $column_name, $post_id ) { if ( $column_name == 'send_email') printf( '<input type="button" value="%s" />', esc_attr( __( 'Send Email' ) ) ); }, 10, 2 ); To add the send_email column we can use: /** * Books Post Table: Add the 'send_email' column */ add_filter('manage_books_posts_columns', function ( $columns ) { if( is_array( $columns ) && ! isset( $columns['send_email'] ) ) $columns['send_email'] = __( 'Send Email' ); return $columns; } ); We could also limit the column width with: /** * Limit the 'send_email' column width */ add_action( 'admin_print_styles-edit.php', function() { echo '<style> .column-send_email { width: 100px; }</style>'; } ); Here's an example output: You will then have to implement how the button will work. ps: I removed the second part from my answer, since that part of your question would be better served as a new separate question. First of all thanks for your answer friend. 1) Button successfully added :) can you please also let me know how to keep it at the right side with width equal to the button only, I have attached a screenshot for better understanding.
2) The second answer is not exactly clear: I do not want to remove Draft, I want to remove "- post_status", see screenshot. Looks like your part #2 regarding post status has been solved. I added a hack to adjust the column width. @BhuvneshGupta
Tell if a sum is convergent $\sum\limits_{n=1}^\infty \frac{2}{n(n+1)}$ $$\sum\limits_{n=1}^\infty \frac{2}{n(n+1)}$$ I tried to solve this by saying that $$\frac{2}{n(n+1)} = \frac{2}{n} - \frac{2}{n+1}$$ I then made two sums like this: $$\sum\limits_{n=1}^\infty \frac{2}{n} - \sum\limits_{n=1}^\infty \frac{2}{n+1}$$ And since we know that $1/n$ is divergent I said that $\sum\limits_{n=1}^\infty \frac{2}{n(n+1)}$ would also be divergent, which for some reason was wrong. So my question is simply: what am I doing wrong? See http://en.wikipedia.org/wiki/Telescoping_series Try putting in values of n up to, say, 10, and check. Nearly all terms cancel out. Once you have split the fraction up, write down the first few terms of the initial sum in the split form. Can you find an explicit form for the sum to $N$? Then say what happens as $N\to \infty$? What you have done is to split a sum with positive terms into one with both positive and negative terms. This second sum can only be safely rearranged as you have done if it is absolutely convergent. Your observation at the end shows that it isn't absolutely convergent, and demonstrates that the rearrangement was illegitimate. This answer will consist of two parts: first, I will explain why your solution is wrong, then give you advice on how to solve your problem correctly. You say that because $a_n=b_n + c_n$ and the series $$\sum_{n=1}^\infty b_n\\ \sum_{n=1}^\infty c_n$$ both diverge, then $$\sum_{n=1}^\infty a_n$$ must also diverge. By that logic, take the series $$a_n = \frac{1}{2^n}$$ a famous example of a convergent series. You can then write $a_n = \left(\frac1{2^n}-1\right) + 1$, so $a_n=b_n + c_n$ if you define $b_n=\frac1{2^n} - 1$ and $c_n=1$. It is clear that both $\sum b_n$ and $\sum c_n$ diverge, but still, $a_n$ can be summed. Therefore, your argument is not valid. Now, how can we prove that the series converges?
Well, you have a series of tests available: the ratio test, the root test, and Raabe's test are probably known to you. I suggest you try them out (not all will work, but one should). Another way of doing this is noticing that $\frac{2}{n(n+1)}$ is somewhat similar (asymptotically) to $\frac1{n^2}$. So, maybe, you can find some constant $C$ for which you could say that $$\frac{2}{n(n+1)}\leq C\cdot \frac1{n^2}?$$ Yet another thing you could do is write the sum (after writing $\frac{1}{n(n+1)}$ as $\frac1n -\frac1{n+1}$) down (say, the first $4$ summands) to see what happens. You may be pleasantly surprised at the result. Besides what has been said in comments and answers, you could have noticed that $$\sum\limits_{n=1}^\infty \frac{2}{(n+1)^2}<\sum\limits_{n=1}^\infty \frac{2}{n(n+1)}<\sum\limits_{n=1}^\infty \frac{2}{n^2}$$ Admitting you know the result of the summations of the extremes, you then have $$\frac{\pi ^2}{3}-2<\sum\limits_{n=1}^\infty \frac{2}{n(n+1)}<\frac{\pi ^2}{3}$$ There are multiple ways you can see that this series in fact is convergent. You can correctly split up the fraction such that $$\frac{2}{n(n+1)}=\frac{2}{n}-\frac{2}{n+1}. $$ However, you should notice here that almost all terms will cancel out. This is called a telescoping series. In fact, you will get $$\sum\limits_{n=1}^N[a_n-a_{n-1}] = a_N-a_{0}.$$ You can also see that $$\frac{1}{n(n+1)}\sim\frac{1}{n^2} $$ and I'm guessing you know that the series of the latter is convergent. Here you can use the limit comparison test. Here are the steps, $$ \sum\limits_{n=1}^\infty \frac{2}{n(n+1)} $$ $$= 2\sum\limits_{n=1}^\infty \left(\frac{1}{n}-\frac{1}{n+1}\right) $$ $$=2\lim\limits_{m\to\infty}\sum\limits_{n=1}^m \left(\frac{1}{n}-\frac{1}{n+1}\right) $$ $$=2\lim\limits_{m\to\infty} \left(1-\frac12+\frac12-\frac13 + \cdots +\frac{1}{m}-\frac{1}{m+1}\right) $$ $$=2\lim\limits_{m\to\infty} \left(1-\frac{1}{m+1}\right) $$ $$= 2(1-0) = 2$$ Therefore, this series converges to $2$.
Divergent minus divergent may be convergent: $\sum (1/n - 1/n) = 0$ while $\sum 1/n$ is divergent. The given series is a convergent telescoping series and it is a difference of two divergent series. Let's write the partial sum $\sum\limits_{n=1}^k \frac{2}{n} + \sum\limits_{n=1}^k \frac{-2}{n+1} = \frac{2}{1}-\frac{2}{2}+\frac{2}{2}-\frac{2}{3}+\frac{2}{3}-\cdots-\frac{2}{k}+\frac{2}{k}-\frac{2}{k+1}.$ Only $2-\frac{2}{k+1} \rightarrow 2$ remains as $k \rightarrow \infty$. So by the definition of the convergence of a series it is convergent. You cannot say that in a sum of two series, if one diverges then the entire sum diverges. You can only say, for example, that in a sum of two series, if one converges and the other diverges, the sum diverges; it is simple to demonstrate.
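The telescoping identity $S_m = 2 - \frac{2}{m+1}$ used above is easy to sanity-check numerically; a short Python sketch (not part of the original answers):

```python
def partial_sum(m):
    """m-th partial sum of sum 2/(n(n+1))."""
    return sum(2.0 / (n * (n + 1)) for n in range(1, m + 1))

for m in (1, 10, 1000):
    # telescoping predicts S_m = 2 - 2/(m+1)
    assert abs(partial_sum(m) - (2 - 2 / (m + 1))) < 1e-12

print(partial_sum(10_000))  # → approximately 1.99980002
```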
How to discover why a FileNotFoundException is thrown if the file is in the directory? This is server side code: import java.io.FileOutputStream; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.net.ServerSocket; import java.net.Socket; public class Server extends Thread { public static final int PORT = portno; public static final int BUFFER_SIZE = 100; ServerSocket serverSocket; @Override public void run() { try { serverSocket = new ServerSocket(PORT); while (true) { Socket s = serverSocket.accept(); saveFile(s); } } catch (Exception e) { e.printStackTrace(); } } private void saveFile(Socket socket) throws Exception { ObjectOutputStream oos = new ObjectOutputStream(socket.getOutputStream()); ObjectInputStream ois = new ObjectInputStream(socket.getInputStream()); FileOutputStream fos = null; byte[] buffer = new byte[BUFFER_SIZE]; // 1. Read file name. Object o = ois.readObject(); System.out.println("file name is:"+o.toString()); if (o instanceof String) { fos = new FileOutputStream("D://anmolrishte_files//"+o.toString()); } else { throwException("Something is wrong"); } // 2. Read file to the end. Integer bytesRead = 0; do { o = ois.readObject(); if (!(o instanceof Integer)) { throwException("Something is wrong"); } bytesRead = (Integer)o; o = ois.readObject(); if (!(o instanceof byte[])) { throwException("Something is wrong"); } buffer = (byte[])o; // 3. Write data to output file. 
fos.write(buffer, 0, bytesRead); } while (bytesRead == BUFFER_SIZE); System.out.println("File transfer success"); fos.close(); ois.close(); oos.close(); } public static void throwException(String message) throws Exception { throw new Exception(message); } public static void main(String[] args) { new Server().start(); } } This is client side code public class MessageSender extends AsyncTask<Void,Void,Void> { File file; private static final String ipAddress = "address"; private static final int port = portno; private Socket socket; @Override protected Void doInBackground(Void... voids) { file = new File(Environment.getExternalStorageDirectory(), "/123.jpg"); System.out.print(file); try { socket = new Socket(ipAddress, port); ObjectInputStream ois = null; ois = new ObjectInputStream(socket.getInputStream()); ObjectOutputStream oos = null; oos = new ObjectOutputStream(socket.getOutputStream()); oos.writeObject(file.getName()); FileInputStream fis = null; fis = new FileInputStream(file); byte[] buffer = new byte[100]; Integer bytesRead = 0; while ((bytesRead = fis.read(buffer)) > 0) { } oos.writeObject(bytesRead); oos.writeObject(Arrays.copyOf(buffer, buffer.length)); oos.close(); ois.close(); } catch (IOException e) { e.printStackTrace(); } System.exit(0); return null; } } This code is onButtonClick, when we click on button image send to server so I push this code for it. btn2.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { MessageSender messageSender = new MessageSender(); messageSender.execute(); } Manifest permissions are necessary so I written that <uses-permission android:name="android.permission.INTERNET"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/> FileNotFoundException occurs, but I don't know why.
This error is showing in Android Studio:

W/System.err: java.io.FileNotFoundException: /storage/emulated/0/123.jpg (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:146)
    at MainActivity$MessageSender.doInBackground(MainActivity.java:167)
    at MainActivity$MessageSender.doInBackground(MainActivity.java:148)
    at android.os.AsyncTask$2.call(AsyncTask.java:305)
    at java.util.concurrent.FutureTask.run(FutureTask.java:237)
    at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:243)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
04-09 11:06:01.166 4148-4204/W/System.err: at java.lang.Thread.run(Thread.java:760)

Print the entire path of the file you are trying to use. You are saving the file in D://anmolrishte_files// and trying to access it from /storage/emulated/0/. Save the file using Environment.getExternalStorageDirectory(). If my answer helped you, please mark it as validated to help others. You are reading your file from a bad path; the error tells you there is no such file or directory at /storage/emulated/0/123.jpg, so just change your reading path to the correct one. fos = new FileOutputStream("D://anmolrishte_files//"+o.toString()); This path is used to save the file on my computer, which I receive from the client. /storage/emulated/0/123.jpg is the path to my mobile's external directory where the image "123.jpg" is saved, and I am sending it to the server, which writes it at this path: fos = new FileOutputStream("D://anmolrishte_files//"+o.toString()); Is the picture that you want to send to the server a picture that you took previously with the app, or just an existing picture? @gouravmanuja yeah... this picture exists on my device in external storage and I'm sending it directly to the server on a button click...
From your device, try to find your picture, look at its details, and copy the path that the details show. @gouravmanuja in my picture's details I can see the path is /storage/FE17-386F/123.jpg. I also used this path in place of Environment.getExternalStorageDirectory(), "/123.jpg", but it's not working. Does the error say there is no such file or directory at /storage/FE17-386F/123.jpg, or is a different path written?
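Separately from the path issue discussed above, the posted client loop has a second problem: it reads through the whole file while discarding every chunk, then writes a single object pair after EOF (by which point bytesRead is already -1). A sketch of the intended per-chunk send, factored into a stream-based helper so it can be exercised on its own (the ChunkSender class name and signature are illustrative, not from the question):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectOutputStream;
import java.util.Arrays;

public class ChunkSender {
    public static final int BUFFER_SIZE = 100;

    // Writes the input as (length, bytes) pairs, sending each chunk as it
    // is read instead of looping to EOF first.  The server's do/while loop
    // stops when a reported length is smaller than BUFFER_SIZE, so the
    // final short chunk terminates the transfer.  Returns the chunk count.
    public static int sendChunks(InputStream in, ObjectOutputStream oos) throws IOException {
        byte[] buffer = new byte[BUFFER_SIZE];
        int bytesRead;
        int chunks = 0;
        while ((bytesRead = in.read(buffer)) > 0) {
            oos.writeObject(bytesRead);
            oos.writeObject(Arrays.copyOf(buffer, bytesRead));
            chunks++;
        }
        oos.flush();
        return chunks;
    }
}
```

One caveat: if the file size is an exact multiple of BUFFER_SIZE, the server's while (bytesRead == BUFFER_SIZE) condition will wait for a chunk that never arrives, so sending the total file length first (or a final zero-length chunk) would be more robust.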
common-pile/stackexchange_filtered
Custom Product Template breaks options in Bigcommerce Stencil After creating and assigning a custom product page template to one of my items, the JavaScript for this page breaks. The JS removes a display:none style on the product options container. Any idea why this would happen? What theme and version are you using? I'm using Cornerstone version 1.8.2 Try updating to 1.9.1 which should have a fix for that issue.
Having trouble understanding PostgreSQL relationships between tables I have built a SQL database before, but for some reason I am having trouble visualizing how to properly set up this database.
Agencies have Agents (an agent can only be part of 1 agency at a time, but because bonds may last years and the agent may switch agencies, I am not sure how to handle this).
Agents are part of Bonds (as a reference).
Defendants have Bonds (they can be added to the database without a bond at first, but from that defendant's page/route, the user will be able to "create a bond", which will add the bond to the database).
Defendants MAY (might) have Dependents (people who pay off their bonds). Thus, a Bond will have 1 Defendant (required), and can have 1 Dependent (but not required). Defendants can have multiple Dependents at any given time, but each has to be a part of their own bond.
On the front end of the application, I want everything to be linked. When I create a defendant > create a dependent > create a bond (and allow the user to choose a list of ONLY the dependents for the specific defendant), etc., I want the rest of the application to be updated (i.e. the Bonds page will now show a new bond in the list, with all the data). Here is a picture of my current file structure. I decided to start off with this and ask my question now before moving on to the other tables. I think I am on the right track, but take a look. I haven't figured out how to add FKs in pgAdmin, but I will do that once I figure out how to do this the right way. As you can see, I created a table called "defdeprelations", which is meant to act as a table showing which dependents belong to which defendants, and on which bond. On the front end, I have a tab on the Defendants page called "dependents" which shows a table with all of that defendant's Dependents, and on which bond... However, where I am confused is how this will work with multiple bonds.
Is there a way to create an array of bond_ids, or is there a better way of going about this? Thank you in advance for taking the time to read. Did you try drawing an ER diagram for your proposed database tables? From the description in your question, I see five tables as follows: AGENCIES, AGENTS, BONDS, DEFENDANTS, DEPENDENTS. Do you concur? If I understood your last question correctly, you want an array of integers in the bond_id column of the defdeprelations table. You can define this column as type array and then update/access the array as needed using PostgreSQL's array functions: https://www.postgresql.org/docs/12/functions-array.html You didn't explain how agents relate to anything, but if they can only work for one agency, you could map it as an agent_agency table with a start_date and end_date (which is null for current employment) - no agent may have two null end dates (trigger). Also, it sounds to me like defendants have bonds and bonds have dependents. A bond has a defendant_id saying who the bond belongs to, and a dependent may have a bond_id; or, if a dependent will have multiple bonds, a dependent_bond table to map the bonds to the dependents. @Ferdinand55 thanks, this helps a lot
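Ferdinand55's suggestions above could be sketched in PostgreSQL DDL roughly like this (table and column names are illustrative assumptions, not taken from the question's schema):

```sql
-- One row per employment period; a NULL end_date marks current employment.
CREATE TABLE agent_agency (
    agent_id   integer NOT NULL REFERENCES agents(id),
    agency_id  integer NOT NULL REFERENCES agencies(id),
    start_date date    NOT NULL,
    end_date   date    -- NULL while the agent still works here
);

-- A bond always belongs to exactly one defendant; the dependent is optional.
CREATE TABLE bonds (
    id           serial PRIMARY KEY,
    defendant_id integer NOT NULL REFERENCES defendants(id),
    dependent_id integer REFERENCES dependents(id),  -- nullable
    agent_id     integer REFERENCES agents(id)
);
```

This avoids storing an array of bond_ids entirely: each bond row already records which defendant and (optionally) which dependent it belongs to, so "all bonds for this defendant" is a plain WHERE defendant_id = ... query.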
How to do a variable change for a general Fourier series? I failed at the first step :( Well, firstly, I think I should define a general rule for Fourier series, but I failed. Below is my code, including several attempts. Any help is greatly appreciated, thanks.

(*try 1*)
(*f/:Integrate[f[t]E^(-I k_ t_),{t_,-Pi,Pi}]:=2Pi F[x]/;IntegerQ[k]*)

(*try 2*)
(*f/:Integrate[f[t_]E^(-I k t_),t_]/;IntegerQ[k]:=2Pi F[t]*)

(*try 3*)
rule[x_] := x //. {Integrate[f[t_] E^(-I k t_), {t_, -Pi, Pi}] /; IntegerQ[k] :> 2 Pi F[t]}

(*try 4*)
rule[x_] := x //. {Integrate[f[t_] E^(-I k t_), t] /; IntegerQ[k] :> F[t]}

1/(2 Pi) Integrate[f[x] E^(-I k t), {x, -Pi, Pi}, Assumptions -> k \[Element] Integers] // rule

Welcome to Mathematica.SE! I hope you will become a regular contributor. To get started, 1) take the introductory [tour] now, 2) when you see good questions and answers, vote them up by clicking the gray triangles, because the credibility of the system is based on the reputation gained by users sharing their knowledge, 3) remember to accept the answer, if any, that solves your problem, by clicking the checkmark sign, and 4) give help too, by answering questions in your areas of expertise. What do you mean by "variable change for a general Fourier series"? Can you elaborate a bit? It's not immediately clear, at least for me. @xzczd Thank you! I am sorry for my poor English. Given a Fourier series of a function f[x], I have another variable y = g[x] which is a linear transform of x. How can I get the Fourier series of f[g^-1[y]]? Please edit your question to clarify this.
How to execute a Tcl script from a PHP page and retrieve the output My project requires a front end (PHP) which has all the data input fields. Using those data as input, I need to execute a Tcl script. Once the execution is done, the PHP page should be able to get the output of the Tcl script and show it. I have written a sample script but am not able to execute it. Can you guys please help me? Thanks in advance. My PHP script is:

<!DOCTYPE html>
<html>
<body>
<?php
function print_procedure ($arg) {
    echo exec("/usr/bin/tclsh /test.tcl");
}
$script_name='test.tcl';
print_procedure($script_name);
?>
</body>
</html>

My Tcl script is:

set a 10
set b 20
set c [expr $a + $b]
return c

Consider writing everything in PHP or everything in Tcl. Your script should put braces around the expression part of the call to expr — set c [expr {$a + $b}] — for reasons that were explained in great detail recently… what is $arg? @harsh Your Tcl script should write to stdout:

set a 10
set b 20
set c [expr {$a + $b}]
puts $c

If your Tcl script is supposed to output multiple lines, e.g.:

puts $c
puts $c
puts $c

you can capture them in a PHP array, e.g.:

$array = array();
function print_procedure ($arg) {
    echo exec("/opt/tcl8.5/bin/tclsh test.tcl",$array);
    //print_r($array);
}

Thank you for helping me out. It worked and I was able to get the output onto the front end.
C++ Command Line Argument Identification I need to be able to tell if the final argument in my command line is surrounded in double quotes or not. If it's in double quotes, I treat it as a string. If it's not, I need to treat it as a file to open and obtain the string. Argv by default will grab the double-quoted string and strip the quotes, so I can't figure out a way to handle this problem. The pseudocode is something like this...

if (argv[argc-1] was called with surrounding double quotes) {
    //handle as string (I already have code to do this)
} else {
    //handle as filename (I already have code to do this)
}

How about a command-line argument that you use to decide which is which? (i.e. -s for a string and -f for a filename) You probably need to find a different way of having users specify whether it's a string or a filename. Note that many shells will interpret quotes in their own way, so you might hit some surprises with your design at some point. Are you on Windows, and will never ever change to another operating system? Then there may be a chance for you, since the find tool also distinguishes quotes from non-quotes. But if you do that, you can no longer have main as your starting point. All of the parameters in argv are strings. You are probably better off rethinking your strategy. Try opening the argument; if that fails, then treat it as a string. Alternatively you could escape the quotes on the command line and they will be passed to your application: $ program "\"this is a string\"" Edit: The sample code assumes you are using a Bash shell or something similar Wouldn't it be $ program "\"this is a string\""? That would work too, Bash will strip the unescaped quotes. Try it! Edit: you're right, I originally tested with a one-word string Thanks guys, I have to do it exactly as I stated for a homework assignment, without escaping any characters. I'm going to try the strategy @Mike Steinert mentioned above: trying to open the file, and if it doesn't exist, using it as a string.
Clever idea! .. except .. what if the user happened to want to specify a string that also just happened to be the name of a file? @Jim Buck: This is a good corner case, however I suspect it's not an issue for this particular homework assignment.
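The try-to-open strategy the asker settled on can be sketched as a small helper (the function name and read-whole-file behavior are illustrative assumptions; the homework presumably has its own structure). Note the corner case raised in the last comment: a string that happens to name an existing file will be read as a file.

```cpp
#include <fstream>
#include <iterator>
#include <string>

// If `arg` names a readable file, return the file's contents;
// otherwise treat the argument itself as the literal string.
std::string payload_from_arg(const std::string& arg) {
    std::ifstream in(arg);
    if (in) {
        // The file opened, so slurp it.
        return std::string(std::istreambuf_iterator<char>(in),
                           std::istreambuf_iterator<char>());
    }
    // No such file: the argument is the payload itself.
    return arg;
}
```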
AZ Directory posts directory index I would like to create an A to Z index of posts in a certain category. This must include a linkable directory of letters ABCDEF .... etc. and a list of articles ordered by title, grouped by letter. For example:

A
A first article
A second article
B
B first article
B second article
C

When someone clicks on the index (ABCDE .....), link him to the letter group.... Any ideas? Downvoted, due to the scope of this question. It is at least two (if not more) questions in one, in a "do-work-for-me" format: query posts by category, create letter index, sort posts by first letter. It's easy if all posts are shown on the same page; then you can do:

$AZposts = get_posts(array(
    'numberposts' => -1,
    'post_type' => 'post',
    'orderby' => 'title',
    'order' => 'ASC',
    'category' => $cat
));
$current = "";
$nav = "";
$postlist = "";
foreach($AZposts as $AZpost) {
    $firstletter = strtoupper(substr($AZpost->post_title,0,1));
    if($firstletter != $current) {
        $postlist .= "<b><a name='$firstletter'> $firstletter </a></b><br>\n";
        $nav .= "<a href='#$firstletter'> $firstletter </a> ";
        $current = $firstletter;
    }
    $postlist .= $AZpost->post_title . "<br>\n";
}
print $nav . "<br>" . $postlist;

get_permalink( $AZpost->ID ) would be probably useful. Besides that: Nice, canonical answer. +1 :) @toscho how would someone get the permalinks to display as well?
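Following toscho's get_permalink() suggestion, linking each title would be a one-line change inside the loop of the answer above (a sketch, not tested against a live WordPress install):

```php
<?php
// Replace the plain-title line in the foreach loop with a linked title:
$postlist .= "<a href='" . get_permalink( $AZpost->ID ) . "'>"
           . $AZpost->post_title . "</a><br>\n";
?>
```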
angularjs using single controller with two views So I have this situation where I have two views. One view takes several inputs from the user; the second view shows the results of the inputs after performing some calculations. Since both views fall under different divs, I have to define the same ng-controller for both of them separately. Now the problem is that the changes in the input view are not being reflected in the output view.

var app = angular.module("app", []);
app.controller('controller', function ($scope) {
    $scope.val1 = 10;
    $scope.val2 = 0;
    $scope.val3 = 0;
    $scope.val4 = 0;
    $scope.cal = function() {
        return parseInt($scope.val1) + parseInt($scope.val2) + parseInt($scope.val3) + parseInt($scope.val4);
    }
});

<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<link rel="stylesheet" href="style.css" />
<script src="script.js"></script>
</head>
<body ng-app="app">
<form action="#" ng-controller="controller">
<input type="text" ng-model="val1"><br>
<input type="text" ng-model="val2"><br>
<input type="text" ng-model="val3"><br>
<input type="text" ng-model="val4"><br>
Self:{{cal()}}
</form>
<div ng-controller="controller">
Other:{{cal()}}
</div>
</body>
</html>

Now I can make it work by redefining all the val1, val2, val3 and val4 inputs as hidden in the output view, but that is an ugly solution with redundant code. What is the correct way of doing it?
Update: My two views are far apart from each other and a lot of code lives between them. I don't want to find a common ancestor div and assign it the controller, as that will nest other controllers which are associated with the views in between them. This will complicate my situation. A quick search for this found another SO question with an explanation and answer. Each time the Angular compiler finds ng-controller in the HTML, a new scope is created. (If you use ng-view, each time you go to a different route, a new scope is created too.)
If you need to share data between controllers, normally a service is your best option. Put the shared data into the service, and inject the service into the controller. Using the same controller on different elements to refer to the same object (not my response - please upvote that answer if it helps you out) Thanks. I am sorry, I could have searched it myself; that question is exactly what I asked for. Any particular reason you have 2 controller sections? You should just manage both of your divs with the same controller. If you wish to maintain 2 controllers then you will want to communicate between them - this can be done with a simple service that you inject that will store your data. Alternatively, though that practice is sometimes frowned upon, you can use the $rootScope. The concept remains the same though.

var app = angular.module("app", []);
app.controller('controller', function ($scope) {
    $scope.val1 = 10;
    $scope.val2 = 0;
    $scope.val3 = 0;
    $scope.val4 = 0;
    $scope.cal = function() {
        return parseInt($scope.val1) + parseInt($scope.val2) + parseInt($scope.val3) + parseInt($scope.val4);
    }
});

<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<link rel="stylesheet" href="style.css" />
<script src="script.js"></script>
</head>
<body ng-app="app" ng-controller="controller">
<form action="#" >
<input type="text" ng-model="val1"><br>
<input type="text" ng-model="val2"><br>
<input type="text" ng-model="val3"><br>
<input type="text" ng-model="val4"><br>
Self:{{cal()}}
</form>
<div >
Other:{{cal()}}
</div>
</body>
</html>

My two views are far apart from each other and a lot of code lives between them. I don't want to find a common ancestor div and assign it the controller, as that will nest other controllers which are associated with the views in between them. This will complicate my situation.
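A minimal sketch of the shared-service approach mentioned above, written as a plain factory function so the shared state is easy to see and test outside Angular (the calcState name mirrors nothing in the question; the commented registration lines assume the question's 'app' module and AngularJS 1.x):

```javascript
// Plain factory: every injection site receives the SAME object, so edits
// made in the form controller are visible wherever cal() is called.
function calcStateFactory() {
  var state = { val1: 10, val2: 0, val3: 0, val4: 0 };
  return {
    state: state,
    cal: function () {
      return parseInt(state.val1) + parseInt(state.val2) +
             parseInt(state.val3) + parseInt(state.val4);
    }
  };
}

// Registration and use (Angular caches the single service instance):
// angular.module('app').factory('calcState', calcStateFactory);
// app.controller('controller', function ($scope, calcState) {
//   $scope.vals = calcState.state;   // bind inputs to ng-model="vals.val1" etc.
//   $scope.cal = calcState.cal;
// });
```

Because a service is a singleton, both ng-controller instances read and write the same state object, which is exactly what the two separate scopes in the question fail to do.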
Mongoose findByIdAndUpdate fails, returns null I've got a Mongoose schema for an object called Plot which works fine when it comes to saving. However, when I try to update those plots with the code below, it fails, returning null:

exports.updatePlot = async (req, res) => {
    let updates = { ...req.body.props };
    const id = updates._id;
    if (updates.plannedYield) {
        const plannedYield = Number(updates.plannedYield);
        const metrics = { yield: { day120CustomerEstimation: plannedYield } };
        updates.metrics = metrics;
        delete updates.plannedYield;
    }
    console.log(updates);
    try {
        console.log('updating plot');
        const updatedPlot = await Plot.findByIdAndUpdate(id, updates, { new: true });
        if (!updatedPlot) {
            console.log("couldn't update plot");
        }
        console.log(('updated plot saved:', updatedPlot));
        res.json({ updatedPlot });
    } catch (e) {
        console.log('error', e);
        return res.status(422).send({ error: { message: 'e', resend: true } });
    }
};

This results in the following console output:

{ id: '5aa65801c97d496ef457b802',
  metrics: { yield: { day120CustomerEstimation: 1 } } }
updating plot
couldn't update plot
null

Can anyone suggest a fix? I've tried casting id to ObjectId, which didn't do the trick. Is _id or id in your req.body.props? The console.log suggests the latter, but const id = updates._id; assumes the former. Also, you probably want to remove the id from the updates object. That was it. Thank you, good call. If you add it as an answer, I'll accept.
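The accepted fix can be sketched as a small helper that accepts either key and strips it from the update payload (the helper name is mine; which key your client actually sends is the thing to verify):

```javascript
// Pull the document id out of the incoming props (whichever key the
// client used) and keep the remaining fields as the update document.
function splitIdFromProps(props) {
  const updates = { ...props };
  const id = updates._id || updates.id;
  delete updates._id;
  delete updates.id;   // never try to overwrite the immutable _id
  return { id, updates };
}

// Usage inside the controller:
// const { id, updates } = splitIdFromProps(req.body.props);
// const updatedPlot = await Plot.findByIdAndUpdate(id, updates, { new: true });
```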
Burninate [limit]? I draw your attention to the limit tag. This seems to be a perfect definition of a meta tag. It's currently on 43 questions. The tag excerpt reads: For questions where upper or lower restrictions are core issues. Examples would include minimum age for viewing and maxima for rate of sending emails, free storage capacity, number of followers, API calls in a given time, video quality, etc. I see it on a variety of questions for a broad range of web applications, with any number of uses of "limit". Some examples: the possible limit to the number of pages in a Google Doc how to restrict the number of posts you're seeing from people you follow on Twitter recovering contacts in Google Hangouts (with the tangential question of whether there is a limit to the number) the maximum number of items in a Cognito Forms picklist if there is a limit to the number columns in a Google Sheet ...and many more. No one is an "expert" in "limits" who would come here looking for questions to answer. Based on Shog9's criteria for burnination: Does it describe the contents of the questions to which it is applied? and is it unambiguous? No. Is the concept described even on-topic for the site? Only insofar as web apps sometimes limit things. Does the tag add any meaningful information to the post? Definitely no. Does it mean the same thing in all common contexts? Certainly not. It's used as a noun as well as a verb, and often as a synonym for "restrict" and "filter". It should be removed. The tag has been removed from all questions and the tag wiki has been removed.
Can a legendary monster/NPC use a "free object interaction" as part of a legendary action? By the book (PHB p.190): You can also interact with one object or feature of the environment for free, during either your move or your action. For example, you could open a door during your move as you stride toward a foe, or you could draw your weapon as part of the same action you use to attack. And from the Jarlaxle stats block from W:DH (p. 206): Quick Step: Jarlaxle moves up to his speed without provoking opportunity attacks. The question is: Can Jarlaxle take free action in his legendary action? I'm thinking something like: one of his drow assistants opens his portable hole over the gold as an action (swallowing it), the other drow closes the hole (as an action) and Jarlaxle uses his legendary action to take the folded hole and leaves using his legendary action at the end of the last drow turn. Is it possible? No, Because It isn't Jarlaxle's turn The rules you quoted on PHB p. 190 on free object interactions are in a section entitled "Other Activity On Your Turn." It gives information of things that a character can do on their own turn in addition to their Movement, Action, Bonus Action and/or Reaction. However, legendary actions are defined as such (MM, p. 11, bold added): A legendary creature can take a certain number of special actions -called legendary actions -outside its turn. As such, the rules on interacting with one object for free during your move (or action) does not explicitly apply to a Legendary action. Now it's worth noting that just because something is in the section on "Other Activity On Your Turn" doesn't automatically mean you can't do it outside of your turn. For example, that section mentions speaking in "brief utterances," and many groups allow characters to shout brief encouragement or advice when it is not that character's turn. However, this particular example could have pretty large impacts on the action economy. 
If you don't want your players (rightly) expecting to be able to draw a weapon as part of an opportunity attack, it's probably best to conclude that a creature can't interact with an object as part of a legendary action (unless that legendary action specifically allows it). Instead of granting movement, if the Legendary action stated that Jarlaxle is granted a Move action, could he then take a free interaction? @retriever123 I’m not sure what you mean by “a Move action.” The "free object interaction" rule is written to apply to "your turn". In the section "Other Activity on Your Turn", we see this rule: You can also interact with one object or feature of the environment for free, during either your move or your action. This rule specifically applies to "your turn", as denoted by the section header. So rules as written, the only thing Jarlaxle can do when using his Quick Step legendary action is: Jarlaxle moves up to his speed without provoking opportunity attacks. Since it is not Jarlaxle's turn when he takes a legendary action, the "free object interaction" is not available at that time.
Xcode 6 open Assistant Editor in New Window I feel like I have read every link on Google pertaining to this question, but none that I have read have helped. All I want to do is view my Storyboard layout on the left monitor, and on my right monitor, in a new window, have the Assistant Editor open to "Preview" for my Storyboard so that I can preview the different devices sizes (clicking different storyboard views on the left screen should update the assistant editor preview on the right). This seems so simple, but has not proved to be. Please tell me this is possible. EDIT: This guy seems to have it working but following the steps didn't work for me. It isn't possible but it would make a great feature request! I hope you'll file a bug with Apple. @matt - Crazy, this seems like such a no brainer thing to have... Great question, surprised you haven't had more up votes already. It's possible.. and it's awesome: I do have this working after following the instructions linked in the OP. I think the author left out that you need to click on the view controller that you're editing in BOTH instances of the story board window to see the changes update. Then as you're editing on your main window the changes will update to the open storyboard and thus the preview will update as well. I was able to test this and achieved a somewhat desired result. In case the link goes dead here are the instructions lined out Here’s how you can set this up… In the Project Navigator pane, single-click a storyboard/XIB file to open it in the main Xcode window. Now double-click that same file to open it in a new window. Move the new window to another monitor and maximize it (So now you have the story board on 2 windows) Click on the new window to make sure it has input focus, then type Option+Command+Enter to open an assistant editor in that window. In the assistant editor’s jump bar click on ‘Automatic‘ to open the drop-down menu (see the screenshot below if you don’t know what this means). 
Click on the ‘Preview‘ menu item to open the preview editor. Click and hold next to the assistant editor’s jump bar, then drag up or left (depending on which editor layout you prefer; vertical or horizontal), to maximize the preview’s screen real estate. Lastly... the part the author left out is that you need to select the view controller you want to edit in BOTH story board windows and then just drag the preview window to cover more of the screen. It's not pretty but it's effective. Edit: wording and grammar :) This is not currently possible (Xcode 6.3.1 at the time of writing). The best you can do is open your storyboard in one window, open it again in a new window, open the preview, and slide the assistant editor as far left as possible. The preview won't take up the entire window, but it'll be pretty close. But if you do that, choosing objects in the storyboard on the left monitor won't update the preview window on the right. They are essentially disconnected.
Minimax for Tic Tac Toe in Python I am trying to implement the Minimax algorithm for a tic-tac-toe game. The code works, but minimax does not work correctly, since it doesn't pick the right moves and lets you win. If anyone could suggest why it does not work, or offer any general comments about the code, I would greatly appreciate it and try to learn from my mistakes. Here's my Board class code:

#Import
import os
import time
import random
import copy

class Board:
    def __init__(self, cell_list):
        self.cells = cell_list

    def get_possible_moves(self):
        return [i for i, j in enumerate(self.cells) if j == " "]

    def is_board_full(self):
        if " " in self.cells:
            return False
        else:
            return True

    def is_winner(self, player):
        if (self.cells[1] == player and self.cells[2] == player and self.cells[3] == player) or \
           (self.cells[4] == player and self.cells[5] == player and self.cells[6] == player) or \
           (self.cells[7] == player and self.cells[8] == player and self.cells[9] == player) or \
           (self.cells[1] == player and self.cells[4] == player and self.cells[7] == player) or \
           (self.cells[2] == player and self.cells[5] == player and self.cells[8] == player) or \
           (self.cells[3] == player and self.cells[6] == player and self.cells[9] == player) or \
           (self.cells[1] == player and self.cells[5] == player and self.cells[9] == player) or \
           (self.cells[3] == player and self.cells[5] == player and self.cells[7] == player):
            return True
        else:
            return False

    def print_board(self):
        print "   |   |   "
        print " "+self.cells[1]+" | "+self.cells[2]+" | "+self.cells[3]+" "
        print "   |   |   "
        print "___|___|___"
        print "   |   |   "
        print " "+self.cells[4]+" | "+self.cells[5]+" | "+self.cells[6]+" "
        print "   |   |   "
        print "___|___|___"
        print "   |   |   "
        print " "+self.cells[7]+" | "+self.cells[8]+" | "+self.cells[9]+" "
        print "   |   |   "
        print "___|___|___"

    def utility(self):
        if self.is_winner("X") == True:
            return -1
        elif self.is_winner("O") == True:
            return 1
        else:
            return 0

    def whose_turn(self):
        pos_moves = self.get_possible_moves()
        if len(pos_moves)%2 == 1: #ODD
            return "X"
        else:
            return "O"

    def is_cell_empty(self,index):
        if self.cells[index] == " ":
            return True
        else:
            return False

    def make_move(self, player, index):
        self.cells[index] = player

    def resulting_board(self,player,move):
        copy_of_cell = copy.deepcopy(self.cells)
        new_Board = Board(copy_of_cell)
        new_Board.make_move(player, move)
        return new_Board

def minimax(board):
    if board.is_board_full() == True:
        return board.utility()
    elif board.whose_turn() == "O":
        score = max([minimax(board.resulting_board("O",move)) for move in board.get_possible_moves()])
        return score
    else:
        score = min([minimax(board.resulting_board("X",move)) for move in board.get_possible_moves()])
        return score

empty_board = ["", " ", " ", " ", " ", " ", " ", " ", " ", " "]
Main_Board = Board(empty_board)

def get_computer_move(board):
    score = minimax(board)
    best_move = None
    pos_moves = board.get_possible_moves()
    for move in pos_moves:
        if minimax(board.resulting_board("O",move)) == score:
            best_move = move
            break
    return best_move

And here is my code to run the whole thing:

while True:
    os.system("clear")
    Main_Board.print_board()
    while True:
        choice = raw_input("Pick an empty space, you are player X: ")
        choice = int(choice)
        if Main_Board.is_cell_empty(choice) == True:
            Main_Board.make_move("X",choice)
            os.system("clear")
            Main_Board.print_board()
            break
        else:
            print ("Sorry, its not empty, try again")
            time.sleep(2)
    if Main_Board.is_winner("X") == True:
        print "Congrats you won!"
        break
    elif Main_Board.is_board_full() == True:
        print "Tie"
        break
    print "AI Move.."
    time.sleep(1)
    AI_move = get_computer_move(Main_Board)
    Main_Board.make_move("O",AI_move)
    os.system("clear")
    Main_Board.print_board()
    if Main_Board.is_winner("O") == True:
        print "You lost"
        break

Thank you You're only checking for a winner once the board is completely full. Tic Tac Toe doesn't work that way. Omg! Thank you! It works like a charm now.
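The accepted comment pinpoints the bug: utility() is only consulted once the board is full, so a win reached earlier in the game tree is never scored. A minimal, self-contained sketch of the fix (check for a winner at every node, before recursing), using 0-based cells instead of the question's 1-based layout and written for Python 3 rather than the question's Python 2:

```python
# Win lines for a 0-indexed 3x3 board.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(cells):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if cells[a] != " " and cells[a] == cells[b] == cells[c]:
            return cells[a]
    return None

def minimax(cells, player):
    """Score from O's point of view: +1 O wins, -1 X wins, 0 tie."""
    w = winner(cells)            # terminal test BEFORE checking fullness
    if w == "O":
        return 1
    if w == "X":
        return -1
    moves = [i for i, v in enumerate(cells) if v == " "]
    if not moves:                # full board with no winner: tie
        return 0
    scores = []
    for m in moves:
        nxt = cells[:]
        nxt[m] = player
        scores.append(minimax(nxt, "X" if player == "O" else "O"))
    return max(scores) if player == "O" else min(scores)
```

With the winner check restored, a position where O can complete a row scores +1 immediately instead of being treated as worth 0 until the board fills up.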
When I try to insert a new value, the old value I inserted earlier gets updated. This is my code. When I insert the first value and then give a different value the next time, the old one gets updated with the new one. I am trying to insert multiple values and store them without them getting updated. How do I do that?

import pymysql
conn = pymysql.connect(host='<IP_ADDRESS>', user='root', passwd='nazaf123', db='nazafdatabase')
cur = conn.cursor()
cur.execute("DROP TABLE IF EXISTS EMPLOYEE")
sql = """CREATE TABLE EMPLOYEE (
    FIRST_NAME CHAR(20) NOT NULL,
    LAST_NAME CHAR(20),
    AGE INT,
    SEX CHAR(2),
    SALARY FLOAT )"""
cur.execute(sql)
sql = """INSERT INTO EMPLOYEE(FIRST_NAME, LAST_NAME, AGE, SEX, SALARY)
         VALUES ('Nazaf', 'Anwar', 22, 'M', 10000)"""
try:
    cur.execute(sql)
    conn.commit()
except:
    conn.rollback()
cur.execute("""SELECT * FROM employee;""")
print(cur.fetchall())
cur.close()
conn.close()

Please explain the scenario. I meant to ask: when I insert the first value and then give a different value the next time, the old one gets updated with the new one. I am trying to insert multiple values and store them without them getting updated. How do I do that? I do not see any code here which does actual updating; unless you can clarify what you are seeing and what you expect to see, the following may not help. Right now you are deleting a table and recreating it, which will remove all previous data from it. Your INSERT statement will typically never alter previous rows, so I am going to take a guess based on your question that you may have an UPDATE EMPLOYEE SET (new data) somewhere, which would alter the previous record. To insert new data you will need to perform another INSERT command. If you are performing an UPDATE command, it will update all records and not insert a new row. Typically UPDATE is followed at the end with a WHERE (conditional statement) which lets you limit how the records are modified. I don't have Python or a SQL database at this hour, but this should work.
I added onto your provided code; however, I removed the authentication data.

import pymysql
conn = pymysql.connect({Connection Data here})
cur = conn.cursor()
sql = """INSERT INTO EMPLOYEE(FIRST_NAME, LAST_NAME, AGE, SEX, SALARY)
         VALUES ('Nazaf', 'Anwar', 22, 'M', 10000)"""
try:
    cur.execute(sql)
    conn.commit()
except:
    conn.rollback()
cur.execute("""SELECT * FROM employee;""")
#You should see the current data here.
sql = """INSERT INTO EMPLOYEE(FIRST_NAME, LAST_NAME, AGE, SEX, SALARY)
         VALUES ('John', 'Doe', 44, 'M', 300000)"""
cur.execute(sql)
cur.execute("""SELECT * FROM employee;""")
#You should see both Nazaf and John
sql = """UPDATE EMPLOYEE SET SALARY=30 WHERE FIRST_NAME='John'"""
cur.execute(sql)
cur.execute("""SELECT * FROM employee;""")
#You should see both Nazaf and John, however John's salary will be 30
sql = """UPDATE EMPLOYEE SET FIRST_NAME='BOB'"""
cur.execute(sql)
cur.execute("""SELECT * FROM employee;""")
#You should see the first names changed to bob.
print(cur.fetchall())
cur.close()
conn.close()

It works, but I meant to ask: when I insert the first value and then give a different value the next time, the old one gets updated with the new one. I am trying to insert multiple values and store them without them getting updated. How do I do that? Please help. If you're running the exact code provided, try removing the "Execute Drop" and "Create Table" lines. They are deleting everything each time you run it. You only need to create the table structure once, typically when the application is first set up. I modified my answer to reflect this.
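The drop-and-recreate point can be demonstrated without a MySQL server at all, here with Python's built-in sqlite3 (the SQL semantics in question are the same): INSERTs accumulate rows and never overwrite earlier ones, while DROP TABLE discards everything.

```python
import sqlite3

# In-memory database purely for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (first_name TEXT, salary REAL)")

# Two INSERTs -> two rows; the first row is untouched by the second.
cur.execute("INSERT INTO employee VALUES ('Nazaf', 10000)")
cur.execute("INSERT INTO employee VALUES ('John', 20000)")
row_count = cur.execute("SELECT COUNT(*) FROM employee").fetchone()[0]

# Dropping and recreating the table (as the question's script does on
# every run) throws all of that away again.
cur.execute("DROP TABLE employee")
cur.execute("CREATE TABLE employee (first_name TEXT, salary REAL)")
rows_after_recreate = cur.execute("SELECT COUNT(*) FROM employee").fetchone()[0]

print(row_count, rows_after_recreate)
```

So the observed "updating" is really the DROP TABLE wiping the old row before each new insert.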
common-pile/stackexchange_filtered
What does "generally" mean in "Single-table UPDATE assignments are generally evaluated from left to right"? I read this in MySQL docs for a while Single-table UPDATE assignments are generally evaluated from left to right. It's the same since 5.6 https://dev.mysql.com/doc/refman/5.6/en/update.html https://dev.mysql.com/doc/refman/5.7/en/update.html https://dev.mysql.com/doc/refman/8.0/en/update.html What does "generally" mean here? I never encountered any case where this left-to-right order is not honored, but in such case, why would the documentation not simply say "Single-table UPDATE assignments are evaluated from left to right"? This may sound quibbling but the "generally" word makes me think (as a non-native English speaker, maybe) that "there are some cases where order might not be left-to-right, but we won't tell which ones (because we don't know)"... Ah, how I hate vague docs! This makes me think of Hibernate.initialize(...): "it is not guaranteed that the elements INSIDE the collection will be initialized/materialized"... So what, will they be or will they be not? Well, update t set a = b, b = a isn't really evaluated from left to right (because the values aren't correctly swapped) @a_horse_with_no_name it seems L2R evaluated to me, like a=1,b=2 => a=b so a=2,b=2 => b=a so a=2,b=2 but maybe it's the reason why the doc says that (though it's good to me, it means "guaranteed order" one-by-one) What does "generally" mean here? It means "but not always", "with no guarantee". @Xenos Well, if you start with a = 1, b = 2 that update statement should result in a = 2, b = 1 according to the SQL standard (and what every other DBMS does) @a_horse_with_no_name ah, makes sense then! Do you have a link to the SQL standard for the UPDATE statement that states this behavior? That would certainly answer my question
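The divergence discussed above is easy to see against an engine that follows the standard. SQLite (used here only because it ships with Python) evaluates every SET expression against the row's old values, so the swap works; MySQL's left-to-right evaluation would instead leave both columns equal to the old b:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a INT, b INT)")
cur.execute("INSERT INTO t VALUES (1, 2)")

# Standard SQL semantics: both right-hand sides see the OLD row, so the
# values swap. Under MySQL's left-to-right rule the same statement
# yields a=2, b=2 instead.
cur.execute("UPDATE t SET a = b, b = a")
row = cur.execute("SELECT a, b FROM t").fetchone()
```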
How can I install Twisted + Scrapy on Python 3.6 and CentOS I use the newest Python on CentOS 7, and a dedicated virtualenv (ENV) [luoc@study ~ ]$ lsb_release -a LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch Distributor ID: CentOS Description: CentOS Linux release 7.3.1611 (Core) Release: 7.3.1611 Codename: Core (ENV) [luoc@study ~ ]$ python --version Python 3.6.0 When I install Scrapy, I get this error: (ENV) [luoc@study ~ ]$ pip install scrapy Collecting scrapy Using cached Scrapy-1.3.2-py2.py3-none-any.whl Collecting cssselect>=0.9 (from scrapy) Using cached cssselect-1.0.1-py2.py3-none-any.whl Requirement already satisfied: six>=1.5.2 in ./ENV/lib/python3.6/site-packages (from scrapy) Collecting Twisted>=13.1.0 (from scrapy) Could not find a version that satisfies the requirement Twisted>=13.1.0 (from scrapy) (from versions: ) No matching distribution found for Twisted>=13.1.0 (from scrapy) and when I install Twisted independently, this error: (ENV) [luoc@study ~ ]$ pip install Twisted Collecting Twisted Could not find a version that satisfies the requirement Twisted (from versions: ) No matching distribution found for Twisted (ENV) [luoc@study ~ ]$ pip install --verbose Twisted Collecting Twisted 1 location(s) to search for versions of Twisted: * https://pypi.python.org/simple/twisted/ Getting page https://pypi.python.org/simple/twisted/ Looking up "https://pypi.python.org/simple/twisted/" in the cache Current age based on date: 40208 Freshness lifetime from max-age: 600 Freshness lifetime from request max-age: 600 The cached response is "stale" with no etag, purging Starting new HTTPS connection (1): pypi.python.org "GET /simple/twisted/ HTTP/1.1" 200 10196 Updating cache with response from "https://pypi.python.org/simple/twisted/" Caching b/c date exists and max-age > 0 Analyzing links from page
https://pypi.python.org/simple/twisted/ Skipping link ... Could not find a version that satisfies the requirement Twisted (from versions: ) Cleaning up... No matching distribution found for Twisted Exception information: Traceback (most recent call last): File "/home/luoc/ENV/lib/python3.6/site-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/home/luoc/ENV/lib/python3.6/site-packages/pip/commands/install.py", line 335, in run wb.build(autobuilding=True) File "/home/luoc/ENV/lib/python3.6/site-packages/pip/wheel.py", line 749, in build self.requirement_set.prepare_files(self.finder) File "/home/luoc/ENV/lib/python3.6/site-packages/pip/req/req_set.py", line 380, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "/home/luoc/ENV/lib/python3.6/site-packages/pip/req/req_set.py", line 554, in _prepare_file require_hashes File "/home/luoc/ENV/lib/python3.6/site-packages/pip/req/req_install.py", line 278, in populate_link self.link = finder.find_requirement(self, upgrade) File "/home/luoc/ENV/lib/python3.6/site-packages/pip/index.py", line 514, in find_requirement 'No matching distribution found for %s' % req pip.exceptions.DistributionNotFound: No matching distribution found for Twisted So why can't I install Twisted on Python 3.6? Is something wrong with my environment? Kmike suggested I ask Twisted's developers Add --verbose --verbose --verbose to your pip install Twisted command and include the output in your question. @Jean-PaulCalderone I added the --verbose output to my question The more --verbose you add, the more info it dumps. It's too long, so I posted it in the answer; I had to separate it Did you compile Python 3.6 by yourself? It seems your Python version was compiled without bzip2 support. Here is a past ticket for the same issue: https://twistedmatrix.com/trac/ticket/8177 I'd suggest using Miniconda to get a Python 3.6 environment and following these instructions to install Scrapy.
I'm sorry to thank you so late. I compiled Python 3.6 myself; I will try to install bzip2-dev and then recompile Python 3.6. Thanks again. I know the problem is already solved but I want to help future users. I'm on Debian Stretch and just installed Python 3.6. If you want to install Scrapy while compiling Python 3.6 yourself, try running sudo apt-get install bzip2 libbz2-dev After that, just recompile Python 3.6 and it should work. Had the same issue when using pyenv install 3.6.1. After installing these libs and re-issuing the above command, it works.
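A quick way to confirm the diagnosis before recompiling: a Python build compiled without the bzip2 headers cannot import the bz2 module, which pip needed at the time to unpack .tar.bz2 source distributions such as Twisted's. A minimal check, run in the interpreter that backs the virtualenv:

```python
# If this prints False, install the bzip2 headers (bzip2-devel on
# CentOS, libbz2-dev on Debian/Ubuntu) and recompile Python.
try:
    import bz2
    has_bz2 = True
except ImportError:
    has_bz2 = False

print("bzip2 support:", has_bz2)
```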
How to merge two different code results in R in the same output file I have to combine the results of the following two codes using a single program - Book1 <- read.csv("Book1.csv" , header = FALSE) Book2 <- read.csv("Book2.csv" , header = FALSE) Book3 <- read.csv("Book3.csv" , header = FALSE) sink("output.txt") for (i in seq(1,3)) { for (j in seq(2,5)) { if(Book1[i,j]==1 & Book2[i,j]==2 & Book3[i,j]==1) print(1) else print(0) } } sink() Now, in the second code everything else is the same except the condition inside if which is Book1[i,j]==2 & Book2[i,j]==2 & Book3[i,j]==4. I am running these two codes separately and getting two output text files. How can I run the two codes together and get the output in the same text file? The output should look like this in a single text file without any [1] in the beginning - 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 I tried using a concatenation command but always got an error. And here is the result for dput() - > dput(head(Book1)) structure(list(V1 = c(1L, 2L, 3L, 6L), V2 = c(3L, 2L, 6L, 3L), V3 = c(7L, 3L, 5L, 5L), V4 = c(2L, 2L, 3L, 1L), V5 = c(7L, 1L, 4L, 1L)), .Names = c("V1", "V2", "V3", "V4", "V5"), row.names =c(NA, 4L), class = "data.frame") > dput(head(Book2)) structure(list(V1 = c(2L, 4L, 1L, 6L), V2 = c(6L, 2L, 6L, 3L), V3 = c(3L, 3L, 2L, 5L), V4 = c(2L, 2L, 3L, 2L), V5 = c(7L, 2L, 4L, 2L)), .Names = c("V1", "V2", "V3", "V4", "V5"), row.names = c(NA, 4L), class = "data.frame") > dput(head(Book3)) structure(list(V1 = c(1L, 2L, 3L, 6L), V2 = c(3L, 4L, 6L, 3L), V3 = c(2L, 3L, 5L, 2L), V4 = c(2L, 2L, 6L, 1L), V5 = c(1L, 1L, 4L, 1L)), .Names = c("V1", "V2", "V3", "V4", "V5"), row.names = c(NA, 4L), class = "data.frame") Please provide the input data as well. You can refer here for guidance. Also, it looks like you want the outputs to be concatenated column-wise. Is that so? @Aramis7d Yes and sorry but the link directs me to a 'page not found' page. Could you look into cbind()?
to combine all Books together @Dark_Knight weird. try http://stackoverflow.com/help/mcve . Also, are you confident of all individual outputs being of the same length? @Aramis7d Yes, all individual outputs are of the same length. The loop runs an equal number of times for each code. You do not need loops at all. Sinking printed values is not how we do such things in R. I'd show you how you can do this very simply if you only provided the output of dput(head(...)) for your Book1 to Book3. @Roland I have made the edit in the question. This data is a small subset of much larger actual data. Let's write a vectorized function: fun <- function(a, b, c) { #calculate the combinations of i and j values #and use them for vectorized subsetting inds <- as.matrix(expand.grid(2:5, 1:3))[, 2:1] #vectorized comparisons as.integer((Book1[inds] == a & Book2[inds] == b & Book3[inds] == c)) } res <- cbind(fun(1, 2, 1), fun(2, 2, 4)) #export the result write.table(res, "test.txt", sep = "\t", row.names = FALSE, col.names = FALSE) #0 0 #0 0 #0 0 #0 0 #0 1 #0 0 #0 0 #1 0 #0 0 #0 0 #0 0 #0 0 What does [, 2:1] do? Study help("["). It switches the columns.
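For readers more at home in Python, the vectorized approach from the answer translates directly to NumPy: build a boolean mask across the three matrices restricted to rows 1-3 and columns 2-5, then flatten row-wise (the arrays below are the data from the question's dput output):

```python
import numpy as np

# The dput output stores columns V1..V5; written out as 4x5 row matrices:
book1 = np.array([[1, 3, 7, 2, 7], [2, 2, 3, 2, 1], [3, 6, 5, 3, 4], [6, 3, 5, 1, 1]])
book2 = np.array([[2, 6, 3, 2, 7], [4, 2, 3, 2, 2], [1, 6, 2, 3, 4], [6, 3, 5, 2, 2]])
book3 = np.array([[1, 3, 2, 2, 1], [2, 4, 3, 2, 1], [3, 6, 5, 6, 4], [6, 3, 2, 1, 1]])

def fun(a, b, c):
    # R's 1-based rows 1..3 and columns 2..5 are [0:3, 1:5] in 0-based slicing.
    sub = (book1[0:3, 1:5] == a) & (book2[0:3, 1:5] == b) & (book3[0:3, 1:5] == c)
    # Row-major flatten matches the question's loop order (j varies fastest).
    return sub.astype(int).ravel()

res = np.column_stack([fun(1, 2, 1), fun(2, 2, 4)])
```

np.savetxt("test.txt", res, fmt="%d", delimiter="\t") would then write the same two-column file as write.table.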
XY tolerance from C#/.NET For some graph work, I need the value of the X,Y tolerance. I found the ISpatialReferenceTolerance.XYTolerance property. But I couldn't find how to get the ISpatialReferenceTolerance interface from the map. Can anybody give me a pointer? double GetXyTolerance() { var map = ArcMap.Document.FocusMap; var spatialRef = map.SpatialReference; if (spatialRef!=null && spatialRef.HasXYPrecision()) { var spRefTolerance = spatialRef as ISpatialReferenceTolerance; if (spRefTolerance != null && spRefTolerance.XYToleranceValid == esriSRToleranceEnum.esriSRToleranceOK) return spRefTolerance.XYTolerance; } return -1.0; }
How to display labels from a many-to-many relationship in ArcGIS 10 using VB.NET? I have two tables in a geodatabase: parcel and person. One or more parcels can have one or more persons (the relationship is many-to-many). Using VB.NET I can display all the parcels labelled with the "id" field (see picture on left). What I want is to display the parcels with labels for the names of their persons (see picture on right). I think that I have to use a join between the two tables. I can get the names with this code: Sub addNom() ' Open the origin and destination classes. Dim featureWorkspace As IFeatureWorkspace = CType(Sp, IFeatureWorkspace) Dim originClass As IFeatureClass = featureWorkspace.OpenFeatureClass("parcelle") Dim destinationClass As IObjectClass = CType(featureWorkspace.OpenTable("personne"), IObjectClass) ' Create a memory relationship class factory and open a relationship class with it. Dim relClassFactoryType As Type = Type.GetTypeFromProgID("esriGeodatabase.MemoryRelationshipClassFactory") Dim memRelClassFactory As IMemoryRelationshipClassFactory = CType(Activator.CreateInstance(relClassFactoryType), IMemoryRelationshipClassFactory) Dim memRelClass As IRelationshipClass = memRelClassFactory.Open("pll_personne", originClass, "id", _ destinationClass, "id", "Contains", "Is in", esriRelCardinality.esriRelCardinalityOneToMany) ' Find the counties related to the state with an ObjectID of 1. Dim originFeature As IFeature = originClass.GetFeature(6) Dim relatedSet As ISet = memRelClass.GetObjectsRelatedToObject(originFeature) relatedSet.Reset() ' Iterate through the set while finding the names of the counties. Dim nameIndex As Integer = destinationClass.FindField("nom_fr") Dim setObject As Object = relatedSet.Next() While Not setObject Is Nothing Dim row As IRow = CType(setObject, IRow) Console.WriteLine("OID: {0}, NAME: {1}", row.OID, row.Value(nameIndex)) setObject = relatedSet.Next() End While End Sub The question is: how can I display the labels using this result?
You could take a look at the AnnotationFeature object. From what I know about it, the IAnnotationFeature interface lets you build your own annotation. http://resources.arcgis.com/en/help/arcobjects-net/conceptualhelp/index.html#//0001000000nt000000 and http://resources.arcgis.com/en/help/arcobjects-net/componenthelp/index.html#//001200000206000000
Query string is appending when using $_SERVER['HTTP_REFERER'] Can someone explain why, when I execute this code multiple times, it appends the substring ?val=1 to my URL? Example: My script is located in index.php and if I execute it 3 times I will have this URL in my browser: http://localhost/index.php?val=1?val=1?val=1 I would like to have http://localhost/index.php?val=1 . . . <?php if(isset($_POST['hidden']) && $_POST['hidden'] == 2){ $page = $_SERVER['HTTP_REFERER']; header("location: $page?val=1"); } ?> <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="POST"> <input name ="hidden" value="2"> <input type="submit"> </form> Sidenote: You shouldn't rely on $_SERVER['HTTP_REFERER'] http://stackoverflow.com/a/6023980/1415724 The reason is that you're using the same file, so it redirects to the same one, uses the GET method, and appends ?val=1 to $page. It's pretty much self-explanatory. Redirect it to another page instead. Just add a test before appending ?val=1 if(isset($_POST['hidden']) && $_POST['hidden'] == 2){ $page = $_SERVER['HTTP_REFERER']; if(strpos($page, '?val=1') === false) $page .= '?val=1'; header("location: $page"); exit; // Avoid further execution if more code is below this. } @Fred-ii- Not at all. Needs to be in there
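The strpos guard in the answer works, but a more robust version parses the URL instead of searching for a substring, so it also handles URLs that already carry other parameters. The same idea sketched with Python's stdlib urllib.parse (the with_param helper is made up for illustration):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def with_param(url, key, value):
    """Return url with key=value set exactly once in the query string."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query[key] = value                      # overwrite instead of appending
    return urlunsplit(parts._replace(query=urlencode(query)))

u1 = with_param("http://localhost/index.php", "val", "1")
u2 = with_param(u1, "val", "1")             # applying it again changes nothing
```

Because the query is parsed into a dict and rebuilt, the operation is idempotent no matter how many redirects happen.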
Multithreaded IllegalMonitorStateException I'm writing a multithreaded tic tac toe program where a server is the connection between two clients. The server opens and waits for connections: the first client to connect is player X and the second is player O. Players have the ability to quit the game at any point, and once a client quits the other client resets its board and waits for a new connection. This part is working fine, but the issue is that once I spawn a new client I get the following errors: Exception in thread "pool-1-thread-1" java.lang.IllegalMonitorStateException at java.base/java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:149) at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1300) at java.base/java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:439) at TicTacToeServer.processMessage(TicTacToeServer.java:329) at TicTacToeServer$Player.run(TicTacToeServer.java:498) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:844) It appears that whichever thread has been closed is the one throwing this error. Player X is always whichever client connects first, so it is always thread 1 while player O is always thread 2. For the above error, I had closed player X and opened a new one (thread 3). Each client window has a button allowing them to quit which uses an action listener, created as shown: quitGame = new JButton("Quit"); quitGame.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { System.out.println("Quit button was clicked!"); displayMessage( "You have quit the game.\n" ); output.format("Quit. 
" + myMark + "\n"); output.flush(); try { connection.close(); // close connection to client System.exit(0); } // end try catch ( IOException ioException ) { ioException.printStackTrace(); System.exit( 1 ); } // end catch } }); panel2.add( quitGame, BorderLayout.SOUTH ); // add button to panel add( panel2, BorderLayout.CENTER ); // add container panel This then corresponds to the following code in the server: else if (message.equals("Quit. X")) { //connection.close(player[PLAYER_X]); currentPlayer = PLAYER_X; displayMessage("\nPlayer X has ended the game.\n"); players[1].send("Player X has ended the game."); players[1].send("Please wait for new player to connect."); players[(currentPlayer+1)%2].send("Restart."); players[currentPlayer].send("Restart."); for(int x = 0;x<board.length;x++) { board[x] = ""; numRep[x] = 4; }// end for loop try // wait for connection, create Player, start runnable { players[ 0 ] = new Player( server.accept(), 0 ); Player temp = players[0]; players[1].send("New player has connected. Game on!"); players[1].send("Please wait for your turn."); runGame.execute( players[0] ); // execute player runnable } // end try catch ( IOException ioException ) { ioException.printStackTrace(); System.exit( 1 ); } // end catch //gameLock.lock(); try // allow x to go first { players[ PLAYER_X ].setSuspended( false ); // resume player X otherPlayerConnected.signal(); // wake up player X's thread } // end try finally { gameLock.unlock(); // unlock game after signaling player X } // end finally } // end if The lines specifically mentioned in the error are: gameLock.unlock(); // unlock game after signaling player X which is shown in the finally block above and if ( input.hasNextLine() ) processMessage(input.nextLine()); // get message Which is found in run() and calls processMessage, which processes messages sent by the client and is partially shown above. I have been trying to mess around with closing the connection, input, output, etc. 
but I can't seem to get rid of this error. It does not affect the actual game, I'm assuming because the error is on the thread that is closed, but I'd still like to fix it. Any help is appreciated! From the doc of ReentrantLock.unlock(): If the current thread is not the holder of this lock then IllegalMonitorStateException is thrown. You need to acquire the lock before unlocking. The new thread/player or the one that is closed? Also how do I go about acquiring the lock? I thought it just existed for the game
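The ownership rule quoted above is not Java-specific. As a quick analogy in Python (illustration only), releasing a reentrant lock that the current thread does not hold raises an error, just as ReentrantLock.unlock() throws IllegalMonitorStateException:

```python
import threading

lock = threading.RLock()

# Balanced acquire/release by the owning thread is fine.
lock.acquire()
lock.release()

# Releasing without holding the lock is the Python analogue of
# Java's IllegalMonitorStateException.
try:
    lock.release()
    raised = False
except RuntimeError:
    raised = True
```

The fix in the server code is the same in either language: only call unlock() on paths where the same thread previously called lock(), typically by pairing them in a try/finally around the guarded section.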
Slow evaluation of large matrix I have been trying to evaluate the matrix ce[n] in the code below but it is taking too long, and keeps running longer as you increase n (still running after 6 hours for just n=5). I am interested in calculating ce[n] for n=10. Does anyone have a suggestion to solve the issue? Your support is much appreciated! Here is the code: phi[x_]:=Piecewise[{{1,0<=x<1},{0,True}}] psi1[x_] := (phi[2 x] - phi[2 x - 1]); psijk[x_, j_, k_] := Piecewise[{{(Sqrt[2])^j psi1[2^j x - k],0 <= j}, {(2)^j psi1[2^j (x - k)], j <= -1}}] PSI[j_, k_, l_, s_] := Integrate[psijk[x, j, k]*psijk[x, l, s], {x, -1, 1}] - Integrate[x*t*(1 + x*t)*psijk[t, j, k]*psijk[x, l, s], {x, -1, 1}, {t, -1, 1}] ce[n_] := ArrayReshape[ Table[PSI[j, k, l, s], {j, -n, n}, {k, -2^n, (2^n)-1}, {l, -n, n}, {s,-2^n, (2^n)-1}], {(2n+1)2^(n+1), (2n+1)2^(n+1)}]; So you want to generate 1.7 billion entries. Is that correct? From your indices in Table; n = 10; 2 n*(2*2^n - 1)*2 n*(2*2^n - 1). @Edmund it is 1,849,688,064 entries if n=10. For n=5 it is 495616 entries but it is still running. The size of the matrix will be (2n+1)2^(n+1) by (2n+1)2^(n+1) I was just checking that you knew how many entries you were creating. @Edmund So from your experience is it possible to find a solution for the issue? You will need a lot of memory. With my rough calculation of 16 bytes per entry you will need 27Gb to store the result; n = 10; len = 2 n*(2*2^n - 1)*2 n*(2*2^n - 1); N@UnitConvert[Quantity[len 16, "Bytes"], "Gigabytes"]. But some of the more experienced people here could tell you more. Also, try NIntegrate and phi should use SetDelayed and not Set. @Bill so, dear Bill, how do I skip the reshape? In fact I need to find the pseudoinverse of [ce] times a certain vector; that is all I need, and I don't want to store the huge matrix. How can I do that? I put in the new definition of Boole you provided but I got the following error: "Tag Times in for\ is\ n\ GeoPosition[{24.47,54.37}] is Protected."
Let us continue this discussion in chat.
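The memory arithmetic from the comments is worth checking explicitly. Using the matrix side length (2n+1)*2^(n+1) from the ArrayReshape call, and Edmund's assumption of 16 bytes per entry, a few lines suffice (Edmund's slightly different index count, 2n(2*2^n - 1), is where his ~27 Gb figure came from):

```python
def matrix_entries(n):
    # The matrix is (2n+1)*2^(n+1) on a side, per the ArrayReshape call.
    side = (2 * n + 1) * 2 ** (n + 1)
    return side * side

# n = 5 reproduces the 495616 entries mentioned in the comments.
entries_n5 = matrix_entries(5)

# n = 10 gives ~1.85e9 entries; at 16 bytes each that is ~29.6 GB.
entries_n10 = matrix_entries(10)
gb_n10 = entries_n10 * 16 / 1e9
```

Either way the conclusion is the same: the full dense matrix does not fit comfortably in memory, which is why avoiding the explicit Table/ArrayReshape and working with matrix-vector products instead is the more promising route.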
Is it possible to bring a sparkline from Microsoft Excel into Microsoft Word? I'm pretty much trying to show the small sparkline from an Excel report in a table within a Microsoft Word document. The only guidance that I have found so far is to convert each individual sparkline into an image. Is there a better way? You can click Insert > Table > Excel Spreadsheet. Create a table with the small sparkline. Or you can directly copy and paste the table in the Excel report into the blank Excel Spreadsheet. If you just want to insert the small sparkline, it won't work if you copy and paste the sparkline only. There is another way that worked really well for me, but you might need an Adobe Acrobat editor. First you set up your spreadsheet in the print preview the way you want it to look. Second you save the Excel file as a PDF. Next you export the PDF to a Word document. The end result is a Word document with your sparklines perfectly transferred. One problem I have yet to resolve with this is that the result looks perfect, but it doesn't move around the document cleanly
Animating a UISlider I'm animating a UIView, UIText, UILabel and UIImage element in my app to transition into position on button click, which works perfectly. However, the code I am using doesn't work on a UISlider and I can't figure out why. Can anyone help me out? My code is: [UIView animateWithDuration:0.8f delay:0.0f options:UIViewAnimationOptionTransitionNone animations: ^{ if (!viewExpanded) { //x y width height viewExpanded = YES; //going down _blueMove.frame = CGRectMake(0, 0, 500, 150); //slide down panel _titleLabel.frame = CGRectMake(30, 30, 247, 44); // player title _timeElapsed.frame = CGRectMake(40, 95, 34, 21); //time counter _duration.frame = CGRectMake(230, 95, 35, 21); //duration _playerBg.frame = CGRectMake(0, 84, 320, 45); //player bg colour box }else{ //going up viewExpanded = NO; _blueMove.frame = CGRectMake(0, 0, 500, 0);//slide down panel _titleLabel.frame = CGRectMake(30, -40, 274, 44);// player title _timeElapsed.frame = CGRectMake(40, -20, 34, 21); //time counter _duration.frame = CGRectMake(230, -20, 35, 21); //duration //_currentTimeSlider = CGRectMake(40, 0, 175, 34); //time slider _playerBg.frame = CGRectMake(0, -45, 320, 45); //player bg colour box } You can see I've commented out an example of what I'm using in the else statement to animate my UISlider but this throws up an error. Thanks for any help. Shouldn't it be _currentTimeSlider.frame = ...? The code you have posted has the slider commented out //_currentTimeSlider = CGRectMake(40, 0, 175, 34); //time slider And it's also missing .frame
Calculate phase margin of an open-loop system I have this system: $$\frac{(s+2)^2}{s(s-4)^2}$$ To calculate the phase and magnitude margins, I used the margin function in MATLAB, and it says that the phase margin is 112°. When evaluating the phase of the transfer function at the crossover frequency, I get -428°. The only way I found to get the same result as MATLAB is to add 360° as well as the 180° that you normally add to get the phase margin. When is it OK to add those 360°? Why does MATLAB not add 360° when tracing the phase Bode plot? (The plot goes from -450° to -90°.) Is this in any way related to minimum-phase systems? You can always add or subtract a multiple of 360°. I think it is easier to see why when drawing the Nyquist diagram. However, in this case the closed loop will not be stable because the open loop is unstable (two poles at $s = 4$) and there are no encirclements of the minus-one point in the Nyquist diagram. Since one normally only talks about margins when the system is stable, you could say that this system has no margins.
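The ±360° puzzle disappears if you check the numbers directly: evaluate H(jω) as a complex number, find the gain crossover |H| = 1, and take the principal-value phase, which already lands in (-180°, 180°]. The margin is then just 180° plus that phase. A sketch with plain bisection, no toolboxes:

```python
import cmath

def H(w):
    s = 1j * w
    return (s + 2) ** 2 / (s * (s - 4) ** 2)

# Bisect |H(jw)| - 1 = 0 to find the gain-crossover frequency
# (|H| falls from ~2.5 at w=0.1 to ~0.29 at w=1).
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if abs(H(mid)) > 1:
        lo = mid
    else:
        hi = mid
wc = (lo + hi) / 2

# Principal-value phase in degrees: -428 deg and -68 deg are the same angle.
phase_deg = cmath.phase(H(wc)) * 180 / cmath.pi
pm = 180 + phase_deg          # phase margin, approximately 112 degrees
```

This also explains the original observation: the Bode trace keeps the phase continuous across frequency (hence values like -428°), while the margin computation effectively works with the principal value, so the 360° never needs to be added by hand. Whether the 112° means anything here is a separate issue: as the answer notes, the open loop is unstable, so the Nyquist criterion, not the Bode margin, is the right stability tool.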
Setting a contact to "deceased" causes the contact to lose its subtype Drupal 7 with CiviCRM 5.68. Setting a contact to "deceased" causes the contact to lose its subtype, and all the custom fields which are visible/filled because the contact had a subtype. How can I prevent losing the subtype? Thanks. I can't reproduce this. Do you have any related extensions that might cause it? I found the string, which is shown in a JavaScript alert. It's in templates/CRM/Contact/Form/Contact.tpl, in line 222. That string is normal when you remove a contact subtype, but setting to deceased shouldn't cause it to remove contact subtypes. Do you have any extensions installed related to subtypes or deceased? No. I have Contact Layout Editor installed, which is somehow related to contact data, and one trigger on the civicrm_contact table to update the is_deceased boolean if the date of decease is empty. Here are my extensions: AuthX, Access Control by Financial Type for Reports, CiviCRM Export to Excel, CiviCRM Log Viewer, CiviCampaign, CiviContribute, CiviEvent, CiviMail, CiviMember, CiviPledge, CiviReport, AdminUI, CiviGrant, CKEditor4, Contribution cancel actions, Civisualize, Finsbury Park Theme, Custom search framework, Message Administration, CSV GUI Import to api, ExtendedReport, CiviRules, Form Core, FormBuilder, Contact Layout Editor, FlexMailer, SearchKit, Pivot Report, General Data Protection Regulation, Email Address Corrector I found an extremely interesting log entry: Array ( [civi.tag] => deprecated ) Could this cause the contact to lose its subtype? Maybe. There was a recent update for that warning - can you try applying this patch: https://github.com/civicrm/civicrm-core/pull/28612/files
StructureMap IoC/DI and object creation I'm building a small web shop with ASP.NET MVC and StructureMap IoC/DI. My Basket class uses the session object for persistence, and I want to use SM to create my basket object through the IBasket interface. My basket implementation needs HttpSessionStateBase (the session state wrapper from MVC) in its constructor, which is available inside a Controller/Action. How do I register my IBasket implementation for SM? This is my basket interface: public interface IBasketService { BasketContent GetBasket(); void AddItem(Product productItem); void RemoveItem(Guid guid); } And SM registration: ForRequestedType(typeof (IBasketService)).TheDefaultIsConcreteType(typeof (StoreBasketService)); But my StoreBasketService implementation has constructor: public StoreBasketService(HttpSessionStateBase sessionState) How do I provide the HttpSessionStateBase object to SM, which is available only in the controller? This is my first use of SM IoC/DI, and I can't find a solution/example in the official documentation or web site ;) If you absolutely have to have your StoreBasketService use the session, I'd be tempted to define an interface and wrapper around HttpSessionState instead of using HttpSessionStateBase so that you can register it with StructureMap as well. The wrapper would get the session state from the current context. Register the wrapper with StructureMap and then have your StoreBasketService take the interface as the argument to the constructor. StructureMap should then know how to create an instance of the interface wrapper and inject it into your StoreBasketService class. Using an interface and wrapper will allow you to mock the wrapper in your unit tests, much in the same way HttpSessionStateBase allows mocking the actual session.
public interface IHttpSessionStateWrapper { HttpSessionState GetSessionState(); } public class HttpSessionStateWrapper : IHttpSessionStateWrapper { public virtual HttpSessionState GetSessionState() { return HttpContext.Current.Session; } } ForRequestedType(typeof(IHttpSessionStateWrapper)) .TheDefaultIsConcreteType(typeof(HttpSessionStateWrapper)); public class StoreBasketService { HttpSessionState session; public StoreBasketService( IHttpSessionStateWrapper wrapper ) { session = wrapper.GetSessionState(); } // basket implementation ... } However, you can have StructureMap actually store your basket in the session using .CacheBy(InstanceScope.HttpContext) when registering it. It may actually be better to have your StoreBasketService implement internal storage instead of storing things in the session -- then you lose the dependency on the session state entirely (from the perspective of your class) and your solution could be simpler. Your internal storage could be a Dictionary<Guid,Product> since this is how you access them via your interface. See also: http://www.lostechies.com/blogs/chad_myers/archive/2008/07/15/structuremap-basic-scenario-usage.aspx http://www.lostechies.com/blogs/chad_myers/archive/2008/07/17/structuremap-medium-level-usage-scenarios.aspx I will give it a try today, thanks. But for caching of the Basket class with SM, InstanceScope.HttpContext stores the object in the HttpContext.Items dictionary, which is available just over one page request, but I want it through the entire user's session. ForRequestedType<IBasketService>() .TheDefault.Is.OfConcreteType<StoreBasketService>() .WithCtorArg("sessionState").EqualTo(HttpContext.Current.Session); Does that work?
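The testability argument behind the wrapper is language-independent. A minimal Python analogy (all names hypothetical): hide the ambient session behind a small interface, inject it into the basket service, and swap in a fake in tests:

```python
class SessionWrapper:
    """Production wrapper around some ambient per-user session store."""
    _current = {}                 # stand-in for HttpContext.Current.Session

    def get_session(self):
        return SessionWrapper._current

class FakeSessionWrapper:
    """Test double: hands out an isolated dict instead of the real session."""
    def __init__(self):
        self._session = {}

    def get_session(self):
        return self._session

class StoreBasketService:
    # The service depends only on the wrapper interface, so a DI container
    # (or a test) can hand it either implementation.
    def __init__(self, wrapper):
        self.session = wrapper.get_session()

    def add_item(self, key, product):
        self.session[key] = product

basket = StoreBasketService(FakeSessionWrapper())
basket.add_item("p1", "book")
```

Because the service never touches the global session directly, unit tests run without any web context at all, which is exactly the benefit the answer claims for the C# wrapper.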
Changing background color in vim at a certain column I'd like to be able to highlight the wrap margin/text width in vim by changing the background color (or maybe just a line?). A lot of IDEs have this. I mocked up what I'm talking about: Anyone know if this can be done in MacVim or gVim? Try this: :match ErrorMsg '\%>80v.\+' It will highlight text beyond 80 characters; you can replace '80' with whatever wrap-width you have. However, it will only highlight the characters that exceed the width, and then only on lines that are actually longer than the width. Check http://vim.wikia.com/wiki/Highlight_long_lines for more info, but they all pretty much accomplish the same thing. Yes! That works well; I wasn't wrapping my head around the fact that this is a way to "highlight long lines" (which your link is the first hit for). Since Vim 7.3 it's possible to have columns highlighted like this: To set it to the current textwidth: :set cc=+1 Or you can set it to a predefined value: :set cc=80 You can change its color like this: :hi ColorColumn ctermbg=lightgrey guibg=lightgrey See help for more details: :help colorcolumn autocmd FileType * execute "setlocal colorcolumn=" . join(range(&textwidth,250), ',') highlight ColorColumn guibg=#303030 ctermbg=0 Big problem with this is that the colorcolumn highlighting has higher priority than hlsearch! So basically you won't be able to see highlighted search items beyond that margin... You will obviously have to pick the right bg colors for your colorscheme.
WLP's MicroProfile (FaultTolerance) Timeout implementation does not interrupt threads? I'm testing WebSphere Liberty's fault tolerance (MicroProfile) implementation. Therefore I made a simple REST service with a resource which sleeps for 5 seconds: @Path("client") public class Client { @GET @Path("timeout") public Response getClientTimeout() throws InterruptedException { Thread.sleep(5000); return Response.ok().entity("text").build(); } } I call this client within the same application from another REST service: @Path("mpfaulttolerance") @RequestScoped public class MpFaultToleranceController { @GET @Path("timeout") @Timeout(4) public Response getFailingRequest() { System.out.println("start"); // calls the 5-seconds resource; should time out Response response = ClientBuilder.newClient().target("http://localhost:9080").path("/resilience/api/client/timeout").request().get(); System.out.println("hello"); return response; } } Now I'd expect that the method getFailingRequest() would time out after 4 ms and throw an exception. The actual behaviour is that the application prints "start", waits 5 seconds until the client returns, prints "hello", and then throws an "org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException". I turned on further debug information: <logging traceSpecification="com.ibm.ws.microprofile.*=all" /> in server.xml. I get the information that the timeout is registered even before the client is called! But the thread is not interrupted. (If someone tells me how to get the stack trace pretty in here... I can do that.) Since this is a very basic example: am I doing anything wrong here? What can I do to make this example run properly? Thanks Edit: Running this example on WebSphere Application Server <IP_ADDRESS>/wlp-1.0.21.cl180220180619-0403) on Java HotSpot(TM) 64-Bit Server VM, Version 1.8.0_172-b11 (de_DE) with the features webProfile-8.0, mpFaultTolerance-1.0 and localConnector-1.0. Edit: Solution, thanks to Andy McCright and Azquelt.
Since the call cannot be interrupted, I have to make it asynchronous. So there are two threads: the first one invokes the second, which performs the call. The first thread will be interrupted; the second keeps running until the call finishes. But now you can go on with failure handling, open the circuit and so on, to prevent making further calls to the broken service. @Path("mpfaulttolerance") @RequestScoped public class MpFaultToleranceController { @Inject private TestBase test; @GET @Path("timeout") @Timeout(4) public Response getFailingRequest() throws InterruptedException, ExecutionException { Future<Response> resp = test.createFailingRequestToClientAsynch(); return resp.get(); } } And the client call: @ApplicationScoped public class TestBase { @Asynchronous public Future<Response> createFailingRequestToClientAsynch() { Response response = ClientBuilder.newClient().target("http://localhost:9080").path("/resilience/api/client/timeout").request().get(); return CompletableFuture.completedFuture(response); } } Hi @Fdot - how are you invoking the remote JAX-RS resource? Some calls (particularly networking calls) are non-interruptible. The MP community is actively investigating an approach to resolve this issue by dynamically setting the connection and read timeouts of the resource to the value specified in the @Timeout annotation - but this would apply primarily to the MP Rest Client - and would not be available until a future version of MP. I am using the JAX-RS client: Response response = ClientBuilder.newClient().target("http://localhost:9080").path("/resilience/api/client/timeout").request().get(); But there are other calls before and after it, so there would be an opportunity to interrupt the method earlier. It does interrupt threads using Thread.interrupt(), but unfortunately not all Java operations respond to thread interrupts.
Lots of things do respond to interrupts by throwing an InterruptedException (like Thread.sleep(), Object.wait(), Future.get() and subclasses of InterruptibleChannel) but InputStreams and Sockets don't. I suspect that you (or the library you're using to make the request) are using a Socket, which isn't interruptible, so you don't see your method return early. It's particularly unintuitive because Liberty's JAX-RS client doesn't respond to thread interrupts, as Andy McCright mentioned. We're aware it's not a great situation and we're working on making it better. I guessed it could be something like this. But then the complete MicroProfile Fault Tolerance would be pointless. The timeout is a killer feature. If this does not work, nothing works for me (at least I'd combine it with retry or circuit breaker). I found a solution thanks to your comment about Future.get(). I'll edit my question, because I can format code there. Thanks! @Fdot, thank you for reporting this! We noticed this and raised the issue in Open Liberty https://github.com/OpenLiberty/open-liberty/issues/3918. Unfortunately the fix should be done in JAX-RS. Andy McCright, who commented earlier, has been investigating this issue and hopes to provide a solution. By the way, Open Liberty is not the only server that suffers this defect. Thorntail hits the same issue. See https://github.com/eclipse/microprofile-rest-client/issues/107, which is a similar issue. Please monitor these issues for the upcoming fix. I had the same problem. For some URLs I consume, the Fault Tolerance timeout doesn't work. In my case I use RestClient. I solved my problem using the readTimeout() of the RestClientBuilder: MyRestClientClass myRestClientClass = RestClientBuilder.newBuilder().baseUri(uri).readTimeout(3L, TimeUnit.SECONDS) .build(MyRestClientClass.class); One advantage of using this timeout control is that you can pass the timeout as a parameter.
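The distinction drawn above — some blocking operations respond to Thread.interrupt() while socket I/O does not — can be seen with a minimal, self-contained sketch in plain Java (no Liberty or MicroProfile involved):

```java
public class InterruptDemo {
    // Returns true if the worker observed the interrupt while sleeping.
    static boolean sleepWasInterrupted() throws InterruptedException {
        final boolean[] interrupted = {false};
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(5_000); // Thread.sleep responds to interrupts
            } catch (InterruptedException e) {
                interrupted[0] = true; // unblocked well before 5 seconds pass
            }
        });
        worker.start();
        Thread.sleep(100);  // give the worker a moment to start sleeping
        worker.interrupt(); // ends the sleep immediately
        worker.join();
        return interrupted[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("interrupted: " + sleepWasInterrupted());
    }
}
```

Swap the Thread.sleep(5_000) for a blocking read on a java.net.Socket InputStream and the interrupt would go unnoticed until the read completes — matching the behaviour reported for the JAX-RS client call, and why the @Asynchronous workaround interrupts the outer thread while the inner one finishes on its own.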
GA4 event firing twice on Vue async method I've been trying to track a product selection on my website using vue-gtag. The event is sent using an abstraction I created called 'notification'. I've been capturing all events successfully except this one, which is being fired twice for some reason. Here's the piece of code: methods: { ...mapActions('menu', { updateCurrentProduct: 'updateCurrentProduct', getOriginators: 'getOriginators', }), async selectProduct(product, parent) { await this.$dialog.confirm({ text: 'Deseja realmente alterar? Você será reiniciado para a tela inicial.', title: 'Alterar produto', actions: { false: { text: 'Cancelar', color: 'primary', }, true: { text: 'Sim', color: 'secondary', handle: () => { this.updateCurrentProduct({ ...product, parent, }); let selectedProduct = product.register.name; notification('ANALYTICS_EVENT', { event: 'product_selection', payload: { selectedProduct, }, }); this.$emit('onClose', { ...product }); }, }, }, }); }, }, In addition, when using the GA debug extension, the duplicate shows up there as well (screenshot omitted). I'm new to JavaScript and Vue coding. That being said, I tried to simply use a console.log in the exact same spot as the tag code and it is only printed once, and that's what's bugging me. Is there something I could do to eliminate the double sending? Don't use the GA debug extension. It's outdated. Use Adswerve dataLayer inspector. Pay attention to the last update date when installing extensions. Double-check the number of your events in the network tab to make sure they're really duplicating. Show the screenshot of your dataLayer, check it for duplicates. Don't implement analytics through gtag.js. Use GTM. Reasons? Maintainability, scalability, manageability, abstraction of completely unneeded complexity and awkward architecture of the gtag API. Any suggestion if console.log shows the event creation code executed only once, but the network tab shows the event is fired twice? The 2nd event comes after a ~4-5s delay.
Their payloads are exactly the same, except that the "_et" and "tfd" fields have different values, and the "_ee" field is missing in the 2nd event. No, the extension should pick it up. If it doesn't, there's something wrong with the endpoint or the response code probably. But hey, try the Omnibug extension instead. The Adswerve extension is core analytics only. Omnibug tries to capture all the marketing pixels plus the core analytics. Maybe that's a pixel you're looking at. Though a pixel's payload would be significantly different.
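If the network tab confirms a genuine double send from the page (rather than a GA-side artifact), one defensive option — separate from finding the root cause — is to swallow repeated sends of an identical payload within a short window. This is a generic sketch; makeDeduper and sink are illustrative names, not part of the vue-gtag API:

```javascript
// Wraps an event dispatcher so that an identical (name, payload) pair
// arriving again within `windowMs` milliseconds is dropped.
function makeDeduper(windowMs) {
  const lastSent = new Map(); // event key -> timestamp of last real send
  return function send(eventName, payload, sink, now = Date.now()) {
    const key = eventName + JSON.stringify(payload);
    const prev = lastSent.get(key);
    if (prev !== undefined && now - prev < windowMs) {
      return false; // duplicate inside the window: swallowed
    }
    lastSent.set(key, now);
    sink(eventName, payload); // the real dispatch, e.g. your notification()
    return true;
  };
}
```

With the ~4-5 s delay observed above, a window of a few seconds would catch the duplicate — but this only masks the symptom, so the GTM/network-tab debugging advice still applies.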
Downgrade Ubuntu 22.10 to 20.04 for ROS I would like to downgrade to Ubuntu 20.04 as it is officially supported by ROS Noetic Ninjemys. What are the exact steps that I can follow without breaking my machine? There is no way to "downgrade" safely. The only solution for your system is to back up the data you want to keep to external media, and then do a clean install of Ubuntu 20.04. However, keep in mind that 20.04 is old and 22.04 is the current LTS. Unless you absolutely need Ubuntu 20.04, you should use Ubuntu 22.04. As per the ROS Noetic Ninjemys website, it officially supports 20.04. I could not install ROS on 22.10, so yeah, I have to do a clean install. Plus, I am wondering if there is an option to just downgrade it to 20.04 without losing my data, using a bootable flash drive. There is no downgrade option. There is no downgrade on the bootable USB media either. Your only option is a clean install after you back up the data you need/want to keep to external media. I'm not familiar with ROS at all, but would Docker be an option? Can you run 20.04 in a virtual machine instead? I'll provide an FYI, but you didn't specify what 22.10 product you're trying to downgrade, as that very much matters. You can non-destructively re-install an Ubuntu Desktop system (this works with flavors too, being desktops!), which will cause all Ubuntu repository packages to be re-installed automatically (if available for that release), and no user file/config is touched... As there is no check on release, you can re-install the same, a newer, or in fact an older release too! But please read the next comment first. HOWEVER, there is nothing done to ensure data modified by newer programs is readable via the older programs; so that's homework you need to perform yourself, package by package, if data matters to you. This can have consequences for your data if the older version of the app can't deal with your data files modified by newer versions of the same app...
I've experienced problems with this (not easy to detect either!), so ensure you do your homework well before using it, for every package/app whose data matters to you. To be clear, this applies to Desktop installs only.
How do I make the scroll bars show up on a TScrollBox? The TScrollBox control looks like it's supposed to basically be a TPanel with scroll bars attached along the bottom and the right edge. I tried placing one on a form, but no matter what I do, I can't make the scroll bars actually appear, either at design-time or at runtime. Does anyone know how to make them show up? Set the AutoScroll property to True. Now if you add controls that clip the box borders, the bars will appear. OK, it looks like I've got the wrong control, then. I need scroll bars that can be controlled programmatically, based on factors other than the sizes and positions of the controls inside the box. @Mason Wheeler, in case you don't find a proper control for your task, an ugly solution would be to place a panel inside the scroll box. By resizing the panel you can adjust the scroll bars. Any other control must reside on that panel. Of course, if you need more control, you can always use TScrollBar controls. Mason, you can't see the scroll bars until there's actually something to scroll to. To see the scroll bars, try this: 1. Set the BorderStyle property of the form to bsSingle 2. Insert a button on the form 3. Put a TScrollBox on the form 4. Set the Align property of the TScrollBox to alClient 5. Run this code in the button's click handler: procedure TForm10.Button1Click(Sender: TObject); var i: integer; ed: TEdit; begin for i := 1 to 30 do begin ed := TEdit.Create(self); ed.Parent := ScrollBox1; ed.Top := 5 + ((i - 1) * 30); ed.Left := 10; ed.Width := 100; ed.Text := 'Editext' + IntToStr(i); end; end; Bye. If I'm not mistaken (no Delphi around to check) it suffices to set HorzScrollBar.Range big enough.
EDIT: IIUC this DFM does what you want - entirely at design-time: object Form1: TForm1 Left = 0 Top = 0 Caption = 'Form1' ClientHeight = 206 ClientWidth = 312 Color = clBtnFace ParentFont = True OldCreateOrder = True PixelsPerInch = 96 TextHeight = 13 object ScrollBox1: TScrollBox Left = 8 Top = 8 Width = 150 Height = 150 HorzScrollBar.Range = 300 VertScrollBar.Range = 300 AutoScroll = False TabOrder = 0 end end
IOS NAT with multiple interfaces I am trying to create a DMZ type environment between two internal networks using Cisco NAT. The below configuration is working to permit the two 'nat outside' interfaces to access the dmz 'nat inside' server via the <IP_ADDRESS> & <IP_ADDRESS> hide nat IP addresses. However the server (<IP_ADDRESS>) is not being nat'd when it accesses the other two devices. I need this to bi-directionally nat in order to hide the 'nat inside' network. For instance, <IP_ADDRESS> is able to ping/telnet to <IP_ADDRESS> and access the <IP_ADDRESS> device and is successfully translated. However, when <IP_ADDRESS> telnets to <IP_ADDRESS> the traffic shows up with the original ip address and not the desired <IP_ADDRESS> address. ! interface GigabitEthernet0/0 description insidetrusted ip address <IP_ADDRESS> <IP_ADDRESS> ip nat outside ! interface GigabitEthernet0/1 description dmz ip address <IP_ADDRESS> <IP_ADDRESS> ip nat inside ! interface GigabitEthernet0/2 description outside ip address <IP_ADDRESS> <IP_ADDRESS> ip nat outside ! ! ip nat inside source static <IP_ADDRESS> <IP_ADDRESS> route-map netoutside ip nat inside source static <IP_ADDRESS> <IP_ADDRESS> route-map netinside ! route-map netoutside permit 10 match ip address 101 match interface GigabitEthernet0/2 ! route-map netinside permit 10 match ip address 100 match interface GigabitEthernet0/0 ! Resources I've used in my research: How to configure static NAT withroute-maps Network Address Translation (NAT) Introduction Configuring Network Address Translation: Getting Started IOS NAT Load-Balancing with Optimized Edge Routing for Two Internet Connections NAT Support for Multiple Pools Using Route Maps The source of the problem was the 'log' keyword at the end of my ACL's. access-list 100 permit ip host <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS> **log** access-list 101 permit ip host <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS> **log** According to Cisco, you should never use 'log' with NAT ACLs. 
The observed behavior was that the traffic would be properly NAT'd in one direction, but in the other the NAT would not be applied. The corrected ACL's are: access-list 100 permit ip host <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS> access-list 101 permit ip host <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS>
Why label encoding before split is data leakage? I want to ask why Label Encoding before the train-test split is considered data leakage? From my point of view, it is not. Because, for example, you encode "good" to 2, "neutral" to 1 and "bad" to 0. It will be the same for both train and test sets. So, why do we have to split first and then do label encoding? Where does this presumption come from? I don't see why - as you described the problem - this should lead to leakage. Imagine that after the split there is no "good" in the training data. If you had done the encoding after the split, then you would have no idea that there can be a "good". There you have your leakage. As you mentioned, we have a problem when we do encoding after the split. So why do you prefer encoding after the split? I still didn't get why it is leakage. Can you please give some clarification? Thanks in advance. The problem we encounter when doing the split before encoding is just the real world, where we do not have perfect information about the data that our system will be fed in production. That is why we must evaluate our model on unseen data. If you split after encoding, you are evaluating your model under a false premise of knowledge about that very unseen data. @Anar I condensed my comments as an answer.
If you perform the encoding before the split, it will lead to data leakage (train-test contamination), in the sense that you will introduce new data (the integers produced by the label encoder) and use it for your models, which will affect the end prediction results (good validation scores but poor performance in deployment). Suppose the test data has a new class which was not available in the train data; if you do label encoding on the full dataset, that class will be available to the model, which leads to data leakage. Once the train and validation data categories are matched up, you can perform fit_transform on the train data, then only transform on the validation data - based on the encoding maps from the train data. Almost all feature engineering, like standardisation, normalisation, etc., should be done after the train-test split. Hope it helps
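The split-first workflow can be made concrete with a minimal sketch in plain Python (the category values are illustrative; sklearn's LabelEncoder follows the same fit-on-train, transform-on-test pattern):

```python
# Split is done *first*; the test fold happens to contain a category
# ("neutral") that the training fold never saw.
train = ["good", "bad", "good", "bad"]
test = ["bad", "neutral", "good"]

# "Fit": the categories and their codes come from the training data alone.
mapping = {cat: code for code, cat in enumerate(sorted(set(train)))}
# mapping == {"bad": 0, "good": 1}

train_codes = [mapping[v] for v in train]

# "Transform" on test: an unseen category gets an explicit sentinel
# instead of a code the model was never trained on -- exactly the
# situation you face with genuinely new data in production.
test_codes = [mapping.get(v, -1) for v in test]

print(train_codes)  # [1, 0, 1, 0]
print(test_codes)   # [0, -1, 1]
```

Encoding before the split would have silently given "neutral" a code, hiding the fact that the model never saw that category — which is the leakage being described.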
Force between proton in a conducting shell and electron outside of shell? There's a proton inside a conducting shell and an electron outside of it. Inside the shell, there is no field due to the electron, but the electron feels the field due to the proton. Therefore the electron should move towards the immobile proton, but what happened to Newton's third law? Does the whole shell move? "Inside the shell, there is no field due to the electron," Can you say a bit more about that? Imagine the system without the proton. There's no field inside a conductor (essentially acts as a Faraday's Cage). Yes but the problem is that there's a proton inside which feels no force from the electron but the electron feels a force from the proton You seem to be confused as to where the electron and proton actually reside. shells are found on the beach. Please give a drawing of your problem. Why do you think that the electron feels a force from the proton? is the shell open? then the two will feel a force from each other at the opening. If it is closed it is a faraday cage and neither feels the other, only their image on the respective surface https://en.wikipedia.org/wiki/Method_of_image_charges this shows what happens https://en.wikipedia.org/wiki/Method_of_image_charges The whole shell moves because the electric field experienced by the outer electron has its origin in positive charges induced by the inner proton on the outer metal surface equivalent to the proton charge. The metal shell would even move without a proton inside because the electron induces positive charges on the surface near the electron so that a net attraction occurs. All this, of course, are minimal effects when the whole setup is floating in free space. And, as M. Enns has alluded to above, there is, of course, a field due to the proton inside the shell which induces negative charges on the inner surface that exert a force on the proton. I would edit this just to make it a little more clear. 
At the moment, it sounds confusing. The error that you have made is to apply Newton's third law incorrectly. On the conducting sphere there are two sets of induced charge. One set of positive and negative induced charges is due to the field of the electron having to be negated so that there is no electric field inside the conducting sphere. These positive and negative induced charges reside on the outside of the conducting sphere. So you are correct in your assertion that the proton does not feel the effect of the electron directly. Put another way - the proton does not know that there is an electron outside the conducting shell. The other set of induced charges is produced by the proton, and it is present to ensure that the electric field inside the conductor is zero. These charges reside on both the inside (negative) and the outside (positive) of the conducting sphere. So on the outside of the conducting sphere you have induced charges produced by the presence of the electron and the proton. There is a force on the electron due to the induced charges on the outside of the conducting sphere, and the Newton's third law pair to this force is that there is a force on the induced charges on the outside of the conducting sphere due to the electron. So indirectly, via the induced charges on the outside of the conducting sphere produced by the proton, the electron feels the presence of the proton. Inside the sphere there is a force on the proton due to the negative charges that it has induced on the inside of the sphere, and the Newton's third law pair to this force is that there is a force on the induced negative charges on the inside of the sphere due to the presence of the proton.
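The method-of-image-charges link in the comments can be made quantitative. As a sketch of the simplest related case — a point charge q a distance d (> R) from the centre of a *grounded* conducting sphere of radius R; the shell in the question is isolated rather than grounded, so the numbers differ, but the structure of the argument is the same — the induced surface charge is equivalent to an image charge, and the attraction works out to:

```latex
% Image charge for a point charge q outside a grounded sphere of radius R:
q' = -\frac{R}{d}\,q
\qquad \text{located a distance } \frac{R^2}{d} \text{ from the centre}

% Attractive force between q and the induced surface charge:
F = \frac{1}{4\pi\varepsilon_0}\,\frac{q\,|q'|}{\left(d - R^2/d\right)^2}
  = \frac{1}{4\pi\varepsilon_0}\,\frac{q^2 R\,d}{\left(d^2 - R^2\right)^2}
```

This is the force the external electron feels from the charges induced on the outer surface; its Newton's-third-law partner acts on those induced charges, i.e. on the shell — consistent with the answer above that the shell, not the shielded proton, carries the reaction.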
How can I add a header to a PDF file using iText in C#? I am trying to create a PDF using iText. I am trying to add a header that will appear on every page, but I want to add some text, like the report name, that will be different for every page. How can I solve this problem? If you don't understand what I'm trying to ask, please see the example below: In Page 1 Report Name: Test Report Emp_id Emp_Name Emp_sal ----- ----- ------ ----- ----- ------ ----- ----- ------ In Page 2 Emp_id Emp_Name Emp_sal ----- ----- ------ ----- ----- ------ ----- ----- ------ Note: In Page 2, "Report Name" is not repeating. Do you know how to create a new page? If you do, then just add the part you require at the beginning of the new page. In iText 5 you can create custom headers using page events. using System.Collections.Generic; using iTextSharp.text; using iTextSharp.text.pdf; namespace PDFLib.PageEvents { public class CustomPageEvent : PdfPageEventHelper { private int _page; private readonly Rectangle _pageSize; private readonly Rectangle _marges; private readonly string _text; private static readonly Font FontHf = FontFactory.GetFont(FontFactory.HELVETICA_OBLIQUE, 9); private readonly Dictionary<int, float> _posicions; public CustomPageEvent(string text, Rectangle pageSize){ _text = text; _pageSize = pageSize; _marges = new Rectangle(10, 10, _pageSize.Width - 20, _pageSize.Height - 20); _posicions = new Dictionary<int, float> { {Element.ALIGN_LEFT, _marges.Left + 10}, {Element.ALIGN_RIGHT, _marges.Right - 10}, {Element.ALIGN_CENTER, (_marges.Left + _marges.Right)/2} }; } public override void OnStartPage(PdfWriter writer, Document document) { _page++; base.OnStartPage(writer, document); } public override void OnOpenDocument(PdfWriter writer, Document document) { _page = 0; } public override void OnEndPage(PdfWriter writer, Document document) { if (_page == 1) ColumnText.ShowTextAligned(writer.DirectContent, Element.ALIGN_CENTER, new Phrase(string.Format("{0}", _text), FontHf), _posicions[Element.ALIGN_CENTER], _marges.Top - 10, 0); base.OnEndPage(writer, document); } } }
Something like this. It can be simplified, as I extracted this from a page event that did many more things, such as adding watermarks and custom headers and footers. That should do quite nicely. Bhaskara might need to play around with the page margins to reserve enough space for the header, but other than that you should be good to go.
Disable Login Prompts in mod_auth_sspi_1.0.4-2.2.2 What I'm trying to do: Create a home page on our company's intranet that automatically grabs the logged-in Windows username of the person viewing the page, without the person being prompted to enter these credentials when the page loads. Currently, I just want it to grab the local username, since it'll be a while before our IT guys get a domain set up. For example, right now I'd want it to capture the "(PC-Name)\windows.user.name" without any prompts. Environment: Apache 2.2.21 on Windows 7 x64 (will be on CentOS once it's in production). PHP 5.3.8 (VC9-ZTS). Internet Explorer 9.0.8x and Firefox 6.0.2 (will worry about Chrome later). Current test page is just a PHP script calling print_r( $_SERVER ). To keep things simple, the directory I'm testing this on is not a VirtualHost. Steps I've taken thus far: Downloaded mod_sspi_1.0.4-2.2.2 from SourceForge and extracted mod_auth_sspi.so to the Apache modules directory. Added the module declaration: LoadModule sspi_auth_module modules/mod_auth_sspi.so Added the directory definition: AllowOverride None Options None Order allow,deny Allow from all AuthName "My Intranet" AuthType SSPI SSPIAuth On SSPIAuthoritative Off require valid-user Enabled Integrated Authentication in Firefox by going to about:config and setting network.automatic-ntlm-auth.trusted-uris to the absolute URL path of the PHP script, then restarted Firefox. I haven't done the equivalent step in IE yet, but I will once I get Firefox working, as that's our primary supported browser internally. Restarted Apache and attempted to load the test PHP script. The result: In both IE and Firefox, I get prompted for a username and password before the page loads. I don't want that prompt. I want the username to be detected automatically without a prompt. Troubleshooting thus far: I've tried cycling through the various SSPI options, such as authoritative on/off and whatnot. No effect.
Prompt no longer appears if I remove "require valid-user", but then the username is not passed, either (it isn't NULL; simply not set in the array period). If I hit "Cancel" on the prompt, I get the standard "Authentication Required" page. If I enter an invalid username, or the correct username but with the wrong password, the page will load but the username will be "(PC-Name)\Guest". If I enter the correct username/password, then the username is displayed instead of Guest. Once I enter a username/password in IE or Firefox, the browser remembers that username on subsequent page loads until I clear the stored passwords cache or restart the browser. I've spent the last 3 or so hours Googling and random guessing. Zero success. I've found a few isolated forum posts of people asking this question, but either they went unanswered or offered solutions that I've already tried without success. Again, what I want is for the page to load, WITHOUT any prompting, and display the currently logged-on Windows username in the $_SERVER array output. As far as I can fathom, this is either: A Windows configuration issue, an Apache configuration issue, or a browser configuration issue. Other than that, I'm fresh out of ideas. I would be very grateful for any help you can offer. Thanks! --Kris Sorry didn't read the above entirely, you're working on a windows station. Took a couple days, but I eventually figured it out on my own. Apparently, the various documents and tutorials out there describing the Firefox about:config setting are wrong. They claim that the full URI, including protocol prefix, must be included. As it turns out, the exact opposite is true. As a random shot in the dark, I tried setting it to just "localhost" (the domain the test server's running on). And voila! That fixed it! "http://localhost", on the other hand, caused it to break. 
Once I got it working in Firefox, having verified that the server-side configuration was correct, applying it to IE and Chrome was a cinch. For IE, I just added "http://localhost" (in this case, you do want the protocol prefix) to the "Intranet" zone. And since Chrome makes use of the same network settings that IE uses, that step got it working for both browsers. As far as server-side config goes, it looks like I had that right from the beginning. I was able to simplify it a bit, though, so really all you need in the directory block is this: AuthName "Whatever you want to call your intranet" AuthType SSPI SSPIAuth On require valid-user With this setup, if you point to a PHP script doing a print_r( $_SERVER ), the output will contain something like this: [REMOTE_USER] => dev-kdc-pc01\kris.craig [AUTH_TYPE] => NTLM [PHP_AUTH_USER] => dev-kdc-pc01\kris.craig If you want to get rid of the domain part (i.e. the "dev-kdc-pc01\"), you can either parse it out in PHP or add this line to your SSPI stuff in the directory block in httpd.conf mentioned above: SSPIOmitDomain On Please note that I've only tested this on a Windows system where the Apache webserver was running on localhost. I have not yet tested it in a situation where the Apache server is running on Linux, though that shouldn't have any impact on the results since the server is just accepting whatever the browser sends it. This also requires that the client be running Windows or some other SSPI-compatible environment. I haven't yet determined how to make this work for our Mac-using employees. Also note that I've successfully tested this on a network that does not currently have a domain configured. According to articles published elsewhere, the behavior should be identical on a workstation that is a member of a domain. I hope this helps! Thanks! This might be a partial answer. 
BOOL WINAPI GetUserName( __out LPTSTR lpBuffer, __inout LPDWORD lpnSize ); http://msdn.microsoft.com/en-us/library/windows/desktop/ms724432%28v=vs.85%29.aspx It looks like you can fetch system information using this, the other half might be automating it using PERL or Python to scrape the information, then post it to PHP. Here's the outline on fetching SysInfo in Windows. http://msdn.microsoft.com/en-us/library/windows/desktop/ms724426%28v=vs.85%29.aspx Honestly this is a pretty easy thing to do with PECL/PAM if you are using Linux. Wouldn't this have to be running client-side though? The server will ultimately be running CentOS and will basically just accept whatever the browser claims the credentials to be.
Getting the screen brightness value in iOS 5 I've started to use the new iOS 5 brightness setter in UIScreen. Is there a getter property I can use to know what the display brightness is set to on launch? Thanks very much. The same property. These are the methods I use to store the current brightness before changing it, then reset brightness to the previous value later: - (void)dimScreen { previousBrightness = [UIScreen mainScreen].brightness; [UIScreen mainScreen].brightness = 0; } - (void)restoreScreen { [UIScreen mainScreen].brightness = previousBrightness; } Update: It's useful to note that the brightness reported by UIScreen is only the brightness the user has set in Settings, and does not report the auto brightness adjusted value. If auto brightness is enabled, I am aware of no way to get the adjusted value. As an example, if the user has the brightness slider at 100% in Settings but they are currently in a very dark room, then UIScreen will report a brightness of 1.0, but the true value might be closer to 0.5. iOS does not save this value. After lock/unlock your device will end up with the brightness controlled by system settings. @MikhaloIvanokov In my example code I'm treating "previousBrightness" as an ivar, so yes, it's your responsibility to retain this value.
Bitnami Helm Chart not working due to startup probe fail I'm using Bitnami MySQL Helm Chart (Version 8.8.6) with image tag of 5.7. MySQL container is killed without any meaningful logs. From the K8s logs, the message is as below. Startup probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure. mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)' Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists! The values.yaml I'm using is as below. mysql: image: tag: 5.7 auth: database: ${DATABASE} username: ${USERNAME} password: ${PASSWORD} serviceAccount: create: true name: rds-backup annotations: eks.amazonaws.com/role-arn: "arn:aws:iam::${ID}:role/mysql-dump-v1" primary: service: type: NodePort nodePort: 30006 initContainers: - name: sql-loader image: amazon/aws-cli:latest command: ["/bin/sh", "-c"] args: - echo "Loading the backup snapshot from S3 ..." && aws s3 cp s3://${BUCKET_NAME}/backup.sql /workdir/backup.sql volumeMounts: - name: loaded-sql mountPath: "/workdir" extraVolumes: - name: loaded-sql emptyDir: {} extraVolumeMounts: - name: loaded-sql mountPath: "/docker-entrypoint-initdb.d" containerSecurityContext: enabled: true runAsUser: 0 I generated the backup.sql from the Aurora MySQL cluster. The .sql file is saved in S3 Bucket. Database, Username, Password specified in the values.yaml are same to the ones set at the original Aurora MySQL cluster. The original Aurora MySQL cluster version is 5.7, which is the same version I'm trying to spin up using the Helm Chart. This question is about configuring and running a container, not about programming, therefore it is off topic here on SO. Serverfault sister site of SO deals with questions about infrastructure - even if it is virtual.
How to summarize a variable based on another column in R? I have a dataset that looks like this: study_id weight gender 1 100 55 Male 2 200 65 Female 3 300 84 Female 4 400 59 Male 5 500 62 Female 6 600 75 Male 7 700 70 Male I would like to find the mean, median, etc. (everything that the summary() function gives) for the weight variable, but separately for both men and women. In other words, I would like to find the summary statistics of the weight variable for males and females separately. How can I go about doing this? Reproducible Data: data<-data.frame(study_id=c("100","200","300","400","500","600","700"),weight=c("55","65","84","59","62","75","70"),gender=c("Male","Female","Female","Male","Female","Male","Male")) Although there are reasonable suggestions by harre, I prefer to do it this way: library(dplyr) data |> group_by(gender) |> mutate(weight = as.numeric(weight)) |> summarise( across(weight, list(mean = mean, median = median)) ) # # A tibble: 2 x 3 # gender weight_mean weight_median # <chr> <dbl> <dbl> # 1 Female 70.3 65 # 2 Male 64.8 64.5 The advantages of mutate(across()) are that if you had 2 columns, or 5, you could easily extend it e.g. mutate(across(weight:height)). There are more examples of this in the docs. For a base R solution (literally replying to "everything that the summary() function gives"): tapply(as.numeric(data$weight), INDEX = data$gender, FUN = summary) $Female Min. 1st Qu. Median Mean 3rd Qu. Max. 62.00 63.50 65.00 70.33 74.50 84.00 $Male Min. 1st Qu. Median Mean 3rd Qu. Max. 55.00 58.00 64.50 64.75 71.25 75.00
Finding nested iFrame using Selenium 2

I am writing tests for a legacy application in which there is an iFrame within the main document, and then another iFrame within that. So the hierarchy is:

Html
  Div (id = tileSpace)
    iFrame (id = ContentContainer)
      iFrame (id = Content)
        Elements

This is my code (I am using C#):

RemoteWebDriver driver = new InternetExplorerDriver();
var tileSpace = driver.FindElement(By.Id("tileSpace"));
var firstIFrame = tileSpace.FindElement(By.Id("ContentContainer"));
var contentIFrame = firstIFrame.FindElement(By.Id("Content"));

The problem is, I am unable to reach the 2nd level iFrame, i.e. contentIFrame. Any ideas?

I'm currently testing on a similar website (nested iframes inside the main document):

<div>
  <iframe>
    <iframe></iframe>
  </iframe>
</div>

It seems that you are not using the frame switching method provided in the API. This could be the problem. Here is what I'm doing; it works fine for me.

//make sure it is in the main document right now
driver.SwitchTo().DefaultContent();

//find the outer frame, and use switch to frame method
IWebElement containerFrame = driver.FindElement(By.Id("ContentContainer"));
driver.SwitchTo().Frame(containerFrame);

//you are now in iframe "ContentContainer", then find the nested iframe inside
IWebElement contentFrame = driver.FindElement(By.Id("Content"));
driver.SwitchTo().Frame(contentFrame);

//you are now in iframe "Content", then find the elements you want in the nested frame now
IWebElement foo = driver.FindElement(By.Id("foo"));

Try below code:

//Switch to required frame
driver.SwitchTo().Frame("ContentContainer").SwitchTo().Frame("Content");

//find and do the action on required elements

//Then come out of the iFrame
driver.SwitchTo().DefaultContent();
AWS CloudWatch Insights shows no data

I want to build a logging dashboard to monitor an application in AWS EC2. So I configured the CloudWatch stuff and everything looks like a charm. But when I go to CloudWatch Logs Insights and create a query for the logs, I'm getting 'no data found' for every query/time range I'm using. I can see there are some logs in the stream when I click on it (in the logs panel), but Insights cannot discover them. What am I doing wrong? Maybe someone could help me, thanks a lot.

Try changing the query to:

fields @logStream, @message
| limit 20

And expand the time frame to, say, 4 weeks, making sure there are log streams within that time frame that contain log events.
I want to update my Android app contents from my local server

I am new to Android programming. I am working on an Android app with FAQs related to Covid-19 for communities living in a remote area with no internet connection. I want to update the information in the app from a local server which is set up in one location: when users connect to the server via WiFi, the app should update its information from the server. And I want the updated information to be persistent after the update. What is the best way to do this?

Add some code of what you have done so far!

Till now my app has only hard-coded text. I haven't implemented any code so far regarding my question, and I used Flutter to build the app.

Do not hard code the text. Put all text in a .txt file, for instance. Then you update that file by downloading a new file from your server.

I think using a database makes sense, and I thought of using JSON files, but they can't be changed after compilation? @blackapps

You can use a JSON file of course. And by updating the file I meant replacing the file with the downloaded one. Then update the database. But you do not need a database if all the info is in that file, unless reading from the file is too slow; in that case, put it in a database.

Nothing changes! Everything goes the same way as when fetching data from a remote server, except that you just need to set your local address when creating the connection.

private static final String BASE_URL_INTERNET = "http://<IP_ADDRESS>:2045/api/";
private static final String BASE_URL_LOCAL = "http://<IP_ADDRESS>:2045/api/";

All the rest goes the same way. For the other question, keeping the updated information persistent, you need to use a database like SQLite.

Thank you for the answer, that is very helpful. Can I use Django REST framework for the API?

The fact that you can connect to a local server on an Android device is independent from the framework you choose to code. So technically yes, you can use any favorite framework to code your back end. @bilubash
LTI and Deterministic Channel

Can someone explain to me the statistical relationship between a random signal at the input of a linear time-invariant, deterministic channel and the output of the channel?

If your input "random signal" is modeled as a zero-mean wide-sense-stationary random process $\{X(t)\}$ with autocorrelation function $R_X(t)$, then the output of the channel (which is basically an LTI system with impulse response $h(t)$) is also a zero-mean wide-sense-stationary process $\{Y(t)\}$ with autocorrelation function $R_Y(t)$ given by

$$R_Y = R_h \star R_X$$

where $R_h$ is the autocorrelation function of the impulse response $h(t)$ $\big($that is, $R_h(\tau) = \int_{-\infty}^\infty h(t)h(t+\tau) \, \mathrm dt$ or $\int_{-\infty}^\infty h(t)h(t-\tau) \, \mathrm dt$ for left-handed folks$\big)$ and $\star$ denotes convolution. Indeed, since both $R_h$ and $R_X$ are even functions, that convolution can be written as a cross-correlation if you like, which makes for a nice mantra to murmur to impress your boss during your presentation: "The output autocorrelation function is found by cross-correlating the input autocorrelation with the channel autocorrelation".

If you prefer power spectral densities, then

$$S_Y(f) = |H(f)|^2S_X(f)$$

where $H(f)$ is the channel transfer function.

Does this change in case I have a limited bandwidth?

No.

Bless you mate!

If the channel (LTI filter) is $h(t)$ and the input signal is $X(t)$, then the output of the channel is

$$Y(t) = X(t)\star h(t) = \int_{-\infty}^{\infty}X(\tau)h(t-\tau)\,d\tau = \int_{-\infty}^{\infty}X(t-\tau)h(\tau)\,d\tau$$

where $\star$ is the convolution.

EDIT: Statistical relationships between the input and output signals:

1- The mean of the output signal:
$$E[Y(t)] = \int_{-\infty}^{\infty}h(t-\tau)E[X(\tau)]\,d\tau$$

2- The cross-correlation between the input and output signals:
$$R_{YX}(t_1, t_2) = E[Y(t_1)X(t_2)] = \int_{-\infty}^{\infty} h(t_1-\tau)\underbrace{E[X(\tau)X(t_2)]}_{R_{XX}(\tau,\, t_2)}\,d\tau$$
where $R_{XX}(t_1,\,t_2)$ is the auto-correlation function of the random signal $X(t)$.

3- The auto-correlation function of the output signal:
$$R_{YY}(t_1, t_2) = E[Y(t_1)Y(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}h(t_1-\tau_1)h(t_2-\tau_2)\underbrace{E[X(\tau_1)X(\tau_2)]}_{R_{XX}(\tau_1,\,\tau_2)}\,d\tau_1\,d\tau_2$$

The question is about "random signals" $x(t)$ and so perhaps a little more explanation is needed?

@DilipSarwate My mistake. I thought the question was about the instantaneous output of the filter. I edited my answer for the statistical relationships. Thanks.

Does this explanation change in case I have a limited bandwidth, or not?
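The white-noise special case gives a quick numerical sanity check of the relation $R_Y = R_h \star R_X$: for a white input with $R_X[k] = \sigma^2\delta[k]$, the discrete analogue reduces to $R_Y[k] = \sigma^2 R_h[k]$. A small pure-Python Monte Carlo sketch (the three FIR taps are arbitrary illustrative values, not taken from the thread):

```python
import random

h = [1.0, 0.5, 0.25]   # illustrative FIR channel taps (hypothetical values)
sigma2 = 1.0           # variance of the white input

random.seed(0)
N = 200_000
x = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(N)]

# y[n] = sum_k h[k] x[n-k]  (discrete convolution, "valid" region only)
y = [sum(h[k] * x[n - k] for k in range(len(h)))
     for n in range(len(h) - 1, N)]

def est_R(sig, k):
    """Sample estimate of the autocorrelation R[k] of a zero-mean signal."""
    m = len(sig) - k
    return sum(sig[i] * sig[i + k] for i in range(m)) / m

def R_h(k):
    """Deterministic autocorrelation of the taps: sum_n h[n] h[n+k]."""
    return sum(h[n] * h[n + k] for n in range(len(h) - k))

for k in range(len(h)):
    print(k, round(est_R(y, k), 3), sigma2 * R_h(k))
```

With 200,000 samples, the estimated output autocorrelations land within a few percent of the theoretical values $\sigma^2 R_h[k]$ (1.3125, 0.625, 0.25 for these taps), illustrating the formula without any DSP library.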
Reverse Frames in Sprite Sheet - ImageMagick

How do I reverse the frames of a sprite sheet with each frame of size 30x30 and horizontally aligned using ImageMagick?

Example: https://i.sstatic.net/OIoYc.png ->>> https://i.sstatic.net/7WR9v.png (edited)

Are there always just two? Do you mean to reverse the order or to flip the entire sheet left to right?

You can do that like this. First, create three little 30x30 images in red, green and blue:

convert -size 30x30! xc:red red.png
convert -size 30x30! xc:green green.png
convert -size 30x30! xc:blue blue.png

then append them side by side into a 90x30 strip:

convert +append red.png green.png blue.png image.png

Now crop that into 30x30 pixel frames, reverse the order of the frames, then re-append them together into the output image:

convert -crop 30x30 image.png -reverse +append out.png

Note: For future reference, if anyone wants to do the same thing but vertically instead of horizontally, just change +append to -append.
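The crop/reverse/append pipeline is easy to sanity-check outside ImageMagick. A small pure-Python sketch of the same logic on a list-of-rows "image" (using a frame width of 3 instead of 30 so the output stays readable; the pixel values are made-up placeholders):

```python
def reverse_frames(rows, frame_w):
    """Split each row into frame_w-wide chunks, reverse the chunk order,
    and rejoin, i.e. the same effect as -crop / -reverse / +append."""
    out = []
    for row in rows:
        chunks = [row[i:i + frame_w] for i in range(0, len(row), frame_w)]
        out.append([px for chunk in reversed(chunks) for px in chunk])
    return out

# A 9x2 strip of three 3-wide frames: red, green, blue
strip = [
    ["R", "R", "R", "G", "G", "G", "B", "B", "B"],
    ["R", "R", "R", "G", "G", "G", "B", "B", "B"],
]
print(reverse_frames(strip, 3)[0])
# ['B', 'B', 'B', 'G', 'G', 'G', 'R', 'R', 'R']
```

Reversing twice restores the original strip, which is a handy property when checking the ImageMagick output as well.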
Not getting Microsoft Docs VB.NET examples anymore; they are all in C#

For example, for a line in my Visual Studio 2017 Pro .vb file which is:

Process.Start( my_process )

I double-click "Start" and press F1, and then the Process.Start Method page opens, but all examples are only in C#. Here's the URL it opens:

https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.process.start?f1url=https%3A%2F%2Fmsdn.microsoft.com%2Fquery%2Fdev15.query%3FappId%3DDev15IDEF1%26l%3DEN-US%26k%3Dk(System.Diagnostics.Process.Start);k(TargetFrameworkMoniker-.NETFramework,Version%3Dv4.6.1);k(DevLang-VB)%26rd%3Dtrue&view=netframework-4.7.2

It used to show either C# or VB. This started happening about a month or two ago.

If you click this button on the top right you can change the language the examples are in. Cookies in your browser will make it remember your last choice the next time you open a new page of the help.

Refer to: https://learn.microsoft.com/en-us/ef/ef6/modeling/code-first/workflows/new-database

No VB choice. Sad really. VB isn't dead and needs to be included. There is nothing you can do in C# that I can't do in VB.

Yes, VB.NET is not dead. As long as VB.NET lovers like us continue to use it, it won't die. I still code in VB.NET both for classic Windows Forms and ASP.NET MVC web apps. Yes, the majority of the sample code around is in C#, but I can convert it to VB.NET with little effort. I can understand C#, but since I am so used to VB.NET, I am more productive with it and much faster. Even for a simple Try/Catch, IntelliSense works perfectly: just type "Try" and IntelliSense will auto-populate it. Try that with C#; nothing happens. Just understand the C# code (not memorize it), what it does, and we won't have trouble converting it to VB.NET. You only need basic C#: Case statements, If/Then/Else, Lists, Arrays, etc. There are also online converters from C# to VB.NET.
Estimating error variance for simulated path analysis

I want to run a simulation using lavaan and simsem to determine the sample size to use in a study using path analysis. The conceptual model for my study is as below, with 2 independent variables, 1 mediator and 2 moderators. From what I understand, the corresponding statistical model is as below.

I am following the reference book suggested in this answer for how to run the simulation (Latent Variable Modeling Using R: A Step-by-Step Guide by A. Alexander Beaujean). I would like to estimate the error variance of M and Y to use in the simulation. However, the book only provides a method for estimating error variance for a simple regression (i.e., 1 - R^2, using path-tracing rules). How can I calculate error variances in my case?

Var(error) = Var(Y)*(1 - R^2), where Y is the dependent variable (M and Y in your model) in a given regression equation.

"I would like to estimate the error variance of M and Y"

You are running a simulation, so you can specify whatever population values you want for the error variances. I assume you mean that you would like to determine what the residual variance would be in order to yield a population total variance = 1 (e.g., so that paths are interpreted as standardized in the population)?

Not sure if you are using the matrix-style specification, but if so, you can use findFactorResidualVar() to determine the residual variance that yields whatever total variance you want. There is an example of its use in this tutorial article, which discusses only slightly simpler models than the one you are simulating: https://doi.org/10.3758/s13428-022-01996-0

Very useful function. Spent quite a bit of time agonizing over this problem the last two days. Thanks!
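The Var(error) = Var(Y)*(1 - R^2) formula generalizes to several correlated predictors by path tracing: with standardized predictors, the explained variance is b'Rb (coefficients times the predictor correlation matrix), and the residual variance is the total variance minus that. This is the same computation that findFactorResidualVar() automates; a small hypothetical Python sketch of the arithmetic (the coefficients and correlation below are made-up illustrative values, not from any real model):

```python
def residual_variance(betas, corr, total_var=1.0):
    """Residual variance left after standardized paths explain their share.

    betas : standardized path coefficients into the outcome
    corr  : correlation matrix of the predictors (list of lists)
    """
    k = len(betas)
    explained = sum(betas[i] * betas[j] * corr[i][j]
                    for i in range(k) for j in range(k))
    return total_var - explained

# Hypothetical example: M regressed on X1 and X2 with standardized
# paths 0.3 and 0.4, and r(X1, X2) = 0.2.
# Explained = 0.09 + 0.16 + 2*0.3*0.4*0.2 = 0.298, so residual = 0.702.
print(residual_variance([0.3, 0.4], [[1.0, 0.2], [0.2, 1.0]]))
```

Setting the error variance of each endogenous variable (M and Y here) to the value this returns makes its population total variance equal to total_var, so the simulated paths can be read as standardized.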