Why does Blender 3.6 not save addon preferences? I unchecked several addons that I don't use and checked others that I need, and there's a checkbox "Auto-Save Preferences", yet it's all the same after the next launch. It should work as is, and it works for me on Blender v3.6.5. Try on that version; if it still doesn't work, please use the menu Help > Report A Bug. Could you tell us what OS you are on, and whether there's anything unconventional about your installation? It's Windows. The unconventional part may be that I have it installed on the D drive, though addons and most files are still in the AppData folder. I just thought that if I installed it on the D drive I would save space on the C drive, but most of Blender's size is still there on C. Anyway, it worked on previous versions.
common-pile/stackexchange_filtered
Gradle: How to define a common dependency for multiple flavors? My app has 3 flavors (free, paid, special), and there's a dependency LIB_1 needed only for the free flavor and another dependency LIB_2 needed for both the paid and special flavors. So, my question is: how do I define these dependencies in the build.gradle file? Currently, I define them like this:

dependencies {
    freeImplementation 'LIB_1'
    paidImplementation 'LIB_2'
    specialImplementation 'LIB_2'
}

Is there a better way to define them instead of duplicating the same dependency for different flavors? Yes, according to the documentation on Android Gradle dependency management, this is the only way of declaring flavor-specific dependencies. If you are in a multi-module project (and you don't want to repeat those lines in each submodule), you can also define these dependencies using the subprojects block in the build.gradle of the root project:

subprojects {
    // all subprojects' config
}
// or
// configure(subprojects.findAll { it.name != 'sample' }) {
//     // subprojects whose name is not "sample"
// }

If I understood you right, I have to duplicate the dependency for each flavor? Yes. It's possible to avoid duplicating the dependency across flavors, as I describe in my answer here: https://stackoverflow.com/a/75083956/6007104

Utility

Simply add these functions before your dependencies block:

/** Adds [dependency] as a dependency for the flavor [flavor] */
dependencies.ext.flavorImplementation = { flavor, dependency ->
    def cmd = "${flavor}Implementation"
    dependencies.add(cmd, dependency)
}

/** Adds [dependency] as a dependency for every flavor in [flavors] */
dependencies.ext.flavorsImplementation = { flavors, dependency ->
    flavors.each { dependencies.flavorImplementation(it, dependency) }
}

Usage

You use the utility like this:

dependencies {
    ...
    def myFlavors = ["flavor1", "flavor2", "flavor3"]
    flavorsImplementation(myFlavors, "com.example.test:test:1.0.0")
    flavorsImplementation(myFlavors, project(':local'))
    ...
}

How it works

The key to this utility is Gradle's dependencies.add API, which takes 2 parameters: the type of dependency, for example implementation, api, flavor1Implementation (this can be a string, which allows us to use string manipulation to dynamically create this value), and the dependency itself, for example "com.example.test:test:1.0.0" or project(':local'). Using this API, you can add dependencies dynamically, with quite a lot of flexibility!
common-pile/stackexchange_filtered
Trying to create a count in SQL so that every time the agent ID shows up in a different month it counts one more than the previous month. I'm trying to create a count where every time the Agent ID shows up it counts one more than in the previous month. Image of end state: You seem to be looking for row_number():

select t.*, row_number() over(partition by name order by d_month desc) cnt
from mytable t

This is very close to what I'm looking for. Is there a way to get the newest d_month? I'm getting oldest to newest in the query:

Select *, row_Number() over(partition by PERNR order by d_Month) cnt
from #CAREERADVANCE9
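As a runnable sketch of the approach in the answer (the table and data here are invented, not from the thread), ROW_NUMBER() partitioned by the agent and ordered by month gives the running count per agent; ordering ascending numbers the rows oldest-to-newest, while ORDER BY d_month DESC would number the newest month 1 instead. SQLite 3.25+ supports the same window function, so this can be demonstrated with the standard library:

```python
import sqlite3

# Hypothetical data standing in for the thread's table: one row per agent per month.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (name TEXT, d_month TEXT)")
conn.executemany(
    "INSERT INTO mytable VALUES (?, ?)",
    [("A", "2023-01"), ("A", "2023-02"), ("A", "2023-03"), ("B", "2023-01")],
)

# Ascending ORDER BY counts 1, 2, 3 from the oldest month upward,
# which is what the commenter asking for "oldest to newest" describes.
rows = conn.execute(
    """SELECT name, d_month,
              ROW_NUMBER() OVER (PARTITION BY name ORDER BY d_month) AS cnt
       FROM mytable
       ORDER BY name, d_month"""
).fetchall()
for r in rows:
    print(r)
```

Swapping in ORDER BY d_month DESC inside the OVER clause flips the numbering without changing the output order of the outer query.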
common-pile/stackexchange_filtered
How much portion of the given text will be displayed in a given size? To get the part of the NSString which is displayed in the view, or to get the NSString which can be displayed in the given CGRect. It returns the string which is displayed in the view (UILabel, UITextField, etc.). This is useful when the string is too large and the view is not big enough to display the whole string. So I have written the code and added it here.

// If you want the string displayed in any given rect, use the following code.
@implementation NSString (displayedString)

// font - font of the text to be displayed
// size - size in which we are displaying the text
- (NSString *)displayedString:(CGSize)size font:(UIFont *)font
{
    NSString *written = @"";
    NSString *nextSetOfString = @"";
    NSUInteger i = 0;
    // Stop at the end of the string instead of looping forever
    // (an unbounded while (1) would throw a range exception once i passed self.length).
    while (i < self.length) {
        NSRange range = NSMakeRange(i, 1);
        NSString *nextChar = [self substringWithRange:range];
        nextSetOfString = [nextSetOfString stringByAppendingString:nextChar];
        // Note: sizeWithFont:constrainedToSize: is deprecated since iOS 7;
        // on newer SDKs use boundingRectWithSize:options:attributes:context:.
        CGSize requiredSize = [nextSetOfString sizeWithFont:font
                                          constrainedToSize:CGSizeMake(CGFLOAT_MAX, CGFLOAT_MAX)];
        if (size.width >= requiredSize.width && size.height >= requiredSize.height) {
            written = [written stringByAppendingString:nextChar];
        } else {
            break;
        }
        i++;
    }
    return written;
}

@end
common-pile/stackexchange_filtered
Python - Seaborn grid shows, but no content (image) is showing. I'm very new to Python, and recently started learning seaborn. When I ran the code, there was no traceback and the grid showed up in a new window. But the problem was that no image showed for the FacetGrid; the distplot was showing. Not sure what happened; I'd really appreciate it if anyone could help me! Thank you!

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt  # was "import matplotlib as plt", which does not expose show()

train = pd.read_csv("train.csv")
train["Age"] = train["Age"].fillna(train["Age"].median())

# THIS ONE IS NOT SHOWN
sns.FacetGrid(train, col='Survived', row='Pclass', size=2.2, aspect=1.6)

# THIS ONE WAS SHOWN
sns.distplot(train['Age'])
plt.show()

click to see the image From the seaborn docs it appears that calling sns.FacetGrid only initializes the grid. After that you need to map plots onto the grid. Hopefully that helps.
common-pile/stackexchange_filtered
The error "requires SystemVerilog extensions" while declaring an array. What's wrong with the following code? The array "FIFO" is declared correctly, but an error appears. Can you please help me fix this?

module fifo(
    input clk,
    input [7:0] data_in,
    output reg [7:0] FIFO [0:8]
);
integer i;
always @(posedge clk) begin
    for (i = 8; i > 0; i = i - 1) begin
        FIFO[i] <= FIFO[i-1];
    end
    FIFO[0] <= data_in;
end
endmodule

Error (10773): Verilog HDL error at fifo.v(29): declaring module ports or function arguments with unpacked array types requires SystemVerilog extensions

Exactly what it says: you have a two-dimensional output port which is an unpacked array. This is a packed array: output reg [7:0] FIFO. This is an unpacked array: output reg FIFO [0:7]. Therefore your two-dimensional array is an unpacked array. Verilog allows only packed arrays for ports. If you want two or more dimensions you need to compile with SystemVerilog.
common-pile/stackexchange_filtered
My PC won't go to sleep even though all options are set to put it to sleep. I googled around and just can't figure out why my PC isn't going to sleep. Here are some commands I found to check the sleep settings, and the settings I've made. Some of the commands I tried: My power settings: My sleep settings: My powercfg /energy result. It says they don't interfere. I had a similar problem. Every possible setting was set to "go to bloody sleep!". No issues in powercfg. All drivers updated. I eventually gave up and resorted to unplugging every device except the screen and waiting. Guess what? It went to sleep! I gradually plugged things back in, and the culprit was a PS4 controller. Totally ridiculous, but if you plug a PS4 controller into my PC via USB it will keep the PC awake and the screen on indefinitely. There's no Power Management tab in its driver settings, so you can't really do anything about it as far as I can tell. One thing I also noticed is that the Power and Sleep timeouts don't seem to be obeyed properly. I set sleep and screen-off to 1 minute, but screen-off actually took about 5 and sleep took more like 20. No idea why, but that's good enough for me. Note that the same issue applies to the PS5 controller. Try the following: Select the "Start" button, then type "device". Open "Device Manager". Expand the "Mice and other pointing devices" section. Right-click the mouse you are using, then choose "Properties". Select the "Power Management" tab. Uncheck the "Allow this device to wake the computer" box, then select "OK". Hi, mine is already unchecked, but it's still not going to sleep.
common-pile/stackexchange_filtered
Kubernetes - How to debug Failed Scheduling "0 nodes are available". I often find myself trying to spin up a new pod, only to get an error saying that no node is available. Something like: 0/9 nodes are available: 1 node(s) had no available volume zone, 8 node(s) didn't match node selector. I'm always at a loss when I get those messages. How am I supposed to debug that? Please provide the taints information from your nodes (kubectl describe node) and your pod/deployment YAML. You are looking for Taints and Tolerations. Hi Hanx. I'm not looking for a solution specific to this instance of my problem. Indeed, this very problem is solved. I'm looking for general advice concerning node allocation and its debugging. To begin with, my advice is to take a look at the Kubernetes Scheduler component: a component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on. Factors taken into account for scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference and deadlines. A scheduler watches for newly created Pods that have no Node assigned. For every Pod that the scheduler discovers, the scheduler becomes responsible for finding the best Node for that Pod to run on. For every newly created pod or other unscheduled pods, kube-scheduler selects an optimal node for them to run on. However, every container in a pod has different requirements for resources, and every pod also has different requirements. Therefore, existing nodes need to be filtered according to the specific scheduling requirements. As per the documentation: In a cluster, Nodes that meet the scheduling requirements for a Pod are called feasible nodes. If none of the nodes are suitable, the pod remains unscheduled until the scheduler is able to place it. kube-scheduler selects a node for the pod in a 2-step operation.
The standard kube-scheduler is based on default policies: Filtering and Scoring. Looking into those two policies you can find more information on where the decisions were made. For example, for scoring at the stage CalculateAntiAffinityPriorityMap: this policy helps implement pod anti-affinity. Below you can find a quick review based on Influencing Kubernetes Scheduler Decisions:

Node name: by adding a node's hostname to the .spec.nodeName parameter of the Pod definition, you force this Pod to run on that specific node. Any selection algorithm used by the scheduler is ignored. This method is the least recommended.

Node selector: by placing meaningful labels on your nodes, a Pod can use the nodeSelector parameter to specify one or more key-value label maps that must exist on the target node for it to get selected for running that Pod. This approach is more recommended because it adds a lot of flexibility and establishes a loosely-coupled node-pod relationship.

Node affinity: this method adds even more flexibility when choosing which node should be considered for scheduling a particular Pod. Using node affinity, a Pod may strictly require to be scheduled on nodes with specific labels. It may also express some degree of preference towards particular nodes by influencing the scheduler to give them more weight.

Pod affinity and anti-affinity: when Pod coexistence (or non-coexistence) with other Pods on the same node is essential, you can use this method. Pod affinity allows a Pod to require that it gets deployed on nodes that have Pods with specific labels running. Similarly, a Pod may force the scheduler not to place it on nodes having Pods with particular labels.

Taints and tolerations: in this method, instead of deciding which nodes the Pod gets scheduled to, you decide which nodes should not accept any Pods at all, or only selected Pods. By tainting a node, you're instructing the scheduler not to consider this node for any Pod placement except if the Pod tolerates the taint.
The toleration consists of a key, value, and the effect of the taint. Using an operator, you can decide whether the entire taint must match the toleration for a successful Pod placement or only a subset of the data must match. As per the k8s documentation: 1. nodeName is the simplest form of node selection constraint, but due to its limitations it is typically not used. Some of the limitations of using nodeName to select nodes are: If the named node does not exist, the pod will not be run, and in some cases may be automatically deleted. If the named node does not have the resources to accommodate the pod, the pod will fail and its reason will indicate why, e.g. OutOfmemory or OutOfcpu. Node names in cloud environments are not always predictable or stable. 2. The affinity/anti-affinity feature greatly expands the types of constraints you can express. The key enhancements are: the language is more expressive (not just "AND of exact match"); you can indicate that the rule is "soft"/"preference" rather than a hard requirement, so if the scheduler can't satisfy it, the pod will still be scheduled; you can constrain against labels on other pods running on the node (or other topological domain), rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located. The affinity feature consists of two types of affinity: node affinity and inter-pod affinity/anti-affinity. Node affinity is like the existing nodeSelector (but with the first two benefits listed above). There are currently two types of pod affinity and anti-affinity, called requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, which denote "hard" vs. "soft" requirements. Hope this helps. Additional resources: Affinity and anti-affinity, Scheduler Performance Tuning, Making Sense of Taints and Tolerations in Kubernetes, Kubernetes Taints and Tolerations, PreferNoSchedule. Ooh apologies, I never saw your answer for some reason...
Thanks, that's a really nice one!
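A minimal sketch of the two most common knobs from the answer above - a nodeSelector pinning a pod to labeled nodes, and a toleration letting it land on a tainted node. All names (labels, taint key, image) are invented for illustration:

```yaml
# Hypothetical pod spec illustrating nodeSelector and tolerations.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    disktype: ssd          # pod is only feasible on nodes labeled disktype=ssd
  tolerations:
  - key: "dedicated"       # tolerates nodes tainted with: kubectl taint nodes <node> dedicated=batch:NoSchedule
    operator: "Equal"
    value: "batch"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx
```

If no node carries the disktype=ssd label, this pod reproduces exactly the "didn't match node selector" message from the question.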
common-pile/stackexchange_filtered
Synchronization between 2 systems. I'm using Laravel to migrate data between 2 different systems. My sync will be executed daily as a cron job. Basically, I have 5000 rows I need to copy from one DB to another (I need to process them with Laravel). What is the best way to manage such a long script? Nginx will probably give me a timeout if I do it like that. (Note that a cron job runs through the PHP CLI, where max_execution_time defaults to 0, i.e. unlimited, and Nginx is not involved; the settings below only matter if you trigger the sync over HTTP.) Increase the PHP script execution time. Changes in php.ini: change the max execution time limit for PHP scripts from 30 seconds (the default) to 300 seconds. vim /etc/php5/fpm/php.ini Set max_execution_time = 300
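Independently of the timeout question, the copy itself is just a batch loop. A sketch of the idea in Python, with sqlite3 standing in for the two databases (table and column names are invented): copying in fixed-size chunks keeps memory flat and commits a checkpoint after each batch, so an interrupted cron run loses at most one chunk of work.

```python
import sqlite3

def copy_rows(src, dst, chunk=1000):
    """Copy all rows of `items` from src to dst in fixed-size chunks."""
    cur = src.execute("SELECT id, name FROM items ORDER BY id")
    copied = 0
    while True:
        batch = cur.fetchmany(chunk)
        if not batch:
            break
        dst.executemany("INSERT INTO items (id, name) VALUES (?, ?)", batch)
        dst.commit()  # checkpoint after each chunk
        copied += len(batch)
    return copied

# Two in-memory databases standing in for the two systems.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO items VALUES (?, ?)",
                [(i, f"row{i}") for i in range(5000)])
src.commit()

n = copy_rows(src, dst)
```

The same chunking pattern is what Laravel's own query builder exposes as chunk()/chunkById() on the PHP side.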
common-pile/stackexchange_filtered
How Can I Bypass the X-Frame-Options: SAMEORIGIN HTTP Header? I am developing a web page that needs to display, in an iframe, a report served by another company's SharePoint server. They are fine with this. The page we're trying to render in the iframe is giving us X-Frame-Options: SAMEORIGIN, which causes the browser (at least IE8) to refuse to render the content in a frame. First, is this something they can control, or is it something SharePoint just does by default? If I ask them to turn this off, could they even do it? Second, can I do something to tell the browser to ignore this HTTP header and just render the frame? If the 2nd company is happy for you to access their content in an iframe then they need to take the restriction off - they can do this fairly easily in the IIS config. There's nothing you can do to circumvent it, and anything that does work should get patched quickly in a security hotfix. You can't tell the browser to just render the frame if the source content header says it is not allowed in frames. That would make session hijacking easier. If the content is GET only and you don't post data back, then you could get the page server side and proxy the content without the header, but then any post back should get invalidated. Displaying in an iframe is not allowed, but is there a way to still get the HTML as a raw string? You should be able to scrape it and do whatever you want with the markup. @Legends that's what I meant by proxy the content :-) I would rephrase probably ;-) UPDATE 2019-12-30: It seems that this tool is no longer working! [Request for update!] UPDATE 2019-01-06: You can bypass X-Frame-Options in an <iframe> using my X-Frame-Bypass Web Component. It extends the IFrame element by using multiple CORS proxies and it was tested in the latest Firefox and Chrome.
You can use it as follows: (Optional) Include the Custom Elements with Built-in Extends polyfill for Safari: <script src="https://unpkg.com/@ungap/custom-elements-builtin"></script> Include the X-Frame-Bypass JS module: <script type="module" src="x-frame-bypass.js"></script> Insert the X-Frame-Bypass Custom Element: <iframe is="x-frame-bypass" src="https://example.org/"></iframe> This approach looks to be blocked now. "Refused to display 'https://news.ycombinator.com/' in a frame because it set 'X-Frame-Options' to 'DENY'." Followed by "fiddle.jshell.net/:64 Uncaught SecurityError: Sandbox access violation: Blocked a frame at "http://fiddle.jshell.net" from accessing a frame at "null". The frame being accessed is sandboxed and lacks the "allow-same-origin" flag." @brichins Refresh the page. It works for me in Firefox 46. @niutech Results are now... inconclusive. Linked version still doesn't load - however, it does load in the jsFiddle editor, and then reloading the embedded view (as linked) displays - but it's pulling from cache. Clearing caching and reloading again fails. I wondered if the contents were being served via a proxy on jsFiddle's server, but the dev tools show network traffic directly to the target site. Even though the console still shows the "refused to display" error. Must do more digging... Yes this is not working. Anyone has identified any way to handle this? @Samir & BlueBird: I have updated the demo, it is working for me in Chrome 67. @Samir Tried refreshing the page? What's the error in web console? @niutech - this is working amazingly well. Thanks for the solution. How come this is not fixed in all major browsers yet? For me, it works with Firefox 61 against both X-Frame-Options: sameorigin and X-Frame-Options: deny. @caw it is in Chrome 70 where I get an Uncaught DOMException: Blocked a frame with origin "https://example.org" from accessing a cross-origin frame. @JeroenWiertPluimers Have you refreshed the page? 
@niutech odd: sometimes it works, sometimes it fails. Not sure why yet. If I find out, I will post here. @JeroenWiertPluimers Try my new X-Frame-Bypass custom element! The x-frame component was great but it wasn't accessible through a Google Chrome extension.... sigh...! Any other clues? This doesn't bypass X-Frame-Options at all; it just uses a proxy to scrape the target page and return the content without the header. It will only work for GET requests, won't get cookies, and can only scrape pages the third-party proxies (one of cors.io, jsonp.afeld.me, cors-anywhere.herokuapp.com) can access (and may leave a copy of the content on one of those sites). As the OP is asking about SharePoint, this connection is likely to be over a VPN and certain to require cookies, neither of which will work with the undocumented third-party proxies. It seems that this tool is no longer working and is no longer being maintained, judging from all the issues on GitHub. Even the example page in the README, which tries to load something from Hacker News, fails. Has anyone found a solution? X-Frame-Bypass in this post does not work anymore. The tool is working fine as of today, at least in Chrome, but I can't get it to work with VueJS. I've tried registering the component, embedding the JS with my Vue code, etc., but no luck. Is there a way to protect against this, since X-Frame-Bypass is working? @Noob Why would you protect from this functionality? The web should be open and free, not blocked. @NickDimou It works fine as of now: https://i.postimg.cc/CLHBFnZ7/X-Frame-Bypass-Web-Component-Demo.png @niutech, if this is the case, please update your answer with today's date. Thank you! @niutech in order to protect my website from clickjacking. Most sites use dynamic JavaScript to check for robots or proxies - that is why these don't work with high-end websites. The X-Frame-Options header is a security feature enforced at the browser level.
If you have control over your user base (IT dept for a corporate app), you could try something like a Greasemonkey script (if you can a) deploy Greasemonkey across everyone and b) deploy your script in a shared way)... Alternatively, you can proxy their result. Create an endpoint on your server, and have that endpoint open a connection to the target endpoint and simply funnel traffic backwards. Yes, Fiddler is an option for me:

Open the Fiddler menu > Rules > Customize Rules (this effectively edits CustomRules.js).
Find the function OnBeforeResponse.
Add the following lines:

oSession.oResponse.headers.Remove("X-Frame-Options");
oSession.oResponse.headers.Add("Access-Control-Allow-Origin", "*");

Remember to save the script! As for the second question - you can use Fiddler filters to set the response X-Frame-Options header manually to something like ALLOW-FROM *. But, of course, this trick will work only for you - other users still won't be able to see the iframe content (if they do not do the same).
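The "proxy the content without the header" idea from the answers can be sketched with nothing but the standard library; everything here (function names, header set) is an illustration, not the thread's actual setup. The server-side endpoint fetches the page, drops the frame-blocking headers, and re-serves the body - the header filtering is the only interesting part:

```python
from urllib.request import urlopen

# Headers that tell the browser not to render the page in a frame.
# (Stripping CSP wholesale is a blunt simplification; frame-ancestors
# is the actual relevant CSP directive.)
FRAME_BLOCKING = {"x-frame-options", "content-security-policy"}

def strip_frame_headers(headers):
    """Return a copy of `headers` (name -> value) without frame-blocking entries."""
    return {k: v for k, v in headers.items() if k.lower() not in FRAME_BLOCKING}

def fetch_for_framing(url):
    """Fetch `url` and return (filtered_headers, body) suitable for re-serving.

    Only works for GET content and drops cookies - it inherits every
    limitation the commenters point out: it is a scrape, not a bypass.
    """
    with urlopen(url) as resp:
        headers = dict(resp.headers.items())
        body = resp.read()
    return strip_frame_headers(headers), body

# The filtering logic on its own:
filtered = strip_frame_headers({
    "Content-Type": "text/html",
    "X-Frame-Options": "SAMEORIGIN",
})
```

As noted in the comments, any POST-back, cookie-based session, or VPN-only resource breaks under this scheme, which is exactly why it is not a real bypass of the browser-level protection.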
common-pile/stackexchange_filtered
How to share an EV Code Signing Certificate with USB token with other developers? I got the Extended Validation Code Signing Certificate from GlobalSign: https://www.globalsign.com/en/code-signing-certificate/ They sent over a USB token to which I installed the certificate... or that's what I think I did. My question is: (how) can remote developers use that certificate, or is the signing now limited exclusively to my machine / USB? Any help is appreciated. Thanks. Where is the private key? On the USB? That's what's needed to sign. What exactly is the USB token you have? I assume some sort of smartcard? How exactly did you install the certificate there? Did you create a private key on the PC and later just copy it to the device? Did you create the private key on the device? To sign, you need the private key and the cert. It sounds like the private key is on the USB, but I can't be sure. Best practice is to keep the private key, well, very private. It is generally only stored on a single machine, and an automated workflow is used to copy binaries to that machine for signing. That machine should be as vanilla as possible to reduce the chance that it is vulnerable to an attack. The reason for such care is that malware authors really like to get their malware signed with a valid, trusted code signing key. There's a famous case where, despite very strong security, Adobe's private key was used to sign malware. Now, Adobe's code signing key is much more attractive to an attacker than yours, due to the fact that many people have already told their computer to trust Adobe's key, but basic security practices such as those described above are still appropriate. Remember that it is your company's reputation at stake if you end up signing malware with your key.
common-pile/stackexchange_filtered
How to add a select in the list without a join. I don't need to join in a whole table; I just want a value from the table returned in my select list (other tables are joined for other items). It's giving me an error: incorrect syntax near 'select'.

select c.id as caseId
     , sc.date
     , select id from Queues where name = 'BB' as queueId -- incorrect syntax near select... but I just need this id without joining with other data
FROM dbo.Cases c
INNER JOIN Extensions sc on sc.id = c.id

Cases:
id  area   user
1   here   Michele
2   there  George

Extensions:
id  subArea    line
1   hereThere  b
2   ThereHere  c

I'm having trouble searching online for this. What is wrong with joining with Queues and getting the id? Also, please specify the SQL flavor you are using. If that sub-select returns zero or 1 row at most, then it's a "scalar subquery" and must be enclosed in parentheses. If it returns more than one row, the query will crash. Microsoft SQL Server. Queue names don't join to cases; I just need to look one up, so no join is needed. Edit the question and provide a minimal reproducible example, i.e. the CREATE statements of the tables or other objects involved (paste the text, don't use images, don't link to external sites), INSERT statements for sample data (ditto) and the desired result with that sample data in tabular text format. Also tag the DBMS you're using. @Michele Oh, then you can store the id in a variable DECLARE @qId INT = (select id from Queues where name = 'BB') and return it in your Select. I assume you get only one id for that name. If not, either use TOP 1 or clarify the requirement, since what you want is not clear (i.e. which Queue id to use). Assuming the scalar subquery returns one row at most, you can enclose it between parentheses, as in:

select c.id as caseId
     , sc.date
     , (select id from Queues where name = 'BB') as queueId
FROM dbo.Cases c
INNER JOIN Extensions sc on sc.id = c.id

If the scalar subquery returns more than one row, the whole query will crash.
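The accepted fix behaves the same on any engine with scalar subqueries; here is a runnable sketch using SQLite via Python, with the schema trimmed to the columns the thread mentions and an invented Queues table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Cases      (id INTEGER, area TEXT, user TEXT);
    CREATE TABLE Extensions (id INTEGER, subArea TEXT, line TEXT);
    CREATE TABLE Queues     (id INTEGER, name TEXT);
    INSERT INTO Cases      VALUES (1, 'here', 'Michele'), (2, 'there', 'George');
    INSERT INTO Extensions VALUES (1, 'hereThere', 'b'), (2, 'ThereHere', 'c');
    INSERT INTO Queues     VALUES (42, 'BB'), (43, 'CC');
""")

# The parenthesised scalar subquery is evaluated independently of the join
# and simply repeats the looked-up id on every result row - no join needed.
rows = conn.execute("""
    SELECT c.id AS caseId,
           (SELECT id FROM Queues WHERE name = 'BB') AS queueId
    FROM Cases c
    JOIN Extensions sc ON sc.id = c.id
    ORDER BY c.id
""").fetchall()
```

If two Queues rows were named 'BB', SQL Server would raise "Subquery returned more than 1 value", which is the crash the answer warns about (SQLite, more leniently, would just pick one row).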
common-pile/stackexchange_filtered
How should I connect a Wink relay when two circuits are involved? I'm installing a Wink relay that has two switches for lights. The double-gang box I want to install it in also has two switches that operate lights only. My problem is that each light is on a different circuit breaker. The Wink relay has only one line in and two loads out. Can I switch one of the lights to the other's circuit and leave a hot line in the box with a wire nut? Probably, but it might depend on how the lights are wired. Please post a clear photo of the box or a sketch of the connections. It is always safe to leave a hot wire capped off with a wire nut. However (as Harper says), you must also disconnect and cap off the corresponding neutral wire. Find and remove the wire nut connecting the neutral that you want to discontinue. Having separated the wires, cap off the one leading to the circuit breaker. Connect the one from the light to the neutral wire nut that will remain in use. Fig. 1a, left: existing circuits; fig. 1b, right: modification for Wink 2-load control. You should add up the total loads on each of the two circuit breakers and power the switch from the one with the lighter load. The illustration assumes that the power feed on the right was from the more heavily loaded breaker. You might consider leaving a label "no longer needed" taped to the wires, but that is not required, and any future maintainer who sees the two loads and the double-gang box will not be horribly confused by what you did. Not a single wire: if he wants to deprecate a connection, he needs to do the same with both hot and neutral. If you are not religious about that, you will inevitably create a crossed-neutral situation with potential for fire and maddening incompatibility with AFCI or GFCI. @Harper: You are correct. This is a serious omission. I will modify my answer.
common-pile/stackexchange_filtered
MySQL multiply count. I have a query which will loop through an orders table and, as a subquery, will also count the number of rows in another table using a foreign key.

SELECT eo.*,
       (SELECT COUNT(*)
        FROM ecom_order_items eoi
        WHERE eo.eo_id = eoi.eoi_parentid AND eoi.site_id = '1') AS num_prods
FROM ecom_orders eo
WHERE eo.site_id = '1' AND eo_username = 'testuser'

Now, the ecom_order_items table has an eoi_quantity field. I want the query to display the number of products for each order, taking the quantity field into account.

SELECT eo.*,
       (SELECT COUNT(*) * eoi.eoi_quantity
        FROM ecom_order_items eoi
        WHERE eo.eo_id = eoi.eoi_parentid AND eoi.site_id = '1') AS num_prods
FROM ecom_orders eo
WHERE eo.site_id = '1' AND eo_username = 'testuser'

However, when I try this, it just gives me the same number as before. For the order I am testing on, there are 2 items, one with quantity 1, the other with quantity 10, and num_prods returns 2 on both queries. Any ideas? :) If the value of eoi.eoi_quantity consists of multiple values, 1 and 10, only one of them will be selected and multiplied by COUNT(*). So you are getting the result 1 * 2. As COUNT(*) = 2 for two items, do you want the result 2 * 1 + 2 * 10? It doesn't seem right to me. I'm thinking you want to count the total quantity over all the relevant orders, so why not use SUM(eoi.eoi_quantity) to get 1 + 10 instead? Thus you would have:

SELECT eo.*,
       (SELECT SUM(eoi.eoi_quantity)
        FROM ecom_order_items eoi
        WHERE eo.eo_id = eoi.eoi_parentid AND eoi.site_id = '1') AS num_prods
FROM ecom_orders eo
WHERE eo.site_id = '1' AND eo_username = 'testuser';

But this is still going to get you the same value for each column. In fact, a subquery like (SELECT eo.*, ...) will result in an error if it returns more than one row. That's why Johan suggested that you would want to join the tables. You need to join the tables to be sure that the rows are properly linked and then group them by the ids you want for each row.
How about:

SELECT eo.*, SUM(eoi.eoi_quantity)
FROM ecom_orders eo
JOIN ecom_order_items eoi ON eo.eo_id = eoi.eoi_parentid
WHERE eo.site_id = '1' AND eo_username = 'testuser'
GROUP BY eo.eo_id;

Rewrite the query into:

SELECT eo.*, sum(eoi.eoi_quantity) as sales
FROM ecom_orders eo
INNER JOIN ecom_order_items eoi ON (eo.eo_id = eoi.eoi_parentid)
WHERE eo.site_id = '1' AND eoi.site_id = '1' AND eo_username = 'testuser'
GROUP BY eo.eo_id;

Because the count(*) for each individual item is 1, count(*) * quantity is the same as sum(quantity). I get an error from COUNT(eoi.*) in the query, although COUNT(*) seems to work fine:

SELECT eo.*, (COUNT(*) * eoi.eoi_quantity) as sales
FROM ecom_orders eo
INNER JOIN ecom_order_items eoi ON (eo.eo_id = eoi.eoi_parentid)
WHERE eo.site_id = '1' AND eoi.site_id = '1' AND eo_username = 'testuser'
GROUP BY eo.eo_id;

However the calculation seems The use of a multi-valued eoi_quantity is possibly a violation of the ISO standard, but MySQL will still provide an answer. It simply selects one of the values and returns it. Therefore, if there are two values in a single group (e.g. 1 and 10) then one of the two values, either 1 or 10, will be selected to display on the single row for that group. @Tom, that's the beauty of MySQL :-). It will bend over backwards to give you an answer. As will Stack Overflow users. Thanks guys :)
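The equivalence the answer states - COUNT(*) counts item rows while SUM(quantity) adds up the quantities - is easy to verify on any engine. A sketch with SQLite and the thread's two-item order (quantities 1 and 10; ids and one order row invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ecom_orders      (eo_id INTEGER, site_id TEXT, eo_username TEXT);
    CREATE TABLE ecom_order_items (eoi_parentid INTEGER, site_id TEXT, eoi_quantity INTEGER);
    INSERT INTO ecom_orders      VALUES (1, '1', 'testuser');
    INSERT INTO ecom_order_items VALUES (1, '1', 1), (1, '1', 10);
""")

# COUNT(*) only counts item rows (2); SUM(eoi_quantity) adds the quantities (11),
# which is the "number of products" the question is actually after.
row = conn.execute("""
    SELECT eo.eo_id, COUNT(*), SUM(eoi.eoi_quantity)
    FROM ecom_orders eo
    JOIN ecom_order_items eoi ON eo.eo_id = eoi.eoi_parentid
    WHERE eo.site_id = '1' AND eoi.site_id = '1' AND eo.eo_username = 'testuser'
    GROUP BY eo.eo_id
""").fetchone()
```

Note that selecting the bare eoi.eoi_quantity in a grouped query, as the broken attempts did, returns an arbitrary value from the group (1 or 10), which is exactly the behaviour the last comments describe.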
common-pile/stackexchange_filtered
When and why is a threading TimerCallback triggered? We have an ASP.NET application running on IIS 6 with .NET 2.0 (XP) that calls a web service in a button click event. It takes over 30 minutes for the web service to finish. But somehow, after a certain period of time, usually between 20 and 30 minutes, ASP.NET did another postback of the button click event, which caused a problem in our application. The stack trace (see below) shows that the postback was triggered by Threading.TimerCallback. Does anyone know why? Or anything related to an IIS setting? Appreciated! Michael

at ASP.platform_workflow_executeworkflow_aspx.ProcessRequest(HttpContext context)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
at System.Web.HttpApplication.ApplicationStepManager.ResumeSteps(Exception error)
at System.Web.HttpApplication.ResumeStepsFromThreadPoolThread(Exception error)
at System.Web.HttpApplication.AsyncEventExecutionStep.ResumeStepsWithAssert(Exception error)
at System.Web.HttpApplication.AsyncEventExecutionStep.OnAsyncEventCompletion(IAsyncResult ar)
at System.Web.HttpAsyncResult.Complete(Boolean synchronous, Object result, Exception error, RequestNotificationStatus status)
at System.Web.SessionState.SessionStateModule.PollLockedSessionCallback(Object state)
at System.Threading.ExecutionContext.runTryCode(Object userData)
at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading._TimerCallback.PerformTimerCallback(Object state)

Could be a problem with the session timing out. Increasing the session timeout period could solve the problem.
However, with an operation running that long, it might be better to call the web service asynchronously and provide some feedback to the user while the process is running.
Converting an ArrayList to a group of static variables in Java

I'm wondering if it is somehow possible in Java (for Android in this specific case) to get from an ArrayList of strings a set of static, preferably final, variables stored in a resource class or enum, like aapt does for the resource class "R". An example: if I have the following ArrayList:

ArrayList<String> AL = new ArrayList<String>();
AL.add("chat");
AL.add("greeting");
AL.add("conversation");
method_to_get_the_class(AL); // or something like this...

I'd like to get somewhere a static class like:

static class ModeList {
    public final static String CHAT = "chat";
    public final static String GREETING = "greeting";
    public final static String CONVERSATION = "conversation";
}

Do you think it could be possible, maybe using some kind of build pre-processing method like Android's aapt? Maybe with Gradle (which I don't know very well)? The final target is to be able to recall each ArrayList entry by a variable named like its content, exactly like layout resources in Android. Thanks

Enum? HashMap with the key named as the value it contains? Why use an ArrayList in the first place? What exactly are you trying to achieve?

If you use the preferred way of storing strings in /res/values/strings.xml (or similar) then you're effectively providing static string resources at build time. Not only that, but you can make it support multiple languages and Android will adjust accordingly at runtime depending on the user's locale.

@Squonk What I'm trying to achieve is something like how the R class works: if I declare a layout element in XML and I give it a name (say a TextView named tx1) then I can get an integer id through the Activity method findViewById(R.id.tx1). The integer id of tx1 is a final variable set in the static final class R, generated by aapt. What I need is to set a static final variable named like the content of the variable.
It's a little bit different from the example above with the R class, but in the end it's more or less the same concept. Hope this clarifies.

@martin.p: No, sorry, that hasn't clarified things. Strings declared in strings.xml are pre-compiled into R.java also and become static final resource ids accessible using getResources().getString(int resId). If you have a string with the key "CHAT" and value "chat" you'd simply use R.string.CHAT as the resource id to return "chat" as the string. As I said, this also makes multi-language support possible by having different /res/values-XX folders where XX is a country code, and Android does it automatically for you.

@Squonk What I'm trying to achieve is outside of aapt and the R resource class: I'd like to create a similar mechanism that defines static final variables from XML files different from layout, strings.xml or similar.

public static void CreateClass() throws IOException {
    ArrayList<String> al = new ArrayList<>();
    al.add("sun");
    al.add("mon");
    al.add("tue");
    File file = new File("RR.java");
    PrintWriter pw = new PrintWriter(file);
    pw.println("public class RR {");
    for (String temp : al) {
        pw.println("    public static final String " + temp.toUpperCase() + " = \"" + temp + "\";");
    }
    pw.println("}");
    pw.close();
}

@Kansara Thanks for the answer, I will probably try it tomorrow. What is not clear to me is how the generated class is included in the build path and in the build process.

@martin.p The above code snippet will ONLY create a Java class. You have to include it manually in your project. Add a path before the file name if you can't find it.
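The same code-generation idea can be sketched in any language; here is a minimal Python version that builds the Java source text from a list of names (the class name RR matches the snippet above, everything else is illustrative):

```python
def generate_class(names, class_name="RR"):
    # Build Java source declaring one static final String per list entry
    lines = [f"public class {class_name} {{"]
    for name in names:
        lines.append(f'    public static final String {name.upper()} = "{name}";')
    lines.append("}")
    return "\n".join(lines)

src = generate_class(["sun", "mon", "tue"])
print(src)
```

The resulting string can then be written to RR.java and added to the build, exactly as in the Java snippet.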
Can you produce a dynamically generated field in MySQL at the server level?

We have an older system that's being replaced piecemeal. The people who originally designed it broke US telephone numbers for our clients up into three fields: phone_part_1, phone_part_2, and phone_part_3, corresponding to US area codes, exchanges, and phone numbers respectively. We're transitioning to use a single field, phone_number, to hold all 10 digits. But, because some pieces of the system will continue to reference the older fields, we've been forced to double up for the moment. I'm wondering if it's possible to use MySQL's built-in features to reroute requests for the old fields (both on read and write) to the newer field without having to change the old code (which is in a language nobody here is comfortable in anyhow). So that:

SELECT phone_part_1 FROM users;

would end up the same as:

SELECT SUBSTRING(phone_number, 1, 3);

To be clear, I want to do this without manipulating the individual queries. Is it possible? How?

You could define a VIEW:

CREATE VIEW users AS
SELECT SUBSTRING(phone_number, 1, 3) AS phone_part_1, ...
FROM real_users;

Then you can query it as if it were a table:

SELECT phone_part_1 FROM users;

But that would require your "real" table to be stored with a distinct table name. You can't make a view with the same name as an existing table. When you're ready to really replace the table with the new structure, you can use RENAME TABLE to change tables as a quick action (no table restructure required).

Have you looked into views? A view will take the place of a new table for now, providing a way to have your new structure, but still access the data in the original tables. Once you are ready for your final move, you can implement new tables and do a mass conversion of any remaining data you haven't done yet. Or you can go in reverse, which is what it sounds like you really would prefer.
Create your new table, convert your data, and set up a view that mimics the old structure. Views in MySQL: http://dev.mysql.com/doc/refman/5.0/en/create-view.html
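As a quick illustration of the view approach (using SQLite here purely because it is easy to run inline; table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE real_users (phone_number TEXT);
    INSERT INTO real_users VALUES ('2125551234');
    CREATE VIEW users AS
    SELECT substr(phone_number, 1, 3) AS phone_part_1,
           substr(phone_number, 4, 3) AS phone_part_2,
           substr(phone_number, 7, 4) AS phone_part_3,
           phone_number
    FROM real_users;
""")
row = conn.execute(
    "SELECT phone_part_1, phone_part_2, phone_part_3 FROM users"
).fetchone()
print(row)  # ('212', '555', '1234')
```

Note that a view over SUBSTRING expressions only covers the read side; in MySQL, columns derived from expressions are not updatable, so writes to the old field names would still need triggers or application changes.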
How can I add the same extension point in multiple plugin projects?

I have three plugin projects which collectively constitute a feature for Eclipse. And I have another project which deals with licensing and other stuff. I have created an extension point in the licensing plugin and successfully connected this to one of my projects. But when I repeat the same procedure for the second project it throws errors like "Access restriction" and "A cycle was detected in the build path of project". Please help me guys, I am completely new to plugin development. Thanks in advance.

This doesn't sound like an extension point problem. More likely circular dependencies in the plugins (A depends on B and B depends on A). You need to show us more details of the plugins.

Check out "When to use import package and require bundle".
Singularity column width without floating

I have a 12-column grid set up using Singularity. Many times I'm looking for a simple variable of column width. I can use grid-span(), but it involves margins and floats. Is there any mixin/function which will return only the column width in %? I've been searching for this for two days without any success. I found only span-columns(x, y), but it does not work. Thanks for pointing me in a direction.

EDIT: solution found. It was my mistake trying to use column-span (https://github.com/Team-Sass/Singularity/wiki/Grid-Helpers#column-span) as a mixin with @include. Correct use is, for example:

width: column-span(2, 6); // width will be set to 2 columns at the sixth position

Thanks @andrey-lolmaus-mikhaylov for pointing this out. I'm still new to Stack Overflow...
I get an "error, string or binary data would be truncated" with Hibernate/JPA

So I got a "string or binary data would be truncated" error from SQL Server. I am using Hibernate to persist data to MS SQL Server. Usually when you get an error like this it means that the column you're trying to save to is not big enough to hold the data you are trying to insert. I carefully started to compare to see if one of my BigDecimal variables in the domain didn't have the correct precision, but after spending a lot of time, I realized this was not the case. Turns out, I was missing @ManyToOne(optional = false) in the child domain. My parent domain object has a one-to-many relationship with the child. Adding the annotation solved the issue for me. I am hoping this will help somebody seeing a similar error. The question is: why does the database give a data truncation error when the ManyToOne annotation is missing in the domain?

Because since you haven't annotated your field, Hibernate applies the default mapping: treating the field as a serializable object that, once serialized, is stored in a binary column. The length of the byte array resulting from the serialization was too large for the column, hence the error.

If no annotation is provided, Hibernate will apply the default mapping and will create a column with datatype tinyblob. This will store the serialized data of that entity. TINYBLOB: maximum length of 255 bytes. Once the entity is serialized and the data length exceeds 255 bytes, it will throw the data truncation error.

Thank you ankur, but you wrote the same thing that JB wrote, so I'm choosing his answer over yours.

@pacman Yes, no issues, I just wanted to define the exact column datatype and its size, just to be more precise.
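The size argument above is easy to see concretely: a default serialization of even a modest object quickly exceeds a 255-byte TINYBLOB. A rough sketch using Python's pickle as a stand-in for Java serialization (illustrative only, not the Hibernate mechanism itself):

```python
import pickle

# Stand-in for an entity that was mapped as a serialized blob by mistake
entity = {"id": 1, "name": "child", "payload": list(range(200))}

blob = pickle.dumps(entity)
print(len(blob) > 255)  # True - would not fit in a 255-byte TINYBLOB
```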
Rails: flash notice unless only certain attributes updated

I want to show a flash notice in my Rails app all the time unless only certain attributes are updated.

def update
  if @user.update_attributes(user_params)
    if user_params == [:user][:certain_params]
      redirect_to users_path
    else
      redirect_to users_path, notice: "<EMAIL_ADDRESS> saved."
    end
  else
    redirect_to edit_user_path(@user), flash: { error: @user.mapped_errors }
  end
end

Something like this pseudocode?

I don't understand what you mean by "unless only"... are you saying that the user must change certain attributes? He's not allowed to leave them unchanged?

If only one particular attribute is changed, let's call it photo, I want to hide the flash message. Otherwise I want it shown.

So if they change photo plus some other attribute, you want to see the message. If they don't change photo but they change some other attribute, you want to see the message. If they change photo but don't change anything else, you don't want to see the message. Is that right?

That's exactly right, yeah.

Do you need it exactly updated, or are you trying to remind them to upload a photo, so you want it shown if the photo is not uploaded?

Use the changed method to get an array of the attributes that are changed, and create a flash message if the changed attributes are not only the "non-notify" attributes:

def update
  @user.assign_attributes(user_params)
  changed_fields = @user.changed
  if @user.save
    flash[:notice] = "<EMAIL_ADDRESS> saved." if changed_fields != ["photo"]
    redirect_to users_path
  else
    redirect_to edit_user_path(@user), flash: { error: @user.mapped_errors }
  end
end

This will show the saved notice if they change photo plus other attributes, or if they change other attributes but not photo. The message is suppressed only if only photo is changed.

Perfect! Thank you so much.

I know this was not part of the question, yet I think it is worth noting: this is NOT working for NESTED ATTRIBUTES.

So add it to the changed_fields list...
changed_fields = (@user.changed + @user.name_of_nested_associations.map(&:changed)).flatten

def update
  @user = User.find(params[:id])
  @user.assign_attributes(user_params)
  # let's say you have these attributes to be checked
  attrs = ["street1", "street2", "city", "state", "zipcode"]
  attributes_changed = (@user.changed & attrs).any?
  if @user.save
    flash[:notice] = "<EMAIL_ADDRESS> saved." if attributes_changed
    redirect_to users_path
  else
    redirect_to edit_user_path(@user), flash: { error: @user.mapped_errors }
  end
end

For more info see "Rails 3 check if attribute changed" and http://api.rubyonrails.org/classes/ActiveModel/Dirty.html
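The underlying decision logic is language-independent; here is a small sketch (in Python, just to keep the illustration runnable standalone) of the two rules discussed above, with helper names invented for this example:

```python
def should_notify(changed_fields, ignore=("photo",)):
    # Notify unless the only changed fields are among the ignored ones
    return bool(set(changed_fields) - set(ignore))

def any_watched_changed(changed_fields, watched):
    # Notify only when at least one watched field changed
    return bool(set(changed_fields) & set(watched))

print(should_notify(["photo"]))                            # False - only photo changed
print(should_notify(["photo", "name"]))                    # True
print(any_watched_changed(["city"], ["street1", "city"]))  # True
```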
How to ignore query parameters in a web cache?

Google Analytics uses some query parameters to identify campaigns and to do cookie control. This is all handled by JavaScript code. Take a look at the following example: http://www.example.com/?utm_source=newsletter&utm_medium=email&utm_term=October%2B2008&utm_campaign=promotion

This will set cookies via JavaScript with the right campaign origin. These query parameters can have multiple and sometimes random values. Since they are used as cache hash keys, cache performance is heavily degraded in some scenarios. I suppose there's a not-so-hard configuration on cache servers to just ignore all query parameters, or specific query parameters. Am I right? How hard is it to set this up in popular web cache solutions? I'm not interested in a specific web cache solution. It would be great to hear about the one you use.

Fiddling with the cache is not the right way to go about this. The "right" way to handle URLs with this sort of tracking is to send a 301 redirect to your canonical URL (after actually doing any necessary tracking, of course).

Not sure I entirely follow you, but with Squid I believe you would create a URL regex ACL and then use the cache directive to tell it not to cache those requests. In Varnish, in the vcl_recv function:

set req.url = regsub(req.url, "\?.*", "");

You would really need to evaluate whether you wanted to do that though. If it is served from Varnish rather than your backend, are you altering any of your stat collection processes?

Just removing the utm_ parameters -- [?&]utm_[a-z]*=[^&]* (or something similar) -- might be better than removing all of them. Having example.com/?page=this and example.com/?page=that cached as the same entity can cause headaches.
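The "strip only the utm_ parameters" idea can be prototyped outside the cache to check that it preserves meaningful parameters. A sketch in Python (the function name is ours):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_utm(url):
    # Drop utm_* parameters but keep everything else in the query string
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if not k.startswith("utm_")]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_utm("http://www.example.com/?utm_source=newsletter&page=this"))
# http://www.example.com/?page=this
```

The same normalization, expressed as a regsub chain, is what you would put in Varnish's vcl_recv.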
Automatically renumbering element parts in Eagle

I have a circuit in Eagle. Say that, counting from the left, I have R1, R2, R3, and between R1 and R2 I add a new resistor; it will be named R4. I will end up with (looking at the circuit) R1, R4, R2, R3. It's not ordered. I would like to keep element ids (separately for each element type, e.g. resistors, capacitors) in order from lowest to highest (counting from the left). Is there a script or setting in Eagle that will do it automatically for me?

Although Tom Carpenter's answer is right, I am adding this as an alternative. Eagle already has a built-in tool that will allow you to renumber parts without the need for external ULP scripts. In your schematic editor menu, open "Tools" and click "Renumber parts". This will automatically renumber all your parts in the schematic as you wish.

+1 for learning new things every day - didn't know that was in the menu. But as is the way with Eagle, this is actually just a shortcut which calls the renumber-sheet ULP.

There is a ULP included with Eagle to do this. It is called 'renumber-sheet.ulp'. What this does is count in the direction you specify (up/down, left/right) in the schematic and renumber all parts with the same letter (e.g. all "R###", all "C###") to be in sequential order. I believe that is exactly what you want, but if you are talking about the layout, I don't think this will do that, though it could probably be modified to do so.

I'm not an Eagle user and I'm not disputing your answer, but if they provide positional reference renumbering on the schematic and not on the PCB, they sure got it backwards. IME, it is far more important to have it on the PCB, especially for troubleshooting (with a scope or meter) or hand-assembly from a BOM. I can't imagine why you would want it on the schematic instead, unless you are not planning on making a PCB. FYI ...
I just found an interesting discussion on this: http://www.eaglecentral.ca/forums/index.php/mv/msg/36342/123835/

@Tut The ULP renumbers both the components in the layout and in the schematic together (to retain consistency). However, you have to run the ULP from the schematic - in other words, you can't say "number each component left to right as they appear in the board" without modifying the ULP (which should actually be quite trivial to do).

From the discussion I linked to: "The ULP cmd-renumber.ulp renumbers components on the PCB in a logical order, and if the schematic is open, back annotation happens automatically." ... This would seem to indicate that it is possible to do a positional renumber for the PCB, but as I said, I'm not an Eagle user. I use Cadstar. With Cadstar you do a "positional rename" from the PCB editor (with adjustable automatic features, or you can do it manually), and then when all finished you perform a "back annotation" from the schematic editor.

There is a great ULP named 'cmd-renumber'. This ULP will effortlessly renumber your components as they appear on your board layout. This is contrary to the ULP 'renumber-sheet', which will renumber your components as they appear in the schematic. The built-in tool in Eagle appears to simply run the 'renumber-sheet' ULP. The 'renumber-sheet' ULP may also require some extra workaround or understanding; you can read more about this here: https://forums.autodesk.com/t5/eagle-forum/bug-report-renumber-ulp-broken-in-9-5-2/td-p/9155600
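The renumbering these ULPs perform boils down to sorting parts by position and reassigning indices per letter prefix. A rough, Eagle-independent sketch of that idea (the component data is made up):

```python
import re
from collections import defaultdict

def positional_renumber(components):
    # components: list of (name, x, y); renumber each letter prefix left-to-right
    counters = defaultdict(int)
    mapping = {}
    for name, x, y in sorted(components, key=lambda c: (c[1], c[2])):
        prefix = re.match(r"[A-Za-z]+", name).group()
        counters[prefix] += 1
        mapping[name] = f"{prefix}{counters[prefix]}"
    return mapping

# R4 was inserted between R1 and R2, so positionally it should become R2
print(positional_renumber([("R1", 0, 0), ("R4", 10, 0), ("R2", 20, 0), ("R3", 30, 0)]))
# {'R1': 'R1', 'R4': 'R2', 'R2': 'R3', 'R3': 'R4'}
```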
How to configure a single internal ALB for multiple EKS services?

I am running an AWS EKS cluster in which multiple applications are running. My EKS is in a private subnet, so to access those applications I am using a VPN and creating internal ALBs to reach the application dashboards in the browser. I am able to get their dashboards in the browser, but now I am trying to make a single ALB to access all these applications. I want to configure a single ALB and, with that, call my applications running in the EKS cluster. Suppose I have an application uiserver running in my EKS cluster: I want an ALB on which I call alburl/uiserver/somequery, which will direct me to alburl/somequery and also call my specific service uiserver. I can't find anything on configuring this. If anyone has any idea about configuring this type of ALB, please reply. Thanks

You are looking for path-based routing. The following link has details: https://aws.amazon.com/premiumsupport/knowledge-center/elb-achieve-path-based-routing-alb/
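The behaviour being asked for - match a path prefix, forward to a service, and strip the prefix before the backend sees it - can be sketched independently of the ALB (the service names below are invented for illustration):

```python
def route(path, rules):
    # rules maps a path prefix to a backend service name
    for prefix, service in rules.items():
        if path == prefix or path.startswith(prefix + "/"):
            rewritten = path[len(prefix):] or "/"
            return service, rewritten
    return None, path

rules = {"/uiserver": "uiserver-svc", "/api": "api-svc"}
print(route("/uiserver/somequery", rules))  # ('uiserver-svc', '/somequery')
print(route("/other", rules))               # (None, '/other')
```

On an ALB this corresponds to listener rules with path conditions per service; the prefix rewrite part is handled by the ingress/backend configuration rather than the ALB itself.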
ImageButton in WPF

Hi! I added a PNG to the debug directory. Now my XAML code:

<Button Height="23" HorizontalAlignment="Left" Margin="160,249,0,0" Name="button1" VerticalAlignment="Top" Width="57">
    <Image Source="\back.png"></Image>
</Button>

The image won't be found. I read some tutorials about an image in a button, but they all speak about resources etc. Please write your answer step by step; I am very new to WPF.

Do you want the image to be embedded, a part of the exe/dll, or should the image be a separate file?

Add the file to the solution. Open the image properties by right-clicking on the image and make sure the Build Action for the image is set to Resource.

You have to add the image to the solution, set its Build Action property to Resource, and give the path like:

<Image Margin="2" Source="/ApplicationName;component/FolderName/back.png" />
How to clear the cache using JavaScript when starting an offline application

I need help clearing the cache using JavaScript. Is it possible to clear the cache using JavaScript? I have developed an offline Chrome application using JavaScript and Node WebKit. When using this application, the cache size increases more day by day. So I want to delete the cache directory or clear the cache from AppData/Local/MyAPP.1.0 whenever I'm starting the application. Kindly help me clear the cache using JavaScript. Please let me know if you need any information on this. Thanks in advance.

Please add code to show your progress and help us pinpoint your problem.

Try this, it might work:

require("nw.gui").App.clearCache();

You can just do nw.App.clearCache(); now. The require('nw.gui') part was only needed on versions below 0.13.0.

I always disable the disk cache because I find it slows everything down, even in the browser I wrote, so if you want to disable it - which also means that there won't be any garbage to clean up - use this setting in your package.json:

"chromium-args": "--disk-cache-dir=W:/abc --media-cache-dir=W:/abc --disk-cache-size=1 --media-cache-size=1",

NB: The above are DUMMY drives/paths that don't exist, to kill the cache.

Use this recursive function to delete all files from AppData/Local/{MyAPP.1.0}/:

function deleteFolderRecursive(path) {
    var fs = require("fs");
    if (fs.existsSync(path)) {
        try {
            fs.readdirSync(path).forEach(function(file) {
                var curPath = path + "/" + file;
                if (fs.statSync(curPath).isDirectory()) {
                    try {
                        deleteFolderRecursive(curPath);
                    } catch (e) {
                        console.log(e);
                    }
                } else {
                    try {
                        fs.unlinkSync(curPath);
                    } catch (e) {
                        console.log(e);
                    }
                }
            });
            fs.rmdirSync(path);
        } catch (e) {
            console.log(e);
        }
    }
}

Get the AppData folder path using:

var path = require("nw.gui").App.dataPath;

Then delete it:

deleteFolderRecursive(path);
Avoiding a lost update in Java without directly using synchronization

I am wondering if it is possible to avoid the lost update problem, where multiple threads are updating the same data, while avoiding using synchronized(x) { }. I will be doing numerous adds and increments:

val++;
ary[x] += y;
ary[z]++;

I do not know how Java will compile these into byte code and whether a thread could be interrupted in the middle of one of these blocks of byte code. In other words, are those statements thread-safe? Also, I know that the Vector class is synchronized, but I am not sure what that means. Will the following code be thread-safe, in that the value at position i will not change between the vec.get(i) and vec.set(...)?

class MyClass {
    Vector<Integer> vec = new Vector<Integer>();

    public void someMethod() {
        for (int i = 0; i < vec.size(); i++)
            vec.set(i, vec.get(i) + value);
    }
}

Thanks in advance.

For the purposes of threading, ++ and += are treated as two operations (four for double and long). So updates can clobber one another. Not just by one - a scheduler acting at the wrong moment could wipe out milliseconds of updates. java.util.concurrent.atomic is your friend. Your code can be made safe, assuming you don't mind each element updating individually and you don't change the size(!), as:

for (int i = 0; i < vec.size(); i++) {
    synchronized (vec) {
        vec.set(i, vec.get(i) + value);
    }
}

If you want to add resizing to the Vector you'll need to move the synchronized statement outside of the for loop, and you might as well just use a plain new ArrayList. There isn't actually a great deal of use for a synchronised list. But you could use AtomicIntegerArray:

private final AtomicIntegerArray ints = new AtomicIntegerArray(KNOWN_SIZE);
[...]
int len = ints.length();
for (int i = 0; i < len; ++i) {
    ints.addAndGet(i, value);
}

That has the advantage of no locks(!) and no boxing.
The implementation is quite fun too, and you would need to understand it to do more complex updates (random number generators, for instance). It's a bit on the low-level side. java.util.concurrent is at a more generally useful level.

vec.set() and vec.get() are thread-safe in that they will not set and retrieve values in such a way as to lose sets and gets in other threads. It does not mean that your set and your get will happen without an interruption. If you're really going to be writing code like in the examples above, you should probably lock on something, and synchronized(vec) { } is as good as any. You're asking here for two operations to happen in sync, not just one thread-safe operation. Even java.util.concurrent.atomic will only ensure one operation (a get or set) will happen safely. You need to get-and-increment in one operation.

Other than with getAndIncrement and similar? (Technically implemented as multiple operations, but in a loop if it fails.)
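The read-modify-write hazard discussed above is the same in any language; here is a minimal sketch (in Python, just so it can run standalone) of the locked variant that keeps every increment:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:  # the read-modify-write happens as one atomic step
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 - no lost updates
```

Without the lock, `counter += 1` compiles to separate load/add/store steps and concurrent threads can overwrite each other's stores, which is exactly the lost-update problem in the question.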
Error: "no such directory" when running click package on phone

Error: "no such directory" when running the click package on the phone. On my desktop it works. Testing it out on the phone, though, it does not work. The error stems from the import of the folder called "Components". How can I fix this?

I went to my .pro file and found this:

# specify where the qml/js files are installed to
qml_files.path = /Poker-Puzzle
qml_files.files += $${QML_FILES}

and changed it to this:

# specify where the qml/js files are installed to
qml_files.path = /Poker-Puzzle/QML/components
qml_files.files += $${QML_FILES}
Conditional compilation in RPG(LE)

Can I include a section of code based on whether a variable is defined in my program, or is the preprocessor completely unable to access this information, only compilation conditions? I.e., I'm after something like:

/IF DEFINED(myVariable)
D myOtherVariable S like(myVariable)
/ELSE
D myOtherVariable S 20A
/ENDIF

This link appears to suggest it is not possible. If so, does anyone know of another way to achieve this?

It would be a nice feature to have, but unfortunately, it doesn't work that way today. The value in parentheses after DEFINED must be a defined condition name, not a variable name. The way to make it work is to have a /DEFINE directive wherever you define that specific variable in any of your programs or copybooks. It all depends on usage for these situations as far as whether or not this is a recommended practice. The link provided in your question is spot-on and I agree with the conclusions.

Yes, I've been reading more since posting my question and agree with your assessment.

As an additional hint, use C programming books for this subject, not the IBM RPGLE books. The IBM books describe what can be done with directives. The C books describe how directives are used in real-world programs (e.g. Linux open source programs). Following that style makes your RPG sources much more readable for other programmers.
Chained methods and continuation indent in IntelliJ

I've never figured out how to make IntelliJ handle continuation indent for chained methods properly, and apparently today is the day it's annoyed me enough to consult you lovely people. What I want is this:

makeAThing(
    "with", "params"
)
.setProperty("with some more params")
.start();

What I get is this:

makeAThing(
    "with", "params"
)
        .setProperty("with some more params")
        .start();

I get this in Java, Groovy, JavaScript and a bunch of other places. How can I persuade IntelliJ not to add continuation indent after a chained method call? This comes up a lot when using angular.js, because the convention is to break lines when defining modules.

We badly need this for SwiftUI too in AppCode.

I just switched to IntelliJ and have also found this rather annoying. I only found two solutions:

Forcing the coding style to have 0 for "continuation indent", which I'm starting to like anyway, albeit not very canonical Java.

Turning off the formatter for blocks of code and pressing Shift+Tab. Works for Java, not sure for JS:

// @formatter:off
...
// @formatter:on
MBN profile corrupted with HRESULT 0x800704B6

I'm trying to connect to a 3G network using the MBN API. But when I call the Connect method on IMbnConnection it throws an exception:

The network connection profile is corrupted. (Exception from HRESULT: 0x800704B6)

I have tried to find typos and other errors in the profile I generate in my code using Microsoft's Mobile Broadband documentation (Link), but I can't find one. I also found on a blog that this HRESULT can come from a wrong IMSI number (here), so I connected manually with Windows and compared the numbers in my profile with the ones in the properties of the connection, and found that they are the same, both the IMSI and ICC numbers. This is how my XML is currently generated:

<MBNProfile xmlns="http://www.microsoft.com/networking/WWAN/profile/v1">
  <Name>boomer3g</Name>
  <ICONFilePath>Link/To/BMPFILe</ICONFilePath>
  <Description>3G Network profile created by Boomerweb</Description>
  <IsDefault>true</IsDefault>
  <ProfileCreationType>UserProvisioned</ProfileCreationType>
  <SubscriberID>IMSI Number (i counted 15 characters)</SubscriberID>
  <SimIccID>ICC number (i counted 19 characters)</SimIccID>
  <AutoConnectOnInternet>false</AutoConnectOnInternet>
  <ConnectionMode>auto</ConnectionMode>
</MBNProfile>

And this is the code that generates the XML profile.
//XML Namespaces
XNamespace xmlns = XNamespace.Get("http://www.microsoft.com/networking/WWAN/profile/v1");
XDocument xmlDocument = new XDocument(
    new XElement(xmlns + "MBNProfile",
        new XElement(xmlns + "Name", "boomer3g"),
        new XElement(xmlns + "ICONFilePath", Path.GetFullPath("Resource/KPN-icon.bmp")),
        new XElement(xmlns + "Description", "3G Network profile created by Boomerweb"),
        new XElement(xmlns + "IsDefault", true),
        new XElement(xmlns + "ProfileCreationType", "UserProvisioned"),
        new XElement(xmlns + "SubscriberID", subscriberInfo.SubscriberID),
        new XElement(xmlns + "SimIccID", subscriberInfo.SimIccID),
        new XElement(xmlns + "AutoConnectOnInternet", false),
        new XElement(xmlns + "ConnectionMode", "auto")
    )
);

//Create xml document
string xml;
XmlWriterSettings XmlWriterSet = new XmlWriterSettings();
XmlWriterSet.OmitXmlDeclaration = true;
using (StringWriter StrWriter = new StringWriter())
using (XmlWriter XWriter = XmlWriter.Create(StrWriter, XmlWriterSet))
{
    xmlDocument.WriteTo(XWriter);
    XWriter.Flush();
    xml = StrWriter.GetStringBuilder().ToString();
}

What else can give this HRESULT? Can you guys give an example of an MBN profile? Or is there something wrong in my code that generates the profile?

Guys, I have solved my problem. It turns out I didn't have my XML declaration above the XML. My XML is now like this:

<?xml version="1.0" encoding="utf-8" ?>
<MBNProfile xmlns="http://www.microsoft.com/networking/WWAN/profile/v1">
  <Name>boomer3g</Name>
  <ICONFilePath>Link/To/BMPFILe</ICONFilePath>
  <Description>3G Network profile created by Boomerweb</Description>
  <IsDefault>true</IsDefault>
  <ProfileCreationType>UserProvisioned</ProfileCreationType>
  <SubscriberID>IMSI Number (i counted 15 characters)</SubscriberID>
  <SimIccID>ICC number (i counted 19 characters)</SimIccID>
  <AutoConnectOnInternet>false</AutoConnectOnInternet>
  <ConnectionMode>auto</ConnectionMode>
</MBNProfile>

So it turns out nothing was wrong with my code; I just removed something I shouldn't have.
A stupid mistake on my part, I know.
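The root cause - the writer's OmitXmlDeclaration = true dropping the <?xml ...?> line - is easy to reproduce in any XML library. A sketch using Python's ElementTree as a stand-in for the C# code (element names follow the profile above, the rest is illustrative):

```python
import io
import xml.etree.ElementTree as ET

ns = "http://www.microsoft.com/networking/WWAN/profile/v1"
ET.register_namespace("", ns)  # keep the profile's default namespace
root = ET.Element(f"{{{ns}}}MBNProfile")
ET.SubElement(root, f"{{{ns}}}Name").text = "boomer3g"

def serialize(declaration):
    buf = io.BytesIO()
    ET.ElementTree(root).write(buf, encoding="utf-8", xml_declaration=declaration)
    return buf.getvalue().decode()

print(serialize(False).startswith("<?xml"))  # False - the broken variant
print(serialize(True).startswith("<?xml"))   # True - the fix
```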
Lease term is described as additions instead of a sum

I have been looking at an investment property. It states:

Term: 30 years based on 15 + 5 + 5 + 5
Initial rent per unit: Ranging from $13,500 to $19,000 gross plus GST if any (including free-stay value).
Reviews: 2% (compounding) increase starting after year 3 and market review on renewals.
Return: Up to 6.95% gross (see pricing schedule)
Price per title: Ranging from $199,855 to $273,542 + GST (if any).

The country is New Zealand; it is a commercial apartment hotel investment. Can I please ask why it is presented in such a way? Why not tell me it is a 30-year lease?

You need to add more details. As it is, your question is unclear. What country is this? Is this residential or commercial?

A 15/5/5/5 lease is an initial 15-year lease with three 5-year options (meaning the tenant can leave without penalty, or without significant penalty, after 15, 20, or 25 years). This is common in commercial properties where the tenant would like the surety of knowing the lease terms for a longer time (30 years) but also the ability to leave prior to the full lease term if business conditions change. Of course, a 15/5/5/5 lease will typically cost a bit more than a straight 30-year lease (the tenant pays some premium for the ability to have option years). The lease would have more specifics, including whether anything changes at the option-year points and whether there is any exit fee for turning down the option years.
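As a worked example of the review clause quoted above (a 2% compounding increase starting after year 3), here is a quick calculation. Reading "starting after year 3" as "the first increase applies in year 4" is our assumption, not something stated in the listing:

```python
def rent_in_year(base, year, first_increase_year=4, rate=0.02):
    # Number of compounding 2% increases applied by the given lease year
    increases = max(0, year - first_increase_year + 1)
    return round(base * (1 + rate) ** increases, 2)

print(rent_in_year(13_500, 3))   # 13500.0 - no increase yet
print(rent_in_year(13_500, 4))   # 13770.0 - first 2% step
print(rent_in_year(13_500, 15))  # rent at the end of the initial 15-year term
```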
common-pile/stackexchange_filtered
Unity set default image sprite programmatically I am trying to create inputfields in Unity 3D programmatically. I succeeded in this when running in the editor emulator, but trying to built into an android device just provides an error. "BCE0005: Unknown identifier: 'AssetDatabase'." Apparently this AssetDataBase is only available in editor. inputFieldGO.AddComponent.<Image>(); var image : Image = inputFieldGO.GetComponent.<Image>(); image.sprite = AssetDatabase.GetBuiltinExtraResource.<Sprite>("UI/Skin/InputFieldBackground.psd"); image.type = Image.Type.Sliced; How do I get around this? How do I set the sprite of this image to the default InputFieldBackground entirely programmatically, without using the AssetDataBase? I'd move the InputFieldBackground into the project resources, but I don't know where the file is or if it's even accessible. AssetDatabase is an Editor class, that means that can be used in Editor but not in devices. Unity Scripting Reference For AssetDatabase Solution: Do you have your files on Resources folder? Try this: Sprite newSprite = Resources.Load<Sprite>(spritePath); From: Unity Scripting Reference for Resources.Load I tried image.sprite = Resources.Load("UI/Skin/InputFieldBackground.psd") as Sprite; since I am using UnityScript. This doesn't give an error, but also doesnt load the sprite. I don't know how I can access that sprite, without the AssetDatabase, and I can't move it to the resources folder, because I don't have the sprite! What I'm trying to say is that I can't set the default Input Field sprite, because I don't have default Input Field sprite. I don't know where to get the default sprite so I could move it to the the resources folder. Ok, first you should put your psd file into your Resources folder, ie, Resources/UI/Skin/InputFieldBackground.psd Then select your psd file in Unity and you will see its properties on Inspector, select Texture Type : Sprite (or Advanced). This will generate a sprite associated to your psd file. 
Then try to load your sprite using: Resources.Load("UI/Skin/InputFieldBackground") as Sprite. Let us know how it went! Okay, I tried this. Code to load the sprite: "image.sprite = Resources.Load("sprite") as Sprite;" My resource folder: http://puu.sh/j2iMh/cbc332cebd.png The rendered end result: http://puu.sh/j2iTy/2e09799e39.png So it looks like it's still not loading the sprite. Changing the path to "UI/Skin/sprite" didn't help either. Hmm, weird... can you upload a screenshot of your Project view, showing your image file? E.g.: http://snag.gy/50UDm.jpg Hmm, I think something is wrong with your file: the icon of your "sprite" is the icon of a prefab. If you check my previous link, the icon of the image in your resource folder should be a preview icon, not a prefab icon.
common-pile/stackexchange_filtered
Remote DB connection to SQL Server 18 (single sign on) using PgAdmin I am currently trying to figure out the best way to do a remote database connection via windows server that has SQL Server 18 database (single sign on), and I am wanting to use a tool to help parse the data, like PgAdmin. We have a server here in our building, that we have to use Windows based remote connection, which stores the SQL Server 18 in it, that holds the data that I need. I am needing to get that data, to my PC (not connected to the server so I can't make edits on the main server ((read only access))), and need to create my own Schema to generate the reports/information that I am in need of. Any help with this would be greatly appreciated. Thank you in advance! My boss has attempted to do this in the past, I am not completely sure of what methods he used, but he was unsuccessful at it. There's no such thing as sqlserver 18 I'm a little confused by the details but generally speaking on windows with sql server you can use SSMS (and many other similar tools) to connect to a database and read data (run queries, view tables, etc.) https://learn.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms If you are using SQL Server 18, that might well be your problem; no drivers for it exist yet. I don't expect them to be released until SQL Server 18 is released, which is probably sometime between 2026-2028. You're using an incredibly early pre-release build (or you have access to a time machine). @ThomA -- I just verified and we are actually using SQL Studio Management vs 14.0.2052.1 The version of SSMS is irrelevant, @Felecia , we need to know the version of SQL Server. What does PRINT @@VERSION return? SQL Server Management Studio (SSMS) is the equivalent for PgAdmin for Microsoft SQL Server. Does that answer your question?
common-pile/stackexchange_filtered
Using php facebook sdk in joomla Here is the sample what i have used for a social login in joomla. Iam having a problem to get extended permission so that i can recieve there email id.. $facebookuser = @$_SESSION['facebookuser']; $facebook = new Facebook(array( 'appId' => $params->get('fb_appid'), 'secret' => $params->get('fb_appsecret'), 'cookie' => true )); $session =& JFactory::getSession(); $session->set( 'facebook', $facebook ); if(empty($facebookuser)) { $session = $facebook->getUser(); if (!empty($session)) { # Active session, let's try getting the user id (getUser()) and user info (api->('/me')) try { $uid = $facebook->getUser(); $user = $facebook->api('/me'); } catch (Exception $e) { } if (!empty($user)) { $fbobject = new facebookloginHelper(); $storefb = $fbobject->fbstoreuser($user); } else { # For testing purposes, if there was an error, let's kill the script die("There was an error."); } } else { # There's no active session, let's generate one //$login_url = $facebook->getLoginUrl(); $login_url = $facebook->getLoginUrl(array( 'canvas' => 1, 'fbconnect' => 0, 'req_perms' => 'publish_stream,email', 'next' => 'http://www.oyeparty.com/bangalore', 'cancel_url' => 'http://www.oyeparty.com/bangalore' )); //header("Location: " . $login_url); } } Please help me. Is there a reason not to use JFacebook? I have no idea regarding JFacebook.. So you should look at that package and see if you can use it to do what you want, a lot of the code is already written. https://www.facebook.com/dialog/oauth?client_id=202020556638870&redirect_uri=http%3A%2F%2Fwww.some.com%2F&state=c3900422cab698db3d680804e77abf32&req_perms=publish_stream%2Cemail&next=http%3A%2F%2Fwww.some.com%2Fbangalore&cancel_url=http%3A%2F%2Fwww.some.com%2F This is the href attribute but still it is not asking for extended permission... 
Not sure if Joomla has a custom Facebook PHP SDK but the code presented above doesn't match the latest Official Facebook PHP SDK https://github.com/facebook/facebook-php-sdk With this being the way to get email permissions $params = array( 'scope' => 'publish_stream,email', 'redirect_uri' => 'http://www.oyeparty.com/bangalore' ); $loginUrl = $facebook->getLoginUrl($params); https://www.facebook.com/dialog/oauth?client_id=202020556638870&redirect_uri=http%3A%2F%2Fwww.some.com%2F&state=c3900422cab698db3d680804e77abf32&req_perms=publish_stream%2Cemail&next=http%3A%2F%2Fwww.some.com%2Fbangalore&cancel_url=http%3A%2F%2Fwww.some.com%2F This is the href attribute but still it is not asking for extended permission...
common-pile/stackexchange_filtered
Visual Studio Online: how to structure I have a common DataAccess class library project. This project needs to be referenced in multiple Visual Studio solutions. Currently we are referencing this DA library via a folder created in each project called "binary", so whenever there is a change in the DataAccess library project, we have to manually update all the projects that reference this DAL. I was thinking about creating a single solution, which would contain all the projects including the DAL and all other projects that reference it, and changing the references to project references to the DAL, instead of file references from the binary folder. Is there any better solution for sharing this DAL? The answer is NuGet. You should package your DAL output as a NuGet package and push it to a NuGet server. A NuGet server can be a network share or an application like ProGet. Preferably you have an automated build do the pack and push; that makes it easy. Then each of your other solutions can take a dependency on that package. When you update it on the NuGet server, each of the solutions will be notified of a new version that can be used.
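As an illustration of that workflow - the package id, version, and feed path here are hypothetical, so substitute your own project and server:

```shell
# Pack the DAL project into a .nupkg (nuget.exe assumed on PATH)
nuget pack MyCompany.DataAccess\MyCompany.DataAccess.csproj -Version 1.0.1

# Push it to the shared feed -- a plain network share also works as a "server"
nuget push MyCompany.DataAccess.1.0.1.nupkg -Source \\buildserver\nuget-feed
```

Consuming solutions then install the package from that feed, and the NuGet tooling flags when a newer version appears.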
common-pile/stackexchange_filtered
Deterministic Finite Automaton - Java I need to create a DFA over the alphabet {a,b,c} that accepts words whose first and last letters are different, e.g. "a" - unacceptable, "ab" - acceptable, "aaa bb" - unacceptable, "cbba" - acceptable. I'm first trying to check whether there's an "a" at the beginning, but something is wrong, especially if I have e.g. "ab" or "ac" in file.txt. Source: import java.io.*; import java.util.ArrayList; public class Task { public static void main(String[] args) throws FileNotFoundException, IOException { BufferedReader reader = new BufferedReader(new FileReader("file.txt ")); ArrayList<String> wordList = new ArrayList<>(); String line = null; while ((line = reader.readLine()) != null) { wordList.add(line); } for (String word : wordList) { if (word.matches("^a")) { if (word.matches("ab") || word.matches("^ac")) { System.out.print(word+" - OK\n"); } else { System.out.print(word+" - STOP (word doesn't exists in alphabet)\n"); System.exit(0); } } } } } What you do - a series of string matches against the whole input words - does not really look like a DFA to me. To implement a DFA you should think in terms of "input signals", "states" and "transitions" between those states. Given that your alphabet is defined as "a,b,c,{,},&", your code should accept one of these chars at a time, and switch to the next state based on the current state and the input char. Your first "word.matches" will only match "a"; if you want to match every word starting with "a" and followed by something else, you have to use "^a.*", and the same for the other matches. Thanks - it works now. Can you tell me how I can stop the program at a bad position? For example, if the file contains "abbc", I want to stop at "abb". Fixing the regexes won't turn this code into a DFA (which is the mission statement); moreover, it may trick the OP into thinking that he has accomplished the task.
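Since the advice above is to model states and transitions explicitly, here is a sketch of what such a DFA could look like. The state numbering and method names are my own, not from the thread:

```java
// A table-driven DFA over {a,b,c} accepting words whose first and last
// letters differ. States (illustrative numbering):
//   0        start
//   1,2,3    first letter was a/b/c and the last letter equals it (reject)
//   4,5,6    first letter was a/b/c and the last letter differs (accept)
//   7        dead state: a character outside the alphabet was seen
public class Main {

    static int step(int state, char c) {
        int sym = "abc".indexOf(c);
        if (sym < 0) return 7;                       // not in the alphabet -> dead
        switch (state) {
            case 0: return 1 + sym;                  // remember the first letter
            case 1: case 4: return sym == 0 ? 1 : 4; // first letter was 'a'
            case 2: case 5: return sym == 1 ? 2 : 5; // first letter was 'b'
            case 3: case 6: return sym == 2 ? 3 : 6; // first letter was 'c'
            default: return 7;                       // dead state is absorbing
        }
    }

    static boolean accepts(String word) {
        int state = 0;
        for (char c : word.toCharArray()) state = step(state, c);
        return state >= 4 && state <= 6;             // accepting states only
    }

    public static void main(String[] args) {
        for (String w : new String[] {"a", "ab", "aaa bb", "cbba"})
            System.out.println(w + " -> " + (accepts(w) ? "acceptable" : "unacceptable"));
    }
}
```

This also answers the follow-up about stopping at a bad position: as soon as the machine enters the dead state 7, the word can be reported as outside the alphabet and processing stopped.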
common-pile/stackexchange_filtered
Worklight 6.1 Location Services Sample works in simulator, but not actual phone I can't get the Worklight "Location Services" "SmallSample" app to work on my Droid 4 phone. I am using the "smallSample" sample project provided by IBM Worklight. The "SmallSample" app works great in the Mobile Browser Simulator, but not when I install it on my physical phone. When I press the button in the app to retrieve my GPS coordinates, the Android GPS icon appears for about 2 seconds in my notification bar, then disappears. The GPS coordinates are never displayed, and there are no errors. Details: I'm using Worklight Developer Edition <IP_ADDRESS> My phone is the Droid 4, Android 4.1.2 (API 16) I imported the app into the Studio, then exported a signed APK using a new keystore, and installed it on my phone. I'm outside when using the app to ensure I can get a GPS signal. Other GPS apps work on my phone (Google Maps). I verified connectivity to my Worklight Server by opening the Worklight console with my mobile browser. In the Worklight settings of the app, I verified the app is using the correct IP address, port, and context root for my Worklight Server. I verified all the default permissions are there, including: <uses-permission android:name="android.permission.INTERNET"/> <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/> <uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/> <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/> <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/> UPDATE: Without changing anything else, I was able to retrieve GPS coordinates on my device at a single point in time by using the code from this question: IBM Worklight - How to implement GPS functionality?
navigator.geolocation.getCurrentPosition(onSuccess, onError); function onSuccess(position) { alert(JSON.stringify(position)); } function onError(error) { alert(JSON.stringify(error)); } The code above works perfectly on my phone and GPS coordinates are returned. However, the code from SmallSample (referenced in the link above) still doesn't work. One difference is the GPS coordinates are continually updated in Small Sample. I pasted the main part of the code below. WL.Device.Geo.acquirePosition( function(pos) { // when we receive the position, we display it and start on-going acquisition displayPosition(pos); var triggers = { Geo: { posChange: { // display all movement type: "PositionChange", callback: function(deviceContext) { displayPosition(deviceContext.Geo); } }, leftArea: { // alert when we have left the area type: "Exit", circle: { longitude: pos.coords.longitude, latitude: pos.coords.latitude, radius: 200 }, callback: function() { window.alert('Left the area'); } }, dwellArea: { // alert when we have stayed in the vicinity for 3 seconds type: "DwellInside", circle: { longitude: pos.coords.longitude, latitude: pos.coords.latitude, radius: 50 }, dwellingTime: 3000, callback: function() { window.alert('Still in the vicinity'); } } } }; WL.Device.startAcquisition({ Geo: geoPolicy }, triggers, { Geo: alertOnGeoAcquisitionErr } ); }, function(geoErr) { alertOnGeoAcquisitionErr(geoErr); // try again: getFirstPositionAndTrack(); }, geoPolicy ); Also, here are the errors I get from LogCat(debug): 06-04 15:42:00.243: D/NONE(3962): wlclient init success 06-04 15:42:03.602: D/WL gps Listener: The location has been updated! 06-04 15:42:03.602: D/WL gps Listener: Acquired location age: 17306 milliseconds. More than maximumAge of 10000 milliseconds. Ignoring. 
06-04 15:42:03.602: D/WL gps Listener: The status of the provider gps has changed 06-04 15:42:03.602: D/WL gps Listener: gps is TEMPORARILY_UNAVAILABLE 06-04 15:43:03.509: D/CordovaLog(3962): file:///android_asset/www/default/worklight/worklight.js: Line 12769 : Uncaught ReferenceError: PositionError is not defined 06-04 15:43:03.509: E/Web Console(3962): Uncaught ReferenceError: PositionError is not defined at file:///android_asset/www/default/worklight/worklight.js:12769 06-04 15:43:04.040: D/CordovaActivity(3962): Paused the application! To summarize, the app gets my GPS coordinates initially, but then immediately throws "gps is TEMPORARILY_UNAVAILABLE", and that process repeats. Perhaps my phone can't handle live tracking? Although live tracking works in Google Maps on my device. Have you tested it in the Android emulator, to see if it works there? You can dynamically change the GPS coordinates there to ensure that it is working. That would help narrow it down to whether the problem is on the device or on Android. This question explains how to change the GPS coordinates in the emulator: http://stackoverflow.com/questions/2279647/how-to-emulate-gps-location-in-the-android-emulator LogCat logs would help. Thanks Daniel. I verified the app does work in the emulator, so the problem is when it runs on my Droid 4 physical device. Per Idan's suggestion, I will next investigate the LogCat logs. I'd like to verify that you are using the latest available version of Worklight - you can re-download the developer edition from the Eclipse Marketplace, and check whether the issue persists (delete the native folder and re-build afterwards). There was some issue around PositionError, as seen above in the logs. I was using build <IP_ADDRESS>-20140427-1450 and then I upgraded to <IP_ADDRESS>-20140518-1532. Unfortunately, since the 1532 upgrade, the geolocation doesn't work in the Mobile Browser Simulator or the Android Emulator any more.
I think I'm just going to wait a few more days for 6.2 to be released. Thanks for your help! @user3700295, 6.2 has been released for some time now, did you try it? @user3700295, by now <IP_ADDRESS> has also been released for some time. Did you try it? file:///android_asset/www/default/worklight/worklight.js: Line 12769 : Uncaught ReferenceError: PositionError is not defined This issue was resolved in later versions of Worklight; my suggestion for you, if possible, is to upgrade your Studio installation to either the latest v<IP_ADDRESS> release or even v6.3. If you require it in v6.1, try upgrading to the latest iFix for <IP_ADDRESS> from IBM Fix Central.
common-pile/stackexchange_filtered
Problems with using session data using nw.js with PHP web server I am trying to convert my web game into a desktop app. Everything works except for the session data. I got websockets, images, audio, and all of that working. Great! But when I try to authenticate, it POSTs to the web server just fine, returns a successful authentication, and attempts to reload the page. However, upon page reload, the subsequent AJAX requests cannot get any of the $_SESSION values again. I was able to find my PHPSESSID using win.cookies.getAll inside of the nw.js app, but I don't see how this helps me at all. Normally this is automatic when loading the webpage from the server. How do I fix this? Can you show the php file you are postig to? First thing that comes in my mind: are you using session_start() at the top of your php script? Yeah, this is something that works fine if I load it from a normal web browser. Okay, it might seem like a minor comment, but you helped me out. First of all I got some sleep and now I'm thinking more clearly. All of my problems boiled down to using the wrong absolute path to point to the file I was using to authenticate. It actually works now... I'm super tired, but also incredibly happy. Now, I will rest.
common-pile/stackexchange_filtered
Material UI + styled with props throws "React does not recognize zIndex prop on a DOM element." I'm struggling with React error -> React does not recognize zIndex prop on a DOM element. It happens when I'm using MUI component and Styled with props. Code example: styled: import {Drawer as DrawerBase, styled} from '@material-ui/core' interface DrawerProps { zIndex: number } export const Drawer = styled(DrawerBase)<Theme, DrawerProps>(({zIndex}) => ({ zIndex })) implementation: <Drawer zIndex={10} /> I'm using: "@material-ui/core": "^4.12.3" Is there any possibility to prevent passing props to DOM elements? Try transient props from styled components Usless comment, I clearly told which version of material-ui I'm using. Material UI's 'styled' syntax is very confusing unfortunately. This should work though: interface DrawerProps { zIndex: number } export const Drawer = styled(DrawerBase, { shouldForwardProp: (prop) => prop !== 'zIndex', })<Theme, DrawerProps>(({zIndex}) => ({ zIndex })) 'shouldForwardProp' is also an unfortunate name as it sounds like the property zIndex isn't forwarded, but it is actually accessible. Hope it helps. I've not used styled components within MUI before, so take what I'm saying with a grain of salt. But I have a few ideas: You're importing styled from '@material-ui/core', but you're calling a style function (without the "d"). Looking at the MUI docs for using styled components, they're using it the way we traditionally use styled components, which is using CSS syntax. I'd expect your code to look like this: const Drawer = styled(DrawerBase)` z-index: ${props => props.zIndex}; `; The DOM element needs "z-index", not "zIndex". Your code snippet also has a typo with your const Drawer where you've left out the "s": cont. Hopefully this helps. Let's skip all of the typos in here. I just wrote it by myself here to not copy the source code from the project, but the example is 1:1 reproduction. 
Also, typescript doesn't allow me to use it as you've presented here Gotcha. I figured the typos were just for the sample code. Sorry I couldn't be of more help.
common-pile/stackexchange_filtered
Trying to apply a design pattern to the methods below I have a class like the one below for a design project, which I am using to generate a Word document. The process: we are using a ReactJS UI and .NET Core, and when the user clicks a button in the UI we send a request to the API, from which the Word document can be downloaded. public class DesignProject : PatchEntityProperties { [Key, GraphQLNonNullType] public string ProjectNumber { get; set; } public string Name { get; set; } [Column(TypeName = "jsonb")] public ProjectSectionStatus SectionStatuses { get; set; } [ForeignKey("AshraeClimateZone"), GraphQLIgnore] public Guid? AshraeClimateZoneId { get; set; } public virtual AshraeClimateZone AshraeClimateZone { get; set; } public virtual ProjectPhase ProjectPhase { get; set; } [Column(TypeName = "jsonb")] public List<ProjectObject<LocalCode>> LocalCodes { get; set; } = new List<ProjectObject<LocalCode>>(); ......... ...... } and the project object looks like this: public class ProjectObject<T> { public Guid? Id { get; set; } public T OriginalObject { get; set; } public T ModifiedObject { get; set; } [GraphQLIgnore, JsonIgnore] public T TargetObject { get { return ModifiedObject ?? OriginalObject; } } } I then have individual classes, as mentioned in the DesignProject class, like the one below: public class LocalCode : AEIMaster { public string Edition { get; set; } public string City { get; set; } public State State { get; set; } public Country Country { get; set; } } I have a few more classes, all implementing the AEIMaster interface used in Word document generation. The code is as below; this is the controller method called from the UI: [Route("api/[controller]")] [ApiController] public class DesignProjectsController : ControllerBase { private readonly APIDbContext _context; public DesignProjectsController(APIDbContext context) { _context = context; } [Authorize, HttpGet("{id}")] public async Task<ActionResult<DesignProject>> GetDesignProject(string id) { var designProject = await _context.DesignProjects.FindAsync(id); MemoryStream document = new DocumentGeneration().GenerateBasisOfDesign(designProject); return new InlineFileContentResult(document.ToArray(), "application/docx") { FileDownloadName = fileName }; } } and below is the code for the DocumentGeneration class: public MemoryStream GenerateBasisOfDesign(DesignProject designProject) { DesignProj = designProject; MemoryStream mem = new MemoryStream(); using (WordprocessingDocument wordDoc = WordprocessingDocument.Create(mem, WordprocessingDocumentType.Document)) { var mainDocumentPart = wordDoc.AddMainDocumentPart(); Document doc = new Document(); mainDocumentPart.Document = doc; doc.Body = new Body(); Body body = wordDoc.MainDocumentPart.Document.Body; if (designProject.SectionStatuses.CodesAndGuidelinesSectionStatus != Design.Entities.Enums.ProjectSectionStage.NOT_APPLICABLE) { body.AppendChild(new Paragraph(new Run(new Text()))); if ((designProject.LocalCodes?.Count ?? 0) > 0) { body.AppendChild(new Paragraph(new Run(new Text()))); body.AppendChild(BuildSubHeaderPart("Applicable Local Codes", 2)); body.Append(RenderBulletedList(wordDoc, RenderRunList(designProject.LocalCodes.Select(s => s.TargetObject).Select(selector).ToList()))); } } if(designProject.SectionStatuses.SpaceTypesSectionStatus != Design.Entities.Enums.ProjectSectionStage.NOT_APPLICABLE && (designProject.SpaceTypes?.Count ?? 0) > 0) { // use other classes which are derived from AEIMaster to generate a table and append that table to body } if(Condition 3) { } mainDocumentPart.Document.Body = body; mainDocumentPart.Document.Save(); ApplyHeader(wordDoc); } return mem; } I am looking to apply the Builder design pattern to this method and these classes. I know it separates the construction of a complex object from its representation, but I am not sure where to start with the above classes and methods. Could anyone please suggest ideas or suggestions on this? Many thanks in advance. What and how are Condition1 and Condition2 defined? I don't think using the Builder pattern is useful here because your code shows you only have a single scenario where you're creating objects. The Builder pattern is most useful when you need a flexible way to create objects - such as if you're making a redistributable library with a variety of use-cases. BTW, you probably need to rewind your MemoryStream before you return it from GenerateBasisOfDesign. I have added condition 1 and condition 2 here; in each if condition I am generating a table and appending it to the document I'm voting to close this question because your question kinda sounds like code improvement and/or review and if so may be off-topic for SO. It may be better suited for another SE site but be sure to read the relevant FAQ; and/or re-wording your question as necessary before cross-posting. [ask]. Good luck!
common-pile/stackexchange_filtered
Why does SSgompertz not work on similar data sets? I have data that follows a sigmoidal shape, and the Gompertz function seems to make sense. I wanted to use SSgompertz in nls to find the parameters, but I sometimes get errors. So I wanted to learn more about it and went through the help documentation (?SSgompertz), but ran into a problem: I subset the data just as in the example DNase.1 <- subset(DNase, Run == 1) I plotted the data to verify it has a "sigmoidal" shape-ish: plot(log(DNase.1$conc),DNase.1$density) and ran nls with SSgompertz: nls(density ~ SSgompertz(log(conc),Asym ,b2,b3), data = DNase.1) This works, and returns the parameters of an equation of the form: y<-4.6033*exp(-2.2713*0.7165^x) Now, if I save the same data in another object: x<- log(DNase.1$conc) probando<-data.frame(x=x,y=y) And plot it to be sure it shows the same behavior as the original dataset (which it does): plot(probando$x,probando$y) But if I try to re-run the same model I get an error: nls(y ~ SSgompertz(x,Asym ,b2,b3), data = probando) Why is that so? You must have left something out, as in one model you predict density and in the other y. Hi, "y" and "density" are basically the same values. "y" was obtained by using the parameters (asym, b2, b3) after fitting density ~ log(conc). That's why I found it confusing that with an almost identical dataset it does not work in one case.
common-pile/stackexchange_filtered
Ubuntu 23.04 won't boot/black screen into GUI Dell PowerEdge 2900 So I Installed Ubuntu GUI on my PowerEdge 2900, installed it via USB media, it installed just fine. Then after rebooting the system, Ubuntu OS would go boot a load screen (during boot process) then cuts black, monitor displayed message that no VGA signal is found. I believe it may be a Display Driver issue as I booted into recovery mode and I was able to get the CLI prompt on my screen, Just that going into the actual desktop doesn't work. Tried a reinstall, Boot-Repair (The boot of your PC is in Legacy mode. Please change it to EFI mode. Please use Boot-Repair-Disk-64bit, Older system doesn't have UEFI..). I'm not sure what else to try..any help would be appreciated.
common-pile/stackexchange_filtered
Error "None of ... index are in the columns" I'm trying to use Stratified k-fold on my dataset but this is the error when I run the code: import pandas as pd import numpy as np from sklearn.model_selection import StratifiedKFold from imblearn.over_sampling import SMOTE # read the dataset df_train = pd.read_csv("train_numeric_shuffled_50000_cleaned_90.csv") # create the train set x_train=df_train.drop(['Id','Response'], axis=1) y_train = df_train['Response'] # apply Stratified K-fold with 4 splits skfold = StratifiedKFold(n_splits=4) for train_index, test_index in skfold.split(df_train, y_train): x_train_skf = df_train[train_index] x_test_skf = df_train[test_index] y_train_skf = y_train[train_index] y_test_skf = y_train[test_index] This is the error:
12454, 12508, 12509,...49990, 49991, 49992, 49993, 49994, 49995, 49996, 49997, 49998,49999],dtype='int64', length=37500)] are in the [columns]" Please don't put code in quote blocks, instead put code in code blocks. The code will look better, and if you're using a language like python, indentation matters. I don't fully understand what is in train_index and test_index, but could you try writing df_train.loc[train_index, :] and df_train.loc[test_index, :], etc... to see if that resolve your issue? @Yevhen Kuzmovych, why is it not appreciated to indicate the current research status of the OP? @SaaruLindestøkke because you don't know the current research status of the OP. Don't write your assumptions into the edit (or any additional sentences). If you have a question or a comment, use comments. Thank you for the formatting fix though. @MartinaPascucci, could you please let us know what you've looked into so far? It prevents that answerers suggest something that you've tried already and that didn't work. @YevhenKuzmovych Know I do not, strong suspicion I have...
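The commenter's suggestion can be sketched as follows. `split` returns positional row indices, so rows have to be selected with `.iloc` (or `.loc` over the index), not with `df[...]`, which looks up column labels - hence the KeyError. The tiny frame below is invented for illustration, standing in for df_train:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Invented stand-in for df_train: 20 rows, balanced binary Response
df_train = pd.DataFrame({
    "feature": np.arange(20.0),
    "Response": [0, 1] * 10,
})
y_train = df_train["Response"]

skfold = StratifiedKFold(n_splits=4)
for train_index, test_index in skfold.split(df_train, y_train):
    # positional row selection -- df_train[train_index] would try to
    # treat the integers as *column* labels and raise the KeyError
    x_train_skf = df_train.iloc[train_index]
    x_test_skf = df_train.iloc[test_index]
    y_train_skf = y_train.iloc[train_index]
    y_test_skf = y_train.iloc[test_index]

print(len(x_train_skf), len(x_test_skf))  # 15 5 on each fold
```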
common-pile/stackexchange_filtered
Running scripts in parallel in Chrome and Firefox Here is my test scenario: I have an issue while running parallel tests using Selenium WebDriver + NUnit. When I run my script using two browsers (each script with 21 test methods), i.e. Chrome and Firefox, at one point my test fails in only one browser. When running the same script again it passes, but I get an error in some other test method. Sometimes Chrome works perfectly fine while Firefox fails because of an "Element is not visible" error - but I can see the element on the screen - or vice versa. At one point both browsers work fine and my test passes. Moreover, the script runs perfectly fine when I execute it individually. I have no clue why this happens. Am I lacking something in my settings or my script? Possible duplicate of ElementNotVisibleException : Selenium Python What programming language are you using in conjunction with Selenium WebDriver? You need to call ignoring with the exception to ignore while the WebDriver waits for the element to populate @Reezo I am using a C# class library to write my code The philosophy behind NUnit's parallel execution feature is that it launches your tests in parallel and reports their success or failure but does nothing special to make it possible for them to run in parallel. That's up to you. From your description, it seems likely that your failing test is not written in a way that allows two instances to run in parallel. Without seeing some code it's not possible to give specific advice, but you should look for fixture members that are using common object state. If you add some sample code, then it might be possible to tell you more. Thank you for the reply. Here is a snippet of my code: [TestFixture] [Parallelizable] public class test{ [Test,Order(1)] public void startup() ..... [Test, Order(21)] public void cleanup() } Also, in every method I am using try{} catch{} to capture a snapshot in case my test method fails.
We would need the actual code to do any more, not just the method headers. If you are able to clean up proprietary info and post it, you should edit your question rather than putting it in a comment because it can be formatted better in that way. Alternatively, link to some code. If you want to continue on your own, then I suggest removing all tests and gradually adding them back until you find the problem.
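To make the advice above concrete, a parallel-safe fixture typically keeps all WebDriver state in instance fields so that each fixture runs against its own browser, rather than sharing a static driver. A rough sketch (NUnit 3 syntax assumed; the fixture name, page URL, and timeout are made up for illustration, and the `ImplicitWait` property form assumes a recent Selenium C# binding):

```csharp
using System;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
[Parallelizable(ParallelScope.Self)]
public class CheckoutTests
{
    // Instance field: each fixture instance gets its own driver,
    // so parallel fixtures never share browser state.
    private IWebDriver _driver;

    [OneTimeSetUp]
    public void StartBrowser()
    {
        _driver = new ChromeDriver();
        _driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(5);
    }

    [Test]
    public void CanOpenHomePage()
    {
        _driver.Navigate().GoToUrl("https://example.com"); // placeholder URL
        Assert.That(_driver.Title, Is.Not.Empty);
    }

    [OneTimeTearDown]
    public void StopBrowser()
    {
        _driver?.Quit();
    }
}
```

With `ParallelScope.Self`, fixtures run in parallel with each other while the ordered tests inside one fixture still run sequentially against that fixture's own driver, which matches how the question's `[Test, Order(n)]` methods are meant to behave.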
[net]how to inject debugging code to an assembly? Given an assembly with an entry point like: int FooClass::doFoo(int x, double y) { int ret; // Do some foo return ret; } Is it possible to use yet another assembly to simulate something like: int FooClass::doFoo(int x, double y) { int ret; TRACE_PARAM_INT(x) TRACE_PARAM_DOUBLE(y) // Do some foo TRACE_RETURN_INT(ret) return ret; } And only enable this code injection when DEBUG is present. If there is such way, how do you load the "debugging" assembly? EDIT 1: #ifdef is not an option. Say, I don't want to modify the code base. EDIT 2: My main question is "How to INJECT code to an already compiled assembly". I do have the base code but I'd rather not add the K of lines for tracing in that main code but have another assembly that do such. I do know how to use VS to debug, what I want to is add tracing mechanism of variables (among other things). Are you saying you cannot touch the original assembly for any reason? You could try an AOP post-compiler like PostSharp. It works with all .net languages, but I have not tried it with C++. agreed, i've used postsharp with the enterprise library logging block, to log parameters passed to a method, it works well (c#) For injecting code into an existing assembly, I would use the Cecil library, which lets you work with IL. This would let you rewrite the assembly if that's what you're after. I have to warn you: it's no small feat. Oh, there's also an add-in for Reflector, called Reflexil, which lets you edit assemblies. By the way, AOP-based tracing doesn't add code directly to your assembly. You can keep all the AOP stuff in a separate assembly (in fact, it's a very good idea), and then apply it with attributes. PostSharp will hard-wire code for you, but other AOP frameworks such as Spring or PIAB make things more flexible as they use dynamic proxies, so you can effectively 'turn off' your aspects when they are not needed. 
The Enterprise Library Policy Injection Application Block allows you to execute code between method calls. It wont do the complex things you've asked for in your question which involve injecting code inside methods, but it may be sufficient for your needs and it's freely availably. this might actually suffice my needs. I don't really need to inject code to the method's body, but keep track of method calls and return statements. If the doFoo function is not virtual, or if the class is not accessed through an interface, you cannot do it. The reason is that when you are compiling the class that uses the doFoo is compiled to call the exact function on that exact class. So the target class for receiving the call is evaluated at compile time. If the function is virtual however, or you access the class through an interface, the target class for receiving the call is evaluated at runtime. So in order to use any AOP (aspect oriented programming) or DI (direct injection) frameworks for accomplishing what you want, the target class needs to fulfill either of these conditions. Normally accessing the class through an interface would be the preferred way. There are many different AOP and DI frameworks out there, but the post was not about which one to use, so I'll keep myself from doing that, but if you don't need such one, you can use DynamicProxy to create a decorator to add logging functionality, both to an interface, and to a class with virtual functions. The latter creates a new class that is a subclass of the one that you already have Depends on whether you have the source code for the assembly and whether you're able to recompile it to allow debugging. Assuming that you have the source code, and you can compile it with debugging enabled, then you should be able to use your developer tool (Visual Studio, I'm guessing) to single step through the code and see the values of X, Y and ret, as you go. 
This would not require modification of the code - just the ability to compile a debug version.
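For the Cecil route mentioned earlier in this thread, the general shape of an IL-rewriting pass looks roughly like the sketch below. This is illustrative and untested against a specific Cecil release: the `TraceHelper` class and its `TraceInt` method are hypothetical stand-ins for your own tracing assembly, and depending on the Cecil version the import call may be `Import` rather than `ImportReference`.

```csharp
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

class Injector
{
    static void Main()
    {
        // Load the already-compiled assembly that contains FooClass.
        var assembly = AssemblyDefinition.ReadAssembly("Target.dll");
        var module = assembly.MainModule;

        var fooClass = module.GetType("FooClass");
        var doFoo = fooClass.Methods.First(m => m.Name == "doFoo");

        // Reference to the hypothetical tracing method that lives in a
        // separate "debugging" assembly.
        var traceInt = module.ImportReference(
            typeof(TraceHelper).GetMethod("TraceInt"));

        var il = doFoo.Body.GetILProcessor();
        var first = doFoo.Body.Instructions[0];

        // For an instance method, Ldarg_0 is `this`, so Ldarg_1 pushes
        // the first parameter (x). Call TraceHelper.TraceInt(x) before
        // the original body runs.
        il.InsertBefore(first, il.Create(OpCodes.Ldarg_1));
        il.InsertBefore(first, il.Create(OpCodes.Call, traceInt));

        assembly.Write("Target.patched.dll");
    }
}
```

Tracing the return value is the same idea in reverse: insert a `Dup` plus a call to a tracing method immediately before each `Ret` instruction.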
What is a good English translation of 相见亦无事,别后常忆君。 I saw this sentence in a question about friendship, and I like this answer the most; could someone give me a good English translation? Thank you. The question should not be merely contained in the headline. Growth in longing comes from a place of absence Literally: When we are together, there is nothing special, but once we part from each other, I miss you day after day. Is there any poem in English that has the same meaning? Um... I think this just happened to me two months ago... 相见亦无事,别后常忆君。 When we're together, it just seems so natural, when we're apart, I can't stop thinking of you.
DAX language- Microsoft Power BI - SUMMARIZE inside SELECTCOLUMNS - simple syntax I have a situation below (Power BI - DAX) in which I am trying to SUMMARIZE a table called Product with a SUM aggregation to get the total cost of all products in each Category; but then I have to change the column name of one or two columns in the summarized result set. I have written the below code to develop a calculated table: ProductCategoryCostCT = SELECTCOLUMNS ( SUMMARIZE( Product, Product[Category], "TotalCostOfAllProductsInThisCategory", SUM(Product[ProductCost]) ), "CategoryName", Product[Category], "Cost", Product[TotalCostOfAllProductsInThisCategory] ) The above code throws an error. Can someone help me correct this ? This may be pedestrian to many of you! (The source Product table has ProductCost column at the individual Product level, with Category as another column in the same table) ProductCategoryCostCT = SUMMARIZE( SELECTCOLUMNS( Product, "CategoryName", Product[Category], "ProductCost", Product[ProductCost] ), [CategoryName], "Cost", SUM(Product[ProductCost]) )
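For what it's worth, the original SELECTCOLUMNS attempt is very close to working: the usual cause of the error is referencing the extension column with a table prefix. Columns added by SUMMARIZE belong to the intermediate table, not to Product, so they are referenced without the Product[...] prefix. A sketch of that variant (untested, same result as the rewrite above):

```dax
ProductCategoryCostCT =
SELECTCOLUMNS (
    SUMMARIZE (
        Product,
        Product[Category],
        "TotalCostOfAllProductsInThisCategory", SUM ( Product[ProductCost] )
    ),
    "CategoryName", Product[Category],
    "Cost", [TotalCostOfAllProductsInThisCategory]
)
```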
Can I replace my halogen with led? So, long story short: An Halogen (MR16/GU5.3) light in my bathroom went out and I replaced it with an led light (from Wilko UK, 3 pack). I know nothing of electrics, but thought heck, I should be able to change a light bulb, right? A couple of weeks go by and the Led goes out, too. I still had 2 bulbs in the "3pack" I bought, maybe this one was just dodgy? Replaced with new Led. 2 weeks later, same thing happens. I'm wondering if, based on the supplied images, anyone could guide me in terms of what bulb to buy (Halogen or Led!), so I can stop wasting money buying the wrong bulbs. First picture shows the old bulb (top of my hand) and the new led I bought (bottom). Second picture shows the transformer. Did I buy the wrong bulbs or is my issue somewhere else? See https://diy.stackexchange.com/q/172177/39705 In this case, I am replacing halogen -> led, that link seems more related to led -> led. I understand Wilko might not be reliable, but I'm wondering if there is something wrong with my setup or more than likely bad bulbs? It would stand to reason the dimmable electronic transformer is not compatible with LED bulbs. Either 2 people are having the same problem the same day, or you asked the question twice, presumably because you couldn't find your original question. This is probably because you're using an informal "cookie based account" and you'll get a different account on every browser and device, or anytime you reset cookies. To fix that, tie an email address to your account, and use email/password, Google or Facebook login. Then you will have a stable ID to use. You can also change your display name at will. Be sure the LED module handles AC correctly (it needs a bridge rectifier). Reverse current can incrementally damage the diode's junction. Save that, look out for heat. Use lower wattages or increase airflow.
Meaning of "send as sms when imessage is unavailable"? I want the option to "send as sms when imessage is unavailable" to turn off. But if it's on, does that mean it sends as SMS when iMessage is unavailable on MY end (the sending end), the receiving end, or both? Example: while I had this option on, if I texted a friend whose phone was on airplane mode or off, it would go through as SMS and not iMessage. With this option off, if I iMessage someone and their phone is on airplane mode/off, would it send as SMS because iMessage is available on my end? Or keep it as an undelivered iMessage because iMessage is unavailable on their end?? The option to "send as sms when imessage is unavailable" refers to the sender side, meaning that if the sender is not connected to iMessage (i.e. offline), the message will be sent to the receiver as an SMS. Example: the sender is not connected to the internet or has a bad internet connection and cannot connect to the iMessage server; then their message to a registered iMessage receiver will be sent as an SMS instead. iMessage will not be used at all if the receiver is not an iMessage user. iMessage will be sent if the sender has a connection to the iMessage server but the receiver does not have a connection to iMessage. The receiver will receive the message the next time they connect to the iMessage server (when their connection is back). This option means when a user with iMessage doesn't have an Internet connection or iMessage is disabled. For example: One day while chatting with a friend via iMessage your friend goes into a tunnel and disconnects from the Internet, and your iPhone detects that your friend is offline. If the option "send as SMS when iMessage is unavailable" is enabled, the message will be sent as an SMS. If the option "send as SMS when iMessage is unavailable" is disabled, the message will only be sent as an iMessage when your friend comes back online.
Is there a way to run an electron app Build? I am a very beginner in Electron JS and was working with Fifo Browser Code. I have created the build for the Fifo browser and wanted to run that build through code but hasn't been successful. I tried running that build using this code: const { app, BrowserWindow } = require('electron'); const path = require('path'); const url = require('url'); let win; function createWindow() { const startUrl = url.format({ pathname: path.join(__dirname, 'build', 'app.html'), protocol: 'file:', slashes: true, }); win = new BrowserWindow({ width: 644, height: 470, webPreferences: { nodeIntegration: true, contextIsolation: false, enableRemoteModule: true, } }); win.loadURL(startUrl); win.webContents.openDevTools({ mode: 'detach' }); } app.whenReady().then(createWindow); app.on('window-all-closed', () => { if (process.platform !== 'darwin') { app.quit() } }); app.on('activate', () => { if (BrowserWindow.getAllWindows().length === 0) { createWindow() } }); When i run this code the window opens and I get the blank screen and in the devTools console i get this error: Console Errors Now How can I run this build? Try swapping out win.loadURL(startUrl); with win.loadFile(startFile); and update your startUrl constant to const startFile = path.join(__dirname, 'build', 'app.html');. See win.loadFile(filePath[, options]) for more information.
Eloquent / Laravel 8: Eloquent query doesn't store wanted values into $variable I have a problem. In my code below, I want to update my database with new entries. It worked fine for my other tables because I could use $data = Table::find($id); but here I can't, because my primary key is two foreign keys (customersid and bookid make the primary key of my table). I tried the code below but it doesn't work; the $data doesn't contain the values that it was supposed to get.

<table>
    <form action="/editloan" method="POST"><br><br>
        @csrf
        <input type="hidden" name="number" value="{{ $data['number'] }}">
        <tr>
            <!-- Here nothing shows up :( -->
            <td><input type="number" name="clientid" value="{{ $data['clientid'] }}"><br><br></td>
        </tr>
        <tr>
            <td><input type="number" name="livreid" value="{{ $data['livreid'] }}"><br><br></td>
        </tr>
        <tr>
            <td><button class="btn btn-primary btn-block" type="submit">Modifier</button></td>
        </tr>
    </form>
</table>

And when I want to pass the data to my final function to change the data in the database, it doesn't work either:

function updateLoan(Request $req) {
    // so this doesn't work either
    $data=DB::table('emprunts')->where('number',$req);
    $data->clientid=$req->clientid;
    $data->livreid=$req->livreid;
    $data->save();
    $listloans = Emprunt::all();
    return view('crud.showloans',['emprunts'=>$listloans]);
}

How could I do this? Sorry, I know it's probably a dumb question, but it's my first project with Laravel and I'm only a student :)

function showFormEdit($id) {
    $data = DB::table('emprunts')->where('number','=',$id)->get();
    return view('crud.formloanedit',['data'=>$data]);
}

First check here, I can see you forgot to ->first() the results.
// so this didn't work either
$data=DB::table('emprunts')->where('number',$req->number)->first();
// this should work better

Second part, to get the data:

function showFormEdit($id) {
    $data = DB::table('emprunts')->where('number','=',$id)->first();
    return view('crud.formloanedit',['data'=>$data]);
}

and you access it like this: $data->columnName; Just an edit: you need to pick the field after $req; I was checking the update method. To get your field, change get() to first(), and you always get an object, not an array. I updated my answer. No worries, if it's fine please accept the answer ;)
Google Cloud Functions, NodeJS monorepo and local dependencies So I have this NodeJS monorepo which is structured as follows: monorepo/ ├─ src/ │ ├─ libs/ │ │ ├─ types/ │ │ │ ├─ src/ │ │ │ ├─ package.json │ ├─ some_app/ │ ├─ gcp_function/ │ │ ├─ src/ │ │ ├─ package.json ├─ package.json Multiple projects use the same types library, so anytime a type changes, we can update all references at once. It has worked really great so far and I'm really happy with the structure. Except now I needed to create a function on the Google Cloud Platform. In this function I also reference the types project. The functions package.json is as follows: { "devDependencies": { "@project_name/types": "*", "npm-watch": "^0.11.0", "typescript": "^4.9.4" } } The @project_name/types refers to the src/libs/types project. This works with every project, because we have a centralised build tool. This means the function works locally on my machine and I have no problem developing it, but as soon as I push it to Google Cloud (using the command listed below) I get the following error: npm ERR! 404 '@project_name/types@*' is not in this registry. I use this command to deploy from ./monorepo: gcloud functions deploy gcp-function --gen2 --runtime nodejs16 --trigger-topic some_topic --source .\src\gcp_function I think it's pretty clear why this happens: Google Cloud only pushes the function, which then builds on Google Cloud Build. But because the rest of the monorepo doesn't get pushed, it doesn't work as it has no reference to @project_name/types. I've been struggling with this issue for quite a while now. Has anybody ever run into this issue and if so, how did you fix it? Is there any other way to use local dependecies for Google Cloud functions? Maybe there is some way to package the entire project and send it to Google Cloud? I have resolved this issue by using a smart trick in the CI pipeline. In the CI I take the following steps: Build the types and use npm pack to package it. 
Copy the compressed pack to the function folder. There install it using npm i @project_name/types@file:./types.tgz. That way it gets overridden in the package.json. Zip the entire GCP Function and push it to Google Cloud Storage. Tell Google Cloud Functions to build and run that zip file. In the end my CI looks a bit like this:

steps:
  - name: Build types library
    working-directory: ./src/libs/types
    run: npm i --ignore-scripts && npm run build && npm pack
  - name: Copy pack to gcp_function and install
    run: cp ../libs/types/*.tgz ./types.tgz && npm i @project_name/types@file:./types.tgz
  - name: Zip the current folder
    run: rm -rf node_modules && zip -r function.zip ./
  - name: Upload the function.zip to Google Cloud Storage
    run: gsutil cp function.zip gs://some-bucket/gcp_function/function.zip
  - name: Deploy to Google Cloud
    run: |
      gcloud functions deploy gcp_function \
        --source gs://some-bucket/gcp_function/function.zip

This has resolved the issue for me. Now I don't have to publish the package to NPM (I didn't want to do that, because it doesn't fit in the monorepo narrative). And I can still develop the types, which get updated live in every other project while developing.
java.lang.NullPointerException while saving bulk documents to couchdb using couchdb4j api in java I tried to save document consisting around 1,30,000 records, and I used the bulksavedocument method to save the document, but I am getting the following error java.lang.NullPointerException at com.fourspaces.couchdb.Database.bulkSaveDocuments(Database.java:280) Here is the code I used to save bulk documents. JSONArray json=new JSONArray(); Document[] newdoc = null; newdoc = new Document[json.size()]; for(int i=0;i<json.size();i++) { Document singleDoc = new Document(json.getJSONObject(i)); newdoc[i]=singleDoc; } Session s = new Session("localhost",5984); Database db = s.getDatabase("test"); db.bulkSaveDocuments(newdoc); when I tried to debug the program along with the source code getting the following error net.sf.json.JSONException: A JSONArray text must start with '[' at character 1 of {"db_name":"item_masters_test","doc_count":0,"doc_del_count":0,"update_seq":0,"purge_seq":0,"compact_running":false,"disk_size":79,"instance_start_time":"1337249297703950","disk_format_version":5,"committed_update_seq":0} at net.sf.json.util.JSONTokener.syntaxError(JSONTokener.java:499) at net.sf.json.JSONArray._fromJSONTokener(JSONArray.java:1116) at net.sf.json.JSONArray._fromString(JSONArray.java:1197) at net.sf.json.JSONArray.fromObject(JSONArray.java:127) at net.sf.json.JSONArray.fromObject(JSONArray.java:105) at com.fourspaces.couchdb.CouchResponse.getBodyAsJSONArray(CouchResponse.java:129) at com.fourspaces.couchdb.Database.bulkSaveDocuments(Database.java:282) at ItemMasterTest4.main(ItemMasterTest4.java:565) Please suggest the solution to get rid of this exception. @ckuetbach * slaps forehead * not line 280 of the resultset, of course line 280 of the sourcecode. I think it is Document singleDoc = new Document..., but I don't know it. I don't know really well this JSON lib, but this JSONArray json=new JSONArray(); is probably an array with size 0 (empty). 
So your loop enters with index 0, which doesn't exist. So json.getJSONObject(i) probably returns null. Where you write this for(int i=0;i<json.size();i++) You probably mean this for(int i=0;i<json.size()-1;i++) No, Database is NOT null. There would be a different StackTrace then. There is an access of Database in line 280. There is a null pointer. My bad; that went absolutely over my head. I need to sleep, haha. I changed the code to for(int i=0;i<json.size()-1;i++) but it didn't work. com.fourspaces.couchdb.Database.bulkSaveDocuments(Database.java:280) https://github.com/mbreese/couchdb4j/blob/master/src/java/com/fourspaces/couchdb/Database.java Without further info from the OP about the CouchDB4j version we cannot refer to this source line for sure. But it's a good direction ... the couchdb4j version is couchdb4j-0.3.0
Google Lens API / Web scraping / SerpApi I checked several posts about the Google Lens API. However, I was not able to come to a good conclusion. Google does not have a Lens API, but it is clearly the best for visual search. I tried to scrape it but have not been successful yet.

driver.get(url)
time.sleep(2)

# Locate the input field for file upload
input_field = driver.find_element(By.XPATH, "//input[@type='file']")

# Open the File Explorer to select the image using pyautogui
pyautogui.hotkey('ctrl', 'e')  # Open File Explorer (customize this shortcut based on your system)
time.sleep(1)

# Get the selected image path
image_path = get_image_path()

# Type the image path and press Enter using pyautogui
pyautogui.write(image_path)
pyautogui.press('enter')
time.sleep(1)

# Copy the image path to the clipboard using pyperclip
pyautogui.hotkey('ctrl', 'c')
time.sleep(1)

# Click on the input field to activate it
input_field.click()

# Paste the image path from the clipboard into the input field using Ctrl+V
pyautogui.hotkey('ctrl', 'v')

# Submit the form or perform any necessary actions
# Wait for the result page to load (adjust this time based on the page loading time)
time.sleep(10)

print("Image uploaded successfully.")

I tried this code since, if you are on the following URL https://lens.google.com/search?p=0, you can copy and paste a picture. However, I was not able to make it work and I am looking for a fast and good solution. Furthermore, I would like to know if it is legal to use Google Lens that way, and I would like to be able to scrape it in headless mode. I would like a clear answer regarding how to best scrape Google Lens and whether it is possible and legal.
Admob.java can't resolve ads I am new to Android Studio and I am having a problem with admob.java: it can't resolve ca-app-pub, and it says that the number of the ad unit is too long while it is just a test ad unit. Firstly, don't post errors as images. Secondly, it's a String, meaning you need double quotes around it. I voted to close this as off-topic because of no code (applies when images are involved), though it should be closed as a simple typographical error sorry about this! didn't mean to cause this trouble. I just needed help, and I am very thankful. put your id in " ", for example mInterstitialAd.setAdUnitId("your admob id"); Thank you so much for your help! @Grey Ish if the answer was helpful to you then you may mark it as the right answer and also upvote it! Please, I was banned from posting another question but I really need help. My interstitial ads show once every time the player loses or passes the level. I want them to show only after two levels and after 3 losses so that I can avoid a bad user experience. If you could help me with this I would be so thankful.
Pointers golang from string My application accepts a pointer from os.Args. For example:

pointer := os.Args[1] // "0x7ffc47e43200"

How can I use that pointer and get the value that is stored on that location? How can you possibly know if that address is even valid? What do you plan to do with the data? @zmb Another application calls this application (not go) with the memory address. But most operating systems use virtual memory these days, so that other application won't share a virtual address space with the Go app. You can try your luck with the unsafe package, converting from str to int and then casting to an unsafe pointer. "How can I use that pointer and get the value that is stored on that location?" By writing your own OS. Seriously: You can't and you shouldn't. Why don't you just pass the data over to the go app with STDIN/STDOUT pipes, rather than calling with the pointer and hoping that it works? Disclaimer: As you are probably aware, this is dangerous and if you're going to do this in a production application, you'd better have a really good reason. That being said... You need to do a few things. Here's the code, and then we'll walk through it.

package main

import (
    "fmt"
    "os"
    "strconv"
    "unsafe"
)

func main() {
    str := "7ffc47e43200" // strconv.ParseUint doesn't like a "0x" prefix
    u, err := strconv.ParseUint(str, 16, 64)
    if err != nil {
        fmt.Fprintln(os.Stderr, "could not parse pointer:", err)
        os.Exit(1)
    }
    ptr := unsafe.Pointer(uintptr(u)) // generic pointer (like void* in C)
    intptr := (*int)(ptr)             // typed pointer to int
    fmt.Println(*intptr)
}

You can run this on the Go Playground.
For this, we will use the unsafe.Pointer type, which is special to the Go compiler. Normally, the compiler won't let you convert between pointer types. The exception is that, according to the unsafe.Pointer documentation: A pointer value of any type can be converted to a Pointer. A Pointer can be converted to a pointer value of any type. A uintptr can be converted to a Pointer. A Pointer can be converted to a uintptr. Thus, in order to convert to a pointer, we'll need to first convert to a uintptr and then to an unsafe.Pointer. From here, we can convert to any pointer type we want. In this example, we will convert to an int pointer, but we could choose any other pointer type as well. We then dereference the pointer (which panics in this case).
Not able to update entity with multipart/formdata - NgRx Data I am trying to update one of my entity which has file upload functionality as well. I can send(add) FormData using add method, but can't update. NgRx gives below error: Error: Primary key may not be null/undefined. Is this even possible? Or am I doing something wrong. Please have a look at below code: const ad = { id: form.id, accountId: this.accountId } const data = new FormData(); data.append('ad', JSON.stringify(ad)); // photos is an array of uploaded files if(this.photos.length) { this.photos.forEach(photo => { data.append('offure_ad', photo, photo['name']); }); } // NgRx Data update mothod this.adService.update(data); Please direct me to right direction. Thank you are you sending primary key in the request? Try this below code const ad = { id: form.id, accountId: this.accountId } const data = new FormData(); data.append('ad', JSON.stringify(ad)); // photos is an array of uploaded files if(this.photos.length) { this.photos.forEach(photo => { data.append('offure_ad', photo, photo['name']); }); } // send ad like below this.adService.update(data,ad); Second parameter of update method accepts only EntityActionOptions type. We can not pass anything there. Well, It's NgRx Data's built in method. Do you have a good context over NgRx Data library? @Raj You have to create a DataService class and use a custom update function like this. 
@Injectable() export class PromotionDataService extends DefaultDataService<Promotion> { httpClient: HttpClient; constructor(httpClient: HttpClient, httpUrlGenerator: HttpUrlGenerator) { super('Promotion', httpClient, httpUrlGenerator); this.httpClient = httpClient; } updatePromotion(promotionId: string, formData: FormData): Observable<Promotion> { formData.append('_method', 'PUT'); return this.httpClient.post(`${environment.apiUrl}/api/v1/manager/promotion/` + promotionId, formData) .pipe(map((response: Promotion) => { return response; })); } } I am using post request, because put request is not working with multipart/form-data. Then register the data service as follow. @NgModule({ providers: [ PromotionDataService ] }) export class EntityStoreModule { constructor( entityDataService: EntityDataService, promotionDataService: PromotionDataService, ) { entityDataService.registerService('Promotion', promotionDataService); } } @NgModule({ declarations: [ PromotionComponent, ], exports: [ ], imports: [ EntityStoreModule ], }) export class PromotionModule {} in your component first inject the data service in the constructor, then you can use the custom update function like this onSubmit() { if (this.promotionForm.invalid) { return; } const newPromotion: Promotion = this.promotionForm.value; const fileControl = this.f.banner; let files: File[]; let file: File = null; if(fileControl.value){ files = fileControl.value._files } if(files && files.length > 0) { file = files[0]; } const formData = new FormData(); if(file != null){ formData.append('banner', file, file.name); } formData.append('data', JSON.stringify(newPromotion)); this.service.updatePromotion(promotion.id, formData) }
Simple XML to Java variables I am just wondering if there is a simple method to load Java variables (only String) from a XML file, that is like the following. Most of the libraries and methods i read seem to be quite confusing. <?xml version="1.0" encoding="UTF-8"?> <registerData> <firstname>Max</firstname> <lastname>Mustermann</lastname> <EMAIL_ADDRESS> <company>Max's Mustermänner</company> </registerData> Thank you for your help! Benjamin What libraries you tried? Mention which library you are using? Refer - HowToAsk. This is way too broad. Pick one of the "confusing" methods, and try to make it work. Chances are, it would work right away. Otherwise, come back and post a more specific question. Good luck! here's an example using xpath http://stackoverflow.com/a/21266808/217324 . this way doesn't take a lot of code, it's declarative, and has no configuration overhead. You can use JAXB with annotations. From memory: @XmlRootElement("registerData") public class RegisterData { @XmlAttribute public String firstName; ... } See http://www.vogella.com/tutorials/JAXB/article.html - marshal = save, unmarshall = load XML.
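If pulling in JAXB feels heavy, the XPath route mentioned in the comments needs only the JDK. A small sketch (the class name is made up, and the incomplete <EMAIL_ADDRESS> tag from the question is left out):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class RegisterDataReader {

    /** Returns {firstname, lastname, company} pulled out of the XML string. */
    public static String[] parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        XPath xp = XPathFactory.newInstance().newXPath();
        return new String[] {
            xp.evaluate("/registerData/firstname", doc),
            xp.evaluate("/registerData/lastname", doc),
            xp.evaluate("/registerData/company", doc),
        };
    }

    public static void main(String[] args) throws Exception {
        String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                + "<registerData>"
                + "<firstname>Max</firstname>"
                + "<lastname>Mustermann</lastname>"
                + "<company>Max's Mustermänner</company>"
                + "</registerData>";
        String[] fields = parse(xml);
        System.out.println(fields[0] + " " + fields[1] + ", " + fields[2]);
    }
}
```

For a handful of String fields this is about as simple as it gets; JAXB pays off once you want the values bound to typed objects automatically.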
How to declare more than one option for SCNPhysicsShape I need to declare both ShapeTypeKey as well as ShapeScaleKey in [SCNPhysicsShape shapeWithGeometry: options:] all options that come to mind come up short. For example my current code is similar to; NSValue *nodeScale = [NSValue valueWithSCNVector3:SCNVector3Make(200, 400, 150)]; SCNScene *stackScene = [SCNScene<EMAIL_ADDRESS>SCNNode *stackNode = [stackScene.rootNode childNodeWithName:@"Grid" recursively:NO]; SCNGeometry *nodeGeometry = stackNode.geometry; stackNode.physicsBody.physicsShape = [SCNPhysicsShape shapeWithGeometry:nodeGeometry options:@{SCNPhysicsShapeTypeKey:SCNPhysicsShapeTypeConcavePolyhedron}]; stackNode.physicsBody.physicsShape = [SCNPhysicsShape shapeWithGeometry:nodeGeometry options:@{SCNPhysicsShapeScaleKey:nodeScale}]; This obviously overwrites the former with the latter. Being it's a dictionary, you can do something like this: stackNode.physicsBody.physicsShape = [SCNPhysicsShape shapeWithGeometry:nodeGeometry options:@{SCNPhysicsShapeTypeKey:SCNPhysicsShapeTypeConcavePolyhedron, SCNPhysicsShapeScaleKey:nodeScale}]; Each element in a dictionary can be separated with a comma
How can a @Configuration class be loaded when @ComponentScan is not annotated on the same class? I came across this tutorial which shows how to use the H2 embedded database in a Spring application, and it is working fine without any issue. However, by checking the code I did not understand how the configuration class DBConfig is being discovered and treated as a configuration class for the ApplicationContext, so that the beans inside it can be created. Please note that it is not used as an argument to AnnotationConfigApplicationContext(), and @ComponentScan and @Configuration are not annotated on the same class. As you can see below for the class Application:

@ComponentScan(basePackages = "com.zetcode")
public class Application {

    private static final Logger logger = LoggerFactory.getLogger(Application.class);

    public static void main(String[] args) {
        // the application context is taking as argument the same class
        var ctx = new AnnotationConfigApplicationContext(Application.class);
        var app = ctx.getBean(Application.class);
        app.run();
        ctx.close();
    }

    @Autowired
    private JdbcTemplate jdbcTemplate;

    private void run() {
        var sql = "SELECT * FROM cars";
        var cars = jdbcTemplate.query(sql, new BeanPropertyRowMapper<>(Car.class));
        cars.forEach(car -> logger.info("{}", car));
    }
}

and class DBConfig:

@Configuration
public class DBConfig {

    @Bean
    public DataSource dataSource() {
        var builder = new EmbeddedDatabaseBuilder();
        var db = builder
                .setType(EmbeddedDatabaseType.H2) // HSQL or DERBY
                .addScript("db/schema.sql")
                .addScript("db/data.sql")
                .build();
        return db;
    }

    @Bean
    public JdbcTemplate createJdbcTeamplate() {
        var template = new JdbcTemplate();
        template.setDataSource(dataSource());
        return template;
    }
}

I also did some JUnit tests and found that DBConfig was created, and so were the beans defined inside it.
@RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(classes=Application.class) public class EmbeddedTest { @Autowired private DBConfig dbConfig; @Autowired private DataSource dataSource; @Test public void test() { assertNotNull(dbConfig); assertNotNull(dataSource); } } basePackages = "com.zetcode"? Kindly check the link in the Tutorial to see the structure of the project ! The entire question should be in the … question. Looked. See previous comment - your config is in a sub-package. I had to do some debugging in order to figure out how spring works for the above case. As we did not provide the Configured Class to our Application context, Spring will use the @ComponentScan to fetch it. How ? via the package com.zetcode provided in the annotation. It scans the package and all subpackages under com.zetcode searching for classes: src ├───main │ ├───java │ │ └───com │ │ └───zetcode │ │ │ Application.java │ │ ├───config │ │ │ DBConfig.java │ │ └───model │ │ Car.java │ └───resources │ │ logback.xml │ └───db │ create-db.sql │ insert-data.sql └───test └───java Thus, it will finds 3 : Application, DBConfig, Car and for each one it sees if it is a candidate components. In our Example, the generic Beans are Application DBConfig only. Once they are identified, the next step was to check the @Bean Method. And for each one identified it will create the dedicated Bean. Finally it will do the autowiring.
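The scan-then-register mechanism described above can be sketched outside Spring. The following is a toy Python illustration (not Spring itself; the decorator names merely mimic @Configuration and @Bean): a "scan" finds classes marked as configuration, instantiates them, and registers the return value of each bean-marked method.

```python
registry = {}

def configuration(cls):
    # mark the class, the way @Configuration marks DBConfig
    cls._is_configuration = True
    return cls

def bean(fn):
    # mark the factory method, the way @Bean marks dataSource()
    fn._is_bean = True
    return fn

@configuration
class DBConfig:
    @bean
    def data_source(self):
        return "h2-datasource"

def component_scan(classes):
    """Toy scan: instantiate configuration classes and call their bean methods."""
    for cls in classes:
        if getattr(cls, "_is_configuration", False):
            instance = cls()
            for name in dir(instance):
                attr = getattr(instance, name)
                if callable(attr) and getattr(attr, "_is_bean", False):
                    registry[name] = attr()

component_scan([DBConfig])
```

After the scan, registry holds the created bean under its method name, mirroring how Spring registers DBConfig's beans once component scanning has discovered the class.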
How to add view to frame

In my Xamarin.Forms app I want to create a Frame programmatically and add a view to it as a child:

var child = new Label();
var frame = new Frame();
// How do I add child to frame?

How do I do this?

Should be as simple as:

var child = new Label();
var frame = new Frame { Content = child };

Official documentation can be found here.
Fixing up BAC program

Looking for some help with fixing up this program that calculates BAC and says whether or not you're alright to drive. I'm having some trouble with the last 3 scanf statements not running, and with infinite prints of the statement saying whether you're alright to drive.

#include <stdio.h>
#include <math.h>

int main()
{
    float drinks;
    float shot = 1.5;
    int beer = 12;
    int wine = 5;
    float floz;
    float alcohol;
    float weight;
    float hours;
    float bac;

    printf("\nEnter beer if you drank beer, wine if you drank wine, and shot if you drank shots: ");
    scanf(" %f.", &drinks);
    printf("\nEnter how many fluid ounces of alcohol your drank: ");
    scanf(" %f.", &floz);
    printf("\nEnter your weight in pounds: ");
    scanf(" %f.", &weight);
    printf("\nEnter how long you've been drinking in hours: ");
    scanf(" %f.", &hours);

    do {
        alcohol = shot * floz * .4;
    } while (drinks == shot);

    do {
        alcohol = beer * floz * .05;
    } while (drinks == beer);

    do {
        alcohol = wine * floz * .12;
    } while (drinks == wine);

    bac = ((alcohol * 5.14) / (weight * .73)) - (.015 * hours);
    printf("Your BAC is %f.\n", bac);

    while (bac <= .03) {
        printf("You are ok to drive.\n");
    }
    while (.04 <= bac <= .08) {
        printf("You may drive but it would be unsafe.\n");
    }
    while (bac >= .09) {
        printf("You are guaranteed a DUI if pulled over.\n");
    }
    return 0;
}

What if drinks == shot is true? It will loop infinitely. Check the other do..while loops in the same manner.

The last 3 whiles should be changed into ifs.

Try an if condition instead of while.

Is there no way to use while like this? The assignment wants us to use while, for, and do-while loops.

If someone types beer in response to the first prompt, the scanf() for a float will fail, every subsequent input looking for a number will also fail, and you'll get gibberish out of your program: boring, repetitive gibberish. You should check each scanf() return value to ensure it successfully converted the value. You might consider a loop around each prompt and input operation so you go back (for a limited number of times) after a failed input. You might also have a loop within that loop to read to the end of the line before repeating the prompt/input/read cycle.

Your calculation loops run either once or an infinite number of times because you never change the tested variables within the body of the loop. Your output loops run either zero times or an infinite number of times for the same reason.

Fixed program:

#include <stdio.h>
#include <math.h>
#include <string.h> // for strcmp

int main()
{
    char drinks[5]; // you need a string, not a float
    float shot = 1.5;
    int beer = 12;
    int wine = 5;
    float floz;
    float alcohol = 0; // initialize, in case the input matches no drink name
    float weight;
    float hours;
    float bac;

    printf("\nEnter beer if you drank beer, wine if you drank wine, and shot if you drank shots: ");
    scanf("%4s", drinks); // scanning a string; %4s guards against overflowing the buffer
    printf("\nEnter how many fluid ounces of alcohol you drank: ");
    scanf(" %f", &floz);
    printf("\nEnter your weight in pounds: ");
    scanf(" %f", &weight);
    printf("\nEnter how long you've been drinking in hours: ");
    scanf(" %f", &hours);

    // compare the input and word using strcmp
    if (strcmp(drinks, "shot") == 0)
        alcohol = shot * floz * .4;
    else if (strcmp(drinks, "beer") == 0)
        alcohol = beer * floz * .05;
    else if (strcmp(drinks, "wine") == 0)
        alcohol = wine * floz * .12;

    bac = ((alcohol * 5.14) / (weight * .73)) - (.015 * hours);
    printf("Your BAC is %f.\n", bac);

    // no need to loop
    if (bac <= .03)
        printf("You are ok to drive.\n");
    // your original condition didn't do what you think, and left a gap between .03 and .04
    else if (bac <= .08)
        printf("You may drive but it would be unsafe.\n");
    // use else, as bac is certainly above .08 here
    else
        printf("You are guaranteed a DUI if pulled over.\n");

    return 0;
}
If a topology $\tau$ is finer than the usual topology $\tau_0$ on $\mathbb{R}$, then $(\mathbb{R},\tau)$ is Hausdorff

A topology $\tau$ is finer than $\tau_0$ if $\tau_0 \subset \tau$. I know that $(\mathbb{R},\tau_0)$ is Hausdorff, where $\tau_0$ is the usual topology on $\mathbb{R}$. Now, why is $(\mathbb{R},\tau)$ also Hausdorff?

Hint: Proof by definition.

Intuitively, a finer topology has more open sets. If $(\mathbb{R},\tau_0)$ is Hausdorff, then it is also Hausdorff with any finer topology, since we have more open sets to choose from.

To show that $(\mathbb{R},\tau)$ is Hausdorff, we need to prove that for any pair of distinct points $x\neq y$ in $\mathbb{R},$ there exist neighbourhoods $U\in\tau$ and $V\in\tau$ of $x$ and $y$ respectively such that $U\cap V=\emptyset.$ Fix $x\in\mathbb{R}$ and $y\in\mathbb{R}$ such that $x\neq y.$ Since $(\mathbb{R},\tau_0)$ is Hausdorff, there exist $U\in \tau_0$ and $V\in \tau_0$ such that $x\in U, y\in V$ and $$U\cap V=\emptyset. $$ As $\tau_0\subseteq \tau,$ we have $U\in\tau$ and $V\in\tau.$ Therefore, $(\mathbb{R},\tau)$ is Hausdorff.

Let $x,y\in(\mathbb{R},\tau)$ s.t. $x\neq y$. Since $(\mathbb{R},\tau_0)$ is Hausdorff and $\tau_0\subset \tau$, there exist open sets $U,V\in \tau_0\subset\tau$ s.t. $x\in U$, $y\in V$ with $U\cap V= \emptyset$. Hence $(\mathbb{R},\tau)$ is Hausdorff.
Resource nesting with tastypie

Trying to do something pretty simple with tastypie, but I cannot figure out how. Let's say I have a model Author and a model Book with a foreign key pointing to Author, and I have an Author resource. So /api/v1/author gets me a list of authors and /api/v1/author/1 gets me details on a particular author. I want /api/v1/author/1/books to get me a list of books for this particular author. How?

Example code:

from django.db import models
from tastypie.resources import ModelResource

class Author(models.Model):
    name = models.CharField(max_length=200)

class Book(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey('Author')

class AuthorResource(ModelResource):
    class Meta:
        queryset = Author.objects.all()

Looks like the recipe for this was actually in the cookbook, but for something so obvious, the implementation is rather awkward.

http://django-tastypie.readthedocs.org/en/latest/cookbook.html#nested-resources
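The cookbook recipe works by adding an extra URL pattern to the parent resource (via prepend_urls) that captures the author pk and dispatches to a filtered child listing. Stripped of the tastypie machinery, the routing idea is just this; a framework-free sketch where the pattern, handler, and data are hypothetical stand-ins for AuthorResource.prepend_urls and a filtered Book queryset:

```python
import re

# Hypothetical stand-in data; in tastypie this would be Book.objects.filter(author_id=pk)
books_by_author = {1: ["Book A", "Book B"]}

# The nested route: /api/v1/author/<pk>/books
pattern = re.compile(r"^/api/v1/author/(?P<pk>\d+)/books$")

def dispatch(path):
    """Return the author's book list if the nested URL matches, else None."""
    m = pattern.match(path)
    if m is None:
        return None  # not the nested route; normal resource dispatch continues
    return books_by_author.get(int(m.group("pk")), [])
```

In the real recipe, the matching handler builds a bundle for the captured pk, fetches the Author, and returns BookResource().get_list(request, ...) filtered to that author.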
Why are my mango leaves turning brown?

Can someone help me? I started leaving my mango plant outside so it can get used to it, but 2 days later the leaves started turning brown! It was fine the first day, but when I woke up in the morning on the second day the leaves were brown. I don't know what to do. Is it normal for the plant to turn brown? If anyone knows, please tell me what to do; I want the leaves to turn green again. By the way, I am a new gardener, which is why I am really worried.

It might be a reaction to the new light intensity (outside light is more intense). My advice is to let it get used to outside sunlight by giving it a little more light every day during morning or evening sun. Build it up slowly.

That is a soil issue. You need to remove it and make a soil that is a 50/50 mix of peat and perlite, and also add some sand to the mix. That soil looks very compact, and the roots can't breathe, which is slowly killing the plant. It is also too young to get full direct sun.

It definitely looks like it needs more aeration (perlite), but if the plant, with the same soil, never had an issue before, leaving it outside would only help (unless it rains or it is very humid).

A young mango plant often has green and brown leaves, so it is hard to say if there is something wrong with it. There are many photos, here protected, so I can't copy them here. This is my memory of the plants when I was in India. There would be reason to worry if they started to shrivel or curl.
How to stop Gradle from copying resources in build folder

I have a database (.db file) in my resources folder, which gets copied on every run of my Java code (I want to keep that file there because it is 'only' a university project and I need this for simplicity). That copying leads to the problem that all changes made at runtime via code get lost every time, because the changes are only made to the copy in the build folder and not to the original. How can I make Gradle stop copying this file and force it to use the original file in the resources folder? (Similar question, but without Micronaut: Is it possible prevent copying of resources to build directory in Gradle project and directly use them from original location?)

You can exclude certain files from the build process in the build.gradle file:

sourceSets {
    main {
        resources {
            exclude '*.db-File'
        }
    }
}

Gradle 1.2: Exclude directory under resources sourceSets

That works, thanks! But before, I could access that file via DriverManager.getConnection("jdbc:sqlite::resources:" + dbUrl); (dbUrl was only 'UserData.db', because it was straight in the resources folder without subdirectories). Now it only works with dbUrl = "src/main/resources/UserDb.db"; which I assume is kinda dirty. Any better way for that issue?

@fusion, ideally you should pass the connection URL of the running database in an environment variable. To make things a little simpler, you could put the connection URL in an "application.properties" file and read it from there.

I decided to set up some test data in the db inside the resource folder and let my program copy this db to the home directory (if it doesn't yet exist there) and access it from there. Probably a better approach overall. But thanks anyway, this did actually do what I wanted at first.
Problems with Access 2007 Project form filters

I am in the process of moving several Access databases to a SQL 2008 R2 server using Access 2007 Projects as the frontends, and we're running into problems when users try to filter data from the forms.

Example: I have one project file set up so that the users can search customer data, and I'm using a login to the server that only has "CONNECT" and "SELECT" rights so they can't change any of the data. The only form in this project has its record source set directly to the table; no views or queries. If a user selects the "Customer#" field, presses the "Filter" button, selects "Text Filter" and enters a customer number, they get an "Enter a valid value" error (the same thing happens if they select a field on the form, right-click, and try to set a filter). If the user uses "Advanced/Filter By Form", there are no problems. There are no other filters set on the form or in code, and no input validations; just a plain form. Anyone have any ideas where to start on debugging this? Thanks.

At first, you need to confirm that it is an MSSQL permission issue. To check this, try the same with an MSSQL user that doesn't have any permission restrictions. Then you can use MSSQL Profiler to look at what actual MSSQL statements are being sent by Access. I believe it is not a simple "SELECT"; it will be some system stored procedure calls (that's how Access works with MSSQL). Look at this trace and try to understand the permissions that should be added. If your Access application works at the table level, then maybe it would be easier to deny update/delete instead of granting select only. Not sure that it will help, but it's just an idea of what you can try.
Understanding Convolution Reverb

This question is more for me to understand the uses of convolution reverb, how to perform it, and to clear up any misconceptions I have. As I understand it, to create a convolution impulse you:

Set up a speaker and measurement mic in the space you wish to model.
Play an impulse from the speaker and record the result. I was told to use a sound file that is a click followed by a sine sweep, followed by another click.
Load the original file and the recorded file into a deconvolver and create an output file.
Use the output file with your convolution reverb plugin.

Convolution reverb should be the ideal way to process your ADR, but I've never been on a project that used it. I've only played with the built-in profiles that come with the plug-in. My questions are:

What kind of equipment do you use to record in the field?
What deconvolver program do you use?
What kinds of applications do you profile for?

Thanks!

You have a pretty thorough understanding of the process. To answer your questions:

• What kind of equipment do you use to record in the field?

There are no hard and fast rules, but you should try to use as professional a product as you can get your hands on. A powered speaker is a must, so be sure you have a source of power wherever you are planning on recording. Re. microphones, many folks use DPAs or Earthworks, but Schoeps, ATs and Neumanns are all great choices. If you don't have access to any of those, use what you do have! You also have a choice as to how many channels you are going to record, i.e. mono, stereo, quad or higher. What configuration you choose will depend on your recorder, how much time you allot for setup, and of course your application. If you are preparing for strictly ADR purposes then stereo would more than suffice.

• What deconvolver program do you use?

Altiverb is the most popular in film post-production (at least here in LA). Another choice is TL Space.
Still another is IR-L from Waves (although I've not used it). Here's an excellent article from Sound on Sound regarding the various apps and how they compare.

• What kinds of applications do you [use] the profile for?

Totally depends on what you want to use it for! You could stay strictly within the constraints of treating ADR to match dialog (or whatever space you are trying to match into), or use it on music or sound effects to simulate a desired real space. I often use Altiverb to change the original file altogether, to design it into something new. Also, remember that convolution reverbs do not have to use only impulse responses from real spaces; they can use any sound sample that you throw into them to convolve against something else, i.e. an anvil hit used as an impulse response will impart a metallic characteristic into whatever sound you process. Experiment and have fun!

PS. There are many 3rd-party impulse response libraries out there for users of convolution reverbs. Here's one that I've enjoyed using from time to time.

Is Altiverb a deconvolver or a convolver?

A convolver generates the reverberated audio from the impulse response. A deconvolver generates the impulse response from a recording of some other known signal, which can then be processed with the convolver.

Endolith - AudioEase is the company that makes Altiverb. Altiverb itself is a convolution reverb, meaning it uses IRs to convolve with other sounds to create reverbs, echoes, etc. AudioEase also bundles a program with Altiverb called the Altiverb IR Preprocessor, which is a deconvolver. See my comments above on your post for more information.

Or, you can use a convolution reverb for very odd effects, not just realistic sounds. I've made some very interesting tones using an audio sample of a bowed cymbal as an IR, and struck Slinkies, and... ;-)

Thanks for your (and everyone else's) detailed response, as well as the response library link.
I believe I have everything I need to get going and I'm excited to start gathering my own reverb library!

You're quite welcome. I'd love to hear some of your responses if you feel like sharing.

I also had the impression that one could use just a static click for the impulse sound. I was thinking of using a starter pistol for recording convolution impulses. Now that I think of the click-sweep-click theory, it could have the idea of gathering attack, pre-delay, the frequency response of the reflections, and lastly decay information. Would be nice to know more about this approach. Anyone?

Many folks use the starter pistol method; this, however, does not cover the full frequency spectrum. A sine wave sweep covers all frequencies (at least 20Hz - 20kHz) so you often get a truer picture of the space you are sampling. PS. Regarding your "click-sweep-click theory" comment: To the best of my knowledge, the click or pop does not play a part in the sampling process. It merely tells the pre-processor when to start and stop listening for the sine sweep. Corrections?

The "chirps" before and after the sweeps are specific to using GratisVolver, the deconvolver he recommended. It's easier to line up the response to the original file with them in place.

How does a starter pistol not cover all frequencies?

Yep, that would seem true concerning a starter pistol. Just guessing, a starter pistol is likely to have quite a narrow frequency response around 3-10kHz (comments or facts concerning this?). This is why an IR made from a starter pistol recording will have very little response information in the bass frequencies.

Due to digital recording devices having a finite frequency response (20Hz to 20kHz), any impulse signal (hand clap, gun shot, etc.) and sweeps (linear or log, 20Hz-20kHz) can be convolved and will cover the whole spectrum from 20Hz to 20kHz. At least that's what I've learned so far in my acoustics course. Some are better than others for dealing with signal-to-noise ratio; that's why log sine sweeps are good.

"I was told to use a sound file that is a click followed by a sine sweep, followed by another click."

Hmmm... Who told you that? If you record the response to a click, then that is the impulse response. It's literally the response of a room to an impulsive noise, like a click. No deconvolution or other processing necessary. You then convolve your music with the impulse response, and it will sound as if you had played the music in the space instead of the click.

You probably want to use a dedicated clicker, though, rather than a speaker. You want the sound as close to an ideal impulse as possible, not filtered through a speaker's frequency and phase response. You want it to sound as if your performer is actually in the cathedral, not like you're playing a boombox of their CD inside a cathedral. You also want it as loud as possible (without clipping), so that the signal-to-noise ratio of your recording is high.

You can also derive the impulse response from a sine sweep or maximum-length sequence or other signal by deconvolving first. This improves the signal-to-noise ratio, but (ideally) it's going to produce the exact same thing as the straight impulse response. Practically, one method might produce better results than the other. See Wikipedia. See here and here for the use of maximum-length sequences to measure impulse responses. These work like a chirp, but better. And remember you can record from two microphones at once to get a stereo image of the response.

To clarify - some apps use both a click (pop) and a sine sweep in the recording of an impulse response of a space, such as Altiverb. AudioEase provides a program that creates sweeps for you to use, called the Altiverb Sweep Generator. Preceding the sweep is a pop. That pop tells the IR Preprocessor that it's time to start listening for the sweep. After the sweep has finished, it is followed by an end pop which tells the IR Preprocessor that the sweep is complete. The pops ensure that no extreme high or low frequencies are missed in the deconvolution of the sweep. Corrections, anyone?

It probably helps in syncing the recording with the test signal.

"There are other broadband stimuli also characterized by flat spectra magnitudes, from which frequency response or impulse response data can be derived, but which possess friendlier crest factors than does the impulse. One is the frequency sweep, or its optimized version, the chirp. This stimulus is not as simply generated, and getting phase information can be difficult unless the system being measured is already definitely known to be minimum phase."

Be sure to use as flat a mic and speaker set as you can in terms of frequency response so that you capture as accurate an impulse response as possible.

@JustinMacleod I got some measurement mics, so I'm set there. I really need to tune the impulse to match my speakers, but they're fairly flat response.
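Under the hood, applying an impulse response to a signal is just discrete convolution. Here is a toy pure-Python sketch of that sum (real convolution reverbs use FFT-based fast convolution for speed): convolving with a unit impulse returns the dry signal unchanged, while a decaying impulse response smears each sample into a "tail".

```python
def convolve(signal, ir):
    """Direct-form discrete convolution: out[n] = sum over k of signal[k] * ir[n - k]."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

dry = [1.0, 0.0, 0.0, 0.0]     # a single "click"
unit_impulse = [1.0]           # identity: output equals input
room_ir = [1.0, 0.5, 0.25]     # toy exponentially decaying "room"

wet = convolve(dry, room_ir)   # the dry click picks up the IR's decay tail
```

This is also why the impulse response alone fully characterizes a (linear, time-invariant) room: once you have it, any dry signal can be placed "in" that room by this operation.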
Optimise my query

select REMITTORACCOUNTNUMBER, TRANTYPE, RRN, AMOUNT, responsecode,
       (CASE when auxrc is NULL then 'N' else auxrc END) as cbsstatus,
       responsecode as switchstatus
from tableA t
left join tableB i on t.rrn = i.TransactionSerialNumber
where traninfo like '%O'
  and txndate > '2018-10-24'
  and txndate <= '2018-10-25';

It is taking a long time to get a response even after creating indexes. Can you help me optimize my query for best performance?

tableA columns: REMITTORACCOUNTNUMBER, TRANTYPE, RRN, AMOUNT, responsecode, auxrc, traninfo and txndate.
tableA indexed columns: rrn, txndate
tableB columns: TransactionSerialNumber
tableB indexes: TransactionSerialNumber

Based on tableA, I need to know which records exist in tableB and which do not. The comparison column in tableA is rrn and in tableB is TransactionSerialNumber.

So what are you using? MySQL or Oracle?

We are using Oracle.

Specify a table alias for EACH field in your query.

If you think it's slow, use EXPLAIN to see the execution plan.

Unfortunately traninfo like '%O' can't be optimized. Only like 'O%' can be speeded up by an index; like '%O%' and like '%O' cannot.

All we can offer with this little to go on is guesswork (I have an idea, but prefer not to just "guess"). You should provide basic table definitions including what indexes and keys exist on "tableA" and "tableB". It would also be helpful (as suggested already by Akina) to include the query plan the engine is currently using. The more detail you provide, the more you help us help you. Please add the extra detail by editing the question, rather than by replying to comments directly, as details in comments can easily get buried.

Please provide full table and index definitions, and the execution plan of your query.

INDEX(txndate) will help some, but that is all. Your inequalities seem strange: you are leaving out midnight of the first day, then spilling into the second day. Normally one uses >= and <.
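The original system is Oracle, but the two key points above (a range predicate on txndate can use an index, while a leading-wildcard LIKE cannot) can be illustrated with SQLite from the Python standard library. The table and index names here are illustrative, not the poster's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tranlog (rrn TEXT, txndate TEXT, traninfo TEXT)")
conn.execute("CREATE INDEX idx_txndate ON tranlog (txndate)")

# Range predicate on txndate: the planner can SEARCH via the index.
range_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tranlog "
    "WHERE txndate > '2018-10-24' AND txndate <= '2018-10-25'"
).fetchall()

# Leading-wildcard LIKE: no index can help, so it is a full SCAN.
like_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tranlog WHERE traninfo LIKE '%O'"
).fetchall()
```

On Oracle the equivalent check is EXPLAIN PLAN plus DBMS_XPLAN.DISPLAY; the underlying principle about leading wildcards defeating ordinary B-tree indexes is the same.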
How to delete old documents from a Lucene index (filter by Date parameter)

I am new to Lucene (I'm using Java) and I've tried to read all the existing answers in order to find a solution to my problem. Unfortunately I didn't fix my problem, so I'll ask this question again: I have a Date parameter. I need to delete all the documents inside a Lucene index that are older than that Date parameter. How can I do it? I've found one solution with the method isDeleted but, unfortunately, it is currently deprecated. Thanks in advance!

You can delete all the documents matching any query with the IndexWriter.deleteDocuments method, such as:

String dateString = DateTools.dateToString(myCutoffDate, DateTools.Resolution.DAY);
Query dateQuery = TermRangeQuery.newStringRange("dateField", null, dateString, false, false);
myIndexWriter.deleteDocuments(dateQuery);
See if you can figure this out: There is a story that a man and not a man, did see and not see a bird and not a bird, Perched on a branch and not a branch, And hit him and did not hit him with a rock and not a rock. How is this possible? Were Schroedinger and Heisenberg involved in this somehow? No, Heisenberg is not involved in this riddle, good try though. Is it a dream? You dreamed about a man throwing a rock but in reality there were no actual physical objects. Just figments of your imagination. Nope, incorrect :), the answer is actually really quite simple, a story in itself. @natural ... Plato? Is that you? @Rubio uhhhhhhh, noooooooo. :P A eunuch who did not see well saw a bat perched on a reed and threw a pumice stone at him which missed. @natural Was this an unbelievably lucky guess? Or a very well known answer to a well known riddle? Or just that the question was too vague and pretty much anything would have been acceptable? This is an old riddle, apparently attributed to Plato, whose answer can be found online elsewhere pretty readily. I’ll concede/disclose that I have a dog in this fight, a horse in this race, a bird on this branch, a squirrel in this yard, a snake in the grass, etc., but … … … … … … … this whole answer seems a bit thin to me, and I especially don’t see how it even marginally satisfies the “And hit him” clause. @stackreader Have you never heard this riddle before? It's well known, yes, but this also seems to be the only answer that really fits properly. A man and a woman were somewhere near a tree.  The man saw a bird on a branch; he threw a rock at it and hit it.  The woman saw a squirrel on the ground and said, "Aww, isn't that cute?" Building on the accepted answer, which “can be found online elsewhere pretty readily” (for example, here), A eunuch who did not see well saw a bat perched on a reed and threw a pumice stone which hit the bat.  However, the bat was transsexual, so the “And hit him” clause is only half true. 
[Not that there’s anything wrong with that.  $:)$ ] Meanwhile, in a nearby sandlot, kids with good vision were hitting a pumice stone with a bat.  $\large :)$
Failed to compile: Module not found: Can't resolve '../../common/form' in 'src/components/time'

I attempt to import the file form.jsx in the file time.jsx. I get this error: Module not found: Can't resolve '../../common/form' in 'src/components/time'

//src
    //common
        //form.jsx
    //components
        //calendar
            //time.jsx

time.jsx

import form from '../../common/form'

Updated form.jsx

const form = () => {
    return (
        <div >
        </div>
    );
};

export default form;

@JosephD. export default form;

Can you update your question with the code you use for the export?

Did you export default form in the file form.jsx?

Are you sure time.jsx is in a folder named calendar? I think the error says it's directly in the components folder.

How about '../../../form'?

How about import form from '../../common/form.jsx';?

@DragonWhite you shouldn't have to specify .jsx; it should work without explicitly specifying the file extension.

Try this: import form from '../../common/form'

It doesn't work.

import FormComponent from '../../common/form'; // access it as this

This error happens because form is a built-in tag name, one of the reserved lowercase element names in JSX, so don't name your component with a lowercase form.

// form.jsx
const FormComponent = () => {
    return (
        <div >
        </div>
    );
};

export default FormComponent;

Yes, it's a JSX keyword, but not a JS keyword. And the import happens in JS.

Please mark the answer as accepted as well if it helped you.
C programming problem - maximum partitions

I'm struggling to solve and implement in C a function that gets as input a string containing the chars 'a' and/or 'b'. The function divides the string into three non-empty parts. The lengths of the parts aren't necessarily equal, but all three parts of each partition must contain the same number of occurrences of the char 'a'. The function must return the maximum number of ways the string can be divided into 3 such parts.

Examples:

#1 input string "ababa" has 4 possible partitions (each partition consists of 3 non-empty parts; if a split doesn't give 3 valid parts, that possibility is discarded): (ab,a,ba), (a,bab,a), (a,ba,ba), (ab,ab,a).

#2 input string "bbbbb" has 6 possible partitions (be careful: here 'a' does not occur at all, so each part of each partition contains no 'a', which is equal across all parts): (bb,b,bb), (bbb,b,b), (bb,bb,b), (b,bbb,b), (b,bb,bb), (b,b,bbb).

#3 input string "ababb" has 0 possibilities, so the function returns an empty string or NULL or whatever else we choose to indicate that there are no possibilities for this string.

I've been stuck on this for about three days. I started to think about a recursive approach because we are talking about partitions and maximum possibilities, and I began with this algorithm: the first condition to check is that the count of 'a' satisfies count % 3 == 0, so the three parts of one partition can have equal counts; then I need to find the recursive rule for the other condition (the length of the string > 3). But I'm stuck and I don't know how to continue. The hardest thing is to discover the recursive rule for my problem, with its edge conditions. Any help? Thanks a lot.

These things are not called permutations but partitions. The word "permutation" is reserved for something else.

Do you want to output the number of possibilities only, or to print the different partitions?

@Damien to print all the different possible partitions (every possibility is a set of 3 parts, as far as it is possible to be partitioned).

@n.'pronouns'm. sorry for that, mixed it up... so it's partitions.

With a fixed number of parts and a known number K of 'a's in every part, you just need to:

find the position p1 of the K-th 'a'
find the position p2 of the (K+1)-st 'a'
find the position p3 of the 2K-th 'a'
find the position p4 of the (2K+1)-st 'a'

Then make two for-loops - one (index i) goes from position p1 to p2-1, the second (index j) goes from position p3 to p4-1 - and output the string parts bounded by the indices [0..i], [i+1..j], [j+1..strlen-1].

Example in Python:

def makeparts(s):
    acnt = s.count('a')
    if acnt % 3:
        return  # no solutions
    ca = 0
    ka = acnt // 3
    for i in range(len(s)):
        if s[i] == 'a':
            ca += 1
            if ca == ka:
                ip1 = i
            elif ca == ka + 1:
                ip2 = i
            if ca == 2 * ka:  # note separate "if"
                ip3 = i
            elif ca == 2 * ka + 1:
                ip4 = i
    for left in range(ip1, ip2):
        for right in range(ip3, ip4):
            print(s[:left + 1], s[left+1:right+1], s[right+1:])

makeparts('abbabbba')
makeparts('abbababbbababa')

a bba bbba
a bbab bba
a bbabb ba
a bbabbb a
ab ba bbba
ab bab bba
ab babb ba
ab babbb a
abb a bbba
abb ab bba
abb abb ba
abb abbb a
abba babbba baba
abba babbbab aba
abbab abbba baba
abbab abbbab aba

(Note: as written, this assumes the string contains at least three 'a's; the all-'b' case needs separate handling.)

Appreciate your reply, but I didn't understand you well. Could you explain it with an example, or even code in any language you want (Python / MATLAB / Java, whatever)? It doesn't matter to me which language, because I want to understand the concept. Pseudocode demonstrating the concept/idea would also be good for me. Thanks in advance.

@Ryan OK, look at the Python implementation.

Thanks a lot, but I'm still confused by the concept of your algorithm. Why do I need the K+1, 2K and 2K+1 positions and not, for instance, K, K+1, 3K, 3K+1? Could you please explain the algorithm in words (exactly the part about finding the K-th and 2K-th positions)?

The whole string contains 3K 'a's, so the first part should contain K 'a's, the second part should contain the K 'a's numbered from K+1 to 2K, and the third part should contain the K 'a's numbered from 2K+1 to 3K. Is it clear?

Yes, but what's K? Is it the number of 'a's per part? In other words, equal to 3? If so, then the indexes K, 2K+1, 3K don't seem compatible.

Yes, K is the number of 'a's per part - this is written in the first sentence of the answer. So the overall number of 'a's is 3K. K and 2K+1 aren't indexes into the string; they are counts of 'a's up to the current (i-th) position.

I'm still confused. Let's assume I have the input "abbabba"; we need K to be equal for every part. How does K apply to my example? Is it equal to 1?

Yes, K is 1 here.

To get three parts, you have to do 2 splits. One can determine in advance between which occurrences of 'a' these splits have to be. Each split has to be done in a (possibly empty) sequence of 'b's. If there are n 'b's at the split position, there are n+1 possible cuts. Just combine all possibilities for the first split with all possibilities for the second split.

Yes, I need to split it twice, but I'm stuck on the recursive rule and it's the hardest part for me; three days just stuck on this!
, may you explain more by any pseudo code or whatever? would be more appreciated . There is no recursion necessary. Just use two nested for loops. One for the first split position, the other for the second split position. Yes , apparently my recursive approach makes problem more complex so wouldn't go to this approach. according to what you claimed, so lets assume I have string with length 5 so two for loops on it and they iterate till length 5? I didn't honestly understand how two loops are solving the problem. could you elaborate that by pseudo code? just for understand what you mean.
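The two-nested-loops idea can be sketched directly, with no recursion. This is only an illustration of the approach (in Python rather than the asker's required C, and with a function name of my own choosing); it brute-forces the two cut positions instead of precomputing the p1..p4 bounds, which is simpler to follow though less efficient:

```python
def three_partitions(s):
    """Enumerate every way to cut s into three non-empty parts
    that all contain the same number of 'a' characters."""
    splits = []
    for i in range(1, len(s) - 1):        # first cut: end of part one
        for j in range(i + 1, len(s)):    # second cut: end of part two
            parts = (s[:i], s[i:j], s[j:])
            # all three 'a'-counts equal <=> the set of counts has size 1
            if len({p.count('a') for p in parts}) == 1:
                splits.append(parts)
    return splits

print(len(three_partitions('ababa')))  # 4, matching example #1
print(len(three_partitions('bbbbb')))  # 6, matching example #2
print(len(three_partitions('ababb')))  # 0, matching example #3
```

Translating this to C is mechanical: the same two for loops over cut positions i and j, with a helper that counts 'a' characters in a substring.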
Download all files with some strings in their names

On http://www.inf.usi.ch/carzaniga/edu/algo08f/schedule.html, I would like to download all PDF files whose names match *-2up.pdf, for example http://www.inf.usi.ch/carzaniga/edu/algo08f/intro-2up.pdf. Can that be done using wget in bash? Thanks.

To retrieve files recursively but only keep those that end in 2up.pdf, try:

wget -r -nd -A 2up.pdf 'http://www.inf.usi.ch/carzaniga/edu/algo08f/schedule.html'

Explanation:

-r tells wget to get files recursively
-nd tells wget to keep all downloaded files in the current directory. Otherwise, it would try to recreate the directory structure of www.inf.usi.ch.
-A 2up.pdf restricts downloads to filenames ending with 2up.pdf.

Refinement

When told to be recursive, wget will search through all HTML links looking for files it can accept. If we know that all the files we want are linked directly from the source URL, we don't want this behavior. To restrict the depth to which wget will search, use the --level option:

wget -r -nd -A 2up.pdf -A schedule.html --level=1 'http://www.inf.usi.ch/carzaniga/edu/algo08f/schedule.html'

This also demonstrates that multiple -A options can be used together.

Thanks. (1) What if I would also like to download the web page itself besides those PDF files? (2) It also seems that -A still downloads files not ending in 2up.pdf and then removes them, so the run takes very long to finish. Is it possible not to spend time on files that aren't needed?

@Tim The page can be added to the accept list: wget -r -nd -A 2up.pdf -A schedule.html 'http://www.inf.usi.ch/carzaniga/edu/algo08f/schedule.html'

Thanks, I see. But with -A it will download everything and then remove the files that don't satisfy -A. That seems to take forever to finish running. Is there any way not to spend time on files that aren't needed?

@Tim It doesn't download "everything." For example, it never downloads any of the non-2up PDFs. But it does download every HTML link in the hope of finding a link to a 2up.pdf file. If you know, however, that everything you want is in the schedule.html file, use --level=1 as shown in the updated answer.
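For the follow-up concern - not spending time on unwanted files - an alternative to wget's download-then-delete behavior is to parse the page yourself and fetch only the matching links. A rough stdlib-Python sketch (the class and function names are my own, and the actual download step is only indicated in a comment, not performed):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def matching_links(html, suffix="2up.pdf"):
    """Return only the links whose file name ends with the given suffix."""
    parser = LinkCollector()
    parser.feed(html)
    return [h for h in parser.links if h.endswith(suffix)]

# To actually download, one would fetch the schedule page with
# urllib.request.urlopen(...) and pass each matching link to
# urllib.request.urlretrieve(...) - only matching files are ever fetched.
page = '<a href="intro-2up.pdf">intro</a> <a href="notes.pdf">notes</a>'
print(matching_links(page))  # ['intro-2up.pdf']
```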
Why is the UILongPressGestureRecognizer on a custom UIWindow not working in the UIWebView of the application?

I have implemented a custom UIWindow, which is my main window, and added a UILongPressGestureRecognizer to it. All the views in the application respond to it except the UIWebView. I am a bit confused by this peculiar problem. Below is how the UILongPressGestureRecognizer is set up in the UIWindow:

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        UILongPressGestureRecognizer *longPress = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(longPressAction:)];
        [longPress setMinimumPressDuration:3];
        [self addGestureRecognizer:longPress];
    }
    return self;
}

Could anyone explain this problem and how to resolve it? Thanks!

UIWebView has its own private views, and they handle gesture recognizers by themselves. This may be preventing your window from responding to the long press.

@Puneet How can I suppress those gesture recognizers so that my UIWindow responds as well?

I tried the way mentioned by Avi Tsadok below and it worked. Also, if you do not want the UIWebView to handle any gestures at all, just set its userInteractionEnabled property to NO.

@Puneet _longGesture - is this the reference to the UILongPressGestureRecognizer that I set up in the custom UIWindow?

Yes, it is. In your case the UILongPressGestureRecognizer reference you created is named longPress.

@Puneet Please see my comment on Avi Tsadok's answer. Please advise!

It should work. OK, try one more thing: set the userInteractionEnabled property of the web view to NO and check whether it works then. If yes, the issue will be solved by the code mentioned below; otherwise we need to keep looking.

No, it's not working.

@Puneet

UIWebView has a scroll view inside it, with its own gestures. Basically, you have a reference to those gestures:

_webView.scrollView.panGestureRecognizer;

So you can create a dependency between those two gestures:

[_webView.scrollView.panGestureRecognizer requireGestureRecognizerToFail:_longGesture];

I haven't tried it myself, but maybe it will work.

_longGesture - is this the reference to the UILongPressGestureRecognizer that I mentioned in the UIWindow?

_longGesture is the reference to the long-press gesture. Tell me if it worked for you.

Tried it as below, but it's not working. Please advise!

EBaseUIWindow *baseWindow = (EBaseUIWindow *)[[UIApplication sharedApplication] keyWindow];
[webview loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://www.google.com"]]];
[webview.scrollView.panGestureRecognizer requireGestureRecognizerToFail:[baseWindow longPress]];
"Argument data type ntext is invalid for argument 1 of len function" error

I'm using Entity Framework and wrote this code to get some results from the DB:

ReviewsDBEntities DB = new ReviewsDBEntities();
var result = DB.Review.Where(r => r.ReviewText.Length > 200);

But I get this as an inner error: "Argument data type ntext is invalid for argument 1 of len function". I looked it up and found out that because the type of ReviewText is defined as ntext, the LEN function won't run on it on the database side. Now I don't know how to change the code to get ReviewTexts with a length of more than 200.

Your error is pointing in the right direction. Your underlying column type, I am guessing, is one of the old text types and should no longer be used. If you can, change the type to varchar(max) or nvarchar(max); Entity Framework has a problem determining the length of this old type. As a hack you could try '.ToString().Length', but that still does not resolve the underlying problem, which is the type.

I just tried '.ToString().Length' and it still didn't work.

That is too bad. I would still suggest changing the data type if possible, even if you do get a proper answer from Jeroen below or someone else. It is not best practice to use this type, and it will limit you in other ways down the road too.

@djangojazz I remember the day I changed the type to ntext because I couldn't find any other type to store a large text. Do you recommend one?

varchar(max) or nvarchar(max), for the same reasons you are now having problems in Entity Framework. By all accounts Microsoft is going to deprecate the ntext type in the future too, so it is a good idea to get rid of it: https://msdn.microsoft.com/en-us/library/ms187993.aspx

var result = DB.Review.Where(r => SqlFunctions.DataLength(r.ReviewText) / 2 > 200);

Why / 2? Because DATALENGTH returns the length in bytes, and NTEXT contains Unicode characters, each of which consumes 2 bytes.
Random background change in Xojo

I've created a button in Xojo, and every time I press it the background color should change randomly. The name of my window is just Window1. I know how to generate random colors, but I can't figure out how to apply one to the background. I would appreciate an example, please.

Please provide enough code so others can better understand or reproduce the problem. If you've already tried to set the background color, please also ensure the HasBackgroundColor property of your window is set to True:

Self.HasBackgroundColor = True
Self.BackgroundColor = Color.Red // Use your random color variable instead
Issue while trying to run Selenium on Jenkins

1. Set up Jenkins on Ubuntu.
2. Set up Selenium on Ubuntu.
3. Create the Selenium test script below:

from selenium import webdriver
import time

driver = webdriver.Chrome(executable_path='/home/test/chromedriver')
driver.get('https://www.yahoo.com')
time.sleep(2)
screenshot = driver.save_screenshot("yahoo.png")
driver.quit()

4. The script executes successfully when run locally.
5. Try to run the script from Jenkins, under the Jenkins build "Execute shell" step:

cd /home/test
python test.py

6. The Jenkins build fails with this console output:

Running as SYSTEM
Building in workspace /var/lib/jenkins/workspace/test
[test] $ /bin/sh -xe /tmp/jenkins127263638261616263.sh
+ cd /home/test
+ python test.py
Traceback (most recent call last):
  File "test.py", line 5, in <module>
    driver=webdriver.Chrome(executable_path='/home/test/chromedriver')
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
    desired_capabilities=desired_capabilities)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
    self.start_session(capabilities, browser_profile)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
    response = self.execute(Command.NEW_SESSION, parameters)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally.
  (unknown error: DevToolsActivePort file doesn't exist)
  (The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Build step 'Execute shell' marked build as failure
Finished: FAILURE

It seems the webdriver couldn't start. I'm not sure how I can fix this issue and run Selenium on Jenkins. I'm new to automation - please advise. Thanks!

Try adding the below to your Selenium test script (and pass the options when constructing the driver):

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument('--headless')

@yandm Are you using headless Chrome? If so, also add this line below the ones above:

chrome_options.add_argument('--headless')

Yes, you are awesome!! I added these two lines and it works:

from selenium.webdriver.chrome.options import Options
chrome_options.add_argument('--headless')

I think the issue is with the chromedriver path. Does the path "executable_path='/home/test/chromedriver'" exist on the Jenkins node? Provide the correct path of a chromedriver that is accessible on the Jenkins node.

Yes, the path "/home/test/chromedriver" does exist on the Jenkins node...
grep - when do I need to add a backslash before the dot character?

Under the folder /var/my_private_pkgs we transferred packages that are split from an RHEL 8.0 ISO:

rhel8.0aa
rhel8.0ab
rhel8.0ac
.
.

We are doing the following in order to verify the size of the packages in bytes:

du -sb /var/my_private_pkgs/* | grep rhel8.0[a-z]
419430400 rhel8.0aa
419430400 rhel8.0ab
419430400 rhel8.0ac
.
.

But I am not sure whether, in the case above, we need to add a backslash before the dot in the grep pattern, as in:

du -sb /var/my_private_pkgs/* | grep rhel8\.0[a-z]

Are both forms OK, or is it better to add the backslash before the dot?

Here you could probably also use du -sb /var/my_private_pkgs/rhel8.0[a-z]*, leaving it to the shell to filter the file names. In a shell glob the . is not special, while the [a-z] works similarly.

You need to add the backslash when you want to match a literal .! In a regex, . is a special character meaning "any character". It would therefore e.g. also match a name like rhel8x0aa, which you might not want. By the way, you should also quote your pattern, as the shell will otherwise interpret your backslash and effectively remove it before sending the pattern to grep:

du -sb /var/my_private_pkgs/* | grep 'rhel8\.0[a-z]'
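The difference between the escaped and unescaped dot is easy to demonstrate with Python's re module, which treats . and \. the same way grep does (the hypothetical file name with an x is made up for the illustration):

```python
import re

names = ["rhel8.0aa", "rhel8x0ab", "rhel8.0ac"]

# Unescaped dot: '.' matches ANY character, so the 'x' sneaks through.
loose = [n for n in names if re.search(r"rhel8.0[a-z]", n)]

# Escaped dot: '\.' matches only a literal dot.
strict = [n for n in names if re.search(r"rhel8\.0[a-z]", n)]

print(loose)   # ['rhel8.0aa', 'rhel8x0ab', 'rhel8.0ac']
print(strict)  # ['rhel8.0aa', 'rhel8.0ac']
```

With file names like these, both grep patterns happen to print the same lines; the backslash only matters once a name contains something other than a literal dot in that position.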
How to add parameters to a React style object?

I have a shared style with an error:

export const modalStyle(height) = { // <-- What's the syntax for adding a parameter here?
    width: MODAL_WIDTH,
    height: height,
    backgroundColor: color_theme_light.mainGreen,
    borderRadius: 22,
    justifyContent: "center",
    alignItems: "center",
    shadowColor: "#000",
    shadowOffset: {
        width: 0,
        height: 1,
    },
    shadowOpacity: 0.20,
    shadowRadius: 1.41,
    elevation: 2
}

How can I add a parameter to it, so that when I use this style I can change the height dynamically?

import { modalStyle } from './modalStyles'

<View style={modalStyle(40)}>
...
</View>

In your case, there are two common ways to declare and export the modalStyle function:

Function declaration: export function modalStyle(height) { return {...} }
Arrow function: export const modalStyle = (height) => ({...})

I still get errors: "Expression expected"

Can you edit your question and add a copy of your error?

I was missing the parentheses around the object literal :D This works:

export const modalStyle = (height) => ({...})
Debug an Office Add-in on the web on the Ubuntu platform

I want to debug an Office Add-in on the web on Linux. I installed and configured the office-addin-debugging package. When I try to run the command office-addin-debugging start manifest.xml web, it shows up with some errors:

office-addin-debugging start manifest.xml web
Debugging is being started...
App type: web
Unable to start the dev server. Error: Platform not supported: Linux
Debugging started.

I am running the above command in the following environment and corresponding versions:

OS: Ubuntu 14.04
Node version: v10.14.0
Angular CLI: 8.0.0
Angular: 8.0.0

Apart from this, I validated the manifest file and removed all the possible errors, but no luck. Any help would be really appreciated!
How to ensure that an auto-populated username does not get replaced by another one in InfoPath

I have an approve/reject form with three views. In the first view, the requester's view, I auto-populate the requester's name using "GetUserProfileByName". Now I also want to auto-populate the name of the user who approves the form in a different view - but this view also shows the requester's name. So I was wondering: if I actually use "GetUserProfileByName" again, won't it change the first name too (i.e. the requester's name)? I don't have other accounts to test this with. Can someone please provide a workaround to this problem?

So basically I want this to happen: for example, when John opens the form, his name should be auto-populated in the requester's name field. Once this form is sent to Michelle, who approves it, the approver's name field should be auto-populated with Michelle's name. How can I avoid overwriting the data? Thank you for helping.

Add fields to the form data source to store the data returned from the GetUserProfileByName service. Do not use default values for these fields. Instead, I generally use form-load rules for this: run the query, and then, if the requester field is blank, set the field to the user's name from the data source. When the approval view is submitted, you can take a similar approach with the approver name field - if it is blank, set it.
Access an object returned by WCF in the client

I have a WCF service which returns a class object. How can I access these return values in my client application?

The service code:

public ET_ITAM_RequestDetails GetAssociateFreewareRequestDetails(ET_ITAM_RequestDetails objET_ITAM_RequestDetails)
{
    SqlDataReader rdr = null;
    connect.Open();
    SqlCommand cmd = new SqlCommand("ET_ITAM_GetAssociateFreewareRequestDetails", connect);
    cmd.CommandType = CommandType.StoredProcedure;
    while (rdr.Read())
    {
        objET_ITAM_RequestDetails.AssociateID = (string)rdr[0];
        objET_ITAM_RequestDetails.AssetID = (string)rdr[1];
        objET_ITAM_RequestDetails.ETRequestID = (int)rdr[2];
        objET_ITAM_RequestDetails.FreewareName = (string)rdr[3];
        objET_ITAM_RequestDetails.InstallationCommand = (string)rdr[4];
        objET_ITAM_RequestDetails.InstallationArguments = (string)rdr[5];
        objET_ITAM_RequestDetails.VerificationType = (bool)rdr[6];
        objET_ITAM_RequestDetails.VerificationPath = (string)rdr[7];
    }
    return objET_ITAM_RequestDetails;
}

In the client:

ServiceReference1.ET_ITAM_RequestDetails objItam = new ServiceReference1.ET_ITAM_RequestDetails();
// need to get the return value.
// if I create another object it does not work as expected
obj_service.GetAssociateFreewareRequestDetails(objItam);

Judging by ServiceReference1, it looks like you've added your WCF reference to the client solution successfully. You are, however, missing the WCF client call's result as far as I can tell. You haven't given enough information to know what your client would be called, but your code should look something like:

ServiceReference1.ET_ITAM_RequestDetails objItam = new ServiceReference1.ET_ITAM_RequestDetails();
// the service generation will create a WCF client for you,
// though I'm not sure what your client's name would be
objItam = obj_service.GetAssociateFreewareRequestDetails(objItam);

Looking over your code again, I think I see your issue. You did new up your WCF client, but the code for that was not provided.

obj_service.GetAssociateFreewareRequestDetails(objItam);

You're simply calling the function, but not assigning its return value back to your object:

objItam = obj_service.GetAssociateFreewareRequestDetails(objItam);

I'm unclear on why you're newing up an empty object, passing it into your function, and returning it. Why not just new it up and return it within the function?

public ET_ITAM_RequestDetails GetAssociateFreewareRequestDetails()
{
    ET_ITAM_RequestDetails objET_ITAM_RequestDetails = new ET_ITAM_RequestDetails();
    SqlDataReader rdr = null;
    connect.Open();
    SqlCommand cmd = new SqlCommand("ET_ITAM_GetAssociateFreewareRequestDetails", connect);
    cmd.CommandType = CommandType.StoredProcedure;
    while (rdr.Read())
    {
        objET_ITAM_RequestDetails.AssociateID = (string)rdr[0];
        objET_ITAM_RequestDetails.AssetID = (string)rdr[1];
        objET_ITAM_RequestDetails.ETRequestID = (int)rdr[2];
        objET_ITAM_RequestDetails.FreewareName = (string)rdr[3];
        objET_ITAM_RequestDetails.InstallationCommand = (string)rdr[4];
        objET_ITAM_RequestDetails.InstallationArguments = (string)rdr[5];
        objET_ITAM_RequestDetails.VerificationType = (bool)rdr[6];
        objET_ITAM_RequestDetails.VerificationPath = (string)rdr[7];
    }
    return objET_ITAM_RequestDetails;
}

The above (and your original) will of course have (probably) unintended results if your reader has more than one row - just calling that out in case you were unaware. Also, do not use using with WCF clients: Avoiding Problems with the Using Statement.

I have added the service reference in my client successfully as follows:

ServiceReference1.ETServiceClient obj_service = new ServiceReference1.ETServiceClient();

I added one object of ET_ITAM_RequestDetails in the client, which is the return type from my service:

ServiceReference1.ET_ITAM_RequestDetails objItam = new ServiceReference1.ET_ITAM_RequestDetails();
BlueImp Carousel - TypeError: this.slides[t] is undefined

I followed the official blueimp setup guide and downloaded the latest version of blueimp (2.25.0).

First I included the CSS and JS:

<!-- Blueimp Gallery -->
<link rel="stylesheet" href="../public/Gallery-2.25.0/css/blueimp-gallery.min.css">
<script src='../public/Gallery-2.25.0/js/blueimp-gallery.min.js'></script>

Then I prepared the carousel:

<div id="blueimp-gallery-carousel" class="image blueimp-gallery blueimp-gallery-controls blueimp-gallery-carousel">
    <div class="slides"></div>
    <h3 class="title"></h3>
    <a class="prev">‹</a>
    <a class="next">›</a>
    <a class="play-pause"></a>
    <ol class="indicator"></ol>
</div>

Then I prepared the pictures:

<div id="links">
    <a href="../pcs/pic1.png"></a>
    <a href="../pcs/pic2.png"></a>
    <a href="../pcs/pic3.png"></a>
</div>

At last I included a script:

<script src="../js/blueImp.js"></script>

...with the following content:

blueimp.Gallery(
    document.getElementById('links').getElementsByTagName('a'),
    {
        container: '#blueimp-gallery-carousel',
        carousel: true
    }
);

As you can see, I followed the setup guide; however, I get "TypeError: this.slides[t] is undefined" after clicking on an arrow. I also tried it with 2.23.0 - same error. What am I doing wrong?

Test browsers: Mozilla Firefox 52.0.2 (64-bit) for Ubuntu; Mozilla Firefox 52.0.1 (32-bit) on Windows 7.

I solved it by wrapping the script at the bottom with jQuery's $(document).ready(function() { ... }):

$(document).ready(function() {
    blueimp.Gallery(
        document.getElementById('links').getElementsByTagName('a'),
        {
            container: '#blueimp-gallery-carousel',
            carousel: true
        }
    );
});

I don't understand why this was necessary, though, since the script was included at the very last position, just before the closing body tag. Any explanation?
Permutations within a specific boundary

Let's take the following sequence of natural numbers: 1, 2, 3, 4, 5, 6, 7, 8. The number of permutations of these 8 numbers is 8!. We can obtain some of these permutations by adding and subtracting one or more numbers within this sequence, e.g. 8-1=7, 7+1=8, 6-1=5, 5+1=6, 4-1=3, 3+1=4, 2-1=1, 1+1=2; also we have 8-3=5, 7+1=8, 6-3=3, 5+1=6, 7-3=4, 6+1=7, 4-3=1, 1+1=2, and so on. My question is: how many permutations can we obtain with this method of adding and subtracting numbers within the sequence?

Addendum: the question asks the following. If we have a sequence of natural numbers in ascending order, and we add to or subtract from each number of this sequence one or more numbers within this sequence, how many permutations of this sequence can we obtain?

Every derangement of the ordered list $8,7,6,\cdots,2,1$ can be obtained by adding or subtracting one of the elements of $[8]:=\{1,\cdots,8\}$ to each entry of the list (with no restrictions on repetitions). This is because $(i,j)\mapsto |i-j|$ surjects from $[8]\times[8]\setminus\mathrm{diag}([8])$ onto $[8]$. If you disallow leaving a list entry fixed, these are all of the possible effects, whereas if you do allow entries to remain preserved, every possible permutation of the list is possible. It is unclear to me, however, what you are actually asking.

@anon We know the number of permutations of these 8 elements. Now if you find another permutation besides the three above, it will become clear to you what I am asking.

@anon Can you tell me please what number you can add and subtract to obtain the permutation 3,7,1,8,4,6,2,5?

That's a list, not a permutation. But I will assume you are writing in nonstandard one-line notation. In which case, either (a) 7 in particular cannot be obtained from 7 through adding/subtracting positive numbers, or (b) 7 can be obtained from 7 by doing nothing. In either case, and in every case, my first comment applies - the fact that you are asking a question I have already bifurcated into two scenarios tells me you do not understand my comment.

@anon: If you look carefully at Vassilis's examples, both follow the format $8-a$, $7+b$, $6-a$, $5+b$, $4-a$, $3+b$, $2-a$, $1+b$, where $a=b$ is permitted. He may be considering only modifications of this type. (And it seems perfectly clear that he's using one-line notation with commas and without parentheses.)

@BrianM.Scott Thanks! I should have been able to figure that out. That would make the question nontrivial. Vassilis, would you care to edit to clarify your question?

The sequence 8-3=5, 7+1=8, 6-3=3, 5+1=6, 7-3=4, 6+1=7, 4-3=1, 1+1=2 doesn't match the -a/+b description (e.g. 7-3=4).

@DouglasS.Stones Dear Douglas, the sequence you mention has two repetitions. Only when we add and subtract 1 and 2 do we obtain no repetitions of numbers of the original sequence of 8 natural numbers. The plus and minus signs always follow the above order.
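Under the strict alternating reading identified in the comments - the list $8-a,\,7+b,\,6-a,\,5+b,\,4-a,\,3+b,\,2-a,\,1+b$ for fixed $a,b$ - a quick brute force shows that only $a=b=1$ (the asker's first example) yields a permutation of $1,\dots,8$. This sketch reflects only that one possible interpretation of the question, not necessarily the asker's intended rule:

```python
from itertools import product

def alternating_lists():
    """All (a, b) with 1 <= a, b <= 8 for which the list
    8-a, 7+b, 6-a, 5+b, 4-a, 3+b, 2-a, 1+b
    is a permutation of 1..8."""
    valid = []
    for a, b in product(range(1, 9), repeat=2):
        seq = [8 - a, 7 + b, 6 - a, 5 + b, 4 - a, 3 + b, 2 - a, 1 + b]
        if sorted(seq) == list(range(1, 9)):
            valid.append((a, b))
    return valid

print(alternating_lists())  # [(1, 1)]
```

The reason is simple: 2-a >= 1 forces a = 1 and 7+b <= 8 forces b = 1, which may be why the second worked example was hard to reconcile with the -a/+b pattern.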
Is there any 3D mind-mapping software available? Preferably with a developer API?

As the title states: is there any 3D mind-mapping software available, preferably with a developer API?

The first thing that comes to mind (!) is FreeMind. There isn't a developer API as far as I know, but it is free and open source, so you can get the source code and do a lot more than an API would probably allow.

I actually downloaded this and it's very intuitive! However, I'd really like something more freehand: resizable objects, more objects (custom, preferably), and a real 3D space :)

I'm afraid I haven't seen any so far. But then again, I don't use them much. I'm not sure, actually, what that would even look like. In any case, here is a list of free and proprietary mind-mapping software; many entries have screenshots attached, so check it out - maybe you'll find something you like.
Reproduce AES decryption method from C# in JavaScript

I am trying to reproduce the following C# decryption method in JavaScript. This method is used to decrypt short strings: names, addresses, email addresses, etc. It feels tantalisingly close, because the strings I have been able to "successfully" decrypt seem partially decrypted. For instance, some of the emails look like this<EMAIL_ADDRESS>

CSharp:

public static readonly byte[] INIT_VECTOR = { 0x00, 0x00, ... };

public static string Decrypt(string cipherText)
{
    string EncryptionKey = "Some Encryption Key";
    byte[] cipherBytes = Convert.FromBase64String(cipherText);
    using (Aes encryptor = Aes.Create())
    {
        Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(EncryptionKey, INIT_VECTOR);
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateDecryptor(), CryptoStreamMode.Write))
            {
                cs.Write(cipherBytes, 0, cipherBytes.Length);
                cs.Close();
            }
            cipherText = Encoding.Unicode.GetString(ms.ToArray());
        }
    }
    return cipherText;
}

JavaScript:

import atob from 'atob';
import forge from 'node-forge';

const InitVector = [0x00, ...];
const EncryptionKey = 'Some Encryption Key';
const iv = Buffer.from(InitVector).toString();

const convertBase64StringToUint8Array = input => {
    const data = atob(input);
    const array = Uint8Array.from(data, b => b.charCodeAt(0));
    return array;
};

const decrypt = cipher => {
    const cipherArray = convertBase64StringToUint8Array(cipher);
    const key = forge.pkcs5.pbkdf2(EncryptionKey, iv, 1000, 32);
    const decipher = forge.cipher.createDecipher('AES-CBC', key);
    decipher.start({ iv });
    decipher.update(forge.util.createBuffer(cipherArray, 'raw'));
    const result = decipher.finish();
    if (result) {
        return decipher.output.data;
    } else {
        return false;
    }
};

In C# the default mode is CBC. You should transfer the IV. In C# the IV is random; in the JS it is all 0. The partial decryption says that the IV is incorrect, since it affects only one block, the first block. Note: CBC is archaic - stick to an authenticated encryption mode such as AES-GCM if there are no restrictions. Note 2: usually the IV is prepended to the ciphertext.

You need to use the same salt in JS as you used in C#. Thus you need to send the salt along with the enciphered data. That's OK - the salt does not need to be secret, it just needs to be unpredictable.

Thanks to kelalaka I managed to figure this out! This is the code I ended up with:

import atob from 'atob';
import forge from 'node-forge';

const InitVector = [0x00, ...];
const EncryptionKey = 'Some Encryption Key';
const initKey = Buffer.from(InitVector).toString(); // Changed this to `initKey`

const convertBase64StringToUint8Array = input => {
    const data = atob(input);
    const array = Uint8Array.from(data, b => b.charCodeAt(0));
    return array;
};

const decrypt = cipher => {
    const cipherArray = convertBase64StringToUint8Array(cipher);

    /**
     * Derive key and IV together.
     * Note the derived size = 32 + 16 = 48 bytes.
     * This is because the C# code takes a 32-byte key and then a
     * 16-byte IV from the same PBKDF2 stream, so the IV is the
     * 16 bytes starting right at the end of the key.
     */
    const keyAndIV = forge.pkcs5.pbkdf2(EncryptionKey, initKey, 1000, 32 + 16);
    const key = keyAndIV.slice(0, 32);       // 32 bytes
    const iv = keyAndIV.slice(32, 32 + 16);  // 16 bytes

    const decipher = forge.cipher.createDecipher(
        'AES-CBC',
        forge.util.createBuffer(key)
    );
    decipher.start({ iv });
    decipher.update(forge.util.createBuffer(cipherArray, 'raw'));
    const result = decipher.finish();
    if (result) {
        return decipher.output.data;
    } else {
        return false;
    }
};
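The key/IV relationship behind the 48-byte derivation can be checked independently. Rfc2898DeriveBytes is PBKDF2-HMAC-SHA1 (1000 iterations by default, UTF-8 password), and PBKDF2 output has the prefix property, so - as I understand the sequential GetBytes behavior - calling GetBytes(32) and then GetBytes(16) equals one 48-byte derivation split 32/16. A small Python sketch; the password and all-zero salt are placeholders standing in for the real EncryptionKey and INIT_VECTOR:

```python
import hashlib

def derive_key_and_iv(password: str, salt: bytes, iterations: int = 1000):
    """Mimic the C# pattern: derive 48 bytes of PBKDF2-HMAC-SHA1 output
    and split them into a 32-byte AES key and a 16-byte IV."""
    stream = hashlib.pbkdf2_hmac("sha1", password.encode("utf-8"),
                                 salt, iterations, dklen=48)
    return stream[:32], stream[32:]

key, iv = derive_key_and_iv("Some Encryption Key", bytes(16))

# Prefix property: the standalone 32-byte derivation is a prefix of the
# 48-byte one, which is why slicing recovers the same key.
key_alone = hashlib.pbkdf2_hmac("sha1", b"Some Encryption Key",
                                bytes(16), 1000, dklen=32)
assert key == key_alone
print(len(key), len(iv))  # 32 16
```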
Mail app is not opening from another app if it is not configured

I am trying to share a picture via mail by invoking the Mail application on iPad. But when mail is not configured, it does not open the Mail application. Kindly provide some suggestions.

Just check whether mail is configured and alert the user to configure it if not:

if ([MFMailComposeViewController canSendMail]) {
    // mail is configured
}
else {
    // alert the user
}

Before displaying the mail composer you should check whether the device is able to send mail. If no mail account is configured, display the mail configuration screen/page in the Settings app:

if ([MFMailComposeViewController canSendMail]) {
    // device can send mail, display mail composer here
}
else {
    // No mail account is configured, so display Settings for configuration
    [[UIApplication sharedApplication] openURL: [NSURL URLWithString<EMAIL_ADDRESS>;
}
Is it possible to upload a file with cURL from a pipe? I mean POSTing a standard file upload form. Usual command line contains this switch in this case: -F<EMAIL_ADDRESS> However when I try to feed a named pipe made by linux command "mkfifo", eg. "mkfifo filename.zip", I always get an error message on the producer side: curl: (23) Failed writing body (1856 != 16384) And also some error message appears at consumer side of the fifo. I fed my fifo with another curl command on producer side, eg.: curl http://example.com/archive.zip > filename.zip And on consumer side: curl http://example.com/fileupload.php -F<EMAIL_ADDRESS> When I pass a Content-Length HTTP header at the consumer side of my fifo, I don't get error message at the producer side, but error message still appears at the consumer (uploading) side, unsuccessful upload. curl http://example.com/fileupload.php -F<EMAIL_ADDRESS>-H "Content-Length: 393594678" I also tried feeding cURL file upload a non-named pipe, causing cURL to read data from stdin (marked as @-), like: curl -# http://example.com/archive.zip | curl -# http://example.com/fileupload.php -F "file=@-" In this case upload is successful, however the downloading and uploading progress are not in sync, I can see too separate hashmark progress indicators, one for the download and one for the upload, rather consecutively and not operating at the same time. On the top of that remote file is always named as "-", but this is not an issue for me, can be renamed later. 
Further notice: I tried the above from a Ruby command line IRB / Pry session, and I have noticed that when I used the Ruby command "system" to call the piped construct: system %Q{curl -# http://example.com/archive.zip | curl -# http://example.com/fileupload.php -F "file=@-"} I can see only one hashmark progress indicator in this case, so I think piping works as it should, but I can see two consecutive hashmark progress indicators in this second case like this: %x{curl -# http://example.com/archive.zip | curl -# http://example.com/fileupload.php -F "file=@-"} Yes it is possible! By default, curl will check all the provided arguments, figure out the size of all involved components (including the files) and send them in the constructed POST request. That means curl will check the size of the local file, which thus breaks when you use a fifo. Thus you need to do something about it! chunked for fifo By telling curl it should do the POST with chunked encoding instead of providing the full size ahead of time, curl will instead read the file in a streaming manner and just allow it to turn up to be whatever size it needs to be at the time the file (fifo) is read. You can do this by setting the chunked header, which is used by curl as a signal to do the request chunked. curl -H "Transfer-Encoding: chunked" -F file=@fifo https://example.com caveat The reason this isn't the default behavior of curl is that it requires that the receiver is using HTTP/1.1 or later (which curl doesn't know until it gets the response back from the server). Old HTTP/1.0 servers don't speak "chunk". formpost from stdin When doing a formpost from stdin, curl will read the entire file off stdin first in memory, before doing the POST, to get the size of the content so that it can include that in the POST request. Thx. It would be better if one could pass the file size as a command line option, because it is known in many cases. For example when I feed my FIFO from an HTTP url.
The file size can be queried first from the HTTP server where the file resides. Chunked upload is disabled on many websites (e.g. video sharing websites, on which there is a file size limitation), so it is not an option in these cases unfortunately.
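The streaming idea behind curl's chunked mode can be sketched outside curl as well. This is a hedged illustration, not part of the original answers: the helper below reads a pipe in fixed-size chunks without ever asking for the total size, and (as an assumption worth verifying for your client) the third-party `requests` library sends a generator body using chunked transfer encoding.

```python
import io

def chunked_reader(stream, chunk_size=16384):
    """Yield fixed-size chunks from a stream until EOF.

    The total size is never queried, so this works on FIFOs/pipes
    whose length is unknown when the request starts.
    """
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# With the third-party `requests` library, a generator body is sent with
# Transfer-Encoding: chunked (upload URL is a placeholder):
#   import requests
#   with open("filename.zip", "rb") as fifo:
#       requests.post("http://example.com/fileupload.php",
#                     data=chunked_reader(fifo))
```

As with curl's `-H "Transfer-Encoding: chunked"`, this only works against HTTP/1.1-or-later servers that accept chunked request bodies.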
Why is this `case` needed? object Test1 { def main(args: Array[String]) { val list = List("a", "b") list map { x ⇒ println(x) } list map { case x ⇒ println(x) } val list2 = List(("aa", "11")) list2 map { case (key, value) ⇒ println("key: "+key+", value: "+value) } } } Please note the last line: why must the keyword case be used there, while list map { x ⇒ println(x) } can omit it? I removed my misleading answer as it didn't compile as you noted, sorry about that, I had no compiler handy :( The case looks like boilerplate, sure. The real question here is: why does the compiler not try what happens with a case if it finds something weird instead of a parameter identifier? Maybe because you can do only so much implicitly without confusing everybody. { case (key, value) => f } is not the same thing as { (key, value) => f } The first is a pattern match that breaks a Tuple2 into its components, assigning their values to key and value. In this case, only one parameter is being passed (the tuple). { x => println(x) } works because x is assigned the tuple, and println prints it. The second is a function which takes two parameters, and does no pattern matching. Since map requires a function which takes a single parameter, the second case is incompatible with map. For those wanting chapter and verse, see section 8.5, "Pattern Matching Anonymous Functions", in the Book of Martin. You can't break open the tuple in a function literal. That's why you have to use case to match it instead. Another way is using tupled to make your function with two arguments fit: import Function.tupled list2 map tupled { (key, value) => println("key: "+key+", value: "+value) } list2 has elements of type Tuple2[String, String], so the signature of the function you have to pass to map (in this case ... foreach is a more natural choice when you don't return something) is Tuple2[String, String] => Unit. Which is to say, it takes a single argument of type Tuple2[String, String].
Since Tuple2 supports unapply, you can use pattern matching to break apart that tuple within your function, as you did: { case (key, value) ⇒ println("key: "+key+", value: "+value) } The signature of this function is still Tuple2[String, String] => Unit It is identical to, and probably compiles to the same bytecodes, as: { x: Tuple2[String, String] => println("key: "+x._1+", value: "+x._2) } This is one of so many examples where Scala combines orthogonal concepts in a very pleasing way. list2 map { x => println(x) } works without problems for me. If you want to have pattern matching (splitting your argument into its parts according to its structure) you always need case. Alternatively you can write: list2 map { x => println("key: "+x._1+", value: "+x._2) } BTW, map should be used to transform a list into another one. If you just want to go through all elements of a list, use foreach or a for comprehension. I am still learning Scala, but I believe what's happening is that you've defined a partial function taking one argument. When invoking methods such as List.map or List.foreach that only require one argument you can omit the underscore or named val. Example omitting the val name in a closure: val v = List("HEY!", "BYE!") v.foreach { Console.println } // Pass partial function, automatically This is the same as: val v = List("HEY!", "BYE!") v.foreach { Console.println _ } // Pass partial function, explicitly Using the anonymous val: val v = List("HEY!", "BYE!") v.foreach { Console.println(_) } // Refer to anonymous param, explicitly Or using a named val: val v = List("HEY!", "BYE!") v.foreach { x => Console.println(x) } // Refer to val, explicitly In your closure you use a partial function (the case statement) that takes an anonymous variable and immediately turns it into a tuple bound to two separate variables. I imagine I goofed up on one of the snippets above. When I get to my work computer I will verify in the REPL. Also, take a look at Function Currying in Scala for some more info.
What makes them partial functions? @Eric With case, you're doing pattern matching, which might not match. If I define: val f: (Int, Int) => Unit = { case (x: Int, y: Int) => println(x) }, then ask, f.isInstanceOf[PartialFunction[(Int, Int),Unit]], the answer is false. I don't believe it is a partial function, it's pattern matching inside a normal function. Ok, I see, it is a partial function, but not a PartialFunction.
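The one-argument-tuple versus two-arguments distinction discussed above is not unique to Scala. As a rough Python analogy (not the original poster's code): `map` hands each tuple to the function as a single value, while `itertools.starmap` unpacks it, playing the role of Scala's `Function.tupled`.

```python
from itertools import starmap

pairs = [("aa", "11"), ("bb", "22")]

# One parameter: the whole tuple arrives as a single argument,
# like Scala's { case (key, value) => ... } receiving one Tuple2.
by_tuple = list(map(lambda kv: f"key: {kv[0]}, value: {kv[1]}", pairs))

# Two parameters: starmap unpacks each tuple, like Function.tupled
# adapting a two-argument function to accept a Tuple2.
by_unpacking = list(starmap(lambda key, value: f"key: {key}, value: {value}",
                            pairs))

assert by_tuple == by_unpacking
```

Passing the two-argument lambda straight to `map` fails for the same reason the Scala version without `case` or `tupled` does not compile: `map` supplies exactly one argument per element.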
Issue with non responsive vertically centered div I am trying to make a div vertically centered inside a parent div. At the same time I don't want the child div to be responsive. Here is the html code <div class="wrap"> <div style="background: url('http://www.screensavergift.com/wp-content/uploads/GoldenNature2-610x320.jpg') no-repeat; background-position: 25% 50%; background-size: cover;" class="menu_item"></div> <div class="menu-box-border"></div> <div class="menu-box-content-box"> <h3>Some Text</h3> </div> </div> I have created a demo here -- http://jsfiddle.net/squidraj/4eewp8x0/7/ As you can see the semi transparent box with overlay is always staying at the bottom, and on browser resize the box is also resizing, and as a result the overlay box is causing problems with the text. Any help/suggestion is highly appreciated. Thanks in advance. Do you want the child div to be responsive or not? I don't quite understand what you want. Yes, the child div should not be responsive and should be horizontally centered. So you mean, as the browser resizes, the overlay box should remain the same at all times, instead of changing its own size? yes exactly...also if you see the demo then the overlay box is not centered. I was trying to adjust the top and bottom pixels but it's not working. First of all, there is a well-known trick to do this, you can read it on https://css-tricks.com/centering-in-the-unknown/ This will give you this: http://jsfiddle.net/4eewp8x0/8/ However, as we have more and more powerful CSS3 tools, I'd like to present some more tricks. CSS Transform The essence of this trick is that we absolutely position the child at (50%,50%), and then do a translate of (-50%,-50%) You will have this: http://jsfiddle.net/4eewp8x0/9/ Flex Layout At last, people who were filled with fury toward the difficulty of aligning elements came up with flex layout.
You can read about flex layout here: https://css-tricks.com/snippets/css/a-guide-to-flexbox/ You will have this: http://jsfiddle.net/4eewp8x0/11/ Simply brilliant. Thank you so much for explaining step by step and advanced CSS. Learnt a lot.
Minimizing Sum of Distances: Optimization Problem The actual question goes like this: McDonald's is planning to open a number of joints (say n) along a straight highway. These joints require warehouses to store their food. A warehouse can store food for any number of joints, but has to be located at one of the joints only. McD has a limited number of warehouses (say k) available, and wants to place them in such a way that the average distance of joints from their nearest warehouse is minimized. Given an array (n elements) of coordinates of the joints and an integer 'k', return an array of 'k' elements giving the coordinates of the optimal positioning of warehouses. Sorry, I don't have any examples available since I'm writing this down from memory. Anyway, one sample could be: array={1,3,4,5,7,7,8,10,11} (n=9) k=1 Ans: {7} This is what I've been thinking: For k=1, we can simply find out the median of the set, which would give the optimal location of the warehouse. However, for k>1, the given set should be divided into 'k' subsets (disjoint, and of contiguous elements of the superset), and median for each subset would give the warehouse locations. However, I don't understand on what basis the 'k' subsets should be formed. Thanks in advance. EDIT: There's a variation to this problem also: Instead of sum/avg, minimize the maximum distance between a joint and its closest warehouse. I don't get this either.. Is this a homework? If so, tag it as such please. Well this came in a competition. @ArpitTarang I came across the same problem. Were you able to solve it? The straight highway makes this an exercise in dynamic programming, working from left to right along the highway. A partial solution can be described by the location of the rightmost warehouse and the number of warehouses placed. 
The cost of the partial solution will be the total distance to the nearest warehouse (for fixed k minimising this is the same as minimising the average) or the maximum distance so far to the closest warehouse. At each stage you have worked out the answers for the leftmost N joints and have them indexed by number of warehouses used and position of the rightmost warehouse - you need to save only the best cost. Now consider the next joint and work out the best solution for N+1 joints and all possible values of k and rightmost warehouse, using the answers you have stored for N joints to speed this up. Once you have worked out the best cost solution covering all the joints you know where its rightmost warehouse is, which gives you the location of one warehouse. Go back to the solution that has that warehouse as the rightmost joint and find out what solution that was based on. That gives you one more rightmost warehouse - and so you can work your way back to the location of all the warehouses for the best solution. I tend to get the cost of working this out wrong, but with N joints and k warehouses to place you have N steps to take, each of them based on considering no more than Nk previous solutions, so I reckon cost is kN^2. "Now consider the next joint and work out the best solution for N+1 joints and all possible values of k and rightmost warehouse, using the answers you have stored for N joints to speed this up." Could you please give (at least) a hint on how to do this? The basic idea is that the best answer for a given point can be described as something like "put a warehouse at this point and then use the answer for the point 4 to the left to say where to put the other warehouses" - but there are usually a few details to worry about that vary from problem to problem. If you are not familiar with dynamic programming look at e.g. the first two examples at http://mat.gsia.cmu.edu/classes/dynamic/dynamic.html - or search for other tutorials.
This is NOT a clustering problem, it's a special case of a facility location problem. You can solve it using a general integer / linear programming package, but because the problem is on a line, there may be more efficient (and less expensive software-wise) algorithms that would work. You might consider dynamic programming since there are probably combinations of facilities that could be eliminated rather quickly. Look into the P-Median problem for more info. I didn't get much from the p-median problem articles... most of them have an extra parameter of 'transportation cost' which makes the problem more complex. Please help. I found out it was the 'facility location problem'. But still, they had complex algos for 2d problems... mine's only 1d. Help? Doesn't matter. You still have distances, they are just in one dimension.
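The left-to-right dynamic program described above can be sketched directly. This is my sketch of that idea, not code from the answers, and it assumes k is at most the number of joints: precompute the cost of serving each contiguous group of joints from its median joint, then build up the best split into k groups.

```python
def place_warehouses(joints, k):
    """1-D k-median: choose k joint coordinates minimising the total
    distance from each joint to its nearest warehouse (DP sketch)."""
    xs = sorted(joints)
    n = len(xs)
    # cost[i][j]: joints i..j served by one warehouse at their median joint.
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m = (i + j) // 2
            cost[i][j] = sum(abs(xs[t] - xs[m]) for t in range(i, j + 1))
    INF = float("inf")
    # dp[m][j]: best cost covering joints 0..j with m warehouses;
    # cut[m][j]: where the rightmost group starts, to recover placements.
    dp = [[INF] * n for _ in range(k + 1)]
    cut = [[0] * n for _ in range(k + 1)]
    for j in range(n):
        dp[1][j] = cost[0][j]
    for m in range(2, k + 1):
        for j in range(n):
            for i in range(1, j + 1):
                c = dp[m - 1][i - 1] + cost[i][j]
                if c < dp[m][j]:
                    dp[m][j], cut[m][j] = c, i
    # Walk the recorded cuts back from the right end to recover
    # each group's median, i.e. its warehouse location.
    placed, j = [], n - 1
    for m in range(k, 0, -1):
        i = cut[m][j]
        placed.append(xs[(i + j) // 2])
        j = i - 1
    return sorted(placed), dp[k][n - 1]
```

On the question's sample, `place_warehouses([1, 3, 4, 5, 7, 7, 8, 10, 11], 1)` places the single warehouse at 7, matching the expected answer. The DP itself is O(kN^2) states; the cost precomputation as written is cubic, and prefix sums would bring it down, left out here for clarity.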
Week long trek, how far each day? pace? Background info: For environment, it is southwestern British Columbia and the weather is fair and there is plenty of water around. Pace wise, on level terrain, with or without water, I have sometimes been able to travel 20-30km (~15mi) with no break but feel quite sore and dead at the end, beyond my lingering endurance. Going 10-15km with breaks seems to be nice, and recoverable. How should I go about my first week long trip in terms of distance, each day? My draft plan: Wake up and do my morning routine Hike from 7-noon (5hrs = ~20km with my ~40lb (18 kg) big pack) Take an hour or two break for a main meal Hike until 5pm or so (~4 hours = ~10-15km) Set up camp, small nightly meal for energy and soak in the sunset. The journey is roughly 150km on a valley route by the way, which if I do that routine each day without faltering can be done in ~4 1/2 days, but I plan to give myself lenience and visit small towns and interesting things along the way, making 7 a good margin. How does my routine sound? Should something like every second day be half the distance, to help preserve my legs from lugging 40lbs (18 kg), or as felt is needed? (I tend to overdo it.) Any tips or hints on sustaining long distance travel for journeys many weeks long, if I were to go beyond this relatively simple starting trek, that I can practise? A few things to consider. When you have walked before was that with a pack or without, and was that the entire distance you did in a day / have you done multiple days of 20km in a row? @nivag, with a 10-20lb pack before, be it a masochistic traverse to somewhere interesting or someone's house, but usually without a break. I've done 20km going to a backcountry camp spot and back, usually up vast altitudes, but not just linear and level like this valley trek I wish to do. @BenCrowell, you're right, I don't need to set up a storm shelter and 20lbs will make more sense with everything I need (exped.
hammock, first aid, backup food/clothing, tarp, ...) and food can be a pile'o rice mixed in with spices and natural flora. For distance, terrain makes a huge difference. Depending on terrain, a regular pace may be as much as 6 km/hour or as little as 1 km/hour, if you're bushwhacking without a trail and have many rivers to cross. In practice, there is no meaningful lower limit. It also depends on what weather one expects, I disagree with Ben Crowell's mass estimates, but it's a matter of taste, as I don't feel safe if I don't have equipment I feel safe in. Distance wise my days vary from 10 km to 40 km (longest I've done is 45 km / 15 hours of swamps), I usually hike in places that don't have any trails. My experience is that an hour break does not do a huge amount in terms of recovery. So if you struggle now with 30km, you are going to struggle doing 35km in a day - especially on the 4th or 5th day. But if you have some time to build up your endurance before your trip with some long days 35km should be reasonable. And 18kg sounds like too much. My personal opinion is you may be being a little bit optimistic on your speed. I would probably aim for 20-30km per day (~15 miles) if you have not done such a long trip before and are reasonably fit. Although there are several factors to consider: Hiking with a 40lb pack is significantly different and more tiring than hiking with just a day pack, especially if you get sore knees. This is a good reason to get hiking poles, although I know some people don't like them. Doing a week long hike is not just doing a day hike seven times. You don't get a chance to recover fully in between each day, especially if didn't sleep particularly well because you're in your tent and rain/wildlife/whatever. Terrain is also a significant factor. 20km on the flat != 20km with 6000ft of climb. As a rough guide Naismith's rule adds 1 hour (~5km distance) for every 2000 ft of ascent. 
If your route involves any bushwhacking/river crossings these can take a significant amount of time too. Your camping arrangement is also a concern. If you are wild camping distance is not too important. You can just go at your own pace and are only limited by the availability of suitable camping spots. However, if you are camping at a fixed campsite/campgrounds or otherwise have limited camping options it is better to be conservative on the distance you need to travel. There are few things worse than having some unforeseen detour/delay and then having to navigate to your camp in the dark. This is more personal preference but I prefer to have several more regular stops than have one big stop for lunch. This may be because I mostly walk in the UK where it is often raining so stopping for long is grim. Also when it is nice regular stops are good for admiring the scenery and stuff. Hiking is generally more enjoyable if you don't push yourself too hard. On one hand completing a challenging hike is very rewarding in its own right. However, speed and distance aren't everything. If you're hiking for a week you probably want to actually enjoy it while it's happening. It is therefore important to not be completely exhausted/in pain because you've tried to do too long a walk. Some people I know may disagree with this idea. They also run mountain marathons for fun so.... In conclusion 5 days would be a reasonable time to do your walk in, although it is definitely good to leave a couple of days extra in case there are any unforeseen issues/interesting detours/you're not as fit as you thought. For pt. 1, I tend to find hiking staffs and enjoy that. I tend to abuse, lose or break any of my hiking poles :) Terrain is valley with footbridges across the river; if it were many ascents, especially over poor terrain, I'd be going 1-2km/hr at points and would have to triple my time.
Camping is back-country on an expedition (Hennessey) hammock and it's on crown land/provincial parks so anywhere is good. I do like to see you mention the "week long hike isn't a day hike 7x" - glad I am clever enough to think of more rest time every second day or so. Making 30-35 km every day is perfectly sound, provided you're used to covering such distances. The famous Polish hiker Łukasz Supergan has written that he made 30 km a day on average on his 4000 km long trip to Santiago de Compostela. A trained hiker can make 30-50 km a day without days off. This is confirmed in the memoirs of wandering workers, partisans, historical sources about marching troops etc. So such a tempo can surely be kept for much longer than a week. However, it may be hard if you've never hiked so long. But with proper training, everyone who's not disabled should be able to keep such a tempo (note that the ability to walk long distances is our genetic adaptation; whoever wasn't able to keep up died and left no genes to this day). This is encouraging, as if I pushed myself, 30-50 would be easy. But as it is more a self-adventure and some sight-seeing along the way, and it is a test-trek beyond the little (but daring) hikes and camps I've done, scaling that down to see if I get through it fine seems the thing I will do. @Lukasz Then I could say "there are lots of people who can do a marathon in under 2h30m so everyone who isn't disabled should do it". There is everything in between the pros and the physically affected. It may be in our genes, and of course it's a matter of what you understand by "proper training" and for how long you are training. But still, everyone has very different abilities and in the end doing that trip should be fun for the hiker and not only pain/stress. You can actually estimate your hiking distance with Tobler's hiking function with default parameters (https://en.wikipedia.org/wiki/Tobler%27s_hiking_function).
This means that speed on flat surfaces is estimated to be 5km/h and maximum speed of 6km/h is achieved when going downhill at 2.86 degrees. The platform FATMAP has a great tool that uses this algorithm and shows you exactly where you can hike in a set amount of time on their 3D map, www.fatmap.com.
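Both rules of thumb mentioned in this thread are easy to compute directly. A small sketch (helper names are mine; Tobler's constants are from the standard statement of the function, and Naismith is taken as the rough "1 hour per 5 km plus 1 hour per 2000 ft (~600 m) of ascent" reading given earlier):

```python
import math

def tobler_speed_kmh(slope):
    """Tobler's hiking function: walking speed in km/h for a given
    slope (rise over run); fastest slightly downhill, at about -2.86 deg."""
    return 6 * math.exp(-3.5 * abs(slope + 0.05))

def naismith_hours(distance_km, ascent_m):
    """Naismith's rule: 1 hour per 5 km of distance, plus 1 hour
    per ~600 m (2000 ft) of ascent."""
    return distance_km / 5 + ascent_m / 600

# Flat ground (slope 0) comes out near the 5 km/h quoted above;
# the 6 km/h maximum occurs going gently downhill at slope -0.05.
```

For the question's draft plan, a flat 20 km morning leg at ~5 km/h is indeed about 4-5 hours before breaks, which is why the terrain caveats in the answers matter so much.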
Show/Hide div tag javascript I have a gridview column that's gets a large amount of text from the server. So instead of showing all that text after the grid loads I want to show it only if a user clicks an Expand link then closes it with a collapse link. Here is what I have. Please note that I already know that I can put both javascript functions in one; I'm testing right now in two separate functions. <script type="text/javascript" language="javascript" > function hidelink() { var col = $get('col'); var exp = $get('exp'); col.style.display = 'none'; exp.style.display = ''; } function showlink(){ var col = $get('col'); var exp = $get('exp'); col.style.display = ''; exp.style.display = 'none'; } <asp:GridView ID="GridView2" Width="400px" runat="server" AutoGenerateColumns="False" AllowPaging ="True" BackColor="White" BorderColor="#999999" BorderStyle="None" BorderWidth="1px" CellPadding="3" DataKeyNames="APPID" DataSourceID="SqlDataSource3" PagerSettings-Mode="NextPreviousFirstLast" EnableSortingAndPagingCallbacks="True"> <PagerSettings Mode="NextPreviousFirstLast" /> <RowStyle BackColor="#EEEEEE" ForeColor="Black" /> <Columns> <asp:BoundField DataField="stuff" HeaderText="Name" ReadOnly="True" SortExpression="app" /> <asp:BoundField DataField="Description" HeaderText="Short Descr" ReadOnly="True" SortExpression="des" /> <asp:TemplateField HeaderText="Long Descr" SortExpression="data"> <EditItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("data") %>'></asp:TextBox> </EditItemTemplate> <ItemTemplate> <div id="col"> <asp:LinkButton ID="expand" runat="server" OnClientClick ="hidelink();return false;">Expand</asp:LinkButton> </div> <div id="exp" style="display:none"> <asp:LinkButton ID="collapse" runat="server" OnClientClick ="showlink();return false;">Collapse</asp:LinkButton> </div> <asp:Panel ID="Panel1" runat="server" > <table> <tr> <td> <%#Eval("LongDescription")%> </td> </tr> </table> My issue is that only the first record does everything it 
should. (expand/collapse) but the other rows only expand and does not hide the expand link in the div tag. It is only finding the id of the first row because when the expand button is hit on any other row it changes the first row to show the collapse link. How can i fix this? The problem is that because you have repeating elements, the ids of the DIVs are being reused. This is illegal in HTML. The id property of each element must be unique. A better way to handle it is to pass in a reference to the current element to the handler and have it derive the element that it needs to operate on by traversing the DOM. <div> <asp:LinkButton ID="expand" runat="server" OnClientClick ="hidelink(this);return false;">Expand</asp:LinkButton> </div> <div style="display:none"> <asp:LinkButton ID="collapse" runat="server" OnClientClick ="showlink(this);return false;">Collapse</asp:LinkButton> </div> Note: I'm using jQuery in these functions as it makes it easier to traverse the DOM. You can do the same with your own DOM traversal functions and by setting the style properties if you like. function hidelink(ctl) { var myDiv = $(ctl).closest('div'); myDiv.hide(); myDiv.next('div').show(); } function showlink(ctl){ var myDiv = $(ctl).closest('div'); myDiv.hide(); myDiv.prev('div').show(); } You've got my attention, but I get an object expected error. And though you are missing the ctl parameter in the showlink, I did catch it and put it in my showlink function eg. showlink(ctl). What am i missing? Are you using jQuery -- my example depends on jQuery being loaded. If you aren't using jQuery, you'll need to rewrite the functions to find the relative DOM elements using whatever framework/native javascript you are using. I'm not really as familiar with jquery. I have used it....but very little. What do you mean by having Jquery loaded? how would I do that? I am able to use Jquery if that is what you mean. 
I can do a window.onload = function() { alert("welcome"); } So I assume jquery is loading fine. Am I right? Nope. Your function is only using javascript. jQuery != javascript. jQuery is a framework built with javascript that allows you more convenient ways of manipulating the DOM. You'd need something like a <script> tag referencing the jQuery file to load jquery before using the referenced code. I see what you mean. I did have that, except I had "/js/jquery-1.2.6.js" My issue is that it doesn't seem to like $(ctl).closest('div'); I'm getting object expected. +1 for your help thus far! Closest is new in 1.3.2. You can probably use parent() in 1.2.6. Right. Shouldn't something like this work? var myDiv = $(ctl).parent().get(0).tagName; You should be able to do show/hide without getting the actual element. Those functions operate on the jQuery object. You need to use unique IDs for each row. IDs can only apply to one element in the page, and your code is applying one ID to all the instances of this large column in the table. Alternatively, you can just use the DOM methods to locate the right element to show/hide. For example: <div> <a href="#" onclick="showHideDesc(this); return false;">Expand</a> <table style="display: none;"> <tr> <td><%#Eval("LongDescription")%></td> </tr> </table> </div> Then for your script: function showHideDesc(link) { var table = link.parentNode.getElementsByTagName("TABLE")[0]; if (table.style.display == "none") { table.style.display = ""; link.innerHTML = "Collapse"; } else { table.style.display = "none"; link.innerHTML = "Expand"; } } How do I do that in Javascript? Are you saying to remove my panel that i'm using? Yes, remove the Panel and the LinkButton controls that you are using. It's a lot simpler to just use straight HTML. "This seems to only work for the first row also" - Sorry, I had a bug in my code - it's fixed now.
Org-mode Source Block Visibility Cycling I'm trying to add source blocks to the visibility cycling tree. Essentially I want a source block to be treated as a child of its heading. Consider the following org-mode document: * Heading 1 ** Heading 2 #+BEGIN_SRC R print("hello world") #+END_SRC ** Heading 3 I would like to be able to press TAB on heading one to cycle through the folding of the various parts including the source block. Currently org-mode does seem to have the facilities for folding the source block, because I can fold it if I go to #+BEGIN_SRC R and hit tab, but it doesn't seem to be treated in the global cycling. Any suggestions to add it? Thanks! I'm not sure if that is built-in, but here is a link to a modification I did for html blocks and footnotes: http://stackoverflow.com/a/21594242/2112489 See also this related thread: http://stackoverflow.com/a/21563210/2112489 Ok, I'm looking through your code and trying to piece this together. Since BEGIN_SRC blocks already fold, all that I need to do, it seems, is to modify org-cycle-internal-local in a similar way to what you did. Or am I missing something? I would try the variable mentioned in the second link and see if that does the trick before looking at the first idea. I should have placed the second link first in time . . . sorry. That variable folds the blocks, but cycling of the blocks still doesn't work. I'll try to piece together something from your code. Thanks! This is a slight modification of the code contained in the link mentioned in my first comment above: https://stackoverflow.com/a/21594242/2112489 All I did was replace the begin / end html regexp for the SRC regexp. Go ahead and give it a whirl and see if it is what you're looking for. I left my prior footnote modification in there.
(require 'org)

(defalias 'org-cycle-hide-drawers 'lawlist-block-org-cycle-hide-drawers)

(defun lawlist-block-org-cycle-hide-drawers (state)
  "Re-hide all drawers, footnotes or html blocks after a visibility state change."
  (when (and (derived-mode-p 'org-mode)
             (not (memq state '(overview folded contents))))
    (save-excursion
      (let* ((globalp (memq state '(contents all)))
             (beg (if globalp (point-min) (point)))
             (end (cond
                   (globalp (point-max))
                   ((eq state 'children)
                    (save-excursion (outline-next-heading) (point)))
                   (t (org-end-of-subtree t)))))
        (goto-char beg)
        (while (re-search-forward
                ".*\\[fn\\|^\\#\\+BEGIN_SRC.*$\\|^[ \t]*:PROPERTIES:[ \t]*$" end t)
          (lawlist-org-flag t))))))

(defalias 'org-cycle-internal-local 'lawlist-block-org-cycle-internal-local)

(defun lawlist-block-org-cycle-internal-local ()
  "Do the local cycling action."
  (let ((goal-column 0) eoh eol eos has-children children-skipped struct)
    (save-excursion
      (if (org-at-item-p)
          (progn
            (beginning-of-line)
            (setq struct (org-list-struct))
            (setq eoh (point-at-eol))
            (setq eos (org-list-get-item-end-before-blank (point) struct))
            (setq has-children (org-list-has-child-p (point) struct)))
        (org-back-to-heading)
        (setq eoh (save-excursion (outline-end-of-heading) (point)))
        (setq eos (save-excursion (1- (org-end-of-subtree t t))))
        (setq has-children
              (or (save-excursion
                    (let ((level (funcall outline-level)))
                      (outline-next-heading)
                      (and (org-at-heading-p t)
                           (> (funcall outline-level) level))))
                  (save-excursion
                    (org-list-search-forward (org-item-beginning-re) eos t)))))
      (beginning-of-line 2)
      (if (featurep 'xemacs)
          (while (and (not (eobp))
                      (get-char-property (1- (point)) 'invisible))
            (beginning-of-line 2))
        (while (and (not (eobp))
                    (get-char-property (1- (point)) 'invisible))
          (goto-char (next-single-char-property-change (point) 'invisible))
          (and (eolp) (beginning-of-line 2))))
      (setq eol (point)))
    (cond
     ((= eos eoh)
      (unless (org-before-first-heading-p)
        (run-hook-with-args 'org-pre-cycle-hook 'empty))
      (org-unlogged-message "EMPTY ENTRY")
      (setq org-cycle-subtree-status nil)
      (save-excursion
        (goto-char eos)
        (outline-next-heading)
        (if (outline-invisible-p) (org-flag-heading nil))))
     ((and (or (>= eol eos)
               (not (string-match "\\S-" (buffer-substring eol eos))))
           (or has-children
               (not (setq children-skipped
                          org-cycle-skip-children-state-if-no-children))))
      (unless (org-before-first-heading-p)
        (run-hook-with-args 'org-pre-cycle-hook 'children))
      (if (org-at-item-p)
          ;; then
          (org-list-set-item-visibility (point-at-bol) struct 'children)
        ;; else
        (org-show-entry)
        (org-with-limited-levels (show-children))
        (when (eq org-cycle-include-plain-lists 'integrate)
          (save-excursion
            (org-back-to-heading)
            (while (org-list-search-forward (org-item-beginning-re) eos t)
              (beginning-of-line 1)
              (let* ((struct (org-list-struct))
                     (prevs (org-list-prevs-alist struct))
                     (end (org-list-get-bottom-point struct)))
                (mapc (lambda (e) (org-list-set-item-visibility e struct 'folded))
                      (org-list-get-all-items (point) struct prevs))
                (goto-char (if (< end eos) end eos)))))))
      (org-unlogged-message "CHILDREN")
      (save-excursion
        (goto-char eos)
        (outline-next-heading)
        (if (outline-invisible-p) (org-flag-heading nil)))
      (setq org-cycle-subtree-status 'children)
      (unless (org-before-first-heading-p)
        (run-hook-with-args 'org-cycle-hook 'children)))
     ((or children-skipped
          (and (eq last-command this-command)
               (eq org-cycle-subtree-status 'children)))
      (unless (org-before-first-heading-p)
        (run-hook-with-args 'org-pre-cycle-hook 'subtree))
      (outline-flag-region eoh eos nil)
      (org-unlogged-message
       (if children-skipped "SUBTREE (NO CHILDREN)" "SUBTREE"))
      (setq org-cycle-subtree-status 'subtree)
      (unless (org-before-first-heading-p)
        (run-hook-with-args 'org-cycle-hook 'subtree)))
     ((eq org-cycle-subtree-status 'subtree)
      (org-show-subtree)
      (message "ALL")
      (setq org-cycle-subtree-status 'all))
     (t
      (run-hook-with-args 'org-pre-cycle-hook 'folded)
      (outline-flag-region eoh eos t)
      (org-unlogged-message "FOLDED")
      (setq org-cycle-subtree-status 'folded)
      (unless (org-before-first-heading-p)
        (run-hook-with-args 'org-cycle-hook 'folded))))))

(defun lawlist-org-flag (flag)
  "When FLAG is non-nil, hide any of the following:  html code block;
footnote; or, the properties drawer.  Otherwise make it visible."
  (save-excursion
    (beginning-of-line 1)
    (cond
     ((looking-at ".*\\[fn")
      (let* ((begin (match-end 0))
             end-footnote)
        (if (re-search-forward "\\]"
                               (save-excursion (outline-next-heading) (point)) t)
            (progn
              (setq end-footnote (point))
              (outline-flag-region begin end-footnote flag))
          (user-error "Error beginning at point %s." begin))))
     ((looking-at "^\\#\\+BEGIN_SRC.*$\\|^[ \t]*:PROPERTIES:[ \t]*$")
      (let* ((begin (match-end 0)))
        (if (re-search-forward "^\\#\\+END_SRC.*$\\|^[ \t]*:END:"
                               (save-excursion (outline-next-heading) (point)) t)
            (outline-flag-region begin (point-at-eol) flag)
          (user-error "Error beginning at point %s." begin)))))))

(defun lawlist-toggle-block-visibility ()
  "For this function to work, the cursor must be on the same line as the regexp."
  (interactive)
  (if (save-excursion
        (beginning-of-line 1)
        (looking-at ".*\\[fn\\|^\\#\\+BEGIN_SRC.*$\\|^[ \t]*:PROPERTIES:[ \t]*$"))
      (lawlist-org-flag (not (get-char-property (match-end 0) 'invisible)))
    (message "Sorry, you are not on a line containing the beginning regexp.")))

Glad to help. The footnote code only contemplated one (1) footnote per heading, and needs to be improved upon. Please feel free to remove the footnote code, or improve upon it if you like.

This introduces some strange 'dependent lightswitch' behavior: while you can unfold the code block from its headline, you can't unfold it from the source code block line anymore; but if you try once, then the unfolding from the headline doesn't work anymore until you 'switch' it on the code block line again. At least that's what I'm experiencing.
Calculating GEO transformation params in Vega/d3

Copy this code to https://en.wikipedia.org/wiki/Special:GraphSandbox. The scale and translate parameters of the geo transformation were manually set to match the width & height of the image (see red crosses). How can I make it so that the geo transformation matches the entire graph size (or maybe some signal values) automatically, without the manual adjustments?

UPDATE: The translate parameter should have been set to HALF of the WIDTH and HEIGHT of the image (see the answer below), and center should have been set to [0,0]. For the equirectangular projection, the graph size should have a 2:1 ratio.

{
  "version": 2,
  "width": 800,
  "height": 400,
  "padding": 0,
  "data": [
    {
      "name": "data",
      "values": [
        {"lat": 0, "lon": 0},
        {"lat": 90, "lon": -180},
        {"lat": -90, "lon": 180}
      ]
    }
  ],
  "marks": [
    {
      "type": "image",
      "properties": {
        "enter": {
          "url": {"value": "wikirawupload:{{filepath:Earthmap1000x500compac.jpg|190}}"},
          "width": {"signal": "width"},
          "height": {"signal": "height"}
        }
      }
    },
    {
      "name": "points",
      "type": "symbol",
      "from": {
        "data": "data",
        "transform": [{
          "type": "geo",
          "projection": "equirectangular",
          "scale": 127,
          "center": [0, 0],
          "translate": [400, 200],
          "lon": "lon",
          "lat": "lat"
        }]
      },
      "properties": {
        "enter": {
          "x": {"field": "layout_x"},
          "y": {"field": "layout_y"},
          "fill": {"value": "#ff0000"},
          "size": {"value": 500},
          "shape": {"value": "cross"}
        }
      }
    }
  ]
}

Found an answer (the example above was updated): The "translate" should be set to the center of the image. The "center", on the other hand, should be set to [0,0]. The "scale" for the equirectangular projection needs to be set to width/(2*PI).
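The arithmetic behind those numbers can be double-checked outside Vega. The sketch below is plain Python (not Vega/d3 code), and it assumes d3's pixel convention of y increasing downward; it shows that scale = width/(2*PI) and translate = [width/2, height/2] map the three test points from the data block onto the image centre and corners.

```python
import math

width, height = 800, 400  # graph size from the spec (2:1 for equirectangular)

# Derived parameters instead of the hand-tuned ones:
scale = width / (2 * math.pi)          # ~127.32; the spec rounded this to 127
translate = (width / 2, height / 2)    # (400, 200): the centre of the image
center = (0, 0)

def project(lon, lat):
    """Equirectangular: linear in lon/lat (radians), y axis pointing down."""
    x = translate[0] + scale * math.radians(lon)
    y = translate[1] - scale * math.radians(lat)
    return x, y

# The three test points from the data block:
print(project(0, 0))       # centre of the image: (400.0, 200.0)
print(project(-180, 90))   # top-left corner, approximately (0, 0)
print(project(180, -90))   # bottom-right corner, approximately (800, 400)
```

Any other width works the same way as long as height = width/2 and both scale and translate are recomputed from it.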
Redux state lost when moving from one module to another module

I am using React, Redux, and React-Redux with redux-dynamic-modules. For loading modules, I am using the DynamicModuleLoader. After loading the modules, when I move from one module to another, the state is lost, and I don't know why; I cannot make use of the Redux store. Whenever I move from one module to another, I lose the state. https://redux-dynamic-modules.js.org/#/reference/DynamicModuleLoader

See the image. Here I am changing the routing. Whenever I change the route, the state is added and then removed; the old state isn't retained.

Sorry, the mistake is mine. The Dynamic Module Loader concept is: "When the component is unmounted, the module will be removed from the store and the state will be cleaned up". So the logic is working as expected. I have used moduleStore.addModule(createModuleA()); instead of the DynamicModuleLoader, and Redux now works as expected.
Dichotomize data by factor

I need to create a dichotomized variable based on two factors (one hopes it's possible). Let's say I have the data:

d <- data.frame(
  agegroup = c(2,1,1,2,3,2,1,3,3,3,3,3,1,1,2,3,2,1,1,2,1,2,2,3),
  gender = c(2,2,2,2,2,2,1,2,1,1,1,2,1,1,2,2,1,1,1,1,2,1,1,1),
  hourwalking = c(0.3,0.5,1.1,1.1,1.1,1.2,1.2,1.2,1.3,1.5,1.7,1.8,2.1,2.1,2.2,2.2,2.3,2.4,2.4,3,3.1,3.1,4.3,5)
)

I would like to create a binary variable (LowWalkHrs) using the gender- and agegroup-specific median (e.g., when agegroup = 1 and gender = 1, median = 2.1; the median was found using Excel). LowWalkHrs would be an added variable in the dataset, so the output would be:

agegroup gender hourwalking LowWalkHrs
2        2      0.3         1
1        2      0.5         1
1        2      1.1         0
2        2      1.1         1
3        2      1.1         1
2        2      1.2         0
1        1      1.2         1
....
3        1      5           0

I have a rather large dataset (~10k observations), so Excel is out of the question. In R I've tried cut and cut2, which don't seem to take factor variables, as well as ddply, which gave me an error message of:

Error in $<-.data.frame(*tmp*, "lowWalkHrs", value = list(hourwalking = c(0.63, : replacement has 949 rows, data has 11303

I suspect this might be slow, but I think it works:

z <- mapply(d$agegroup, d$gender, d$hourwalking,
            FUN = function(a, g, h)
              as.numeric(h < median(d$hourwalking[d$agegroup == a & d$gender == g])))

It does work. I haven't tried it on the larger dataset, but I will. Thanks!

Okay, tried it with the larger dataset, and I get the error message:

Error in mapply(d$agegroup, d$gender, d$hourwalking, : Zero-length inputs cannot be mixed with those of non-zero length

I do have missing data in my larger dataset; is this what's causing the error?

Yeah, the stuff in [...] won't work well with missing values.

Fixed the first problem... the error message is now:

In is.na(x) : is.na() applied to non-(list or vector) of type 'NULL'

How did you incorporate is.na?

Just needed a "na.rm=TRUE"

d <- data.frame(
  agegroup = c(2,1,1,2,3,2,1,3,3,3,3,3,1,1,2,3,2,1,1,2,1,2,2,3),
  gender = c(2,2,2,2,2,2,1,2,1,1,1,2,1,1,2,2,1,1,1,1,2,1,1,1),
  hourwalking = c(0.3,0.5,1.1,1.1,1.1,1.2,1.2,1.2,1.3,1.5,1.7,1.8,2.1,2.1,2.2,2.2,2.3,2.4,2.4,3,3.1,3.1,4.3,5)
)

d$LowWalkHrs <- 1 * with(d, hourwalking < ave(hourwalking,
                                              list(factor(agegroup, exclude = NULL),
                                                   factor(gender, exclude = NULL)),
                                              FUN = median))

factor(..., exclude = NULL) added for treating NA's as a separate group.
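For anyone following the logic in another language, the same group-specific median split can be stated in plain Python (this is an illustration, not part of the R thread): collect the hours per (agegroup, gender) cell, compute each cell's median once, then flag observations below their own cell's median.

```python
from statistics import median
from collections import defaultdict

# The same 24-row example data as in the question.
agegroup = [2,1,1,2,3,2,1,3,3,3,3,3,1,1,2,3,2,1,1,2,1,2,2,3]
gender = [2,2,2,2,2,2,1,2,1,1,1,2,1,1,2,2,1,1,1,1,2,1,1,1]
hourwalking = [0.3,0.5,1.1,1.1,1.1,1.2,1.2,1.2,1.3,1.5,1.7,1.8,
               2.1,2.1,2.2,2.2,2.3,2.4,2.4,3,3.1,3.1,4.3,5]

# Collect hours per (agegroup, gender) cell, then compute each cell's median once.
groups = defaultdict(list)
for a, g, h in zip(agegroup, gender, hourwalking):
    groups[(a, g)].append(h)
medians = {k: median(v) for k, v in groups.items()}

# LowWalkHrs = 1 when the observation falls below its own cell's median.
low_walk = [int(h < medians[(a, g)])
            for a, g, h in zip(agegroup, gender, hourwalking)]

print(medians[(1, 1)])   # 2.1, matching the value the question computed in Excel
print(low_walk[:6])      # [1, 1, 0, 1, 1, 0], matching the expected output column
```

Computing each cell's median once (as the ave answer does in R) rather than per row (as the mapply answer does) is also what makes this fast enough for ~10k observations.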
WSDL-SOAP Webservice on PHP

I have created a PHP file. It contains details about how to access the (WSDL) XML file and its functions. I need to call a function with parameters, but I am not able to call it. So, I have been given a WSDL file and I need to create a SOAP service for it in PHP. Could anyone tell me what the correct way of doing it is? Here is the link of the WSDL: http://<IP_ADDRESS>/se/ws/wf_ws.php?wsdl and here is my PHP code:

require_once 'lib/nusoap.php';
ini_set("soap.wsdl_cache_enabled", "0");
$soapclient = new nusoap_client("http://<IP_ADDRESS>/se/ws/wf_ws.php?wsdl", 'wsdl');
$soapclient->setHTTPProxy("soap:address_location", 8080, "usr_name", "pwd");

When I run the above PHP code, it returns this error:

wsdl error: Getting "WSDL link" - HTTP ERROR: Couldn't open socket connection to server "WSDL link" prior to connect(). This is often a problem looking up the host name.

Try to use http://php.net/manual/en/class.soapclient.php. All you need is:

Create a SOAP client:

$client = new SoapClient("http://<IP_ADDRESS>/se/ws/wf_ws.php?wsdl");

You can pass many options to the constructor, such as proxy, cache settings, etc. Some examples for you:

$client = new SoapClient("some.wsdl", array('soap_version' => SOAP_1_2));
$client = new SoapClient("some.wsdl", ['login' => "some_name", 'password' => "some_password"]);
$client = new SoapClient("some.wsdl", ['proxy_host' => "localhost", 'proxy_port' => 8080]);

If the SSL certificate is wrong, you can ignore it. An example is here: https://gist.github.com/akalongman/56484900eaf19b18cfbd

Call one of the service-defined functions:

$result = $client->getResult(['param_name' => 'param_value']);

Pay attention that your service function may have required parameters. Usually they can be found in the result message.

ok, but that web service needs https with username and password; I also tried this link in my used WSDL page: https://<IP_ADDRESS>/softexpert/webserviceproxy/se/ws/wf_ws.php. For this request my settings are:

$soapclient->setHTTPProxy("https://<IP_ADDRESS>/softexpert/webserviceproxy/se/ws/wf_ws.php", 8080, "usr_name", "pwd");

but it is not working; the resulting error is:

wsdl error: Getting "WSDL link" - HTTP ERROR: Couldn't open socket connection to server "WSDL link" prior to connect(). This is often a problem looking up the host name.

If you need to pass a username and password, you can do this (as in the login/password example above):

$client = new SoapClient("https://<IP_ADDRESS>/softexpert/webserviceproxy/se/ws/wf_ws.php?wsdl", ['login' => "usr_name", 'password' => "pwd"]);
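As an aside, what a SOAP client does under the hood is just an HTTP POST of an XML envelope. The Python sketch below (not PHP; the endpoint, operation name, and credentials are placeholders, not real values) shows the moving parts, including the HTTP basic-auth header, which is roughly what the 'login'/'password' options correspond to.

```python
import urllib.request
from base64 import b64encode

# Placeholder endpoint, operation and credentials -- substitute your own.
endpoint = "https://example.com/se/ws/wf_ws.php"
operation = "getResult"

# A minimal SOAP 1.1 envelope wrapping the call and its parameter.
envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getResult>
      <param_name>param_value</param_name>
    </getResult>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(endpoint, data=envelope.encode("utf-8"))
req.add_header("Content-Type", "text/xml; charset=utf-8")
req.add_header("SOAPAction", operation)
# HTTP basic auth header, built from username:password.
credentials = b64encode(b"usr_name:pwd").decode("ascii")
req.add_header("Authorization", "Basic " + credentials)
# urllib.request.urlopen(req) would actually send the request; skipped here.
```

Seeing the raw request like this also makes the "Couldn't open socket connection" error easier to interpret: it happens before any SOAP parsing, at the plain HTTP/DNS level.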