ObservableValue change listeners: how to access I'm new to JavaFX and have started converting custom components written in Swing. As a best practice I have always checked whether an event listener (PropertyChangeListener, MouseListener, ActionListener etc.) is already included in the target object's listeners to make sure the same listener isn't being added twice. I'm trying to do the same with JavaFX but can't find any way to access the list of listeners (for example, to run a list.contains(listener) check). Am I looking for something that doesn't exist? If so, is there some good reason why JavaFX doesn't include this feature (IMHO it should)? Thanks for the feedback! AFAIK there's no way to access those in JavaFX. Can you give an example of why you need them (or think you do)? I've never come across a need to do that. As mentioned in my question I'm trying to avoid registering the same object as a listener more than once. A use case where this can happen follows in an additional comment, but believe me it is possible and when it does the event gets fired to the listener as many times as the listener appears in the event listeners list. Inefficient and unnecessary because a listener almost always (99.9999999% of the time) needs to handle the event Yes, I understood that: I've just never come across a situation where you would not know if the listener was already registered. That's the use case I was wondering about. .. handle the event only once. Also - and this caught my eye - the JavaFX documentation specifically states that if a listener is registered more than once, then invoking [list].removeListener() removes only the first entry in the list. In other words, if you think you've removed a listener and it's somehow been added twice to the list, you might wonder why the listener still handles the event. USE CASE - In a composite component that includes a text field and a button, the button may be positioned above the text field or to the left. 
Both components have a BorderPane as their parent. If the button appears above (BorderPane.TOP), the button should not stretch to match the text field (BorderPane.CENTER), so I have an event handler that calculates the button's right-side padding value. The event handler catches changes in width to keep things in sync. If the button is on the left, this process isn't necessary, and in that case I would like to disconnect the event handler. But in that use case, you clearly always know if the handler is registered or not without having to query the list of handlers. A simple solution is to keep track of when everything happens, but I'm lazy and worked out a solution a long time ago where I use a utility method to check a listener list EVERY time before a listener gets added. That solves the problem without my having to keep track. You can probably just always remove a listener before adding it. From the removeListener javadoc: If the given listener has not been previously registered (i.e. it was never added) then this method call is a no-op. They do it like that in the framework, e.g. in the com.sun.javafx.binding.ContentBinding.bind() methods. It's kind of moot because the answer is that there is no way to access that list in JavaFX, but I just don't buy that you ever need this (I've been working with Swing and/or JavaFX since 1998, and I've never needed it). Your UI (component) is always, in some sense, a view of some data; so whether or not a listener is added is always a function of the state of your data. In your use case, the listener is added if the custom component is oriented vertically, and not added if it is oriented horizontally. I saw that as well and had the same idea, your confirmation is helpful :). FWIW - I'm new to JavaFX and coming from Swing, and IMHO Swing (still) does some things better, for example the point in my question.
JavaFX provides a ton of info not available before, not all of it useful, yet something like making the listener list accessible (as in Swing) doesn't happen when it would've been so easy to make that info available. Everything is based on lists, why not expose a [list].contains(listener) method? @James_D - thanks for your comments. What I'm trying to accomplish is (i) have the listener removed when the button is on the left so the handler code doesn't even get invoked, but (ii) have the listener restored if the button is positioned at the top. Yes, again, I had understood that. I just don't see a need to access the list of listeners in order to accomplish that. See answer for a concrete example of why. There is no mechanism in the public API to access the list of listeners for a property in JavaFX. I'm not sure I ever really see a need for this. Your code is in control of when the listeners are added and removed, so you basically always "know" when the listeners have been added. In a broader sense, your UI or UI component is always a presentation of some form of data, and so whether or not the listener is registered is just a function of those data.
For a concrete example, consider the use case cited in the comments:

public class CustomComponent extends BorderPane {

    private final Button button = new Button("Button");
    private final TextField textField = new TextField();
    private final ObjectProperty<Orientation> orientation = new SimpleObjectProperty<>();

    public ObjectProperty<Orientation> orientationProperty() {
        return orientation;
    }

    public final Orientation getOrientation() {
        return orientationProperty().get();
    }

    public final void setOrientation(Orientation orientation) {
        orientationProperty().set(orientation);
    }

    public CustomComponent(Orientation orientation) {
        setCenter(textField);
        ChangeListener<Number> widthBindingListener =
            (obs, oldWidth, newWidth) -> button.setPrefWidth(newWidth.doubleValue());
        orientationProperty().addListener((obs, oldOrientation, newOrientation) -> {
            if (newOrientation == Orientation.HORIZONTAL) {
                textField.widthProperty().removeListener(widthBindingListener);
                button.setPrefWidth(Control.USE_COMPUTED_SIZE);
                setTop(null);
                setLeft(button);
            } else {
                textField.widthProperty().addListener(widthBindingListener);
                button.setPrefWidth(textField.getWidth());
                setLeft(null);
                setTop(button);
            }
        });
        setOrientation(orientation);
    }

    public CustomComponent() {
        this(Orientation.VERTICAL);
    }

    // other methods etc....
}

In this example, you would probably just use a binding instead of the listeners:

button.prefWidthProperty().bind(Bindings
    .when(orientationProperty().isEqualTo(Orientation.HORIZONTAL))
    .then(Control.USE_COMPUTED_SIZE)
    .otherwise(textField.widthProperty()));

but that doesn't demonstrate the concept...
Note that, as in @Clemens' comment, you can always ensure a listener is only ever registered once with the idiom:

textField.widthProperty().removeListener(widthBindingListener);
textField.widthProperty().addListener(widthBindingListener);

but this just seems to me not to be very good practice: the removeListener call involves iterating through the list of listeners (and in the case where it's not already added, the entire list of listeners). This iteration is unnecessary because the information is already available elsewhere. Thanks for your replies, had to do a double take while reading through your example because its resemblance to my own code is uncanny. Your suggestion re binding came to me while out for a walk, it's probably the most efficient solution for this particular case. The reason for the original question is an event model developed while working with Swing; it's the ancestor for a number of other models and works very well. In switching over to JavaFX I'm trying to keep the good stuff from Swing (at least the processes) and translate them into the new format. When that's not possible I wonder why. I think we probably disagree on whether exposing the list of listeners is part of Swing's "good stuff". To me it's a bit of a code smell ("Inappropriate intimacy"). If you have addListener and removeListener methods, why do you need to expose the list; or, conversely, if you are exposing the list, why have the additional methods? There was an interesting interview with Richard Bair (lead architect of JavaFX and prominent on Swing) where he said "in Swing we broke a lot of people's code". (continued...)
I think JavaFX, while very far from perfect, improves a lot on Swing in terms of good OOP (mostly on concepts understood between the first releases of each): you will see many more final methods in JavaFX that are designed to enforce class invariants, you will find you can write JavaFX with far less reliance on subclassing existing classes, and (pertinent to your question) there is less exposure of properties that are really implementation details. You may have to work a bit harder to code around that, but you end up with better code in the end IMHO. Re your question "... why do you need to expose the list; or, conversely, if you are exposing the list why have the additional methods?" In my earlier Swing work on custom components there were instances when listeners needed to be disconnected to change the value in that particular instance without notifying the listeners, which were almost always (99.9999% of the time) external to the component. After the change they would be reconnected. This was dictated by design factors. Other designs wouldn't require this step, but came with other costs. (continued...) Regarding "... exposing the list, [then] why the additional methods [add/removeListener]", it must be noted that the Swing event models make listeners accessible as an ARRAY, not a list, and arrays are IMHO infinitely more difficult to access and handle than lists. Even converting the array to a list doesn't mean that changes made to the list will be reflected in the array of listeners that's maintained by the component or the component's event managing support class. Thanks again for your input.
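The remove-before-add idiom from the comments is worth seeing in isolation. This is a plain-Java sketch, not JavaFX code: a java.util.List<Runnable> stands in for the property's listener list, relying only on the same contract JavaFX documents for removeListener (removing an unregistered listener is a no-op). The helper name addOnce is mine.

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerIdiom {
    // Stand-in for a JavaFX listener list: List.remove() on an absent
    // element is a no-op, mirroring ObservableValue.removeListener().
    static final List<Runnable> listeners = new ArrayList<>();

    // Remove-then-add guarantees the listener is registered exactly once,
    // without ever needing to inspect the list of listeners.
    static void addOnce(Runnable listener) {
        listeners.remove(listener); // no-op if not currently registered
        listeners.add(listener);    // registered exactly once afterwards
    }

    public static void main(String[] args) {
        Runnable listener = () -> System.out.println("fired");
        addOnce(listener);
        addOnce(listener); // calling twice does not duplicate the entry
        System.out.println(listeners.size()); // prints 1
    }
}
```

The cost, as noted above, is that each removeListener call iterates the whole listener list when the listener isn't present.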
common-pile/stackexchange_filtered
How can I open a hidden cmd in Java and then write commands to it? What I want to do is open a new cmd as the application starts and then send it different commands at different moments. Example: I start my application, it runs a new cmd that is hidden so it can't be seen, and then it writes "cd ..", the application waits ten seconds (for example), then it writes "cd .." again, and finally it writes "dir" and prints out the results of the 'dir' command. I've tried to use this code:

public static void main(String[] args) throws IOException {
    Runtime rt = Runtime.getRuntime();
    Process process = rt.exec("cmd /c cd .. ");
    process = rt.exec("cmd /c cd .. ");
    process = rt.exec("cmd /c dir");
    BufferedReader commReader = new BufferedReader(new InputStreamReader(process.getInputStream()));
    String line = "";
    while ((line = commReader.readLine()) != null) {
        System.out.println(line);
    }
}

but as I've seen, it doesn't work because it runs each command in a different cmd. Sorry for my terrible English; does anyone know how to solve that? You are overwriting your process variable, so of course it's going to execute multiple instances of cmd. What you need to do is open a single process and then write commands to its OutputStream, which is connected to the standard input of the sub-process:

public static void main(String[] args) throws IOException {
    ProcessBuilder builder = new ProcessBuilder("cmd");
    Process process = builder.start();
    OutputStream stdin = process.getOutputStream();
    BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(stdin));
    writer.write("cd ..\n");
    writer.write("dir\n");
    writer.flush();
    writer.close();
    BufferedReader commReader = new BufferedReader(new InputStreamReader(process.getInputStream()));
    String line = "";
    while ((line = commReader.readLine()) != null) {
        System.out.println(line);
    }
}
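The same single-process pattern, packaged as a helper so the output can be inspected. Since cmd only exists on Windows, this sketch drives a POSIX sh instead (substitute "cmd" for "sh" on Windows); the method name runInOneShell is mine.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.util.ArrayList;
import java.util.List;

public class ShellPipe {
    // Run several commands inside ONE shell process and collect its output.
    static List<String> runInOneShell(String... commands)
            throws IOException, InterruptedException {
        ProcessBuilder builder = new ProcessBuilder("sh"); // "cmd" on Windows
        builder.redirectErrorStream(true); // merge stderr into stdout
        Process process = builder.start();

        try (BufferedWriter writer = new BufferedWriter(
                new OutputStreamWriter(process.getOutputStream()))) {
            for (String command : commands) {
                writer.write(command);
                writer.newLine();
            }
        } // closing stdin lets the shell exit once the commands finish

        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        process.waitFor();
        return lines;
    }

    public static void main(String[] args) throws Exception {
        // Both commands run in the same shell, so state such as the
        // current directory carries over between them.
        for (String line : runInOneShell("echo first", "echo second")) {
            System.out.println(line);
        }
    }
}
```

Reading the output only after closing stdin keeps this sketch simple; a long-lived interactive session would need the reading to happen on a separate thread to avoid deadlocking on full pipe buffers.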
Proof in Peano Arithmetic I am trying to prove $S0 \times SS0 = SS0$ using the axioms of Peano Arithmetic. The axioms are: $\forall x \hspace{0.05cm}0 \ne Sx$ $\forall x \forall y \hspace{0.05cm} (Sx = Sy \rightarrow x = y)$ $\forall x \hspace{0.05cm} x + 0 = x$ $\forall x \forall y \hspace{0.05cm} x + Sy = S(x + y)$ $\forall x \hspace{0.05cm} x \times 0 = 0$ $ \forall x \forall y \hspace{0.05cm} x \times Sy = x \times y + x$ Two useful tools that are available to us (I have proven them) are the symmetry of identity and the transitivity of identity, respectively: $\vdash \forall x \forall y \hspace{0.05cm} (x = y \rightarrow y = x)$ $\vdash \forall x \forall y \forall z \hspace{0.05cm} (x = y \wedge y = z \rightarrow x = z)$ We are using an axiomatic proof system for first-order logic, so all those logical axioms are also available to us. I have completely stalled on this proof. My hunch is that we are going to want to make use of both symmetry and transitivity of identity, but I've played around with the axioms and found no route forward. In particular, we can get: $$S0 \times SS0 = S0 \times S0 + S0$$ from 6, and from here and 4 can get: $$S0 \times S0 + S0 = S(S0 \times S0 + 0)$$ But we're no closer to $SS0$ from these. When you stare long enough at these ugly symbols, you are wont to get dizzy, which is what has happened to me. I wonder if anyone can shine a light on this issue so that I might get out of this hole? Note: I have proven universal elimination, so we are fine for that. Yes, you're right. I will correct that. You are going in the right direction. For your next step, note that $S0 \times S0$ is equal to $S0 \times 0 + S0$ by axiom 6, which in turn is equal to $0 + S0$ by axiom 5. Now you've gotten rid of all the multiplication, and the next step is to get rid of all the addition (and then you'll be done). Thank you, this is encouraging. Are you suppressing an extra step in the move from $S0 \times 0 + S0$ to $0 + S0$?
From axiom five I can get $S0 \times 0 = 0$, but then we need to find a way to substitute that in @DanÖz See https://math.stackexchange.com/questions/3446823/am-i-allowed-to-substitute-terms-when-reasoning-in-peano-arithmetic This changes everything. Thank you both. Of course! Our axioms from first-order logic include: $\forall v \forall u \, (u = v \rightarrow (\phi [u/v] \rightarrow \phi))$ Maybe some intuition for how the addition and multiplication axioms work will help. Axioms 3 and 4 are a recursive definition of addition: for evaluating $x+y$, they either tell you the answer (if $y=0$) or reduce the addition to a different addition with smaller $y$. Using the mysterious symbols "$1, 2, 3, 4$" as shorthand for "$S0, SS0, SSS0, SSSS0$", they tell you things like: \begin{align} x + 0 &= x \\ x + 1 &= S(x+0) \\ x + 2 &= S(x+1) \\ x + 3 &= S(x+2) \\ x + 4 &= S(x+3) \end{align} and so on. We are defining $x+1$ in terms of $x+0$, $x+2$ in terms of $x+1$, and so forth, recursively. Axioms 5 and 6 are doing the same thing for multiplication. An expression $x \times y$ can be either evaluated directly (if $y=0$) or reduced to another multiplication with smaller $y$. You will not really go wrong applying these axioms in just about any order you can, so in some sense all you have to do is just be patient and keep going the way you have been! However, if you want to be systematic about it, you could first apply axioms 5 and 6 over and over (until you're left with only addition) then apply axioms 3 and 4 over and over (until you're left with no binary operations at all). Make sure to keep all parentheses intact as you substitute, because associativity is not one of the axioms.
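Putting the answer's hint together with the addition axioms, the complete chain of equalities runs as follows (the axiom used at each step is in parentheses; every substitution step also tacitly uses the substitution principle and the transitivity of identity discussed above):

$$\begin{align} S0 \times SS0 &= S0 \times S0 + S0 && \text{(axiom 6)} \\ &= (S0 \times 0 + S0) + S0 && \text{(axiom 6)} \\ &= (0 + S0) + S0 && \text{(axiom 5)} \\ &= S(0 + 0) + S0 && \text{(axiom 4)} \\ &= S0 + S0 && \text{(axiom 3)} \\ &= S(S0 + 0) && \text{(axiom 4)} \\ &= S(S0) = SS0 && \text{(axiom 3)} \end{align}$$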
Installing scikit-image on Android Can I run scikit-image on Android? Do I need a specific version of the OS? I haven't been able to find a doc or tutorial explaining its installation on the platform. The instructions only mention Debian and Ubuntu. Would I need to compile from source? Scikit-image is strictly a Python library, and Android runs on Java. As an alternative, I would suggest JavaCV if you want something like scikit-image. It's a Java port of the OpenCV library, which is very similar to what you are looking for. https://github.com/bytedeco/javacv However, if you are intent on using scikit-image, you can try using Jython, although it may be buggy when used with Android. http://www.jython.org/ You can use Cygwin if you want to install with Linux commands on a Windows computer. https://www.cygwin.com/
Count Occurrences of Character in Order I want to find the occurrence of each character in increasing order. For example, for input abcddec the output should be a1b1c1d1d2e1c2, but my code is giving me the output a1b1c2d2e1c2. What changes should I make?

package com.Java8;

public class Occurences {
    public static void main(String[] args) {
        String str = "abb";
        char[] arr = str.toCharArray();
        String result = "";
        for (int i = 0; i < arr.length; i++) {
            int count = 0;
            for (int j = 0; j < arr.length; j++) {
                if (arr[i] == arr[j]) {
                    count++;
                }
            }
            result = result + arr[i] + count;
        }
        System.out.println(result);
    }
}

You don't need to iterate over the string that much; you need only one for loop, actually. You just need to store the occurrences of the characters you have encountered. Take a look here as well - http://stackoverflow.com/questions/275944/java-how-do-i-count-the-number-of-occurrences-of-a-char-in-a-string Avoid the double for. Store the state in a map or something:

String str = "abbcece";
char[] charArray = str.toCharArray();
StringBuilder result = new StringBuilder();
Map<Character, Integer> occurenceMap = new HashMap<Character, Integer>();
for (Character character : charArray) {
    Integer occ = 1;
    if (occurenceMap.containsKey(character)) {
        occ = occurenceMap.get(character) + 1;
    }
    occurenceMap.put(character, occ);
    result.append(character).append(occ);
}
System.out.println(result.toString());

Just the way I was thinking:

String str = "abcddecca";
char[] arr = str.toCharArray();
StringBuilder sb = new StringBuilder();
Map<Character, Integer> counters = new HashMap<>();
for (int i = 0; i < arr.length; i++) {
    Integer count = counters.get(arr[i]);
    if (count == null) {
        count = 1;
    } else {
        count++;
    }
    counters.put(arr[i], count);
    sb.append(arr[i]);
    sb.append(count);
}
System.out.println(sb);

I would prefer to create some counter state holder and avoid the double FOR loop. Also, it's not good practice to use String concatenation in loops; it's better to use StringBuilder. Better answer.
Avoids the double for plus you used builder :) Your counting technique counts every time the character appears in the string, not every time the character appears before it in the string + 1. You need to change the for loop to something like this:

for (int i = 0; i < arr.length; i++) {
    int count = 1;
    for (int j = 0; j < i; j++) {
        if (arr[i] == arr[j]) {
            count++;
        }
    }
    result = result + arr[i] + count;
}

This will iterate through every character before the character in the outer loop and check if they are equal. You can use a Map to count the occurrence of each letter:

public static void main(String[] args) {
    String s = "abcddec";
    Map<Character, Integer> mapCount = new HashMap<>();
    StringBuilder sb = new StringBuilder();
    for (char c : s.toCharArray()) {
        if (mapCount.containsKey(c)) {
            mapCount.put(c, mapCount.get(c) + 1);
        } else {
            mapCount.put(c, 1);
        }
        sb.append(String.valueOf(c) + mapCount.get(c));
    }
    System.out.println(sb.toString());
}

Time complexity of the solution is O(N). Your inner for loop's upper limit should be i:

public class Occurences {
    public static void main(String[] args) {
        String str = "abb";
        char[] arr = str.toCharArray();
        String result = "";
        for (int i = 0; i < arr.length; i++) {
            int count = 1;
            for (int j = 0; j < i; j++) { // j's upper limit should be i
                if (arr[i] == arr[j]) {
                    count++;
                }
            }
            result = result + arr[i] + count;
        }
        System.out.println(result);
    }
}
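On Java 8+, the HashMap-based answers above can be written more compactly with Map.merge, which does the "insert 1 or increment" in a single call (the class and method names here are mine, not from the question):

```java
import java.util.HashMap;
import java.util.Map;

public class OrderedCharCounts {
    // Append each character followed by how many times it has occurred so far.
    static String countInOrder(String str) {
        Map<Character, Integer> counts = new HashMap<>();
        StringBuilder sb = new StringBuilder();
        for (char c : str.toCharArray()) {
            // merge() inserts 1 on first sight, otherwise adds 1 to the
            // old value, and returns the new running count.
            int n = counts.merge(c, 1, Integer::sum);
            sb.append(c).append(n);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(countInOrder("abcddec")); // prints a1b1c1d1d2e1c2
    }
}
```

Like the other map-based answers, this is a single pass over the string, O(N) in its length.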
Trouble uploading json file with flask For some reason, this won't accept json files.

@app.route('/get_data', methods=['POST'])
def get_data():
    dataFile = request.files['file_path']
    dataFileName = dataFile.filename
    dataFile.save(os.path.join(uploads_dir, dataFileName))

I keep getting this error: that seems to be in the front end; check the file upload dialog/thing settings @Pat-Laugh it was a front-end issue! Thank you! You seem to have json set as a file extension in your template by <input type="file" accept="json">. (The template is not supplied, so I can't pinpoint the line.) This is not an error of the backend (Flask) but of your template code (Jinja/HTML). It would be nice if you could supply an MRE for such issues. For more information about <input type="file">, take a look at the MDN documentation. Example of a correct accept: <input type="file" accept=".json"> This will only allow *.json files, but keep in mind that users may still supply other files manually, so create a fallback or validation when parsing/saving the file. You're totally right, it was a front-end issue. There was a list of acceptable extensions that I had to add json to. Thank you sm and I'll keep in mind adding an MRE for the future! @Mehak Would you mind marking this answer as correct to close this question, or are there other related problems?
Example of using 3rd party React component to RequireJS, TypeScript project How to import 3rd party React components into TypeScript TSX files? I have a pretty big JavaScript front-end project which uses: TypeScript, React (0.14.6), React-dom (0.14.6), RequireJS. I want to add two 3rd party React components: React-DatePicker (https://github.com/Hacker0x01/react-datepicker) and React-Select (https://github.com/JedWatson/react-select). To make things a little clearer, I have created a basic lightweight project on GitHub that duplicates the basic tooling (https://github.com/mgrackerl/react-requirejs-example), where you can try your code. About the sample project files: rconfig.js - RequireJS module configuration file. typings - this directory holds d.ts files. scripts/main.tsx - main JSX file where I should add the 3rd party components. To use the DatePicker, what I want as the final result is something like this:

import React = require('react');
import ReactDOM = require('react-dom');
import DatePicker = require('react-datepicker');

let handleChange = function(date: Date) {
    this.setState({ startDate: date });
};

var myDivElement = <div>
    <DatePicker selected={new Date()} onChange={handleChange} />
</div>;

ReactDOM.render(myDivElement, document.getElementById('example'));

Can you explain what isn't working with your current approach? Are you asking about how to set your project up in general? Yes, how to set it up in general: how to correctly import 3rd party components in the current project setup. I would like to hear from other developers who have already gone down this path. Of course, I will be updating the post (and the GitHub project) while I do more research. Thanks @DanielRosenwasser Our website actually has a guide on React (with Webpack) here and a guide that uses AMD modules (using RequireJS) here. Perhaps that will help a bit. I would continue using the same types of imports you've been using though, as it correctly reflects that these 3rd party libraries aren't actually ES-style modules.
After some digging in GitHub issues, I have finally imported the DatePicker component. Commit: https://github.com/mgrackerl/react-requirejs-example/commit/4afcd28db1ee0f5a9fa0fe65bf505e521814f562 Notes about importing the react-datepicker component: 1) You have to bring typings files into the project to make the TypeScript compiler work (the GitHub project dir: typings/react-datepicker). 2) For the DatePicker component, we have to use MomentJS (http://momentjs.com/), not a JavaScript Date object. scripts/main.tsx file:

/// <reference path="../typings/index.d.ts" />
import React = require('react');
import ReactDOM = require('react-dom');
import DatePicker = require('react-datepicker'); // 3rd party DatePicker component
import moment = require('moment'); // used by the DatePicker component

// callback function for the DatePicker onChange event
let handleChange = function(date: any) {
    console.log("date is: " + date);
};

// Create a new React element that consumes the 3rd party DatePicker component
var myDivElement = <div>
    <DatePicker selected={moment()} onChange={handleChange} />
</div>;

// Render the newly created "myDivElement" element
ReactDOM.render(myDivElement, document.getElementById('example'));

And of course set up the RequireJS settings:

require.config({
    "urlArgs": "v=01",
    "baseUrl": "",
    "paths": {
        "react": "node_modules/react/dist/react",
        "react-dom": "node_modules/react-dom/dist/react-dom",
        "moment": "node_modules/moment/min/moment.min",
        "react-onclickoutside": "node_modules/react-onclickoutside/index",
        "react-datepicker": "node_modules/react-datepicker/dist/react-datepicker",
        "main": "scripts/main",
    },
    "shim": {
        "datatable": { "deps": ['jquery'] },
        "dtbootstrap": { "deps": ['datatable'] },
    },
    jsx: {
        fileExtension: ".js",
    }
});
What is DISPLAY=:0? What is DISPLAY=:0 and what does it mean? It isn't a command, is it? (gnome-panel is a command.) DISPLAY=:0 gnome-panel This question is not a duplicate. The linked question is explaining the format and meaning of the DISPLAY env variable, whereas this question is asking about "assignment to an env variable and the affected program be written in the same line". DISPLAY=:0 gnome-panel is a shell command that runs the external command gnome-panel with the environment variable DISPLAY set to :0. The shell syntax VARIABLE=VALUE COMMAND sets the environment variable VARIABLE for the duration of the specified command only. It is roughly equivalent to (export VARIABLE=VALUE; exec COMMAND). The environment variable DISPLAY tells GUI programs how to communicate with the GUI. A Unix system can run multiple X servers, i.e. multiple displays. These displays can be physical displays (one or more monitors), or remote displays (forwarded over the network, e.g. over SSH), or virtual displays such as Xvfb, etc. The basic syntax to specify displays is HOST:NUMBER; if you omit the HOST part, the display is a local one. Displays are numbered from 0, so :0 is the first local display that was started. On typical setups, this is what is displayed on the computer's monitor(s). Like all environment variables, DISPLAY is inherited from parent process to child process. For example, when you log into a GUI session, the login manager or session starter sets DISPLAY appropriately, and the variable is inherited by all the programs in the session. When you open an SSH connection with X forwarding, SSH sets the DISPLAY environment variable to the forwarded connection, so that the programs that you run on the remote machine are displayed on the local machine. If there is no forwarded X connection (either because SSH is configured not to do it, or because there is no local X server), SSH doesn't set DISPLAY.
Setting DISPLAY explicitly causes the program to be displayed in a place where it normally wouldn't be. For example, running DISPLAY=:0 gnome-panel in an SSH connection starts a Gnome panel on the remote machine's local display (assuming that there is one and that the user is authorized to access it). Explicitly setting DISPLAY=:0 is usually a way to access a machine's local display from outside the local session, such as over remote access or from a cron job. It's an environment variable that is passed just to that program, rather than the shell as a whole. This happens when you set a variable on the same line as a command. X11 programs need to know where to display windows, since it's a client/server system and you could be displaying on a remote machine. This simply means use the first display on the local machine. This is normally set up automatically when logging in to a desktop environment. For example, open a graphical terminal and type echo $DISPLAY. Does bash syntax allow an assignment to an env variable and the affected program to be written on the same line, separated by white space? Yes, it's exactly as you quoted in the question. @Tim All POSIX-compliant shells allow that. It means to set and export the environment variable just for that one command, but not to affect the value of the shell variable (if it already has a value) afterwards. @MarkPlotnick - it's a little more fine-grained than that. When prefixing a variable declaration to any command but a shell function or a special builtin, the declaration should not affect the parent shell's definition for the spec'd var; but when doing it for either of those, POSIX states that the variable declaration should stick. So, POSIXLY, one=$1 shift should simultaneously define the current shell variable $one to the same value as the current shell's first positional while removing said positional. bash, by the way, breaks w/ the spec by default.
@mikeserv Thanks as always for the more precise and correct description.
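The "for the duration of the specified command only" behavior is easy to verify in any POSIX shell. A small sketch (the variable name FOO is arbitrary, chosen to avoid clobbering anything real):

```shell
# The assignment applies only to the one command it prefixes...
FOO=bar sh -c 'echo "child sees: $FOO"'
# ...and does not leak into the parent shell afterwards.
echo "parent sees: ${FOO:-unset}"
```

Assuming FOO was not already set, this prints "child sees: bar" followed by "parent sees: unset" - which is exactly why DISPLAY=:0 gnome-panel affects only that one gnome-panel invocation.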
What is the use of UV LEDs in my fridge? I have a new fridge in which there are, apparently, some LEDs emitting UV light. I read on the internet that it kills bacteria and helps the food last longer. Does this really work? And what about the claim that it somehow preserves vitamins? I think the vitamins part of the question is probably okay, since it's not really nutritional, just a food storage and preservation question. I'm glad this was migrated. I'm interested in answers. What's the wavelength of these LEDs? What's the output in mW per square centimeter at the food surface? Are the fridge shelves UV transparent, or are there shadowed areas? It's POSSIBLE to sterilize things with current generation UV LEDs, but it's also a great marketing gimmick. Not having run across this gimmick, I don't know how they are arranged in the fridge. I know that germicidal lamps are used to kill airborne germs in fixtures arranged (overhead or in ventilation ducts) so that they don't expose eyes to the light - so I could imagine they might be doing something similar to reduce germs/spores without needing to directly irradiate all food surfaces. But gimmick still seems like the most accurate explanation. These are cool: http://www.qphotonics.com/Deep-UV-light-emitting-diodes/ I didn't realize they'd gotten all the way down to 240nm. Not sure it's right to call these things LEDs, since we can't see the output. X-Ray "LEDs" are next. @WayfaringStranger we've had IR LEDs almost as long as we've had LEDs, and they're not visible either. Defining "light" as "visible light" is all very well in everyday use, but the edges of the visible spectrum aren't well defined, so this definition doesn't work technically. Yes, the UV light will make your fruit and vegetables last longer. Really it depends on the wavelength of UV. 275nm is used for killing bacteria and will burn your retinas; 385nm is less harsh to the skin or eyes and will still kill bacteria to a degree.
I just read a study in which they used 285nm in a controlled refrigerator setting against a static test with no UV on strawberries, measuring how long they would stay fresh. The UV-irradiated strawberries lasted 9 days without growing mold. The non-UV strawberries started growing mold all over by 9 days. They tested the nutritional content of each, and the UV-treated ones had a higher nutritional content than the non-UV ones. There are 285nm LEDs and even 254nm LEDs available ... and since you can switch them off the moment the fridge door is opened, you could use them ... Problem: Most glass containers are black to those wavelengths (even Pyrex - actually that is exactly how the UV-A/B/C bands were defined, by what kinds of glass did or did not filter the radiation). It's for sterilization -- for years they've sold "UV pens" for hikers to sterilize water, and kits with UV lamps to keep fish tanks clean. Of course, it won't help if items are in opaque containers, tightly packed, wrapped in foil, etc ... so you'll likely need to start using clear containers for it to be beneficial ... and even then, it'll only help the outside of the food, and the shelves and walls of the fridge, not the inner portions of the food being stored. UV light also causes clear plastics to degrade over time. They'll become less transparent (typically taking on a yellow/brown hue) and more brittle ... so it could also cause you to need to replace your storage containers much more often. I have no knowledge of UV effects on vitamins. I can't imagine it has any real effect on vitamins/nutrients, since it's just affecting the surface. (Unless you're comparing to food that spoils and gets thrown out and therefore does not supply you any vitamins!) Thank you for your answer! I don't do anything special to keep my food in transparent containers; they are just translucent.
Now that I have been using my fridge for a while, I have noticed that my food is degrading more slowly than in my old fridge, so I am having a hard time telling whether that comes from the cooler temperature, from the UV, or... both? Whether UV might help a food last longer depends on what the food is. Fats (like shortening or ghee) and most spices should be protected from light. The UV in sunlight is part of what turns fats rancid, and helps dried herbs and spices lose their flavor. You will never achieve a truly sterile environment at home. Bacteria and molds are everywhere, in the air, on every surface. In any case, any reduction in pathogens due to the UV is strictly a surface treatment, and the food will be quickly recolonized from the environment. I recommend practicing good sanitation (such as not cutting vegetables on a board just used for raw chicken) in general, and not worrying about a gimmick such as a UV light. I cannot speak to the stability of vitamins under UV. So I'm guessing the best case is for really sensitive foods that go off quickly from the surface: if you're lucky and everything's powerful enough, keeping the surface a little cleaner could make the spoilage take a couple of times as long... True, bacteria and molds are everywhere. But a fridge would be an airtight, closed environment. It should be possible to achieve a higher level of sterility here, or indeed reduce the number of pathogens, certainly over a considerable amount of time (over multiple nights). If not, sterilization would not be possible at all. UV light will destroy bacteria etc., but on direct contact only. They are used in commercial food storage to self-clean all the surfaces of the food storage system and the containers placed within it. Food should be in light-proof containers if the UV light is very strong, or there will be some surface degradation. For a domestic fridge, this is most likely a marketing gimmick, though it may help reduce odors etc.
if the owner doesn't clean the fridge very well or often? Since the problematic germs tend to be on the surface of the produce, why should it not work - another question is how to irradiate all sides of a given piece of produce without moving it... Is it a Sub-Zero? Sub-Zero uses a UV light to filter the air every 20 minutes, not to light up the entire fridge. It takes bacteria and odors, as well as ethylene gas, out of the air circulated in the fridge. This is not a new concept: Revco Corporation introduced this at least as far back as the '60s in their built-in Gourmet line. They continued it until they stopped making home refrigeration in the late '70s or early '80s. They continue to make lab refrigeration, so they probably continue to use UV air filtration. https://www.google.com/amp/gizmodo.com/5034434/sub-zero-fridge-uses-nasa-air-purification-technology-to-keep-foods-fresh/amp
common-pile/stackexchange_filtered
LeetCode #512 Game Play Analysis MySQL I was doing this question, LeetCode #512. The original code I wrote was: select a.player_id, a.device_id from (select player_id, device_id, min(event_date) from Activity group by player_id) a My logic here is to first filter out the minimal corresponding event date for each player. It seems that the code selected the correct earliest date that I wanted, but failed to pick the device id that matches that specific date. Can someone tell me why this is happening and how to fix my code? Thank you upfront for your time! Modify your query as below to select the player_id and device_id corresponding to the minimum event_Date. select a.player_id, a.device_id from (select player_id, device_id, min(event_date) from Activity group by player_id,device_id) a Alternatively, use a window function as below: select a.player_id, a.device_id from (select player_id, device_id,event_Date,row_number() over (partition by player_id order by event_Date asc) AS rn from Activity) a where rn = 1 Thanks for the quick response, but it does not do the job. Can you show a sample of your data and the current output? Try the second query with the window function. Sample data: {"headers":{"Activity":["player_id","device_id","event_date","games_played"]},"rows":{"Activity":[[1,2,"2016-03-01",5],[1,2,"2016-05-02",6],[1,3,"2015-06-25",1],[3,1,"2016-03-02",0],[3,4,"2016-02-03",5]]}} Current output: {"headers": ["player_id", "device_id"], "values": [[1, 2], [3, 1]]} Expected: {"headers": ["player_id", "device_id"], "values": [[1, 3], [3, 4]]} The window function does work! Thanks for your help!
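The window-function answer can be checked on the exact sample data from the thread. Below is a hedged sketch using an in-memory SQLite database standing in for MySQL (SQLite has supported ROW_NUMBER() since 3.25; table and column names follow the sample data above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Activity (player_id INT, device_id INT, event_date TEXT, games_played INT)")
conn.executemany(
    "INSERT INTO Activity VALUES (?, ?, ?, ?)",
    [(1, 2, "2016-03-01", 5), (1, 2, "2016-05-02", 6), (1, 3, "2015-06-25", 1),
     (3, 1, "2016-03-02", 0), (3, 4, "2016-02-03", 5)],
)

# Rank each player's rows by event_date; rn = 1 is the first login, and
# device_id travels along with the whole row, unlike a bare MIN() column.
rows = conn.execute("""
    SELECT player_id, device_id FROM (
        SELECT player_id, device_id,
               ROW_NUMBER() OVER (PARTITION BY player_id ORDER BY event_date) AS rn
        FROM Activity
    ) WHERE rn = 1
    ORDER BY player_id
""").fetchall()
print(rows)  # [(1, 3), (3, 4)]
```

This matches the expected output in the thread, which the bare `min(event_date)` version cannot produce because the other selected columns are not tied to the row that held the minimum.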
Automapper not mapping my list properties In my MVC5 project I use automapper to map my viewmodels to my models. But it seems that I'm doing something wrong, because not all my properties are mapped. Here is my View Model public class PlanboardViewModel { public int Id { get; set; } [Display(Name = "Titel")] public string Title { get; set; } [Display(Name = "Omschrijving")] public string Description { get; set; } [Display(Name = "Verzoektype")] public int AbsenceTypeId { get; set; } public List<PlanboardEventMapViewModel> EventMap { get; set; } public List<PlanboardEventDetail> EventDetails { get; set; } public List<PlanboardRequest> PlanboardRequests { get; set; } } I use a separate class for my profiles: public class PlanboardMappingProfile : Profile { protected override void Configure() { //CreateMap<AbsenceType, AbsenceTypeViewModel>(); //.ForAllMembers(opt => opt.Condition(s => !s.IsSourceValueNull)); CreateMap<PlanboardViewModel, Planboard>() .ForMember(dest => dest.CSVHPlanboardEventDetail, opt => opt.MapFrom(src => src.EventDetails)) .ForMember(dest => dest.CSVHPlanboardEventMap, opt => opt.MapFrom(src => src.EventMap)) .ForMember(dest => dest.CSVHPlanboardRequest, opt => opt.MapFrom(src => src.PlanboardRequests)); CreateMap<PlanboardEventMapViewModel, PlanboardEventMap>(); } } In my repository I have the following code: public int Create(PlanboardViewModel planboardViewModel) { try { // map the viewmodel to the planboard model // Map the planboards to the view model var config = new MapperConfiguration(cfg => { cfg.AddProfile<PlanboardMappingProfile>(); cfg.CreateMap<PlanboardViewModel, Planboard>(); cfg.CreateMap<PlanboardEventMapViewModel, PlanboardEventMap>(); }); IMapper mapper = config.CreateMapper(); Planboard planboard = mapper.Map<Planboard>(planboardViewModel); // Some more code here When I submit the page and use the debugger, It shows that planboardViewModel has a list of values for PlanboardEventMapViewModel, PlanboardEventDetail, 
PlanboardRequest. The system is not telling me that there are any errors. When I check planboard after the mapping, it does not show any values for CSVHPlanboardEventDetail, CSVHPlanboardEventMap and CSVHPlanboardRequest. EDIT My PlanboardModel is created DB First EF6: public partial class Planboard { [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2214:DoNotCallOverridableMethodsInConstructors")] public Planboard() { this.CSVHPlanboardEventDetail = new HashSet<PlanboardEventDetail>(); this.CSVHPlanboardEventMap = new HashSet<PlanboardEventMap>(); this.CSVHPlanboardRequest = new HashSet<PlanboardRequest>(); } public int ID { get; set; } public string Title { get; set; } public string Description { get; set; } public int StatusID { get; set; } public Nullable<int> AbsenceTypeID { get; set; } public virtual AbsenceType AbsenceTypes { get; set; } [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2227:CollectionPropertiesShouldBeReadOnly")] public virtual ICollection<PlanboardEventDetail> CSVHPlanboardEventDetail { get; set; } [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2227:CollectionPropertiesShouldBeReadOnly")] public virtual ICollection<PlanboardEventMap> CSVHPlanboardEventMap { get; set; } [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2227:CollectionPropertiesShouldBeReadOnly")] public virtual ICollection<PlanboardRequest> CSVHPlanboardRequest { get; set; } } Can you also show the Planboard model? @AlexandruMarculescu PlanboardModel is created with EF6 I've added the model to the question
How do I launch my app via a custom URL in an email I'm adding a custom URL using the android:scheme in my intent filter as follows <intent-filter> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <data android:scheme="myscheme" android:host="myhost" /> </intent-filter> I'm sending the phone an email with the following in the contents: myscheme://myhost?data=whatever but the above link shows up as plain text, i.e. not as a link. You need to send your email in HTML, with your link in an <a> tag: <a href='myscheme://myhost?data=whatever'>Launch App</a> Automatic link parsing is almost certainly only done with links starting with http:// or www., and it varies from email client to email client anyway. Ok, I tried that and it didn't work. The only solution I can offer is to actually use http:// with a link going to your site, to a specific app page, with the same GET parameters. You can register an intent-filter to intercept this with the app and handle it appropriately, and if the user doesn't have the app, the web page instructs them to install it. Thanks. I tried that, but now Launch App shows up in the email as plain text. @Russell You're right, just tried it myself. In that case I'm going to have to say there's no solution. Using http will work, although it makes the user 'Complete action using' either the Browser or my app. We must be missing something here, I don't understand why android:scheme doesn't work. Link to your website and then redirect to "myscheme://myhost?data=whatever"
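For the http fallback described above, the intent filter would look roughly like this. This is a sketch, not code from the question: the host and path are placeholders, and the BROWSABLE category is what allows links opened from a browser or mail client to reach the activity at all:

```xml
<intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="http"
          android:host="www.example.com"
          android:pathPrefix="/launch" />
</intent-filter>
```

The missing BROWSABLE category is also a plausible reason the original `myscheme://` filter never fired from the email client.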
Logical OR operator in IF statement I want the following: if col1_list is not null or empty, or if col2_list is not null or empty, do some logic and set rc = 0; else set rc = 1. If any of the col_lists is null or empty, set rc to 1. But the logical OR operator does not work in the if statement - or what am I doing wrong? declare @col1_list varchar(max) , @col2_list varchar(max) declare @tbl TABLE (col1 int , col2 int) declare @rc char(1) = '0' set @col1_list = '2|6|7|8|' set @col2_list = '1|' IF (@col1_list IS NOT NULL OR @col1_list <> '' OR @col2_list IS NOT NULL OR @col2_list <> '') BEGIN DECLARE @myXML1 AS XML = N'<H><r>' + REPLACE(@col1_list, '|', '</r><r>') + '</r></H>' DECLARE @myXML2 AS XML = N'<H><r>' + REPLACE(@col2_list, '|', '</r><r>') + '</r></H>'; with mycte as (SELECT Vals1.id.value('.', 'NVARCHAR(50)') AS val1 FROM @myXML1.nodes('/H/r') AS Vals1(id)), mycte1 as (SELECT Vals2.id.value('.', 'NVARCHAR(50)') AS val2 FROM @myXML2.nodes('/H/r') AS Vals2(id)) insert into @tbl (col1,col2) select val1,val2 from mycte,mycte1 where val1 <> '' and val2 <> '' set @rc = '0' END ELSE set @rc ='1' select @rc as [rc] I get rc = 0 when one of the col_lists is null or empty. What exactly is not working? When you say that it is not working, explain what you want and what you actually get. I get rc = 0 when one of the col_lists is null or empty. declare @rc char(1) = '00' - it's CHAR(1) and you are assigning two characters to it. Not fair :) Try changing your if condition to this: IF (NOT(@col1_list IS NULL OR @col1_list = '') AND NOT(@col2_list IS NULL OR @col2_list = '')) This one checks 2 things: @col1_list is neither null nor empty, AND @col2_list is neither null nor empty. I want that, when any of the col_lists is empty or null, the query enters the else statement and sets @rc = 1, but it does not enter the else statement.
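The suggested fix, requiring both variables to be non-NULL and non-empty, can be exercised in isolation. Here is a hedged sketch with SQLite standing in for SQL Server; the boolean logic is the same as in T-SQL, though SQLite has no IF statement, so a CASE expression stands in for it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def rc(col1, col2):
    # Mirrors: IF (NOT(@col1_list IS NULL OR @col1_list = '')
    #             AND NOT(@col2_list IS NULL OR @col2_list = ''))
    row = conn.execute(
        """SELECT CASE WHEN NOT(? IS NULL OR ? = '')
                        AND NOT(? IS NULL OR ? = '')
                  THEN '0' ELSE '1' END""",
        (col1, col1, col2, col2),
    ).fetchone()
    return row[0]

print(rc("2|6|7|8|", "1|"))  # '0': both lists present
print(rc("2|6|7|8|", ""))    # '1': one list empty
print(rc(None, "1|"))        # '1': one list NULL
```

The original condition chained everything with OR, so a single non-empty variable was enough to make the whole predicate true; the AND of the two NOT(...) groups is what routes NULL/empty input into the else branch.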
MongoDB alias name usage in find query not working I am using the command below to find records in the contacts collection: db.contacts.find({"field1":"1"}) The problem is that I want to use an alias name for field1, as SL.NO. I tried it this way and it's not working... db.contacts.find({"field :n, as: :SL.NO" : "1"} Can anyone help me with this issue? BTW, I am currently using Mongo 1.6.5. Possible duplicate of Mongodb creating alias in a query. Where are you finding that syntax? Are you using the Mongo shell? Can you point to the documentation about this feature? I've never seen it before. @thilo yes... I will point to it.. @noa yes.. I am trying to execute it from the Mongo shell. Some drivers like Mongoose support that feature, but there is no such feature in MongoDB itself or the Mongo shell. This was accurate at the time of writing. If this answer is obsolete, perhaps instead of downvoting you could comment and provide more information.
Object slicing with base class vector and derived class pointers Here is a simplified version of my scenario: class Base { class Entry { ... }; std::vector<Entry *> entryList; virtual void processEntry() = 0; void OnProcessEntry(); // processEntry callback }; class Derived : public Base { class Entry : public Base::Entry { ... }; void processEntry() { ... entryList.push_back(new Entry); OnProcessEntry(); } }; When I copy a Derived::Entry pointer into entryList, the object is sliced to Base::Entry. Is there any way I can get the Entry back to its original "unsliced" state within the base class method OnProcessEntry? If anybody can recommend a more appropriate title please do and I will edit. Thanks. "When I copy a Derived::Entry pointer into entryList, the object is sliced to Base::Entry" - why do you think so? You should reread what object slicing is. It happens with objects. Pointers don't slice. If you have some code with an actual problem, then you should show that. There is no problem in the example you posted here. @formerlyknownas_463035818 Because when I look at the pointers inside entryList when debugging, I can no longer see the members exclusive to Derived::Entry, only the base members. If that isn't slicing, then what is it? That's not slicing, that's the interface you can access via a Base pointer. The data is still there (not sure how to see it with a debugger though). Cast the pointer to Derived* and you can also access Derived's members. Get a better debugger? Please try to include such information in the question. It appears that your actual question/problem is that your debugger shows you only the Base interface for a Base pointer, but that's actually not a big surprise... @formerlyknownas_463035818 I see, that's very interesting. I didn't know that and hopefully you can see why I made the incorrect assumption. Thank you for teaching me something.
:) I will flag this as a duplicate; it is not really the same question, but it explains why your premise is broken. If you want a dedicated answer, please fix your question to include why you think your code has a problem / what is the actual "problem" you want to fix. Ok. I was thinking maybe I should delete the question. Do you think it is better that I edit it to include what led me to think what I did, and keep it up? @notanalien You probably should explain the whole story regarding your debug observations in your question title and body.
How to convert the following grammar to LL(1)? The following grammar is given: \begin{align*} M &\rightarrow d M d \\ M &\rightarrow e M e \\ M &\rightarrow f M f \\ M &\rightarrow \varepsilon \end{align*} I've checked it with the LL(1) table and there are ambiguous cell entries. Now, I've tried to convert it to LL(1), but without success. Is this not possible for this grammar? Why? Have you tried removing left recursion and left-factoring the grammar? In my opinion, there is no left recursion. Left-factoring will break up the grammar, because, as shown above, each production always surrounds the non-terminal with two matching terminals. Good, you're right about left recursion. Sometimes you need to add/modify production rules of your grammar to convert to LL(1), and left-factoring is definitely required in your case to get closer to a more useful result. This generates the language of even-length palindromes over the characters d, e, f, which I believe to be LL(*). Consider the case where you have a string consisting only of d's. To choose the right production you need to know if the d is the left one or the right one. To know that, you have to know the length of the string, which requires arbitrary lookahead.
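The claim that the grammar generates even-length palindromes, which is why no fixed lookahead can suffice, can be checked mechanically by enumerating derivations. A small sketch:

```python
# Enumerate all strings derivable from M within a given derivation depth
# and check that each one is an even-length palindrome over {d, e, f}.
def derive(depth):
    if depth == 0:
        return {""}             # only M -> epsilon terminates here
    inner = derive(depth - 1)
    out = {""}                  # M -> epsilon
    for c in "def":             # M -> c M c wraps a matching pair around M
        out |= {c + w + c for w in inner}
    return out

strings = derive(4)
assert all(w == w[::-1] and len(w) % 2 == 0 for w in strings)
print(sorted(strings, key=lambda w: (len(w), w))[:5])  # ['', 'dd', 'ee', 'ff', 'dddd']
```

Every production adds one symbol on each side of M, so a parser reading left to right cannot tell whether the current `d` opens a new pair or closes one without already knowing where the middle of the string is.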
Is there a way for an application to remain on a specific workspace? I have a workspace layout where several apps are opened in different workspaces in full-screen mode. I got used to this layout and remember the corresponding numbers, so it is very fast to switch between full-screen apps just by pressing Win+num. However, if I close an app in any workspace and it becomes empty, all other apps opened in the following workspaces shift left to that available workspace and their corresponding workspace numbers change as well. This is very annoying. Is there a way to stick an app to a specific workspace, so that if a workspace becomes empty it remains empty? PS: On macOS one can assign a specific workspace to an app, so when you run it, it opens in that workspace automatically and stays there, which would be bliss to have in eOS. Thanks! I just found a solution - Dynamic Workspaces must be disabled in GNOME settings: gsettings set org.pantheon.desktop.gala.behavior dynamic-workspaces false Now the empty workspace is not removed from the workspace chain.
Multiple values for the 'scope' property in a manifest.json file. I am working on an existing application with a legacy code base. While I would love to see the entire app converted to a PWA some day, for now my plan is one page (one URL) at a time. For this, I know that the "scope" property is going to be my best friend for some time. While I could pass "." as a value to the property and treat all the routes as part of the PWA, as I mentioned earlier, that's not the plan. Hence, the below is not an option for me. { "scope" : "." } For now, I plan on covering only two routes under the PWA scope, the "list page" and the "details page". Hence I would have preferred something like the below to work, but it did not. { "scope" : [ "/list", "/id/details" ] } Any suggestion(s)? The scope member is a string that represents the navigation scope of this web application's application context. https://w3c.github.io/manifest/#scope-member It will not support an array of multiple values. An option would be to use the scope /pwa/ (or similar) and, as you migrate sections of the app, redirect /list to /pwa/list, etc.
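The single-scope workaround from the answer would look something like this in manifest.json (the /pwa/ prefix and start_url are purely illustrative, not from the original question):

```json
{
  "scope": "/pwa/",
  "start_url": "/pwa/list"
}
```

The migrated list and details pages would then be served under /pwa/list and /pwa/id/details, with the legacy URLs redirecting into the scope as each page is converted.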
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value I have an error when I try to run my program: The conversion of a varchar data type to a datetime data type resulted in an out-of-range value. Is there anything wrong with my select statement? int month=0; if (RadioButtonList1.SelectedIndex == 0) { month = 1; } else if(RadioButtonList1.SelectedIndex == 1) { month = 2; } else if(RadioButtonList1.SelectedIndex == 2) { month = 3; } SqlCommand cmd = new SqlCommand("SELECT thDate, thType, thAmountIn, thAmountOut from [Transaction] Where thDate > dateadd(month, -" + month + ", GETDATE())", myConnection); da = new SqlDataAdapter(cmd); myConnection.Open(); da.Fill(ds, "[Transaction]"); myConnection.Close(); if (!IsPostBack) { ViewState["vs_Transaction"] = (DataTable)ds.Tables["Transaction"]; GridView1.DataSource = ds.Tables["Transaction"]; GridView1.DataBind(); Below is my database table for thDate. The error is in the title, I guess - there was a comment that said "And what is the error?", but I think that was deleted :P Do you have any idea what the problem with my code is that causes the error? Where do you get the error? What line? Give us a little more info. Also, is your data varchar type? That's what it looks like.. Yeah, my data is varchar. It says: The conversion of a varchar data type to a datetime data type resulted in an out-of-range value. At da.Fill(ds, "[Transaction]"); Try to use Convert(datetime, thDate,120) in your select, and in the where. If this works for you, I will post it as an answer. Hi, you mean which part of the select? At the where clause? SELECT Convert(datetime, thDate,120), thType, thAmountIn, thAmountOut from [Transaction] Where Convert(datetime, thDate,120)> dateadd(month, -" + month + ", GETDATE())", myConnection); Hi, I've tried it and it's the same error. Sorry, use Convert(datetime, thDate,103). Replace the 120 with 103, this should work!!
I've tried it and it works. The problem is that you're storing the data as Varchar and filling the datatable, which probably expects a Datetime. You should cast the values in the query in two places: In the select, for returning as a Date In the Where, because you're comparing a Date against a Varchar. So, your select clause will be SqlCommand cmd = new SqlCommand("SELECT Convert(datetime, thDate,103) as thDate, thType, thAmountIn, thAmountOut from [Transaction] Where Convert(datetime, thDate,103)> dateadd(month, -" + month + ", GETDATE())", myConnection); There's no more error. However, the date in the gridview was not selected based on the radio button selection. Do you have any idea why? For example? What is the date in your Db, and what is the date shown on the GridView? I've edited the top. The date in the db and the gridview should be the same. Well, are you sure that the Db result that you first showed is Ok? I mean, your Where clause is adding 0, 1 OR 2 to getdate() (which means, the actual date), and returning the fields that are bigger than that. Your first output had a value that was incorrect, "9/2/2012 10:23..". Probably that was because it was comparing the fields as Varchar (aka strings) and not as dates. I mean, the select is returning other results. If you are SURE that the first result is the one you want, remove the "Convert" in the where (put it back as before). I've changed the date before that, no more "9/2/2012 10:23". Your first result - referring to 9/2/2012? Or? Because I have removed the "Convert" in the where clause, but it says: An expression of non-boolean type specified in a context where a condition is expected, near ','. "Your first result" - yes, referring to 9/2/2012. So, again, what is the date that you're getting in your db, and what is the wrong date in the datagridview? Also, what values do you have in your radio button? Are the values you're selecting correct? let us continue this discussion in chat
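The root cause here is dd/MM/yyyy strings being interpreted with the wrong field order; style 103 in T-SQL's CONVERT means dd/mm/yyyy. The same failure is easy to reproduce outside SQL Server. A small Python sketch (the sample date is hypothetical) shows why a day value above 12 blows up under the wrong format:

```python
from datetime import datetime

s = "25/08/2012"  # a dd/MM/yyyy string, like the varchar thDate column

# Parsed with the matching format, i.e. what CONVERT(datetime, thDate, 103) does:
print(datetime.strptime(s, "%d/%m/%Y"))  # 2012-08-25 00:00:00

# Parsed as MM/dd/yyyy, "25" cannot be a month, so parsing fails outright -
# the same class of "out-of-range value" error SQL Server raised with style 120:
try:
    datetime.strptime(s, "%m/%d/%Y")
except ValueError as e:
    print("parse failed:", e)
```

Dates like 05/08/2012 parse under either format (silently swapping day and month), which is why this bug often only surfaces once a day greater than 12 appears in the data.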
C# Chart showing unnecessary changes Unnecessary Changes in Chart (Image Link) I am working in C#/WinForms. This is my code. I am trying to show in a chart the number of people registered on the selected date, which works perfectly on the first selection of a date from the dateTimePicker. But on the second selection and second button click it shows unexpected changes (please refer to the image). I don't want these changes; I want the user to get a correct output of only 5 bars every time they pick a date. How can I do that? Please help! Thanks in advance. private void LoadChart_Click(object sender, EventArgs e) { string date1 = dateTimePicker1.Value.ToString("dd-MM-yyyy"); string date2 = dateTimePicker1.Value.AddDays(1).ToString("dd-MM-yyyy"); string date3 = dateTimePicker1.Value.AddDays(2).ToString("dd-MM-yyyy"); string date4 = dateTimePicker1.Value.AddDays(3).ToString("dd-MM-yyyy"); string date5 = dateTimePicker1.Value.AddDays(4).ToString("dd-MM-yyyy"); string cmdTextDate = "SELECT * FROM AllInOneTable WHERE Date=@Date"; OleDbCommand commandDate = new OleDbCommand(cmdTextDate, my_con); commandDate.CommandText = cmdTextDate; commandDate.Parameters.AddWithValue("@Date", date1); DataSet data = new DataSet(); OleDbDataAdapter da = new OleDbDataAdapter(commandDate); da.Fill(data); int a = data.Tables[0].Rows.Count; string cmdTextDate2 = "SELECT * FROM AllInOneTable WHERE Date=@Date"; OleDbCommand commandDate2 = new OleDbCommand(cmdTextDate2, my_con); commandDate2.CommandText = cmdTextDate2; commandDate2.Parameters.AddWithValue("@Date", date2); DataSet data2 = new DataSet(); OleDbDataAdapter da2 = new OleDbDataAdapter(commandDate2); da2.Fill(data2); int b = data2.Tables[0].Rows.Count; string cmdTextDate3 = "SELECT * FROM AllInOneTable WHERE Date=@Date"; OleDbCommand commandDate3 = new OleDbCommand(cmdTextDate3, my_con); commandDate3.CommandText = cmdTextDate3; commandDate3.Parameters.AddWithValue("@Date", date3); DataSet data3 = new DataSet(); OleDbDataAdapter
da3 = new OleDbDataAdapter(commandDate3); da3.Fill(data3); int c = data3.Tables[0].Rows.Count; string cmdTextDate4 = "SELECT * FROM AllInOneTable WHERE Date=@Date"; OleDbCommand commandDate4 = new OleDbCommand(cmdTextDate4, my_con); commandDate4.CommandText = cmdTextDate4; commandDate4.Parameters.AddWithValue("@Date", date4); DataSet data4 = new DataSet(); OleDbDataAdapter da4 = new OleDbDataAdapter(commandDate4); da4.Fill(data4); int d = data4.Tables[0].Rows.Count; string cmdTextDate5 = "SELECT * FROM AllInOneTable WHERE Date=@Date"; OleDbCommand commandDate5 = new OleDbCommand(cmdTextDate5, my_con); commandDate5.CommandText = cmdTextDate5; commandDate5.Parameters.AddWithValue("@Date", date3); DataSet data5 = new DataSet(); OleDbDataAdapter da5 = new OleDbDataAdapter(commandDate5); da5.Fill(data5); int f = data5.Tables[0].Rows.Count; this.chart1.Series["Series1"].Points.AddXY(date1, a); this.chart1.Series["Series1"].Points.AddXY(date2, b); this.chart1.Series["Series1"].Points.AddXY(date3, c); this.chart1.Series["Series1"].Points.AddXY(date4, d); this.chart1.Series["Series1"].Points.AddXY(date5, f); } I think you probably need to call this.chart1.Series["Series1"].Points.Clear() before adding the points. Currently it seems like you are just adding 5 more points on each button click without first clearing the points that were created in a previous click. Try clearing the series (below), then re-add the new data afterwards. Not sure if that's 100% what you want, but it seems like it. Series.Points.Clear(); //for your code this.chart1.Series["Series1"].Points.Clear(); Also, on a side note - you should rarely call SELECT * FROM, or make multiple calls to the database to retrieve information that can be fetched with a single query. If you're using SQL, look into the GROUP BY and UNION keywords for SQL queries. They'll allow you to condense multiple calls into one. Then separate it as you see fit, if you see the need. Linq will come in handy here.
This example will show you an overview of what I'm talking about, even if it's not contextually the same. It is a little more advanced, however. Yes, it's working 100% perfectly for me :). I wish I were able to upvote this answer, but my reputation doesn't allow it. When I earn the reputation, I will definitely upvote this answer :) Thank you, Gabe :) No worries and no need to - here to help, not build rep. Good luck with your app. Your code made my work perfect, but I'll also keep the other advice in mind.
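The GROUP BY advice above collapses the five round-trips into one query. Here is a hedged sketch with SQLite standing in for the OleDb database (table and column names follow the question; the rows are invented sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AllInOneTable (Name TEXT, Date TEXT)")
conn.executemany("INSERT INTO AllInOneTable VALUES (?, ?)", [
    ("a", "01-06-2017"), ("b", "01-06-2017"), ("c", "02-06-2017"),
    ("d", "04-06-2017"),
])

dates = ["01-06-2017", "02-06-2017", "03-06-2017", "04-06-2017", "05-06-2017"]

# One query: count registrations per date for the five dates of interest.
placeholders = ",".join("?" * len(dates))
counts = dict(conn.execute(
    f"SELECT Date, COUNT(*) FROM AllInOneTable "
    f"WHERE Date IN ({placeholders}) GROUP BY Date",
    dates,
))

# Dates with no rows simply don't appear in the GROUP BY result; default
# them to 0 so the chart always gets exactly five bars.
series = [(d, counts.get(d, 0)) for d in dates]
print(series)
```

Combined with `Points.Clear()` before the `AddXY` loop, this guarantees exactly five fresh bars per click instead of accumulating points across clicks.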
How to create a java interface to compile c# code, and then generate dll's? I would like to start coding a Java interface whose internal functionality is to generate DLLs from C# code. I have several projects with several C# classes... Basically, the idea is to make a setup: there would be a .zip or .rar containing all the C# projects, and a Java exe which, when executed, will compile the C# projects and then place the DLLs in a specific directory. Do you know any example of how to do this? Since you mentioned portability, I think you may need to run that Java app also on Linux and Mac, so instead of MSBuild, which works only on Windows, you can try http://www.mono-project.com/CSharp_Compiler - the Mono compiler, which is cross-platform. You can use MSBuild to build .NET projects from the command line, which I think would be easy for you to call from Java code (though I don't understand why you are trying to use a Java program to trigger a compilation). Read about MSBuild here: http://msdn.microsoft.com/en-us/library/ms171452%28v=vs.90%29.aspx I would like Java because of portability issues, and want to trigger a compilation because the main project is in C#; in fact, I would like to make a .bat file but with the advantages of a Java Swing interface. Ah ok, then I guess MSBuild (or a similar tool) would do. Hope it's easy to call a command-line tool from Swing. Here's a HUGE list of build automation tools (not all are for .NET): http://en.wikipedia.org/wiki/List_of_build_automation_software @Guganeshan.T he says he needs Java because of portability, but MSBuild is not so portable; maybe he needs to check Mono and its tools. That's why I was confused about his Java executable choice. If that is the case, then you are right, Mono is the way to go :)
It's related to bean use in JSP: <jsp:useBean name="beanname" property="propertyname"/> How do I access the value of a bean property? We can do this by using the above syntax, but I want to increment the value of this property. How is that possible? Please help me - I can't complete my project without it. If I understand correctly, you have a bean class. Provide this class with its package description, e.g. exp.DemoBean, in the jsp:useBean tag. After doing so, use this name value as an object and call the methods in your bean class.
Android - Weka Classification is Taking too much time I am using the Weka library for implementing classification algorithms on Android. Though the Weka lib is not fully compatible with Android, I am using Weka For Android as a lib in my app. But it is taking too much time to build the model. Before posting this, I went through various links, but couldn't get any consolidated solution. Some links are: Weka's PCA is taking too long to run Using Weka classifiers on Android Android - Adding external library to project I am using the following code for classification: OneClassClassifier classifier = new OneClassClassifier(); classifier.setNominalGenerator(new NominalGenerator()); classifier.setTargetClassLabel("1"); classifier.setNumericGenerator(new DiscreteGenerator()); classifier.setSeed(1); classifier.setNumRepeats(10); classifier.buildClassifier(train); Evaluation eval = new Evaluation(train); eval.evaluateModel(classifier, test); System.out.println(eval.toSummaryString("\nResults\n======\n", false)); This code works fine in my Java application. But if I put the same code in an Android app, it takes too much time, say 4-6 minutes, to build the model. Please suggest some way to get out of this. Also, I am not able to serialize/deserialize the model on Android, while it works fine in Java. What's wrong with the links you've found? The first one seems to give a straightforward answer. Consider also asking a different question about serialization - there should be one question in one post. That one is for PCA; I have done that part, while for classification it's taking too much time. Right. Perhaps it's just running slow, because the device is slow? :) SVMs are expensive to train, even on PCs. Maybe you could switch to decision trees? Do you really need such a strong classifier? Right, thanks for your comments. Actually I need unary classification; I tried a one-class SVM for this. Decision trees work on binary/multi-class classification.
Are there some other algorithms that work on a unary class? No; in the case of one-class classification I have also heard only about SVMs. My only idea left is that maybe you could train the classifier on a stronger machine, serialize the model, and then just use it on the Android device. Use Spark. Time to change tech! Thanks Sam, but from what I read, Spark MLlib also does not support one-class classification; it supports binary classification, regression, clustering, and collaborative filtering. Can we use it as a one-class SVM?
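The train-on-a-stronger-machine suggestion follows a general pattern: build the model once offline, persist it, and only deserialize on the device (Weka provides weka.core.SerializationHelper.write/read for this). The shape of the workflow, sketched in Python with pickle purely for illustration:

```python
import os
import pickle
import tempfile

# Stand-in for an expensive-to-train model; a real app would train a
# classifier here, paying the training cost once, on a desktop machine.
model = {"kind": "one-class", "threshold": 0.42, "weights": [0.1, 0.9]}

path = os.path.join(tempfile.mkdtemp(), "classifier.model")
with open(path, "wb") as f:      # "desktop" side: serialize after training
    pickle.dump(model, f)

with open(path, "rb") as f:      # "device" side: load, never retrain
    loaded = pickle.load(f)

print(loaded == model)  # True
```

On Android, the model file would ship in the app's assets or be downloaded once; only `buildClassifier` is slow, so moving it off the device removes the 4-6 minute cost entirely.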
Updatedb: include the path pointed to by a symbolic link I have created an mlocate database with the contents of a particular folder. I see that updatedb doesn't include the paths pointed to by the symbolic links in the database. How can I include the paths pointed to by the symbolic links in the database? Surprisingly: mlocate has a default option -L or --follow that follows trailing symbolic links when checking file existence (default). What purpose does it serve when updatedb doesn't include symlinks! References: updatedb(8): update database for mlocate - Linux man page mlocate - Gentoo Wiki slightly off-topic: I prefer to not have updatedb installed in VMs. @RuiFRibeiro Sorry, I don't get your point; I too don't have updatedb installed in VMs. Using this implementation of plocate with updatedb in it, one can build a custom version to follow symlinks. I don't know if it answers the question also for mlocate; I only use plocate. If you are certain there will be no loop in your filesystem, you can simply replace the code e.is_directory = (de->d_type == DT_DIR); by e.is_directory = (de->d_type == DT_DIR) || (de->d_type == DT_LNK); // or even just true in the file updatedb.cpp. If you might have loops, here is a solution (maybe not very efficient). Still in updatedb.cpp, declare a global variable vector<string> explored; just before the function int scan(...). Then, add the following code after the first two tests in that function scan: char buf[PATH_MAX]; realpath(path.c_str(), buf); for (auto &e : explored) if (e == buf) return 0; explored.push_back(buf); (note that explored must hold std::string copies, not char* pointers into buf, since buf is a stack buffer reused on every call). And just before each return of that function, add an explored.pop_back();. With similar modifications of the code, one can follow symlinks only when inside certain directories, and/or completely exclude some directories from the database, depending on one's needs. It is also quite useful to make the results clickable.
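The loop guard in that patch - canonicalize each directory with realpath and prune it if it was already visited - can be exercised on its own. A small Python sketch that walks a directory tree containing a symlink cycle and still terminates:

```python
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
# Create a cycle: root/a/b/loop -> root/a
os.symlink(os.path.join(root, "a"), os.path.join(root, "a", "b", "loop"))

visited = set()

def scan(path):
    real = os.path.realpath(path)   # same role as realpath() in updatedb.cpp
    if real in visited:             # already explored -> prune, breaking the loop
        return
    visited.add(real)
    for entry in sorted(os.listdir(path)):
        full = os.path.join(path, entry)
        if os.path.isdir(full):     # True for symlinks to directories too
            scan(full)

scan(root)
print(len(visited))  # 3: root, root/a, root/a/b
```

Without the canonical-path check, the walk would recurse through `a/b/loop/b/loop/...` forever; with it, the symlink is followed exactly once and then recognized as an already-explored directory.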
common-pile/stackexchange_filtered
Twilio adding more questions to phone poll example In Twilio they have an example of a "phone poll" in PHP. The phone poll has the following files: make call.php, poll.php, and process_poll.php. Make call php - contains the SID etc. and dials the call. poll php - contains the actual poll question in a Gather tag, as shown here: <?php require 'Services/Twilio.php'; $response = new Services_Twilio_Twiml(); $gather = $response->gather(array( 'action' => 'php url to hit', 'method' => 'GET', 'numDigits' => '1' )); $gather->say("Hi question one here"); $gather->say("From 1 to 5 with 5 being the best service. How would you rate?"); header('Content-Type: text/xml'); print $response; ?> process poll php contains the next step after they choose a selection. For the sake of space I won't post the db section, just what follows: if (isset($choices[$digit])) { mysql_query("INSERT INTO `results` (`" . $choices[$digit] . "`) VALUES ('1')"); $say = 'Ok got it. Next question.'; } else { $say = "Sorry, I don't have that option. Next question."; } // @end snippet // @start snippet $response = new Services_Twilio_Twiml(); $response->say($say); $response->hangup(); header('Content-Type: text/xml'); print $response; My question is: how would I add additional questions? What currently happens is: the user hears the first question, selects an option on the phone pad, and the connection ends after a message responding to their answer. I would like to add about 3 more questions and then responses. How can this be accomplished? Would I add a response that sends them to another URL for the 2nd set of questions? Can you please give me some guidance on how this could be accomplished? I'm a PHP novice. You should ask each question at a separate URL; that way you can evaluate the answer to each individually. Ricky from Twilio here. Great question. There are a few different ways you could build this. The method you outlined of having a separate URL for each question and response would absolutely work.
In this scenario, poll.php would become poll1.php: <?php require 'Services/Twilio.php'; $response = new Services_Twilio_Twiml(); $gather = $response->gather(array( 'action' => 'php url to hit', 'method' => 'GET', 'numDigits' => '1' )); $gather->say("Hi question one here"); $gather->say("From 1 to 5 with 5 being the best service. How would you rate?"); header('Content-Type: text/xml'); print $response; ?> You would want to process this with process_poll1.php: if (isset($choices[$digit])) { mysql_query("INSERT INTO `results` (`" . $choices[$digit] . "`) VALUES ('1')"); $say = 'Ok got it. Next question.'; } else { $say = "Sorry, I don't have that option. Next question."; } // @end snippet // @start snippet $response = new Services_Twilio_Twiml(); $response->say($say); $response->redirect("php url to hit for next question"); header('Content-Type: text/xml'); print $response; There's one key change in this file besides the name. We're going to use the TwiML Redirect verb to move the user to the next poll question instead of hanging up. To do this in code, we replace $response->hangup(); with $response->redirect("php url to hit for next question");. You'd want this to redirect to poll2.php and then have the gather action on that go to process_poll2.php. Then continue for as many questions as you need. Let me know if that helps!
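The TwiML that process_poll1.php ends up emitting is just a Say followed by a Redirect. As a quick illustration (Python and the standard library here rather than the PHP helper; the URL is a placeholder):

```python
import xml.etree.ElementTree as ET

def ack_and_redirect(message, next_question_url):
    """Build TwiML that speaks an acknowledgement and then redirects
    the caller to the URL serving the next poll question."""
    response = ET.Element("Response")
    ET.SubElement(response, "Say").text = message
    # <Redirect> is what replaces <Hangup> so the call keeps going
    ET.SubElement(response, "Redirect").text = next_question_url
    return ET.tostring(response, encoding="unicode")
```

Calling ack_and_redirect("Ok got it. Next question.", "https://example.com/poll2.php") returns the XML document Twilio expects as the response body.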
common-pile/stackexchange_filtered
Winforms Entity Framework I am attempting to follow a tutorial regarding Winforms and Entity Framework, but am having difficulty following along. The tutorial I am using is from CodeProject.com and can be found here. The problem I have is that the tutorial references two controls, EntityDataSource and EntityBindingNavigator, which I cannot find in my toolbox. I have tried to right-click on my toolbox and clicked "Choose Items...", but I still cannot find these two controls. Although EntityDataSource is selected in the following image, it does not appear in my toolbox (perhaps because it's from the System.Web assembly?): I have chosen the references I would assume I need, but it does not help the situation: I am using Visual Studio 2012 Update 4. The tutorial is from Feb 2014, so I can't imagine I cannot find these controls because the tutorial is using an extremely old version of VS or something along those lines. I am completely lost, especially because the tutorial has so many good ratings; apparently, it's just me who can't find these dang controls! I have found other posts from users who cannot find them, but the solution is usually to right-click the toolbox and click "Choose items..." (which I have done, to no avail). Any other suggestions? Your help is greatly appreciated! At a glance, I think the EntityDataSource the article refers to is a custom built control, not part of the standard out of the box .NET Framework for WinForms. Have you downloaded the sample to see if it's in there? @Tim I believe you may be right. Unfortunately, I am a complete novice at this. Can you provide any details regarding adding this control from the samples available? I understand this may be a common practice, but it is new to me and I don't see a "How to" or any instructions on the linked page. 
:( According to the sidebar in the link, the CodeProject article is about "A component that makes it easy to use Entity Framework in WinForms projects, including design-time binding support." The article itself (I didn't read it in detail) appears to be more about how to use the component, not directly about using Entity Framework in WinForms. The article author has created a library (EFWinForms), and it is included in the downloads. For example, I downloaded the EF6 C# code, which has two projects and one solution - an EF6WinForms project and a Sample project. To follow along with the example, or to use the EFWinForms library in your own project, you can add the project (from the download) to your solution and reference it, and then add the appropriate using (Imports for VB.NET) statements. If you want to add just the DLL, then build the EFWinForms project (it'll probably have a slightly different name depending on the version) and add a reference to that DLL. Thank you! Just the kind of guidance I needed. :)
common-pile/stackexchange_filtered
How to remove the part of an image except the highlighted part in iOS. Hi, I can already get an image from the gallery or camera, and I have a button that, when clicked, gets an image again from the gallery or camera. When I touch some part of it, that part is highlighted, and only that part should be displayed in another view; the remaining part should be removed. Please can anybody assist me with how to do this? Help is appreciated. I have written this code: - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { self.slider.hidden=NO; UITouch *touch = [touches anyObject]; lastPoint = [touch locationInView:self.view]; //lastTouch is CGPoint declared globally } //#pragma mark - Touch moved method - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; currentPoint = [touch locationInView:self.view1]; //currentTouch is CGPoint declared globally CGColorRef strokeColor = [UIColor whiteColor].CGColor; UIGraphicsBeginImageContext(view1.frame.size); CGContextRef context = UIGraphicsGetCurrentContext(); [imageView1.image drawInRect:CGRectMake(0, 0, self.view1.frame.size.width, self.view1.frame.size.height)]; //canvasView is your UIImageView object declared globally CGContextSetLineCap(context, kCGLineCapRound); CGContextSetLineWidth(context, slider.value); CGContextSetStrokeColorWithColor(context, strokeColor); CGContextSetBlendMode(context, kCGBlendModeClear); CGContextBeginPath(context); CGContextMoveToPoint(context, lastPoint.x, lastPoint.y); CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y); CGContextStrokePath(context); imageView1.image = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); lastPoint = [touch locationInView:self.view1]; } //#pragma mark - Touch end method - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; currentPoint = [touch locationInView:self.view1]; } but it is erasing the selected area. I don't want to erase the selected (highlighted) area; the unselected
part should be erased. I have tried a lot; please can anybody tell me how to do this? Help is appreciated. Your question sounds good. I think somebody already answered a similar query earlier. Please have a look at the following: Let User Crop Photo in iOS App. I hope it will help you. All the best!! :)
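One way to think about inverting the behaviour: instead of clearing the pixels the user strokes over (which is what kCGBlendModeClear does above), record the strokes as a mask and clear everything outside it. In Core Graphics that typically means drawing the strokes into a separate mask image and compositing with it; the masking step itself, stripped of all UIKit, is just the following (Python purely as illustration):

```python
def keep_only_masked(image, mask):
    """Return a copy of `image` (a 2D list of pixel values) in which
    every pixel NOT covered by `mask` (a set of (row, col) positions
    the user painted over) is cleared to 0: the inverse of erasing
    the painted region itself."""
    return [
        [pixel if (r, c) in mask else 0
         for c, pixel in enumerate(row)]
        for r, row in enumerate(image)
    ]
```

In the Objective-C code, the analogous change is to stroke into a mask rather than stroking with the clear blend mode directly onto the image.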
common-pile/stackexchange_filtered
Techniques to increase average cadence for endurance riding? According to the Wikipedia entry: Recreational and utility cyclists typically cycle around 60–80 rpm. According to cadence measurements of 7 professional cyclists during 3-week races, they cycle about 90 rpm during flat and long (~190 km) group stages and individual time trials of ~50 km. During ~15 km uphill cycling on high mountain passes they cycle about 70 rpm. Sprinters can cycle up to 170 rpm for short periods of time [citation needed]. The professional racing cyclist and Tour de France champion Lance Armstrong is known for his technique of keeping up high cadences of around 110 rpm for hours on end to improve efficiency. What are the techniques to improve cadence? I am not targeting this question at touring or long distance but at endurance riding, meaning training for longer distances with higher heart rate and higher speed. I like to do short repetitions of about 10 minutes at a high rpm above 120, but I am not sure whether that is ideal, nor how much resistance I should add during such periods. What kind of exercises do you do? The exercises do not necessarily need to be on the bike, as long as they meet the goal of training endurance riding with a higher cadence over a longer time. I am also not sure what the last sentence from Wikipedia actually means: improving efficiency through cadence? Anyway, I am looking for ways to improve average cadence; perhaps it will improve my efficiency also, which would not be a bad thing at all. Some perhaps-related questions or useful info: How can I improve my stamina? A somewhat fuzzy, runner-targeted article about "faster running and better long-distance results" (using short sprints with hills), but perhaps good training for cyclists too, here. Perhaps helpful about endurance and speed, also about running, a question here targeted at a younger audience; not sure whether it is relevant to improving cadence.
In running, they have a trendy thing called "barefoot running" or "neutral running"; this answer here claims that it can improve cadence, at least for running. No idea whether something like that could work for riding a bike. I recall reading about some research in one of the bike magazines 10-15 years ago that showed that the optimum cadence for most cyclists was around 90 (IIRC). Faster is only better for a few elite cyclists, and that is likely as much genetics as training. And (as I've discovered) the older you get, the lower your "optimal" cadence. @hhh The most important thing to achieve what you want, from my experience, is to install a bicycle computer with cadence reading. Then you can check yourself against the number. @DanielRHicks Ouch! I can sustain 90 RPM on a -2% (downhill) grade for 2.5 minutes and then I'm burned out! @Michael - Don't increase all at once. Try to go 5 RPM faster than "normal" (maybe 10 RPM if you've been doing below 50) until you get to a "new normal", then increase again. It can take weeks or months to get your cadence up to a "proper" level. And 90 RPM is only "optimum" for most cyclists -- some may be better off at 80-85, others at 100. A good option may be interval training... which extends well beyond cadence in effect... Some background first: High cadence in an "easy" gear means that you are primarily taxing your cardiovascular and respiratory systems. Basically, this is aerobic activity in which one can engage for long periods of time. Low cadence in a "hard" gear means that you are taxing skeletal muscles such as the quads, hamstrings and glutes. Since you are using a bigger gear, you're relying more on what is commonly called the anaerobic energy system. The gist of higher-cadence riding is that your heart and lungs can take repeated punishment for long periods of time (and they recover quickly after hard efforts), while your muscles will fatigue relatively quickly and recover more slowly.
So, since your training goal is to boost your average cadence, an effective method for improvement is interval training. ...a type of physical training that involves bursts of high-intensity work interspersed with periods of low-intensity work. The high-intensity periods are typically at or close to near-maximum exertion, while the recovery periods may involve either complete rest or activity of lower intensity. There are at least several forms of structured interval training programs such as Fartlek, Tabata, Chris Carmichael's Time Crunched Cyclist, as well as other well known cycling coaches. So, to train for cadence, you'd pick a method; and you would alternate low cadence (low intensity) with high cadence (high intensity) for the intervals. By diligently following an interval training program, you would probably see improvements in your average cadence in a relatively short period of time (1 - 2 months). Without knowing more about your specific current conditioning, I cannot offer a particular plan; however, I and others have had great results with the Chris Carmichael method. And a nice quote by Carmichael from his book (p.42): Keep in mind, however, that there's no magical cadence everyone should shoot for. Rather than aim for a specific number, I recommend athletes try to increase their normal cruising cadence and climbing cadence by 10% in a year (with the understanding that very few cyclists can ride effectively at sustained cadences above 120 to 125 rpm on flat ground). Note one: Cadence training is much easier with a cycling computer that monitors cadence. Note two: Interval training is difficult and should only be done 1 - 3 days per week and not on sequential days. After taking a look at Joe Friel's blog via the link provided by Dana the Sane, I noticed that Joe Friel has an excellent 5 part article on Interval Training. 
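As a concrete picture of what one of these structured sessions can look like, here is a sketch of the classic Tabata timing (20 seconds of work, 10 seconds of recovery, 8 rounds); the cadence targets below are illustrative numbers, not prescriptions from any of the coaches mentioned:

```python
def tabata_schedule(work_s=20, rest_s=10, rounds=8,
                    high_cadence=110, low_cadence=70):
    """Return a list of (duration_seconds, target_cadence) pairs
    alternating high-cadence work with low-cadence recovery."""
    schedule = []
    for _ in range(rounds):
        schedule.append((work_s, high_cadence))
        schedule.append((rest_s, low_cadence))
    return schedule
```

For Carmichael- or Friel-style sessions you would use much longer work and recovery periods; only the alternating structure is the point here.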
A quote from the Joe Friel article on Interval Training, Part 4: Speed skills sessions are intended to improve efficiency by improving movement skills. This usually involves doing drills to refine or learn a technique. Drills are common in swimming. A typical cycling drill is to increase the pedaling cadence in a low gear to a max level for a few seconds every few minutes while relaxing the body. Runners do "strides" workouts in which they run fast, usually down a slight hill, while focusing on one aspect of technique such as foot position at strike. There are many, many drills for each sport. @hhh, etc - I in no way meant for the original line to be patronizing. I thought it was humorous. I changed it so that no one takes it as patronizing. And the broken link is fixed... @hhh - My cycling buddies and I switched from long-slow-distance training to fairly intensive interval training ~5 years ago to enhance long-distance and fitness riding. We all achieved significant improvements in speed, endurance, hill climbing and power, plus a side effect of increased cadence. Carmichael is definitely not the only source of good info, but that's the program we've been following for the last few years to great results. Anyway, consistently boosting average cadence will require targeted drills of some sort... Anyway, cadence was never my goal in using interval training programs. My goals for interval training are power, endurance, stamina, speed... and increased cadence was a result of working on the other goals. Related to concepts such as LSD, HR -- something here. If you want to change your cadence, just change it. Take your cruising gear on your bike and calculate what your current cruising cadence is. Let's say you usually cruise at 16mph and your typical cadence is 75. If you want to be at 90, go into a lower gear and try to keep the same speed. This will increase your cadence.
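That arithmetic is easy to make explicit: cadence is road speed divided by gear development (how far the bike travels per crank revolution), so keeping the speed constant while shrinking the development raises cadence. A small sketch, with illustrative numbers:

```python
def cadence_rpm(speed_kmh, development_m):
    """Cadence in revolutions per minute for a given road speed and
    gear development (metres travelled per crank revolution)."""
    metres_per_minute = speed_kmh * 1000 / 60
    return metres_per_minute / development_m
```

At 26 km/h, a ~5.8 m development works out to roughly 75 rpm; dropping to a ~4.8 m gear at the same speed pushes it up to about 90 rpm.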
The benefit of a high cadence is that the higher the cadence, the more aerobic you are (more cardio, less muscular). This will naturally increase your lung capacity, providing increased stamina at any cadence, because your VO2 max (google this) will increase with regular cardio sessions. The cadence you ride at is far less important than your abilities, such as efficiency, and your fitness. An authoritative reference on pedalling efficiency is this Joe Friel blog post. The post outlines the rationale behind the importance of efficiency and gives overviews of several pedalling drills. For some sample workouts using this type of drill, you could consult Serious Mountain Biking by Ann Trombley, though there are doubtless many others. Also, be wary of training 'Red Herrings'. Cadence is a critical component of cycling, but even among experts, there is always some amount of disagreement on what techniques are most effective for individual riders. My opinion is that 'practice by doing' is often a good way to improve specific cycling skills. Only when you reach a lengthy plateau in your improvement is it necessary to spend a lot of time using specialized techniques.
common-pile/stackexchange_filtered
Hide custom UITableViewHeaders which contain the same text. I have some custom UITableViewHeaders which contain a custom label. The label contains time events, for example "today", "yesterday", "20 May 2014", ..., but the "today" event, for example, shows up two or three times. I am using an SDK, and the only way I can think of is: if there is already a "today" text on a UITableViewHeader label, don't show the other "today" headers. Can I somehow hide headers if their label.text is the same? For example, if the label.text is different, show it; else hide it or don't create it at all. So if the previous header label is the same as the current header label, don't show the current one; if it is different, show the header. Thank you very much. You should explain it more clearly. If you want a header not to be shown, just make the header height 0.00001: - (CGFloat)tableView:(UITableView *)tableView heightForHeaderInSection:(NSInteger)section { return 0.00001; } The question is not clear. @Vineesh TP I will reformulate the question, ty. There may be so many ways, but I like to do it like this (I just put my logic here; change it as per your need; headerTextForSection: stands for however you look up a section's title, and the key point is comparing the current section's text with the previous section's, not a label with itself): - (CGFloat)tableView:(UITableView *)tableView heightForHeaderInSection:(NSInteger)section { NSString *current = [self headerTextForSection:section]; NSString *previous = section > 0 ? [self headerTextForSection:section - 1] : nil; if ([current isEqualToString:previous]) { /// same text as the previous header: hide it return 0; } else { return 30; // or whatever height you need to set. } }
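The decision itself (hide a header whenever its title repeats the previous section's) is independent of UIKit and can be sketched like this, Python purely for illustration:

```python
def header_heights(section_titles, normal_height=30.0, hidden_height=0.0):
    """Height to report for each section header: hide a header when
    its title repeats the previous section's title (e.g. two
    consecutive 'Today' groups)."""
    heights = []
    previous = None
    for title in section_titles:
        heights.append(hidden_height if title == previous else normal_height)
        previous = title
    return heights
```

This only collapses consecutive duplicates, which matches the "previous header" rule in the question; since date-grouped sections arrive sorted, that is normally enough.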
common-pile/stackexchange_filtered
Impossible (??) related rates problem in Larson 11e I believe I've encountered an error in the problem set of Larson Calculus 11e. I have contacted the publisher about this, and they insist that both the textbook and answer key are correct. However, their reasoning seems flawed. I want to bounce it off some truly knowledgeable people to make sure my own reasoning isn't flawed. The problem reads, verbatim: The volume of oil in a cylindrical container is increasing at a rate of 150 cubic inches per second. The height of the cylinder is approximately ten times the radius. At what rate is the height of the oil changing when the oil is 35 inches high? On the surface, this reads like a classic related rates problem, and the solutions manual solves it like one. However, doesn't $\frac{r}{h}=\frac{1}{10}$ only hold for the cylindrical container itself? When we use this 1:10 relationship, we can relate the variables as $V=\pi\frac{h^3}{100}$, but again, doesn't this only hold for the container itself? As the height of oil in the container changes, the radius remains constant, thus we can't rely on this equation to differentiate and obtain $\frac{dh}{dt}$ for the volume of oil. I think this problem would be solvable if the container were conical instead of cylindrical, because then the proportion relating the radius and height would apply to both the container and the changing volume oil at any height. Have I got it twisted? The height of the cylinder shouldn't matter as long as it's at least 35 inches. And the "when the oil is 35 inches high" doesn't matter since the volume increases at a fixed rate and a cylinder has fixed cross-sectional area. The answer to the question asked seems to be "150 cubic inches per second divided by the cross sectional area which we don't know since we're not given the radius". Not given the radius except as it relates to the height of the container itself - which as you say, is irrelevant. That's the problem I'm having. 
Maybe we're meant to assume that the cylinder is always full of oil, but the height and radius of the cylinder are changing over time? Sounds like a typical high school chain rule question. Therefore, it's probably a typo and was asking for a cone instead (not a cylinder) You won't be able to solve this problem without the radius of the cylinder. To start: you're right that $h=10r$ only applies to the container. This makes sense when you think about it: the height of the oil is constantly changing while its radius remains constant. Now, normally a solution would proceed like this: We know that $V_{oil}=\pi r^2h$, and that $\frac{dV_{oil}}{dt} = 150 \frac{in^3}{s}$. We shall thus take the derivative of the volume with respect to time: $$\frac{dV_{oil}}{dt}=\pi\left(2rh\frac{dr}{dt}+r^2\frac{dh}{dt}\right)$$ Since the radius is constant, $\frac{dr}{dt}=0$, making the equation thus $$\frac{dV_{oil}}{dt}=\pi\left(r^2\frac{dh}{dt}\right)$$ Substituting for $\frac{dV_{oil}}{dt}$: $$150 \frac{in^3}{s}=\pi\left(r^2\frac{dh}{dt}\right)$$ $$\frac{dh}{dt}=\frac{150 \frac{in^3}{s}}{\pi r^2}$$ This is all we can do. Any further substitution by your textbook of $r=\frac{h}{10}$ is wrong.
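A quick numeric check makes the contradiction concrete: differentiating the manual's substituted form $V=\pi h^3/100$ gives one rate, while treating $r$ as constant and only afterwards plugging in $r=h/10=3.5$ gives exactly three times that rate. Two readings of the same relation should not disagree, which is another way to see that substituting before differentiating is invalid for the oil:

```python
import math

dV_dt, h = 150.0, 35.0  # in^3/s and inches, from the problem

# Solutions-manual route: substitute r = h/10 BEFORE differentiating,
# i.e. V = pi*h**3/100, so dV/dt = (3*pi*h**2/100) * dh/dt.
dh_dt_manual = dV_dt * 100 / (3 * math.pi * h**2)

# Physically consistent route: r is constant, V = pi*r**2*h, so
# dh/dt = dV/dt / (pi*r**2).  Evaluating with r = h/10 = 3.5 in
# (the container relation, applied only AFTER differentiating)
# gives a different number entirely.
r = h / 10
dh_dt_constant_r = dV_dt / (math.pi * r**2)
```

The first route gives about 1.30 in/s and the second about 3.90 in/s; neither can be trusted, since the real answer needs the actual (unstated) radius.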
common-pile/stackexchange_filtered
MS Access convert rows to columns using pivot I have a table with ~6500 products and images in which each product can have several jpg files:

ID_product  product    product_photo  ordering
1           Product A  a1234.jpg      1
2           Product B  x5678.jpg      0
3           Product A  b1234.jpg      0
4           Product B  y5678.jpg      1
5           Product B  z5678.jpg      2
6           Product C  e4455.jpg      1
7           Product C  f4455.jpg      0
8           Product C  g4455.jpg      2

So I created a query in MS ACCESS:

TRANSFORM First([table1].product_photo) AS product
SELECT [table1].ID_product, First([table1].product) AS products
FROM [table1]
GROUP BY [table1].ID_product
PIVOT [table1].product_photo;

The result of the query:

ID_product  product    a1234_jpg  b1234_jpg  e4455_jpg  f4455_jpg  g4455_jpg  x5678_jpg  y5678_jpg  z5678_jpg
1           Product A  a1234.jpg  b1234.jpg
2           Product B                                              x5678.jpg  y5678.jpg  z5678.jpg
3           Product C             e4455.jpg  f4455.jpg  g4455.jpg

I would like to change the table in such a way that the images are in columns:

ID_product  product    image_1    image_2    image_3
1           Product A  a1234.jpg  b1234.jpg
2           Product B  x5678.jpg  y5678.jpg  z5678.jpg
3           Product C  e4455.jpg  f4455.jpg  g4455.jpg

How to expand the query to get the desired result? Because the Pivot works on the value of the column rather than its name, you are out of luck. This is the problem with pivoting - it's fairly limited in its flexibility. At this point, you usually revert to code. For example, you could build the result set in VBA in an array of arrays of variant. Or, you could use VBA to count the maximum number of columns for any product, create a temp table with appropriately named columns, and insert the values into the appropriate rows. Either way, this is really a UI issue of sexy display to the user, rather than handing off data from one query to the next, which is what SQL is designed for.
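The "number each product's photos, then spread them into image_1..image_N columns" step the answer describes in VBA terms can be sketched in any language; here is an illustrative Python version (not Access-specific):

```python
def pivot_photos(rows):
    """rows: (product, photo, ordering) tuples.  Return one dict per
    product with its photos spread into image_1..image_N columns,
    ordered by the `ordering` field."""
    by_product = {}
    for product, photo, ordering in rows:
        by_product.setdefault(product, []).append((ordering, photo))
    result = []
    for product in sorted(by_product):
        photos = [photo for _, photo in sorted(by_product[product])]
        row = {"product": product}
        for i, photo in enumerate(photos, start=1):
            row["image_%d" % i] = photo
        result.append(row)
    return result
```

Here the photos are sorted by the ordering field; the sample output in the question appears to order them by filename instead, so adjust the sort key to taste.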
common-pile/stackexchange_filtered
JPanel Background help. I have tried and failed I need to set an image as a background to the JPanel I have created. I have made a basic calculator. I tried a code that someone told me to try and it set the image as the background but now I've lost my calculator buttons etc. I'm guessing this is because I added it to my JFrame? But I don't know where to go from here. I am new to this, I know my code probably isn't even that good, but it works for now. And it will be looked at and criticized later but for now I need this background added :/ Can someone help , please and thank you! Please excuse the name GridBag2. import javax.swing.*; import java.awt.*; import java.awt.event.*; /** * Created by IntelliJ IDEA. * Date: 09/01/2015 * Time: 15:52 */ public class GridBag2 extends JFrame implements ActionListener { private JPanel panel=new JPanel(new GridBagLayout()); private GridBagConstraints gBC = new GridBagConstraints(); private JButton zeroButton = new JButton("0"); private JButton oneButton = new JButton("1"); private JButton twoButton = new JButton("2"); private JButton threeButton = new JButton("3"); private JButton fourButton = new JButton("4"); private JButton fiveButton = new JButton("5"); private JButton sixButton = new JButton("6"); private JButton sevenButton = new JButton("7"); private JButton eightButton = new JButton("8"); private JButton nineButton = new JButton("9"); private JButton addButton = new JButton("+"); private JButton subButton = new JButton("−"); private JButton multButton = new JButton(" X "); private JButton divideButton = new JButton(" ÷ "); private JButton equalButton= new JButton("="); private JButton clearButton = new JButton("C"); private JTextArea input=new JTextArea(""); Double number1,number2,result; int add=0,sub=0,mult=0,divide=0; public GridBag2() { zeroButton.addActionListener(this); oneButton.addActionListener(this); twoButton.addActionListener(this); threeButton.addActionListener(this); fourButton.addActionListener(this); 
fiveButton.addActionListener(this); sixButton.addActionListener(this); sevenButton.addActionListener(this); eightButton.addActionListener(this); nineButton.addActionListener(this); addButton.addActionListener(this); subButton.addActionListener(this); multButton.addActionListener(this); divideButton.addActionListener(this); equalButton.addActionListener(this); clearButton.addActionListener(this); gBC.insets = new Insets(5, 5, 5, 5); gBC.gridx = 1; gBC.gridy = 0; gBC.gridwidth = 4; gBC.fill = GridBagConstraints.BOTH; panel.add(input, gBC); gBC.gridx = 2; gBC.gridy = 4; gBC.gridwidth = 1; panel.add(zeroButton, gBC); gBC.gridx = 1; gBC.gridy = 4; gBC.gridwidth = 1; panel.add(oneButton, gBC); gBC.gridx = 1; gBC.gridy = 3; gBC.gridwidth = 1; panel.add(twoButton, gBC); gBC.gridx = 2; gBC.gridy = 3; gBC.gridwidth = 1; panel.add(threeButton, gBC); gBC.gridx = 1; gBC.gridy = 2; gBC.gridwidth = 1; panel.add(fourButton, gBC); gBC.gridx = 2; gBC.gridy = 2; gBC.gridwidth = 1; panel.add(fiveButton, gBC); gBC.gridx = 3; gBC.gridy = 2; gBC.gridwidth = 1; panel.add(sixButton, gBC); gBC.gridx = 1; gBC.gridy = 1; gBC.gridwidth = 1; panel.add(sevenButton, gBC); gBC.gridx = 2; gBC.gridy = 1; gBC.gridwidth = 1; panel.add(eightButton, gBC); gBC.gridx = 3; gBC.gridy = 1; gBC.gridwidth = 1; panel.add(nineButton, gBC); gBC.gridx = 3; gBC.gridy = 3; gBC.gridwidth = 1; panel.add(addButton, gBC); gBC.gridx = 3; gBC.gridy = 4; gBC.gridwidth = 1; panel.add(subButton, gBC); gBC.gridx = 4; gBC.gridy = 1; gBC.gridwidth = 1; panel.add(divideButton, gBC); gBC.gridx = 4; gBC.gridy = 2; gBC.gridwidth = 1; panel.add(multButton, gBC); gBC.gridx = 4; gBC.gridy = 3; gBC.gridwidth = 1; panel.add(equalButton, gBC); gBC.gridx = 4; gBC.gridy = 4; gBC.gridwidth = 1; panel.add(clearButton, gBC); setVisible(true); setSize(300, 340); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); input.setEditable(false); getContentPane().add(panel); setContentPane(new JLabel(new 
ImageIcon("C:\\Users\\Cara\\Pictures\\christmas.jpg"))); }//GridBag2 public void actionPerformed(ActionEvent e) { Object source = e.getSource(); if(source==clearButton) { number1=0.0; number2=0.0; input.setText(""); }//if if (source == zeroButton) { input.setText(input.getText() + "0"); }//if if (source == oneButton) { input.setText(input.getText() + "1"); }//if if (source == twoButton) { input.setText(input.getText() + "2"); }//if if (source == threeButton) { input.setText(input.getText() + "3"); }//if if (source == fourButton) { input.setText(input.getText() + "4"); }//if if (source == fiveButton) { input.setText(input.getText() + "5"); }//if if (source == sixButton) { input.setText(input.getText() + "6"); }//if if (source == sevenButton) { input.setText(input.getText() + "7"); }//if if (source == eightButton) { input.setText(input.getText() + "8"); }//if if (source == nineButton) { input.setText(input.getText() + "9"); }//if if (source == addButton) { number1=number_reader(); input.setText(""); add=1; sub=0; divide=0; mult=0; }//if if (source == subButton) { number1=number_reader(); input.setText(""); add=0; sub=1; divide=0; mult=0; }//if if (source == divideButton) { number1=number_reader(); input.setText(""); add=0; sub=0; divide=1; mult=0; }//if if (source == multButton) { number1=number_reader(); input.setText(""); add=0; sub=0; divide=0; mult=1; }//if if(source==equalButton) { number2=number_reader(); if(add>0) { result=number1+number2; input.setText(Double.toString(result)); }//if }//if if(source==equalButton) { number2=number_reader(); if(sub>0) { result=number1-number2; input.setText(Double.toString(result)); }//if }//if if(source==equalButton) { number2=number_reader(); if(divide>0) { result=number1/number2; input.setText(Double.toString(result)); }//if }//if if(source==equalButton) { number2=number_reader(); if(mult>0) { result=number1*number2; input.setText(Double.toString(result)); }//if }//if }//actionPerformed public double number_reader() { Double 
num1; String s; s=input.getText(); num1=Double.valueOf(s); return num1; }//number_reader public static void main(String[] args){ GridBag2 gui = new GridBag2(); }//main }//class For better help sooner, post an MCVE (Minimal Complete Verifiable Example) or SSCCE (Short, Self Contained, Correct Example). Note that an example of creating a BG image panel can be achieved in around 30 lines of code. A BG image with a component on top, 31 LOC. JLabel is a bad choice for this kind of thing, as it will not calculate the preferred size of the child components, but will only calculate the preferred size based on the image (and the text of the label); this could result in your UI being sized incorrectly... setContentPane(new JLabel(new ImageIcon("C:\\Users\\Cara\\Pictures\\christmas.jpg"))); This line doesn't add the JLabel to your panel. This line makes the JLabel the root control of the JFrame. It replaces the content pane of the JFrame with the JLabel - a JFrame only has one content pane. You probably wanted to treat it the same way as every other control. Something like: JLabel background = new JLabel(new ImageIcon("C:\\Users\\Cara\\Pictures\\christmas.jpg")); gBC.gridx = some number; gBC.gridy = some number; gBC.gridwidth = some number; gBC.gridheight = some number; panel.add(background, gBC); The OP could call setContentPane BEFORE adding any new content to the frame... you're right, it does "replace" the current content pane, which is the core issue... I have tried this and it worked to a certain extent. It has added the background but distorted some of my buttons: the bottom row of my buttons (zeroButton, clearButton, minusButton and oneButton) is stretched, touching the bottom of the frame. They aren't the same size as the other buttons anymore? getContentPane().add(panel); setContentPane(new JLabel(new ImageIcon("C:\\Users\\Cara\\Pictures\\christmas.jpg"))); My guess is that you need to switch the order of these two lines.
You're adding the JPanel to the JFrame's default content pane, but then you're setting the content pane to something else (a JLabel), effectively throwing away the old content pane, which you added the panel to. On a side note, it's good practice to use java.io.File.separator in your filename for the image, rather than hardcoding backslashes into the string. So you would instead write: "C:"+File.separator+"Users"+File.separator+"Cara"+File.separator+"Pictures"+File.separator+"christmas.jpg" while being sure to import java.io.File at the top of your code. This ensures that it won't break on a unix-based system that uses forward slashes instead of backslashes like Windows does. Please stop guessing.. The first suggestion would simply change what is seen from the icon to the panel. On the 'side note', this is apparently an application resource that should be loaded using getResource(String) where the String uses / as path separator, since it is going to form an URL (always forward slash), rather than a File. I have tried swapping the order and it still doesn't work :/ And it's even better practice to use the getResource method of the Class class, which erases dependency on particular operating systems.
common-pile/stackexchange_filtered
Question regarding the solution of Codility's MinMaxDivision, that uses binary search to solve it

Based on online solutions, I almost figured out how to solve Codility's MinMaxDivision, but there's one detail in the solution that I'm struggling to confirm. The question is as follows:

Task description

You are given integers K, M and a non-empty array A consisting of N integers. Every element of the array is not greater than M. You should divide this array into K blocks of consecutive elements. The size of the block is any integer between 0 and N. Every element of the array should belong to some block. The sum of the block from X to Y equals A[X] + A[X + 1] + ... + A[Y]. The sum of empty block equals 0. The large sum is the maximal sum of any block. For example, you are given integers K = 3, M = 5 and array A such that:

A[0] = 2
A[1] = 1
A[2] = 5
A[3] = 1
A[4] = 2
A[5] = 2
A[6] = 2

The array can be divided, for example, into the following blocks:

[2, 1, 5, 1, 2, 2, 2], [], [] with a large sum of 15;
[2], [1, 5, 1, 2], [2, 2] with a large sum of 9;
[2, 1, 5], [], [1, 2, 2, 2] with a large sum of 8;
[2, 1], [5, 1], [2, 2, 2] with a large sum of 6.

The goal is to minimize the large sum. In the above example, 6 is the minimal large sum. Write a function:

class Solution { public int solution(int K, int M, int[] A); }

that, given integers K, M and a non-empty array A consisting of N integers, returns the minimal large sum. For example, given K = 3, M = 5 and array A such that:

A[0] = 2
A[1] = 1
A[2] = 5
A[3] = 1
A[4] = 2
A[5] = 2
A[6] = 2

the function should return 6, as explained above. Write an efficient algorithm for the following assumptions:

N and K are integers within the range [1..100,000];
M is an integer within the range [0..10,000];
each element of array A is an integer within the range [0..M].
The following solution gets 100%:

public int solution(int K, int M, int[] A) { int min = 0; int max = 0; for (int i = 0; i < A.length; i++) { max += A[i]; min = Math.max(min, A[i]); } if (K == 1) return max; if (K >= A.length) return min; int result = min; while (min <= max) { int mid = (min + max) / 2; if (check(mid, K, A)) { max = mid - 1; result = mid; } else { min = mid + 1; } } return result; }

private boolean check(int mid, int k, int[] a) { int sum = 0; for (int i = 0; i < a.length; i++) { sum += a[i]; if (sum > mid) { sum = a[i]; k--; } if (k == 0) return false; } return true; }

The idea of the solution is pretty simple: the minimum large sum is between max(A) (the largest single element, stored in the confusingly named variable min) and sum(A). Instead of iterating one by one, we can use binary search to look for the minimum large sum. For each candidate (mid), we see if we can have K blocks whose sums do not exceed the value of mid.

My question is about the strategy to find the number of blocks based on the mid value, in the check() method above. There are situations where the number of blocks fits the criteria but none of the blocks have a sum equal to the mid value. One good example is when we have one block with all the array values, and the other blocks are empty. Another good example is A = [2, 3, 3, 5, 4, 2, 3], K = 3: the mid value eventually gets the value 10; we can have 3 blocks [2,3,3],[5,4],[2,3], but none of them equals 10. Can the solution algorithm output a mid value being the minimum large sum but that sum actually does not exist? How can the check() method always find the minimum large sum, AND that minimum large sum exists in the array, without comparing the sum value with the mid value?

There are situations where the number of blocks fits the criteria but none of the blocks have their sum equaling the mid value

This doesn't matter, because check will return true and a lower mid will be checked: some lower mid will eventually be one that will equal some block's sum.
One good example is A = [2, 3, 3, 5, 4, 2, 3], K = 3: the mid value eventually gets the value 10; we can have 3 blocks [2,3,3],[5,4],[2,3], but none of them equals 10.

After mid = 10 and check returning true, this will execute: max = mid - 1; result = mid; By setting max to 9, 9 will eventually be checked as well, and be returned.

Can the solution algorithm output a mid value being the minimum large sum but that sum actually does not exist?

No, because if that sum does not exist and check returns true, then we have a smaller sum that is possible - so the current mid is not the minimum. If the algorithm gets 100% then it will output this smaller value. Also think about it in terms of the definition given in the problem statement:

The large sum is the maximal sum of any block. [...] The goal is to minimize the large sum. In the above example, 6 is the minimal large sum.

So, by definition, the minimum large sum is the sum of some block.

How can the check() method always find the minimum large sum, AND that minimum large sum exists in the array without comparing the sum value with the mid value?

The check method itself does not find the minimum large sum. It only tells you if a given sum (its mid parameter) is valid (that is, if we can split the array into K blocks with a max sum <= mid). It's the binary search that finds the minimum large sum.

Thanks. "No, because if that sum does not exist and check returns true, then we have a smaller sum that is possible - so the current mid is not the minimum." was key for me. Eventually mid will reach a value of one of the smaller sums of one of those blocks, as one of the previous mid values was valid.
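For readers more comfortable with Python, the same binary-search idea can be transcribed directly. This is a port of the Java solution above, not code from the thread; the check is restructured to count blocks rather than decrement k, but the logic is equivalent:

```python
def check(mid, k, a):
    # Can `a` be split into at most k blocks, each with sum <= mid?
    blocks, total = 1, 0
    for x in a:
        if total + x > mid:
            blocks += 1      # start a new block
            total = x
        else:
            total += x
    return blocks <= k

def solution(K, M, A):
    # The minimum large sum lies between max(A) and sum(A).
    lo, hi = max(A), sum(A)
    result = hi
    while lo <= hi:
        mid = (lo + hi) // 2
        if check(mid, K, A):
            result = mid     # mid is feasible; try smaller
            hi = mid - 1
        else:
            lo = mid + 1     # mid too small; need larger blocks
    return result

print(solution(3, 5, [2, 1, 5, 1, 2, 2, 2]))  # 6, matching the task example
```

Note that since `lo` starts at `max(A)`, every single element fits in a block by itself, so `check` never sees an element larger than `mid`.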
Queue remote commands for execution as soon as workstation comes online

I am managing one workstation, and think of a maximum of 5 workstations, running Ubuntu-based Linux Mint. My own, controlling computer is also running Mint. What is a good way to queue commands and have them executed when the host comes online? The following requirements should be met:

Add/remove commands from queue when workstation is offline (local queue)
output and exit code should be logged or mailed (on my local computer)
keep it simple: no big management software or web interfaces

I am already thinking about at. I could transmit the commands to the remote at queue using batch and have them run when the system idles. But I am not sure if the remote atq is persistent when the user suddenly shuts down the workstation. Is there a software or best practice for this?

Your workstations could utilize cron's built-in @reboot attribute -- cron will execute whatever you want during system startup if you put a line like this in /etc/crontab:

@reboot root /path/to/your/script

The script could copy new to-be-executed commands from a master workstation and then execute them, or merely inform the master workstation that hey, I'm online, please let me know if there's something new for me to do. Or then the script could just utilize rsync and fetch whatever scripts should be run. Perhaps you could have a dir at your master workstation where you drop the scripts the clients should run? Like /opt/scripts/. The clients would rsync the script dir and then compare from their local log/state file if they need to run some script or not. Alternatively you could install an actual management software such as Puppet or cfengine, but that's something you voted against. :)

Thanks. This does not seem like a robust queue-like system. Sure I could script it by myself and run it (as a daemon) on every boot. But I need some ideas on how to do the logic, what tools to use, or if there is even software for this.
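The compare-against-a-state-file logic the answer sketches could look something like the following Python sketch. All paths, the state-file format, and the function name are hypothetical, purely to make the idea concrete; the rsync step from the master is assumed to have already happened:

```python
import os
import subprocess

def run_new_scripts(script_dir, state_file):
    """Run every script in script_dir that is not yet recorded in
    state_file; record what ran so the next @reboot run skips it."""
    # Load the names of scripts that were already executed.
    done = set()
    if os.path.exists(state_file):
        with open(state_file) as f:
            done = {line.strip() for line in f if line.strip()}

    executed = []
    for name in sorted(os.listdir(script_dir)):
        if name in done:
            continue  # already ran on a previous boot
        path = os.path.join(script_dir, name)
        result = subprocess.run(["sh", path], capture_output=True, text=True)
        # Keep name, exit code and output so they can be logged or mailed.
        executed.append((name, result.returncode, result.stdout))
        done.add(name)

    # Persist the updated state for the next run.
    with open(state_file, "w") as f:
        f.write("\n".join(sorted(done)) + "\n")
    return executed
```

The returned `(name, exit code, output)` tuples cover the "output and exit code should be logged or mailed" requirement; a real version would also want locking and error handling.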
How to store a File in storage and make it read only? How to set the permissions of a File?

File folder = new File(Environment.getExternalStorageDirectory() + "/AFolder"); if (!folder.exists()) { folder.mkdir(); Log.e(String.valueOf(folder),"Created......" ); } bval = folder.setReadOnly();

http://stackoverflow.com/questions/14797746/android-read-write-permission-of-a-folder

folder.mkdir();. Check the return value. Do not just suppose and say that it is created.

What is wrong with folder.setReadOnly();?

Java has AFAIK no well-working API for changing file/directory permissions. For changing file permissions you have to call chmod/chgrp.
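For illustration of what setReadOnly() attempts underneath (this is desktop Python, not Android code): on a POSIX filesystem, "read only" means clearing the write permission bits, which is what a chmod call does. Note, as an additional caveat, that Android external storage is often FAT-backed and may simply ignore such permission changes:

```python
import os
import stat
import tempfile

# Create a throwaway file, then strip all write bits:
# the POSIX analogue of Java's File.setReadOnly().
fd, path = tempfile.mkstemp()
os.close(fd)

mode = os.stat(path).st_mode
os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

# The write bits are now gone from the file's mode.
readonly = os.stat(path).st_mode
print(bool(readonly & stat.S_IWUSR))  # False: no owner write bit

os.remove(path)
```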
Getting "TypeError: Cannot read properties of undefined (reading 'getNetwork')" with gnosis-safe-sdk+hardhat

Snap of code:

const { Contract, Signer, providers } = require('ethers') const Safe = require('@safe-global/safe-core-sdk') const EthersAdapter = require('@safe-global/safe-ethers-lib') // const { ethers } = require('ethers') const SafeService = require('../gnosis/service') const SafeEthersSigner = require('../gnosis/signer') console.log("Setup provider") console.log("Setup SafeService") const service = new SafeService(process.env.SERVICE_URL) console.log("Setup Signer") const [owner] = await hre.ethers.getSigners(); console.log("Setup SafeEthersSigner", process.env.DEPLOYER_SAFE) const ethAdapter = new EthersAdapter.default({ ethers: hre.ethers, owner }) const safe = await Safe.default.create({ ethAdapter, safeAddress: process.env.DEPLOYER_SAFE }) const safeSigner = new SafeEthersSigner(safe, service, owner.provider) const contract = new Contract("0xe50c6391a6cb10f9B9Ef599aa1C68C82dD88Bd91", ["function pin(string newMessage)"], safeSigner) const proposedTx = await contract.functions.pin(`Local time: ${new Date().toLocaleString()}`) console.log("USER ACTION REQUIRED") console.log("Go to the Safe Web App to confirm the transaction") console.log(await proposedTx.wait()) console.log("Transaction has been executed")

ENV: DEPLOYER_SAFE=0xD934fbEd3CB5dAa5A82C14089cAcaD6035718163 SERVICE_URL=https://safe-transaction-goerli.safe.global
Can I use one domain name with two servers from different hosting companies?

I have a WordPress website. These days I'm having physical memory limitations. I tried to fix it and did well, except the memory (RAM) usage is still high. Not as high as before, but still not good. I have Linux hosting with cPanel at GoDaddy, which is where my domain is based, and I have another hosting account at Obambu. Can the domain work under the two hosts? I mean with the same database and both servers presenting content to users?

There is a technique that could make it work, sort-of: round robin DNS. You can add two DNS records for the same name with two different IP addresses. In theory, clients should get one of the two IP values and half your traffic should hit each of your servers. In practice round robin DNS has problems:

One of your servers will end up getting more traffic than the other. This is due to DNS servers making a choice for all their users and serving just one of the two values to many users.

If one of your servers goes down, the half of users that would hit it won't be able to get to your site. There is nothing that will route them to the other server that is still up. This makes your site twice as fragile as it was with only one server and makes it hard to do scheduled down time and maintenance.

It is better to get two servers at the same web host and put a load balancer in front of them. A load balancer only works on a local network, but it routes traffic to servers based on whether or not they are actually up. It also allows you to control traffic percentages much better.

With either round robin DNS or a load balancer, you will need two central things to run WordPress:

Your database, which stores posts and users
Your wp-content directory, which stores images and themes

To use two servers, you will need to set both of them up to use a central database and a network file system for wp-content.
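The "nothing will route them to the other server" caveat is about DNS itself; a client that is willing to iterate over all returned addresses gets crude failover on its own. A tiny sketch of that client-side behavior (a hypothetical helper, not part of the answer, and not something round-robin DNS guarantees every client does):

```python
def fetch_with_failover(addresses, fetch):
    """Try each resolved address in turn. Round-robin DNS hands the
    client a list of addresses with no health information, so the
    only recourse when one server is down is to try the next one."""
    last_error = None
    for addr in addresses:
        try:
            return fetch(addr)
        except ConnectionError as e:
            last_error = e  # server down: fall through to the next address
    raise last_error  # every address failed
```

A load balancer removes this burden from the client entirely, which is part of why the answer recommends it over round-robin DNS.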
Another option is to continue with a single server but put a content delivery network (CDN) in front of it. A CDN is a set of caching proxy servers designed to be close to your users. A CDN will reduce the load on your main server while making your site faster for users. Because there is caching, it is hard to use this technique with highly dynamic websites.

Can I use CDN plugins?

A CDN is an external service, not a plugin. Here is an article comparing seven of them for WordPress: http://www.wpbeginner.com/showcase/best-wordpress-cdn-services/

Technically both problems you mentioned can be overcome using "failover" type services offered by some DNS providers. These would use low TTLs and monitor both hosts in order to stop serving records to one particular host if it goes down.

There are also problems with failover services. First they are usually fairly expensive. Second, failover can take an hour, even with low TTL values. We have found that low TTL values are not universally honored, so not all traffic moves over in a timely manner. I'd recommend a load balancer first, and then a fail over to a second load balancer in a different geographic location for when something catastrophic happens to your primary hosting.
android null pointer exception

This is my code. When I implement getItemCount (1) it shows "non-static method cannot be referenced"; when I implement count (2) it shows a null pointer exception on the return statement. Help me to fix this.

final int speedScroll = 1000; final Handler handler = new Handler(); final Runnable runnable = new Runnable() { int count = 0; @Override public void run() { if(count == Adapter4.getItemCount2()) count =0; if(count < Adapter4.getItemCount2()){ recyclerView4.smoothScrollToPosition(++count); handler.postDelayed(this,speedScroll); } } }; handler.postDelayed(runnable,speedScroll); }

@Override public int getItemCount() { return albumList.size(); }

public static int getItemCount2() { List<Album4> albumList=null; return albumList.size(); }

You are making it null, and how can you get the size of a null variable? Try something like List<Album4> albumList = new ArrayList<>(), which will return 0.

I'm assuming the variable albumList is global, so try to remove or at least rename the variable with the same name in your getItemCount2 method.

There are two things which may cause this error. The 1st is:

@Override public int getItemCount() { return albumList.size(); }

Maybe albumList is not yet initialized, and this is causing the null exception. The 2nd is:

public static int getItemCount2() { List<Album4> albumList=null; return albumList.size(); }

You initialise the list with null and then return its size, which will cause a null exception.

Try this, assuming the variable albumList is global and not null:

final int speedScroll = 1000; final Handler handler = new Handler(); final Runnable runnable = new Runnable() { int count = 0; @Override public void run() { if(count == albumList.size()) count =0; if(count < albumList.size()){ recyclerView4.smoothScrollToPosition(++count); handler.postDelayed(this,speedScroll); } } }; handler.postDelayed(runnable,speedScroll); }

@Override public int getItemCount() { return albumList.size(); }
How to compile apv library in windows to read pdf file in android

I want to add a PDF viewing feature to my Android app. From what I found, the apv library is the best and fastest. Can anyone tell me how to compile this stuff on Windows? Suggestions are welcome for other PDF viewing libraries also. Thanks in advance.

Nothing, just moved to a PDF viewer available to me from the market and used that in my code. I used Adobe Reader in my code.

Hi @Greenhorn, successfully compiled the apv library in an eclipse project to read PDF.

Hey Shashank, please tell how you compiled the apv library; I am also looking for that.
What would be the best course of action to patent a newly discovered chemical?

Although it may sound unlikely, I believe I discovered a chemical. I am highly inexperienced in chemistry and law and am now seeking assistance. If I am correct in my discovery, this chemical has the potential to be very valuable. That being said, I would like to protect every aspect of the chemical and its potential. Since I am inexperienced in chemistry, I will need the assistance of a chemist to help identify the new chemical before I can patent it, but I do not want that chemist to steal or use the chemical other than identifying it. I could give the chemist a sample of the chemical to try identifying it, and there shouldn't be any more information given. Is an NDA the best choice of protection? Should I consult a lawyer? Any advice is accepted.

Not relevant to the answer, but how do you know you've "discovered" a new chemical if you are inexperienced in chemistry and don't have it identified?

I use identified to mean the composition and structure of the chemical. I also use "discovered" to describe the chemical as naturally occurring and not something I created. The chemical in question has very particular characteristics, and its existence has been debated. You could say it's just speculation, but it would greatly put my mind at ease to know if I am wrong or not. Sorry if I answered your question vaguely. Just think of it like you're drinking a beer and you don't know how the alcohol affects you, but you know it works, because you get drunk. Alcohol has a very particular effect.

I'm not at all sure you are able to patent a naturally occurring substance. You should ask that question. You definitely can patent a novel use for a naturally occurring substance or a novel method of processing it.

Since the chemical is naturally occurring, do you think an NDA can protect the method of obtaining or collecting the chemical?
I feel like I won't need to disclose this information, but in case I do, I would like to protect that info.

An NDA can protect any proprietary information you wish.

From my familiarity with the chemical art, the first step after discovering a new chemical is to register the compound and get a CAS number assigned by the American Chemical Society; they maintain a worldwide database of compounds. If the compound had already been disclosed to CAS, the CAS number would be provided for you. If the compound has been discovered, i.e., isolated from existing plants or organisms, then a patent would not be granted for the isolated chemical. As tested by courts and available case law, the mere isolation of a naturally occurring chemical would not grant patent rights. In isolating naturally occurring chemicals, the process of isolation, the purification, or the analytical methods can be patented.

I will need the assistance of a chemist to help identify the new chemical before I can patent it.

If the compound has not been identified, what are your claims going to be? Identify the chemical first and search for a disclosure of the compound in databases. Use an NDA with a chemist to identify the compound.

There are patents on product-by-process that do not claim or even understand the resulting substance, just a way to make it.

My claim would be that I have contained the chemical in a solution, and I could give a sample that doesn't have the chemical and a sample that does, whose difference a person should easily distinguish. If the chemical could be identified in a mixture of many chemicals then it would be an easy and fast process, I'd assume. If not, the chemist will have to help with finding better solvents or anything else that could lead to the isolation of the chemical.

You need a chemist and a registered patent practitioner specifically experienced in patenting chemicals. It is a world of its own within patent law.
Use an NDA with chemists you interview; it is not needed with patent attorneys or agents.
How to sendkeys unicode character VBA - without clipboard

I need to insert the character 'ü' into a web page with VBA via SendKeys.

Application.SendKeys "ü" does nothing.

Then I tried to send ALT+code combinations:

Application.SendKeys "%(0252)" does nothing for ALT+0252
Application.SendKeys "%(129)" does nothing for ALT+129

I cannot (don't want to) use the clipboard and paste it. It is a nice workaround but not applicable in this case. No, the API doesn't work. Any ideas?

What about using chr(252)?

Another approach: Inject Javascript into the page and insert from there

Are you able to send other chars? Is it only ü that causes trouble?

FYI I can use SendKeys "ü" to send ü to Notepad, so the problem may be with your receiving application and not with SendKeys.

@xidgel That did not work for me in Excel (trying to send keys to a worksheet), but it did work using Application.SendKeys "+´u", True

@FoxfireAndBurnsAndBurns: Application.SendKeys Chr(84) for "T" works, Application.SendKeys Chr(252) for "ü" does not.

@xidel: it works for Application.SendKeys "ü" when the default win language is set to German. BUT why does the combination with ALT do nothing? Application.SendKeys "%(0252)"? Manually (by human typing) it works nicely...

@kabarto ALT codes have to be typed on the numeric keypad, but AFAIK Application.SendKeys doesn't send digits from the numeric keypad. The digits along the top of the keyboard are different than the digits on the numeric keypad.

@kabarto Have you tried Application.SendKeys "+´u", True?
How to solve decics like $x^{10}+100x^2+160x+64=0$ having Galois group 10T33?

Using the approach described in Smart way to solve octics like $x^8+5992704x-304129728=0$ (the method DecomPoly available in GAP) the decic quadrinomial from this question can be decomposed into two quintics: $$x^{10}+100x^2+160x+64=(x^5+10ix+8i)(x^5-10ix-8i)$$ where $i$ is the imaginary unit. But here I got stuck. How to get an explicit formula for the roots of those solvable quintic trinomials? I have tried to use RadiRoot's method RootsOfPolynomialAsRadicals, but without success. It seems the Galois group 10T33 of the original quadrinomial is too big, and the method does not accept (quintic) polynomials with complex coefficients. I am afraid the answer will be too complicated for me, but I still have hope there is some tool / built-in method that can run the calculations without digging too deep into Galois theory...

What do you expect from an explicit formula? The exact solutions that are given by Wolfram make it seem like they cannot be simplified very well. One of the solutions for example is $x = -4/5 \,_4 F_3(1/5, 2/5, 3/5, 4/5;1/2, 3/4, 5/4;i/2)$

@Servaes The goal is an explicit expression for the roots containing only the 4 basic operations ($+$,$-$,$/$,$*$) and extracting n-th roots, i.e. expressing the roots by radicals.

@PeterForeman Thanks for the interesting answer. However, Galois group 10T33 is solvable by radicals.

A big difference to Smart way to solve octics like $x^8+5992704x-304129728=0$ is that in the prior example the field $E$ defined by the polynomial $f$ (of degree 8) had a subfield $S$ (of degree 4) over which it is quadratic. Thus finding the subfield reduced everything to the degree $\le 4$ formulas. Here you have two difficulties. The first is that -- while there still is a subfield -- the degree over this subfield is 5. So you cannot use a generic formula, but would need to go into the generic Galois theory machinery with resolvent, roots of unity etc. for degree $\ge 5$.
The second difficulty is that $E$ as an extension of $S$ will have Galois group not cyclic, but of order $20$. You thus will have to do three steps, corresponding to the composition series of the Frobenius group of order $20$. This can be done (even without the need to go to hypergeometric series, as Mathematica seems to do), but the resulting expression in radicals will, in all likelihood, be a nightmare, and if you are just looking for the result "out of curiosity" it will most likely not be worth the effort.

Thank you for the detailed explanations, which unfortunately confirmed that one needs to dig into the mentioned "machinery" to solve the above equation. For the "big science" indeed this task is not important, but for a person that likes such stuff it is an opportunity to learn something useful. Very likely your answer should be accepted as the best possible; still, I will wait some time for, hopefully, some less demanding steps (publicly available tools, scripts, ...).

The solution to solvable and depressed quintics can be succinctly expressed in the form $$x = z_1^{1/5}+z_2^{1/5}+z_3^{1/5}+z_4^{1/5}$$ with appropriate $5$th roots of unity affixed, and where the $z_i$ are the roots of its quartic Lagrange resolvent. (Kindly see the other answer.) In this manner, it may be written in "shorthand" and its Lagrange resolvent may be also interesting. For example, it turns out that for the OP's quintic, one needs the square root of the golden ratio.

To solve equations of Fermat prime degree $p=2^{n}+1$ with a solvable Galois group, one generally needs a resolvent equation of degree $2^n$ (which, using only nested square roots, can factor into quadratics). Thus to solve the solvable quintic, $$x^5+10i x+8i=0$$ one employs 4 roots of a quartic (factored into 2 quadratics) which in this case nicely involves the golden ratio $\phi$.
Let, $$\begin{aligned}\color{red}u &= \frac{\sqrt{2\,i}}{\;5}\sqrt{1+2\,i}\\ &= \frac{\sqrt{2\,i}}{\;5}\left(\sqrt{\frac{1+\sqrt5}2}+\sqrt{\frac{1-\sqrt5}2}\right)\\ &= 0.0971+0.4116i\end{aligned}$$ (Note: All numerical values are generated by Mathematica and truncated.) I. New answer The advantage of this version is there is only one $z^{1/5}$ root extraction and does not involve the $5$th root of several complex numbers (and consequent ambiguity) as in the old answer. The five roots $x_k$ are then $$x_k = u\,T+\frac1T-\frac{a}{T^2}+\frac{b}{T^3}$$ where $$T_k = \zeta^k \Big(\frac{ab}{u}\Big)^{1/5}$$ $$a=\frac1u\big(i-\sqrt{u-1}\big) = -0.05367 + 0.5009 i$$ $$b=\frac1u\big(-c+\sqrt{c^2+1}\big) = 0.24289 + 0.00326 i$$ $$\quad c = \frac{1-5i}2+\frac1u \;=\; 1.04322 - 4.8011i$$ with $\zeta = e^{2\pi i /5}$ and $k = 0,1,2,3,4$. II. Old answer Let $\color{red}u$ be as defined above. Then the resolvent quadratics are, $$z^2+2u(5u+2i)z+u^5 = 0\\z^2+2u(5u-2i)z-u^5 = 0$$ So the roots $z_i$ are, $$z_1 = -0.0657 - 0.3821 i\\ z_2=0.0191 - 0.0291i\\ z_3=0.0028 + 0.0027i\\ z_4=3.2437 - 1.1914i$$ Taking their $5$th roots and affixing a correct $5$th root of unity $\zeta = e^{2\pi i/5}$, $$z_1^{1/5}\zeta^2 = -0.3464 + 0.3758i \\z_2^{1/5}\zeta^2 = -0.2934 + 0.1511i \\z_3^{1/5}\zeta^2 = -0.4632 + 0.6855i \\z_4^{1/5}\zeta^4 = 0.3092 - 1.2435i$$ The sum of those $5$th roots, $$x_1 = z_1^{1/5}\zeta^2+z_2^{1/5}\zeta^2+z_3^{1/5}\zeta^2+z_4^{1/5}\zeta^4 = -0.7938 - 0.03104i$$ is then a root of the quintic, $$x^5+10i x+8i=0$$ P.S. The other roots can be found by affixing appropriate $\zeta^k$ to the $z_i^{1/5}$. @Klajok, have you found other irreducible decic quadrinomials like in the post? I have not found any other irreducible decic quadrinomials regardless the above concise form. More generally, I searched for irreducible $x^{n}+(ax^{m}+b)^2$ of degrees 10, 12 and 16, for coprime $n$, $m$, with integer $a$, $b$ up to millions... without success. 
In case of reducibles, small examples: $x^{10}+(4x+8)^2$, $x^{10}+(38x+35)^2$, and similar: $x^{10}-(33x+34)^2$, $(13x^5)^2-(10x+3)^2$.

How did you get the new answer?

Since both quintic factors are in Bring–Jerrard normal form, Malfatti's method (described here) may be applied. Consider the following mpmath program (so all roots are principal unless otherwise stated):

#!/usr/bin/env python3
from mpmath import *
i = root(-1, 2, s)
a = sqrt(1+2*i)
l1 = [1, 4*(-2 + i + a - i*a)/5, 4*(-7*a + i*a)/3125]
z1 = root(polyroots(l1)[0], 5, n1)
z2 = root(polyroots(l1)[1], 5, n2)
l2 = [1, 4*(-2 + i - a + i*a)/5, 4*(+7*a - i*a)/3125]
z3 = root(polyroots(l2)[0], 5, n3)
z4 = root(polyroots(l2)[1], 5, n4)
r = z1 + z2 + z3 + z4
print(polyval([1, 0, 0, 0, 10*i, 8*i], r))

Then for $s=0,1$ the $n_i$ must be set to these values so that $r$ will end up being a root of the quintic beneath it: $$s=0:\begin{array}{cccc} n_1&n_2&n_3&n_4\\ 0&4&1&0\\ 1&3&4&2\\ 2&2&2&4\\ 3&1&0&1\\ 4&0&3&3\end{array}\qquad s=1:\begin{array}{cccc}n_1&n_2&n_3&n_4\\ 0&1&4&0\\ 1&0&2&2\\ 2&4&0&4\\ 3&3&3&1\\ 4&2&1&3\end{array}$$ Since it is easy to solve a quadratic, this leads to explicit reproducible formulas for all the initial decic's roots.
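As a quick numerical sanity check (not part of the thread), the factorization quoted at the top of the question can be confirmed by multiplying the two quintic factors back together with NumPy:

```python
import numpy as np

i = 1j
# Coefficients in descending powers of x.
p1 = np.array([1, 0, 0, 0, 10*i, 8*i])    # x^5 + 10i*x + 8i
p2 = np.array([1, 0, 0, 0, -10*i, -8*i])  # x^5 - 10i*x - 8i

product = np.polymul(p1, p2)

# x^10 + 100x^2 + 160x + 64, again in descending powers.
decic = np.zeros(11, dtype=complex)
decic[0], decic[8], decic[9], decic[10] = 1, 100, 160, 64

print(np.allclose(product, decic))  # True
```

This is just the algebraic identity $x^{10} - (10ix+8i)^2 = x^{10} + (10x+8)^2$ checked in floating point.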
reading data into .ejs from a CSV in node.js

I am extracting data from a CSV using d3.csv and routing it into an ejs.

d3.csv("http://vhost11.lnu.se:20090/assig2/data1.csv", function(data) { var data1 = data; console.log(data1[0]); app.get('/doctor', isLoggedIn, function(req, res) { res.render('doctor.ejs', { user : req.user, datap1 : data1 }); }); });

The console shows the correct output. In the ejs, I am calling the same by <%= datap1[0]%> and it shows [object object]. What am I doing wrong here?

I guess console.log converts the passed object to a string. Similarly you can use e.g.: datap1: JSON.stringify(data1).

didn't work. shows nothing...

To answer your question: [object object] means that .toString() was called on your object and this is its output. Since your object doesn't override .toString(), the default Object.prototype.toString() was called. Ejs just rendered what you provided. You should override toString(), or better in your case, convert the data properly to e.g. a string before passing it to ejs. At the client end you can parse the string back to an object using JSON.parse(), while sending it as a string from the server side:

app.get('/doctor', isLoggedIn, function(req, res) { res.render('doctor.ejs', { user : req.user, datap1 : JSON.stringify(data1) }); }); });

did you console.log( <%= datap1%>); in your ejs code... I want to see the ejs too

If you will use the above code then you need to put <%= datap1 %> in the ejs, not [0].
How to improve my Optimization algorithm?

Recently, I was playing one of my favorite games, and I came across a problem: This game has a store, and in that store, skins for specific characters are sold, and I'm planning to buy them. There are 34 skins available, and each one costs 1800 credits (the game currency). The only way of earning those credits is buying packs of them with real money. There are 6 packs, as I show below:

Pack Amount of credits Price
1 600 19.90
2 1200 41.50
3 2670 83.50
4 4920 144.90
5 7560 207.90
6 16000 414.90

My first thought was to calculate what was the best way (aka the way of spending less money) to buy any quantity of skins (1 -> 34), but buying N amount of just a single type of pack. So, I wrote this code:

import numpy as np
import pandas as pd
import math as m  # missing in the original snippet; needed for m.ceil below

#Cost per Skin (Cs)
c_ps = 1800
#Amount of credits per pack (Qp)
q_ppc = [600, 1200, 2670, 4920, 7560, 16000]
#Cost per pack (Cp)
c_ppc = [19.9, 41.5, 83.3, 144.9, 207.9, 414.9]
#Amount of skins to be bought (Qd)
qtd_d = 0
#Total cost of the transaction (Ct)
ct_total = 0
#Total amount of packs (Pt)
qtd_total = 0
#Complete list
lista_completa = np.zeros((34,4), dtype=np.float16)
#count var
j = 0

while True:
    #best option (Bb)
    best_opt = 0
    #best amount (Bq)
    best_qtd = 0
    #best cost (Bc)
    best_cost = 50000
    qtd_d += 1
    #Cost of the nº of skins to be bought
    custo = (c_ps * qtd_d)
    for opt in q_ppc:
        i = q_ppc.index(opt)
        qtd_total = m.ceil(custo/opt)
        ct_total = (qtd_total * c_ppc[i])
        if best_cost > ct_total:
            best_opt = opt
            best_qtd = qtd_total
            best_cost = ct_total
    lista_completa[j] = [int(qtd_d), int(best_opt), int(best_qtd), float(np.round(best_cost, decimals = 1))]
    j += 1
    if j == 34:
        break

float_formatter = '{:.2F}'.format
np.set_printoptions(formatter={'float_kind':float_formatter})
pd.set_option('display.float_format','{:.2f}'.format)
df = pd.DataFrame(lista_completa, columns = ['Quantidade desejada', 'Melhor opção de pacote', 'Quantidade de pacotes necessária', 'Custo total'])
df.set_index('Quantidade desejada', inplace = True)
df

That gave me the following output:

Amount of Skins Best pack option Required amount of packs Final Cost
1.00 600.00 3.00 59.69
2.00 600.00 6.00 119.38
3.00 600.00 9.00 179.12
4.00 7560.00 1.00 207.88
5.00 4920.00 2.00 289.75
6.00 600.00 18.00 358.25
7.00 16000.00 1.00 415.00
8.00 16000.00 1.00 415.00
9.00 600.00 27.00 537.50
10.00 4920.00 4.00 579.50
11.00 7560.00 3.00 623.50
12.00 7560.00 3.00 623.50
13.00 4920.00 5.00 724.50
14.00 16000.00 2.00 830.00
15.00 16000.00 2.00 830.00
16.00 16000.00 2.00 830.00
17.00 16000.00 2.00 830.00
18.00 4920.00 7.00 1014.50
19.00 4920.00 7.00 1014.50
20.00 7560.00 5.00 1040.00
21.00 7560.00 5.00 1040.00
22.00 16000.00 3.00 1245.00
23.00 16000.00 3.00 1245.00
24.00 16000.00 3.00 1245.00
25.00 16000.00 3.00 1245.00
26.00 16000.00 3.00 1245.00
27.00 4920.00 10.00 1449.00
28.00 7560.00 7.00 1455.00
29.00 7560.00 7.00 1455.00
30.00 4920.00 11.00 1594.00
31.00 16000.00 4.00 1660.00
32.00 16000.00 4.00 1660.00
33.00 16000.00 4.00 1660.00
34.00 16000.00 4.00 1660.00

Now, my question is: Is there a way to calculate and get the best combination of packs, mixed or not, for each number of skins? I didn't think of anything but to first calculate the maximum amount of packs which I would need to buy all the 34 skins (102 packs of 600 credits). But I got stuck on this thought, and hope for you to help me solve this! Thank you all, in advance!

What you are trying to solve here is a variation of the Knapsack Problem. This means there is no known polynomial-time solution. However, you can do a few optimizations:

Under no circumstance will somebody buy pack #2.
It is strictly inferior to buying 2 (or 1) pack #1, therefore we can immediately eliminate it so the algorithm does not waste time on it ;)

import math
from pprint import pprint

import numpy as np

def cheapest_option(
        pack_credits: np.ndarray,
        pack_costs: np.ndarray,
        # generalize problem to arbitrary amounts of credits,
        # not just multiples of 1_800
        min_credits: int,
        prev_assignment: np.ndarray = None):

    def permut(total: int, length: int, at_least: int):
        """
        adapted from: https://stackoverflow.com/a/7748851/12998205
        Creates all possible permutations of length `length`
        such that `at_least <= sum(permutation) <= total`
        """
        if length == 1:
            if at_least >= 0:
                yield [at_least]
            else:
                yield [total]
        else:
            for i in range(total + 1):
                n_tot = total - i
                if n_tot == 0:
                    yield [i] + [0] * (length - 1)
                else:
                    for permutation in permut(n_tot, length - 1, at_least - i):
                        yield [i] + permutation

    # if the previous assignment would be enough to cover the required
    # amount of credits, we can just re-use that solution, since we
    # know it is optimal
    if prev_assignment is not None:
        prev_credits = prev_assignment.dot(pack_credits)
        if prev_credits >= min_credits:
            return prev_assignment.copy(), round(prev_assignment.dot(pack_costs), 2)

    # the maximum amount of packs is reached when we ONLY buy the cheapest one;
    # this serves as an upper bound for the number of packs we have to buy
    n_packs_most = math.ceil(min_credits / pack_credits[0])
    # analogously, the least amount of packs we have to buy
    # is if we only buy the most expensive one
    n_packs_least = math.ceil(min_credits / pack_credits[-1])

    # create permutation table as numpy array for fast lookups;
    # convert to float so np.dot does not have to convert the array each time
    table = np.asarray(
        list(permut(n_packs_most, len(pack_credits), n_packs_least)),
        dtype=float
    )

    # our initial guess
    optimal = np.zeros_like(pack_credits)
    optimal[0] = n_packs_most
    optimal_costs = optimal.dot(pack_costs)

    for assignment in table:
        # skip assignments that do not reach the required credits
        if assignment.dot(pack_credits) >= min_credits:
            curr_costs = assignment.dot(pack_costs)
            # update with new best solution
            if curr_costs < optimal_costs:
                optimal, optimal_costs = assignment, curr_costs

    # convert back to int
    return optimal.astype(int), round(optimal.dot(pack_costs), 2)

# store as floats to speed up np.dot calls
pack_credits = np.asarray([600., 2_670., 4_920., 7_560., 16_000.])
pack_costs = np.asarray([19.90, 83.30, 144.90, 207.90, 414.90])
skin_cost = 1_800

opt_ass, opt_cost = cheapest_option(pack_credits, pack_costs, min_credits=skin_cost)
results_optimal = {1: (opt_ass, opt_cost)}

# still takes around a minute to complete
for n_skins in range(2, 35):
    results_optimal[n_skins] = (opt_ass, opt_cost) = \
        cheapest_option(
            pack_credits,
            pack_costs,
            min_credits=n_skins * skin_cost,
            # attempt to reuse previous solution
            prev_assignment=opt_ass
        )

pprint(results_optimal)

{1: (array([3, 0, 0, 0, 0]), 59.7),
 2: (array([6, 0, 0, 0, 0]), 119.4),
 3: (array([9, 0, 0, 0, 0]), 179.1),
 4: (array([0, 0, 0, 1, 0]), 207.9),
 5: (array([15, 0, 0, 0, 0]), 298.5),
 6: (array([18, 0, 0, 0, 0]), 358.2),
 7: (array([0, 0, 0, 0, 1]), 414.9),
 8: (array([0, 0, 0, 0, 1]), 414.9),
 9: (array([1, 0, 0, 0, 1]), 434.8),
 10: (array([0, 1, 0, 0, 1]), 498.2),
 11: (array([0, 0, 1, 0, 1]), 559.8),
 12: (array([0, 0, 0, 1, 1]), 622.8),
 13: (array([0, 0, 0, 1, 1]), 622.8),
 14: (array([0, 0, 0, 0, 2]), 829.8),
 15: (array([0, 0, 0, 0, 2]), 829.8),
 16: (array([0, 0, 0, 0, 2]), 829.8),
 17: (array([0, 0, 0, 0, 2]), 829.8),
 18: (array([1, 0, 0, 0, 2]), 849.7),
 19: (array([0, 1, 0, 0, 2]), 913.1),
 20: (array([0, 0, 1, 0, 2]), 974.7),
 21: (array([0, 0, 0, 1, 2]), 1037.7),
 22: (array([0, 0, 0, 0, 3]), 1244.7),
 23: (array([0, 0, 0, 0, 3]), 1244.7),
 24: (array([0, 0, 0, 0, 3]), 1244.7),
 25: (array([0, 0, 0, 0, 3]), 1244.7),
 26: (array([0, 0, 0, 0, 3]), 1244.7),
 27: (array([1, 0, 0, 0, 3]), 1264.6),
 28: (array([0, 1, 0, 0, 3]), 1328.0),
 29: (array([0, 0, 1, 0, 3]), 1389.6),
 30: (array([0, 0, 0, 1, 3]), 1452.6),
 31: (array([0, 0, 0, 0, 4]), 1659.6),
 32: (array([0, 0, 0, 0, 4]), 1659.6),
 33: (array([0, 0, 0, 0, 4]), 1659.6),
 34: (array([0, 0, 0, 0, 4]), 1659.6)}

Thank you very much for the answer and help! It was incredible. This code has only piqued my curiosity even more; I'm going to study it from end to end!
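Since the total credit requirement is bounded (at most 34 x 1800 = 61,200), the exact optimum can also be computed with a small unbounded-knapsack dynamic program instead of enumerating pack combinations. Here is a sketch, reusing the prices as typed in the code above (note pack 3 is entered there as 83.30, although the question's table says 83.50), with prices in cents to avoid float drift. Interestingly, for several counts this finds cheaper mixes than the permutation search, e.g. 7 skins for 370.80 (1 x 7560 + 1 x 2670 + 4 x 600 credits) versus 414.90, which suggests the permutation generator above skips some combinations:

```python
# min_cost[c] = cheapest price (in cents) for buying AT LEAST c credits;
# classic unbounded-knapsack / coin-change recurrence. Pack #2 is included
# here since the DP handles it for free (it simply never gets chosen).
pack_credits = [600, 1200, 2670, 4920, 7560, 16000]
pack_cents = [1990, 4150, 8330, 14490, 20790, 41490]

max_credits = 34 * 1800
INF = float("inf")
min_cost = [0] + [INF] * max_credits
for need in range(1, max_credits + 1):
    for credits, cents in zip(pack_credits, pack_cents):
        # buying this pack leaves max(need - credits, 0) still to cover
        min_cost[need] = min(min_cost[need],
                             min_cost[max(need - credits, 0)] + cents)

for n_skins in (1, 4, 7):
    print(n_skins, min_cost[n_skins * 1800] / 100)  # -> 59.7, 207.9, 370.8
```

The table is only 61,200 x 6 entries, so this runs in well under a second for all 34 skin counts; recovering which packs to buy is a standard backtrack over the same recurrence.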
common-pile/stackexchange_filtered
DecryptException in BaseEncrypter.php line 48: The MAC is invalid. (Laravel 5.2)

I know this question has already been asked, but none of the answers solve my issue. I am facing this error: DecryptException in BaseEncrypter.php line 48: The MAC is invalid. I checked my .env file: there is no space in APP_DEBUG or APP_KEY. I tried generating a new key, but nothing worked. Laravel framework version 5.2.45. Post the code... Why the downvote? What is the issue? Read my first comment.

After surfing for around two hours and trying many solutions from different sources, none of them solved my problem. One common cause is a space in the .env file's APP_DEBUG or APP_KEY values; removing the space, if it exists, can solve the problem. Clearing the cache and running composer dump-autoload can also help in some cases. But in my case the error was either DecryptException in BaseEncrypter.php line 48: The MAC is invalid or DecryptException in BaseEncrypter.php line 45: The payload is invalid. The real cause was a database column: its type was varchar with a length of 256, and since the encrypted mwsAuthToken value is longer than 256 characters, the column was truncating it. Changing the column type to text solved my problem.
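To see why truncation produces exactly this error: Laravel's encrypter stores a payload that includes an HMAC over the ciphertext and verifies it on decrypt, so a payload silently truncated by the database can no longer pass verification. The following is a generic encrypt-then-MAC sketch in Python, not Laravel's actual implementation, but it fails the same way:

```python
import hashlib
import hmac
import os

key = os.urandom(32)

def protect(value: bytes) -> str:
    # Real code would also encrypt; truncation breaks the MAC check either way.
    mac = hmac.new(key, value, hashlib.sha256).hexdigest()  # 64 hex chars
    return mac + value.hex()

def unprotect(payload: str) -> bytes:
    mac, data = payload[:64], bytes.fromhex(payload[64:])
    if not hmac.compare_digest(hmac.new(key, data, hashlib.sha256).hexdigest(), mac):
        raise ValueError("The MAC is invalid.")
    return data

token = protect(b"amzn.mws." + b"x" * 300)  # a long value, like an mwsAuthToken
assert unprotect(token)                      # full payload round-trips fine
truncated = token[:256]                      # what a varchar(256) column keeps
try:
    unprotect(truncated)
except ValueError as e:
    print(e)                                 # -> The MAC is invalid.
```

This is also why a text column fixes it: the stored payload arrives back intact, so the recomputed MAC matches again.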
abcde: What is a ~/.abcde.conf file to rip to multiple formats? The 'canonical' abcde conf file section of the website at andrews-corner has been removed; what is a conf file that I can use now to rip my audio CDs with abcde to multiple different formats at the same time under Ubuntu?

Disclaimer: This is my web site and I am a former developer of abcde... The author of andrews-corner has moved on to other areas of interest now, but preserved here is a ~/.abcde.conf that will rip to 11 different audio formats at the same time:

# -----------------$HOME/.abcde.conf-----------------
#
# A sample configuration file to convert music cds to
# MP3, Ogg Vorbis, FLAC, Musepack, AAC, WavPack, Opus,
# Monkey's Audio (ape), True Audio, AC3 and mp2, 11 formats
# at the same time! Using abcde version 2.7.2 release version.
#
# Acknowledgements to http://andrews-corner.org
# --------------------------------------------------

# Encode tracks immediately after reading. Saves disk space, gives
# better reading of 'scratchy' disks and better troubleshooting of
# encoding process but slows the operation of abcde quite a bit:
LOWDISK=y

# Specify the method to use to retrieve the track information,
# the alternative is to specify 'musicbrainz':
CDDBMETHOD=cddb

# With the demise of freedb (thanks for the years of service!)
# we move to an alternative:
CDDBURL="http://gnudb.gnudb.org/~cddb/cddb.cgi"

# Make a local cache of cddb entries and then volunteer to use
# these entries when and if they match the cd:
CDDBCOPYLOCAL="y"
CDDBLOCALDIR="$HOME/.cddb"
CDDBLOCALRECURSIVE="y"
CDDBUSELOCAL="y"

OGGENCODERSYNTAX=oggenc    # Specify encoder for Ogg Vorbis
MP3ENCODERSYNTAX=lame      # Specify encoder for MP3
FLACENCODERSYNTAX=flac     # Specify encoder for FLAC
MPCENCODERSYNTAX=mpcenc    # Specify encoder for Musepack
AACENCODERSYNTAX=fdkaac    # Specify encoder for AAC
OPUSENCODERSYNTAX=opusenc  # Specify encoder for Opus
WVENCODERSYNTAX=wavpack    # Specify encoder for WavPack
APENCODERSYNTAX=mac        # Specify encoder for Monkey's Audio
TTAENCODERSYNTAX=tta       # Specify encoder for True Audio
MP2ENCODERSYNTAX=twolame   # Specify encoder for MP2
MKAENCODERSYNTAX=ffmpeg    # Specify encoder for MKA (AC3 via FFmpeg)

OGGENC=oggenc              # Path to Ogg Vorbis encoder
LAME=lame                  # Path to MP3 encoder
FLAC=flac                  # Path to FLAC encoder
MPCENC=mpcenc              # Path to Musepack encoder
FDKAAC=fdkaac              # Path to the AAC encoder
OPUSENC=opusenc            # Path to Opus encoder
WVENC=wavpack              # Path to WavPack encoder
APENC=mac                  # Path to Monkey's Audio encoder
TTA=tta                    # Path to True Audio encoder
TWOLAME=twolame            # Path to MP2 encoder
FFMPEG=ffmpeg              # Path to FFmpeg (AC3 via FFmpeg)

OGGENCOPTS='-q 6'                    # Options for Ogg Vorbis
LAMEOPTS='-V 2'                      # Options for MP3
FLACOPTS='-s -e -V -8'               # Options for FLAC
MPCENCOPTS='--extreme'               # Options for Musepack
FDKAACENCOPTS='-p 2 -m 5 -a 1'       # Options for fdkaac
OPUSENCOPTS="--vbr --bitrate 128"    # Options for Opus
WVENCOPTS="-hx3"                     # Options for WavPack
APENCOPTS="-c4000"                   # Options for Monkey's Audio
TTAENCOPTS=""                        # Options for True Audio
TWOLAMENCOPTS="--bitrate 320"        # Options for MP2
FFMPEGENCOPTS="-c:a ac3 -b:a 448k"   # Options for MKA (AC3 via FFmpeg)

OUTPUTTYPE="ogg,mp3,flac,mpc,m4a,opus,wv,ape,tta,mp2,mka"  # Encode to 11 formats!

# The cd ripping program to use.
# There are a few choices here: cdda2wav, dagrab,
# cddafs (Mac OS X only) and flac. New to abcde 2.7 is 'libcdio'.
CDROMREADERSYNTAX=cdparanoia

# Give the location of the ripping program and pass any extra options,
# if using libcdio set 'CD_PARANOIA=cd-paranoia'.
CDPARANOIA=cdparanoia
CDPARANOIAOPTS="--never-skip=40"

# Give the location of the CD identification program:
CDDISCID=cd-discid

# Give the base location here for the encoded music files.
OUTPUTDIR="$HOME/Music"

# The default actions that abcde will take.
ACTIONS=cddb,playlist,read,encode,tag,move,clean

# Decide here how you want the tracks labelled for a standard 'single-artist',
# multi-track encode and also for a multi-track, 'various-artist' encode:
OUTPUTFORMAT='${OUTPUT}/${ARTISTFILE}-${ALBUMFILE}/${TRACKNUM}.${TRACKFILE}'
VAOUTPUTFORMAT='${OUTPUT}/Various-${ALBUMFILE}/${TRACKNUM}.${ARTISTFILE}-${TRACKFILE}'

# Decide here how you want the tracks labelled for a standard 'single-artist',
# single-track encode and also for a single-track 'various-artist' encode.
# (Create a single-track encode with 'abcde -1' from the commandline.)
ONETRACKOUTPUTFORMAT='${OUTPUT}/${ARTISTFILE}-${ALBUMFILE}/${ALBUMFILE}'
VAONETRACKOUTPUTFORMAT='${OUTPUT}/Various-${ALBUMFILE}/${ALBUMFILE}'

# Create playlists for single and various-artist encodes. I would suggest
# commenting these out for single-track encoding.
PLAYLISTFORMAT='${OUTPUT}/${ARTISTFILE}-${ALBUMFILE}/${ALBUMFILE}.m3u'
VAPLAYLISTFORMAT='${OUTPUT}/Various-${ALBUMFILE}/${ALBUMFILE}.m3u'

# This function takes out dots preceding the album name, and removes a grab
# bag of illegal characters. It allows spaces, if you do not wish spaces add
# in -e 's/ /_/g' after the first sed command.
mungefilename ()
{
  echo "$@" | sed -e 's/^\.*//' | tr -d ":><|*/\"'?[:cntrl:]"
}

MAXPROCS=2                      # Run a few encoders simultaneously
PADTRACKS=y                     # Makes tracks 01 02 not 1 2
EXTRAVERBOSE=2                  # Useful for debugging
COMMENT='abcde version 2.7.2'   # Place a comment...
EJECTCD=y                       # Please eject cd when finished :-)

Keep in mind that this ~/.abcde.conf can also be used for a single audio codec rip and encode by using something like the following: abcde -o mp3. This will utilise the 'mp3' section of the conf file only... How cool is the command line :) Take a look here also... a collection of abcde.conf files...

If you want to start from a full list of configuration options and defaults, copy /etc/abcde.conf to ~/.abcde.conf and edit it. Putting only OUTPUTTYPE="ogg,mp3,flac,mpc,m4a,opus,wv,ape,tta,mp2,mka" into that file will provide multiple file type outputs without changing any other configuration options.
Does overpaying estimated taxes lead to loss of non-refundable tax credit? Say: My total fed tax liability is 60K. I've made estimated payments of 65K. (keeping withholdings out of picture here for simplicity) I'll expect a tax refund of 5K. I also installed solar panels and so expecting a non-refundable tax credit of 3K. How much refund would I get, 8K or just 5K? Is non-refundable tax credit anyways affected by the tax payments (withholdings or estimated payments)? Are you in the United States? @ChrisW.Rea, yes. You would get an $8k refund. Non-refundable tax credits don't literally mean you can't get a refund in the sense that most people think (i.e. a check or direct deposit from the IRS); it means you can't get a refund beyond your payments (i.e. withholding and estimated taxes). Look at Form 1040. Your non-refundable tax credits are totaled on Schedule 3 line 7, which transfers to 1040 line 20. This reduces your tax (but not below $0, i.e. non-refundable), excluding self-employment and a few other special taxes on Schedule 2. So your total tax is effectively $57k instead of $60k. Then your estimated tax payments (1040 line 26) are treated similarly to a refundable tax credit, which means you get the entire $65k - $57k = $8k refunded. It might be helpful (if my understanding here is right) to spell this out explicitly: "non-refundable" means that it can't reduce your NET taxes below zero, NOT that you can't get a refund of it (as long as it would still leave your net taxes positive.) It's kind of weird terminology. @GlennWillen Good suggestion, I edited to try to clarify. If your income is so low that your tax liability would either be zero, or less than the $3,000 non-refundable tax credit, then you wouldn't be entitled to the whole $3,000 credit. What happens in April each year is a settlement between what you have paid them, and what you should have paid them. Overpaying doesn't change what you should have paid them. 
Underpaying only impacts what you should have paid if it exposes you to interest and penalties. The fact you are getting a refund in April is different from the refundable status or non-refundable status of a tax credit. If a married couple with no kids has only $15,000 in income, which is less than the $24,000 standard deduction, then they have no tax liability. This makes them ineligible for non-refundable tax credits. Congress is very careful about this point because it can determine who can and who can't benefit from the tax credit.
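Working the question's numbers through the Form 1040 flow described above (values in dollars, simplified to just the fields mentioned):

```python
tax_before_credits = 60_000    # total tax liability
nonrefundable_credit = 3_000   # solar credit, Schedule 3 -> 1040 line 20
estimated_payments = 65_000    # 1040 line 26

# Non-refundable: reduces tax, but never below zero.
tax_after_credit = max(tax_before_credits - nonrefundable_credit, 0)
refund = estimated_payments - tax_after_credit
print(tax_after_credit, refund)   # -> 57000 8000

# Contrast: with only $2,000 of tax, the $3,000 credit is capped at $2,000.
low_tax_after_credit = max(2_000 - 3_000, 0)
print(low_tax_after_credit)       # -> 0; the extra $1,000 is lost, not refunded
```

So the refund is $8k in the question's scenario, and the "non-refundable" cap only bites when the credit exceeds the tax itself, exactly as the second answer warns.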
AttributeError: 'str' object has no attribute 'shape' when resizing an image using scikit-image

I'm trying to iterate through a directory and resize every image using scikit-image, but I keep getting the following error:

b'scene01601.png'
Traceback (most recent call last):
  File "preprocessingdatacopy.py", line 16, in <module>
    image_resized = resize(filename, (128, 128))
  File "/home/briannagopaul/PycharmProjects/DogoAutoencoder/venv/lib/python3.6/site-packages/skimage/transform/_warps.py", line 104, in resize
    input_shape = image.shape
AttributeError: 'str' object has no attribute 'shape'

My code:

import skimage
from sklearn import preprocessing
from skimage import data, color
import os
from skimage.transform import resize, rescale
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

directory_in_str = "/home/briannagopaul/imagemickey/"
directory = os.fsencode(directory_in_str)
for file in os.listdir(directory):
    print(file)
    filename = os.fsdecode(file)
    if filename.endswith(".png"):
        image_resized = resize(filename, (128, 128))
        img = mpimg.imread(file)
        imgplot = plt.imshow(img)
        plt.show()
        filename.shape()

It looks like you're trying to get the shape of the fileNAME rather than the actual FILE. The filename is a string, which doesn't have a shape attribute. You're going to have to load the actual file into a variable, then take the shape of that variable. Thanks for the help. I tried using x = file.open(), then resized x and got the same error. Do you have any suggestions on how I should go about loading the file? @JasonKLai First off, unless the code is run in the same directory as the images, you'll want to include the directory in the filename:

for file in os.listdir(directory):
    print(file)
    filename = directory_in_str + os.fsdecode(file)

But to address your question, you are already reading the image via the mpimg.imread line and storing it as a numpy array called img.
With that img variable, you can run it through the rest of your lines:

if filename.endswith(".png"):
    img = mpimg.imread(filename)
    image_resized = resize(img, (128, 128))
    imgplot = plt.imshow(img)
    plt.show()
    print(img.shape)

Note that I changed two separate calls from filename to img. That's because filename is simply the name of the file, not the actual image data, which in your case is stored in img.
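Putting the whole fix together, a consolidated sketch (the directory path is the asker's; the helper and function names here are mine): build the full path, load the image into an array first, and resize the array rather than the filename string.

```python
import os

def png_paths(directory: str):
    """Yield full paths of the .png files in `directory`."""
    for name in sorted(os.listdir(directory)):
        if name.endswith(".png"):
            yield os.path.join(directory, name)

def resize_all(directory: str, shape=(128, 128)):
    """Load each PNG into an array and resize the array, not the name."""
    import matplotlib.image as mpimg           # third-party, imported lazily
    from skimage.transform import resize
    for path in png_paths(directory):
        img = mpimg.imread(path)               # numpy array; this HAS .shape
        yield path, resize(img, shape)

# Usage (assumed path from the question):
# for path, small in resize_all("/home/briannagopaul/imagemickey/"):
#     print(path, small.shape)
```

Joining the directory onto each name also avoids the related pitfall the comment mentions: `os.listdir` returns bare names, so opening them only works when the script runs inside that directory.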
How to add simple form fields to a BlackBerry application? Assuming I have a class inherited from MainScreen, how do I put up a couple of form fields for user input?

// Inside the constructor
LabelField label1 = new LabelField("Hello World Demo", LabelField.ELLIPSIS | LabelField.USE_ALL_WIDTH);
EditField firstNameEditField = new EditField(EditField.NO_NEWLINE | EditField.NON_SPELLCHECKABLE | EditField.NO_COMPLEX_INPUT);
EditField lastNameEditField = new EditField(EditField.NO_NEWLINE | EditField.NON_SPELLCHECKABLE | EditField.NO_COMPLEX_INPUT);
ButtonField submitButton = new ButtonField("Submit") {
    protected boolean navigationClick(int status, int time) {
        onSubmit();
        return true;
    }
};
this.add(label1);
this.add(firstNameEditField);
this.add(lastNameEditField);
this.add(submitButton);

There are several field types to match your requirements, like RadioButtonField, ButtonField, DateField, etc. You can arrange these fields horizontally (using HorizontalFieldManager) or vertically (using VerticalFieldManager).
How to rewrite an Entity Framework query as a SQL Server query

I have a table called passenger policy that looks like this:

public Guid HotelId { get; set; }
public int FromAge { get; set; }
public int ToAge { get; set; }
public PassengerType PassengerType { get; set; }

and it has 3 rows for each HotelId key. I have another table called search that looks like this:

public class Search : BaseEntity
{
    public DateTime Date { get; set; }
    public Guid CountryId { get; set; }
    public Guid ProvinceId { get; set; }
    public Guid CityId { get; set; }
    public Guid HotelId { get; set; }
    public Guid VendorHotelRoomId { get; set; }
    public int StandardCapacity { get; set; }
    public int ExtraCapacity { get; set; }
    public int MaxInfantAge { get; set; }
    public int MaxChild1Age { get; set; }
    public int MaxChild2Age { get; set; }
    public double BasePrice { get; set; }
    public double ExtraAdultPrice { get; set; }
    public double ExtraInfantPrice { get; set; }
    public double ExtraChild1Price { get; set; }
    public double ExtraChild2Price { get; set; }
}

I want to write a query in T-SQL (SQL Server) to get hotels based on the date field, standard capacity and extra capacity.
The extra capacity has 3 possible values: infant, child 1 and child 2 (fetched from the passenger type table). I wrote it like this in EF Core:

var searchOnAllVendors = hotelContext.Search
    .Where(c => c.Date >= fromDate && c.Date <= toDate && c.CityId == cityId && c.ExtraCapacity >= adultCount)
    .AsEnumerable();

foreach (var item in searchOnAllVendors)
{
    foreach (var ag in request.Passengers.ChildrensAges)
    {
        if (ag <= item.MaxInfantAge && ag < item.MaxChild1Age && ag < item.MaxChild2Age) infant++;
        if (ag > item.MaxInfantAge && ag <= item.MaxChild1Age) child1Count++;
        if (ag > item.MaxChild1Age && ag <= item.MaxChild2Age) child2Count++;
        if (ag > item.MaxChild1Age && ag <= item.MaxChild2Age) extraAdult++;
    }
    if (item.MaxInfantAge >= infant && item.MaxChild1Age >= child1Count && item.MaxChild2Age >= child2Count)
    {
        var adulPrice = extraAdult * item.ExtraAdultPrice;
        var infantPrice = infant * item.ExtraInfantPrice;
        var child1Price = child1Count * item.ExtraChild1Price;
        var child2Price = child1Count * item.ExtraChild2Price;
        var finalPrice = adulPrice + infantPrice + child1Price + child2Price + item.BasePrice;
        searches.Add(new Search_Response
        {
            CityId = item.CityId,
            CountryId = item.CountryId,
            HotelId = item.HotelId,
            ProvinceId = item.ProvinceId,
            VendorHotelRoomId = item.VendorHotelRoomId,
            Price = finalPrice
        });
    }
}

Hi - so what have you tried and what specific issue are you facing? Convert it to T-SQL. So update your question with your attempt to convert it to T-SQL and explain what specific issue you are having with it, i.e. don't just say that "it doesn't work". If it errors, what is the error message? If it gives the wrong result, what result does it give and what result were you expecting? I want to use the two loops (foreach) in my code, as I explained above. Hi - unfortunately this is not a site where you are likely to find someone who will do this for you; it's not a free coding site.
You need to show you have made some effort to solve the problem yourself and then ask a question about a specific issue you are facing. Also, your example has a couple of bugs: child2Price is using child1Count, and your extraAdult and child2Count increment conditions are the same. I'm guessing the extraAdult condition should be: if (ag > item.MaxChild2Age) extraAdult++; As for doing all of that in T-SQL, no idea. Another option, if you are running into performance issues, would be to fetch just the data you need about each item using Select rather than loading all fields from the vendors. The age group counts could also be calculated without iterating. Other than that, the question would be "why" do you want to do this in T-SQL? @StevePy thank you for your help. Yes, I fixed that. I want to do it for performance, and it works perfectly in T-SQL: EF Core takes 12 seconds to render the data, but T-SQL reduces it to 900 ms. @NickW sometimes we do not need exact code, we just need a push or some help. Sometimes we can help without pay :)

After a couple of days of trying a few things, I found a way to get the best performance in T-SQL.

1. Get the count of extra passenger types in a scalar-valued function like this:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[GetChildRule](@ages as nvarchar(max), @min as int, @max as int)
RETURNS int
AS
BEGIN
    declare @count int
    select @count = count(*) from STRING_SPLIT(@ages, ',') where value > @min and value <= @max
    RETURN @count
END

2. Use it in a stored procedure like a field, as below (scalar functions must be schema-qualified when called):

select *, dbo.GetChildRule('1,2,3', mymin, mymax)
from Search
where date between date1 and date2

3. Call it in EF Core:

Context.Set<YourModelWithAllOfFiledYouReturnInSP>()
    .FromSqlRaw($"EXEC [dbo].[StaySearch] @extraAges = N'{ages}', @checkInDate = N'{fromDate}', @checkOutDate = N'{toDate}', @destinationId = '{destinationId}', @countrySearch = '{countrysearch}', @adultCount={adultCount}");
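The scalar function above just counts how many comma-separated ages fall in the half-open range (min, max]. For clarity, here is the same logic in Python (names are mine):

```python
def get_child_rule(ages: str, lo: int, hi: int) -> int:
    """Count values in the comma-separated `ages` with lo < value <= hi,
    mirroring the STRING_SPLIT + WHERE clause in the T-SQL function."""
    return sum(1 for v in ages.split(",") if lo < int(v) <= hi)

print(get_child_rule("1,2,3", 1, 3))   # -> 2 (ages 2 and 3 qualify)
```

Computing each age-band count in one set-based pass like this, instead of the nested foreach in the original C#, is where most of the reported speedup plausibly comes from.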
What exactly does netfilter/iptables do when you enable forwarding and masquerading? I'm setting up firewall rules, and even though I thought I understood netfilter and iptables, I am confused about what exactly happens when you set net.ipv4.ip_forward to 1 and add a MASQUERADE rule. Does enabling forwarding mean that every incoming IP packet is basically retransmitted according to the routing table? Or is it simply sent out on all network interfaces? The MASQUERADE rule is in POSTROUTING and matches everything sent to the internet interface. This seems to suggest that enabling forwarding sends IP packets to every interface and the MASQ rule does something special (NATing) for the internet interface. My next question would then be whether all traffic incoming to the NAT router is simply retransmitted on the LAN interface (albeit with non-working addresses). Does that mean I need to make special DROP rules in FORWARD to prevent unnecessary traffic from being generated? And if so, does that in turn mean that for every PREROUTING rule I need another in FORWARD?

Edit: as a side note, my confusion stems from a task I needed to do: I forwarded port 80 to an internal server. This works fine from outside our network, but I can't reach it when I try to connect to the WAN IP from within the network. My rule is:

iptables -t nat -A PREROUTING -d WANIP -p tcp -m tcp --dport 80 -m comment --comment "Forward www to <machine>." -j DNAT --to-destination <IP_ADDRESS>

I solved the problem. In short, it's because the reply from the machine you're connecting to goes back to the LAN IP directly, and not back through the router. This article explains it well. Aside from the rule above, I added this rule to fix it:

iptables -t nat -A POSTROUTING -p tcp --source <IP_ADDRESS>/16 --dest <IP_ADDRESS> --dport 80 -j SNAT --to-source <IP_ADDRESS>

<IP_ADDRESS> is the router; <IP_ADDRESS> is the machine to which traffic is forwarded.
Enabling forwarding allows the machine to route packets to other hosts, and yes, it consults the routing table and only sends each packet to the correct interface. With forwarding disabled, all packets not destined for this machine are discarded. However, there is only a single instance of each table per system, not per-interface tables. So packets get processed by netfilter regardless of where they get routed, and you have to filter on the destination interface by hand in the rules. Masquerading replaces the source IP address with that of the outgoing interface, so that the request appears to come from the router and not from the private NATted address. It also keeps track of the connection state, so that when replies arrive they can be directed to the original source host. DNAT happens in PREROUTING and SNAT/MASQUERADE in POSTROUTING, so all the steps in between (filtering by FORWARD, notably) always see the effective, "end-to-end" addresses. You have to write your forward rules as if NAT were non-existent and your private address space were routable. Your problem most likely stems from the fact that your FORWARD rules do not allow forwarding from the LAN to the LAN (which would be reasonable in the absence of that kind of internal NATting). Since DNAT happens before FORWARD, by the time FORWARD looks at the packet, the destination address has already been changed to the LAN one. My confusion about forward rules was not the cause of the problem, because the policy for FORWARD is ACCEPT. I was just wondering how dangerous it is to not have it set to DROP. I mean, nothing gets forwarded for which there is no DNAT rule, or when it's not ESTABLISHED or RELATED, anyway... As for the actual problem, I solved that. I will update my post.
Java class Vectors is running too slow

Here are my three classes:

Array_List.java:

import java.util.*;

public class Array_List {
    public long startTime, difference = 0;
    public ArrayList<Integer> a;

    public Array_List() {
        a = new ArrayList<Integer>();
    }

    public void buildArray() {
        startTime = System.currentTimeMillis();
        while (difference <= 10000) {
            int r = (int)(Math.random()*10);
            a.add(r);
            difference = System.currentTimeMillis() - startTime;
        }
    }
}

This class creates an array list and times its creation. In the next class I use the size of Array_List to initialize the size of a vector; I want them both to be the same size so I can compare the speeds. Since I made the size of Array_List dependent on its speed, I can then compare Vectors using the same size.

Vectors.java:

import java.util.*;

public class Vectors {
    public long startTime, difference = 0;
    public Vector<Integer> a;

    public Vectors(int size) {
        a = new Vector<Integer>(size);
    }

    public void buildVectors() {
        int i = 0;
        startTime = System.currentTimeMillis();
        while (i <= a.size()) {
            int r = (int)(Math.random()*10);
            a.add(r);
            difference = System.currentTimeMillis() - startTime;
            i++;
        }
    }
}

This next class will run both methods.
performanceOfArrays.java:

import java.util.*;

public class performanceOfArrays {
    public static Array_List arrayList;
    public static Vectors vectors;

    public static void main(String[] args) {
        arrayList = new Array_List();
        arrayList.buildArray();
        System.out.println("Testing the speed of an array list:");
        System.out.print("The speed of composing an array list with " + arrayList.getSize());
        System.out.print(" integers is " + arrayList.getSpeed() + " seconds.");
        vectors = new Vectors(arrayList.getSize());
        vectors.buildVectors();
        System.out.print("Testing the speed of a vector:");
        System.out.println("The speed of composing a vector with " + vectors.getSize());
        System.out.print(" integers is " + vectors.getSpeed() + " seconds.");
    }
}

I keep running into memory and speed problems with Vectors.java. I know it has to do with my buildVectors() method; it seems as though the method will not stop. Any help would be greatly appreciated.

To be clear - what are you trying to accomplish with i inside of buildVectors? First, your benchmark is invalid because the JVM performs optimizations while running, so the code speeds up; you must test the two classes separately. Do that and come back with the new results (and include the results). Also, there's no point in storing "difference" inside the loop; move it outside.

Why are you using Vector? It's an old, mostly obsolete class. The javadoc even says: "If a thread-safe implementation is not needed, it is recommended to use ArrayList in place of Vector". As for thread-safe implementations, you should use the newer concurrent implementations added in Java 1.5.

In your code for the vector you use while (i <= a.size()), but then you call a.add(), which adds an element to the vector and increments its size, so your while loop never ends. If you know the number of elements you want to add beforehand, you'd be better off using a for loop. By the way, Vector has all of its methods marked as synchronized, so it will be slower than ArrayList anyway.
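The non-terminating condition in buildVectors is easy to see in miniature: each add() grows the collection, so `i <= a.size()` stays true forever and memory fills up. Here is the same bug sketched in Python, with a safety cap so it actually stops (the Java loop has no such cap):

```python
# Buggy pattern: the loop bound grows with every iteration.
a = []
i, steps = 0, 0
while i <= len(a):          # len(a) grows by 1 each pass, like a.size()
    a.append(0)
    i += 1
    steps += 1
    if steps >= 1_000_000:  # safety cap for demonstration only
        break
print(steps)                # -> 1000000: the condition never became false

# Fix: decide the element count up front, then loop a fixed number of times.
b = []
for _ in range(10_000):
    b.append(0)
print(len(b))               # -> 10000
```

This is why a for loop over a predetermined count, as the last commenter suggests, is the right shape for the benchmark.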
Linking DLL to DLL

I'm adding code to a Visual Studio 2010 solution with multiple DLLs. Some of the DLLs are dependent on others. I'm wondering how to specify that the lib file of one (existing) DLL should be input to another (new) DLL. First, how do I specify that a lib file should be created for the existing DLL project? Second, how do I indicate that the new DLL project depends on the lib file of the existing one? The code compiles fine; I'm getting unresolved externals.

It turns out I was omitting the "export symbols" setting on the source DLL project (which is specified in the project-creation wizard). This creates a header file with the declspec defines as follows:

#ifdef TESTFILTERS_EXPORTS
#define TESTFILTERS_API __declspec(dllexport)
#else
#define TESTFILTERS_API __declspec(dllimport)
#endif

This is for a DLL project entitled "TestFilters". For a class definition intended to be exported, the TESTFILTERS_API definition must be used in the class's header file as follows:

class TESTFILTERS_API CTestFilters {...};

The presence of declspec(dllexport) in at least one class definition causes the lib file (i.e. TestFilters.lib) to be automagically created.

In the project properties: you have to add the library references in the properties of each project -- including the projects that generate the DLLs. Suppose that project DLL_B uses DLL_A. Select DLL_B in the Solution Explorer, press Alt-Enter, go to Configuration Properties -> Linker -> Input, and add DLL_A.lib to Additional Dependencies. Also add ..\Release to General -> Additional Library Dependencies (similarly, add ..\Debug in debug mode). Make sure you modify it for both Debug and Release builds.

In the solution: you need to make the users dependent on the libraries they use. Select your solution in the Solution Explorer, press Alt-Enter, go to Common Properties -> Project Dependencies. For the DLL_B project, check DLL_A in the 'Depends On' pane.
This is based on VS2008, but I believe it should be similar in VS2010. How do I force DLL_A to emit a DLL_A.lib file for the benefit of DLL_B? It should do it by default. What VS is it?
REGEX for only 'GO' on a line

I would like to create a regex in C# that will match the word GO (ignoring case) only where GO is the first and only word on a line (whitespace after GO is acceptable). So in the following, the bolded text is what would be matched:

This is a test of GO
GO (followed by whitespace)
Not a go
Go (no whitespace after Go)
Not Good

What is wrong with a good old string.Equals? The sarcasm provides no value to the discussion. @BrianKE, I think many would judge that a regex is overkill here. If you have the whole line in a string, line.TrimEnd().ToLower().Equals("go") will tell you whether the line meets your criteria. The reason for this is that we have a library method that executes a single SQL script. Some of our command scripts have multiple steps separated by a 'go'. In order to process each step, we divide the command script on the 'go' and execute each piece individually. The problem was that someone had a comment with 'go' in the text, and what we had (\bgo\b) did not handle this scenario. Not sure why this is on hold; the question is a specific question that is not asking for discussion or for the best way to do something. Additionally, a straightforward answer was provided by Holger. @BrianKE - I vtc'ed based on the fact that it is too broad, but the close reason given even states "Instead, describe the problem and what has been done so far to solve it."
string trimmed = textToSearch.TrimEnd(); if (int.Equals(trimmed.Length, 2) && string.Equals(trimmed, "GO", StringComparison.OrdinalIgnoreCase)) { //found the GO } Regex is faster but this will do the trick nicely I highly doubt regex is faster than this Sorry @Sayse - meant to say I believe regex will be faster if comparing multiple strings, though I'm not entirely sure! Don't know what is faster. But you have to split a multiline string into lines first. @HolgerThiemann Ah well, I've up-voted yours but mine may serve as some inspiration at least.. I'd be more inclined to upvote this but I don't believe answering bad questions should be encouraged Ok. Checked it out of curiosity. Your solution even with a split operation to split the lines seems to be faster. :-) Regex r = new Regex("^[Gg][Oo]\\s*$"); Rudimentary I know, but it works.
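For what it's worth, the accepted pattern translates directly to other regex engines; here is the same idea sketched in Python, where re.MULTILINE and re.IGNORECASE play the role of the C# RegexOptions:

```python
import re

# ^go\s*$ : "go" as the only word on the line, optional trailing whitespace
pattern = re.compile(r"^go\s*$", re.IGNORECASE | re.MULTILINE)

text = "This is a test of GO\nGO   \nNot a go\nGo\nNot Good\n"
matches = pattern.findall(text)
print(matches)  # only the standalone "GO   " and "Go" lines match
```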
convert a string year into a date object I have a requirement where I am given a year as a string. I want to convert it into a date object and set the year, month and day on that date instance. String callYear = '2014'; Now my requirement is: if callYear is '2014' then the new date instance must be 15th September of the previous year. Any idea how to achieve that? A String can be converted into an Integer using Integer.valueOf. Here is an example of how to do that: String callYear = '2014'; Integer year = Integer.valueOf(callYear); Once it's an Integer you can convert it to a Date by using Date.newInstance: Date myDate = Date.newInstance((year - 1), 9, 15);
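The answer above is Apex; for comparison, the same two steps (parse the year, then build 15th September of the previous year) look like this in Python, with datetime.date playing the role of Date.newInstance:

```python
from datetime import date

call_year = "2014"
year = int(call_year)            # parse the string year
my_date = date(year - 1, 9, 15)  # 15th September of the previous year
print(my_date)  # → 2013-09-15
```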
htaccess rewrite condition misses css and images I'm trying to use a rewrite condition like this: www.sitename.com/view/post/1 and rewrite to: www.sitename.com/view_post.php?id=1 I'm using this code: RewriteRule view/post/([^/]+) view_post.php?id=$1 [L] But on this page the linked css and images fail to load if I link them like this: css/style.css Is it possible to avoid this error also when working in localhost/sitename for development with the same htaccess? Thanks a lot If you use relative URIs then the same code will work on multiple sites, e.g. www.sitename.com and localhost. Why not add <IP_ADDRESS> localhost www.sitename.home to your hosts file and then you can test locally using http://www.sitename.home/whatever_uri The issue on the CSS is that the browser will resolve a relative URI of css/style.css from a referring www.sitename.com/view/post/1 as www.sitename.com/view/post/css/style.css. Do you see why? You either need to swap all of your app references to site-relative, directory-absolute /css/style.css as Danilo suggests -- which might involve a lot of rework -- or use an additional rule to dump the extraneous directories: RewriteRule .*/css/(.*?)\.css css/$1.css [L] Do you see how this works? :-) try using <link rel="stylesheet" type="text/css" href="http://www.sitename.com/css/style.css"> If you want to test your app on localhost you must use <link rel="stylesheet" type="text/css" href="/css/style.css"> You don't really address the Q and I am not sure why you recommend fully qualified URIs for CSS.
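One way to combine the pretty-URL rule with the CSS fix mentioned in the comments is a single ruleset; this is a hedged sketch (the css/ directory name is taken from the question, and the second rule mirrors the one proposed above):

```apache
RewriteEngine On

# Map the pretty URL to the real script
RewriteRule ^view/post/([^/]+)$ view_post.php?id=$1 [L]

# Strip the virtual directories from relative CSS requests,
# e.g. /view/post/css/style.css -> /css/style.css
RewriteRule .*/css/(.*?)\.css$ css/$1.css [L]
```

Using site-relative links such as /css/style.css, as suggested in the answers, avoids needing the second rule at all.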
Finding date and battle of wounding for First World War English Private Arthur Higgins in France? My father, Private Arthur Higgins No 2163, was born 4 Jul 1893, and was in the 5th Battalion of the North Staffordshire Regiment which landed in France on 5 Mar 1915 and was repatriated to England on 19 Oct 1915. I wish to find out the date he was wounded and the Battle Front at which that occurred? Welcome to G&FH.SE! You don't say where you've tried to search already, so I wanted to alert you to the free access period this weekend on Ancestry.co.uk. Their promo says "Access to the records in the featured collections will be free from 5 June at 00:01 until 7 June, 2015 at 23:59 GMT." (Note that while there is a big button encouraging you to sign up for a 14-day free trial that is NOT necessary to view records during the free period.) While you are waiting for answers, take a look around at other questions here tagged world-war-1 and see if there is information that can help you. Great steer from @Jan Murphy as the best place to find out will be if his WW1 service record survived. Regimental diaries may well help locate theatre of action. In the meantime, I would suggest a likely location was Ploegsteert Wood, near Ypres, where 1st Bn North Staffs lost men who died on 13 and 14 Oct 1914. http://www.greatwar.co.uk/ estimates that only 40% of the service records survived and some of them are in very poor condition. http://www.1914-1918.net/nstaffs.htm/ This site indicates that the 1st Btn was in France from 12 Sep 1914. The 5th Btn was reservists. Whereas the 1/5th btn arrived in France on 4 Mar 1915. I think you might have got the year wrong. @user3310902 I think you should write a short answer to summarise your comments
Is sklearn using both a threshold and a bias term? Reading this Can a neuron have both a bias and a threshold? has confused me, as it appears to be more common to use a threshold of 0 when using a bias. But reading this https://stackoverflow.com/questions/19984957/scikit-learn-predict-default-threshold indicates that the threshold is 0.5. So my question is: is sklearn using both a threshold and a bias term? Questions about software libraries or APIs are off-topic here. You're asking about sklearn, so your question is off-topic. The quick answer is yes. The implementation in sklearn uses both a bias for each weight and a final threshold, by default set to 0.5. It sounds to me though that you don't have a clear idea about the purpose of the threshold in the first place, so let's elaborate more on that. why do we use a threshold A perceptron, like any other type of model, returns continuous values. For some tasks (e.g. regression) we can compute the error directly on these continuous values, but for other tasks (e.g. classification) we need to convert these values into discrete values, usually binary, which tell us if the model is predicting a yes or a no for a specific node. The threshold serves precisely this purpose. We apply it to the very last probabilities returned by a model to convert them into discrete outputs (and I stress probabilities, not logits, so you need to first apply either sigmoid or softmax). when do we modify the threshold The only moment in which it is reasonable to change the threshold is after training. The reason to do so is that we want to test whether there are values different from 0.5 that lead to better metric scores. Usually this is performed using the ROC curve, i.e. we compute the number of false positives and false negatives for different thresholds and we check which threshold leads to the highest AUC score (the red one in the plot, which maximizes the ROC area). But there is no point in using a threshold different from 0.5 during training.
And there is no point in associating multiple thresholds with each bias in a model. The only reason the threshold is used is to discretize continuous values and turn them into final predictions. Thank you for your answer, but I was wondering why you wrote about the threshold during training, as I thought the cost function directly used the output of the final activation function, without a threshold. In the sklearn code of the Perceptron you can see here that a threshold of 0 is used when calling the predict function. And here is where the bias is added (they call the bias _intercept). Using both doesn't really change the behavior of the perceptron, because you can rewrite the formulation like this: $$ \begin{align} \mathbf{Wx} + b &> t && | \text{ Perceptron with bias $b$ and threshold $t$}\\ \Leftrightarrow \mathbf{Wx}+(b-t) &> 0 && | \text{ threshold 'included' in bias} \\ \Leftrightarrow \mathbf{Wx} &> (t - b) && | \text{ bias 'included' in threshold} \end{align} $$ (From a learning perspective, there is a slight difference, because the threshold is not a learnable parameter, so it would change the optimum of the weights.)
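The algebraic equivalence at the end of the thread is easy to verify numerically; here is a minimal plain-Python check (no sklearn involved), looping over a few arbitrary values:

```python
# Check that (w·x + b > t) is the same decision as (w·x + (b - t) > 0).
def decide_with_threshold(wx, b, t):
    return wx + b > t

def decide_with_folded_bias(wx, b, t):
    return wx + (b - t) > 0  # threshold folded into the bias

for wx in (-2.0, -0.3, 0.0, 0.7, 5.0):
    for b in (-1.0, 0.0, 0.5):
        for t in (0.0, 0.25, 0.5):
            assert decide_with_threshold(wx, b, t) == decide_with_folded_bias(wx, b, t)
print("decisions agree")
```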
Can the cp command be used to restore a hard drive? My cousin contacted me for tech support because her computer was refusing to boot to the startup drive, and the issue turned out to be a corrupted drive that Disk Utility refused to fix, telling us to format the drive. I looked for a way to back up her files and found this answer, which says that you can use Recovery HD's Terminal's cp command. Her computer is currently running cp -pRv "/Volumes/Macintosh HD/" "/Volumes/EHD" and it seems to be working, copying every file on her computer to her external drive. Since it's not Time Machine, though, putting everything back in its place is going to be a hassle. I was wondering, once we format the drive, could we just use cp again? How much of the setup process would that get done for us? To clarify, could we, for example, not even bother reinstalling OS X or anything, just run cp -pRv "/Volumes/EHD/" "/Volumes/Macintosh HD"? Would that restore everything and make it functional? No, OS X needs to be properly installed and the cp command is not a proper way to do it. I'd just get the contents of her Home folder and not worry about anything else. Then fix the hard drive and reinstall OS X, then copy back only the User Data, basically starting fresh but with the important User Data files intact. @user3439894 There are at least 3 user accounts and possibly some third-party apps (Microsoft Office, if nothing else). Is there any convenient way to deal with all of those? The convenient way would have been 'have a backup, preferably on Time Machine'. Yes, you could just copy it back, but it may not be bootable. That can be fixed by installing OS X over the top of your files. It should preserve what is there and make it bootable. The problem you encounter was exactly the same as mine. You can use the dd command. It does a sector by sector clone from your hard disk to another hard disk. Unlike Disk Utility, it doesn't throw you any error. 
sudo dd if=/dev/disk0 of=/dev/disk2 bs=128m conv=noerror,sync This is the command, which I got from Ask Different, where if=/dev/disk0 refers to the source that dd is cloning from and of=/dev/disk2 is the destination. I don’t know much about setting the right block size, so I left it as bs=128m. I shared the details and other recovery methods here. This can cause problems if the two disks are not exactly the same size. If the source is larger, some data will not be copied; if the destination is larger, some of its space will be unallocated and hard to recover. asr is a much better way to clone volumes under macOS. @GordonDavisson thanks for pointing that out. While I was researching for ways, I came across asr too. Could you share how to use asr to back up? asr can be a little tricky depending on the situation, but in general you'd use it something like sudo asr restore --source "/Volumes/Macintosh HD/" --target "/Volumes/EHD" --erase. In the case of a corrupted drive (as in the original question), it may either fail or copy too accurately (i.e. it'd copy the corruption along with everything else), so now that I think about it rsync (see here) would be a better bet. BTW, that also applies to dd -- it will copy the corrupt volume structures rather than fixing them.
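Before pointing dd at raw devices, it can be worth rehearsing the invocation on ordinary files, where a typo is harmless; a small sketch (file names are arbitrary):

```shell
# Create a small "disk image" of zeros, clone it, and verify the copy is identical.
dd if=/dev/zero of=source.img bs=1024 count=16 2>/dev/null
dd if=source.img of=clone.img bs=1024 conv=noerror,sync 2>/dev/null
cmp source.img clone.img && echo "identical"
```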
Adding External Library in CMake (Linux) What is the simple process to add my external .so in CMake? I want to know the exact location where it needs to be added, and also the include directories path. Please suggest the simplest way. I am a newbie in CMake. Suppose my external .so file is libclutter-1.0.so which is present in the bin/res folder. Please tell me how I can add it in CMake? You will have to write a very simple CMakeLists.txt file and then use the cmake utility. The problem for a newbie remains how to write one such file. The very basic steps to write one can be seen in an example given in the article "How to write CMakeLists.txt" here. I hope that helps. You mentioned npapi; is this a firebreath plugin, or are you just using cmake?
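For reference, a minimal CMakeLists.txt along those lines might look like the sketch below; the target name, the source file, the include path, and the bin/res location are assumptions based on the question, not fixed conventions:

```cmake
cmake_minimum_required(VERSION 3.5)
project(myapp C)

add_executable(myapp main.c)

# Headers for the external library (adjust to wherever clutter's headers live)
target_include_directories(myapp PRIVATE ${CMAKE_SOURCE_DIR}/include)

# Link the prebuilt shared library shipped in bin/res
target_link_libraries(myapp ${CMAKE_SOURCE_DIR}/bin/res/libclutter-1.0.so)
```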
How do you store duplicate constant strings in a single pointer while still being able to know its length at compile time? I would like to find a way to store duplicate constant strings in a single location. However; I need to get the length of that string at the compiler level (such that it is not found at runtime by functions such as strlen()). I know of a way to do each of these separately, as shown. Storing duplicate strings in a single address by using pointers: const char *a = "Hello world."; const char *b = "Hello world."; printf("a %s b\n", a == b ? "==" : "!="); // Outputs "a == b" on GCC Getting the length of a string at compile time: const char c[] = "Hello world."; printf("Length of c: %d\n", sizeof(c) - 1); // Outputs 12 on GCC Though there seems to be no way to combine the two: const char *d = "Hello world."; printf("Length of d: %d\n", sizeof(d)); // Outputs the size of the pointer type; 8 on 64-bit computers const char e[] = "Hello world."; const char f[] = "Hello world."; printf("e %s f\n", e == f ? "==" : "!="); // Outputs "e != f" on GCC const char *g[] = {"Hello world."}; const char *h[] = {"Hello world."}; printf("g %s h\n", g == h ? "==" : "!="); // Outputs "g != h" printf("Length of g: %d\n", sizeof(g[0])); // Outputs pointer type size Is there a way to do this that I am unaware of? Why do you have these needs—what are you really trying to do? @Eric Postpischil I am writing a library that may have duplicate strings within files that are labeled as “static const char”. I much prefer using compile-time generated string lengths so that if there was a string 1,000 characters long, you wouldn’t have to wait for a 1,000 character search using strlen(), as you would just have a constant value for how long that string is. I would also not like to have duplicate strings because they are redundant, but I’d much prefer not having a file dedicated to storing every string for every class. 
Just write a separate program that organizes the strings and their sizes however you want and writes a .c file defining them and a .h file exporting declarations. The C compiler was not intended to do much compile-time data preparation, and there is little reason to try to kludge it into doing so when one can readily write a special-purpose program to do the desired work. Does this answer your question? gcc __attribute__((selectany)) alternative for linux? C does not guarantee that equal string literals share space, although GCC normally performs that optimisation. Other compilers might not. And if the literals are in different translation units, you also need linker support. Also, you might consider const char *const d = "Hello, world";, if you want to make it easier for the compiler to constant-fold strlen(d). @rici I know this isn’t a C language standard, which is why I tagged it GCC. And thank you for that advice. I was under the assumption that a constant character array with a variable pointer would be enough, but there’s no reason not to have a constant pointer as well. @Eric Postpischil I’ll end up doing this if there is no other way, but it just seems very inelegant to write every string in a single file that every class must include. Though I suppose we can’t always get the best of both worlds. Thank you for the suggestion. gcc may be able to optimize the strlen() call to get the length at compile time. Check the -foptimize-strlen option here https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html Is there any info on what this actually does to compiled code? I’ll have a look at it in the morning if not, but it says it uses the functions’ “faster alternatives”, which leads me to believe they just use different functions. Furthermore the option “ max-tracked-strlens” says it sets the maximum “strlen optimization pass[es] [that] will track string lengths”, which seems to be an alternative function to store lengths. 
If it is equivalent to a compile-time constant, this is the solution. Didn't find any official info, but there's this blogpost which suggests the call to strlen() is replaced with an assembly mov operation with the constant length. I guess it may also depend on the version of the compiler, so you should check the assembly it generates for you. This seemed to work well in GCC in the tests I did. The only issue is that it forces you to include string.h, even though no call to strlen() is made. This is not a problem for my use, but maybe something to note for those looking into it. The blogpost given was very descriptive and worth reading.
Rails 3.1 asset pipeline vendor/assets folder organization I'm using the jQuery Tools scrollable library in my Rails 3.1 site with the various assets placed in the vendor/assets folder and it works great. My question is regarding the best way to organize the various files under vendor/assets. What is the recommended way to organize vendor/assets subfolders? Currently I have this structure:

vendor/assets/
|-- images/
|   |-- scrollable/
|       <various button/gradient images>
|-- javascripts/
|   |-- scrollable/
|       jquery.tools.min.js
|-- stylesheets/
    |-- scrollable/
        scrollable-buttons.css
        scrollable-horizontal.css

This is a fairly un-DRY way to do this. I feel that all of the 'scrollable' items should be under one folder. What is the recommended way to do this without having to manipulate the asset pipeline load paths? Thanks! It has a benefit, though. All your images, stylesheets and javascripts are grouped in their own folders and not scattered across a dozen plugins. One plugin in multiple directories is hard to manage. Removing or updating it would be a pain. You could organise them this way, which is slightly better in that it keeps stuff related to the plugin in one directory:

vendor/assets/scrollable
|-- images/
|   |-- <various button/gradient images>
|-- javascripts/
|   |-- jquery.tools.min.js
|-- stylesheets/
    |-- scrollable-buttons.css
    |-- scrollable-horizontal.css

I am pretty sure this will work as rails globs all directories under assets/. For the life of me I can't get this to work. I've structured my vendor/assets like this, but when I do something like: //= require scrollable/jquery.tools.min I get a Sprockets::FileNotFound error. How else would I require that? Try adding javascripts into the path Check this answer: http://stackoverflow.com/questions/8798646/what-are-the-best-practices-when-organizing-assets-in-rails-asset-pipeline
RMI VS AppDynamics JMX is used for monitoring and managing services/components & devices. My question is about monitoring: for monitoring purposes, do we have to change any code if we use JMX? If that is the case, will AppDynamics solve this without a single line of code change? Typically, you don't need to change anything in your code to capture JMX metrics, except that your Java Beans have to fulfill the Management Beans (MBeans) requirements, you have to enable JMX monitoring in each Java process that should be monitored, and the monitored system must run on Java 1.5 or later. See also here and here. Then you can navigate to Tiers & Nodes -> Select a tier -> JMX tab
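For completeness, enabling JMX in a Java process is usually done with JVM flags rather than code changes; a typical insecure, local-testing-only invocation (the port number and jar name here are placeholders) looks something like:

```shell
java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -jar myservice.jar
```

For anything beyond local testing, authentication and SSL should of course be turned back on.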
Transform Update into Delete Insert Oracle GoldenGate I have an Oracle GoldenGate extract version <IP_ADDRESS>.231017 and an Oracle GoldenGate for Big Data replicat version <IP_ADDRESS>.3. I want to transform every UPDATE into 2 operations, DELETE and INSERT. Is there any possibility to do that with extract and replicat parameters? Could you please add more details on why you need such a request? Maybe further logic can be added on the target side? Consider giving a look at INSERTUPDATES. In this case you can add an extra column containing the source type of operation to detect updates and add some extra logic to handle this operation on the target
How to set two decimal places in output I'm trying to get Python to round up the output of a simple Tip Calculator program to two decimal places, but I've had no joy figuring it out so far. Below is the relevant section of code. I want the output to be printed in conventional dollars and cents format (e.g., $XX.XX) bill = float(input("\n\nWhat is the bill for your meal?: $")) low_tip = bill * .15 print("\nIf you would like to tip the waiter 15%, the amount of the \ntip is: $", low_tip) low_total = bill + low_tip print("\nSo, your total bill including a 15% tip would be: $", low_total) high_tip = bill * .20 print("\nIf you would like to tip the waiter 20%, the amount of the \ntip is: $", high_tip) high_total = bill + high_tip print("\nSo, your total bill including a 20% tip would be: $", high_total) See Answer to python-format-decimal-with-a-minimum-number-of-decimal-places here on StackOverflow. You can format the float as a string before printing: >>> '${0:.2f}'.format(15.5) '$15.50' Or using the % operator: >>> '$%.2f' % 15.5 '$15.50' The full print calls would then look like: print("\nSo, your total bill including a 20% tip would be: ${0:.2f}".format(high_total)) Thanks! I just made all of the relevant replacements and this works like a charm. I also see how changing the ".2" to ".3" will give me three decimal places, etc. @SomeCallMeTim The full specification of the formatting fields for the format method can be found here, in particular check the examples, while here there is the documentation for %. Thanks again. Those links have provided some very useful information. I don't understand why this question was closed as "not a real question". It was asked in earnest, and answered promptly and succinctly by another user who understood my meaning completely and provided a very helpful solution.
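Putting the accepted formatting back into the program, the relevant section might read as follows (a fixed bill stands in for the input() call so the example is self-contained):

```python
bill = 50.0  # stand-in for: float(input("\n\nWhat is the bill for your meal?: $"))

low_tip = bill * .15
print("\nIf you would like to tip the waiter 15%, the tip is: ${0:.2f}".format(low_tip))
print("\nSo, your total bill including a 15% tip would be: ${0:.2f}".format(bill + low_tip))

high_tip = bill * .20
print("\nIf you would like to tip the waiter 20%, the tip is: ${0:.2f}".format(high_tip))
print("\nSo, your total bill including a 20% tip would be: ${0:.2f}".format(bill + high_tip))
```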
Pointer initialization I've seen many questions about pointer initialization but I couldn't find answer for something that bothers me recently a lot. Why does it work on gcc : class C { }; /* other stuff here */ typedef C* pTypeC; C* pOtherTypeC = pTypeC(0); Is it valid to use "(0)" on typedef to initialize pointer with NULL ? I am not sure (hence this is a comment and not an answer), but I think it is interpreted as: C* pOtherTypeC = (pTypeC)(0); which is equal to C* pOtherTypeC = (C*)(0); Leaving you with a cast. in C++11 you can/should use nullptr instead of NULL anyway @awoodland: Why should? Not necessarily. If you don't care about those extra type safety checks you may as well use just 0... Unfortunately, my app can use only C++03 standard. @VladLazarenko Why wouldn't you care about type safety? >:| The problem is that pTypeC(0) is considered as a casting operation. C* pOtherTypeC = (pTypeC) (0); Hope this helps! What you're doing isn't initialization of an object, you haven't allocated any new memory or created any new objects -- you're assigning a pointer to NULL (which is equal to the integer value 0). Working backwards: C* pOtherTypeC = pTypeC(0); C* pOtherTypeC = pTypeC(NULL); C* pOtherTypeC = C*(NULL); C* pOtherTypeC = (C*) NULL; C* pOtherTypeC = NULL; All five lines are functionally equivalent and will produce the same result. fixed to be clear the initialization refers to the class (C *) object... though my reply was beat by 1m or so. It's initialising a C* to a C* which was initialised to 0. It's perfectly legal, but you should prefer C* blah = nullptr, though both do about the same thing. The value of NULL can be defined a couple different ways. 
Sometimes it can simply be assigning the value 0 to a pointer at which point there is an implicit conversion from an integral to pointer type, other times it's defined with a cast such as (void*)0, allowing it to be converted to any other pointer type, but failing should you explicitly assign it to a non-pointer type, and in C++11 there is a specific global nullptr object you can use to initialize a pointer with a "NULL" value. In general though, how you're initializing your pointer is confusing, and is not a recommended coding practice. Just explicitly assign either nullptr or NULL to the pointer, and if you choose NULL, then make sure to include <cstddef> if you haven't included any other header files so you don't get compiler errors.
Resolver function for union type in Ariadne I am trying to write a query resolver function for union type in Ariadne. How can I accomplish this? As I have read in the documentation there is a field called __typename which helps us to resolve the union type. But I am not getting any __typename to my resolver function. Schema type User { username: String! firstname: String email: String } type UserDuplicate { username: String! firstname: String email: String } union UnionTest = User | UserDuplicate type UnionForCustomTypes { user: UnionTest name: String! } type Query { user: String! unionForCustomTypes: [UnionForCustomTypes]! } Ariadne resolver functions query = QueryType() mutation = MutationType() unionTest = UnionType("UnionTest") @unionTest.type_resolver def resolve_union_type(obj, *_): if obj[0]["__typename"] == "User": return "User" if obj[0]["__typename"] == "DuplicateUser": return "DuplicateUser" return None # Query resolvers @query.field("unionForCustomTypes") def resolve_union_for_custom_types(_, info): result = [ {"name": "Manisha Bayya", "user": [{"__typename": "User", "username": "abcd"}]} ] return result Query I am trying { unionForCustomTypes { name user { __typename ...on User { username firstname } } } } When I try the query I am getting below error { "data": null, "errors": [ { "message": "Cannot return null for non-nullable field Query.unionForCustomTypes.", "locations": [ [ 2, 3 ] ], "path": [ "unionForCustomTypes" ], "extensions": { "exception": { "stacktrace": [ "Traceback (most recent call last):", " File \"/root/manisha/prisma/ariadne_envs/lib/python3.6/site-packages/graphql/execution/execute.py\", line 675, in complete_value_catching_error", " return_type, field_nodes, info, path, result", " File \"/root/manisha/prisma/ariadne_envs/lib/python3.6/site-packages/graphql/execution/execute.py\", line 754, in complete_value", " \"Cannot return null for non-nullable field\"", "TypeError: Cannot return null for non-nullable field 
Query.unionForCustomTypes." ], "context": { "completed": "None", "result": "None", "path": "ResponsePath(...rCustomTypes')", "info": "GraphQLResolv...f04e9c1fc50>})", "field_nodes": "[FieldNode at 4:135]", "return_type": "<GraphQLNonNu...ustomTypes'>>>", "self": "<graphql.exec...x7f04e75677f0>" } } } } ] } We don't need any resolver for the union type. We can just send the __typename field when returning the user field. In my code, I am returning a list for the user attribute, which is wrong. I just have to send a dictionary. Below is the change I made in my code to make it work. # Deleted resolver for UnionType @query.field("unionForCustomTypes") def resolve_union_for_custom_types(_, info): result = [{"name": "Manisha Bayya", "user": {"__typename": "User", "username": "abcd", "firstname": "pqrs"}}] # <-- Line changed return result
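The fix boils down to the shape of the resolver's return value: each union-typed field must be a single mapping carrying __typename, not a one-element list. That shape can be checked without Ariadne at all; a plain-Python sketch, with field names mirroring the schema above:

```python
# The resolver's return shape that works: each union-typed field is a single
# mapping tagged with __typename (not a one-element list).
def resolve_union_for_custom_types(_, info=None):
    return [
        {
            "name": "Manisha Bayya",
            "user": {"__typename": "User", "username": "abcd", "firstname": "pqrs"},
        }
    ]

result = resolve_union_for_custom_types(None)
print(result[0]["user"]["__typename"])  # → User
```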
Is Starvation Possible in LOOK Algorithm The LOOK algorithm (wiki) is the same as the SCAN algorithm in that it also honors requests on both sweep directions of the disk head; however, this algorithm "looks" ahead to see if there are any requests pending in the direction of head movement. If no requests are pending in the direction of head movement, then the disk head traversal will be reversed to the opposite direction and requests in the other direction can be served. In wiki there is a line which says LOOK behaves almost identically to Shortest seek time first (SSTF), but avoids the starvation problem of SSTF. Is it (LOOK) always successful in avoiding starvation, or might there be some order of disk requests that leads to starvation? The wiki article also says that This is because LOOK is biased against the area recently traversed, and heavily favors tracks clustered at the outermost and innermost edges of the platter. However, I found it confusing. The reason why LOOK avoids starvation is quite simple: any request will be served within a finite time interval $t$ after it is submitted. Suppose the number of cylinders in a disk is $N$; then we can set $t = 2N$ (measuring time in cylinder traversals), since one full sweep in each direction is enough for the head to pass over every pending request.
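That bound can be illustrated with a tiny simulation of the LOOK policy: serve a request set and observe that the total head travel never exceeds one full back-and-forth sweep (2N cylinder moves for N cylinders). This is a sketch of the idea, not a disk model; the request queue is a commonly used textbook example:

```python
def look_service_order(head, direction, requests):
    """Serve all pending requests with the LOOK policy; return the service order."""
    pending = sorted(requests)
    order = []
    while pending:
        if direction > 0:
            ahead = [r for r in pending if r >= head]       # nearest request upward
        else:
            ahead = [r for r in sorted(pending, reverse=True) if r <= head]
        if not ahead:
            direction = -direction  # nothing ahead: reverse the sweep
            continue
        nxt = ahead[0]
        order.append(nxt)
        head = nxt
        pending.remove(nxt)
    return order

N = 200
reqs = [98, 183, 37, 122, 14, 124, 65, 67]
order = look_service_order(head=53, direction=+1, requests=reqs)
print(order)  # upward sweep first, then the remainder on the way back down
```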
Recovering a USB stick after formatting it to "No Partitioning (Empty)" The problem I recently formatted my USB stick using Gnome Disks (on Ubuntu). When doing so, I chose the option "No Partitioning (Empty)". Gnome Disks options I chose: After doing so, I am unable to format or use this USB stick in any way. Does someone know of a way to reinstall a partitioning system on a USB stick? (To be clear, I don't care about the data that was on the USB, I just want it to be usable.) What I tried When I plug it in and use Gnome Disks, all the formatting options are grayed out; the only options left are "Turn Off" in the formatting options and "change mount options". USB details on Gnome Disks: It's even worse in the Windows Disk Manager: it is listed but I can't right-click it or format it. When I access it via the Windows file explorer it tells me to insert a disk... I tried to use Diskpart but when I use the "Clean" command I get the error "There is no media in the device". I also tried gparted but it doesn't detect the drive. So yeah... what in the world happened? And is there a way to fix this? You can analyze the problem according to this link, and if you are lucky, find a solution. It can sometimes help to completely wipe the drive with the following command: dd if=/dev/zero of=/dev/sdb bs=2048 (substitute the correct device name instead of /dev/sdb). After that, you should be able to create a partition table and a partition. You can also work with more granularity and remove signatures that exist on the disk. For example, signatures present can be listed with:

$ sudo wipefs /dev/sdb
offset               type
----------------------------------------------------------------
0x200                gpt        [partition table]
0x8001               iso9660    [filesystem]
                     LABEL: Linux Mint 17.2 Cinnamon 64-bit
                     UUID:  2015-06-27-14-16-51-00

To remove the second signature of an ISO image, run sudo wipefs -o 0x8001 /dev/sdb When I use either of these commands, I get the same result: the error "medium not found". I feel like I get these errors because it is trying to mount the media, but there is nothing to mount... Is there a way to access the volume in another way? I cannot see from here whether you are doing all this correctly, of course. Nothing is possible when you click the cog in Gnome Disks? Else try terminal commands to create partitions and post error messages in your question: this can help with seeing what might cause the problem. This all could also mean the stick is EOL.
VNC Server for a Headless Debian box? Which VNC Server can I use to run graphical applications (i.e. X Clients) on a headless Debian Wheezy box? Usually I don't need graphical applications on servers but from time to time there are applications (e.g. firmware update utilities) which sadly required a graphical interface. You don't need VNC or even to run X via a service, though X and whatever gnome-visual stuff will get installed along with your visual program. I use aptitude to install to keep an eye on the dependencies. I use graphical apps via ssh from home all the time without doing anything special, just use ssh -X as you would ssh before. Once logged in, you background the task, example:$ midori & and then the window eventually appears just as if it were your local machine. I do it mostly for getting a browser inside the network to manage something. The last time, I previously used firefox, but my new Debian install is without anything. One tool I did really like and enjoy was opening xfce-panel on the server from my linux desktop, in which I had a few different useful plugins, including hardware monitoring. Sometimes gedit would come in handy when I knew there'd be many edits and it seemed faster to start a new shell with -X and then background the GUI rather than just use vi. Edit to add: You reminded me I needed an internal browser, so from home I just installed a bunch of browsers on my work's host & midori is the best I have tried yet. Netsurf couldn't open webmin, arora was way too slow, & epiphany was slower than normal to show GUI input. BTW, just installed epiphany-browser. Wow, forgot how much gnome poop there is. I looked at the dependencies & they didn't seem like the several hundred that did install. :P + it's slow, so I'm looking at the others here: https://wiki.debian.org/WebBrowsers Ah, of course you're right. I was too unspecific with my question. I was looking for something that keeps the applications running once I log out. 
I'll mark your answer anyway because it correctly answers the question as I asked it. Ah, for those fun instances, you can invoke and then background the nohup command (http://en.wikipedia.org/wiki/Nohup), but I'm not sure how that and a GUI work together based on my prior usage of remote X sessions and nohup. I use nohup to start informal services or long wgets and then log out. The nature of X is such that you can run your display server on another machine. The easiest way to start GUI apps is to log in from another unix machine with ssh -X and then simply start the application. It will then appear on your local computer.
MacOS: brew install graph-tool on High Sierra I'm using High Sierra and am unable to install graph-tool via brew install. Given below is the output after brew installing. > brew install graph-tool graph-tool: macOS Mojave or newer is required. Error: An unsatisfied requirement failed this build. As I'm using a somewhat locked down machine, I'm unable to update to Mojave. I've previously been able to brew install graph-tool on High Sierra without issues. The homebrew formulae link for graph-tool seems to state that it's only available for Mojave now. As does the formula itself. https://github.com/Homebrew/homebrew-core/blob/master/Formula/graph-tool.rb depends_on :macos => :mojave # for C++17 Is there a way I can install an older version of graph-tool on my machine? Update This might not apply to most people. But since it was such effort installing graph_tool I thought I'd copy my solution here. When trying bfontaine's solution, I kept getting the following error: ==> /usr/local/Cellar/graph-tool/2.27_7/libexec/bin/pip install -v --no-deps --no-binary :all: --ignore-installed /private/tmp/graph-tool--matplotlib-20190621-40984-1767uhv/matplotlib-2.2.2 Last 15 lines from /Users/greatora/Library/Logs/Homebrew/graph-tool/05.pip: Removed build tracker '/private/tmp/pip-req-tracker-b74drkg2' ERROR: Command "/usr/local/Cellar/graph-tool/2.27_7/libexec/bin/python3.7 -u -c 'import setuptools, tokenize;__file__='"'"'/private/tmp/pip-req-build-6ib3tzd9/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-8ubg3x4_/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/Cellar/graph-tool/2.27_7/libexec/bin/../include/site/python3.7/matplotlib" failed with error code 1 in /private/tmp/pip-req-build-6ib3tzd9/ Exception information: Traceback (most recent call last): File 
"/usr/local/Cellar/graph-tool/2.27_7/libexec/lib/python3.7/site-packages/pip/_internal/cli/base_command.py", line 178, in main status = self.run(options, args) File "/usr/local/Cellar/graph-tool/2.27_7/libexec/lib/python3.7/site-packages/pip/_internal/commands/install.py", line 414, in run use_user_site=options.use_user_site, File "/usr/local/Cellar/graph-tool/2.27_7/libexec/lib/python3.7/site-packages/pip/_internal/req/__init__.py", line 58, in install_given_reqs **kwargs File "/usr/local/Cellar/graph-tool/2.27_7/libexec/lib/python3.7/site-packages/pip/_internal/req/req_install.py", line 951, in install spinner=spinner, File "/usr/local/Cellar/graph-tool/2.27_7/libexec/lib/python3.7/site-packages/pip/_internal/utils/misc.py", line 776, in call_subprocess % (command_desc, proc.returncode, cwd)) pip._internal.exceptions.InstallationError: Command "/usr/local/Cellar/graph-tool/2.27_7/libexec/bin/python3.7 -u -c 'import setuptools, tokenize;__file__='"'"'/private/tmp/pip-req-build-6ib3tzd9/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-8ubg3x4_/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/Cellar/graph-tool/2.27_7/libexec/bin/../include/site/python3.7/matplotlib" failed with error code 1 in /private/tmp/pip-req-build-6ib3tzd9/ I realised that I won't be using graph-tool's (excellent) functionality for visualisation, so I downloaded the graph_tool.rb file that bfontaine linked below, cut the matplotlib dependency from it, and ran brew install --build-from-source ~/Downloads/graph-tool.rb. I then downgraded numpy to 1.16.1, and graph_tool works as expected! Again, only do this if you don't plan on using graph_tool's visualisation capabilities. See below for my modifed graph_tool.rb file. 
class GraphTool < Formula
  include Language::Python::Virtualenv

  desc "Efficient network analysis for Python 3"
  homepage "https://graph-tool.skewed.de/"
  url "https://downloads.skewed.de/graph-tool/graph-tool-2.27.tar.bz2"
  sha256 "4740c69720dfbebf8fb3e77057b3e6a257ccf0432cdaf7345f873247390e4313"
  revision 7

  bottle do
    sha256 "4bf2967b707d3fa33dbb1d0f54d2cf18b33820754232883f9f53192dd1155ccc" => :mojave
    sha256 "7454e5ac93d90e1e0048df7e34e6069e36674597d495fb76e2a22494f5fb76c1" => :sierra
  end

  depends_on "pkg-config" => :build
  depends_on "boost"
  depends_on "boost-python3"
  depends_on "cairomm"
  depends_on "cgal"
  depends_on "google-sparsehash"
  depends_on "gtk+3"
  depends_on "librsvg"
  depends_on :macos => :el_capitan # needs thread-local storage
  depends_on "numpy"
  depends_on "py3cairo"
  depends_on "pygobject3"
  depends_on "python"
  depends_on "scipy"

  resource "Cycler" do
    url "https://files.pythonhosted.org/packages/c2/4b/137dea450d6e1e3d474e1d873cd1d4f7d3beed7e0dc973b06e8e10d32488/cycler-0.10.0.tar.gz"
    sha256 "cd7b2d1018258d7247a71425e9f26463dfb444d411c39569972f4ce586b0c9d8"
  end

  resource "kiwisolver" do
    url "https://files.pythonhosted.org/packages/31/60/494fcce70d60a598c32ee00e71542e52e27c978e5f8219fae0d4ac6e2864/kiwisolver-1.0.1.tar.gz"
    sha256 "ce3be5d520b4d2c3e5eeb4cd2ef62b9b9ab8ac6b6fedbaa0e39cdb6f50644278"
  end

  resource "pyparsing" do
    url "https://files.pythonhosted.org/packages/3c/ec/a94f8cf7274ea60b5413df054f82a8980523efd712ec55a59e7c3357cf7c/pyparsing-2.2.0.tar.gz"
    sha256 "0832bcf47acd283788593e7a0f542407bd9550a55a8a8435214a1960e04bcb04"
  end

  resource "python-dateutil" do
    url "https://files.pythonhosted.org/packages/a0/b0/a4e3241d2dee665fea11baec21389aec6886655cd4db7647ddf96c3fad15/python-dateutil-2.7.3.tar.gz"
    sha256 "e27001de32f627c22380a688bcc43ce83504a7bc5da472209b4c70f02829f0b8"
  end

  resource "pytz" do
    url "https://files.pythonhosted.org/packages/10/76/52efda4ef98e7544321fd8d5d512e11739c1df18b0649551aeccfb1c8376/pytz-2018.4.tar.gz"
    sha256 "c06425302f2cf668f1bba7a0a03f3c1d34d4ebeef2c72003da308b3947c7f749"
  end

  resource "six" do
    url "https://files.pythonhosted.org/packages/16/d8/bc6316cf98419719bd59c91742194c111b6f2e85abac88e496adefaf7afe/six-1.11.0.tar.gz"
    sha256 "70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9"
  end

  # Remove for > 2.27
  # Upstream commit from 3 Jul 2018 "Fix incompatibility with Python 3.7"
  patch do
    url "https://git.skewed.de/count0/graph-tool/commit/0407f41a.diff"
    sha256 "94559544ad95753a13ee701c02af706c8b296c54af2c1706520ec96e24aa6d39"
  end

  # Remove for > 2.27
  # Upstream commit from 3 Oct 2018 "Fix compilation with CGAL 4.13"
  patch do
    url "https://git.skewed.de/count0/graph-tool/commit/aa39e4a6.diff"
    sha256 "5a4ea386342c2de9422da5b07dd4272d47d2cdbba99d9b258bff65a69da562c1"
  end

  def install
    # Work around "error: no member named 'signbit' in the global namespace"
    ENV["SDKROOT"] = MacOS.sdk_path if MacOS.version == :high_sierra

    xy = Language::Python.major_minor_version "python3"
    venv = virtualenv_create(libexec, "python3")
    resources.each do |r|
      venv.pip_install_and_link r
    end

    args = %W[
      --disable-debug
      --disable-dependency-tracking
      --prefix=#{prefix}
      PYTHON=python3
      PYTHON_LIBS=-undefined\ dynamic_lookup
      --with-python-module-path=#{lib}/python#{xy}/site-packages
      --with-boost-python=boost_python#{xy.to_s.delete(".")}-mt
    ]
    args << "--with-expat=#{MacOS.sdk_path}/usr" if MacOS.sdk_path_if_needed

    system "./configure", *args
    system "make", "install"

    site_packages = "lib/python#{xy}/site-packages"
    pth_contents = "import site; site.addsitedir('#{libexec/site_packages}')\n"
    (prefix/site_packages/"homebrew-graph-tool.pth").write pth_contents
  end

  test do
    (testpath/"test.py").write <<~EOS
      import graph_tool as gt
      g = gt.Graph()
      v1 = g.add_vertex()
      v2 = g.add_vertex()
      e = g.add_edge(v1, v2)
      assert g.num_edges() == 1
      assert g.num_vertices() == 2
    EOS
    system "python3", "test.py"
  end
end

Graph-tool 2.28 requires macOS Mojave. It was updated in Homebrew on June 9. 
You can try installing the previous version by using a direct URL to the formula at the commit just before the bump: brew install --build-from-source https://raw.githubusercontent.com/Homebrew/homebrew-core/26177e166b/Formula/graph-tool.rb
Segmentation Fault: CS50 recover.c This program accepts a filename as input and should recover all JPEGs from that file. It reads 512 bytes at a time, checking for the start of a new JPEG. The program compiles, but it gives a segmentation fault when I run it. Please advise me how I can go about fixing this.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    // check for proper usage
    if (argc != 2)
    {
        printf("Usage: 1 command line argument\n");
        return 1;
    }

    // check if file can be opened
    FILE *file = fopen(argv[1], "r");
    if (file == NULL)
    {
        printf("Cannot be opened\n");
        return 2;
    }

    // read 512 bytes into buffer until end of card
    int buffer[128];
    int counter;
    counter = 0;
    char filename[8];
    FILE *img = NULL;
    while (fread(buffer, 4, 128, file) == 128)
    {
        // check if start of new JPEG
        if (buffer[0] == 0xff && buffer[1] == 0xd8 && buffer[2] == 0xff &&
            (buffer[3] & 0xf0) == 0xe0)
        {
            // check if first JPEG
            if (counter == 0)
            {
                sprintf(filename, "%03i.jpg", counter);
                img = fopen(filename, "w");
                fwrite(buffer, 4, 128, img);
                counter += 1;
            }
            else if (counter > 0)
            {
                fclose(img);
                sprintf(filename, "%03i.jpg", counter);
                img = fopen(filename, "w");
                fwrite(buffer, 4, 128, img);
                counter += 1;
            }
        }
        else if (counter > 0)
        {
            fwrite(buffer, 4, 128, img);
        }
    }
    fclose(img);
    fclose(file);
    return 0;
}

The call fread(buffer, 4, 128, file) reads (as you rightly say) 512 bytes into the array of 128 integers. However, when you then test for the start of a new JPEG file in this code:

if (buffer[0] == 0xff && buffer[1] == 0xd8 && buffer[2] == 0xff &&
    (buffer[3] & 0xf0) == 0xe0)
{
    //...

you are inspecting the low bytes of each of the first four integers in the array, rather than (as you should be) checking the first four bytes of the array. These four bytes will all be in the first integer element (buffer[0]). Thus, your program will never find the start of a new JPEG and, consequently, the img file will never be opened ... 
and you are then calling fclose with a NULL file pointer, which is undefined behaviour and likely to cause the segmentation fault you see. Instead (assuming the correct 'endianness'), do the following check:

if ((buffer[0] & 0xfffffff0) == 0xffd8ffe0)
{
    //...

To allow for the 'wrong endianness', you could check for either byte order in buffer[0]:

if ((buffer[0] & 0xfffffff0) == 0xffd8ffe0 || (buffer[0] & 0xf0ffffff) == 0xe0ffd8ff)
{
    //...

Better still, just read (and write) the buffer for what it is - an array of 512 bytes, with code like this:

unsigned char buffer[512];
//...
while (fread(buffer, 1, 512, file) == 512)
{
    //...

This way, you can keep your 'JPEG start' test as it is. Thanks, this explanation really helped a lot. I made the necessary changes and the program runs smoothly now. fclose(img) is the culprit. img is NULL. In order to find out why, you can insert some printf after your fopen. Here is the link to debug any segfaults in the future.

$ ./recover card.raw
Memory access error: invalid parameter; abort execution.
# fclose's parameter (0x0) is not a valid FILE pointer.
# Stack trace (most recent call first) of the error.
# [0] file:/recover.c::55, 5
# [1] [libc-start-main]
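To see concretely why the integer-based comparison can never match while the byte-based one does, here is a small Python sketch (my own illustration, not from the original thread) that packs a JPEG-style header the way fread packs it into an int array on a little-endian machine:

```python
import struct

# A 512-byte block that starts with a JPEG signature (ff d8 ff e0).
block = bytes([0xff, 0xd8, 0xff, 0xe0]) + bytes(508)

# Byte-wise check, as the answer recommends (unsigned char buffer[512]):
byte_check = (block[0] == 0xff and block[1] == 0xd8 and
              block[2] == 0xff and (block[3] & 0xf0) == 0xe0)

# The buggy int-wise check: fread packed all four signature bytes into
# the FIRST 32-bit integer, so comparing buffer[0..3] against single
# byte values can never succeed.
ints = struct.unpack("<128I", block)   # little-endian, like x86
int_check = (ints[0] == 0xff and ints[1] == 0xd8 and
             ints[2] == 0xff and (ints[3] & 0xf0) == 0xe0)

# The answer's masked whole-int comparison for little-endian machines:
masked_check = (ints[0] & 0xf0ffffff) == 0xe0ffd8ff

print(byte_check, int_check, masked_check)  # True False True
```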
Subview's bottom anchor is relative to superview's top anchor, and not bottom anchor as intended I have a UICollectionView, and within each UICollectionViewCell I have a UILabel that's supposed to be at the bottom of the cell. All of the constraints work correctly except for the constraint that is meant to position the UILabel relative to the bottom of the cell (imagine that the label is 40 points above the bottom of the cell). Instead, my code places it 40 points below the top of the cell and as a result the UILabel is at the top of the cell, not the bottom. The code clearly specifies the bottom anchor of the cell's contentView, so I'm not sure what's going on.

// Constraints work correctly
self.text.widthAnchor.constraint(equalToConstant: CellConstants.textWidthAnchor).isActive = true
self.text.heightAnchor.constraint(equalToConstant: CellConstants.textHeightAnchor).isActive = true
self.text.leftAnchor.constraint(equalTo: self.contentView.leftAnchor, constant: CellConstants.textLeftAnchor).isActive = true

// Incorrect constraint
self.text.bottomAnchor.constraint(equalTo: self.contentView.bottomAnchor, constant: 40).isActive = true

Do you intentionally not have a constraint on self.text.topAnchor? My guess is that's what's causing you trouble. @AdamPro13 you are correct! Adding the top anchor fixes it, and even having the top anchor by itself fixes it. Any thoughts on why the bottom anchor by itself wouldn't do the trick?
Return dict with key values only if values have duplicates in it I am trying to iterate over the dict and validate against the below condition, but "{{ list1 | difference(list1|unique) }}" is giving an empty list even though the list has duplicates in it. I have input like this:

dict1 = {'a':[1,2,3],'b':[2,2,3],'c':[3,4],'d':[1,2,1]}

I am trying to get the lists that have duplicates, i.e. output like this:

dict2 = {'b':[2,2,3],'d':[1,2,1]}

I tried myself and got the required output with the below code:

- set_fact:
    dict2: "{{ dict2|d({})|combine({item.key: item.value}) }}"
  with_dict: "{{ dict1 }}"
  vars:
    count1: "{{ item.value | unique | count }}"
    count2: "{{ item.value | count }}"
  when: count1 != count2

- debug: var=dict2

Thanks, any suggestions are welcome.
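For comparison, the same duplicate filtering that the Ansible task performs (keep an entry only when the unique count differs from the full count) can be sketched in plain Python; this is just an illustration of the logic, not an Ansible replacement:

```python
dict1 = {'a': [1, 2, 3], 'b': [2, 2, 3], 'c': [3, 4], 'd': [1, 2, 1]}

# Keep only the entries whose value list contains duplicates,
# mirroring the unique-count vs. total-count comparison.
dict2 = {k: v for k, v in dict1.items() if len(set(v)) != len(v)}

print(dict2)  # {'b': [2, 2, 3], 'd': [1, 2, 1]}
```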
What's the difference between xml methods (XML transformation) Can anyone help me please? I need to transform one XML document to another by using XSLT. So I have the following simple code:

var xmlDocument = new XmlDocument();
xmlDocument.Load("input.xml");
var xslTransform = new XslCompiledTransform();
var styleSheetFullPath = "DefaultStyleSheet.xsl";
xslTransform.Load(styleSheetFullPath);

The input xml document looks like the example below:

<?xml version="1.0" encoding="utf-8"?>
<Root>
  <Object>
    <GUID>201110180954525010129</GUID>
    <Meta name="FILENAME" format="string" frate="" />
  </Object>
</Root>

I need to transform it to the next XML:

<?xml version="1.0" encoding="UTF-8"?>
<Root>
  <Object>
    <GUID>201110180954525010129</GUID>
    <FILENAME/>
  </Object>
</Root>

When I try to use the next approach it works well, but I first need to write the document to a file, and after the transformation I need to read it back:

var fileName = "result.xml";
using (var myWriter = new XmlTextWriter(fileName, null))
{
    xslTransform.Transform(xmlDocument.CreateNavigator(), null, myWriter);
}
var doc = new XmlDocument();
doc.Load(fileName);

But when I try to create the XML document dynamically by using the next approach

var xmlDocOutput = new XmlDocument();
var xmlDocOutputDeclaration = xmlDocOutput.CreateXmlDeclaration("1.0", "utf-8", null);
xmlDocOutput.AppendChild(xmlDocOutputDeclaration);
using (var xmlWriter = xmlDocOutput.CreateNavigator().AppendChild())
{
    xslTransform.Transform(xmlDocument.CreateNavigator(), null, xmlWriter);
}

I get the next output:

<?xml version="1.0" encoding="UTF-8"?>
<Root>
  <Object>
    <GUID>201110180954525010129</GUID>
    <FILENAME> </FILENAME>
  </Object>
</Root>

So what can I do to preserve white space in element content? Thanks in advance. Consider posting samples that allow us to reproduce the problem. 
And try to present samples that make sense, the last "output" sample you have presented is not even well-formed XML as it has two start tags <Root> but no closing tag and a start tag <Object> but a closing tag </MAObject>. So whatever your XSLT does, it is not likely to produce that output. Please show us the complete stylesheet code so that we can reproduce the issue. Is the problem that there's a space between and in that last output file? Otherwise the given file and the output file are equivalent. what do you mean by "So what can I do to preserve white space in element content." ? What's the problem with your whitespaces? Let me guess: you want to have a self-contained tag (eg. ) when there is either nothing, or whitespace, as the contents? For whitespace tips, refer to http://www.ibm.com/developerworks/xml/library/x-tipwhitesp/index.html
Android L - No peer certificate I've developed a small app that connects to my server using SSL with a self-signed certificate. To make it work, I've loaded my certificate in a custom keystore using the BouncyCastleProvider, and imported the certificate in my custom SSLSocketFactory. Everything works great from Android 2.3 (minimum sdk) up to 4.4.4. But in Android L (Preview) my app fails with:

Tue Aug 12 14:34:40 BRT 2014 : javax.net.ssl.SSLPeerUnverifiedException: No peer certificate
at com.android.org.conscrypt.SSLNullSession.getPeerCertificates(SSLNullSession.java:104)
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:93)
at org.apache.http.conn.ssl.SSLSocketFactory.createSocket(SSLSocketFactory.java:388)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:165)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:164)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:119)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:360)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)
....

and I have absolutely no idea how to fix it. Any help would be really appreciated. Hi. Did you manage to find a solution? I am having the same problem with Android L. Can you please post the resolution of this issue here? Just to let you know how we fixed this issue in our project; maybe this can help anybody. We based our app on the ion and AndroidAsync network libs, which had this bug: https://github.com/koush/AndroidAsync/issues/187 An update to the newest version (1.4.0) fixed the "no peer certificate" issue for us on Android L.
How can I change my 404 error page in the nginx configuration file? I need help with my nginx configuration file. I have already coded my HTML error page. This is the config file:

server {
    listen 888;
    server_name phpmyadmin;
    index index.html index.htm index.php;
    root /www/server/phpmyadmin;

    location ~ /tmp/ {
        return 403;
    }

    #error_page 404 /404-error.html;
    location ~ /*-error.html {
        try_files $1-error.html @error;
        internal;
    }

    location @error {
        root /var/www/wwwroot/umutisik.com/error_docs;
    }

Also sorry for my bad English, it's not my main language. Clarify your request. What is it you want to fix? Does this answer your question? Nginx - Customizing 404 page
Cannot get results from json API I am trying to get results from a JSON API. I believe the error has to do with the data being nested in the array. I am able to console.log my data from the API and it looks as such. However, I am unable to display my results in my HTML. I have looked at other questions such as this but to no avail. ts.

getApiResult(){
  if(this.searchvalue != null && this.searchyear != null){
    this.apiService.loadAll(this.searchvalue, this.searchyear).subscribe(data => {
      this.data = data;
      console.log(data);
    });
  }
}

html.

<div *ngFor="article of data;index as i">
  <div class="card" class="cardpadding">
    <div class="card-body">
      <h5 class="card-title" class="display-4" style="font-size: 32px; text-align: center;">
        {{article.Results[i].Model_Name}}
      </h5>
    </div>
  </div>
</div>

Tell us about the errors you see in the console? No errors in the console, unfortunately. You should loop through data.Results, i.e. instead of article of data it should be article of data.Results. Alternatively, in the subscribe call you can set this.data = data.Results; after checking error codes and all that. Also don't forget to change the mustache expression to {{article.Model_Name}} First of all, it's not <div class="card" class="cardpadding"> but rather <div class="card cardpadding">, with class names in one attribute separated by <space>. In your example only the last class gets applied. About your data: you assign this.data = data, where data is just an object. Instead you should change the *ngFor to *ngFor="article of data.Results" and later use article directly without the index i: {{article.Model_Name}} Edit: I just realized you are also missing the "let" in the *ngFor, e.g. *ngFor="let article of data.Results" It seems you have the wrong data type for ngFor. From your code, data is not an array but rather just a single object. I think you should iterate over data.Results. Try replacing your html template with the one below. You are iterating wrongly. 
<div *ngFor="article of data.Results;index as i">
  <div class="card" class="cardpadding">
    <div class="card-body">
      <h5 class="card-title" class="display-4" style="font-size: 32px; text-align: center;">
        {{article.Model_Name}}
      </h5>
    </div>
  </div>
</div>
How do I use a CSV file received from a URL query in Objective-C? How do I use a comma-separated-value (CSV) file received from a URL query in Objective-C? When I query the URL I get CSV data such as:

"OMRUAH=X",20.741,"3/16/2010","1:52pm",20.7226,20.7594

How do I parse and use this in my application? My problem is creating/initializing that NSString object in the first place. An example is this link http://download.finance.yahoo.com/d/quotes.csv?s=GBPEUR=X&f=sl1d1t1ba&e=.csv which returns CSV. I don't know how to parse this into an object since I cannot use an NSXMLParser object. You can try the following code using -componentsSeparatedByString:, which will end up with an NSArray of each component separated by commas:

NSURL *url = [NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=GBPEUR=X&f=sl1d1t1ba&e=.csv"];
NSString *reply = [NSString stringWithContentsOfURL:url encoding:NSASCIIStringEncoding error:nil];
NSArray *csvItems = [reply componentsSeparatedByString:@","];

Claus Works perfect! merci :) A couple of tools: http://michael.stapelberg.de/cCSVParse http://www.cocoadev.com/index.pl?ReadWriteCSVAndTSV Here's an example of using NSScanner: http://www.macresearch.org/cocoa-scientists-part-xxvi-parsing-csv-data
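As a language-neutral illustration of why splitting on every comma (as -componentsSeparatedByString: does) is fragile, here is the sample line parsed in Python: a real CSV parser strips the quoting and would also survive commas embedded inside quoted fields, while a plain split keeps the quote characters.

```python
import csv
import io

line = '"OMRUAH=X",20.741,"3/16/2010","1:52pm",20.7226,20.7594'

# Naive splitting on commas keeps the surrounding quote characters.
naive = line.split(',')

# A real CSV parser strips the quoting (and copes with commas
# embedded inside quoted fields, which a naive split would break on).
parsed = next(csv.reader(io.StringIO(line)))

print(naive[0])   # "OMRUAH=X"
print(parsed[0])  # OMRUAH=X
```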
Receive order for sequential self tell in akka I am using Akka.NET in my project. And I am wondering, do I have any guarantee that messages will be received in the same order as they were sent to a self actor? Ex: Self.Tell(msg1); Self.Tell(msg2); Question: will msg1 be handled before msg2? will msg1 be handled before msg2? Yes, if the actor is alive when it sends and receives both messages, and if the actor's mailbox has a FIFO implementation (which is the default). Akka.NET (and Akka) guarantees message ordering on a per-sender basis when using Tell, provided that the messages are actually delivered and the recipient's mailbox is FIFO. This is the case even if the sender and recipient are the same actor.
How to remove unwanted space from Ionic popover Excuse me, but can you tell me, please, what is that terrible footer in my popover? Do you see this white line above the last radio? I don't want to see it. How can I fix it?

<ion-popover-view>
  <ion-header-bar>
    <h1 class="title">Фильтр</h1>
  </ion-header-bar>
  <ion-content>
    <ion-radio ng-model="feedFilterLevel" value="0" ng-click="hideFilterPopover()">Все</ion-radio>
    <ion-radio ng-model="feedFilterLevel" value="1" ng-click="hideFilterPopover()">По стране</ion-radio>
    <ion-radio ng-model="feedFilterLevel" value="2" ng-click="hideFilterPopover()">По городу</ion-radio>
    <ion-radio ng-model="feedFilterLevel" value="3" ng-click="hideFilterPopover()">Только контакты</ion-radio>
  </ion-content>
</ion-popover-view>

UPD: It is the popover size. When I add more radios, the size doesn't change. How can I change the popover size or autosize it? Does that appear with any number of radios? It seems that when you add ion-header-bar, you get this extra white space!! No, it is not ion-header-bar; it just doesn't change its height. I fixed this, but the method is a little strange. I tried to change the .popover class, other classes, and tried CSS via id, but the only thing that works is:

<ion-popover-view style="height: 263px !important;">

If you have another answer, you're welcome.
In "La Haine" (1995 French movie), was this cop French or was he Arab? I was watching La Haine, and I thought he was Arab because of a comment Said made, which at the time I initially misread: "An Arab wouldn't last one hour in a police station" (initially I read that as: an Arab would not last in a career as a police officer in a French police station), but then I realized he was referring to Arab victims of police brutality when I saw "hour." So I'm wondering, is the cop who was sympathetic to the youth in the movie French or Arab? Said, one of the youths, is Arab, which leads me to believe, since the character is North African, that he's Egyptian. But the cop, what is he? The character is North African? If you identify the actor and his role we might be able to help. The most likely heritage is Algerian. I mean the cop in the picture. He's right there. Obviously, but what is the character's name or the name of the actor? The character. I know the answer now, because someone just told me, but I was wondering ethnicity-wise (because nationality-wise I know he's French): was he ethnically French or ethnically Arab? The plainclothes police officer is French, as you need to be French to work for the French "Police Nationale". He is also of Arab or Berber descent, as the talk with Saïd indicates, and as does his credited name, Samir. He's most likely from francophone North Africa (Tunisia, Morocco, Algeria). I wouldn't say Egypt since, even if Egyptian immigration existed, it was rarer. He is portrayed by the French actor Karim Belkhadra, who is Kabyle. So the cop is most likely French and of North African Arab or North African Berber descent, not Asian Arab. Thank you for the answer, bro :). I was wondering. So he is Arab. I was just a bit confused.
WooCommerce Category and Attributes linked and filtered I'm setting up a WooCommerce website and would like to have a special feature, but I don't know if there is a plugin already available or if I need to do it myself (and I have no idea if that's possible and how). As an example, I have a store that sells t-shirts, which can come in various colors and sizes as well as various fabric types. To make it less confusing for clients, I would like to set up the fabric types as categories and the colors and sizes as attributes. But the same t-shirt model can be available in all fabrics, with some different colors and/or sizes, and I would like to not have to duplicate each entry, but instead have them all linked in a way that once the customer chooses the fabric (the category), he only sees the attributes available for that category... So to sum up, let's say I have 1 model of t-shirt and 3 different fabrics. That t-shirt model can be made of each fabric type, and each model has specific colors available:

Fabric A -> Model 1 -> Colors Blue and red
Fabric B -> Model 1 -> Colors Green
Fabric C -> Model 1 -> Colors yellow

And I can have more products (Models):

Fabric A -> Model 2 -> Colors Blue
Fabric B -> Model 2 -> Colors Green and Green
Fabric C -> Model 2 -> Colors yellow

etc... Is there a way to accomplish this using WooCommerce? And how? Let me know if you need more details! WooCommerce has a built-in feature to set this up. You can set these products up as variable products. To get more details about how to set up a variable product, visit the following link http://docs.woothemes.com/document/product-variations/ Hi, yes, I used variable products, but is there a way to have the same product with different variables according to the category?
Setting java classpath for lucene on a mac I downloaded the lucene jars and then added them to the CLASSPATH variable via my .bash_profile, the paths to the jars display correctly in the terminal. export CLASSPATH=/Users/dk/lucene-3.4.0/lucene-core-3.4.0.jar export CLASSPATH=$CLASSPATH:/Users/dk/lucene-3.4.0/contrib/demo/lucene-demo-3.4.0.jar echo $CLASSPATH /Users/dk/lucene-3.4.0/lucene-core-3.4.0.jar:/Users/dk/lucene-3.4.0/contrib/demo/lucene-demo-3.4.0.jar However, java still complains to me when I try to run the demo: java org.apache.lucene.demo.IndexFiles -docs . Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/lucene/demo/IndexFiles Caused by: java.lang.ClassNotFoundException: org.apache.lucene.demo.IndexFiles at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) ...... I was able to follow this exact procedure to get the lucene demo working on an ubuntu machine, now I just want it to be able to run on my mac. It seems there are similar questions floating around stackoverflow but none of them seem to answer this question. Are you sure you have your paths right? It works fine for me. .../lucene-3.4.0/tmp $ ls lucene-core-3.4.0.jar lucene-demo-3.4.0.jar .../lucene-3.4.0/tmp $ export CLASSPATH=./lucene-core-3.4.0.jar:./lucene-demo-3.4.0.jar .../lucene-3.4.0/tmp $ echo $CLASSPATH ./lucene-core-3.4.0.jar:./lucene-demo-3.4.0.jar .../lucene-3.4.0/tmp $ java org.apache.lucene.demo.IndexFiles -docs . Indexing to directory 'index'... adding ./lucene-core-3.4.0.jar adding ./lucene-demo-3.4.0.jar 1485 total milliseconds .../lucene-3.4.0/tmp $ you are correct, I mistakenly left out a part of the path! silly question, but is it possible to get tab completion for the jars in the classpath as well? So far I have to type out the full name of the class, which is annoying. @Damonkashu Yeah, classpaths can be a pain. 
Btw, I generally recommend against an environment classpath--better to set it up in a shell script. I know you could implement classname completion in bash (search for "bash-completion") but I don't know how. Zsh has it by default, which is pretty cool. As of Lucene 6.0, these seem to work: java -cp ./core/lucene-core-6.0.0.jar:./analysis/common/lucene-analyzers-common-6.0.0.jar:./demo/lucene-demo-6.0.0.jar org.apache.lucene.demo.IndexFiles -docs <directory to index> For the search demo: java -cp ./core/lucene-core-6.0.0.jar:./analysis/common/lucene-analyzers-common-6.0.0.jar:./queryparser/lucene-queryparser-6.0.0.jar:./demo/lucene-demo-6.0.0.jar org.apache.lucene.demo.SearchFiles
What's the difference between $|v|$ and $||v||$? ($v$ being a vector.) I never understood what they mean and haven't found online resources. Just a quick question. I thought they were the absolute value and the magnitude, respectively, when regarding vectors; I need confirmation. The double bar indicates the magnitude of the vector. In essence, algebraically that is still the absolute value, meaning the square root of $x^2+y^2$ (in the 2D case). In general $\lvert\cdot\rvert$ and $\lVert\cdot\rVert$ are both used to signify norms of some sort. Different texts use different notation conventions, and sometimes the precise definition (if there is one) will vary from context to context. I have found $|\cdot|$ to almost always represent the Euclidean ($2$-)norm of a vector in $\mathbb K^n$, while $\Vert\cdot\Vert$ is a general sign for a norm. In different contexts, both may be used for different norms, though. I have even seen $|||\cdot|||$ for a "special" norm.
How to return rows in one table whose field contains a string from any row in another table? I have the following table structure in MySQL 5.6: CREATE TABLE `dictionary` ( `id` int(11) NOT NULL AUTO_INCREMENT, `word` varchar(80) NOT NULL, PRIMARY KEY (`id`), KEY `word` (`word`) ) ENGINE=InnoDB; CREATE TABLE `sentences` ( `id` int(11) NOT NULL AUTO_INCREMENT, `sentence` varchar(254) NOT NULL, PRIMARY KEY (`id`), FULLTEXT KEY `sentencefulltext` (`sentence`) ) ENGINE=InnoDB; I want to return all the sentences that contain any of the words in my dictionary. I don't really care about word boundaries; it can be a basic substring. Fulltext MATCH AGAINST seems to only work with strings and not a select from another table. As for LIKE and REGEXP, they seem to take ages since I have about 500k rows of sentences and 50k rows of words in my dictionary. One option would be to go through the dictionary row by row in a program and call a select on each row, but I'd rather do it in a single SQL statement. If someone has any ideas, please share. Thanks! Try something like this - SELECT * FROM `sentences`,'dictionary' WHERE INSTR(word,''+sentence+'') > 0 When you use LIKE with a % sign there is no indexing, which is why it is too slow, so you may need full-text search: http://dev.mysql.com/doc/refman/5.5/en/fulltext-search.html The fields seem to be in the wrong order there. I got it to work just a bit faster than LIKE with a small subset of the data with the following SQL: SELECT * FROM `sentences`, `dictionary` WHERE INSTR(sentence, word) > 0; However this takes ages with the big dataset. Does anyone have any more ideas? Check my edit above; LIKE and INSTR are always going to be slow - maybe think of redesigning the table so you don't have to perform a LIKE. I haven't tried 'IN' but I think it will be slow too. I started off with full-text searching, but could not reference the dictionary table in the MATCH AGAINST clause, and instead could only use text strings.
It would be perfect if I could do it like this: SELECT * FROM `sentences`, `dictionary` WHERE MATCH(`sentence`) AGAINST(`word`);
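The cross-join-on-substring idea from the answer can be sketched outside MySQL as well. Here is a small Python illustration using SQLite, whose built-in instr() behaves like MySQL's INSTR, with toy data just to show the shape of the query (the table and column names mirror the question, not the poster's real data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dictionary (id INTEGER PRIMARY KEY, word TEXT NOT NULL);
    CREATE TABLE sentences (id INTEGER PRIMARY KEY, sentence TEXT NOT NULL);
    INSERT INTO dictionary (word) VALUES ('cat'), ('dog');
    INSERT INTO sentences (sentence) VALUES
        ('the cat sat'), ('a bird flew'), ('hot dog stand');
""")

# Every sentence containing any dictionary word as a plain substring.
# DISTINCT avoids duplicate rows when several words match one sentence.
rows = conn.execute("""
    SELECT DISTINCT s.sentence
    FROM sentences AS s
    JOIN dictionary AS d ON instr(s.sentence, d.word) > 0
    ORDER BY s.id
""").fetchall()
print(rows)  # [('the cat sat',), ('hot dog stand',)]
```

The join still scans every sentence/word pair, which is exactly why the thread's participants note that INSTR cannot use an index; it demonstrates the single-statement shape, not a performance fix.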
common-pile/stackexchange_filtered
How to get Newsletter form shown in footer block? I'm new to Magento. I'm using Magento 2 Community Edition and I'm trying to get the newsletter subscriber form in the footer, but for some reason, it doesn't appear. How should I do that? I'm not 100% sure what you mean by 'get'. I thought you meant add it, but as it's already there in the Blank and Luma themes and the previous answer is pretty much correct, I presume you want to move it, so I'll base my answer on that. Quick answer: <move element="form.subscribe" destination="*DESTINATION-HERE*" /> The explanation: Find the block name First you need to find the name of the block you want to move; to do this I searched all of Magento's module and theme XML files for 'newsletter', using the following search term: vendor/magento/**/frontend/**/*.xml. With enough experience you'll know off the top of your head that it's subscribe.phtml, so it does get easier with time. This returned quite a few files; the one responsible for adding the footer newsletter block is vendor/magento/module-newsletter/view/frontend/layout/default.xml. This is the code that renders the block: <referenceContainer name="footer"> <block class="Magento\Newsletter\Block\Subscribe" name="form.subscribe" as="subscribe" before="-" template="subscribe.phtml"/> </referenceContainer> Move the block Now that we know the block name, we can move it.
To do this we use this code: <move element="*BLOCK-NAME-TO-MOVE*" destination="*DESTINATION-BLOCK-OR-CONTAINER-NAME*" /> So inside app/design/frontend/*PACKAGE-NAME*/*THEME-NAME*/Magento_Theme/layout/default.xml we can move the block, like so: <?xml version="1.0" ?> <page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="urn:magento:framework:View/Layout/etc/page_configuration.xsd"> <body> <move element="form.subscribe" destination="content" /> </body> </page> In the above example the newsletter signup form will be moved to the content; you can swap content with any block or container you wish to place it in. If you did want to add it to a new theme: If you did actually mean add it, then paste the below code into app/design/frontend/STORE-NAME/THEME-NAME/Magento_Theme/layout/default.xml <?xml version="1.0"?> <page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="urn:magento:framework:View/Layout/etc/page_configuration.xsd"> <body> <referenceContainer name="footer"> <block class="Magento\Newsletter\Block\Subscribe" name="form.subscribe" as="subscribe" before="-" template="subscribe.phtml"/> </referenceContainer> </body> </page> Hi Ben, your solution works very well for my template! Thank you for your time. Now I'm also trying to get the double minicart working; I will let you know how it goes.. If you are creating your own theme, in view/frontend/layout/default.xml put a tag like this <page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="urn:magento:framework:View/Layout/etc/page_configuration.xsd"> <body> <referenceContainer name="footer"> <block class="Magento\Newsletter\Block\Subscribe" name="form.subscribe" as="subscribe" before="-" template="subscribe.phtml"/> </referenceContainer> </body> </page> Don't forget to clear your layout cache. Hope this will work for you. Hi liyakat, I've already tried it like this and cleared the cache, but still nothing..
Did you configure your theme correctly? It should work to call the newsletter in the footer. Yes, it should be configured OK; I'm using Venustheme - Yume by the way.. Wouldn't it make more sense to move the block rather than redefine it? My purpose is to give this for a new theme, not to change its place :) Yeah I think the question isn't very clear, or I'm being stupid :P @liyakat How do I display the Newsletter Subscriber block in a CMS page? I created a CMS static block with: {{block type="newsletter/subscribe" name="newsletter" template="newsletter/subscribe.phtml"}} and after this went to a CMS page and added the CMS static block as a widget: {{widget type="cms/widget_block" template="cms/widget/static_block/default.phtml" block_id="63"}}, but my page is still empty. How do I display the Newsletter Subscriber block in a CMS page? Use this in your CMS block: {{block class="Magento\Newsletter\Block\Subscribe" template="Magento_Newsletter::subscribe.phtml"}}
common-pile/stackexchange_filtered
Can I use Basic authentication in a website and token authentication in a Web API? Is this the wrong concept? I need to create a website in ASP.NET where user registration is required, and I also need to create Web API code for mobile app users. Currently user registration is built with an ASP.NET Web Form, and login works fine (using basic authentication), but when I try to log in using the Web API code it shows error 400 Bad Request (token-based authentication), even though all parameters passed are correct. Does this happen because I used basic authentication in the Web Form? Do I need to use basic authentication in the Web API as well? If yes, then how does it work for login? Please help. I would use the same authentication model for both use cases. To implement basic authentication in Web API there is a good article from Mike Wasson. You can find the source code here. It's too much to copy here. Create your own [BasicAuthentication] attribute and add it to your controller classes. I would not use cookies; instead send your credentials every time you call the API within the Authorization header of your HTTP call. But make sure you use HTTPS! And to answer your question about mobile apps: yes, of course, adding an authentication header is possible within any mobile application. Same advice here about using HTTPS... You should be able to use the same basic auth for Web API that you use for Web Forms (both cookie based). Using basic auth in Web API will work fine with a mobile app, right?
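Whichever framework ends up on the server side, the Basic scheme itself is framework-independent: every request carries an Authorization header whose value is "Basic " plus the base64 of username:password. A standalone Java sketch, not tied to ASP.NET, with a placeholder URL:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    /** Build the value of the Authorization header for HTTP Basic auth. */
    static String basicAuth(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        // RFC 7617's well-known example credentials:
        System.out.println(basicAuth("Aladdin", "open sesame"));
        // prints: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==

        // Attaching it to a request (example.com is a placeholder URL);
        // no bytes are sent until the connection is actually used.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://example.com/api/values").openConnection();
        conn.setRequestProperty("Authorization", basicAuth("Aladdin", "open sesame"));
    }
}
```

The same header can be attached from any mobile HTTP client, which is why the answer stresses HTTPS: base64 is an encoding, not encryption.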
common-pile/stackexchange_filtered
Why is this onpaste event not firing? I'm trying to capture a paste event for an input[type="date"] element. In Chrome you cannot copy/paste into this type of element, so as a workaround I am trying to wrap it in a DIV element with an onpaste event. The issue I'm encountering is that if you click the date input and press CTRL+V nothing happens. However, if you click anywhere else in the body first and then click the date input and press CTRL+V, it works... <div onpaste="alert('test')"> <input type="date"> </div> Demo: https://jsfiddle.net/4qh31tn0/ EDIT: OK, so it turns out that the onpaste event doesn't have to be on the DIV, it can be moved to the INPUT element, but the problem persists. If I load the jsfiddle, click the input and press CTRL+V, nothing happens. If I click someplace outside of the INPUT element beforehand then click the input and press CTRL+V, it works... This is beginning to look like a bug in Chrome that only affects date inputs. Why not use the keyup event, dynamically change the date input to text, and then change it back to date? Opened https://bugs.chromium.org/p/chromium/issues/detail?id=634426 I just want to point out, though you probably already knew this, that onpaste isn't part of the HTML spec. If you get this working in one browser, it's unlikely to work in another. NOT A DUPLICATE. The question wasn't "how to get clipboard data on paste" (answer: use onpaste), it's "why does onpaste not fire in some cases".
common-pile/stackexchange_filtered
XML Parse to list [XmlRoot("Employees")] public class Employee { [XmlElement("EmpId")] public int Id { get; set; } [XmlElement("Name")] public string Name { get; set; } } and a simple method which returns a List: public static List<Employee> SampleData() { return new List<Employee>() { new Employee(){ Id = 1, Name = "pierwszy" }, new Employee(){ Id = 2, Name = "drugi" }, new Employee(){ Id = 3, Name = "trzeci" } }; } Program.cs: var list = Employee.SampleData(); XmlSerializer ser = new XmlSerializer(typeof(List<Employee>)); TextWriter writer = new StreamWriter("nowi.xml"); ser.Serialize(writer, list); The resulting file is: <?xml version="1.0" encoding="utf-8"?> <ArrayOfEmployee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Employee> <EmpId>1</EmpId> <Name>pierwszy</Name> </Employee> <Employee> <EmpId>2</EmpId> <Name>drugi</Name> </Employee> <Employee> <EmpId>3</EmpId> <Name>trzeci</Name> </Employee> </ArrayOfEmployee> but I would like the root element to have the name "Employees", not "ArrayOfEmployee". How can I do that? I want to do it because I have a file whose structure looks like: <Employees> <Employee> ... </Employee> <Employee> ... </Employee> </Employees> possible duplicate of How to rename XML attribute that is generated after serializing a List of objects Just change it as below: XmlSerializer ser = new XmlSerializer(typeof(List<Employee>), new XmlRootAttribute("Employees")); that's all.
But to get a clean XML as in your question (no XML declaration, no xsi or xsd namespaces etc.), you should use a few tricks: XmlSerializer ser = new XmlSerializer(typeof(List<Employee>), new XmlRootAttribute("Employees")); TextWriter writer = new StreamWriter(filename); var xmlWriter = XmlWriter.Create(writer, new XmlWriterSettings() { OmitXmlDeclaration = true, Indent = true }); XmlSerializerNamespaces ns = new XmlSerializerNamespaces(); ns.Add("", ""); ser.Serialize(xmlWriter, list, ns); You can pass the XmlRootAttribute to set the element name: var root = new XmlRootAttribute("Employees"); XmlSerializer ser = new XmlSerializer(typeof(List<Employee>), root); TextWriter writer = new StreamWriter("nowi.xml"); ser.Serialize(writer, list); From http://msdn.microsoft.com/en-us/library/f1wczcys%28v=vs.110%29.aspx : ... the root parameter allows you to replace the default object's information by specifying an XmlRootAttribute; the object allows you to set a different namespace, element name, and so on. You can mark your property with attributes; use the XmlArray and XmlArrayItem attributes.
common-pile/stackexchange_filtered
Laravel 4, eloquent - between statement and operators There is a query I used to run in MySQL: select * from my_table where $val between col1 and coL2; It works fine, but with Laravel 4, the only way to make that query is to have something like my_model::where('col1','>=',$val)->where('col2','<=',$val) This way doesn't seem to work, because I don't get the same result as when using the usual "select * ..." Any idea? Just to clarify my request: in my case I don't have "...where column between value1 and value2" but "where value between columns". So it seems to me that I can't use whereBetween. Look at this post on the Laravel forums: http://forums.laravel.io/viewtopic.php?pid=46789#p46789 This should do it... $results = my_model::select('*')->whereRaw("$val between col1 and coL2")->get(); I think this is pretty safe, but you may need to clean $val first. Does Laravel clean this for you? @Rafael, @user1669496, Laravel does not do anything to the variable. You need to call it like this: $results = my_model::select('*')->whereRaw("? between col1 and coL2", ['someValue'])->get(); for a safe query. You may try something like this // Get records whose id is between 3 and 6 $users = User::whereBetween('id', array(3, 6))->get(); Or using variables $id = 'id'; $from = 1; $to = 5; $users = User::whereBetween($id, array($from, $to))->get(); This will get all the records whose ID is between 1 and 5. Thanks for replying. The problem in my case is that the input data in my query has to be between columns. I didn't get it, can you please be more descriptive? @WereWolf-TheAlpha you're hard-coding the "between" values array(3,6), but he needs to select between 2 columns in the table -- SELECT * FROM table WHERE $desiredValue BETWEEN col1 AND col2 @TheAlpha $attendances= Attendance::whereBetween('attndate', array($fromdate, $todate))->get(); ..... this command is not working for dates?? Are there any other solutions for comparing dates??
Without creating a MySQL model, we can generate the query like this: // If a column value needs to be checked between value 1 and value 2 $DBConnection->table('users')->whereBetween('id', array(3, 6))->get(); // If a value needs to be checked between the values of column 1 and column 2 $DBConnection->table('users')->whereRaw("$val between col1 and col2")->get(); Your Eloquent example using where() didn't work because you have the comparison operators reversed. If you want to retrieve rows where val is between col1 and col2, it should be like this: my_model::where('col1','<=',$val)->where('col2','>=',$val) Notice the comparison operators are reversed to say "where val is greater than or equal to col1 and val is less than or equal to col2." You may have to squint a little hard to see it. :)
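The point in the last answer, that $val BETWEEN col1 AND col2 is the same condition as col1 <= $val AND col2 >= $val, is easy to verify outside Laravel. A quick illustration with SQLite in Python (toy table, not the poster's schema), showing that the two forms return identical rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE my_table (id INTEGER PRIMARY KEY, col1 INTEGER, col2 INTEGER);
    INSERT INTO my_table (col1, col2) VALUES (1, 10), (20, 30), (5, 7);
""")

val = 6  # the value that must fall between the two columns

# SQL's BETWEEN with the value on the left and the columns as bounds:
between = conn.execute(
    "SELECT id FROM my_table WHERE ? BETWEEN col1 AND col2", (val,)
).fetchall()

# The equivalent pair of comparisons, as in the whereRaw-free rewrite:
rewritten = conn.execute(
    "SELECT id FROM my_table WHERE col1 <= ? AND col2 >= ?", (val, val)
).fetchall()

print(between, rewritten)  # [(1,), (3,)] [(1,), (3,)]
```

Binding the value as a parameter (the ? placeholder) is also the safe counterpart to interpolating $val into whereRaw, which the thread rightly flags as needing cleaning first.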
common-pile/stackexchange_filtered
Multiple photo compression I need to keep reducing a photo's size while it is not under 1 MB. How do I do this? I have an activity where I can take a photo; when the photo is taken, in the activity result I start an AsyncTask which compresses my photo. Compression works well and the photo is saved OK if compression is done only once, but if I try multiple rounds of compression I get an infinite loop, and interestingly the photo size increases after every compression. But why? Does anyone have an idea what I am doing wrong? I found some samples that change the photo resolution, but I don't want to change the resolution; I want to apply only repeated compression. public class PhotoSaverAsyncTask extends AsyncTask<Void, Void, Void> { private int compressionLevel = 99; //if not set then default 99 private Bitmap bitmap = null; private final int MAX_PHOTO_SIZE_IN_KILO_BYTES = 1024; // 1MB private PhotoSavedNotify photoSavedNotify = null; private ProgressDialog mProgressDialog = null; public PhotoSaverAsyncTask(int compressionLevel, PhotoSavedNotify photoSavedNotify) { super(); this.compressionLevel = compressionLevel; this.photoSavedNotify = photoSavedNotify; } protected void onPreExecute() { mProgressDialog = new ProgressDialog(activity); mProgressDialog.setTitle("Kompresia fotografie..."); mProgressDialog.setMessage("Čakajťe prosím..."); mProgressDialog.setIndeterminate(true); mProgressDialog.show(); } protected Void doInBackground(Void...
params) { try { Log.d(this.toString(), "Original size: " + (getMediaFile().length() / 1024) + "Kb"); compressionMethod(); } catch (Exception e) { e.printStackTrace(); } return null; } protected void onPostExecute(Void unused) { mProgressDialog.dismiss(); if ((getMediaFile().length() / 1024) > MAX_PHOTO_SIZE_IN_KILO_BYTES) //1MB { new PhotoSaverAsyncTask(compressionLevel, photoSavedNotify).execute(); //compressionMethod(); I also tried a while loop without starting another task, but with the same result } else { photoSavedNotify.taskCompletionResult(getMediaFileName()); } } private void compressionMethod() { try { BitmapFactory.Options options = new BitmapFactory.Options(); bitmap = BitmapFactory.decodeFile(getMediaFileUri().getPath(), options); FileOutputStream out = new FileOutputStream(getMediaFile()); bitmap.compress(Bitmap.CompressFormat.JPEG, compressionLevel, out); out.flush(); out.close(); Log.d(this.toString(), "After compression size: " + (getMediaFile().length() / 1024) + "Kb"); } catch (Exception e) { e.printStackTrace(); } } } If after compression the size is still too large you should not compress the compressed photo. Instead you should compress the original photo with a different value of compressionLevel. If you compress a compressed photo it can get LARGER because the compression artifacts from the previous round of compression don't have the same smooth transitions as an uncompressed photo. "Compress the original photo with a different value of compressionLevel" -> this is a great idea, thanks. Multiple compression is useless. There is only so much you can get for a given resolution and quality. It is not possible to get a smaller size without changing these parameters.
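The fix suggested in the answer (recompress the original at a lower quality, never the previous output) can also be paired with a search over the quality value instead of re-running a task per step. The Android Bitmap API won't run off-device, so the sketch below abstracts "compress the original at quality q and measure the bytes" into a function; on Android that function would decode the original file once and call Bitmap.compress(JPEG, q, stream) per trial. The binary search assumes output size is non-decreasing in quality, which usually holds for JPEG:

```java
import java.util.function.IntUnaryOperator;

public class QualitySearch {
    /**
     * Find the highest quality in [minQuality, maxQuality] whose output
     * does not exceed limitBytes, always measuring against the ORIGINAL
     * image. Returns minQuality if even that quality is too large.
     * compressedSize maps a quality to the resulting size in bytes and
     * is assumed non-decreasing in quality.
     */
    static int bestQuality(IntUnaryOperator compressedSize,
                           long limitBytes, int minQuality, int maxQuality) {
        int lo = minQuality, hi = maxQuality, best = minQuality;
        while (lo <= hi) {                    // binary search over quality
            int mid = (lo + hi) / 2;
            if (compressedSize.applyAsInt(mid) <= limitBytes) {
                best = mid;                   // fits: try a higher quality
                lo = mid + 1;
            } else {
                hi = mid - 1;                 // too big: try a lower quality
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy model: size grows linearly with quality (50 KB per point).
        IntUnaryOperator model = q -> q * 50_000;
        int q = bestQuality(model, 1_024 * 1_024, 1, 99);
        System.out.println(q); // 20  (20 * 50_000 = 1_000_000 <= 1_048_576)
    }
}
```

At most seven trial compressions cover the whole 1..99 range, versus up to 98 single-step retries, and because each trial starts from the original bitmap the size can never creep upward between rounds.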
common-pile/stackexchange_filtered
Katie The genome assembly results are fascinating - we're seeing these massive Starship elements carrying entire biosynthetic gene clusters between our isolates. Emma Lewis-Shailer The size alone is remarkable. These aren't your typical transposons - some are over 100 kilobases. The HhpA tyrosine recombinase they use creates very specific target site insertions, which suggests sophisticated targeting mechanisms rather than random jumping. Katie That specificity intrigues me. If these elements can precisely deliver beneficial cargo like antibiotic resistance or novel enzyme pathways, we're looking at natural horizontal gene transfer vehicles that could revolutionize strain engineering. Emma Lewis-Shailer But there's the host defense problem. Fungi have repeat-induced point mutation systems specifically evolved to neutralize mobile elements. The fact that Starships persist suggests they've found ways to evade or overwhelm these defenses. Katie Or perhaps the benefits outweigh the costs. The pathogenic fungi we examined had significantly more transposable element insertions in their genes compared to saprophytic species. That correlation implies mobile elements might actually drive virulence evolution. Emma Lewis-Shailer Consider the temporal dynamics though. The younger, potentially active elements cluster away from essential genes, while the older remnants integrate throughout the genome. This pattern suggests initial selection pressure followed by gradual neutralization as the elements age. Katie Which raises questions about reactivation. If environmental stress could mobilize dormant elements, we might see rapid phenotypic shifts in response to changing conditions. The enzymatic domain enrichment in genes with old transposon insertions supports this - these aren't just genomic parasites. Emma Lewis-Shailer The horizontal transfer evidence is compelling too. 
When we see identical Starship sequences across divergent species, especially with those target site duplications intact, we're witnessing active inter-species movement. That has profound implications for containment if we're engineering these systems. Katie Exactly. We need to understand the mobilization triggers before we can safely harness these natural gene delivery systems. The Captain recombinase is clearly key, but what signals activate it?
sci-datasets/scilogues
How to understand 之所以 I'm reading Randall Munroe's "What If 2" in Mandarin. (It's by the author of the popular webcomic xkcd.com.) There's a chapter about placing a house-sized Jupiter in the middle of a neighborhood. The sentence reads: 一个20,000摄氏度的氢球将以不可思议的压力向外扩张。真实的木星之所以没有爆炸,是因为 […] Roughly: "A 20,000 degree ball of hydrogen will expand outward due to unbelievable pressure. The reason why the real Jupiter doesn't explode is because…" I understand that 之所以 means "the reason why", and my dictionary confirms that. However, I don't think 之所以 should be understood as one single word. Intuitively I can understand it as 之 + 所以 (therefore), however I'm not fully sure why it's used as a noun modifier (木星之所以). Can someone explain the grammar behind this construction? There's also the chengyu 不知所以 = "to not know the reason / to not know what to do". 之所以……是因为…… is more or less a fixed phrase (固定搭配) meaning "the reason why … is because …". Etymologically, 之所以 is comprised of 之 + 所以, but in contemporary Mandarin, it almost always occurs as a fixed phrase together with 是因为, known as a pair of "connectives" (关联词) along with 因为……所以……, 虽然……但是……, 不仅……而且……, and so on in Chinese school grammar. 木星(subject) 之('s) 所以(reason) 没有爆炸 (statement) 木星[之所以]没有爆炸 = [The reason why] Jupiter didn't explode ~ 美國[之所以]要援助烏克蘭,原因是為了打擊俄羅斯 [The reason why] the U.S. aids Ukraine is to weaken Russia Is 所以 actually used as a noun in this case? 所以 is acting as a noun for "reason" here. "之所以" = "reason of" You are almost there. 之 + 所以 = it + therefore. 木星(Jupiter)之(it)所以(therefore)没有爆炸(didn't explode), 是因为(was because)... "it therefore" isn't grammatical English though. I'm not sure what you are trying to say. Yes, "it" should be dropped in English. But my main point is that "之" can be used as a pronoun; here it equates to "它" (it). I don't think this is a correct explanation.
Ignore the previous answer, try this: "Jupiter, therefore it didn't explode, is because of......", or this: "Jupiter, for the reason it didn't explode, is because of......" 我今天之所以迟到是因为路上堵车了. The reason I was late today is that there was traffic on the road.
common-pile/stackexchange_filtered
Do I need a VR camera for a VR project? I am to make a project involving virtual reality for college and I have a budget of $800. The original idea is to get a VR camera (this camera https://www.stereolabs.com or any other similar camera) and program it with Unity to display objects in the environment and/or hide real objects in the room by first recognizing them with the camera. However, while searching I found this product (Vuforia) and made a simple demo in Unity which accomplishes displaying objects at a certain position; I am not sure if I will need the VR camera after this, since this can run on any camera. My experience with Unity is good, mainly making games, but I have little knowledge about both VR camera hardware and Vuforia. Some features that I need to implement include: displaying objects (a virtual avatar); hiding objects (real objects that are there, like a table); being able to control the avatar using a button (adding the button as well). So, the questions I have: 1. Can these VR hardware cameras be programmed with Unity? 2. Should I just use Vuforia and forget about the VR camera, or are there some features the VR camera can do that I can integrate with Vuforia and Unity? 3. Is Unity Personal Edition able to handle face recognition? Thank you
This is the reverse of a stereoscopic camera, and it would be the only way to make sure that your game will feel right and that the controls are appropriate. As for Vuforia, Vuforia is for determining the 3D position of a target object with respect to the camera. This is done primarily for AR, not VR (that is, displaying 3D content overlaid on top of the real world). Vuforia can do this with one camera or two or five (as the Microsoft HoloLens has). If your game isn't interacting with the real world at all, you don't need Vuforia. Depending on what your goals are, you will have to evaluate these options and figure out which one actually fits your project.
common-pile/stackexchange_filtered
How to get the id of one table into another table in JSP and servlets? I have a form with two text fields, main category and sub category, and both of them are inserted into two separate tables. I want the id of the main category table to be inserted into the sub category table as well; right now it's showing null. Can someone please tell me the way to achieve this or point me towards a document or other similar questions here (I searched but didn't find anything matching my needs)? I'll add my code and my tables to help understand the problem. DAO:- public void insertCategory(MainCategory maincategory,SubCategory subcategory) throws SQLException, ClassNotFoundException { Connection conn = DatabaseConnection.initializeDatabase(); String query1 = "insert into list_of_main_categories (main_category) values (?)"; String query2 = "insert into list_of_sub_categories (sub_category) values (?)"; try { PreparedStatement prestmt1 = conn.prepareStatement(query1); prestmt1.setString(1,maincategory.main_category); PreparedStatement prestmt2 = conn.prepareStatement(query2); prestmt2.setString(1,subcategory.sub_category); prestmt1.executeUpdate(); prestmt2.executeUpdate(); } catch (SQLException e) { e.printStackTrace(); } } Controller:- public void insertCategory(HttpServletRequest request, HttpServletResponse response)throws SQLException, IOException, ClassNotFoundException, ServletException, ParseException { String main_category = request.getParameter("main_category"); MainCategory newMainCategory = new MainCategory(main_category); String sub_category = request.getParameter("sub_category"); SubCategory newSubCategory = new SubCategory(sub_category); try { productDAO.insertCategory(newMainCategory,newSubCategory); } catch (ClassNotFoundException e) { e.printStackTrace(); } RequestDispatcher rd = request.getRequestDispatcher("Categories.jsp"); rd.forward(request,response); } main category table: https://i.sstatic.net/YBqSp.png // this is working fine sub
category table: https://i.sstatic.net/13Pfm.png // here I want the same m_id to be populated You have not returned the m_id in your query1; try like this: String query1 = "insert into list_of_main_categories (main_category) values (?) returning m_id "; String query2 = "insert into list_of_sub_categories (sub_category,m_id) values (?,?)"; try { PreparedStatement prestmt1 = conn.prepareStatement(query1); prestmt1.setString(1,maincategory.main_category); ResultSet rs = prestmt1.executeQuery(); int m_id = 0; while (rs.next()) { m_id = rs.getInt("m_id"); } PreparedStatement prestmt2 = conn.prepareStatement(query2); prestmt2.setString(1,subcategory.sub_category); prestmt2.setInt(2,m_id ); prestmt2.executeUpdate(); } catch (SQLException e) { e.printStackTrace(); } Note that INSERT ... RETURNING is PostgreSQL syntax; on MySQL you would instead create the first statement with conn.prepareStatement(query1, Statement.RETURN_GENERATED_KEYS), call executeUpdate(), and read the new id from prestmt1.getGeneratedKeys().
common-pile/stackexchange_filtered
What software is recommended for authentication using the Happstack web dev kit? Last week three of us spent two days trying to build a simple web application using Happstack. One of our concerns is authentication, and it appears there was once a Happstack.Auth package that looks really good. Unfortunately the original project seems to have been abandoned, and although there has been a fork, we could not get the fork to build. What alternatives do people recommend for doing authentication in Happstack? Is happstack-auth viable? You might consider happstack-authenticate as an alternative, darcs get http://src.seereason.com/happstack-authenticate/ happstack-authenticate builds on top of authenticate and pwstore to provide: standard username/password authentication openid authentication facebook connect The code is designed so that it can be used with multiple different templating solutions. Though, at the moment, there are only HSP templates. It is not on hackage yet, but will be. I expect it to become the de facto happstack authentication solution. You can see it in action here: http://www.seereason.com/ The source code includes a demo directory with a self-contained example. The code works -- though there are some features that still need to be added. For example, if you get redirected to a login page, you should ultimately be forwarded back to the original page you were trying to access after you are logged in. The biggest shortcoming at the moment is documentation. That will be addressed. You will notice that happstack-authenticate uses web-routes for type-safe urls and acid-state for storing authentication information. However, those design choices do not have to leak into the rest of your application. What template solution are you using? I would be interested in adding support for additional systems. -- jeremy p.s. If you look at the code, it may seem a bit more complex than expected.
That is because it is designed to allow for: multiple authentication methods for a single profile For example, you could link multiple openid accounts to the same profile. Perhaps because you are afraid that you might lose access to your primary openid account. Or maybe you want everyone on your team to login using a shared account. (For example, on a site like twitter, you might want multiple people to be able to post tweets through the company account). multiple profiles for a single authentication method On a site a like twitter, you might have multiple accounts. For example, I have twitter accounts for, myself, my photography, my music, happstack, seereason, and more. Instead of having a separate authentication for each account, it would be nice to have a single authentication, and then pick which 'profile' I want to be. sites using happstack-authenticate do not have to support these options, of course. At present we're using Text.Blaze for templating. We really prefer a WASH-style type-safe HTML generator to the XML template style. blaze-html support for happstack-authenticate is definitely the next template library on the TODO list. If that is a blocker I could do it sooner rather than later. I am moving this week, but I could do it next week sometime.
common-pile/stackexchange_filtered
Pexels API HTTP Authorization Header I just started learning about APIs and I am trying to use the Pexels API found here: https://www.pexels.com/api/ I have gotten the API key; however, I am not sure where to put it. I want the result to display JSON. When I run this command in bash it works; however, I am not sure how to do it inside JavaScript. curl -H "Authorization: YOUR_API_KEY" "http://api.pexels.com/v1/search?query=people" I am running Express and request. This is my code. var express = require("express"); var app = express(); var request = require("request"); app.set("view engine","ejs"); var url = "http://api.pexels.com/v1/search?query=example+query&per_page=15&page=1"; request(url, function(error,response, body){ if(!error && response.statusCode == 200){ console.log(body); } }); app.listen(process.env.PORT, process.env.IP, function(){ console.log("server is running!"); }); Any help is greatly appreciated, as I am new to this and tried to Google for an answer but couldn't find one. Thank you! You need to add a header to make API calls. The code goes this way: var express = require("express"); var app = express(); var request = require("request"); app.set("view engine","ejs"); var data = { url : "http://api.pexels.com/v1/search?query=example+query&per_page=15&page=1", headers: { 'Authorization': 'Your-Api-Key' } } request(data, function(error,response, body){ if(!error && response.statusCode == 200){ console.log(body); } }); app.listen(process.env.PORT, process.env.IP, function(){ console.log("server is running!"); });
Continuous functions and the boundary I have some trouble with the next exercise. Let $(X,\tau_X)$ and $(Y,\tau_Y)$ be topological spaces and $f:X\rightarrow Y$. Prove the equivalence of the following statements. a) $f$ is continuous b) For all $A\subset X$, $f[\text{der}_X(A)]\subset\text{cl}_{Y}(f[A])$ c) For all $B\subset Y$, $\text{Fr}_{X}(f^{-1}[B])\subset f^{-1}[\text{Fr}_{Y}(B)]$ Here, $\text{der}$ is the derived set and $\text{Fr}$ is the boundary of a set. a) $\Rightarrow$ b) Let $A\subset X$. We know that $\text{der}_{X}(A)\subseteq\text{cl}_{X}(A)$, so $f[\text{der}_{X}(A)]\subset f[\text{cl}_{X}(A)]$. Since $f$ is continuous, we know that $f[\text{cl}_{X}(A)]\subseteq\text{cl}_{Y}(f[A])$. Thus, $f[\text{der}_{X}(A)]\subset\text{cl}_{Y}(f[A])$. c) $\Rightarrow$ a) Let $x\in X$ and let $V\in \tau_{Y}$ (an open set) be such that $f(x)\in V$. By hypothesis, $$\text{Fr}_{X}(f^{-1}[Y\setminus V])\subset f^{-1}[\text{Fr}_{Y}(Y\setminus V)]$$ Since $V$ is open, $Y\setminus V$ is closed, so $\text{Fr}_{Y}(Y\setminus V)\subseteq \text{cl}_{Y}(Y\setminus V)=Y\setminus V$, and therefore $$\text{Fr}_{X}(f^{-1}[Y\setminus V])\subset f^{-1}[Y\setminus V]$$ Since a set that contains its boundary is closed, $$f^{-1}[Y\setminus V]=f^{-1}[Y]\setminus f^{-1}[V]=X\setminus f^{-1}[V]$$ is closed. Then $f^{-1}[V]$ is open, $x\in f^{-1}[V]$, and $f[f^{-1}[V]]\subseteq V$. Thus, $f$ is continuous. But how can I prove the implication b) $\Rightarrow$ c)? I have no idea. Any hint? I really appreciate any help you can provide. For c) to a) you can also use that $A$ is closed iff $\operatorname{Fr}(A) \subset A$. Then $f^{-1}[Y \setminus V]$ is closed from the second equation already. Start with a closed set instead to simplify. For (b) $\implies$ (c): Let $x\in Fr_X(f^{-1}B).$ (i).
If $x\in f^{-1}B$ then $x\in cl_X(X\setminus f^{-1}B)$ but $x\not\in X\setminus f^{-1}B$, so $x\in der_X(X\setminus f^{-1}B)$, and by (b), $$f(x)\in cl_Y(f(X\setminus f^{-1}B))=cl_Y(f(X)\setminus B)\subset cl_Y(Y\setminus B).$$ But also $f(x)\in B$ because $x\in f^{-1}B.$ So $f(x)\in B\cap cl_Y(Y\setminus B)\subset cl_Y(B)\cap cl_Y(Y\setminus B)= Fr_Y(B).$ So $x\in f^{-1}Fr_Y(B).$ (ii). If $x\not\in f^{-1}B$ then, similarly, $x\in der_X(f^{-1}B)$, so by (b), $f(x)\in cl_Y(f(f^{-1}B))\subset cl_Y(B).$ But also $f(x)\in Y\setminus B$ because $x\not\in f^{-1}B.$ So $f(x)\in cl_Y(B)\cap (Y\setminus B)\subset cl_Y(B)\cap cl_Y(Y\setminus B)=Fr_Y(B).$ So $x\in f^{-1}Fr_Y(B).$ Remark: (ii) is a dual of (i), because if we let $B'=Y\setminus B$ then $Fr_Y(B)=Fr_Y(B').$ But also $f^{-1}B$ and $f^{-1}B'$ are disjoint and their union is $X$, so $Fr_X(f^{-1}B)=Fr_X(f^{-1}B').$ So if we replace $B$ by $B'$ in (i) we get $x\in f^{-1}B'\implies x\in f^{-1}Fr_Y(B')=f^{-1}Fr_Y(B).$ (And $x$ must belong to $f^{-1}B$ or to $f^{-1}B'$.) There are many equivalent definitions of continuity. I hadn't seen (c) before.
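For reference, the argument in (i) and (ii) leans on a few standard boundary facts, collected here (stated without proof):

```latex
% Facts used implicitly in steps (i) and (ii):
\operatorname{Fr}_X(A) = \operatorname{cl}_X(A)\cap\operatorname{cl}_X(X\setminus A)
  \qquad\text{(hence } \operatorname{Fr}_X(A)=\operatorname{Fr}_X(X\setminus A)\text{)}

x\in\operatorname{cl}_X(S)\setminus S \;\Longrightarrow\; x\in\operatorname{der}_X(S)
  \qquad\text{(a closure point outside } S \text{ is a limit point of } S\text{)}

A \text{ is closed} \iff \operatorname{Fr}_X(A)\subseteq A
```

The second fact is what turns "$x$ lies on the boundary but inside (resp. outside) $f^{-1}B$" into membership in the derived set that hypothesis (b) needs.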
Aggregate sum of rows identified by max value I am trying to aggregate the max value of each type, then sum all of those into one value:

resource_id | price
a           | 100
a           | 84
b           | 33
b           | 100

A 100 and B 100 would be selected (the max value of each of type A and B). Expected return: 200. What I have so far:

SELECT resource_id, MAX(price)
FROM costs
GROUP BY resource_id

It is currently returning A = 100 and B = 100... I just need a little help on how to sum all this into a return of just 200. Thanks!

Wrap your query...

select sum(m_price)
from (
    SELECT resource_id, MAX(price) as m_price
    FROM costs
    GROUP BY resource_id
) z
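To see the wrapped query end to end, here is a small sqlite3 reproduction using the table and data from the question (the alias `m_price` and derived-table name `z` follow the answer; the exact SQL dialect in the question is not stated, but this form works in sqlite as well):

```python
import sqlite3

# Rebuild the sample table from the question in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE costs (resource_id TEXT, price INTEGER)")
conn.executemany(
    "INSERT INTO costs VALUES (?, ?)",
    [("a", 100), ("a", 84), ("b", 33), ("b", 100)],
)

# Inner query: one MAX(price) row per resource_id (a -> 100, b -> 100).
# Outer query: sum those per-group maxima into a single value.
(total,) = conn.execute("""
    SELECT SUM(m_price) FROM (
        SELECT resource_id, MAX(price) AS m_price
        FROM costs
        GROUP BY resource_id
    ) z
""").fetchone()
print(total)  # -> 200
```

The subquery collapses each group to its maximum first; only then does the outer `SUM` run, which is why the result is 200 rather than the sum of all four rows.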