What is wrong with this JavaScript code? I have two checkboxes named check1 and check2. I wanted either one to be disabled if the other one was checked. This is what I did: var male = document.getElementById("check1"); var female = document.getElementById("check2"); male.disabled = (female.checked == true) ? true : false; female.disabled = (male.checked == true) ? true : false; It does not work at all. Is the syntax correct? What did I do wrong? You can check for JavaScript syntax errors with http://jslint.com/. It has a slight learning curve, but it's extremely useful. @havok, how is that useful in this particular case? Consider using radio buttons instead of checkboxes. The user is likely to be less confused by radio buttons behaving normally than by checkboxes behaving like radio buttons. @J-P, part of his question was whether his syntax is correct. JSLint can confirm that it is. It doesn't answer his question, so that's why I only put it as a comment. You need the onchange event, and your code could be tidied up as well: var male = document.getElementById("check1"), female = document.getElementById("check2"); male.onchange = function() { female.disabled = male.checked; }; female.onchange = function() { male.disabled = female.checked; }; jsFiddle. Also, shouldn't you be using radio inputs? You need to remove the extra closing brackets after the functions. Here is a working example using jQuery: http://jsfiddle.net/billymoon/wAukf/ The disabled attribute shouldn't be set at all if you don't want the element disabled; AFAIK any value is "truthy", so do .removeAttribute('disabled') to un-disable it. You need to change the state of the other checkbox when one is clicked, e.g. when male changes, alter female, and vice versa.
function maleOnClick(){ female.checked = !this.checked; } function femaleOnClick(){ male.checked = !this.checked; } Anyway, why don't you use type="radio"? Try: male.setAttribute('disabled', 'disabled'); //set male.setAttribute('disabled', ''); //clear So: male.setAttribute('disabled', (female.checked == true) ? 'disabled' : '');
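For completeness, the mutual-disable idea from the onchange answer can be exercised outside a browser; in this sketch, plain objects stand in for the two checkbox DOM elements, and the wiring helper name is invented for illustration.

```javascript
// Sketch of the onchange-based fix, with plain objects standing in
// for the two checkbox elements so the logic can run anywhere.
function wireMutualDisable(a, b) {
  a.onchange = function () { b.disabled = a.checked; };
  b.onchange = function () { a.disabled = b.checked; };
}

const male = { checked: false, disabled: false, onchange: null };
const female = { checked: false, disabled: false, onchange: null };
wireMutualDisable(male, female);

male.checked = true;
male.onchange(); // in a browser this fires automatically on user input
```

In a real page the browser fires onchange itself; here we call it manually to show that checking one box disables the other, and unchecking re-enables it.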
common-pile/stackexchange_filtered
swf adaptive resolution, changing depending on the user's monitor Is it possible to make a Flash site which will change its size when the visitor's resolution is different? Practically, every visitor will see the same size of the site, even if he has 800x600 or 1280x1024? Maybe making the HTML go fullscreen but having the swf occupy only 80% of the screen, so it always has the same proportion to the user's monitor. Do you have any example? <center> <table border="0"> <tr> <td> <embed src="left.swf" quality="high" scale="exactfit" bgcolor="#000" wmode="transparent" width="150" height="600" /> </td> <td> <embed src="MAIN.swf" quality="high" scale="exactfit" bgcolor="#000" wmode="transparent" width="940" height="600" align="middle" allowScriptAccess="sameDomain" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" /> </td> <td> <embed src="rightr.swf" quality="high" scale="exactfit" bgcolor="#000" wmode="transparent" width="150" height="600" /> </td> </tr> </table> </center> I thought Flash movies always scaled proportionally to the output size by default... I guess I used too high a resolution, so on some computers it looks nice and on others it doesn't. If you set your Flash to 100% of the HTML page, you can then read the "screen" resolution and even listen for changes (if the window is resized). public function Main():void { if (stage) init(); else addEventListener(Event.ADDED_TO_STAGE, init); } private function init(e:Event = null):void { removeEventListener(Event.ADDED_TO_STAGE, init); stage.scaleMode = StageScaleMode.NO_SCALE; stage.align = StageAlign.TOP_LEFT; stage.addEventListener(Event.RESIZE, onStageResized); onStageResized(); } private function onStageResized(e:Event=null):void { trace(stage.stageWidth, stage.stageHeight); } How do you embed the Flash in your page? swfobject? OK, put 100% in width and height and set the following in your CSS: html, body { height:100%; } body { margin:0; }
common-pile/stackexchange_filtered
Is there any precedence for symbols when constructing parse trees? I am wondering if some symbols, such as the ones in propositional logic, have precedence over others when drawing parse trees. For example, in the sentence p ∧ q → r, would ∧ take precedence over → in becoming the root of the parse tree, or vice versa? Or is there no precedence, meaning either → or ∧ could be the root? Operator precedences have to be specified, either in the grammar or by other means. There is no inherent precedence to the rules, but you can fix one arbitrarily.
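To make the arbitrariness concrete, here is a small illustrative sketch (not from the original exchange) of a precedence-climbing parser in Python. Giving ∧ a higher precedence than → means ∧ binds tighter, so → ends up as the root of p ∧ q → r; swapping the numbers in the table would flip that choice.

```python
# Minimal precedence-climbing sketch: higher number = binds tighter.
# With "&" (AND) above "->" (IMPLIES), "p & q -> r" parses as
# (p & q) -> r, so the implication is the root of the parse tree.
PREC = {"->": 1, "&": 2}

def parse(tokens):
    def expr(min_prec):
        node = tokens.pop(0)          # an atom: p, q, r
        while tokens and PREC[tokens[0]] >= min_prec:
            op = tokens.pop(0)
            rhs = expr(PREC[op] + 1)  # left-associative operators
            node = (op, node, rhs)
        return node
    return expr(1)

tree = parse(["p", "&", "q", "->", "r"])
# tree == ("->", ("&", "p", "q"), "r"): "->" is the root
```

The precedence table is the "specification" the answer refers to: nothing in the grammar symbols themselves forces this choice.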
common-pile/stackexchange_filtered
Is it safe to delete a sender() with deleteLater() here? Suppose some slot was called by some QDialog-based class. I create the dialog elsewhere, e.g. MyDialog *dlg = new MyDialog(this); connect(dlg, SIGNAL(valueSet(QString)), SLOT(slotGetValue(QString))); dlg->exec(); And in the slot, I delete the object via its "deepest" parent class, which is QObject: void slotGetValue(const QString & key) { // process the value we retrieved // now delete the dialog created sender()->deleteLater(); } Is that the correct way of doing this? Is it safe? There should be no reason to delete a dialog that is modal. Since QDialog::exec() blocks, the dialog can be safely deleted immediately after that returns. MyDialog *dlg = new MyDialog(this); connect(dlg, SIGNAL(valueSet(QString)), SLOT(slotGetValue(QString))); dlg->exec(); delete dlg; From that, you can probably guess there isn't any need for using new and delete. You can just put it on the stack, and it will be destroyed when leaving scope. Like this: MyDialog dlg(this); connect(&dlg, SIGNAL(valueSet(QString)), SLOT(slotGetValue(QString))); dlg.exec(); And unless you need the this pointer in the MyDialog constructor, there's no reason to pass it.
common-pile/stackexchange_filtered
Convert a linked list of type Integer to a Set of type String in Java I have a List of Integers, but I would like to take that List and convert it to a HashSet. For example, my list is as follows: 1234 5678 1234 7627 4328 But I would like to take that list and convert the integers to strings in a HashSet so it doesn't include repeats. What is the best way to accomplish this? My list is defined as static List<Integer> list; And my HashSet is defined as static HashSet<String> set = new HashSet<String>(list); My error is that I can't convert from int to String, so what can I do to solve this? If all you want to do is eliminate duplicates, what's wrong with HashSet<Integer>? Do you really need them stored as strings? It is a string of integers. "A string of integers"? I thought you had a List of integers. What is this "string of integers" you're talking about? And what does a "string of integers" have to do with whether to convert each single integer to a string? Assuming you are using an ArrayList as the instantiated form of List<>: for (Integer value : list) { set.add(value.toString()); } This will iterate through your List, take each Integer, convert it to a String, and add that value to your HashSet. I ended up doing something like this. Thanks! No problem; if this was the answer you used, feel free to mark it as accepted. One way is to use streams: Set<String> set = list.stream() .map(Object::toString) .collect(Collectors.toSet()); First, you stream the list. Then, each element is converted to a string, and finally all elements are collected into a set. By default, Collectors.toSet() creates a HashSet, though this is not guaranteed by the specification.
If you want a guaranteed HashSet, you could use Collectors.toCollection(HashSet::new): Set<String> set = list.stream() .map(Object::toString) .collect(Collectors.toCollection(HashSet::new)); Using Java 8 streams: set = list.stream().map(e -> e.toString()).collect(Collectors.toCollection(HashSet::new)); DEMO Or without using streams: for (Integer i : list) set.add(Integer.toString(i));
common-pile/stackexchange_filtered
Error while getting public data from Twitter String status; IEnumerable<TwitterStatus> twitterStatus = twitterService.ListTweetsOnPublicTimeline(); foreach(String status in twitterStatus) { Console.WriteLine(twitterStatus); } Why does it give a "cannot convert type" error in the foreach loop? This is my whole code: namespace TweetingTest { class Program { static void Main(string[] args) { TwitterClientInfo twitterClientInfo = new TwitterClientInfo(); twitterClientInfo.ConsumerKey = ConsumerKey; //Read ConsumerKey out of the app.config twitterClientInfo.ConsumerSecret = ConsumerSecret; //Read the ConsumerSecret out of the app.config TwitterService twitterService = new TwitterService(twitterClientInfo); if (string.IsNullOrEmpty(AccessToken) || string.IsNullOrEmpty(AccessTokenSecret)) { //Now we need the Token and TokenSecret //Firstly we need the RequestToken and the AuthorisationUrl OAuthRequestToken requestToken = twitterService.GetRequestToken(); string authUrl = twitterService.GetAuthorizationUri(requestToken).ToString(); //authUrl is just a URL; we can open IE and paste it in if we want Console.WriteLine("Please Allow This App to send Tweets on your behalf"); //Process.Start(authUrl); //Launches a browser that'll go to the AuthUrl.
//Allow the App Console.WriteLine("Enter the PIN from the Browser:"); string pin = Console.ReadLine(); OAuthAccessToken accessToken = twitterService.GetAccessToken(requestToken, pin); string token = accessToken.Token; //Attach the Debugger and put a break point here string tokenSecret = accessToken.TokenSecret; //And another Breakpoint here Console.WriteLine("Write Down The AccessToken: " + token); Console.WriteLine("Write Down the AccessTokenSecret: " + tokenSecret); } twitterService.AuthenticateWith(AccessToken, AccessTokenSecret); //Console.WriteLine("Enter a Tweet"); //string tweetMessage; //string data; //string ListTweetsOnPublicTimeline; //string TwitterUserStreamStatus = ListTweetsOnPublicTimeline(); //TwitterStatus=ListTweetsOnPublicTimeline(); //tweetMessage = Console.ReadLine(); //ListTweetsOnPublicTimeline = Console.ReadLine(); //TwitterStatus twitterStatus = twitterService.SendTweet(tweetMessage); //TwitterStatus twitterStatus = twitterService.ListTweetsOnPublicTimeline(); //String status; IEnumerable<TwitterStatus> tweets = twitterService.ListTweetsOnPublicTimeline(); foreach(var tweet in tweets) { Console.WriteLine(tweet); //Console.WriteLine("{0} says '{1}'", tweet.User.ScreenName, tweet.Text); } //twitterStatus=Console.ReadLine(); } This is my whole code; I am facing just one error, on the foreach loop, due to my lack of knowledge of C#. What does the error say? Can you post the exception text or message? I updated it, kindly check it. I need output, I don't want an exception. You still have not posted the exception details as asked for by the other posters. Also, your twitterStatus is an IEnumerable. Is this TwitterStatus class the same as a string, or at least derived from string? I did it, but it gives me an error on the for loop: "Object reference not set to an instance of an object". @Amitd, I uploaded my whole code. Have you registered your app on Twitter?
This site will show you how: http://www.d80.co.uk/post/2011/02/13/A-Simple-Twitter-Client-in-C-with-OAUTH-using-TweetSharp.aspx Yes, I registered my application. Ah, good. Then did you replace lines 9 and 10 with the details from Twitter? (Please don't post the key and secret here; see step 4 in the above link.) You will also need lines 22 - 30 in your code. Yes, these lines are in my code, but I didn't post them here. You will need something like this: using TweetSharp; TwitterService service = new TwitterService(); IEnumerable<TwitterStatus> tweets = service.ListTweetsOnPublicTimeline(); foreach (var tweet in tweets) { Console.WriteLine("{0} says '{1}'", tweet.User.ScreenName, tweet.Text); } Also try this code to see why it is failing; try to debug response.StatusCode: using TweetSharp; TwitterService service = new TwitterService(); IAsyncResult result = service.ListTweetsOnPublicTimeline( (tweets, response) => { if(response.StatusCode == HttpStatusCode.OK) { foreach (var tweet in tweets) { Console.WriteLine("{0} said '{1}'", tweet.User.ScreenName, tweet.Text); } } }); More here: https://github.com/danielcrenna/tweetsharp Object reference not set to an instance of an object. You need to follow this article, a step-by-step guide; you will need to register your app with Twitter: http://www.d80.co.uk/post/2011/02/13/A-Simple-Twitter-Client-in-C-with-OAUTH-using-TweetSharp.aspx I am registered, as I already crawled a dataset from Twitter with Python, and now I am using C#. Updated my answer; can you check what you get for response.StatusCode? Put a break point on the "if" line and check; add a watch on the response.StatusCode value and the tweets list. The object you are iterating through is of type "TwitterStatus", not string... so you are confusing it when you try to automatically cast a TwitterStatus object as a string. Your question is very vague, so I'm going to assume your TwitterStatus object has a "Text" property for the purposes of this answer.
foreach(TwitterStatus status in twitterStatus) { Console.WriteLine(status.Text); } (Just replace the ".Text" with whatever property of TwitterStatus holds the status text.) "Use the new keyword to create an object instance" is now the error it shows on the foreach loop. Sounds like twitterStatus is null. Is your TwitterService call returning values? Thanks! When you break after your service call to ListTweetsOnPublic...(), is the IEnumerable list of "tweets" getting filled? This is the thing that is confusing me; that's why I uploaded my code. I think the IEnumerable list of tweets is empty. I see your code snippet looks like it came from https://github.com/danielcrenna/tweetsharp. The only difference I see here is that he declares his service as TwitterService service = new TwitterService(); (without passing in the client info parameter). Also, just checking that your "twitterService" is authenticating OK and that you are registered (good comment from amitd). I am registered, as I already crawled a dataset from Twitter with Python, and now I am using C#; I am registered and I update my statuses on Twitter from the same code. Have you been able to verify whether the IEnumerable list is empty after the ListTweetsOnPublicTimeline() call? If so, do you get any error info on why it failed?
common-pile/stackexchange_filtered
How to attach a scroll bar from the toolbox to a panel so that it expands in relation to the form/panel? I have dragged a scroll bar onto a panel on my form. Now I want to bind that scroll bar to the form, such that whenever I expand my form the scroll bar expands and remains at the right (in the case of a vertical scroll bar) or at the bottom (in the case of a horizontal scroll bar). What is happening right now is that the scroll bar remains at the place where I had positioned it, even if I resize or maximize the form. Help! You have to handle the Size events of the form. When the form size changes, you move your scrollbar so it stays flush right, and change the length of the scrollbar so it's always correct. It would be great if you could explain a bit more there.
common-pile/stackexchange_filtered
How can I make a clean build of the microsoft/vscode source tree? I am trying to make a clean new build of microsoft/vscode. Being new to JS, TS, npm, and Yarn, what command do I have to execute to clear all build artifacts and output files and build the code I changed? $ node --max_old_space_size=4095 ./node_modules/gulp/bin/gulp.js compile While this code may answer the question, providing additional context regarding how and/or why it solves the problem would improve the answer's long-term value. You can find more information on how to write good answers in the help center: https://stackoverflow.com/help/how-to-answer . Good luck
common-pile/stackexchange_filtered
How to configure Kubernetes so that I can issue commands against the master machine from my laptop? I'm trying to set up a cluster of one machine for now. I know that I can get the API server running and listening on some ports. I am looking to issue commands against the master machine from my laptop: KUBECONFIG=/home/slackware/kubeconfig_of_master kubectl get nodes should send a request to the master machine, hit the API server, and get a response listing the running nodes. However, I am hitting issues with permissions. One is similar to x509: certificate is valid for <IP_ADDRESS>, not <IP_ADDRESS>. Another is a 403 if I hit the kubectl proxy --port=8080 that is running on the master machine. I think two solutions are possible, with (B) preferable: A. Add my laptop's IP address to the list of accepted IP addresses that the API server or certificates or certificate agents hold. How would I do that? Is that something I can set in kubeadm init? B. Add <IP_ADDRESS> to the list of accepted IP addresses that the API server or certificates or certificate agents hold. How would I do that? Is that something I can set in kubeadm init? I think B would be better, because I could create an SSH tunnel from my laptop to the remote machine and allow my teammates (if I ever have any) to do similarly. Thank you, Slackware Are you doing kubectl --kubeconfig KUBECONFIG get nodes? Upon any request, the API server sends its certificate and kubectl verifies it. Your first error message might mean that the API server's certificate is valid for IP address <IP_ADDRESS>, but the API server is actually running on IP address <IP_ADDRESS>, so kubectl fails to verify this certificate. You can find here a good explanation of how to achieve this goal. You should add --apiserver-cert-extra-sans <IP_ADDRESS> to your kubeadm init command.
Refer to https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options You should also use a config file: apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration kubernetesVersion: v1.16.2 apiServer: certSANs: - <IP_ADDRESS> You can find all relevant info here: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2
common-pile/stackexchange_filtered
Use Laravel as a web service and client Is it possible to use the Laravel framework on one side as a RESTful web service and on the other side as a client? I have a project that has an internal database, and on the other side a web application. I would like to make a REST API for the internal DB and consume it with the Laravel framework as a client. So there will be 2 Laravel projects. It is possible; to be a client, you could throw cURL into it. Yes. It is simple to create a RESTful controller. To create a client you can use something like Guzzle, or anything you want. Can I use Guzzle as an extension for the Laravel framework, or is this a standalone solution? Yes, install it with Composer in your project: http://docs.guzzlephp.org/en/stable/overview.html#installation Yes, that is possible. PHP gives you a number of ways to make HTTP requests, the most powerful of which is probably cURL. I recommend you use Guzzle as that will make your life a lot easier.
common-pile/stackexchange_filtered
ReactJS auth redirect before application loads I am designing a React application consisting of multiple components. In my organisation they use a kind of authentication mechanism wherein one needs to check a particular cookie; if it is available, the user is considered authenticated and is allowed to view the app. If the cookie is not there / expired, then one needs to direct the user to a particular URL where he can fill in his user ID and password, and that URL then redirects him back to the original application along with a valid cookie. I am thinking about how this can be achieved in my React application... I know there is the componentWillMount method, but shouldn't the user first get authenticated before any of the components load? How do I implement this? Guidance appreciated. Cheers How about just putting the function in your entry point of the JS (e.g. index.js)? But it will block your full application from running & rendering. See Redux; its store may help you control the login state of a user. If you are using react-router you have to create a protected route component, which checks if the user is authenticated. Then your routes file should look like this: import { Route, Redirect, BrowserRouter } from 'react-router-dom' const ProtectedRoute = ({ component: Component, ...rest }) => ( <Route {...rest} render={props => ( auth.isAuthenticated ? ( <Component {...props}/> ) : ( <Redirect to={{ pathname: '/login', state: { from: props.location } }}/> ) )}/> ) export const Routes = () => ( <BrowserRouter> <Route path='/' component={Index}/> <ProtectedRoute path='/access' component={Access}/> </BrowserRouter> ) For more details check the official example on auth redirects. You have several cases of handling authentication redirection / token expiration. 1. At start time Wait for redux-persist to finish loading and injecting in the Provider component Set the Login component as the parent of all the other components Check if the token is still valid: if yes, display the children; if no, display the login form. 2.
When the user is currently using the application You should use the power of middlewares and check the token validity in every dispatch the user makes. If the token is expired, dispatch an action to invalidate the token. Otherwise, continue as if nothing happened. Take a look at the middleware token.js below. I wrote a whole sample of code for you to use and adapt if needed. The solution I propose below is router-agnostic. You can use it if you use react-router but also with any other router. App entry point: app.js See that the Login component is on top of the routers import React from 'react'; import { Provider } from 'react-redux'; import { browserHistory } from 'react-router'; import { syncHistoryWithStore } from 'react-router-redux'; import createRoutes from './routes'; // Contains the routes import { initStore, persistReduxStore } from './store'; import { appExample } from './container/reducers'; import Login from './views/login'; const store = initStore(appExample); export default class App extends React.Component { constructor(props) { super(props); this.state = { rehydrated: false }; } componentWillMount() { persistReduxStore(store)(() => this.setState({ rehydrated: true })); } render() { const history = syncHistoryWithStore(browserHistory, store); return ( <Provider store={store}> <Login> {createRoutes(history)} </Login> </Provider> ); } } store.js The key to remember here is to use redux-persist and keep the login reducer in the local storage (or whatever storage).
import { createStore, applyMiddleware, compose, combineReducers } from 'redux'; import { persistStore, autoRehydrate } from 'redux-persist'; import localForage from 'localforage'; import { routerReducer } from 'react-router-redux'; import reducers from './container/reducers'; import middlewares from './middlewares'; const reducer = combineReducers({ ...reducers, routing: routerReducer, }); export const initStore = (state) => { const composeEnhancers = window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose; const store = createStore( reducer, {}, composeEnhancers( applyMiddleware(...middlewares), autoRehydrate(), ), ); persistStore(store, { storage: localForage, whitelist: ['login'], }); return store; }; export const persistReduxStore = store => (callback) => { return persistStore(store, { storage: localForage, whitelist: ['login'], }, callback); }; Middleware: token.js This is a middleware to add in order to check wether the token is still valid. If the token is no longer valid, a dispatch is trigger to invalidate it. import jwtDecode from 'jwt-decode'; import isAfter from 'date-fns/is_after'; import * as actions from '../container/actions'; export default function checkToken({ dispatch, getState }) { return next => (action) => { const login = getState().login; if (!login.isInvalidated) { const exp = new Date(jwtDecode(login.token).exp * 1000); if (isAfter(new Date(), exp)) { setTimeout(() => dispatch(actions.invalidateToken()), 0); } } return next(action); }; } Login Component The most important thing here is the test of if (!login.isInvalidated). If the login data is not invalidated, it means that the user is connected and the token is still valid. 
(Otherwise it would have been invalidated with the middleware token.js) import React from 'react'; import { connect } from 'react-redux'; import * as actions from '../../container/actions'; const Login = (props) => { const { dispatch, login, children, } = props; if (!login.isInvalidated) { return <div>{children}</div>; } return ( <form onSubmit={(event) => { dispatch(actions.submitLogin(login.values)); event.preventDefault(); }}> <input value={login.values.email} onChange={event => dispatch({ type: 'setLoginValues', values: { email: event.target.value } })} /> <input value={login.values.password} onChange={event => dispatch({ type: 'setLoginValues', values: { password: event.target.value } })} /> <button>Login</button> </form> ); }; const mapStateToProps = (reducers) => { return { login: reducers.login, }; }; export default connect(mapStateToProps)(Login); Login actions export function submitLogin(values) { return (dispatch, getState) => { dispatch({ type: 'readLogin' }); return fetch({}) // !!! Call your API with the login & password !!!
.then((result) => { dispatch(setToken(result)); setUserToken(result.token); }) .catch(error => dispatch(addLoginError(error))); }; } export function setToken(result) { return { type: 'setToken', ...result, }; } export function addLoginError(error) { return { type: 'addLoginError', error, }; } export function setLoginValues(values) { return { type: 'setLoginValues', values, }; } export function setLoginErrors(errors) { return { type: 'setLoginErrors', errors, }; } export function invalidateToken() { return { type: 'invalidateToken', }; } Login reducers import { combineReducers } from 'redux'; import assign from 'lodash/assign'; import jwtDecode from 'jwt-decode'; export default combineReducers({ isInvalidated, isFetching, token, tokenExpires, userId, values, errors, }); function isInvalidated(state = true, action) { switch (action.type) { case 'readLogin': case 'invalidateToken': return true; case 'setToken': return false; default: return state; } } function isFetching(state = false, action) { switch (action.type) { case 'readLogin': return true; case 'setToken': return false; default: return state; } } export function values(state = {}, action) { switch (action.type) { case 'resetLoginValues': case 'invalidateToken': return {}; case 'setLoginValues': return assign({}, state, action.values); default: return state; } } export function token(state = null, action) { switch (action.type) { case 'invalidateToken': return null; case 'setToken': return action.token; default: return state; } } export function userId(state = null, action) { switch (action.type) { case 'invalidateToken': return null; case 'setToken': { const { user_id } = jwtDecode(action.token); return user_id; } default: return state; } } export function tokenExpires(state = null, action) { switch (action.type) { case 'invalidateToken': return null; case 'setToken': return action.expire; default: return state; } } export function errors(state = [], action) { switch (action.type) { case 'addLoginError': 
return [ ...state, action.error, ]; case 'setToken': return state.length > 0 ? [] : state; default: return state; } } Hope it helps.
common-pile/stackexchange_filtered
What does 'return $next($request)' do in Laravel middleware? Please respect that I'm new to programming and Laravel, so this question might seem a little odd to most of you. But I think this is what Stack Overflow is for, so: when I created a new middleware with the command php artisan make:middleware setLocale, there was already a handle function with this code in it: return $next($request); and I'm wondering what exactly this line does. persist / you are good to go / you have the right @AhmedAboud and how? What does the Closure do in this context? $next($request) just passes the request to the next handler. Suppose you added a middleware for checking an age limit: public function handle($request, Closure $next) { if ($request->age <= 18) { return redirect('home'); } return $next($request); } When the age is 18 or less it will redirect to home, but when the request passes the condition, what should be done with the request? It will be passed to the next handler, probably to the register-user method or some view. Took this as the answer since it gives an example and explanation. This is explained in the documentation: To pass the request deeper into the application (allowing the middleware to "pass"), call the $next callback with the $request. It's best to envision middleware as a series of "layers" HTTP requests must pass through before they hit your application. Each layer can examine the request and even reject it entirely. https://laravel.com/docs/11.x/middleware#defining-middleware
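The "layers" picture can be sketched outside Laravel too. The following illustrative JavaScript analogue (the helper names are invented here, not Laravel API) shows how each layer either short-circuits or hands the request to the next one, which is exactly what returning $next($request) does.

```javascript
// Each middleware receives the request and a "next" callback; calling
// next(req) passes the request one layer deeper, mirroring $next($request).
function compose(middlewares, finalHandler) {
  return middlewares.reduceRight(
    (next, mw) => (req) => mw(req, next),
    finalHandler
  );
}

// Short-circuit under-18 requests, otherwise pass the request deeper.
const ageCheck = (req, next) =>
  req.age <= 18 ? "redirect:home" : next(req);

const handle = compose([ageCheck], (req) => "controller response");
```

Calling handle({ age: 15 }) stops at the middleware layer, while handle({ age: 30 }) reaches the innermost handler, just as the Laravel docs describe.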
common-pile/stackexchange_filtered
Regexp.escape adds weird escapes to a plain space I stumbled over this problem using the following simplified example: line = searchstring.dup line.gsub!(Regexp.escape(searchstring)) { '' } My understanding was that, for every String stored in searchstring, the gsub! would leave line empty afterwards. Indeed, this is the case for many strings, but not in this case: searchstring = "D " line = searchstring.dup line.gsub!(Regexp.escape(searchstring)) { '' } p line It turns out that line is printed as "D " afterwards, i.e. no replacement was performed. This happens for any searchstring containing a space. Indeed, if I do a p(Regexp.escape(searchstring)) for my example, I see "D\\ " being printed, while I would expect to get "D " instead. Is this a bug in the Ruby core library, or did I misuse the escape function? Some background: in my concrete application, from which this simplified example is derived, I just want to do a literal string replacement inside a long string, in the following way: REPLACEMENTS.each do |from, to| line.chomp! line.gsub!(Regexp.escape(from)) { to } end I'm using Regexp.escape just as a safety measure in case the string being replaced contains a regex metacharacter. I'm using the Cygwin port of MRI Ruby 2.6.4. This happens for any searchstring containing a space. Indeed, if I do a p(Regexp.escape(searchstring)) for my example, I see "D\\ " being printed, while I would expect to get "D " instead. Is this a bug in the Ruby core library, or did I misuse the escape function? This looks to be a bug. In my opinion, whitespace is not a Regexp metacharacter; there is no need to escape it. Some background: In my concrete application, where this simplified example is derived from, I just want to do a literal string replacement inside a long string […] If you want to do literal string replacement, then don't use a Regexp.
Just use a literal string: line.gsub!(from, to) line.gsub!(Regexp.escape(searchstring)) { '' } My understanding was that, for every String stored in searchstring, the gsub! would leave line empty afterwards. Your understanding is incorrect. The guarantee in the docs is: For any string, Regexp.new(Regexp.escape(str))=~str will be true. This does hold for your example: Regexp.new(Regexp.escape("D "))=~"D " # => 0 Therefore this is what your code should look like: line.gsub!(Regexp.new(Regexp.escape(searchstring))) { '' } The underlying reason is that when gsub! is given a String as its pattern, that string is matched literally, backslashes included, so the escaped "D\\ " never matches "D ". As for why the space is escaped at all, there used to be a bug where Regexp.escape would incorrectly handle space characters: # in Ruby 1.8.4 Regexp.escape("D ") # => "D\\s" My guess is they tried to keep the fix as simple as possible by replacing 's' with ' '. Technically this does add an unnecessary escape character but, again, that does not break the intended use of the method.
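To make the difference concrete, here is a small sketch contrasting the three variants discussed above (assuming MRI Ruby):

```ruby
# Three ways to remove a literal "D " from a string.
searchstring = "D "

# 1. Escaped string passed directly to gsub!: a String pattern is matched
#    literally, so the backslash inserted by Regexp.escape prevents a match
#    and the string is left unchanged.
a = searchstring.dup
a.gsub!(Regexp.escape(searchstring)) { "" }

# 2. Wrap the escaped string in Regexp.new, as the documented guarantee
#    intends; now "\ " is a regex escape that matches a plain space.
b = searchstring.dup
b.gsub!(Regexp.new(Regexp.escape(searchstring))) { "" }

# 3. Simplest: for literal replacement, skip the regex machinery entirely.
c = searchstring.dup
c.gsub!(searchstring, "")
```

Variant 1 reproduces the question's surprise; variants 2 and 3 both empty the string.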
common-pile/stackexchange_filtered
Regex replace keeping digits after comma AND the sign at the end of the number I would like to replace all characters after the first 2 digits after a comma, while keeping the negative sign at the end of the string. E.g. the string 1234,56789- should result in 1234,56-. Using (,\d{2}).* and replacing with "$1-" does indeed keep everything up to 2 digits after the comma, but it doesn't keep/add the minus sign at the end of the string. I have tried (,\d{2}).*(-) and then replacing with "$1$2" too, but that didn't work either. Just use \d: (,\d{2})\d* If you want to use a substitution, you can use (,\d{2})\d* and replace with $1 (,\d{2}): keeps the comma and the needed two digits \d*: ignores the other digits https://regex101.com/r/dcObRO/2 If you have a floating point number, it is better to make groups on it and rewrite the number as you desire, as in: ([0-9][0-9]*,[0-9][0-9])[0-9]*([-+])? ^ 1st group two digits ^ 2nd group (optional) Then you can convert it into \1\2 as shown in this demo
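A quick illustrative sketch of both suggested patterns, using Python's re module for demonstration:

```python
import re

s = "1234,56789-"

# Consume the unwanted digits with \d* instead of .*, so the trailing
# sign is left in place rather than swallowed by the match.
print(re.sub(r"(,\d{2})\d*", r"\1", s))              # -> 1234,56-

# Grouped variant that also re-emits an optional trailing sign explicitly
# (unmatched optional groups substitute as empty in Python 3.5+).
print(re.sub(r"(\d+,\d{2})\d*([-+])?", r"\1\2", s))  # -> 1234,56-
```

The key difference from the question's attempt is that \d* stops before the non-digit sign, whereas .* consumed it.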
Change default keyboard language in mobile app I am developing an HTML5-based hybrid application for a German client. Everything is complete, but I have stumbled into one problem. The web app is in German, but while accessing the application, the keyboard is the English one and I need to change it to German. Specifying <html lang="de"> didn't really work. I've found some questions related to Android/iOS applications and found that in iOS you cannot do this. But in Android, someone was able to solve this: how to change keyboard language programmatically Is there a way in HTML5 apps? I am using backbone.js + phonegap Any help is appreciated. This is not really possible. You can change the user's locale. A number of solutions are already present on Stack Overflow: Here for example, or Here another one. However, this will only change the locale. The problem you will encounter is that the keyboard is itself an application. Therefore, you cannot change it directly from your application, nor can you guarantee that your user will have the "German" charset or add-on for the keyboard app that they employ. Your only real and reliable solution, if you wish to accomplish what you need, would be to create your own keyboard input. Otherwise, it will be in the user's hands to change their keyboard to German. That means you have to change the input language yourself. Yes, I saw the above solutions, but those are all native ones specific to Android... :( Yeah, but I think you do not have any other solution.
Integration problem of a modified 'standard integral' Consider $\int\limits_0^\infty ye^{-y}e^{-xy}dy$ I can use the fact that $\int\limits_0^\infty u^ne^{-u}\,du=n!$ Clearly, $\int\limits_0^\infty ye^{-y}e^{-xy}dy=\int\limits_0^\infty ye^{-y(1+x)}dy$. Hence, $\int\limits_0^\infty ye^{-y}dy=1!=1$. However, I don't understand how to apply that modification in the integral (apart from the fact that the answer is $\frac{1}{(x+1)^2}$). Could anyone help me out with this? Substitute $u = (1+x)y$. Thanks for the hint, so let $u=(1+x)y$, then $\int\limits_0^\infty \frac{1}{1+x}ue^{-u} du = \frac{1}{1+x}\int\limits_0^\infty ue^{-u} du = \frac{1}{1+x}\cdot 1!$, so I suppose I made a mistake somewhere. Could you tell me where? From $$\int\limits_0^\infty ye^{-y}e^{-xy}dy$$ Use the substitution $u = y(1+x)$ so $\dfrac{\mathrm{d}u}{\mathrm{d}y} = 1 + x$ and hence $\dfrac{\mathrm{d}y}{\mathrm{d}u} = \dfrac{1}{1+x}$. Our integral can be written as $$\int\limits_0^\infty \frac{u}{1+x}e^{-u}\, \frac{\mathrm{d}y}{\mathrm{d}u} \mathrm{d}u = \int\limits_0^\infty \frac{u}{1+x}e^{-u} \times \frac{1}{1+x} \, \mathrm{d}u$$ So we get $$\frac{1}{(1+x)^2}\int\limits_0^\infty ue^{-u}\,\mathrm{d}u = \frac{1}{(1+x)^2}$$ Tada.
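As a quick numerical sanity check of the closed form (pure Python, nothing assumed beyond the thread itself): integrating $ye^{-y(1+x)}$ over $[0,\infty)$ with a simple midpoint rule should reproduce $\frac{1}{(1+x)^2}$.

```python
import math

def integrand(y, x):
    return y * math.exp(-y * (1.0 + x))

def integrate(x, upper=50.0, steps=200_000):
    # plain midpoint rule; the integrand decays so fast that a finite
    # upper limit is a fine stand-in for infinity here
    h = upper / steps
    return h * sum(integrand((i + 0.5) * h, x) for i in range(steps))

for x in (0.0, 1.0, 2.5):
    numeric = integrate(x)
    exact = 1.0 / (1.0 + x) ** 2
    print(x, numeric, exact)
    assert abs(numeric - exact) < 1e-5
```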
Datadog extracting value from logs I have the following being output in my logs: Finished processing [154976] items for user id [1234] Is there any way in Datadog I could output that on a widget as userid -> number? Basically, process the logs, similar to how we do with errors from logs and creating alerts. Create a log processor with a grok parser rule such as getItemsAndUserid Finished processing [%{integer:count}] items for user id [%{integer:userid}]
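The grok rule above maps onto an ordinary regular expression. As a rough sanity check of the pattern shape outside Datadog (Datadog's %{integer} matcher is its own syntax; this plain-regex stand-in is only illustrative):

```python
import re

# Plain-regex equivalent of the grok rule: capture the item count
# and the user id from the bracketed fields.
log_line = "Finished processing [154976] items for user id [1234]"
m = re.search(r"Finished processing \[(\d+)\] items for user id \[(\d+)\]", log_line)
count, userid = (int(g) for g in m.groups())
print(count, userid)  # 154976 1234
```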
SU question closed as Exact Duplicate, but it does not seem to be the same to me. I posted this question yesterday evening: Good software to take a blog and format it for printing It is a question looking for a software tool that will take a blog and format it for printing. This morning I checked on it and found it had been closed as an exact duplicate for this question: What is a good alternative to Publisher for Desktop Publishing? Which is a question looking for a desktop publishing tool. While it is true that both questions refer to Publisher in some way, I don't see that my question is answered (or asked for that matter) in the "first post". Any chance I can get some re-open votes from Meta users? (Or a good explanation as to how my question is answered by the "first question".) Yes, you're correct, that was a bad call. Instead it should have been closed as a dupe of Looking for software to facilitate printing of online content. But then that one does mention wanting web-based software. Flip switch Dupe has been swapped and the annotation left as: Dupe close because while the newer one is of extremely similar premise, it does not cover just web-based software, which is outside the scope of SU. Actually, I would say that the second half of your question ("I could use Publisher") is matching to the duplicate, and this is probably why it got closed. However, the rest seems quite specific, and more than just asking for a Publisher alternative. In my opinion, this could be reopened. In general, remember that moderators are humans too (aside from random, who is obviously a unicorn on a sugar rush), and as such, can easily make mistakes. Don't hesitate in such case to edit your question with more details, and/or simply flag your own question for moderator attention, for a second opinion. I guess there is no need for a meta post for each closed question, such thing can be settled directly, without a "call for reopen votes".
How to run linear regression with constraints in R? If I have the following data n<-1000 x1<-rnorm(n,1,1) x2<-rnorm(n,2,2) x3<-rnorm(n,3,3) e<-rnorm(n) y<-3+0.5*x1+0.2*x2+0.3*x3+e I want to fit a linear model between $y$ and $x$ like: $$y=\alpha+\beta_1x_1+\beta_2x_2+\beta_3x_3+\epsilon$$ The unconstrained linear regression in R is fit=lm(y~x1+x2+x3) Now if I have some extra constraints for the coefficients: (i) $\beta_i\ge0$, for $i=1,2,3$; (ii) $\displaystyle\sum_{i=1}^3\beta_i=1$. I still want to run a linear regression but with the two constraints above. How can I implement this constrained linear regression in R? To require $\sum_i \beta_i=1$, you can just write $\beta_3=1-\beta_1-\beta_2$, so this is a two-parameter optimization. One way to fool R into using only positive values for $\beta$ is to use optim(logPars, ols) on the OLS sum of squares and pass $logPars = \log(\beta_1,\beta_2)$ as parameters, then exponentiate them before calculating the sum of squares.
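The re-parametrisation trick generalises beyond R. Below is a hedged sketch of the same idea in Python (the thread's code is R; the data size, learning rate, and crude finite-difference optimiser here are illustrative stand-ins for optim()): writing $\beta_i = e^{\theta_i}/\sum_j e^{\theta_j}$ makes both constraints hold automatically, so the remaining minimisation over $(\alpha,\theta)$ is unconstrained.

```python
import math
import random

random.seed(0)

# synthetic data mirroring the question (smaller n for speed);
# the true coefficients (0.5, 0.2, 0.3) already satisfy both constraints
n = 200
x = [[random.gauss(1, 1), random.gauss(2, 2), random.gauss(3, 3)] for _ in range(n)]
y = [3 + 0.5 * xi[0] + 0.2 * xi[1] + 0.3 * xi[2] + random.gauss(0, 1) for xi in x]

def betas(theta):
    # softmax re-parametrisation: every beta_i >= 0 and sum(beta_i) == 1
    e = [math.exp(t) for t in theta]
    s = sum(e)
    return [v / s for v in e]

def sse(params):
    alpha, b = params[0], betas(params[1:])
    return sum((yi - alpha - sum(bj * xij for bj, xij in zip(b, xi))) ** 2
               for xi, yi in zip(x, y))

# crude finite-difference gradient descent, purely for illustration;
# in practice hand sse to optim() in R or scipy.optimize.minimize
params = [0.0, 0.0, 0.0, 0.0]
start = sse(params)
lr, h = 1e-4, 1e-6
for _ in range(1000):
    base = sse(params)
    grad = [(sse(params[:i] + [params[i] + h] + params[i + 1:]) - base) / h
            for i in range(len(params))]
    params = [p - lr * g for p, g in zip(params, grad)]

b = betas(params[1:])
print("alpha:", round(params[0], 3), "betas:", [round(v, 3) for v in b])
assert all(bi >= 0 for bi in b) and abs(sum(b) - 1) < 1e-9
assert sse(params) < start
```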
How to use PHP echo for smart phones I am using CodeIgniter for my project and I want to know how to use echo in CodeIgniter to show a message that is sent to a mobile phone. It shows the PHP code instead of the message. https://i.sstatic.net/HF0xf.png $this->sms->message('Thank you for your order Your order has been received and will be processed shortly<br>Your order ID: <?php echo $order_id ?>'); How is it sent? Show your code. Apparently, the PHP is not executed, it is just treated as a String. As said below, it doesn't seem to have anything to do with Android and JavaScript. It appears to be sent as SMS, not related to Android if so. By the looks of that image, your file isn't .php - it's showing PHP code and not being properly parsed. This has nothing to do with smart phones. This is server-side processing, not client-side. Show your code. I posted my code under the post. You're already in PHP, therefore you need to remove the <?php and ?> tags. Plus, variables do not get parsed in single quotes, therefore you need to use double quotes. $this->sms->message("Thank you for ... shortly<br>Your order ID: $order_id"); I also hope that your files have a .php extension. For more information on single/double quotes, read the following Q&A on Stack: What is the difference between single-quoted and double-quoted strings in PHP? Thanks for the edit blex. I need to add something but I will keep yours. Sure, I just thought it would be nice not to have to scroll to see the changes you made ;) +1 @blex I wonder if there is a French-language section on Stack. I've never looked/searched. If not, it would be nice to have one. Arrivederci! I don't believe there is one in French, but there is one in Russian. Maybe later, but it does not bother me, I think having a common site in English for every country is the best way to gather the most answers and questions. Furthermore, it's a good way to practice our English :) PS: I love poutine!
- No, I'm not reducing Canada to poutine ;) @blex Indeed. The English language is the most widely used, and more so in business. Yeah, our poutine is the best. The secret is in the sauce ;-) Take care. I generally use this code. Change the settings. I got this link through 'mysmsmantra' $OrderMessage=urlencode("Dear Customer, Order $OrderId of Rs. $TotalPayableAmount is accepted by Sangakara. Thank You"); $ch = curl_init(); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_URL, "http://bulksms.mysmsmantra.com:8080/WebSMS/SMSAPI.jsp?username=sangakara&password=password&sendername=sendername&mobileno=$MobileNo&message=$OrderMessage" ); $content = curl_exec($ch);
Equalizers and Basic limit theorem in Category theory I think I've found an error in Benjamin Pierce's Basic Category Theory for Computer Scientists proof of the Basic Limit Theorem. This usually means I've misunderstood something. Can you point out the flaw in the following reasoning? Theorem: Let $\textbf{D}$ be a diagram in a category $\textbf{C}$, with sets $V$ of vertices and $E$ of edges. If every $V$-indexed and every $E$-indexed family of objects in $\textbf{C}$ has a product and every pair of arrows in $\textbf{C}$ has an equalizer then $\textbf{D}$ has a limit. The proof proceeds roughly as follows: Construct the product $\Pi_{I \in V}D_I$ of objects in $\textbf{D}$. Construct the product $\Pi_{(I \xrightarrow{e} J \in E)}D_J$. For any $\textbf{D}$-edge $D_e : D_I \rightarrow D_J$ there are two ways from $\Pi_{I\in V}D_I$ to any $D_J$. Those are $\pi_J$ and $D_e \circ \pi_I$. Form a family of arrows from each method. Each family induces a mediating arrow from $\Pi_{I\in V}D_I$ to $\Pi_{(I \xrightarrow{e} J \in E)}D_J$, call those $p$ and $q$. Select $e : X \rightarrow \Pi_{I\in V}D_I$ such that $e$ equalizes $p$ and $q$. $X$ is a limit of $\textbf{D}$. This is nice and concise. My trouble is this: What if there are two $D_e : D_I \rightarrow D_J$? In that case, there are potentially many more than two ways from $\Pi_{I\in V}D_I$ to each $D_J$ and potentially many more than two mediating arrows $\Pi_{I\in V}D_I$ to $\Pi_{(I \xrightarrow{e} J \in E)}D_J$. Note that this does not affect the proof: $\textbf{C}$ is assumed to be a small category, and no matter how many mediating arrows you have between the two products you can just keep stacking on equalizers until you've equalized them all (at which point you've constructed your limit). However, there's no mention of this in the text, and it leaves me wondering whether I'm crazy and/or missing something obvious. 
But limits require that for any limit $X$ with arrow $f_i : X \rightarrow D_i$ in the limit and arrow $g : D_i \rightarrow D_j$ in the diagram, $f_j = g \circ f_i$. If we ignore extra $D_e : D_I \rightarrow D_J$ in the proof then it seems like there are arrows in the diagram (the ignored arrows) that could violate this equation. Note that $\prod_{I\to J}D_J$ has index set $E$. Thus, two edges $e_1:I_1\to J$ and $e_2:I_2\to J$ correspond to two "copies" of $D_J$ in that product. In particular, if $e_1$ and $e_2$ have the same domain $I$ they still induce two distinct copies of $D_J$. Moreover, $q$ distinguishes them. @KarlKronenfeld, that's precisely what I was missing, thanks. Care to write up an answer? Nate, there is something I need you to clarify. What exactly are $I$ and $D_I$. My understanding was that both $I$ and $D_I$ were elements of $V$, but that seems to be false or at least not exactly true. Oh, that was just me being sloppy when summarizing the proof in the book. In the book, $I$ and $J$ are the notation used when referring to elements of $V$ (i.e. only in the indexing of the products), while $D_I$ and $D_J$ are used when referring to objects of $\textbf{D}$ (which are indeed the same, unless I'm missing something). Sorry. I'll go back and edit. Nate, diagrams are generally represented by functors, so $E$ and $V$ would not consist of constituents of $\bf C$ but rather some index category besides $\bf C$. Here, this functor would be unnecessary and also it would get in the way. It would be like showing that some open subset of $\mathbb R^n$ is a manifold by constructing charts and all that. Yeah, it seems that $I$, $J$, ... are indeed intended to index the objects $D_I$, $D_J$, ... of $\textbf{D}$. Apologies; this syntax is somewhat new to me. Nate, I will be very clear about how I am treating these things in my answer, so don't worry too much. Central limit theorem? Or basic limit theorem? @Did Exactly!!! 
The central limit theorem is a fundamental result in statistics http://en.wikipedia.org/wiki/Central_limit_theorem ....@Did and has nothing to do with category theory. So the current title is misleading. I tried to edit it yesterday, but apparently some (half asleep?) moderator did not accept it. Besides, as you point out, the OP himself calls it BASIC limit theorem in the body. So please some other moderator should consider reaccepting my edit or at least fix the title appropriately. Done. (No need for a mod to do it.) @Did I know, and I did it! But as I said it was rejected. I am glad it is fixed now. $\DeclareMathOperator {\cod}{cod}$ Note about my notation: There is no loss of generality just to say that $V$ and $E$ consist of the objects and arrows (respectively) of the diagram $\mathbf D$. Thus, the product of objects is $P_1=\prod_{I\in V}I$ and the other product is $P_2=\prod_{e\in E}\cod(e)$, where I use $\cod(e)$ to refer to the codomain of $e$; if $e:I\to J$, then $\cod(e)=J$. Now, you asked what happens to two arrows $e,e':I\to J$ in $\mathbf D$. Since they are different elements of $E$, they will represent different "copies" of $J$ in the product $P_2$. In category theory one uses projections to formalize the notion of "copies". Specifically, the projections $\pi_e$ and $\pi_{e'}$ from $P_2$ to $J$ allow us to differentiate between the instance of $J$ corresponding to $e$ and the instance of $J$ corresponding to $e'$. Let's see how this works by examining the arrow $q$. It is defined by the property that $\pi_e\circ q=e\circ\pi_I$ for all $e\in E$. Thus, if $e,e'$ are as above, then $q$ must map into the coordinate $e$, so to say, by behaving like $e\circ\pi_I$. Likewise $q$ must map into the coordinate $e'$ by behaving like $e'\circ\pi_I$. Since $e\circ\pi_I$ and $e'\circ\pi_I$ can be completely different, so can $q$ in these two coordinates. Notice that the equation $\cod e=J=\cod e'$ really has little impact on $q$. 
In fact, if I were to sum up the above paragraph, it would be: the arrows $e$ and $e'$ dictate the behavior of $q$, not their codomains.
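For reference, the defining equations of the two mediating arrows can be written out explicitly; this is only a restatement of the construction above, in the same notation. For every edge $e \in E$:

```latex
\pi_e \circ p = \pi_{\operatorname{cod}(e)}
\qquad\text{and}\qquad
\pi_e \circ q = e \circ \pi_{\operatorname{dom}(e)} .
```

In particular, two parallel edges $e, e' : I \to J$ index two distinct coordinates of $P_2$, on which $q$ acts as $e \circ \pi_I$ and $e' \circ \pi_I$ respectively; so the single equalizer of $p$ and $q$ already enforces the commutativity condition for every edge at once, and no extra equalizers are needed.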
Make all buttons rounded Using Swift, I tried this code to make a button rounded, and it works: button.layer.borderColor = UIColor.grayColor().CGColor button.layer.borderWidth = 1 button.layer.cornerRadius = 8 Unfortunately, I've got a lot of buttons and I would like to know if there's a way to make all buttons rounded without doing "copy and paste" of the code above every time. You can do this via UIAppearance, which is a proxy that allows you to configure properties for all objects of a UIKit class. Firstly, as UIAppearance works on properties on the UIView itself and the ones you want to control are on the button layer, you need to expose these to the appearance: @objc extension UIButton { dynamic var borderColor: UIColor? { get { if let cgColor = layer.borderColor { return UIColor(CGColor: cgColor) } return nil } set { layer.borderColor = newValue?.CGColor } } dynamic var borderWidth: CGFloat { get { return layer.borderWidth } set { layer.borderWidth = newValue } } dynamic var cornerRadius: CGFloat { get { return layer.cornerRadius } set { layer.cornerRadius = newValue } } } In Swift, the dynamic keyword instructs the compiler to generate getters and setters for the property, so that UIAppearance can identify it as configurable (see this SO question, and Apple's documentation for more details). You can then set the properties on the UIAppearance proxy of the UIButton class: UIButton.appearance().borderColor = UIColor.grayColor(); UIButton.appearance().borderWidth = 2; UIButton.appearance().cornerRadius = 20; You should do the configuration during application startup, as the appearance changes apply when the view is added to the window (see the note in the UIAppearance documentation). If you want to give these default properties only to some buttons in your app, you can subclass UIButton, and use the appearance on that class instead of UIButton. For example: class MyButton: UIButton {} ...
MyButton.appearance().borderColor = UIColor.grayColor(); MyButton.appearance().borderWidth = 2; MyButton.appearance().cornerRadius = 20; will apply the styling only to buttons of the MyButton class. This allows you to define a different look & feel for buttons in your app by simply subclassing UIButton and working on the appearance of the subclass. It works!! For all newbies (like me) I suggest pasting the code in the first pane in AppDelegate.swift just below "import UIKit" and the code in the second pane below "func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {". Thank you very much! @Cristik Hello, about "UIAppearance works on properties in UIView", could you explain more? Because I tried to expose layer.borderColor which is CGColor and had a runtime error. But layer.borderWidth can be directly exposed; how come, since both of them are not properties in UIView? @DesmondDAI I meant that UIAppearance needs some existing properties on UIView to work with. In this case if you need to configure the border color via UIAppearance, then you need a borderColor property to work with. @Cristik I figured out why: there are certain data types that UIAppearance can work with, according to https://developer.apple.com/documentation/uikit/uiappearancecontainer. And because CGColor is a structure, it fails. Looks like properties' attributes should be @objc dynamic... (doesn't work without @objc on Swift 5) Good point, @AntonBelousov, thanks, I added @objc to the extension declaration, which should make all properties there @objc. With this, you can apply it to all buttons. Copy/paste to your helper and use. You can modify the button from a storyboard. Just add this class to any button.
(Add this class to any button in a storyboard) @IBDesignable class RoundedButton: UIButton { override func awakeFromNib() { super.awakeFromNib() layer.cornerRadius = frame.size.height / 2 clipsToBounds = true imageView?.contentMode = .scaleAspectFit } @IBInspectable var borderWidth: CGFloat = 0 { didSet { layer.borderWidth = borderWidth } } @IBInspectable var borderColor: UIColor? { didSet { layer.borderColor = borderColor?.cgColor } } @IBInspectable var bgColor: UIColor? { didSet { backgroundColor = bgColor } } override var isHighlighted: Bool { didSet { if isHighlighted { backgroundColor = backgroundColor?.withAlphaComponent(0.6) } else { backgroundColor = backgroundColor?.withAlphaComponent(1) } } } } You should subclass UIButton. In your subclass, override awakeFromNib and paste the code there. In the storyboard, when referring to your button, open the assistant editor and choose the third tab in. You can provide your custom class name here. Just to comment in relation to the other suggestions saying to use UIAppearance: this solution will let you apply the style to a large number of buttons, while still giving you the option to have regular UIButtons without the style later on, so I think this solution is a little more flexible (and easier imo). @ConnorNeville you can also subclass UIButton and configure the UIAppearance proxy for that subclass; this will allow different subclasses to all look the same without interfering with the default buttons Use this code to get a storyboard outlet as a property: extension UIView { @IBInspectable var cornerRadius: CGFloat { get { return layer.cornerRadius } set { layer.cornerRadius = newValue layer.masksToBounds = newValue > 0 } } @IBInspectable var cornerWidth: CGFloat { get { return layer.borderWidth } set { layer.borderWidth = newValue } } @IBInspectable var borderColor: UIColor { set{ self.layer.borderColor = newValue.cgColor } get{ return UIColor(cgColor: self.layer.borderColor!)
} } } Select the button and go to the Identity inspector, click the Add button (+) in the lower left of the user defined runtime attributes editor. Double click on the Key Path field of the new attribute to edit the key path for the attribute to layer.cornerRadius Set the type to Number and the value to 8. To make a circular button from a square button, the radius is set to half the width of the button. Or you can add this code, I find the first way easier but you can decide which one to use. button.layer.borderWidth = 2 button.layer.cornerRadius = 8 button.clipsToBounds = true
How to create OneNote 2010 section How can you create a new section in a OneNote 2010 notebook with c#? According to the API there is no method to do so. But there is a CreateNewPage Method so I wondering if there is something similiar for sections? If not, how can this be achieved except for manipulating the XML files (which is a task i'd like to avoid since I'm not experienced in it)? If the API indicates there is no method, that should answer your question, you can only create new pages. You could in theory look at the XML to figure out how its done. I would simply write my own method to modify the XML code for me. Here is code snippet from my add on: public bool AddNewSection(string SectionTitle, out string newSectionId) { try { string CurrParentId; string CurrParentName; string strPath; CurrParentId = FindCurrentlyViewedSectionGroup(out CurrParentName); if (string.IsNullOrWhiteSpace(CurrParentId) || string.IsNullOrWhiteSpace(CurrParentName)) { CurrParentId = FindCurrentlyViewedNotebook(out CurrParentName); if (string.IsNullOrWhiteSpace(CurrParentId) || string.IsNullOrWhiteSpace(CurrParentName)) { newSectionId = string.Empty; return false; } strPath = FindCurrentlyViewedItemPath("Notebook"); } else strPath = FindCurrentlyViewedItemPath("SectionGroup"); if (string.IsNullOrWhiteSpace(strPath)) { newSectionId = string.Empty; return false; } SectionTitle = SectionTitle.Replace(':', '\\'); SectionTitle = SectionTitle.Trim('\\'); strPath += "\\" + SectionTitle + ".one"; onApp.OpenHierarchy(strPath, null, out newSectionId, Microsoft.Office.Interop.OneNote.CreateFileType.cftSection); onApp.NavigateTo(newSectionId, "", false); } catch { newSectionId = string.Empty; return false; } return true; } Basically what I am doing here is to get the path of currently viewing Section Group or Notebook and then adding new section name to that path and then calling OpenHierarchy method. OpenHierarchy creates a new section with title provided and returns it's id. 
Following is where I create a new section and Navigate to it: onApp.OpenHierarchy(strPath, null, out newSectionId, Microsoft.Office.Interop.OneNote.CreateFileType.cftSection); onApp.NavigateTo(newSectionId, "", false); So can write something like: static void CreateNewSectionMeetingsInWorkNotebook() { String strID; OneNote.Application onApplication = new OneNote.Application(); onApplication.OpenHierarchy("C:\\Documents and Settings\\user\\My Documents\\OneNote Notebooks\\Work\\Meetings.one", System.String.Empty, out strID, OneNote.CreateFileType.cftSection); }
Are we going to enforce the prior research aspect of good questions or not? A question was asked about resistance and voltage today which struck me as off-topic because of no apparent research. Also, there was no conceptual discussion which indicated a difficulty about understanding. At this Meta question the PSE community lays out a comprehensive discussion about sufficient prior research. When I read the question, I immediately down-voted it and flagged to close, but there were two answers already posted with no down-votes (their privilege). I suppose my point for asking this question is to raise the issue, again, that we are getting low-quality traffic and people are obliging it by giving answers. I am not complaining about a simple question, but about the fact that a quick web search would turn up the answer already written somewhere. If we are to enforce the prior research clause, would it be appropriate, in addition to down-voting, to flag as "low quality" so that the moderators would have an immediate better picture of what is happening (rather than waiting for members to close). Frankly, that question looks fine to me. It's by no means a great question, but it looks like a genuine confusion with the concepts and the type of thread we're here to answer. @Rishi Because the OP mentions 5 bulbs in series and asks a question about voltage drop in each resistor, I inferred from that much specificity that it is a homework-inspired question. The homework VtC specifically says show some effort. Ohm's Law is very evident as the physics concept at play, and there is no lack of explanation of that concept. OP should have taken the time to at least tell us what they have tried. The prior research issue is always a tricky one. The problem is distinguishing between someone who simply can't be bothered and someone who is genuinely confused, perhaps because they are just starting out in physics and don't even know what to Google. 
In principle this shouldn't matter because we judge the question not the person. In practice we wouldn't be answering questions here unless we were enthusiastic about physics and eager to help budding young physicists get up to speed, so most of us will cut OPs some slack if we think it is a genuine question. I saw this question in the review queue and debated with myself whether to vote to close it. In the end I decided not to, but it was borderline and I find myself unwilling to criticise site members for voting to close or for deciding to leave the question open. I think we're pretty good at closing the more outrageous instances of insufficient effort questions so I wouldn't worry too much about the borderline cases. John, thanks for your answer. My reaction to the OP is based on the fact that the words voltage and resistance are both in the title, but the author doesn't indicate any effort to do a general search on those terms. Author mentions "5 bulbs in series" which is a very specific arrangement, along with the word series which is a circuitry technical term. From those facts I inferred it was a homework-related problem. Series resistors have LOTS of documentation all around the internet, in books, and even on this site. With all due respect to you, I don't consider this a borderline case but laziness Be sure to note that I am not saying you are lazy, rather the OP. @BillN Surely you can recognize that this is a subjective issue? This is why we require a five-user consensus for closure - you cast your close vote; if enough people agree with you, then it gets closed, and if people do not agree with you, then it's because the question is on the fence on a subjective issue. Yes, I recognize that @EmilioPisanty. I was simply laying out my reasoning for why I thought this question was particularly lacking sufficient research, as well as lacking a clear statement of their confusion. 
Personally, I don't think insufficient prior research makes a question low quality (in the sense of the VLQ flag). If there's something more going on, like a pattern of a single user posting many questions without prior research, then a custom moderator flag might be in order, but in general for individual unresearched questions I think it should be sufficient to downvote and/or vote to close using a custom reason (or one of the standard reasons, if it applies). This does get enforced. This is a community site-scope policy, documented in this meta's FAQ at What counts as sufficient prior research when asking a question?, and it gets used all the time to close questions. For a sampling of questions that get closed with comments mentioning that policy, see this query. As mentioned by John and on several places in the discussion at the FAQ post, the question of whether a given question has sufficient prior research or not is ultimately a subjective judgement call, and it is perfectly OK for different users to disagree on whether a specific question meets the bar or whether it should be closed under that policy. This is why these closures are not done by unilateral action, but through a five-vote consensus, so that this variation gets averaged out as far as possible. Maybe I missed something, but I don't feel comfortable about the word "enforce" in the title of the question. One can enforce some law, but I am not sure answers to a Meta question have the status of a law, they may be suggestions, "nice to haves". I looked at the rules (https://physics.stackexchange.com/help), and I did not find anything cut-and-dried about "prior research". The section on "research" at https://physics.stackexchange.com/help/how-to-ask is categorized as "tips". Again, I may have missed something. 
So I would think if one feels a question lacks "prior research", one could downvote it or vote to close (although I don't see "lack of prior research" among the available reasons for closure), but this does not look like "enforcement", it's more like expression of a personal opinion. Anyway, I don't feel we have a duty to "enforce" prior research, although prior research may be nice to have. Should we really demand perfection, especially from new users of the site? @Rishi : No, that does not count as official rule: at meta.stackexchange.com/q/7931 (FAQ for Stack Exchange sites), they write: "For official guidance from Stack Exchange, visit the Help Center." The scope of the site - what's on topic and what's not - is decided by the site community, not by Stack Exchange corporately. This community consensus is agreed through, and documented on, threads on this meta, particularly the ones marked as FAQ. Of course those policies need to be enforced. They are not "nice-to-haves" - they are an essential component of keeping the community moderation fair, consistent, and predictable. @EmilioPisanty : With all due respect, I cited the rules, you just offered your opinion. @Rishi : And I cited the rules that do not support your point of view.
"Unable to perform" issue in IIS Manager (localhost) When I'm trying to open any app in IIS Manager (local machine) like CGI, Handler mappings etc. I got an error There was an error while performing this operation: I'm unable to find out this issue. Make sure Web.config exists in the home directory of the website and its NTFS permissions are allowing read Can u please help how to check NTFS permissions.. You might try to use Jexus Manager to troubleshoot, as it might give a more meaningful error message, https://www.jexusmanager.com I installed the hosting bundle again. i.e. repair and then checked if the iis_usr has all access for the wwwroot folder. This resolved the problem for me.
Unique Facebook Comments with Viddler Playlist Videos Is it possible to change the Facebook Comments box on a page to a new one when a user changes a video within our Viddler playlists? The playlist changes on the same page with no refresh, so if I just insert one FB comment box it will stay the same no matter what video is called up. I know it would be possible with JavaScript... Here is the script which was created here to do the playlist change: $(".playlist-link").click(function(e){ e.preventDefault(); var playlist = $(this).attr("href"); var url = 'http://www.viddler.com/tools/vidget.js' + '?widget_type=js_list' + '&widget_width=940' + '&source=playlist' + '&user=Fanaticgroup' + '&playlist='+ playlist + '&style=grid'+ '&show_player=true'+ '&player_type=simple'+ '&max=12'+ '&videos_per_row=6'+ '&v=2.0'+ '&id=7476737550'; $.getScript(url, function(data, textStatus){ }); }); Now this does the job perfectly and leaves it open for me to call other functions after the playlist is chosen, like auto-starting the first video, etc. Is there a way of calling a new FB comment box onto the page after the playlist is changed, and upon each new video?
According to the Facebook Documentation, you can insert the comment social plugin using html like so: <div class="fb-comments" data-href="http://example.com" data-num-posts="2" data-width="500"></div> <div id="fb-root"></div> <script> (function(d, s, id) { var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) return; js = d.createElement(s); js.id = id; js.src = "//connect.facebook.net/en_US/all.js#xfbml=1&appId=132892600147693"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk')); </script> It should be simple enough, at that point, to do something similar to this: <div class="fb-comments" data-href="http://www.yourdomain.com/some/unique/path/identifying/your/playlist" data-num-posts="5" data-width="500"></div> <div id="fb-root"></div> <script> (function(d, s, id) { var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) return; js = d.createElement(s); js.id = id; js.src = "//connect.facebook.net/en_US/all.js#xfbml=1&appId=YOUR_APP_ID"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk')); var commentContainerTemplate = $('.fb-comments').clone(); function loadNewPlaylistComments(playlist_id) { var currentComments = $('.fb-comments'), newComments = commentContainerTemplate.clone() .attr('data-href', "http://www.yourdomain.com/some/unique/path/identifying/your/playlist/" + playlist_id); currentComments.before(newComments); currentComments.remove(); FB.XFBML.parse( newComments.get(0) ); } </script> Note that .attr('data-href', ...) is used rather than jQuery's .data(), because the Facebook XFBML parser reads the DOM attribute, which .data() does not update. Then, every time you call loadNewPlaylistComments() the old comments will be removed, a new comment container will be added to the page, and the FB XFBML parser will be executed. That should load the comments for the new playlist. It may take a few seconds for the new comments to load, so I suggest you build in some kind of loading indicator for the user, once you get everything figured out. Yeah that would do it.
Well I will have to keep you in the loop on this one once I get round to installing the comments. Thank you kindly though!
React Refs' value not showing This is a very simple example. I have a form and I need the value of a hidden field so I need to use ref: <form> <input ref="num" type="hidden" name="amount" value="99"/> </form> var number = this.refs.num.value; console.log(number); // nothing console.log(this.refs.num); // shows the field How do I get the value with ref? this.refs.num.value is correct. Could you provide a bit more context to your code? Where are you trying to grab this.refs? @BradColthurst From a function. foo: function(){...} And where are you calling the function? I think you get the value before rendering; try this: handleSubmit(e) { if (e) { e.preventDefault(); } var value = this.refs.num.value; console.log(value); } render() { console.log(this.refs.num ? this.refs.num.value : ''); return ( <form> <input ref="num" type="hidden" name="amount" value="99" /> <a onClick={this.handleSubmit.bind(this)}>submit</a> </form> ); } The output would be an empty string at first and 99 after render. Hi. How do I get the 99 when I call my function? https://codepen.io/Crema/pen/yOdqpO?editors=1011 On this codePen I have no problem getting the value from a function; I've written a handleSubmit function. Thanks. I needed the .bind(this). ;) Thanks again.
Replace NULL by default value in LEFT JOIN, but not in ROLLUP I have a query having LEFT JOIN, group by and ROLLUP, like this: Select * from ( Select user_agent, value, recoqty, count(recoqty) as C from august_2016_search_stats SS LEFT JOIN august_2016_extra E on (SS.id = E.stats_id and E.key = 'personalized') where time >= '2016-08-22 00:00:00' and time <= '2016-08-22 23:59:59' and query_type = 'myfeed' and recoqty = 'topics' group by recoqty, user_agent, value with ROLLUP having recoqty is not null ) D order by C desc; which gives result like this: +------------+-------+---------+------+ | user_agent | value | recoqty | C | +------------+-------+---------+------+ | NULL | NULL | topics | 1330 | | abscdef | NULL | topics | 1330 | | abscdef | NULL | topics | 1285 | | abscdef | 1 | topics | 25 | | abscdef | 0 | topics | 20 | +------------+-------+---------+------+ Here, the value (NULL 1285) is due to LEFT JOIN, and the value (NULL 1330) is due to rollup. However, is there a way to replace NULL value ONLY for LEFT JOIN and not for ROLLUP ? This is a little tricky, because it appears that the NULL values coming from your data are indistinguishable from the NULL values coming from the rollup. One possible workaround is to first do a non-aggregation query in which you replace the NULL values from the value column with 'NA', or some other placeholder, using COALESCE. Then aggregate this as a subquery using GROUP BY with rollup. Then the NULL values in the value column will with certainty be from the rollup and not your actual data. 
SELECT t.user_agent, t.value, t.recoqty, COUNT(t.recoqty) AS C FROM ( SELECT user_agent, COALESCE(value, 'NA') AS value, recoqty FROM august_2016_search_stats SS LEFT JOIN august_2016_extra E ON SS.id = E.stats_id AND E.key = 'personalized' WHERE time >= '2016-08-22 00:00:00' AND time <= '2016-08-22 23:59:59' AND query_type = 'myfeed' AND recoqty = 'topics' ) t GROUP BY t.recoqty, t.user_agent, t.value WITH ROLLUP HAVING t.recoqty IS NOT NULL What happens when you run the above query? All NULL values got replaced by NA Use COALESCE first, then subquery and do the rollup. You should include sample data. I can't debug anything without this.
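The trick described above (rename the data NULLs before aggregating, so any NULL remaining afterwards must come from the rollup) can be illustrated with a toy Python model; the sample rows and the hand-rolled rollup below are invented for the example:

```python
from collections import Counter

# Rows as (user_agent, value) pairs; None plays the role of SQL NULL
# produced by the LEFT JOIN.
rows = [("abscdef", None), ("abscdef", None), ("abscdef", 1), ("abscdef", 0)]

# Step 1: COALESCE(value, 'NA') -- data NULLs become 'NA' up front.
cleaned = [(ua, "NA" if v is None else v) for ua, v in rows]

# Step 2: group by (user_agent, value), then emit one subtotal row per
# user_agent with value=None -- a hand-rolled stand-in for WITH ROLLUP.
counts = Counter(cleaned)
rollup = Counter()
for (ua, v), c in counts.items():
    rollup[(ua, None)] += c

result = dict(counts)
result.update(rollup)
# Now every None key in `result` is unambiguously a rollup subtotal.
```

After this, `result[("abscdef", "NA")]` counts the rows whose value was NULL in the data, while `result[("abscdef", None)]` is the subtotal, which is exactly the distinction the SQL answer is after.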
Using wp_handle_upload() to Direct Specific Path by Using $overrides How do you use wp_handle_upload function and apply_filters together to upload files on a specific path? What is going to be the override? For example: $overrides = array('file' => 'C:\\uploads\\filename.pdf','message' => 'File written'); apply_filter('wp_handle_upload',$overrides); or something like that? Or is this the right code? The real question in here is: what $overrides can be used as the key to this associative array? You need to specify a list of allowed mime types. You could make it easy by just getting the allowed mime types like: $file = $_FILES['the-file']; $upload_file = wp_handle_upload($file, array( 'test_form' => false, 'mimes' => get_allowed_mime_types() )); If you look at the codex for Default allowed mime types, you could manually specify which ever mime types you want in that format. An example would be like this answer.
"Show in Finder" switches Desktops Here's an odd problem I noticed with Chrome and the Finder. I'm new to OS X so I'm not really sure if this is a bug, or there is some setting that may fix it. Select Assign To..None for the Finder app. Open a Finder window on desktop 1. Open a Chrome window on desktop 2. Download a file in Chrome, and click on Show in Finder. The new Finder window opens on Desktop 2, yet you are switched back to the Finder window in Desktop 1. This is confusing and requires a couple of clicks to get back to Desktop 2. This behavior doesn't happen in iTunes when I click on Show in Finder there, so I don't know. What settings are specified in the Mission Control preference pane in System Preferences? If changing this option solved your problem, please provide an answer below to help others with the same problem. Unchecking "When switching to an application, switch to a Space with open windows for that application" in Mission Control System Preferences will fix the problem, but the side effects may not be desirable. Also, it seems this option is not consistent. Some Apple apps will automatically switch spaces anyway (Finder, iTunes), Wouldn't mind a better answer.
Is there dry land on Earth if the moon orbits just above its Roche limit? Imagine an Earth-like world in all respects except that the Moon is much much closer. I think I’m right to say if the Moon were very close that tidal forces would slowly rob it of momentum and it would probably eventually hit the Earth destroying both. Although theoretically the Moon should be disrupted before it hit the Earth, (Earths radius 6371km, Moon radius 1737km, Moon’s Roche limit 9492km) given its elliptical orbit and the extreme proximity I would have thought a collision was more or less unavoidable. My question is this in the last one thousand years before impact (or disruption) is there any permanently dry land on Earth? And the related question do the oceans even behave like oceans as we would know them under these extreme conditions? One point about this; we believe it's already happened. Our current theories about how a moon as large as ours (as a proportion to the Earth's mass) formed and why it's so close is that it was formed from a planetary collision. The thinking is that a planet around the size of Mars collided with the proto-Earth early in the formation of both. This caused a massive amount of debris to be flung into space, although the net mass of the Earth increased. Gravity being what it is, the debris forms a ring around the Earth, which in the space of a few thousand years, forms the Moon. Thing is, the early Moon was very close to the Earth. We know this because the Moon is actually drifting away from us ever so slightly. It's believed that the early Moon would have orbited the Earth every 35 hrs or so, and caused MASSIVE tides and storm fronts (assuming that the water fell back down to Earth or was condensed back after flash evaporation) because of being so close. It's not until the Moon recedes a little that things become sufficiently stable on Earth to sustain the first life. 
Based on our projections, it's believed that the moon will eventually free itself from Earth orbit and drift away as a rogue. This will cause problems for the habitability of the Earth, as many of the environmental cycles we take for granted (including our stable rotation) are based on having the moon in orbit. By the time this happens though (in about a billion years), the Earth is pretty much uninhabitable anyway because of the Sun and its increasing temperature and diameter. The real point here is that there's information out there about the effects that the Earth would have likely experienced during the early days of the moon, which might help you extrapolate this out in terms of tides and the like. Some sources would be nice, but +1 anyway. Ask and you shall receive, @Molot. :) The moon isn't going to escape Earth before the sun turns into a red giant and swallows the Earth, Moon and the rest of the inner solar system. For various reasons the moon is actually moving away much faster now than it has done before, or will do in the future (there is a resonance in the tides that won't exist for long). All very interesting but it doesn't really answer the question... @Slarty it does. It says the storms and tides were so severe life didn't form, and tells what happened to the water. Sure, this part could and probably should be longer and more detailed, but it is there. Hummmm... well yes I suppose it is. @JamesK The Moon gets further away by slowing down the Earth's rotation. If the Earth-Moon system survived long enough, the distance between the Earth and the Moon would eventually stabilise, i.e. there is not enough rotational energy in the Earth to allow the Moon to escape. The Earth would then be tidally locked to the Moon, the same way the Moon is already tidally locked to the Earth. However, I believe the Sun will expand and destroy both before that happens.
Fairly unpredictable. A moon that close would likely be tearing the Earth's atmosphere away, let alone causing extreme geological stress. Likely the Earth would be a heaping mass of lava with all surface water vaporized and dispersed into the atmosphere, now shared with the moon and vulnerable to being dispersed by solar winds. I would think there would be a good chance for a tidal lock to form, causing a great big volcano to appear. If the moon orbits the Earth as posited, the orbit will fairly rapidly get higher due to momentum transfer (unless the moon is in a retrograde orbit, in which case the orbit will decay). If you assume a retrograde orbit, consider how big the tides on Earth are at the Roche limit. The solid body Roche limit for Earth/Moon is 9492 km vs 384,000 km today. This is a ratio of about 40.5:1, and tidal stresses are about 66,400 times as large as today. This is causing major damage to Earth (and the moon is breaking up). Earthquakes and volcanoes are widespread and severe, tides wash over major portions of the continents, etc. I would not consider Earth, much less the oceans, familiar under such extreme conditions. When the moon breaks up, I would expect quite a few large chunks to impact Earth too.
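The stress figure quoted above comes from the inverse-cube scaling of tidal forces, and is easy to check; a quick sketch in Python (I use the actual mean lunar distance of about 384,400 km here, which is my own choice of input):

```python
# Tidal force scales as 1/r^3, so moving the Moon from its current
# distance down to the Roche limit multiplies tidal stress by the
# cube of the distance ratio.
current_distance_km = 384_400   # mean Earth-Moon distance (assumed input)
roche_limit_km = 9_492          # rigid-body Roche limit from the question

ratio = current_distance_km / roche_limit_km
stress_multiplier = ratio ** 3

print(f"distance ratio: {ratio:.2f}")                 # roughly 40.5
print(f"stress multiplier: {stress_multiplier:,.0f}")  # on the order of 66,000
```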
Is there a way to manipulate/interfere with sunlight? What happens when yellow light is passed through a prism? If the speed of a light wave decreases, doesn't the frequency decrease as well? Is there a way to manipulate/interfere with sunlight, or a way to somehow produce a kind of resonance, to create microwaves (monochromatic?) from sunlight without reducing the intensity, in order to increase the efficiency of a photovoltaic cell? This is much too broad for this site's format - you should separate it into three or more different questions addressing the different aspects you're including here. Some are duplicates of existing questions (linked here, and at their Linked sidebars on the right) and you should be much more explicit about what it is about the existing answers you find confusing. Suppose visible (sun)light is filtered through a colored transparent substance that allows red and green light to pass through; what are the resulting light's properties/characteristics? What is the difference between that and the yellow-colored light produced by a prism? The difference is the colour (spectra). Suppose visible (sun)light is passed through a prism made of a colored transparent substance that allows red and green light to pass through; what are the resulting light's properties/characteristics? It depends on the filters and prism, but probably spatially separated red and green light. Our eyes can perceive yellow light both as monochromatic light and as combinations of red and green light. What is the reason? Is there a similarity or relation between the two? Colour is something that is made up in your brain; it is not a physical property of the light. If the speed of a light wave decreases, doesn't the frequency decrease as well? By manipulating the speed/wavelength/frequency of (sun)light, will the light's color we sense change? No, the frequency doesn't decrease. The wavelength and speed will change. By manipulating the frequency of (sun)light, is there a way to produce a kind of resonance?
Or, is the reason we can see objects around us the result of this so-called resonance? This doesn't make any sense, and it's not really how we see. By manipulating the frequency of (sun)light, is there a way to create microwaves (monochromatic?) without reducing the intensity, in order to increase the efficiency of a photovoltaic cell? Nope.
How to keep track of line numbers in a file when it's being updated continuously We are pushing IIS log files via API to a DB for monitoring via Splunk, but we are sending duplicate data since our C# job runs every five minutes and it sends all the lines // Read the file and display it line by line. System.IO.StreamReader file = new System.IO.StreamReader(filepath); while ((line = file.ReadLine()) != null) { System.Console.WriteLine(line); counter++; } file.Close(); System.Console.WriteLine("There were {0} lines.", counter); // Suspend the screen. System.Console.ReadLine(); Can you explain your problem? Add an ID field to the data. I would recommend making each entry in the log XML format. The XML will not be well formed, meaning there is an array of elements at the root, but it makes it easy to read fields in the results. Using XML in log files is very common and makes them very easy to parse. See the following Code Project article: https://www.codeproject.com/Articles/28752/Use-XML-for-Log-Files Maybe you can try this: Before the run starts, store in a file, say file1, the number of lines in the log file. This is a one-time step. Before the while loop in the code, from file1 read the number of lines into a variable, say lastCount. Change the line System.Console.WriteLine(line) to the following: if (counter > lastCount) System.Console.WriteLine(line); After the while loop, store the value of the counter in file1.
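The counter-file approach in the last comment can be sketched in Python (file names are hypothetical, and the same idea ports directly to the C# job; it also assumes the log is append-only - a rotated or truncated log would need the stored count reset):

```python
import os

STATE_FILE = "lastcount.txt"  # hypothetical name for the counter file

def read_new_lines(log_path):
    """Yield only the lines appended since the previous run."""
    last_count = 0
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            last_count = int(f.read().strip() or 0)

    counter = 0
    with open(log_path) as log:
        for line in log:
            counter += 1
            if counter > last_count:      # skip lines already pushed
                yield line.rstrip("\n")

    # Remember how far we got, so the next run skips these lines.
    with open(STATE_FILE, "w") as f:
        f.write(str(counter))
```

On the first run everything is yielded; on each later run, only the lines added since the stored count, which removes the duplicates Splunk was receiving.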
error: expected expression when initialising a two-dimensional structure variable in C So I am trying to learn about structures in C and tried having a 2D character array in a structure. When I try to initialise it in main, I get an error saying "error: expected expression". struct students { char roll_no[9][2]; }st; int main() { st.roll_no={"21BCD7001","21BCD7002"}; //this is where I get the error } When I try to compile this, I get the error at the first '{' in main(). So how do I remove this error? @DavidRanieri I still get the same error on doing that as well char roll_no[9][2]; means "give me 9 arrays, each 2 characters long". But you actually want 2 arrays, each 10 characters long: 9 bytes for the data and 1 byte for the null terminator. That is: char roll_no [2][10]; Additionally, st.roll_no={"21BCD7001","21BCD7002"}; is assignment, not initialization. You cannot assign arrays in C; you'd have to use strcpy in this case. To actually initialize the struct, you will have to do this: struct students { char roll_no[2][10]; }; int main() { struct students st = {"21BCD7001","21BCD7002"}; } Or you can use the functionally equivalent but much prettier style: struct students st = { .roll_no = {"21BCD7001","21BCD7002"} }; ok ya thanks that worked. Also what if I have two 2D arrays like this in my structure? @Ppp Then preferably go with the prettier style I just posted: struct students st = { .first_array = { ... }, .second_array = { ... }, }; try removing st. (trust me I have 40 years of experience, I'm one of the core founders of Java)
How can I detect blur event in React Native tab component? I'm using rmc-tabs for the tab component in React Native. I'm using a video component and want to pause the video when I move to another tab, but I don't know how to do this. How can I get the blur event in rmc-tabs, or are there any other ways to handle the blur event in a React Native video or view? does rmc-tabs provide an onChangeTab method? if yes you can use this method The problem is I want to get the event in a nested component. There is NavigationEvents in react-navigation. I want something like this one. can you add that code? if you are using createTabNavigator or createBottomTabNavigator you can use NavigationEvents within the screens inside TabNavigator which allow you to listen to onWillFocus : before focusing a tab; onDidFocus : after focusing a tab; onWillBlur : before losing focus on a tab; onDidBlur : after losing focus on a tab; If you are using markup to define tabs, I believe you can do this too <Tabs screenOptions={{ tabBarActiveTintColor: Colors[colorScheme ?? "light"].tint, headerShown: false, }} initialRouteName="index" screenListeners={{ blur: (event) => { const target = event.target; if (!target) { console.warn("No target found in event", event); return; } const tabName = target.substring(0, target.lastIndexOf("-")); console.log("Tab name:", tabName); }, }} > <Tabs.Screen name="index" options={{ title: "Home", tabBarIcon: ({ color, focused }) => <TabBarIcon name={focused ? "home" : "home-outline"} color={color} />, }} /> <Tabs.Screen name="account" options={{ title: "Account", tabBarIcon: ({ color, focused }) => <TabBarIcon name={focused ? "person" : "person-outline"} color={color} />, }} /> </Tabs>
Let $f : R → R^{2}$ be $ C^{∞} $. Does there exist $t_{o} ∈ (0, 1)$ such that $f(1) − f(0)$ is a scalar multiple of $df/dt| t=t_{o}$ Let $f : R → R^{2} $ be $C^{∞} $(i.e., has derivatives of all orders). Then there exists $t_{o} ∈ (0, 1)$ such that $f(1) − f(0)$ is a scalar multiple of $df/dt| t=t_{o}$ (true/false) $?$ This statement is equivalent to - There exists a $t_{o}$ such that $f'(t_{o}) =( f(1)-f(0))/k$ This looks like mean value theorem but I don't know if it is applicable here. How can I proceed$?$ The statement is true if $f'(t)$ is nowhere zero: If $f(0) = f(1)$ then $f(0) - f(1)$ is a scalar multiple of $f'(t_0)$ for any $t_0$. Otherwise choose a non-zero vector $v$ which is orthogonal to $f(1) - f(0)$ and consider the real-valued function $$ g(t) = \langle f(t) - f(0), v \rangle \, . $$ Then $g(0) = g(1) = 0$ and we can apply the mean-value theorem (or Rolle's theorem). It follows that for some $t_0 \in (0, 1)$ $$ 0 = g'(t_0) = \langle f'(t_0), v \rangle \, . $$ So $v$ is orthogonal to both $ f(1) - f(0)$ and $f'(t_0)$, and in two dimensions this implies that $ f(1) - f(0)$ is a scalar multiple of $f'(t_0)$. Without the restriction $f'(t) \ne 0$ the statement is wrong. To simplify the notation, I'll define a counterexample on the interval $[-1, 1]$ instead of $[0, 1]$. $f(t) = (t^3, 1-t^2)$ satisfies $f(1) - f(-1) = (1, 0) - (-1, 0) =(2, 0)$, and that is not a scalar multiple of $$ f'(t) = (3t^2, -2t) $$ for any $t \in [-1, 1]$. Can I take the function $f(t) = (e^{t},t)$ for a contradiction$?$ since there is no such $t_{o}$ for which $(e-1,1)$ is scalar multiple of $ (e^{t},1)$. @Mathsaddict: That is not a counterexample, $e^t = e-1$ has a solution in $(0, 1)$. It is false, if the scalar multiple is $ \ne 0.$ Take $f(t)=( \cos( 2 \pi t), \sin (2 \pi t)).$ Then $f(1)=f(0),$ hence $f(1)-f(0)=0,$ but $f'(t) \ne 0$ for all $t.$ But $k=0$ works in this case.
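The counterexample in the answer above can also be checked numerically; a quick sketch (the sampling grid and the cross-product test for parallelism are my own framing):

```python
def f_prime(t):
    # derivative of f(t) = (t**3, 1 - t**2)
    return (3 * t * t, -2 * t)

# f(1) - f(-1) = (2, 0).  Some scalar multiple of f'(t) equals (2, 0)
# only when f'(t) is parallel to (2, 0) AND nonzero.  In 2D, parallel
# means the cross product 3t^2 * 0 - (-2t) * 2 = 4t vanishes, i.e. t = 0,
# but f'(0) = (0, 0), whose every scalar multiple is (0, 0) != (2, 0).
hits = []
for i in range(2001):            # sample t over [-1, 1]
    t = i / 1000 - 1
    dx, dy = f_prime(t)
    cross = dx * 0 - dy * 2      # cross product with (2, 0)
    if abs(cross) < 1e-12 and (dx, dy) != (0.0, 0.0):
        hits.append(t)

print(hits)   # no t in [-1, 1] qualifies
```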
How can I write to a specific line range with sed or awk? I simply want to write a command's output to a file's particular line range. For instance first command should be written into range 0-1000 then second command into 1001-2000 etc. I've successfully managed to write to a single line with sed -i command which doesn't help me at all. What I lastly tried in a for loop is; for cmd in "${commands[@]}"; do awk "NR >=$counter && NR <=$((counter + 1002)) {print $(eval $cmd)}" file > $logfile counter=$((counter + 1003)) done which throws argument is too long error. Any help would be appreciated. I can't imagine what you're trying to do. Please [edit] your question to include an example with concise, testable sample input and expected output using blocks of, say, 4 lines instead of 1000. You could use seq, for example, in place of $cmd if you need a tool that produces some number of lines of output to use in the example. Is the output of $(eval $cmd) expected to always be exactly 1000 lines? In fact the question is pretty clear, I want to write a bunch of lines which could be counted between 0-1000, into a file to a specific range. And for same file I want to write something else to the next range like 1001-2000 How about something like this? The awk script either truncates or pads to 1000 lines. $ cat foo.sh for cmd in 'seq 5' 'seq 3000'; do $cmd | awk 'NR > 1000 { exit } END { while (NR++ < 1000) print ""} 1' done >foo.txt $ bash foo.sh $ wc -l foo.txt 2000 foo.txt $ head foo.txt 1 2 3 4 5 $ tail foo.txt 991 992 993 994 995 996 997 998 999 1000 I really don't want to use anything other than bash @SercanOzdemir try this version which just uses awk. 
Instead of evaluating the same command and incrementing counter on thousands of awk iterations - use the following sed-optimized approach: cnt=1 for cmd in "${commands[@]}"; do cmd_out="$(eval $cmd)" sed -n "$cnt,$((cnt + 1002)) s/.*/$cmd_out/p" 10lines.txt >> $logfile cnt=$((cnt + 1003)) done But if you don't actually use the contents of the processed file - you can just iterate through inner ranges and print/append the same command output to a destination file. I didn't get this part: s/.*/$cmd_out/p" actually
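If the shell tooling feels awkward here, the underlying slot logic ("pad or truncate each command's output to exactly N lines, then splice it into the file at a fixed offset") is easy to express directly; a hedged Python sketch, with function and variable names of my own:

```python
def write_to_range(lines, output, start, size):
    """Overwrite lines[start:start+size] with `output`, truncating or
    padding with empty strings so the block is exactly `size` lines."""
    block = output[:size] + [""] * max(0, size - len(output))
    # Grow the file's line list if the target range is past the end.
    while len(lines) < start + size:
        lines.append("")
    lines[start:start + size] = block
    return lines
```

Reading the file with splitlines(), calling write_to_range(lines, cmd_output, 0, 1000), then write_to_range(lines, next_output, 1000, 1000), and joining with newlines reproduces the 0-1000 / 1001-2000 layout from the question.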
Equality of the totient function of two multiples of $x$ I'm looking to solve for $x \in \mathbb{N}$ in the equation $\phi(4 x) = \phi(5 x)$. I know the totient function $\phi(y)$ just gives the number of integers less than or equal to $y$ that are coprime to $y$. I tried approaching it like a normal equation and expanding out $\phi(4 x) - \phi(5 x) = 0$ into its prime number decomposition, but I didn't get anywhere. Any ideas? Graphing it, I noticed the equation seems to hold only when $x$ is even, but I can't figure out why it fails at certain even values (like $x=10$, for instance). I think it would be a good idea to consider cases. Does $5$ divide $x$ or not? Does $2$ divide $x$ or not? A full prime factorisation is messy, and contains too much information. Try working with these four cases, and you should move forward with the problem. How do you mean? I don't know $x$, so how could I check what factors divide into it? Split into cases. Suppose $5$ divides $x$. What does this tell you about $\varphi(5x)$ compared to $\varphi(x)$? Suppose instead $5$ does not divide $x$. Now what do you know about $\varphi(5x)$ compared to $\varphi(x)$? Oh, I see. Let me try that... Thanks, by the way! We use the fact that the totient function is multiplicative. Let: $$x=2^a5^by$$ where $\gcd(y,10)=1$. Then: $$\phi(4x)=\phi(5x) \implies \phi(2^{a+2}5^by)=\phi(2^a5^{b+1}y)$$ Using the fact that the totient function is multiplicative, we yield: $$\phi(2^{a+2})\phi(5^b)\phi(y)=\phi(2^a)\phi(5^{b+1})\phi(y)$$ Cancelling $\phi(y)$, we have: $$\phi(2^{a+2})\phi(5^b)=\phi(2^a)\phi(5^{b+1})$$ We know that $\phi(2^{a+2})=2^{a+1}$ and $\phi(5^{b+1})=4\cdot 5^b$. If $b>0$, then $\phi(5^b)=4 \cdot 5^{b-1}$. However, this would be a contradiction, as the LHS has one less factor of $5$ than required. Thus, $b=0$. Substituting: $$\phi(2^{a+2})=4\phi(2^a)$$ which holds for all $a \geqslant 1$. Thus, $x=2^ay$ where $a>0$. This means that $x$ can be any even number not divisible by $5$.
Note the following implications: \begin{align*} 5 \mid x &\implies \varphi(5x) = 5\varphi(x) \\ 5 \not\mid x &\implies \varphi(5x) = 4\varphi(x) \\ 2 \mid x &\implies \varphi(4x) = 4\varphi(x) \\ 2 \not\mid x &\implies \varphi(4x) = 2\varphi(x) \end{align*} As $\varphi(x) \neq 0$, the only possibility of equality is if $5 \not\mid x$, but $2 \mid x$. That is $x$ is even, but not a multiple of $10$.
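The resulting characterisation ($x$ even and not divisible by $5$) is easy to verify by brute force; a small sketch using a naive totient:

```python
from math import gcd

def phi(n):
    """Naive Euler totient -- fine for the small n checked here."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

solutions = [x for x in range(1, 200) if phi(4 * x) == phi(5 * x)]
expected  = [x for x in range(1, 200) if x % 2 == 0 and x % 5 != 0]
print(solutions == expected)
```

This also explains the observation in the question: $x = 10$ fails precisely because $5 \mid 10$.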
Bitmaps or Binary search tree Which of the following data structures is best suited for insertion, deletion, lookup, set intersection, and union? Optimize time complexity. Bitmaps Binary search tree Binary search is better up to some array length, because bitmaps need some decoding/encoding. Binary search time can change; bitmap time is constant. Foreword The question is ambiguous, so I will make some assumptions. First, we are trying to represent a set of numbers over the range [0 to M] where M is a reasonably small number like 100,000. The BST can simply contain the elements that are in the set. The Bitmap can just be M bits long, and a bit is set if the number at that position is in the set; if not, the number is not in the set. I will assume that a Bitmap has already been created of size M. In this answer I consider Self-Balancing Binary Search Trees and Bitmaps--for non-balancing BSTs you would have to deal with best case, average case, and worst cases for each operation. Operations and Complexity Insertion: Add n to the set. BSTs take O( log(n) ) time for insertion. Bitmaps require a simple (OR 1) operation on the target bit, doing a quick division and some shifts to put that 1 in the right place; this stays O( 1 ). Deletion: Remove n from the set. BSTs find the element, then delete it, then rotate the tree, so overall O( log(n) ). Bitmaps require a simple (AND 0) operation on the target bit, doing a quick division and some shifts to put that 0 in the right place; this stays O( 1 ). Look-Up: Is n in the set? BSTs take O( log(n) ) to look up n. Bitmaps return the value of (AND 1) on the specific target bit, doing division and shifts to put the 1 in the right place, overall O( 1 ). Set Intersection: Here are 2 sets; which elements are in both? BSTs: Do in-order traversals of both BSTs in a kind-of-MergeSort style; if two elements are the same, add that element into the return BST.
Adding elements to the return BST in sorted order is bad for balancing, so you could modify this by doing an in-order traversal over half of the BSTs, and a reverse-order traversal over the last halves of the BSTs, then add the intersecting elements from the forward-list and the reverse-list in alternating order. Overall this is something like O( n*log(n) ) since adding all the elements to the return BST takes longer than traversing. Bitmap: Walk over both bitmaps, storing the ( AND ) of bytes (or say 32 bits at a time) in the result location as a bitmap. O( M ) where M is the size of the range the bitmaps are initialized to. Set Union: Here are 2 sets; combine them. BSTs: create a new BST by adding all elements from BST1 and BST2 into it, doing a breadth-first traversal and alternating between BST1 and BST2. If the element is already there, then don't add it again. Overall this is something like O( n * log(n) ). Bitmap: Same as set intersection, but use the ( OR ) operation. O( M ) where M is the size of the range of the bitmaps. Ambiguities in the Question What are the items to be stored in these sets (numbers, objects)? Are the Binary Search Trees to be analyzed Self-Balancing? Is this a set chosen from a reasonably small finite range? (i.e. can this be done with a Bitmap?) It is unclear what Optimize Time Complexity means, although it seems to mean "What is the lowest known Big-O (upper bound) for the average case for this operation on this data structure?" Conclusion Bitmaps are better than self-balancing BSTs for just about all of these operations. Downsides of inclusion/exclusion Bitmaps: You only store inclusion/exclusion, and don't store any data about that element. Only works on integers, or objects hashed to integers in a 1-to-1 fashion (if the hash function is reversible you can get the objects back out). You must have a fixed range of elements, like integers 0 to 100,000.
The fixed range must be reasonably small, like less than 100 Million. Union and Intersection depend on Range Size and not on number of elements represented.
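The bitmap operations described above are short enough to sketch directly; a toy Python version (a production bitmap would use a fixed-width array of machine words rather than one arbitrary-precision integer):

```python
class Bitmap:
    """Set of integers in [0, m) backed by a single Python int bitmask."""

    def __init__(self, m):
        self.m = m
        self.bits = 0

    def insert(self, n):          # O(1): OR the target bit with 1
        self.bits |= 1 << n

    def delete(self, n):          # O(1): AND the target bit with 0
        self.bits &= ~(1 << n)

    def lookup(self, n):          # O(1): read the target bit
        return (self.bits >> n) & 1 == 1

    def intersect(self, other):   # O(M): bitwise AND of the two maps
        out = Bitmap(self.m)
        out.bits = self.bits & other.bits
        return out

    def union(self, other):       # O(M): bitwise OR of the two maps
        out = Bitmap(self.m)
        out.bits = self.bits | other.bits
        return out
```

Note how intersection and union cost is tied to the map width M, not to the number of elements stored, exactly as the complexity analysis states.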
1: This constraint system keeps rejecting valid configurations. Let me trace through why $(p \land q) \to r$ combined with $p$, $q$, and $\neg r$ creates conflicts. 2: Start by marking each constraint true and the conclusion false. So we have T: $(p \land q) \to r$, T: $p$, T: $q$, and F: $r$. 1: The implication rule splits this. Since we need the implication true, either the antecedent is false or the consequent is true. 2: Right, so we get two branches: F: $(p \land q)$ or T: $r$. But we already have F: $r$, so the right branch closes immediately. 1: That leaves us with F: $(p \land q)$ on the left branch. A conjunction is false when at least one component is false. 2: So we branch again into F: $p$ or F: $q$. But we have T: $p$ and T: $q$ from our original constraints. 1: Both branches close because we have $p$ marked both true and false, same with $q$. Every path leads to contradiction. 2: That's why the system rejects this configuration. The tableau proves the constraint set is unsatisfiable - there's no assignment making all conditions true simultaneously. 1: What's elegant is how the method systematically explores every logical possibility. Each connective breaks down according to its truth conditions. 2: And when we hit contradictions on every branch, we know definitively that no solution exists. The branching mirrors the logical structure perfectly. 1: Exactly. If any branch stayed open, we could read off a satisfying assignment directly from that path. The closed branches eliminate impossible combinations. 2: This systematic decomposition is what makes tableaux so reliable for checking logical consistency. No guesswork -
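The unsatisfiability the two speakers derive with the tableau can be confirmed independently by exhaustive truth-table search; a small sketch:

```python
from itertools import product

# Brute-force check that the constraint set
#   {(p and q) -> r,  p,  q,  not r}
# has no satisfying assignment -- mirroring what the closed tableau shows.
def satisfiable():
    for p, q, r in product([False, True], repeat=3):
        implication = (not (p and q)) or r   # (p ∧ q) → r
        if implication and p and q and (not r):
            return (p, q, r)
    return None

print(satisfiable())   # every branch closes: no assignment found
```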
sci-datasets/scilogues
Do you set yourself daily goals and stop once you've reached them? I don't follow any kind of time or word limit. I write when I have time and stop when the time is over (e.g. time to eat, to work, to sleep). But now I'm thinking of setting myself some daily goals to see if this improves the quality of my writing (e.g. 2000 words per day, 5 hours a day), as well as my mental health and sanity. Do you set yourself daily goals? Do you stop once you've reached them? What are the pros and cons you've noticed? Yes to the first! Emphatic no to the second. If I'm on a roll, I'll keep writing until I run out of words. In order to start writing, I have to make a plan and stick to it, otherwise I'll get distracted and forget. I also have to purge my mind of the idea of "writing mood" because if I wait until I'm in the mood, I end up writing nothing. Setting aside time to write is a good disciplinary practice. Setting word goals is also helpful, if only so that you can fail to reach them. Failure allows you to better gauge how much you can really write in a set time period. Last month I made myself a spreadsheet to track my daily words, but life interfered and I got very little done. This month I am more optimistic. Even though concrete goals have a lot of advantages, not everyone can work with them, just as not everyone can work from an outline, and not everyone can write a snappy, plot-driven story. It's hard work to find the routine that works for you, but it's incredibly rewarding once you start reaping the results. So when I'm in writing mode, I set a goal of about 1500 words a day, broken into three sessions of 30-45 minutes with a goal of 500 words per session. The goal is fluid, since I will write to scene completion, not actual word count, so if the scene takes me more than 500 words I'll write more, and if it takes fewer, less. Typically, I'll write around breakfast, lunch, and dinner. Setting goals can be good. Especially if it helps motivate you.
But stopping when you've reached your goals is a classic mistake which is described in every game theory or economics book. This is best illustrated with taxi drivers. Taxi drivers tend to set themselves a daily number of passengers to transport. As a result, there are more taxi drivers working on days when the demand for taxi drivers is low, and fewer taxi drivers working on days when the demand for taxi drivers is high. This goes for writing too. There are good days when you're productive, have good ideas, and take pleasure in writing; and bad days when you don't. If you set yourself a daily limit and stop when you reach it, then you'll be working long hours on the bad days and short hours on the good days, which is the exact opposite of what you want. In other words: If you're on a roll, then keep going! Make the most of that roll. Don't interrupt it just because you've reached your daily quota. If you've already spent a long time writing today but still haven't met your daily quota, because you've been less productive than usual, or have enjoyed writing less than usual, then don't punish yourself by making this bad day last longer just to reach your daily quota.
common-pile/stackexchange_filtered
Pip cannot install anything on ubuntu server I had deleted an existing virtual environment. I created a new one and activated it. Now I am trying to install site packages using pip install -r requirements.txt But I keep getting the error Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement BeautifulSoup==3.2.1 (from -r requirements.txt (line 1)) Now I know that the packages are really old but this is running on python 2.7.6. I am also not able to install anything through pip. I have tried pip install numpy But it shows the same errors. As per the similar questions answered before the suggestion is to use https://pypi.python.org which I have already done but still facing these errors. Would love to hear your suggestions. Might be a problem with having an old version of pip. Try pip install --upgrade pip and then try installing the requirements again. pip tries to create lockfile in cache directory Try running pip install --upgrade pip --no-cache-dir Hi, welcome to SO, you could improve your answer significantly, read this: https://stackoverflow.com/help/how-to-answer
common-pile/stackexchange_filtered
CUDA C++ shared memory and if-condition I have a question I couldn't find an answer to myself, and I was hoping some of you could offer me some insight regarding a possible solution. Within a kernel call, I would like to insert an if-condition regarding access to shared memory. __global__ void GridFillGPU (int * gridGLOB, int n) { __shared__ int grid[SIZE]; // ... initialized to zero int tid = threadIdx.x; if (tid < n) { for ( int k = 0; k < SIZE; k++) { if (grid[k] == 0) { grid[k] = tid+1; break; } } } //... here write grid to global memory gridGLOB } The idea is that, if the element grid[k] has already been written by one thread (with the index tid), it should not be written by another one. My question is: can this even be done in parallel? Since all parallel threads perform the same for-loop, how can I be sure that the if-condition is evaluated correctly? I am guessing this will lead to certain race conditions. I am quite new to CUDA, so I hope this question is not stupid. I know that grid needs to be in shared memory, and that one should avoid if-statements, but I find no other way around at the moment. I am thankful for any help. EDIT: here is the explicit version, which explains why the array is called grid __global__ void GridFillGPU (int * pos, int * gridGLOB, int n) { __shared__ int grid[SIZE*7]; // ... initialized to zero int tid = threadIdx.x; if (tid < n) { int jmin = pos[tid] - 3; int jmax = pos[tid] + 3; for ( int j = jmin; j <= jmax; j++ ) { for ( int k = 0; k < SIZE; k++) { if (grid[(j-jmin)*SIZE + k] == 0) { grid[(j-jmin)*SIZE + k] = tid+1; break; } } } } //... here write grid to global memory gridGLOB } I am not sure I understand the code. grid is never initialised anywhere I can see, so I don't see how that could work. But leaving that aside, yes, as written, you have a memory race. the initialization of grid is in the second line.
I am new to CUDA and thought that's how you initialize an array which all threads can access. Initialisation means "give an initial value". You test for grid[k]==0, but before that, grid is never given a value. As far as I know shared memory is always initialized as 0. Maybe I am wrong, in that case you would need to set the values to 0 first of course. EDIT: you are right, it needs to be set to zero first, I will correct. That is my point. Shared memory isn't initialised (in C++ no local-scope arrays are initialised to anything by default). OK, so now you have illegal initialisation for the shared memory. That isn't valid syntax in CUDA. I realized that while trying to run it. It is the first time I use shared memory, so I will need to see how to initialize it. Edited. For the problem that you described in your question, the answer is to use atomicCAS(&grid[(j-jmin)*SIZE + k], 0, tid). However, I doubt that this is the answer to your real problem. (Aside from the obvious problem that you should not compare to 0 because tid might be 0.) I forgot the +1, my bad. I will read into the atomicCAS implementation and post an answer if I find one. As @havogt said, you should be able to make something work with atomicCAS. This question may be of interest. You should model your problem in a way you don't need to worry about "has it been written already", also because CUDA offers no guarantee in the order in which threads will be executed, so the order might not be the way you expect. There are some minor ordering guarantees CUDA gives you within a warp, but that is not the case here. There are sync barriers and such that you can use, but I don't think that is your case. If you are processing a grid you should model it in a way that each thread has its own region of memory to work on, and that should not overlap with other threads' regions (at least in writing; in reading you can go outside boundaries).
Also I would not worry about shared memory; make the algorithm work first, then think about optimizations like loading a tile into shared memory using the warp. In that case, if you want to split your domain in a grid you should set up the kernel so that you have as many threads as your grid "cells", or pixels if it is an image. Then you use the thread and block coordinates that CUDA provides to compute where you should read and write in memory. There is a really good course on udacity.com about CUDA, you might want to have a look at that. https://www.udacity.com/courses/cs344 There is also another one on coursera.com but I don't know if it is open right now. Anyway, dividing the domain in a grid is a really common and solved problem, you can find a lot of material on that. I agree on the shared memory aspect, which might not be needed at this point. But the overlap of thread regions is hard to avoid for my special problem. I will look into the provided material. Thanks! Can you explain a little bit more about your problem? Another thing you can do is model the problem as "odd/even" tiles. It's just an idea: you can shut down tiles in order to avoid the overlap, kinda like a checkerboard, and run the kernel twice, first on the "white" tiles, then on the black ones. But before going down that road I would make 100% sure the algorithm is not parallelizable the way you want. You said it is modeled as a grid, right? Is it a scatter or gather kind of algorithm? PS: you also have atomic operations that can help you. The idea is to flood the grid (representing a certain position in space) with particle indexes tid. This position can be flooded with a maximum of SIZE indexes, and once a grid element has been written, another tid cannot be at position k, but only in a "vacant" spot, that is the next one, k+1. Now I understand, so basically you wanted to check the index at the grid position using a control value, maybe -1?
If that's the case you can pre-fill the shared memory in parallel, by letting each thread write -1 to its memory locations, and then use a synchronization barrier. This is a common technique to load memory into shared memory. The coursera course I mentioned does tiled matrix multiplication, which covers techniques that might be useful for you. I read a bit about synchronization barriers, especially the __syncthreads() option. I am also aware of atomicAdd(), which brought me to use shared memory in the first place. But in this case, my guess is that it is not the right thing to do, because it is the if-condition that needs to be synchronized. I will read into it, thx! What I mean is: the if-condition (which is checked in a thread) needs to be synchronized with the write (which occurs in another thread where the if-condition has already returned true). atomicCAS was the solution to my problem. Thx to everyone!
common-pile/stackexchange_filtered
imshow() pixels are not accurately aligned with grid lines I would like to draw line segments and polygons on top of an image displayed with imshow(). My problem is that the coordinates are not precisely aligned with the pixels. I prefer maximum precision. I created a minimal reproducible example with a checker pattern to make the problem easier to see. import numpy as np from matplotlib import pyplot as plt n = 100 checkers = np.zeros((n, n)) checkers[1::2, ::2] = 0.8 checkers[::2, 1::2] = 0.8 fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.imshow(checkers * 255, interpolation='none', cmap='gray', vmin=0, vmax=255) ax.grid(which='both', linewidth=0.005, color='r') ax.tick_params(which='both', width=0.1) plt.setp(ax.spines.values(), linewidth=0.1) ax.set_xticks(np.arange(0, n+1, 1), fontsize=1) ax.set_yticks(np.arange(0, n+1, 1), fontsize=1) ax.tick_params(axis='both', labelsize=1) plt.savefig("grid_tests.pdf", bbox_inches='tight') As you can see, the grid lines are losing accuracy on some regions of the image. I know that I could probably use pcolormesh instead, but this doesn't work with 3D (RGB) arrays. Is there a way to make the grid and coordinates more accurate? Where do you see that they are off? If I zoom in, the very thin red lines always line up with the center of the thicker tick marks. My guess is that, at the zoom level you have chosen there, the tick marks are an even number of pixels wide, in which case it is, of course, impossible to have a "center". @TimRoberts I should have been more clear. The tick marks are not an issue. The issue is that the coordinates do not correspond to the centers of the squares (image pixels). This is highlighted by the grid lines, as they do not pass through the centers of the squares. I see now. If I use plt.show() to plot to the screen and change the linewidth to 0.5 (so they can be seen), they are perfectly centered. Very interesting. By default the grid starts at -0.5.
You can fix that by setting the extent argument in imshow to start the grid from 0: ax.imshow(checkers * 255, interpolation='none', cmap='gray', vmin=0, vmax=255, extent=[0, n, 0, n]) Thanks for your effort, but the red squares are still a tiny bit smaller than the black and white ones. And it's been a year and I've moved on.
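The half-pixel offset behind the extent fix can be seen with plain arithmetic (a sketch of how imshow maps pixel index k into data coordinates; no matplotlib needed):

```python
n = 100

def pixel_center(k, extent_min, extent_max, n):
    # imshow divides [extent_min, extent_max] into n equal cells;
    # pixel k occupies cell k, and its center is the cell midpoint.
    cell = (extent_max - extent_min) / n
    return extent_min + (k + 0.5) * cell

# Default extent is [-0.5, n - 0.5]: centers land on integers,
# so integer grid lines cut straight through pixel centers.
print(pixel_center(0, -0.5, n - 0.5, n))  # 0.0
# With extent=[0, n]: centers land on half-integers,
# so integer grid lines coincide with pixel boundaries instead.
print(pixel_center(0, 0, n, n))           # 0.5
```

Whether grid lines should hit pixel centers or pixel boundaries determines which extent to choose; the residual mismatch reported above comes from rasterization when saving, not from the coordinate mapping.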
common-pile/stackexchange_filtered
Is this the correct way to control the speed of a 4-wire CPU fan? The circuit is used outside of a PC. The fan hums (but still spins) on the MED and LO settings. Does this mean the circuit is the wrong way to control the fan? If this circuit is the wrong way to control the fan, please edit the schematic to make it right. Use a variable 1kHz PWM input to control speed. If the input is pulled up to V+ then probably an open collector, or a push-pull; lots of 555 solutions for PWM, also 4000-series Schmitt trigger relaxation oscillators with offset bias to control PWM duty cycle. Then I would guess high input is 12V from a pull-up R and the % low reduces speed. Does grounding the PWM input stop the fan? Use a resistor to measure V/R, then use an open collector or CE switch to vary duty cycle at 1kHz. Report V and I in your question with RPM response to PWM input. On/off: there are a hundred ways to do this, e.g. with 4000-series logic on 12V using a 4093; this is one way. Variable speed and sleep input switch https://electronics.stackexchange.com/questions/330184/logic-shutdown-for-cmos-oscillator?r=SearchResults&s=2%7C21.7580. Other ways use an op amp for a positive-feedback oscillator. Let us continue this discussion in chat. It's an Intel fan intended for use with an Intel CPU. Intel have published a specification for 4-wire PWM-controlled fans. There were two basic types of PWM made. One type is an always-off type that has a pull-down resistor that keeps it off when the main voltage is applied to the power wire. The PWM logic high of 5V controls the on state of the fan. The other type is an always-on, or free-running, type which has a voltage divider network internally attached to the pin, and a logic low or grounding this PWM pin turns the fan off. The PWM logic low controls the off-time state of the fan. The PWM fan can be controlled several ways.
The most inefficient is the current-limiting method the OP listed above, because the fan coils will oscillate from low-frequency harmonics caused by the reduction of current flow in the fan coils. The basic PWM control is what I call the self-idle circuit, which uses its own RPM signal to introduce the logic low for speed control: the fan's RPM sensor receives about 5V to operate the sensor. When the sensor conducts and pulls down the voltage, the PWM fan interprets this as a logic low and temporarily turns off the fan until the voltage is above the normal TTL logic-high threshold. The way a microcomputer controls PWM is by adjusting an oscillator signal to control this off state. The oscillator-based solutions are just simulating what these microcontrollers do, but with less sophisticated methods, using 555-based timers and positive-feedback oscillators. It depends on the fan, because if it is one that uses a transistor for the PWM, it would bias dynamically like that, but if it is a MOSFET in the PWM circuit of the fan, it will be a switched on-off operation, and varying a voltage on the PWM pin doesn't do anything until it switches past the threshold voltage.
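For the 555-based PWM route mentioned above, the standard astable formulas give frequency and duty cycle from R1, R2, and C (a sketch; the component values below are hypothetical ones chosen to land near the suggested 1 kHz, and the usual diode-across-R2 trick is needed to push duty below 50%):

```python
def astable_555(r1, r2, c):
    """Classic NE555 astable approximation:
    f = 1.44 / ((R1 + 2*R2) * C), high-time duty = (R1 + R2) / (R1 + 2*R2)."""
    freq = 1.44 / ((r1 + 2 * r2) * c)
    duty = (r1 + r2) / (r1 + 2 * r2)
    return freq, duty

# Hypothetical values: R1 = 1k, R2 = 72k, C = 10 nF.
f, d = astable_555(r1=1_000, r2=72_000, c=10e-9)
print(round(f), round(d, 3))  # roughly 993 Hz at about 50.3% duty
```

This is only the oscillator side; the fan's PWM pin still needs the open-collector (or equivalent) drive discussed above.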
common-pile/stackexchange_filtered
How to fix (MAKEPKG) installation problem? I want to install yay for downloading packages from the Arch Linux AUR, and I don't know what to do. Thanks for your help. When I use git clone and then the makepkg -si command, it gives me this error: git clone https://aur.archlinux.org/yay.git cd yay makepkg -si Error text: ==> ERROR: Cannot find the fakeroot binary. ==> ERROR: Cannot find the strip binary required for object file stripping. You're missing the libraries to compile the package. To install them: sudo pacman -S binutils make gcc pkg-config fakeroot or, to install basic tools for compiling code: sudo pacman -S base-devel Also, instead of installing and compiling yay (which also requires you to install all the Go libraries), why not install the precompiled yay-bin? It's the same package. git clone https://aur.archlinux.org/yay-bin.git cd yay-bin makepkg -si Try with that: sudo pacman -S binutils make gcc pkg-config fakeroot then try makepkg again.
common-pile/stackexchange_filtered
Different uv or normal value for the same vertex in indexed geometry Look at the following indexed geometry: there are 2 faces and only 4 vertices, so the buffers in three.js look like this (in pseudocode, but the idea is clear): position = [A, B, C, D] index = [0, 1, 2, 2, 1, 3] //[A, B, C, C, B, D] Vertices B and C are shared by two faces but they are not repeated in the position buffer. Is there any way to set a vertex uv (or normal) based on the face it belongs to? I mean something like this: position = [A, B, C, D] index = [0, 1, 2, 2, 1, 3] uv = [uv1, uv2, uv3, uv4, uv5, uv6] normal= [n1, n2, n3, n4, n5, n6] so that vertex C, for example, has different uv (uv3, uv4) and normal (n3, n4) values depending on the face. I found a related question on the Three.js forum, see here This is not possible. The idea behind indexed geometry is so that the same vertex can be reused across faces. What you’re describing is two different vertices that happen to have equal positions. But you can’t selectively choose which attributes are indexed and which ones are unique. The only way to get different UV values per vertex is to make the geometry non-indexed. https://threejs.org/docs/#api/en/core/BufferGeometry.toNonIndexed Thanks! I get it, but I'm curious... Could it be an easy feature for the three.js team to implement, or is there some kind of limitation in the way WebGL works? They are two rendering alternatives provided by the WebGL API. Non-indexed geometries use gl.drawArrays(), while indexed geometries use gl.drawElements(). There's nothing the Three.js team can do to create a hybrid method combining the two.
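What toNonIndexed() effectively does can be sketched in plain JavaScript: walk the index buffer and duplicate every referenced vertex, after which each face owns its vertices and per-face uv/normal values become possible (an illustration of the idea, not three.js's internal code):

```javascript
// Expand an indexed attribute (itemSize floats per vertex) into a
// non-indexed one: one entry per index, duplicating shared vertices.
function toNonIndexed(attr, index, itemSize) {
  const out = [];
  for (const i of index) {
    for (let k = 0; k < itemSize; k++) out.push(attr[i * itemSize + k]);
  }
  return out;
}

// The two triangles from the question, A,B,C and C,B,D, sharing edge B-C
// (positions given here as x,y pairs for brevity).
const position = [0, 0,  1, 0,  0, 1,  1, 1];   // A, B, C, D
const index = [0, 1, 2,  2, 1, 3];

const expanded = toNonIndexed(position, index, 2);
console.log(expanded.length / 2); // 6 vertices: B and C now appear twice,
                                  // so each copy can carry its own uv/normal
```

The price is the duplicated storage, which is exactly the trade-off indexing exists to avoid.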
common-pile/stackexchange_filtered
Join two Json arrays to one with key and value postgresql I have two jsonb columns (keys, values). E.g.: keys column value = ["key1","key2","key3","key4"] values column = ["val1","val2","val3","val4"] I want to write a select query to get the output as below based on the array index. {"key1":"val1","key2":"val2","key3":"val3","key4":"val4"} The problem is the array size won't be fixed. Each row contains a different size, but the keys and values columns will always be the same size. I got the solution here. From Stack Exchange. https://dba.stackexchange.com/questions/291088/join-two-json-arrays-to-one-with-key-and-value/291089#291089 step-by-step demo:db<>fiddle SELECT json_object_agg( keys ->> gs, -- 3 values -> gs -- 2 ) FROM mytable, generate_series(0, json_array_length(keys) - 1) as gs -- 1 Create the indexes for accessing the array elements. For that: Count elements of the keys (json_array_length()) and generate an index series. Now you can use the created indexes to access both the keys and the values. Create a new JSON object by aggregating the extracted key/value pairs. What you're looking for is the json_array_elements function to explode the JSONs and json_object_agg to recompact them. I replicated your case with create table jsonb_test (a jsonb, b jsonb); insert into jsonb_test values ('["key1","key2","key3","key4"]','["val1","val2","val3","val4"]'); And the query solving the problem is with row_tbl as ( select replace(cast(json_array_elements(a::json) as varchar),'"','') k, replace(cast(json_array_elements(b::json) as varchar),'"','') v from jsonb_test) select json_object_agg(k, v) as complete_json from row_tbl ;
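The transformation both answers implement is just an index-wise zip of the two arrays; a sketch of the same result in Python makes the target shape explicit:

```python
import json

keys = ["key1", "key2", "key3", "key4"]
values = ["val1", "val2", "val3", "val4"]

# Pair the arrays by position -- what generate_series plus
# json_object_agg does on the SQL side.
result = dict(zip(keys, values))
print(json.dumps(result))
# {"key1": "val1", "key2": "val2", "key3": "val3", "key4": "val4"}
```

Because the pairing is purely positional, arrays of any (equal) length work the same way, matching the "size won't be fixed" requirement.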
common-pile/stackexchange_filtered
Enemies are penetrating each other when following the player As you can see in the image, my enemies (when following my player) penetrate each other. How can I avoid it? I am using a NavMeshAgent to follow the player. void Update () { currentNavMeshAgent.destination = player.transform.position; } I have added a Rigidbody and a Collider to my enemy object but they are still penetrating each other. To avoid penetration, increase each NavMeshAgent's radius so it's equal to or larger than your agent's collider. There are several solutions to this, but, based on the image you posted, I assume that your "enemies" are zombies, so they don't have or need to "think" by themselves, right? In this particular case I'd bet on a swarm approach: instead of managing each of the zeds one by one, you can have a logical 2D matrix and check where your monsters are before moving, using A* or any other pathfinding approach you want. By doing this, you would be able to skip AABB collision checks between your enemy entities, boosting your performance and knowing that your main enemy puppet master won't let them collide. NavMesh is based on A* too, I guess.
common-pile/stackexchange_filtered
jQuery Image Swap Not Showing Immediately I'm using the jQuery code below to replace part of the image src. Basically it converts example.com/200x200/sample.jpg into example.com/500x500/sample.jpg. It works fine; the only problem is it renders the old image first before showing the new one. Is it possible to load the swapped image first to improve user experience? $(document).ready(function() { $(".gallery img").each(function() { $(this).attr("src", function(a, b) { return b.replace("200x200", "500x500") }) }) }); JSFiddle Demo (Click "Run" multiple times) Possible duplicate of Programmatically change the src of an img tag Your fiddle works fine for me. Look, the OP is able to change the image URL; the problem is that the page is loading the original image (200x200) and then loading the 500x500 image. @dippas Yes, the code does work, but if you press 'Run' several times you will notice that the old image shows first before the new one. I was wondering if it's possible to load the new image first for better user experience. @Neverever Yes, exactly. I was aware of that; probably because my internet speed is fast I don't see the old image loading first, not even when hitting "Run" several times in a row @VianneYuZhèng Check out my answer and see if it solves your problem Put the image in a fixed-position div that has overflow hidden and a height and width of 0. This will cause the image to load but not display. Here is a fiddle showing the basic idea: https://jsfiddle.net/0tm3kb6e/. This image displays after 10 seconds. Use Chrome to throttle the network and you will see that it is loaded by the time it is displayed. Here is the code I used.
You just need the HTML and CSS html <div id="image-hider"> <img src="https://www.google.com/images/srpr/logo11w.png"/> </div> css #image-hider { height:0; width: 0; overflow: hidden; position: fixed; } javascript $(document).ready(function() { setTimeout(function() { $('#image-hider').css('height','500px'); $('#image-hider').css('width','500px'); }, 10000); }); Try using an overlay div <div class="gallery"> <img src="//lorempixel.com/200/200" /> <div class="overlaydiv"> </div> </div> Hide it after a second (giving some time for the image to load) $(".gallery img").each(function () { $(this).attr("src",$(this).attr("src").replace("200/200", "400/400")); setTimeout(function(){ $(".overlaydiv").hide(); },1000); }); Check out this fiddle
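A third option alongside the hidden-div and overlay approaches is to preload the large URL with an Image object and swap src only once it has finished loading, so the old image never visibly repaints with a half-loaded replacement (a sketch; the DOM part is browser-only, but the URL rewrite itself is plain string work):

```javascript
// Pure helper: rewrite the thumbnail URL to the large one.
function largeSrc(src) {
  return src.replace("200x200", "500x500");
}

// Browser-only part (sketch): fetch the big image into the cache first,
// then swap the visible src in one step once it is ready.
function swapWhenReady(img) {
  const big = new Image();
  big.onload = function () { img.src = big.src; };
  big.src = largeSrc(img.src);
}

console.log(largeSrc("http://example.com/200x200/sample.jpg"));
// http://example.com/500x500/sample.jpg
```

The trade-off versus the hidden-div trick is that nothing needs to be added to the markup; each image swaps independently as its large version arrives.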
common-pile/stackexchange_filtered
Reason for using `Any` over `AnyAsync` in async code In the Microsoft tutorial for using ASP.NET Core with EF Core here, there is this sample code: [HttpPut("{id}")] public async Task<IActionResult> PutTodoItem(long id, TodoItem todoItem) { if (id != todoItem.Id) { return BadRequest(); } _context.Entry(todoItem).State = EntityState.Modified; try { await _context.SaveChangesAsync(); } catch (DbUpdateConcurrencyException) { if (!TodoItemExists(id)) { return NotFound(); } else { throw; } } return NoContent(); } ... private bool TodoItemExists(long id) { return _context.TodoItems.Any(e => e.Id == id); } Is there any reason we don't use the following function instead, given we are calling it from another async function? private async Task<bool> TodoItemExistsAsync(long id) { return await _context.TodoItems.AnyAsync(e => e.Id == id); } Probably because before C# 6 it was forbidden to use await in a catch block. ^^ Also, those examples do not always follow best practices for the parts that are not directly what the example is meant to show. Maybe out of context, but it's better to return the task directly instead of awaiting it. private Task<bool> TodoItemExistsAsync(long id) { return _context.TodoItems.AnyAsync(e => e.Id == id); }
common-pile/stackexchange_filtered
Trouble with positioning elements In my page I'm having trouble getting the right position for my element outside of the header div. I want it to automatically position after/below the div, not inside it. I guess that the "position: fixed" is ruining the document flow; is there any way around that so I don't need to use it? The reason I use it is because I want a header background image. I feel really stupid not solving this on my own. Can I get your help? HTML and CSS CODE: .header { left: 0; top: 0; height: 50%; position: fixed; right: 0; z-index: -1; } .pic1 { position: absolute; width: 100%; height: 100%; z-index: -1; } .menu { float: right; margin-right: 30px; margin-top: 10px; } .font { color: gray; text-decoration: none; font-size: 20px; } h1 { color: yellow; } <!DOCTYPE html> <html> <head> <title>Gesällprov</title> <meta charset="UTF-8"> <link rel="stylesheet" type="text/css" href="style.css"> </head> <body> <div class="header"> <div class="menu"> <a class="font" style="margin-right:30px" href="">HOME</a> <a class="font" style="margin-right:30px" href="">SHOP</a> <a class="font" style="margin-right:30px" href="">ABOUT US</a> <a class="font" style="margin-right:30px" href="">CONTACT</a> </div> <img class="pic1" src="pic1.jpg" alt="fake.jpg"> </div> <h1>test</h1> </body> </html> I think a little explanation is helpful: Putting an element after/below another element requires a defined document flow. See Visual Formatting Model. Positioning your <header> as fixed removes it from that normal document flow. See position @ MDN. Effectively, there is no "after" the header because it doesn't take up any space in the document flow. The element .header has been removed from the natural document flow, so the space it had occupied before is no longer occupied - consider this element as no longer part of or interacting with sibling elements. This is why the h1 element appears to be "inside" of this element, it is actually below it.
To resolve this common issue, you would need to account for the space (height) this absolutely positioned element would've taken in the DOM had it remained part of the normal document flow. In this particular instance, this value is dynamic; the height of the element will vary, you will need to use relative length values as well (like percentage values) to offset this space. Consider declaring margin or padding properties on the appropriate element. In this case, the better option would probably be declaring a padding-top property on the body element, e.g: body { padding-top: 25%; /* adjust accordingly to suit requirements */ } Note: if necessary, experiment with adjusting this property value accordingly for various resolutions using @media queries Code Snippet Demonstration: /* Additional */ body { padding-top: 25%; /* adjust accordingly to suit requirements */ } .header { left: 0; top: 0; height: 50%; position: fixed; right: 0; z-index: -1; } .pic1 { position: absolute; width: 100%; height: 100%; z-index: -1; } .menu { float: right; margin-right: 30px; margin-top: 10px; } .font { color: gray; text-decoration: none; font-size: 20px; } h1 { color: black; } <!DOCTYPE html> <html> <head> <title>Gesällprov</title> <meta charset="UTF-8"> <link rel="stylesheet" type="text/css" href="style.css"> </head> <body> <div class="header"> <div class="menu"> <a class="font" style="margin-right:30px" href="">HOME</a> <a class="font" style="margin-right:30px" href="">SHOP</a> <a class="font" style="margin-right:30px" href="">ABOUT US</a> <a class="font" style="margin-right:30px" href="">CONTACT</a> </div> <img class="pic1" src="https://placehold.it/800x225" alt="fake.jpg"> </div> <h1>test</h1> </body> </html> Just using margin-top HTML: <div class="content"> <h1>test</h1> </div> CSS: .content{ margin-top:32px; }
common-pile/stackexchange_filtered
CGImageRef uses a lot of memory even after release I'm using CGImageRef and noticed that it uses a lot of memory that doesn't get deallocated. So I tried experimenting with the following code - (void)photofromAsset:(ALAsset *)asset completion:(void(^)(NSError *error))completionHandler { ALAssetRepresentation *representation = asset.defaultRepresentation; UIImageOrientation orientation = (UIImageOrientation)representation.orientation; CGImageRef fullResolutionImage = representation.fullResolutionImage; //UIImage *fullImage = [UIImage imageWithCGImage:fullResolutionImage scale:0.0 orientation:orientation]; //[self startUpload:fullImage completion:completionHandler]; } I put some breakpoints and put the first three lines of code in an @autoreleasepool block. Then I tried removing the @autoreleasepool and called CGImageRelease(fullResolutionImage); When I get to UIImageOrientation my app is using less than 30MB, but as soon as I get the CGImageRef it uses more than 80MB. Both memory-freeing methods only get me to 50MB, so there's an extra 20MB somewhere. Those extra 20MB are freed only when the whole method gets completed. Where are those extra 20MB from? How can I free them before calling startUpload: ? Thank you "Those extra 20MB are freed only when the whole method gets completed" But if they are then freed, what do you care?
"I'm calling photofromAsset: many times" So what if you wrap that call in an autorelease pool and drain it? You don't need to release CGImageRef manually; as an old CF opaque type it follows the ownership rules, and you need to care about memory management only if there is Create or Copy in the function/method name. About your problem: it is difficult to understand what -startUpload does and where you call -photoFromAsset. My suggestion is: if -photoFromAsset is called inside a loop, wrap what's inside the loop in an @autoreleasepool. Second, it is probably your startUpload (I imagine it is uploading something to a server) that continues to keep a strong reference to the image itself by decompressing it and sending it to the server. To avoid that, and also to avoid opening a lot of connections, my suggestion is to create a sort of queue for network operations and send just 2 images at a time. "My suggestion is: if -photoFromAsset is called inside a loop, wrap what's inside the loop in an @autoreleasepool" Yes, that was my suggestion too. Then why is there a memory improvement when calling it? The problem is that the upload I'm doing needs to be able to continue working in the background. That's why I'm starting all the uploads as soon as possible, so the OS knows about them and the user can exit the app without stopping the file uploading. -startUpload: sets up the request for that photo, starts the upload and returns only once the photo has been uploaded. -photoFromAsset: is already in an @autoreleasepool @matt seeing that we agree is a great honor for me, I'm a big fan, I have all your books, sorry for the OT, but I'm excited to have a comment from you. @halfblood17 Are you using any network library? Because in my opinion, without enqueuing the requests you use a lot of resources in general and it could also lead to networking problems.
If you care about the user closing the app, you should use the specific methods provided by the OS, such as beginBackgroundTaskWithName: or, better, backgroundSessionConfigurationWithIdentifier: for NSURLSession. When I need to send images I usually collect them and save them in a temp dir, then I send them no more than one at a time as chunked data. I'm using AFNetworking 2.0, but I just realized that probably the main problem is the completion handlers. I need to get some stuff done when the upload completes, but I can't wait for that to happen to free up the resources. Probably the best thing would be to use the basic NSURLSession so I can use delegates. What do you think? Could that be the problem? @halfblood17 Your problems all seem to stem from the fact that you are uploading an image (a UIImage) which is held in memory (the variable fullImage, commented out in your quoted code). That seems a very silly thing to do: images are huge. Why don't you write the UIImage to disk? That way you can upload the file (from disk), without holding anything in memory. I do build it in memory before uploading it: in my startUpload: method I'm creating the body of the request using that image and then writing that body to disk. Then, using the disk copy, I start the upload with uploadTaskWithRequest:fromFile:progress:completionHandler:. The problem is that the completion handler gets called only when the upload is complete, and it's there that I call the first completionHandler, terminating -photoFromAsset:. So my guess is that this whole design is flawed for background upload, since it works only with serialized uploading. But if you do what I am saying to do, you will have released the memory you were using in your first four lines. So now it doesn't matter when the completion handler is called. The memory is no longer being held.
I totally agree with matt: you should save your images to disk once you get your asset, and start the upload using the path to your file by creating a multipart POST request. This way the body of the request will be treated as a stream, with a very low impact on memory. It's also better if you use the method provided by AFHTTPSessionManager to create the POST request directly. Your problems all seem to stem from the fact that you are uploading an image (a UIImage) which is held in memory (the variable fullImage, commented out in your quoted code). That seems a very silly thing to do: images are huge, and uploading takes time, so you are forcing your program to hold the entire image in memory for as long as it takes to upload it. Instead, why don't you immediately write the UIImage to disk as a file? That way you can let go of the image in memory and instead upload the file (from disk), without holding anything in memory.
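The write-to-disk advice above is language-agnostic: keep only one bounded chunk in memory at a time instead of the whole payload. A minimal Python sketch of that streaming idea (the chunk size and temp-file handling are illustrative assumptions, not part of the original Objective-C code):

```python
import os
import tempfile

def stream_file(path, chunk_size=64 * 1024):
    """Yield a file chunk by chunk so only one chunk is ever
    held in memory, regardless of how large the file is."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Demo: persist some stand-in "image" bytes, then consume them in chunks,
# the way a streaming upload would.
data = os.urandom(256 * 1024)
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(data)
    path = tmp.name

reassembled = b"".join(stream_file(path))
os.remove(path)
```

A real uploader would hand each chunk to the network layer instead of joining them; the point is only that peak memory is bounded by the chunk size, not the image size.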
Unknown value in the userPrincipalName field with the getOneDriveActivityUserDetail Graph API When accessing the https://graph.microsoft.com/beta/reports/getOneDriveActivityUserDetail(period='D7')?$format=application/json Graph API from a specific tenant, the value of the userPrincipalName field in the response is an unknown string, e.g. "userPrincipalName": "BE4EFE9E83863382382492509E8BD85E". The correct response value for this field should be the user's e-mail. Welcome to Stack Overflow. Please take the Tour, and be sure to read How do I ask a good question? This question is similar to an existing one, so we can refer to that question.
Disable a Share link for a wall post I am developing an application that posts on the wall of a private Facebook group by using Open Graph. If I make a regular post in this group there is no "Share" link, but with Open Graph there is. What I need is to disable the possibility to "Share" this post outside the group, i.e. disable the "Share" link under the post. Is it possible to implement such behavior via the Open Graph API? Or is there any other solution? hmm... are you able to see the Share link? Per my understanding, closed groups don't have the ability to share posts. It's related to permission settings.
Contents of a file as input to a hashtable in Objective-C I know how to read the contents of a file in Objective-C, but how do I use it as the input to a hashtable? Consider the contents of the text file test.txt:

LENOVA
HCL
WIPRO
DELL

Now I need to read this into my hashtable as key-value pairs:

KEY  VALUE
1    LENOVA
2    HCL
3    WIPRO
4    DELL

Isn't that just an array? If you are reading it in the same order that it is keyed, there is no need to implement a hash table. You want to parse your file into an array of strings and assign each element in this array a key. This may help you get in the right direction:

NSString *wholeFile = [NSString stringWithContentsOfFile:@"test.txt"];
NSArray *lines = [wholeFile componentsSeparatedByString:@"\n"];
NSMutableDictionary *dict = [NSMutableDictionary dictionaryWithCapacity:[lines count]];
int counter = 1;
for (NSString *line in lines) {
    if ([line length]) {
        [dict setObject:line forKey:[NSString stringWithFormat:@"%d", counter]];
        // If you want `NSNumber` as keys, use this line instead:
        // [dict setObject:line forKey:[NSNumber numberWithInt:counter]];
        counter++;
    }
}

Keep in mind this isn't the most efficient method of parsing your file. It also uses the deprecated method stringWithContentsOfFile:. To get a line back, use:

NSString *myLine = [dict objectForKey:@"1"];
// If you used the `NSNumber` class for keys, use:
// NSString *myLine = [dict objectForKey:[NSNumber numberWithInt:1]];

stringWithContentsOfFile: is deprecated in favour of stringWithContentsOfFile:usedEncoding:error:, which gives you information about what encoding was detected when reading the file and also returns an NSError object when there was a failure reading the file. Hey... is this how we check the contents of the hashtable: NSLog(@"1=%@", [dict objectForKey:@"1"]); ? @suse, not quite: in the above I have used the NSNumber class as the key, but in your comment you are trying to use NSString as the key. If you want NSString as the key, you need to change the setObject:forKey: part.
I have edited my answer to assist you further. What should I write for the value of "i"? It's giving me an error saying 'i' is undeclared, first use in this function. It's the key value, right? But how do I declare it? Oops! Sorry, that is a typo: i is meant to read counter. My mistake. Please try again.
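For comparison, the same line-number-to-line mapping the answer builds in Objective-C can be sketched in Python (function name and sample text are illustrative, not from the original thread):

```python
def lines_to_dict(text):
    """Map 1-based line numbers (as string keys) to non-empty lines,
    mirroring the Objective-C answer: blank lines are skipped and
    the counter only advances for kept lines."""
    result = {}
    counter = 1
    for line in text.split("\n"):
        if line:  # equivalent of the [line length] check
            result[str(counter)] = line
            counter += 1
    return result

table = lines_to_dict("LENOVA\nHCL\nWIPRO\nDELL")
# table["1"] == "LENOVA", table["4"] == "DELL"
```

As the comments above note, if the keys are just consecutive integers, a plain list/array does the same job with less ceremony.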
How to create a custom skimage.future.graph.rag which is given as an input to cut_normalized and ncut? I am trying to create a custom adjacency graph with RAG, but all the examples only show graph creation using

rag = graph.rag_mean_color(img, labels)

I DON'T want to use this function; I want to define the weights with my own custom measures. So I wrote the following code:

labels1 = segmentation.slic(img_i.reshape(img.shape[0], img.shape[1]), compactness=30, n_segments=200)
out1 = color.label2rgb(labels1, img_i.reshape(img.shape[0], img.shape[1]), kind='avg')
plt.axis('off')
plt.imshow(out1)
print(labels1.shape)
...
g_seg = graph.rag.RAG()
for ix in range(0, img.shape[0]):
    for iy in range(0, img.shape[1]):
        idx = ix * img.shape[1] + iy
        g_seg.add_node(idx, labels=[labels_slic[idx]])

win_rad = 7
for i in range(0, img.shape[0]):
    for j in range(0, img.shape[1]):
        for ii in range(-int(win_rad), int(win_rad)):
            for jj in range(-int(win_rad), int(win_rad)):
                if i+ii > 0 and i+ii < img.shape[0] and j+jj > 0 and j+jj < img.shape[1]:
                    idx = i * img.shape[1] + j
                    idc = (i+ii) * img.shape[1] + (j+jj)
                    w_tx = g_tx[idx][idc]['weight']
                    w_ic = g_ic[idx][idc]['weight']
                    g_seg.add_edge(idx, idc, weight=(w_tx * w_ic))

But when using this graph for normalized cut I am getting the wrong output:

labels3 = graph.cut_normalized(labels1, g_seg, 5.0, 10)

So my understanding is that I am destroying the spatial structure of the image while creating the graph, because I treated the nodes as a 1D array, discarding their 2D coordinates. So I need help understanding how I can create a graph that keeps the 2D structure of the image intact and gives results like the ones we get with

rag = graph.rag_mean_color(img, labels)

How are you creating the second image you show? Have you looked at the show_rag function? I don't see anything obviously wrong with your code, although instead of nested loops I suggest using scipy.ndimage.generic_filter. See the implementation of the RAG building functions for details/inspiration! PS: Interesting problem!
If you figure it out, could you please send a write-up about it to our mailing list at <EMAIL_ADDRESS>? We might like to add it to our gallery. Thanks! =) You can create your own version of a RAG with your own custom weights between adjacent nodes like so:

from skimage.future.graph import RAG
import numpy as np

def rag(image, labels):
    # initialize the RAG
    graph = RAG(labels, connectivity=2)
    # let's say we want, for each node on the graph, a label, a pixel count and a total color
    for n in graph:
        graph.node[n].update({'labels': [n],
                              'pixel count': 0,
                              'total color': np.array([0, 0, 0], dtype=np.double)})
    # give them values
    for index in np.ndindex(labels.shape):
        current = labels[index]
        graph.node[current]['pixel count'] += 1
        graph.node[current]['total color'] += image[index]
    # calculate your own weights here
    for x, y, d in graph.edges(data=True):
        my_weight = "do whatever"
        d['weight'] = my_weight
    return graph

image: your input image
labels: labels for each pixel of the image

You should also check out the source code of graph.rag_mean_color. The above code was based on that. rag_mean_color source code
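The key point in the question is that flattening pixels to 1D indices loses the spatial neighbourhood. A pure-Python stand-in for what RAG(labels, connectivity=...) does, keeping the 2D coordinates while scanning (this is an illustration of the adjacency-building idea, not the skimage implementation):

```python
def grid_adjacency(labels):
    """Build a region-adjacency mapping from a 2D label grid without
    flattening to 1D: two regions are adjacent when their labels touch
    in the 4-connected sense. Each pixel only needs to check its right
    and down neighbours, since adjacency is symmetric."""
    rows, cols = len(labels), len(labels[0])
    adj = {}
    for r in range(rows):
        for c in range(cols):
            a = labels[r][c]
            for dr, dc in ((1, 0), (0, 1)):  # down, right
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    b = labels[rr][cc]
                    if a != b:
                        adj.setdefault(a, set()).add(b)
                        adj.setdefault(b, set()).add(a)
    return adj

labels = [[0, 0, 1],
          [0, 2, 1],
          [2, 2, 1]]
adj = grid_adjacency(labels)
# adj[0] == {1, 2}; adj[1] == {0, 2}; adj[2] == {0, 1}
```

Graph nodes here are superpixel labels rather than flattened pixel indices, which is exactly how the built-in RAG preserves the image's spatial structure; skimage does the same scan efficiently with scipy filters.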
Microsoft Flow mapping values dynamically We are connecting one of our applications to Microsoft Flow. To achieve one of our functions we need Microsoft Flow to serve outputs for dynamic data. Let me explain what I mean. If

{ "Firstname": "John", "Lastname": "Doe" }

is the input our API needs, it's easy to form this input using Flow, because with the UI it's easy to map values from a former output to the Firstname and Lastname fields. But

{ "key1": "value1", "key2": "value2", "key3": "value3" }

is the input we need for our API: mapping dynamic data. Meaning that the value-mapping UI should be dynamic; the keys should be taken from our API. This is a stripped-down version of our actual problem. Can anyone help, please? Sounds like you are looking to use a dynamic schema: https://flow.microsoft.com/en-us/blog/integrating-custom-api/ This will allow you to define the schema of the inputs (or even outputs). For example, Flow will call your endpoint to get the dynamic schema, which will then return the keys, and Flow will use that for its inputs.
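For illustration, a dynamic-schema endpoint returns a schema describing one property per key, which the designer can then render as input fields. A sketch of building such a response (the exact contract is defined by the custom connector configuration, so treat this shape as an assumption rather than the documented wire format):

```python
import json

def dynamic_schema(keys):
    """Build a JSON-Schema-style object with one string property per key,
    the general shape a dynamic-schema endpoint would serve so the
    designer can show one mappable input field per key."""
    return {
        "type": "object",
        "properties": {k: {"type": "string", "title": k} for k in keys},
        "required": list(keys),
    }

# The keys would come from your own API at request time:
schema = json.dumps(dynamic_schema(["key1", "key2", "key3"]), indent=2)
```

The point is that the schema is computed per request, so when your API's keys change, the mapping UI changes with them.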
Good design for adding business logic to a Java Bean In the following code, does it make sense to have isMatched() in here (in a value object / Java bean)? What's a good design? BTW, I tried compareTo, compare, HashSet etc. by following other posts on Stack Overflow and somehow that still does not work for me to remove dups from two lists.

public class SessionAttributes {

    private static final Logger LOGGER = Logger.getLogger(SessionAttributes.class);

    public SessionAttributes(String userName, String sessionState) {
        this.userName = userName;
        this.sessionState = sessionState;
    }

    String userName;

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    // .. getters/setters for sessionState

    public static boolean isMatched(List<SessionAttributes> list1, List<SessionAttributes> list2) {
        //.. custom logic here...
    }
}

==== Entire code, per the ask in a comment by David. Look at the main() method. This is directly copied and pasted from Eclipse to meet the http://sscce.org/ requirement ========

package snippet;

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import org.apache.log4j.Logger;

public class SessionAttributes implements Comparable<SessionAttributes>, Comparator<SessionAttributes> {

    private static final Logger LOGGER = Logger.getLogger(SessionAttributes.class);

    public SessionAttributes(String userName, String sessionState) {
        /*
         * String nasPort, String endpointProfile, String audiSessionId, String epsStatus,
         * String securityGroup, String nasIp, String postureStatus, String postureTimestamp) {
         */
        this.userName = userName;
        this.sessionState = sessionState;
        /*
         * this.nasPort = nasPort; this.endpoinProfile = endpointProfile; this.auditSessionId = audiSessionId;
         * this.epsStatus = epsStatus; this.securityGroup = securityGroup; this.nasIp = nasIp;
         * this.postureStatus = postureStatus; this.postureTimestamp = postureTimestamp;
         */
    }

    String userName;

    public String getUserName() {
        return userName;
    }

    String sessionState;

    public String getSessionState() {
        return sessionState;
    }

    public int compareTo(SessionAttributes o) {
        // TODO Auto-generated method stub
        if (this.getUserName().equals(o.getUserName())
                && this.getSessionState().equalsIgnoreCase(o.getSessionState())) {
            return 0;
        }
        return -1;
    }

    public String toString() {
        return "\n User Name : " + this.getUserName() + " Session State : " + getSessionState() + "\n";
    }

    static boolean isMatched(List<SessionAttributes> list1, List<SessionAttributes> list2) {
        if (null == list1 || null == list2) return false;
        System.out.println("Actual List=>" + list1);
        System.out.println("Expected List=>" + list2);
        Iterator<SessionAttributes> iterator = list1.iterator();
        while (iterator.hasNext()) {
            SessionAttributes actual = iterator.next();
            Iterator<SessionAttributes> iterator2 = list2.iterator();
            while (iterator2.hasNext()) {
                SessionAttributes expected = iterator2.next();
                if (expected.getUserName().equalsIgnoreCase(actual.getUserName())) {
                    if (expected.getSessionState().equalsIgnoreCase(actual.getSessionState())) {
                        System.out.println("Element matched - user name-" + expected.getUserName()
                                + " State -" + expected.getSessionState());
                        iterator.remove();
                        iterator2.remove();
                    }
                } else {
                    System.out.println("Element NOT matched - user name-" + expected.getUserName()
                            + " State -" + expected.getSessionState());
                }
            }
        }
        System.out.println("Lists after removing Dups -");
        System.out.println("list1 =>" + list1.toString() + " list2 -" + list2.toString());
        if (list1.size() > 0 || list2.size() > 0) return false;
        return true;
    }

    static void sortLists() {
        List<SessionAttributes> expectedSessionList = new ArrayList<SessionAttributes>();
        SessionAttributes user11 = new SessionAttributes("postureuser1", "STARTED"); // ,null,null,null,null,null,null,null,null);
        SessionAttributes user12 = new SessionAttributes("postureuser1", "DISCONNECTED");
        SessionAttributes user13 = new SessionAttributes("postureuser5", "STARTED"); // ,null,null,null,null,null,null,null,null);
        expectedSessionList.add(user11);
        expectedSessionList.add(user12);
        expectedSessionList.add(user13);

        List<SessionAttributes> actualSessionList = new ArrayList<SessionAttributes>();
        SessionAttributes user3 = new SessionAttributes("postureuser1", "STARTED"); // ,null,null,null,null,null,null,null,null);
        SessionAttributes user4 = new SessionAttributes("postureuser1", "DISCONNECTED");
        SessionAttributes user5 = new SessionAttributes("postureuser2", "DISCONNECTED"); // ,null,null,null,null,null,null,null,null);
        actualSessionList.add(user3);
        actualSessionList.add(user4);
        actualSessionList.add(user5);

        Set<SessionAttributes> removeDups = new HashSet<SessionAttributes>();
        boolean b1 = removeDups.add(user11);
        boolean b2 = removeDups.add(user12);
        boolean b3 = removeDups.add(user13);
        boolean b4 = removeDups.add(user3);
        boolean b5 = removeDups.add(user4);
        boolean b6 = removeDups.add(user5);
        System.out.println(" Set--" + removeDups);
        // removeDups.addAll(expectedSessionList);
        // removeDups.addAll(actualSessionList);
        System.out.println("== Printing Set ====");
        int countMisMatch = 0;
        System.out.println(isMatched(actualSessionList, expectedSessionList));
        // int isMatch = user3.compareTo(user1);
        // System.out.println("Compare=>" + isMatch);
    }

    static void sortSet() {
        List<SessionAttributes> expectedSessionList = new ArrayList<SessionAttributes>();
        SessionAttributes user11 = new SessionAttributes("postureuser1", "STARTED"); // ,null,null,null,null,null,null,null,null);
        SessionAttributes user12 = new SessionAttributes("postureuser1", "DISCONNECTED");
        SessionAttributes user13 = new SessionAttributes("postureuser5", "STARTED");
        SessionAttributes user3 = new SessionAttributes("postureuser1", "STARTED"); // ,null,null,null,null,null,null,null,null);
        SessionAttributes user4 = new SessionAttributes("postureuser1", "DISCONNECTED");
        SessionAttributes user5 = new SessionAttributes("postureuser2", "DISCONNECTED"); // ,null,null,null,null,null,null,null,null);

        Set<SessionAttributes> removeDups = new HashSet<SessionAttributes>();
        boolean b1 = removeDups.add(user11);
        boolean b2 = removeDups.add(user12);
        boolean b3 = removeDups.add(user13);
        boolean b4 = removeDups.add(user3);
        boolean b5 = removeDups.add(user4);
        boolean b6 = removeDups.add(user5);
        System.out.println(" Set--" + removeDups);
        // removeDups.addAll(expectedSessionList);
        // removeDups.addAll(actualSessionList);
        System.out.println("== Printing Set ====");
        System.out.println(removeDups);
        // int isMatch = user3.compareTo(user1);
        // System.out.println("Compare=>" + isMatch);
    }

    public int compare(SessionAttributes o1, SessionAttributes o2) {
        LOGGER.debug("Compare called -[" + o1.getUserName() + "] [" + o2.getUserName() + "]");
        boolean isSameUserName = o1.userName.equalsIgnoreCase(o2.userName);
        boolean isSameState = o1.sessionState.equalsIgnoreCase(this.sessionState);
        if (isSameUserName && isSameState) return 0;
        return -1;
    }

    public boolean equals(SessionAttributes obj) {
        if (obj == null || !(obj instanceof SessionAttributes)) {
            return false;
        }
        System.out.println(" In equals==");
        boolean isSameUserName = obj.userName.equalsIgnoreCase(this.userName);
        boolean isSameState = obj.sessionState.equalsIgnoreCase(this.sessionState);
        return (isSameUserName && isSameState);
    }

    public int hashCode() {
        System.out.println(" in hashcode ");
        int hash = 1;
        hash = hash * 17 + this.getUserName().hashCode();
        hash = hash * 31 + this.getSessionState().hashCode();
        // hash = hash * 13 + this.getAuditSessionId().hashCode();
        System.out.println(" hash=>" + hash);
        return hash;
    }

    public static void main(String[] args) {
        // sortSet();
        sortLists();
    }
}

==== Code from David, which is supposed to remove dups. Pasting only the relevant portion for better comparison. Somehow, this still does not work:

public int compareTo(SessionAttributesFromDavid o) {
    if (this == o) {
        return 0;
    }
    // Null is considered less than any object.
    if (o == null) {
        return 1;
    }
    // Use compareToIgnoreCase since you used equalsIgnoreCase in equals.
    int diff = userName.compareToIgnoreCase(o.userName);
    if (diff != 0) return diff;
    diff = sessionState.compareToIgnoreCase(o.sessionState);
    return diff;
}

public boolean equals(Object o) {
    // See if o is the same object. If it is, return true.
    if (o == this) {
        return true;
    }
    // The instanceof check also checks for null. If o is null, instanceof will be false.
    if (!(o instanceof SessionAttributes)) {
        return false;
    }
    SessionAttributes that = (SessionAttributes) o;
    return userName.equalsIgnoreCase(that.userName) && sessionState.equalsIgnoreCase(that.sessionState);
}

Set<SessionAttributes> removeDups = new TreeSet<SessionAttributes>();
boolean b1 = removeDups.add(user11);
boolean b2 = removeDups.add(user12);
boolean b3 = removeDups.add(user13);
boolean b4 = removeDups.add(user3);
boolean b5 = removeDups.add(user4);
boolean b6 = removeDups.add(user5);
System.out.println(" Set--" + removeDups);

Set--[ User Name : postureuser2 Session State : DISCONNECTED , User Name : postureuser1 Session State : STARTED , User Name : postureuser5 Session State : STARTED , User Name : postureuser1 Session State : DISCONNECTED , User Name : postureuser1 Session State : STARTED ]

I would not create an isMatched method to compare two Lists of SessionAttributes. I would definitely go with having SessionAttributes implement equals and hashCode. It's crucial that you implement them both, and in a consistent way. If you want to compare both Strings for equality, calculate your hashCode using both Strings. If you don't get equals and hashCode right, none of this will work. If you want to put SessionAttributes in a SortedSet, I would have SessionAttributes implement Comparable, too. Also, I would make SessionAttributes immutable: no setters, and declare the two String members as final. If you do this, you can add SessionAttributes to a Set or Map and you won't have to worry about their values changing.
If you don't make them immutable, you have to be sure not to change any of the SessionAttributes values after adding them to the List or Set. Instead of putting them in a List, I would put them in a Set to ensure that you don't have duplicates within the same Collection of SessionAttributes. If you want to remove duplicates from one of the Collections, use Collection's removeAll method on it, passing it the other Collection of SessionAttributes. As a side note, sessionState looks like a variable with a finite number of possible values, so I would consider defining an Enum for it instead of making it a String. I hope this helps.

Edit: Your compareTo method is not working because it returns 0 if equal, but -1 if not. It does not fulfill the contract for compareTo. Please read its javadocs. Below is the SSCCE with some changes and comments. Since in equals you compare for equality ignoring case, you have to convert the Strings to all upper or lower case in your hashCode method so it will be consistent with equals. You also have to use compareToIgnoreCase in your compareTo method for consistency. Since the compareTo method works, you now can simply use a TreeSet to sort a collection of your objects. I removed the methods you won't need any more and tried to put some helpful comments in the code.

package snippet;

import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// You don't need to implement Comparator.
public class SessionAttributes implements Comparable<SessionAttributes> {

    // You typically define member variables at the top of the class definition.
    private final String userName;
    private final String sessionState;

    public SessionAttributes(String userName, String sessionState) {
        // Throw a NullPointerException from the constructor if either of the Strings is null.
        // This way, you know that if the object is constructed successfully, it is free of nulls.
        if (userName == null) {
            throw new NullPointerException("userName must not be null");
        }
        if (sessionState == null) {
            throw new NullPointerException("sessionState must not be null");
        }
        /*
         * String nasPort, String endpointProfile, String audiSessionId, String epsStatus, String securityGroup,
         * String nasIp, String postureStatus, String postureTimestamp) {
         */
        this.userName = userName;
        this.sessionState = sessionState;
        /*
         * this.nasPort = nasPort; this.endpoinProfile = endpointProfile; this.auditSessionId = audiSessionId;
         * this.epsStatus = epsStatus; this.securityGroup = securityGroup; this.nasIp = nasIp;
         * this.postureStatus = postureStatus; this.postureTimestamp = postureTimestamp;
         */
    }

    public String getUserName() {
        return userName;
    }

    public String getSessionState() {
        return sessionState;
    }

    @Override
    public int compareTo(SessionAttributes o) {
        if (this == o) {
            return 0;
        }
        // Null is considered less than any object.
        if (o == null) {
            return 1;
        }
        // Use compareToIgnoreCase since you used equalsIgnoreCase in equals.
        int diff = userName.compareToIgnoreCase(o.userName);
        if (diff != 0) return diff;
        diff = sessionState.compareToIgnoreCase(o.sessionState);
        return diff;
    }

    // public int compareTo(SessionAttributes o) {
    //     // TODO Auto-generated method stub
    //     if (this.getUserName().equals(o.getUserName()) && this.getSessionState().equalsIgnoreCase(o.getSessionState())) {
    //         return 0;
    //     }
    //     return -1;
    // }

    public String toString() {
        return "\n User Name : " + this.getUserName() + " Session State : " + getSessionState() + "\n";
    }

    // public boolean equals(SessionAttributes obj) {
    //     if (obj == null || !(obj instanceof SessionAttributes)) {
    //         return false;
    //     }
    //     System.out.println(" In equals==");
    //     boolean isSameUserName = obj.userName.equalsIgnoreCase(this.userName);
    //     boolean isSameState = obj.sessionState.equalsIgnoreCase(this.sessionState);
    //     return (isSameUserName && isSameState);
    // }

    public boolean equals(Object o) {
        // See if o is the same object. If it is, return true.
        if (o == this) {
            return true;
        }
        // The instanceof check also checks for null. If o is null, instanceof will be false.
        if (!(o instanceof SessionAttributes)) {
            return false;
        }
        SessionAttributes that = (SessionAttributes) o;
        return userName.equalsIgnoreCase(that.userName) && sessionState.equalsIgnoreCase(that.sessionState);
    }

    public int hashCode() {
        System.out.println(" in hashcode ");
        int hash = 1;
        // Since in equals you are comparing for equality and ignoring case, you must convert the Strings
        // to either lower or upper case when computing the hashCode so that it will always be consistent
        // with equals.
        hash = hash * 17 + this.getUserName().toUpperCase().hashCode();
        hash = hash * 31 + this.getSessionState().toUpperCase().hashCode();
        // hash = hash * 13 + this.getAuditSessionId().hashCode();
        System.out.println(" hash=>" + hash);
        return hash;
    }

    public static void main(String[] args) {
        // sortSet();
        // sortLists();

        // expectedSessionList
        List<SessionAttributes> expectedSessionList = new ArrayList<SessionAttributes>();
        SessionAttributes user11 = new SessionAttributes("postureuser1", "STARTED");
        SessionAttributes user12 = new SessionAttributes("postureuser1", "DISCONNECTED");
        SessionAttributes user13 = new SessionAttributes("postureuser5", "STARTED");
        expectedSessionList.add(user11);
        expectedSessionList.add(user12);
        expectedSessionList.add(user13);
        System.out.println("expectedSessionList: " + expectedSessionList);

        // actualSessionList
        List<SessionAttributes> actualSessionList = new ArrayList<SessionAttributes>();
        SessionAttributes user3 = new SessionAttributes("postureuser1", "STARTED");
        SessionAttributes user4 = new SessionAttributes("postureuser1", "DISCONNECTED");
        SessionAttributes user5 = new SessionAttributes("postureuser2", "DISCONNECTED");
        actualSessionList.add(user3);
        actualSessionList.add(user4);
        actualSessionList.add(user5);
        System.out.println("actualSessionList: " + actualSessionList);

        // removeDups
        // Use a TreeSet to sort it.
        Set<SessionAttributes> removeDups = new TreeSet<SessionAttributes>();
        boolean b1 = removeDups.add(user11);
        boolean b2 = removeDups.add(user12);
        boolean b3 = removeDups.add(user13);
        boolean b4 = removeDups.add(user3);
        boolean b5 = removeDups.add(user4);
        boolean b6 = removeDups.add(user5);
        System.out.println(" Set--" + removeDups);

        actualSessionList.removeAll(expectedSessionList);
        System.out.println("actualSessionList after removeAll: " + actualSessionList);
    }
}

Output:

expectedSessionList: [ User Name : postureuser1 Session State : STARTED , User Name : postureuser1 Session State : DISCONNECTED , User Name : postureuser5 Session State : STARTED ]
actualSessionList: [ User Name : postureuser1 Session State : STARTED , User Name : postureuser1 Session State : DISCONNECTED , User Name : postureuser2 Session State : DISCONNECTED ]
 Set--[ User Name : postureuser1 Session State : DISCONNECTED , User Name : postureuser1 Session State : STARTED , User Name : postureuser2 Session State : DISCONNECTED , User Name : postureuser5 Session State : STARTED ]
actualSessionList after removeAll: [ User Name : postureuser2 Session State : DISCONNECTED ]

Thanks for your comments. I did implement equals and hashCode; I did not paste the entire code. Good point on removing the setters. I did try Comparable, Comparator, adding objects to a Set, HashSet, but somehow it did not work for me. I wanted a two-value match, like userName=user1, sessionState=disconnected, in two lists. Somehow Set etc. did not work for me. There were posts related to this here on SO which I followed, but it did not work. Not sure if the getters/setters caused that. Let me try removing the setters/getters and try the Set/removeAll approach. If you still can't get it to work, please update your question with a short, self-contained, compilable example.
http://sscce.org/ Actually, above code was aimed for SSCCE. Let me get the entire code and you can suggest what could be missing. I gave up on Comparator and went brute force way (Iterating over list of session beans to find a match using equals) Thanks, David. Did you test it ? It seems similar to what I pasted. Thanks much for spending your time on this. Appreciate it. Hi, yes I tested it. It works. You're welcome. The stackoverflow way of saying thank you is to approve an answer. If you feel I answered your question to your satisfaction, please approve it. Thanks Is there a reason why you would not accept this as the answer? You said in your question that you could not get "compareTo, compare, hashSet etc." to work. I posted a change to your example with a working compareTo, equals and hashCode. I explained what I did to make the methods work. It means you don't have to implement your own code to sort or compare for matches. It is similar to the code you pasted, but not the same in that the compareTo, equals, and hashCode works. Did you test what I gave you? I just copied and pasted your code, compiled it and ran it after removing Lists versions (retained only TreeSet.add (SessionAttribute) and it still prints everything and does not drop the duplicates. Please note my final goal is to drop duplicates. In this case, I would like Set to be empty Set--[ User Name : postureuser2 Session State : DISCONNECTED , User Name : postureuser1 Session State : STARTED , User Name : postureuser5 Session State : STARTED , User Name : postureuser1 Session State : DISCONNECTED , User Name : postureuser1 Session State : STARTED ] I added my output. expectedSessionList and actualSessionList shared: User Name : postureuser1 Session State : STARTED User Name : postureuser1 Session State : DISCONNECTED After calling removeAll on actualSessionList those two are removed from actualSessionList. Where are your duplicates? 
I took the code that builds the removeDups Set directly from YOUR code. NO objects that get added to the removDups Set are duplicates. Look at YOUR definition of equals/hashCode. You compare both userName and sessionState. All the objects you added in the removeDups Set are different from each other. If you want to see duplicates get removed, either change your definition of equals and hashCode (compare only userName for example), OR try adding two SessionAttributes objects that are duplicates by YOUR definition of equals/hashCode (The userName and sessionState must be equal, ignoring case). So, actualSessionList:User Name : postureuser1 Session State : STARTED expectedSessionList:User Name : postureuser1 Session State : STARTED You are right actually. Let me vote up. Thanks for your time. Could you please Vote up my question since we both had to go back and forth to analyze this. Thanks, @David ! Actually, if I there is a match or dups, then I would like to remove this entry from both Lists, in that case, I was using iterator.remove() from lists (Until this Set worked)- what would you do ? Reason I want to do this is - at the end of this comparison, I would pass my test if the final Set comes out as Empty. Which means all dups are gone since I know my expected values. Basically, if all of expected values are in Actual values, just make it empty. Removing dups are still retaining expected. This would prevent the code to iterate Set again and probably make it faster ! removeAll is the method you need. If you call removeAll on Collection A and pass Collection B, removeAll will remove all the objects from A that were in B. A would only have objects that were not also in B. A and B would have only unique objects. B would still have the same objects, but you know that none of those objects are in A because removeAll removes the duplicates. 
The only way removeAll would result in an empty Collection (List, Set, etc.) is if the Collection you passed to removeAll had the exact same values. For this to work, your equals and hashCode must be right. That's the key. If you want to have a Collection of all objects that are unique to Collection A and Collection B, you could make a copy of Collection A before calling removeAll on it. Then, call removeAll on Collection B, passing it the COPY of Collection A. Finally, you would call addAll on Collection A, passing it Collection B. Or, you could use Apache Commons Collections. The disjunction method sounds like it does what you need.
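For anyone landing here, the equals/hashCode + removeAll mechanics discussed above can be sketched in a self-contained example. The class and field names below are illustrative stand-ins, not the original SessionAttributes bean:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class RemoveAllDemo {

    // Illustrative stand-in for the session bean discussed above.
    static final class Session {
        final String userName;
        final String state;

        Session(String userName, String state) {
            this.userName = userName;
            this.state = state;
        }

        // equals and hashCode must agree and compare the same fields,
        // ignoring case, so removeAll can recognize duplicates.
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Session)) return false;
            Session other = (Session) o;
            return userName.equalsIgnoreCase(other.userName)
                    && state.equalsIgnoreCase(other.state);
        }

        @Override
        public int hashCode() {
            return Objects.hash(userName.toLowerCase(), state.toLowerCase());
        }
    }

    // Removes every expected session from the actual list and
    // returns whatever is left over.
    static List<Session> leftover() {
        List<Session> actual = new ArrayList<>();
        actual.add(new Session("postureuser1", "STARTED"));
        actual.add(new Session("postureuser1", "DISCONNECTED"));

        List<Session> expected = new ArrayList<>();
        expected.add(new Session("POSTUREUSER1", "started"));
        expected.add(new Session("postureuser1", "disconnected"));

        actual.removeAll(expected); // empty when every actual was expected
        return actual;
    }

    public static void main(String[] args) {
        System.out.println("Leftover after removeAll is empty: "
                + RemoveAllDemo.leftover().isEmpty());
    }
}
```

Because equals/hashCode compare both fields case-insensitively, removeAll drops every actual entry that was also expected, leaving an empty list — exactly the "test passes when the collection comes out empty" condition described above.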
common-pile/stackexchange_filtered
Android Button Sizing I have several buttons in my application that are displayed at the bottom of the screen. Right now the buttons have text in them. When running on the emulator, the buttons with text fit nicely. Now that I am running on the actual device, some buttons' text takes more than two lines and the screen is not very presentable. I could change the font to make it work for the device in question, but there is no guarantee that it will work on some other device. Should I create button images (with text embedded as part of the image) and then have multiple versions, depending on the size of the device screen being used? That seems like a lot of work; is there a simpler solution to this? Thank You, Gary Can you post a screenshot along with your layout code? You need to give equal weights to all buttons, so that all of them look similar and occupy the same amount of space. You have to get the screen resolution and set sizes as a proportion of this resolution. Here is the sample code to obtain screen width and height. Display display = getWindowManager().getDefaultDisplay(); Point size = new Point(); display.getSize(size); int width = size.x; int height = size.y; You can find a multiple-screen-size handling tutorial here: Supporting Multiple Screens Your emulator may have a specific resolution that is different from that of your actual device. It is not hard, but a little bit tricky. For this purpose you can use the built-in drawable folders. In an Android project there are several drawable folders, like drawable-hdpi, drawable-mdpi, drawable-xhdpi, where you can put different sizes of images, and the right one will automatically be rendered based on the device screen. Check this tutorial for more understanding: Supporting Multiple Screens Or you can get the screen size dynamically. Based on the screen size you can set the button height and width.
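The "equal weights" suggestion can be expressed directly in layout XML. A sketch — the button labels and text size here are made-up placeholders:

```xml
<!-- Hypothetical bottom bar: each button gets layout_weight="1" with a
     0dp width, so the row splits the screen width evenly; text uses sp
     units so it follows the device's font scale. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="horizontal">

    <Button
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:textSize="14sp"
        android:text="Save" />

    <Button
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:textSize="14sp"
        android:text="Cancel" />
</LinearLayout>
```

With layout_weight and a 0dp width, the row divides the available width evenly on every device, which avoids hard-coding pixel sizes per screen.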
Writing a script to guess foreign keys in Oracle SQL I am trying to write a SQL script that guesses foreign keys. My approach is to eliminate every column that can't be a foreign key. The rest would be manual work. SELECT atc1.table_name atc1_tn, atc1.column_name atc1_cn, atc2.table_name atc2_tn, atc2.column_name atc2_cn FROM all_tab_cols atc1, all_tab_cols atc2 WHERE atc1.data_type = 'NUMBER' AND atc1.data_type = atc2.data_type AND atc1.table_name != atc2.table_name AND atc1.high_value <= atc2.high_value AND atc1.num_distinct <= atc2.num_distinct At this point I get all possible matching columns, but that is still not accurate enough. The next step would be to check if every entry in atc1.column_name exists in atc2.column_name, because if not it can't be a foreign key. How can I add that condition to my where clause? The approach is: Select (execute immediate 'select '||ATC1_CN||' from '||ATC1_TN||'') as a, (execute immediate 'select '||ATC2_CN||' from '||ATC2_TN||'') as b from my_temp_table where a not in b; But that doesn't work as expected, because I can't use the table names in a string for a query. Are you starting from a schema with no primary or foreign keys at all, and looking for potentially related combinations of columns across all pairs of tables (in which case you'd have to figure out which is the parent); or do you already have PKs (or UKs) and are looking for possible children (in which case you could start from user_cons_columns)? Hi Alex, I have a db with a lot of tables and relations but no constraints at all. So I have to reverse engineer these relations to know what stuff I need to join on. So if I get it right, I think it is both of the two options you mentioned. Please clarify via edits, not comments. Debug questions require a [mre]. Please format code reasonably. Please use standard spelling & punctuation.
[ask] [Help] Try the below for existing foreign keys SELECT a.table_name, a.column_name, a.constraint_name, c.owner, -- referenced pk c.r_owner, c_pk.table_name r_table_name, c_pk.constraint_name r_pk FROM all_cons_columns a JOIN all_constraints c ON a.owner = c.owner AND a.constraint_name = c.constraint_name JOIN all_constraints c_pk ON c.r_owner = c_pk.owner AND c.r_constraint_name = c_pk.constraint_name WHERE c.constraint_type = 'R' For potential foreign keys here are some pointers The data type of foreign and referenced key should be same The values in foreign and referenced key columns should be same The child and parent tables must be on the same database For query you can use the below declare prec number; begin for rec in (SELECT atc1.table_name atc1_tn, atc1.column_name atc1_cn, atc2.table_name atc2_tn, atc2.column_name atc2_cn FROM user_tab_cols atc1, user_tab_cols atc2 WHERE atc1.data_type = 'NUMBER' AND atc1.data_type = atc2.data_type AND atc1.table_name != atc2.table_name AND atc1.high_value <= atc2.high_value AND atc1.num_distinct <= atc2.num_distinct ) loop execute immediate 'select count(1) from ' || rec.atc1_tn || ' a where EXISTS (SELECT 1 FROM ' || rec.atc2_tn || ' b where a.' || rec.atc1_cn || '!=' || ' b.' || rec.atc2_cn || ' )' into prec; if prec = 0 Then dbms_output.put_line('potential foreign key rec:table1 ' || rec.atc1_tn || ' table2: ' || rec.atc2_tn || ' column1: ' || rec.atc1_cn || ' column2: ' || rec.atc2_cn); end if; end loop; end;
Can I create a link into rundeck that goes straight to the execution of the job? I have a job without parameters that I would like to give my users to start from a HTML page outside of rundeck. I'd prefer to not go through additional clicks with the output selection and debug options, but go straight to e.g. https://host/rundeck/project/myproject/execution/show/35#output But of course the 35 would need to be replaced with $new or something similar, and know that I want to trigger a certain job. Is there a way? Something like https://host/rundeck/project/myproject/execution/show/$new&jobuuid=cce4b26b-8e8a-4920-bd99-4fa3092a3a02 ? The closest approach is to use this URL http://localhost:4440/project/ProjectEXAMPLE/job/show/030801bc-6933-472f-ae61-cae11121ca6e (it needs only a click on the "History" tab). Thanks! Yes this is the URL that I'm using to link to the job now, I just hoped to get rid of the last click. No such thing apparently. I imagine I could trigger the job via the API, and then redirect the user to the jobnumber#output URL, but that doesn't seem worth the trouble.
Page LDAP query against AD in .NET Core using Novell LDAP I am using the Novell LDAP library for making queries to an Active Directory from a .NET Code application. Most of the queries succeed, but some return more than 1000 results, which the AD server refuses. I therefore tried to find out how to page LDAP queries using Novell's library. The solution I put together looks like public IEnumerable<LdapUser> GetUsers() { this.Connect(); try { var cntRead = 0; // Total users read. int? cntTotal = null; // Users available. var curPage = 0; // Current page. var pageSize = this._config.LdapPageSize; // Users per page. this.Bind(); this._logger.LogInformation("Searching LDAP users."); do { var constraints = new LdapSearchConstraints(); // The following has no effect: //constraints.MaxResults = 10000; // Commenting out the following succeeds until the 1000th entry. constraints.setControls(GetListControl(curPage, pageSize)); var results = this._connection.Search( this._config.LdapSearchBase, this.LdapSearchScope, this._config.LdapUsersFilter, this.LdapUserProperties, false, constraints); while (results.hasMore() && ((cntTotal == null) || (cntRead < cntTotal))) { ++cntRead; LdapUser user = null; try { var result = results.next(); Debug.WriteLine($"Found user {result.DN}."); user = new LdapUser() { AccountName = result.getAttribute(this._config.LdapAccountAttribute)?.StringValue, DisplayName = result.getAttribute(this._config.LdapDisplayNameAttribute)?.StringValue }; } catch (LdapReferralException) { continue; } yield return user; } ++curPage; cntTotal = GetTotalCount(results); } while ((cntTotal != null) && (cntRead < cntTotal)); } finally { this._connection.Disconnect(); } } and uses the following two helper methods: private static LdapControl GetListControl(int page, int pageSize) { Debug.Assert(page >= 0); Debug.Assert(pageSize >= 0); var index = page * pageSize + 1; var before = 0; var after = pageSize - 1; var count = 0; Debug.WriteLine($"LdapVirtualListControl({index}, 
{before}, {after}, {count}) = {before}:{after}:{index}:{count}"); return new LdapVirtualListControl(index, before, after, count); } private static int? GetTotalCount(LdapSearchResults results) { Debug.Assert(results != null); if (results.ResponseControls != null) { var r = (from c in results.ResponseControls let d = c as LdapVirtualListResponse where (d != null) select (LdapVirtualListResponse) c).SingleOrDefault(); if (r != null) { return r.ContentCount; } } return null; } Setting constraints.MaxResults does not seem to have an effect on the AD server. If I do not set the LdapVirtualListControl, the retrieval succeeds until the 1000th entry was retrieved. If I use the LdapVirtualListControl, the operation fails at the first call to results.next() with the following exception: System.Collections.Generic.KeyNotFoundException: The given key '76' was not present in the dictionary. at System.Collections.Generic.Dictionary`2.get_Item(TKey key) at Novell.Directory.Ldap.Utilclass.ResourcesHandler.getResultString(Int32 code, CultureInfo locale) at Novell.Directory.Ldap.LdapResponse.get_ResultException() at Novell.Directory.Ldap.LdapResponse.chkResultCode() at Novell.Directory.Ldap.LdapSearchResults.next() The code at https://github.com/dsbenghe/Novell.Directory.Ldap.NETStandard/blob/master/src/Novell.Directory.Ldap.NETStandard/Utilclass/ResultCodeMessages.cs suggests that this is just a follow-up error and the real problem is that the call fails with error code 76, which I do not know what it is. I therefore think that I am missing something in my query. What is wrong there? I fixed it - in case someone else runs into this: After some Internet research, I found on https://ldap.com/ldap-result-code-reference-other-server-side-result-codes/#rc-virtualListViewError what error code 76 means and that the LdapVirtualListResponse contains more information. 
In my case, the error was https://ldap.com/ldap-result-code-reference-other-server-side-result-codes/#rc-sortControlMissing - so it seems that a sort control is required for paging. In order to fix it, I added constraints.setControls(new[] { new LdapSortControl(new LdapSortKey("cn"), true), GetListControl(curPage, pageSize) }); Thanks for providing the resolution! This was super helpful.
getx navigation Error Null check operator used on a null value I need when the user click the button to add data to firebase, the snake bar pop up with a success message, then go back. But there is NO navigation occurs. the error is occurs when i use navigator is: Error Null check operator used on a null value the code is: class AddProductController extends GetxController { addProduct() async { if ((addProductFormKey.currentState?.validate() ?? false) && pickedPhoto != null) { String docID = FirebaseFirestore.instance.collection('products').doc().id; var url = ""; try { UploadTask uploadTask = FirebaseStorage.instance .ref('users/products/$docID/') .putFile(pickedPhoto!); uploadTask.whenComplete(() async { url = await FirebaseStorage.instance .ref('users/products/$docID/') .getDownloadURL(); await FirebaseFirestore.instance .collection("products") .doc(docID) .set({ "imgUrl": url, }, SetOptions(merge: true)); Get.snackbar( "Sucess", "Your Product Is Added", snackPosition: SnackPosition.BOTTOM, ); }).catchError((onError) { print(onError); }); return Get.toNamed(Routes.PRODUCTS); // => doees not work } catch (e) { print("\n Error $e \n"); } } } } This error occurs when you use this operator (!). you have to use using ternary operator example as : void main() { Students? student; print(student?.name); } if i am sure that a variable is never be null at this place, i can use it safely. if i remove the navigator line no error happens. the error occurs once i put the navigator with no navigation. One example is this pickedPhoto!? yes, as i did it after checking if (pickedPhoto != null &&) Implements Getx Service in AddProductController class. it will look like this: class AddProductController extends GetxController implements GetxService{...... } why? my code is work too. why do you do so?
How can I get jquery roundabout to work? I am having a hard time getting it to work at all. I'm sure it's something simple but I can't figure it out. I have the file saved as index.html, I have uploaded to my FTP server along with my jquery.js file, and the roundabout files. I have them linked in my index.html file, with the unordered list and the stylesheet. But for some reason it displays the unordered list without the effect of the jquery roundabout. Here is a screenshot of my code Here is the website Please help! You need to reference the jQuery library before any jQuery code. If you hit Ctrl+Shift+J, you can see the error in the console.
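For reference, the load order the answer describes looks like this as a minimal page skeleton. The file names and the `ul` selector are assumptions about your markup:

```html
<!-- Scripts must load in this order: jQuery first, then the plugin,
     then your own code. File names here are assumptions. -->
<head>
  <link rel="stylesheet" href="style.css">
  <script src="jquery.js"></script>
  <script src="jquery.roundabout.js"></script>
  <script>
    $(document).ready(function () {
      // Turn the unordered list into the roundabout carousel.
      $('ul').roundabout();
    });
  </script>
</head>
```

If jQuery loads after the plugin or after your initialization code, `$` is undefined at that point and the list renders as plain HTML, which matches the symptom described.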
Trustless exchange without a third party Is there a cryptographic (or even not entirely cryptographic) way of exchanging objects between Alice and Bob that would not require a third party, and Alice and Bob would not need to trust each other? For example, Alice and Bob have Y and X objects (some information) and want to exchange them (in other words both Alice and Bob after transaction will have access to the set Y∪X). How can they do this directly to each other? Is it possible to solve this problem, for example, using private keys, zk-proofs, etc? Is it possible to solve it in principle? Thank you! To make the problem more concrete, assume that Alice and Bob have cryptographic hashes of the desired data. (We can also assume Alice and Bob are identified by public keys but that doesn't seem so important.) Assuming Alice and Bob take turns communicating and the protocol has a deterministic number of rounds, then there is a last round where Bob, say, will send a message but not receive a response. At this point Bob already knows Y and Alice does not know X, so Bob could just not send the message. It's probably possible to make it so that Alice at least has some of X, in this case. This is known as the fair exchange problem. There's lots of research on the topic; see the link for an overview and starting point. As described in the introduction of Efficiently Making Secure Two-Party Computation Fair which is indirectly referenced by that answer, the two approaches to this problem either involve a Trusted Third Party (TTP) or Gradual Release which corresponds one party potentially getting only partial information. The introduction briefly includes an argument similar to the one in my comment. That said, the amount of involvement and what the TTP is trusted to do can be somewhat limited as described in the paper. @DerekElkins, absolutely. 
There is work on reducing or eliminating the need for a TTP, including gradual release, as well as Bitcoin-based methods (though perhaps you could consider the Bitcoin network as the TTP). Anyway, hopefully this should be enough of a starting point to learn more.
How to update database after drag and drop using javascript/ajax? For some context, I have have three divs for dragging and dropping items. The first div will be the default div to start things off. Then I want the second two divs to update the database when I drop content in there. These divs will be dynamic and I may have 5 or 6 divs at any give time, but also could only have 2 or 3. I am NOT very experienced with javascript. I have gotten this far, but am struggling with getting further. This is a three part question: A) How do I provide a sum PER DIV - Right now the sum div sums up no matter which div I drop the element into. I'm also looking to subtract from the sum when an object leaves the div as well as delete the entry from the database (NOT INCLUDING THE ORIGINAL FIRST DIV). B) How do I send multiple $_POST values to update.php using an ajax request? I've tried data:{data: data, name: name, check: check, amount: amount},, but that makes my sum div stop working. If I run it as is, I'm just getting the "amount" value (which is the div id), but it's not posting as 'amount'. I'm trying to get the div id of the dropped element(amount), the div content of the dropped element (the name) and the div id that the object was dropped into, or the parent div's id (the check). As I said before this javascrip/ajax thing is new to me. I'm not finding what I'm looking for after searching for hours. C) Is there a way to return the 'total' variable to my script FROM update.php instead of adding things up inside of the script? 
javascript function allowDrop(ev) { ev.preventDefault(); } function drag(ev) { ev.dataTransfer.setData("text", ev.target.id); } var total = 0; function drop(ev, ui) { ev.preventDefault(); var data = ev.dataTransfer.getData("text"); ev.target.appendChild(document.getElementById(data)); $.ajax({ type:'POST', url:'../update.php', data:{data:data}, success: function(){ total += parseFloat(data); $('#sum').html(total); } }); } html <h2>Drag and Drop</h2> <div id="center"> <div id="2018-08-01" ondrop="drop(event)" ondragover="allowDrop(event)"> <div class="move" draggable="true" ondragstart="drag(event)" id="1056.23">Mortgage</div> <div class="move" draggable="true" ondragstart="drag(event)" id="10">Fees</div> </div> <div id="2018-08-05" ondrop="drop(event)" ondragover="allowDrop(event)"></div> <div id="sum">Sum Here</div> <div id="2018-08-15" ondrop="drop(event)" ondragover="allowDrop(event)"></div> <div id="sum">Sum Here</div> </div> update.php $name = $_POST['name']; $check = $_POST['check']; $amount = $_POST['amount']; $sql = "INSERT INTO bills (bills_id, bills_name, bills_check, bills_amount) VALUES ('', '$name', '$check', '$amount')"; mysqli_query($connect, $sql); My first note is be more specific when naming your variables, this will help you understand where problems are occurring. For example, this code may be valid, but it's very difficult to read: data:{data:data} Regarding your issues: a) You're identifying (<div id="sum">) and referencing the sum HTML objects with a non-unique ID (e.g. you have multiple HTML elements with the ID 'sum'). While you're allowed to assign non-unique values to an HTML tag's ID parameter, it's not the right way to do things. JQuery enforces the uniqueness, so when you request $('#sum') it will only return the first element that has that ID, and will ignore the rest. b) Regarding sending multiple values to a POST request. I need to rename some things (see my note above) so this will be clear. 
Your line that is defined as: var data = ev.dataTransfer.getData("text"); Should be name something better like this: var amountData = ev.dataTransfer.getData("text"); Then you want to modify it's structure to be more informative, so we will define it like this: var amountData = { name: "some name", check: check, amount: ev.dataTransfer.getData("text") } Your ajax call should look like this: $.ajax({ type:'POST', url:'../update.php', data:{data: amountData}, context: amountData, success: function(){ total += parseFloat(this.amount); $('#sum').html(total); } }); I'm using the javascript functional context to transfer your data into the success method. This topic is complex, and you can read about it, I'm not going into details here. Note that I have added the context: amountData configuration. This means that inside of your success method, this is equal to the amountData variable. Now that this = amountData I can reference it's object structure using dot-notation. So to get the amount out of the context object, I just reference it like so: total += parseFloat(this.amount); c) Regarding returning data from PHP. PHP isn't my strong suit, but basically you want to output valid JSON in your PHP page response. So assuming your PHP is returning JSON like this: { "total": "1.00" } Read the comments on this question: Returning JSON from PHP to JavaScript? Specifically comment #2 (as of this edit) Next you want to modify your AJAX call to look like this: $.ajax({ type:'POST', url:'../update.php', data:{data: amount_data}, dataType: 'json', error: function(xhr, status, error) { // Get some feedback in your browser console if something goes wrong. console.info(status + ": " + error); }, success: function(responseData){ total = parseFloat(responseData.total); // You need to better identify your HTML tags. $('#sum-more-unique-id').html(total); } }); Responses and B and C are mutually exclusive depending on your approach.
React-Native: How to scale font size to support many different resolutions and screens in both Android and iOS? I have huge trouble trying to figure the correct font size on the many different screens that exist. Currently I have a helper function called getCorrectFontSizeForScreen. export function getCorrectFontSizeForScreen(currentFontSize){ const maxFontDifferFactor = 6; //the maximum pixels of font size we can go up or if(Platform.OS === 'ios'){ //iOS part let devRatio = PixelRatio.get(); this.fontFactor = (((screenWidth*devRatio)/320)*0.55+((screenHeight*devRatio)/640)*0.45) if(this.fontFactor<=1){ return currentFontSize-float2int(maxFontDifferFactor*0.3); }else if((this.fontFactor>=1) && (this.fontFactor<=1.6)){ return currentFontSize-float2int(maxFontDifferFactor*0.1); }else if((this.fontFactor>=1.6) && (this.fontFactor<=2)){ return currentFontSize; }else if((this.fontFactor>=2) && (this.fontFactor<=3)){ return currentFontSize+float2int(maxFontDifferFactor*0.85); }else if (this.fontFactor>=3){ return currentFontSize+float2int(maxFontDifferFactor); } }else{ //Android part let scale = screenWidth/375; //got this from the f8 facebook project this.fontFactor = (((screenWidth)/320)*0.65+((screenHeight)/640)*0.35) if(this.fontFactor<=1){ //for 0.8 until 1.0 use 8 (800x600 phone this.fontFactor == 0.961) return float2int(scale * (currentFontSize+8)); }else if((this.fontFactor>=1) && (this.fontFactor<=1.6)){ //for 1.0 until 1.5 use 4 (NEXUS 5 this.fontFactor == 1.055) return float2int(scale * (currentFontSize+4)); } else{ return float2int(scale * (currentFontSize+2)); } } function float2int (value) { return value | 0; //Converts a float to an integer } and then normalize the font size like this: const styles = StyleSheet.create({ aText:{ color: 'white', fontFamily: 'Whatever', fontSize: getCorrectFontSizeForScreen(14), } }); It seems to work well on iOS but not that well on Android... I guess I need more fontFactor groups to form this list with trial and error!! 
But I wonder, is there a better way to do this? What do others do about this? Thank you! Sizes in React Native are based on points, not pixels, hence you shouldn't need such a complex logic to change font size according to the device dpi. At the contrary, if you want to undo the scaling automatically applied you should divide the pixel size for the pixel ratio.
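The suggestion in the answer amounts to a simple division. Here is a tiny sketch as a pure function — the helper name is made up, and the ratio is passed as a parameter so it can be exercised outside React Native (in an app you would pass PixelRatio.get()):

```javascript
// Hypothetical helper: converts a size designed in physical pixels
// into the point-based units React Native expects, by dividing out
// the device pixel ratio.
function normalizeFontSize(pixelSize, pixelRatio) {
  return Math.round(pixelSize / pixelRatio);
}

// e.g. a 42px design spec on a 3x device becomes 14 points
console.log(normalizeFontSize(42, 3));
```

Since React Native styles are already density-independent, a helper like this is only needed when a design was specified in raw pixels; otherwise the complex per-device branching above can usually be dropped entirely.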
Difficulty using java-gnome in maven I am using the Netbeans 12 IDE on Ubuntu 18.04 LTS and have installed java-gnome using the command: sudo apt-get install libjava-gnome-java I created a small project to test the notification with the main class code as follows: package com.mycompany.notifytest; import org.gnome.gtk.Gtk; import org.gnome.notify.Notify; import org.gnome.notify.Notification; /** * * @author nbs */ public class Main { public static void main(String[] args){ Gtk.init(args); Notify.init("NotifyTest"); Notification n = new Notification("Notify Test", "Test Passed !", ""); n.show(); } } Now whenever I run the project using F6 the notification is shown and the output is: cd /home/nbs/NetBeansProjects/NotifyTest; JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 /snap/netbeans/41/netbeans/java/maven/bin/mvn "-Dexec.args=-classpath %classpath com.mycompany.notifytest.Main" -Dexec.executable=/usr/lib/jvm/java-11-openjdk-amd64/bin/java org.codehaus.mojo:exec-maven-plugin:3.0.0:exec Running NetBeans Compile On Save execution. Phase execution is skipped and output directories of dependency projects (with Compile on Save turned on) will be used instead of their jar artifacts. Scanning for projects...
----------------------< com.mycompany:NotifyTest >---------------------- Building NotifyTest 1.0-SNAPSHOT --------------------------------[ jar ]--------------------------------- Downloading from central: https://repo.maven.apache.org/maven2/org/com/java.gnome/1.0/java.gnome-1.0.pom The POM for org.com:java.gnome:jar:1.0 is missing, no dependency information available --- exec-maven-plugin:3.0.0:exec (default-cli) @ NotifyTest --- ------------------------------------------------------------------------ BUILD SUCCESS ------------------------------------------------------------------------ Total time: 10.252 s Finished at: 2021-03-04T13:37:04+05:30 ------------------------------------------------------------------------ But after building the project when I try to run the .jar file from terminal, the problem is : Exception in thread "main" java.lang.NoClassDefFoundError: org/gnome/gtk/Gtk at com.mycompany.notifytest.Main.main(Main.java:17) Caused by: java.lang.ClassNotFoundException: org.gnome.gtk.Gtk at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581) at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522) ... 1 more Your resulting jar does not contain the needed classes/jar How can I overcome this problem? I think the java-gnome dependency is not present in the jar since if I add a print statemt in the code, it is executed. Unfortunately I don't know anything how you build your application nor do I know how your pom file looks like so ? What have you tried so far? How does your pom file look like? Which Java version/Maven version etc you are using?
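The runtime NoClassDefFoundError happens because `mvn package` builds a jar containing only your own classes, so running it by hand loses the java-gnome classpath entry the IDE provided. One common fix is to build a self-contained ("fat") jar with the Maven Shade plugin — the snippet below is a sketch; the plugin version and mainClass are assumptions for this project:

```xml
<!-- In pom.xml, under <build><plugins>. Version and mainClass are
     illustrative assumptions. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.mycompany.notifytest.Main</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Note that the build log above also shows "The POM for org.com:java.gnome:jar:1.0 is missing", so the dependency coordinates themselves may need fixing first (for instance a system-scoped dependency pointing at the jar installed by apt) before any packaging change can help; java-gnome additionally needs its native JNI library available at runtime.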
What would happen if ocean currents suddenly stopped (or changed)? I'm thinking about a device (in a story I'm writing) that could control ocean currents, and I'm wondering how it could be weaponized. For example, if an ocean current was suddenly stopped, what might happen to the climate around it? Would stopping one ocean current have a significant effect, or would it take stopping many/all? Furthermore, what if an ocean current was reversed or shifted? Could it be used to cause a localized drought or freeze? Or otherwise severe, dangerous weather? The only way to stop currents would be to stop several natural phenomena, such as the rotation of the Earth, plate movements, etc. Which would probably have disastrous consequences. @overlord not quite. I'll let someone else turn this Wikipedia article into an answer: https://en.m.wikipedia.org/wiki/Shutdown_of_thermohaline_circulation You are grossly underestimating the energy required to shut down or reverse an oceanic current. If one has that much energy available they could simply use it directly to destroy their enemies. Moreover, ocean currents flow for a reason, and they move vast amounts of water. What happens to the underlying cause of the ocean current? What happens to the water which used to flow? Oceanic currents have a large effect in redistributing the heat absorbed by the ocean water. Stopping or altering them would, as a consequence, have a large impact on the climate of the regions bordering the current, like we see with the Gulf stream. The Gulf Stream influences the climate of the east coast of North America from Florida to Newfoundland, and the west coast of Europe. Although there has been recent debate, there is consensus that the climate of Western Europe and Northern Europe is warmer than it would otherwise be due to the North Atlantic drift, one of the branches from the tail of the Gulf Stream. More in detail, if the circulation should somehow stop, there would be severe impacts on the weather. Hansen et al.
2015 found, that the shutdown or substantial slowdown of the Atlantic meridional overturning circulation, besides possibly contributing to extreme end-Eemian events, will cause a more general increase of severe weather. Additional surface cooling from ice melt increases surface and lower tropospheric temperature gradients, and causes in model simulations a large increase of mid-latitude eddy energy throughout the midlatitude troposphere. This in turn leads to an increase of baroclinicity produced by stronger temperature gradients, which provides energy for more severe weather events. Many of the most memorable and devastating storms in eastern North America and western Europe, popularly known as superstorms, have been winter cyclonic storms, though sometimes occurring in late fall or early spring, that generate near-hurricane-force winds and often large amounts of snowfall. Continued warming of low latitude oceans in coming decades will provide more water vapor to strengthen such storms. If this tropical warming is combined with a cooler North Atlantic Ocean from AMOC slowdown and an increase in midlatitude eddy energy, we can anticipate more severe baroclinic storms. Answering your question, yes, altering the oceanic currents would have a large impact on the weather. However keep in mind that it would take some years to see it. If that is a suitable time scale for your world, let it be. But if you rely on it to kill the bear chasing you while you harvest blueberries in the wild you will be history by the time the weapon strikes.
Node.js - What's the scope of the require()d modules? I am trying to organize a Node.js application developed with Express 4 and am confused about the scope of modules which are imported with require(). Imagine that I use require('./services/user') to import a service in a module such as routes/user.js: var userService = require('./services/user'); Then I do the same require('./services/user') in another module routes/department.js. My question is: is userService the same instance in user.js and department.js or each of them has it's own userService object? That is to say, once you've exported some element through module.exports = XXX if you require the same file, will you get always the same instance? Could you show me where in the Node.js docs that's specified? Are routes/user.js and route/department.js required by the same code? Yes, in the express app.js module: app.use('/users', userRoutes);app.use('/departments', departmentRoutes); If I understand your question correctly, you have theses files: . |_. app.js |_. routes/ |_. user.js |_. department.js |_. services/ |_. user And your code do this: app.js call user.js user.js call user app.js call department.js department.js In that case, at the first time user is required, it is put on cache in require.cache. Then the second time it is called, the caller get require.cache['./service/user'], where is stored your object. As such, you do have the same object in both department.js and user.js. Source: http://nodejs.org/docs/latest/api/modules.html#modules_caching http://nodejs.org/docs/latest/api/modules.html#modules_cycles (helped me understand) Understanding Node.js modules: multiple requires return the same object? Self modifying code in node.js, would cluster work? (about require.cache) EDIT: Other helpful links: node.js require() cache - possible to invalidate? Thanks! Perfect answer. 
So if I understood well, as long as there's a common ancestor module, all imported modules are cached and shared among all of them? That's right; as long as it's the same instance of Node that is running, the required modules are shared among all modules.
Simple IF/ELSEIF not comparing properly I'm new to PHP and this was asked before but the answers just won't cut it in my scenario. I have this piece of code: if ($node->field_available_for_payment[LANGUAGE_NONE][0]['value']==0){ } elseif($node->field_available_for_payment[LANGUAGE_NONE][0]['value']==1){ $status="awaitingapproval"; } elseif (3===3){ $status="paid"; } elseif ($node->field_shipped[LANGUAGE_NONE][0]['value']==1){ $status="shipped"; } var_dump($status); I get back the value awaitingapproval (the first if/elseif that evaluates to TRUE). However, shouldn't I be getting back 'paid' instead, since the 3===3 comparison evaluates to TRUE as well? All the other S.O. answers regarding this type of question mention the '=' operator vs '==', which is correct in my code. What is var_dump($node->field_available_for_payment[LANGUAGE_NONE][0]['value']);? If it is false, 0, '' or null then that is the only IF condition that will be evaluated because those == 0. @kingkero so should I swap this for a SWITCH instead? No, it should not, because the chain reads if (first_statement_is_true) do something, else if (...) do something else; elseif here is a keyword, not an independent check. Control structures like if/else stop executing once a truthy statement is reached. Since an earlier block is true, the other blocks are never evaluated. If the branches before it should ever fail (i.e. evaluate to false), then the 3===3 branch will always evaluate to true and its code will be executed.
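The first-true-branch-wins behaviour is easy to see; here is the same chain sketched in Python (the function and argument names are illustrative, not part of the original PHP):

```python
def status_for(payment_value, shipped=0):
    # Mirrors the PHP if/elseif chain: only the FIRST branch whose
    # condition is true runs; everything after it is skipped.
    status = None
    if payment_value == 0:
        pass
    elif payment_value == 1:
        status = "awaitingapproval"
    elif 3 == 3:          # always true, but only reached if the branches above failed
        status = "paid"
    elif shipped == 1:    # unreachable: the 3 == 3 branch always matches first
        status = "shipped"
    return status

print(status_for(1))  # awaitingapproval: 3 == 3 is never even checked
print(status_for(2))  # paid
```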
Android add chips anywhere inside EditText I would like to do something similar to this: In other words, these "tags" are optional and can be placed anywhere inside a text. These tags are used to indicate how the text after it will be treated when the user submits it to the server. Seems like you're asking for third-party libraries. Have you tried searching for one first? I'm sure there are already a couple of them out there. Yes, I did find some chips libs, but all of them work the same, meaning you cannot add custom text inside the edit text; you can only add the chips. There actually are many libraries when you search Android Chips Edittext. Please describe what you've tried and any errors or problems you are having. As I said, I found limitations, not errors. None of them allowed me to add custom text in between tags if I wanted to. Consider using this, it's a good one: https://github.com/DoodleScheduling/android-material-chips thanks Abdul, I'll try that one! @user3900456 always show your research when asking a question. It helps potential answerers to build on it. That lib doesn't seem to have this functionality I need. Maybe I should just try a different approach. Thanks for the help! @user3900456 hi, did you implement this functionality?
Version controlled South migrations in virtualenv I have a Django site placed in the folder site/. It's under version control. I use South for schema and data migrations for my applications. Site-specific applications are under the folder site/ so they are all version-controlled along with their migrations. I manage a virtualenv to keep third party components dry and safe. I install packages via PyPI. The list of installed packages is frozen in requirements.txt so they can be easily installed in another environment. The virtualenv is not under VCS. I think that is good: the virtualenv can be easily deleted and reconstructed at any time. If I need to test my site, for instance, using another version of the Python interpreter, I simply activate another virtualenv. I'd like to use South for third party packages, though. Here comes the problem. Migration scripts are stored in the application's folder, so they are outside of my site's repository. But I want migration scripts to be under version control so I can run them on different stages as well. I don't want to version control the whole virtualenv, only the migration scripts for third party applications. How can I resolve this conflict? Is there any misconception in my scenario? The SOUTH_MIGRATION_MODULES setting allows you to put migration modules for specified apps wherever you want them (i.e. inside your project tree). I think it depends a little bit on your version control system. I recommend using a sparse tree, one that only manages the migration folders of the various packages. Here I see two alternatives: Make a truly sparse tree for all packages, one that you check out before creating the virtualenv. Then populate the virtualenv, putting stuff into the existing folders. Collect all migrations into a separate repository, with a folder per project/external dependency. Check this out into the virtualenv, and create symlinks, linking from each project to its migration folder.
In either case, I believe you can arrange for the migrations to exist as a separate project, so you can install it with the same process as you install everything else (easy_install/pip/whatever).
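For the SOUTH_MIGRATION_MODULES approach, the setting is just a dict mapping app labels to dotted module paths; a sketch (the app name and dotted path below are hypothetical):

```python
# settings.py (excerpt). "some_thirdparty_app" and the dotted path are
# placeholders: the package itself lives in the untracked virtualenv, but
# South is told to look for (and write) its migrations inside the
# version-controlled project tree instead.
SOUTH_MIGRATION_MODULES = {
    "some_thirdparty_app": "myproject.third_party_migrations.some_thirdparty_app",
}
```

After that, ./manage.py schemamigration some_thirdparty_app --auto writes new migrations into the project tree, where they can be committed like any other file.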
We Need a Crystal Clear "Rules for Asking Questions" - Agree or no? I hesitate posting assertions/questions like this for fear of offending members who have worked hard to improve Health.SE and Stack Exchange in general. Therefore, please accept my suggestion in the spirit of continuous quality improvement, as I do not intend to criticize anyone for previous efforts. Our Help Center > Asking section contains some well-written, insightful, and helpful guidance, but, unfortunately, these pearls of wisdom are scattered about 14 different 'articles' (for lack of a better term). I suspect that the Tour was developed as an attempt to solve this problem, and for newcomers who complete the Tour, I think it does an excellent job. However, as we all know, most newcomers do not complete the Tour. (I base this assertion on how such things work on most websites - if you have path analysis data proving me wrong, please say so! ... I would love to be wrong on this point.) Some newcomers will never complete the Tour or click on Help, but we can't do much about such malcontents to begin with (other than repeatedly deleting their posts). Many sincere newcomers though will look to the Help menu for help in understanding the 'forum rules' (guidelines), and it is at that point a streamlined presentation of Rules for Asking Questions will, I believe, prove beneficial. IMHO we should, at least for now, keep the present 14 items in the Asking section, but put READ THIS FIRST - Rules for Asking Questions (or similar) up top. And I believe we should call them RULES, otherwise we are being namby-pamby, wishy-washy people-pleasers who simply invite trouble. This new streamlined explanation of the rules, should be just one component of an overall, coordinated effort to significantly reduce the number of off-topic (and worse) questions. 
One other important component would be: Before asking a question, require newcomers to click a box--or perhaps even type in their initials--affirming two statements: 1) "I completed the Health.SE Tour" and 2) "I read Rules for Asking Questions{hyperlinked to the Rules}, I understand these rules, and I promise to abide by them." (HT @Narusan for this suggestion in his answer to Are the questions threating the Health.Se Community?) Agree? Disagree? Modifications? Thanks! Mark P.S. Btw, I will help write a streamlined Rules for Asking Questions, if there is consensus to move forward. I'm not one to point out a problem and then expect someone else to do the work. That would be poor form indeed. I've just put in some numbers... Shocking, they really are. Wow. I agree that action must be taken! I totally agree, but checkboxes are simply too easy to skip through. People almost never read "agreements" in anything before clicking confirmation that they read them. I think we need to be stronger in giving them instructions in a way they can't avoid. Good point @DoctorWhom, and another important variable to test, particularly with regard to what type of 'barrier to entry' is sturdy enough to deter the folks who ask inane questions, but not too difficult such that it deters potentially great participants. Regarding the numbers In the last 30 days, the query @Erich created for me returned 92 low-quality posts. The total number of posts in that timespan was 350. That makes 26% of the posts - simply put - garbage.1 Imagine every fourth email in your inbox being spam. I would have a) activated or upgraded my spam filter or b) thrown my email away. This is why we need to take action. What I would do Being honest, I skipped through the tour. Someone posted the link to the tour in one answer or question I asked, I clicked on it, skipped everything and then just proceeded to the site.
This is why one of my first posts was something I wasn't supposed to do: I answered a question asking for personal medical advice. Now, if we had something other than the tour, mandatory to complete, I would still be able to skip everything, enter my initials / click to continue, and that's it. I feel like this change would be a bit more pedantic but not much of an improvement. Suggestion Why not make this message pop up when a post is about to be posted? The pop-up message could still be skipped, but users are reminded of site rules immediately before posting. Also, because there were some misunderstandings about which questions are off-topic, we could include the rule of thumb: Welcome to Health.SE. We would like to inform you that if you have a question about you and your health, it is probably off-topic. You can try to rephrase the question to make it more general and apply to everyone. Similarly, before answering, this message could pop up: Thank you for your effort! We would like to remind you that every answer must be backed up with references. See this list of reliable references. And additionally, if there are no links in the answer, simply prevent the answer from being posted. (If one has books as a source, one would need to link to the ISBN or something.) These messages are so short that with a timer of about 10 seconds before being able to proceed, it is very difficult not to read them if they are printed in large letters on the screen. When should the message pop up? We should not make this based on reputation: users with a reputation of >100 might just be trusted users from somewhere else who have not had much contact with the site. In fact, that should be the deciding factor: how much contact one has had with the site. I would suggest this message pop up if the user currently has < 2 posts in this area (for questions: 2 questions; for answers: 2 answers).
As bad answers and questions do get deleted, this means that the message will pop up until 2 quality posts in each category have been made. This seems fairly reasonable to me. IANAPP Now IANAPP (I am not a professional programmer), but this should be relatively easy to implement: an if-check for questions and answers, and the pop-up message itself. Where to go from now? I would strongly encourage us to do the following: Agree whether we would like to have this feature implemented. (Let the mods have the final say.) Be nice to SE so that this feature gets implemented. Start a query on a) how many first questions were deleted and closed and b) how many first answers were deleted and closed over the last month. Implement the feature for a month as a trial and compare the query at the end of the trial to the query from before. Either accept or reject the implemented feature. 1: I simply defined a low-quality post as either being deleted, closed or having a score of -2 or less. In my eyes, this seems quite reasonable. Excellent ideas! Naturally, we should test everything so that 'what works' is based on actual user behavior as opposed to what we think will work. We definitely need this or something even more stringent in order to post a question, at least for now when we so badly need to teach posters what the purpose and scope of the site is! Perhaps we should open a separate Meta question to discuss proposals of how the popup will work and exactly what it should say. I have some programming background, and there certainly must be a way for the system to recognize when someone uses "my" and "I" in a question and provide some form of pop-up or forced revision before accepting the question. @DoctorWhom Sure, go ahead. I'm currently on holiday and won't be able to do anything on this site (in fact, I'll probably be gone for a few weeks soon), but you have my approval and my +1 for any proposal.
A pop-up does sound like a good idea, and other forms of repetition of the rules may help some of the issues regarding first checking for duplicates, researching a topic themselves before posting, etc. However, regarding people asking personal medical questions, I think it will mainly stifle posting of what otherwise could be made into quality questions. I have a suggestion. Instead of just saying that personal medical questions are prohibited, show people how to rephrase these types of questions in the "right way", and how to determine if their question is even appropriate to be reformatted this way. Here's my reasoning. Why do many people come to this and other SE sites in the first place? Because they have a question that needs answering. And why do they have the question in the first place? Because the question has relevance for themselves. Whether in Stack Overflow, Bicycles, or Chess, people primarily ask about something having to do with their own lives. We have a name for people who sit around and think up questions that probably have nothing to do with themselves: scientists. Most people are not scientists, so it will be difficult and perhaps misguided to suppress people from asking health questions that in some way impact their own lives. I understand that this SE site has an inherent restriction that most others do not, in that asking for and providing medical advice in this format is verboten, but that shouldn't stop us from helping to channel these relevant curiosities into a more general form. I suggest a separate help page under the "Asking" section titled something like: "How to rephrase your personal medical question into one that has broader relevance." Link directly to it from the pop-up, if you like. As it currently stands, the only similar guideline I could find was the two sentence section "Make it relevant to others" on the page "How do I ask a good question?" 
At a minimum, adding a page like this should help cut down on the number of necessary edits of otherwise good questions that were just phrased in a self-referential manner. It should also help to weed out questions that conscientious people realize can't be generalized and therefore shouldn't be posted, like "I've tried Bactrim and I.V. Vancomycin for my MRSA infection but it's still there. Which one(s) should I try next?" Corresponding information could be inserted in the Answering section that instructs people first to make sure the question they would like to answer is phrased generally and even strongly encourages them to edit the question to the preferred format themselves before attempting to answer it. This would help create a "second wave" of question format checkers (after the original poster themselves) who in many cases would get to see the question before the site administrators have a chance to flag it or fix it. DoctorWhom, you seem to be quite good at reformatting personal health questions more generally. Could your method be put into algorithmic form? I would also be happy to contribute. Regarding the method of reformatting questions: a) take out every sentence about a personal medical background. This means that all that is left from a question is How can I overcome anxiety. b) Change I to One. c) Try to include general information instead of the personal medical background. I have fear of written exams changes into if they fear exams. Or, even more general: if they fear events that will significantly affect their future life opportunities. Either d) Hit "Submit" or e) Realise that this changed the question too much and that the edit won't get accepted. f) Close Sorry I missed this - I like your idea a lot, @Fonebone. 
I did something like that for the revision of our How To Ask page (https://health.meta.stackexchange.com/questions/761/improving-our-how-to-ask-page) , and maybe I could do a more in depth version at some point that we can link from there.
Does the order of columns in a WHERE clause matter? Does the order of the columns in a WHERE clause affect performance? e.g. Say I put a column that has a higher potential for uniqueness first, or vice versa? With a decent query optimiser: it shouldn't. But in practice, I suspect it might. You can only tell for your cases by measuring. And the measurements will likely change as the distribution of data changes in the database. +1: While 99.99% of the time, it doesn't matter... I just ran into an issue today where an extremely simple query against well-indexed fields was choosing the wrong execution plan unless the order of the two criteria in the WHERE clause was reversed. Funny: the query planner suggested a missing index with exactly the same specifications as the one the optimizer chose when I reversed the WHERE clause. Funnier: the order of the criteria that was faster was backward compared with the index order. Adding an index hint (or removing the ORDER BY) resolved it. (I am using SQL Server 2005.) You haven't answered the question at all @JᴀʏMᴇᴇ I have. The answer is: not all questions, especially in the general case, have an answer. This is an example of such a question. @Richard - all questions have an answer. If that answer is unknown or the question is not specific enough for an answer, you can request more info. To say "it shouldn't but it might" is not an answer. It's an extremely vague one at best. For Transact-SQL there is a defined precedence for operators in the condition of the WHERE clause. The optimizer may re-order this evaluation, so you shouldn't rely on short-circuiting behavior for correctness. The order is generally left to right, but selectivity/availability of indexes probably also matters. Simplifying your search condition should improve the ability of the optimizer to handle it.
Ex: WHERE (a OR b) AND (b OR c) could be simplified to WHERE b OR (a AND c) Clearly in this case if the query can be constructed to find if b holds first it may be able to skip the evaluation of a and c and thus would run faster. Whether the optimizer can do this simple transformation I can't answer (it may be able to), but the point is that it probably can't do arbitrarily complex transformations and you may be able to affect query performance by rearranging your condition. If b is more selective or has an index, the optimizer would likely be able to construct a query using it first. EDIT: With regard to your question about ordering based on uniqueness, I would assume that any hints you can provide to the optimizer based on your knowledge (actual, not assumed) of the data couldn't hurt. Pretend that it won't do any optimization and construct your query as if you needed to define it from most to least selective, but don't obsess about it until performance is actually a problem. Quoting from the reference above: The order of precedence for the logical operators is NOT (highest), followed by AND, followed by OR. Parentheses can be used to override this precedence in a search condition. The order of evaluation of logical operators can vary depending on choices made by the query optimizer. -1 There is no defined order of evaluation except in a CASE statement or (for some obscure backward compatibility reason) when chaining together EXISTS conditions. See this article for a demonstration @MartinSmith - Given that I've quoted from the T-SQL documentation, I would say that the author's observation that it's not a bug is debatable. The docs clearly specify that the operators in a WHERE clause obey precedence rules. Of course, it may be that the documentation is wrong and I've been misled.
@tvanfosson - Sorry, didn't notice that specific phrase in the docs that you based your answer on, so I have retracted my DV, but it looks like it has been corrected in later versions to say The order of evaluation of logical operators can vary depending on choices made by the query optimizer @MartinSmith - typical internet problem, the world revolves beneath you and makes you look foolish occasionally. I've updated; I'm not sure that the answer is actually clearer now but at least it's more current. For SQL Server 2000 / 2005 / 2008, the query optimizer usually will give you identical results no matter how you arrange the columns in the WHERE clause. Having said this, over the years of writing thousands of T-SQL commands I have found a few corner cases where the order altered the performance. Here are some characteristics of the queries that appeared to be subject to this problem: If you have a large number of tables in your query (10 or more). If you have several EXISTS, IN, NOT EXISTS, or NOT IN statements in your WHERE clause. If you are using nested CTE's (common-table expressions) or a large number of CTE's. If you have a large number of sub-queries in your FROM clause. Here are some tips on trying to evaluate the best way to resolve the performance issue quickly: If the problem is related to 1 or 2, then try reordering the WHERE clause and compare the sub-tree cost of the queries in the estimated query plans. If the problem is related to 3 or 4, then try moving the sub-queries and CTE's out of the query and have them load temporary tables. The query plan optimizer is FAR more efficient at estimating query plans if you reduce the number of complex joins and sub-queries from the body of the T-SQL statement. If you are using temporary tables, then make certain you have specified primary keys for the temporary tables. This means avoid using SELECT INTO FROM to generate the table.
Instead, explicitly create the table and specify a primary KEY before using an INSERT INTO SELECT statement. If you are using temporary tables and MANY processes on the server use temporary tables as well, then you may want to make a more permanent staging table that is truncated and reloaded during the query process. You are more likely to encounter disk contention issues if you are using the TempDB to store your working / staging tables. Move the statements in the WHERE clause that will filter the most data to the beginning of the WHERE clause. Please note that if this is your solution to the problem, then you will probably have poor performance again down the line when the query plan gets confused again about generating and picking the best execution plan. You are BEST off finding a way to reduce the complexity of the query so that the order of the WHERE clause is no longer relevant. I hope you find this information helpful. Good luck! +1 This is what I have found too - you are told that the optimizer will do it, but when you measure, it suggests otherwise. It all depends on the DBMS, query optimizer and rules, but generally it does affect performance. If a where clause is ordered such that the first condition reduces the resultset significantly, the remaining conditions will only need to be evaluated for a smaller set. Following that logic, you can optimize a query based on condition order in a where clause. In theory any two queries that are equivalent should produce identical query plans. As the order of WHERE clauses has no effect on the logical meaning of the query, this should mean that the order of the WHERE clause should have no effect. This is because of the way that the query optimiser works. In a vastly simplified overview: First SQL Server parses the query and constructs a tree of logical operators (e.g. JOIN or SELECT). Then it translates these logical operators into a "tree of physical operations" (e.g. "Nested Loops" or "Index scan", i.e.
an execution plan) Next it permutes through the set of equivalent "trees of physical operations" (i.e. execution plans) by swapping out equivalent operations, estimating the cost of each plan until it finds the optimal one. The second step is done in a completely naive way - it simply chooses the first / most obvious physical tree that it can, however in the 3rd step the query optimiser is able to look through all equivalent physical trees (i.e. execution plans), and so as long as the queries are actually equivalent it doesn't matter what initial plan we get in step 2, the set of plans to be considered in step 3 is the same. (I can't remember the real names for the logical / physical trees, they are in a book but unfortunately the book is on the other side of the world from me right now) See the following series of blog articles for more detail Inside the Optimizer: Constructing a Plan - Part 1 In reality however often the query optimiser doesn't have the chance to consider all equivalent trees in step 3 (for complex queries there can be a massive number of possible plans), and so after a certain cutoff time step 3 is cut short and the query optimiser has to choose the best plan that it has found so far - in this case not all plans will be considered. There is a lot of behind-the-scenes magic that goes on to ensure that the query optimiser selectively and intelligently chooses plans to consider, and so most of the time the plan chosen is "good enough" - even if it's not the absolute fastest plan, it's probably not that much slower than the theoretical fastest. What this means, however, is that if we have a different starting plan in step 2 (which might happen if we write our query differently), this potentially means that a different subset of plans is considered in step 3, and so in theory SQL Server can come up with different query plans for equivalent queries depending on the way that they were written.
In reality, however, 99% of the time you aren't going to notice the difference (for many simple plans there won't be any difference, as the optimiser will actually consider all plans). Also you can't predict how any of this is going to work, and so things that might seem sensible (like putting the WHERE clauses in a certain order) might not have anything like the expected effect. In the vast majority of cases the query optimizer will determine the most efficient way to select the data you have requested, irrespective of the ordering of the SARGS defined in the WHERE clause. The ordering is determined by factors such as the selectivity of the column in question (which SQL Server knows based on statistics) and whether or not indexes can be used. If you are ANDing conditions, the first one that is false makes the whole condition false, so order can affect performance. Upvoting because you think a downvote was unfair, rather than because you believe the answer merits an upvote, isn't great practice. I believe that a query optimiser will re-arrange the rules, rather than just employ lazy evaluation. SpoonMeiser is correct; the optimizer (at least for SQL Server) uses more complex logic than simple C++-style evaluation.
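A cheap way to check behaviour on a given engine is to compare the estimated plans for both orderings. A minimal sketch using SQLite (chosen only because it ships with Python; SQL Server and other engines may behave differently):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
con.execute("CREATE INDEX idx_b ON t(b)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the human-readable step.
    return [row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

p1 = plan("SELECT * FROM t WHERE a = 1 AND b = 2")
p2 = plan("SELECT * FROM t WHERE b = 2 AND a = 1")

print(p1 == p2)  # the optimizer picks the index on b either way
```

On this engine, at least, the WHERE order is normalized away and idx_b is chosen regardless of how the conditions are written.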
How do we see reflections in water when looking nearly straight down at the surface? According to Snell's law and Fresnel's equations, using an IOR of 1.33 for water, my calculation of the Fresnel equations suggests the reflectivity of water at an angle of incidence of 0 is approx 0.02 (i.e. 2%). Googling turned up multiple references that quoted the same figure (suggesting my calculations are probably correct - or else multiple people are making the same mistakes that I am!). As I understand it, that means that looking straight down (e.g. from a bridge) onto a calm water surface, my eyes should only see 2% of the light versus if I were standing in the water looking up. That seems dark enough that I would never be able to see anything when looking down - not myself, not the bridge, but not the sky either - unless (perhaps) I wore blinkers, enabling my eyes to adjust to the extremely low (50x lower than surroundings!) light levels. And yet ... I can clearly see the outline of myself versus the sky. I couldn't find a convenient bridge, but ... last week I went out with a camera and took photos of calm water at different angles on a sunny day. Experimentally, at an angle of incidence of approx 60 degrees, reflections are bright, it is easy to distinguish fine detail in shadows of what's reflected, and they are fully coloured. My calculations of the Fresnel equations suggest this should be at approximately 5% (20x reduction in light). My first thought is that I need to more accurately measure the angles with the camera (I've been judging by eye), but ... I'm afraid that my Fresnel calculations are woefully wrong. Or ... can we really see a 50x loss in light, without blocking out surrounding light sources? I thought the eye wasn't that sensitive unless it could readjust to low-light conditions?
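For reference, the figures in the question can be reproduced from the Fresnel equations directly (a quick sketch for unpolarized light going from air, n = 1.0, into water, n = 1.33):

```python
import math

def fresnel_unpolarized(theta_i_deg, n1=1.0, n2=1.33):
    """Reflectance averaged over s- and p-polarization (Fresnel equations)."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 / n2 * math.sin(ti))  # Snell's law for the refracted angle
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return (rs + rp) / 2

print(round(fresnel_unpolarized(0), 3))   # 0.02  -- the ~2% at normal incidence
print(round(fresnel_unpolarized(60), 3))  # 0.059 -- roughly the ~5% quoted for 60 degrees
```

So the question's arithmetic checks out: about 2% straight down and about 6% at 60 degrees.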
common-pile/stackexchange_filtered
Active only specific dates in jQuery Datepicker Calendar I am trying to activate only a few dates (5, 10, 17, 25) of every month in the jQuery Datepicker calendar. The rest of the dates should be disabled. I am using just the default code for the datepicker: $( function() { $( "#pacdate, #pay_date" ).datepicker({ dateFormat: "yy-mm-dd" }); } ); This might help - http://stackoverflow.com/questions/7709320/jquery-ui-datepicker-enable-only-specific-days-in-array relevant question, just the opposite http://stackoverflow.com/questions/9742289/jquery-ui-date-picker-disabling-specific-dates
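jQuery UI's beforeShowDay option is the usual hook for this: it is called once per rendered day and returns a [selectable, cssClass] pair. The predicate below is plain JavaScript so it can be exercised without a browser (the element IDs are the ones from the question):

```javascript
var allowedDays = [5, 10, 17, 25];

// Returns the [selectable, cssClass] pair that beforeShowDay expects.
function selectableDay(date) {
    return [allowedDays.indexOf(date.getDate()) !== -1, ""];
}

// Wiring it into the original datepicker call (needs jQuery UI loaded):
// $("#pacdate, #pay_date").datepicker({
//     dateFormat: "yy-mm-dd",
//     beforeShowDay: selectableDay
// });

console.log(selectableDay(new Date(2023, 0, 5))[0]);  // true  (the 5th is allowed)
console.log(selectableDay(new Date(2023, 0, 6))[0]);  // false (the 6th is disabled)
```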
How to see the average milliseconds of functions by VS2022 Performance Profiler? I use the VS2022 Performance Profiler to analyze my app. I want to analyze the average time in milliseconds of some hot path functions. But the Performance Profiler only shows the total milliseconds and percentage. Is it possible to show the average milliseconds of a specific function, like BenchmarkDotNet does? That would require a profiler to know how many times a function had been called, which is not information it gets from just statistically sampling the program counter every so often. That would require some kind of tracing, like hardware with Intel PT to have the CPU record conditional branch destinations. Or dynamic instrumentation like with Intel PIN. Or static instrumentation like gcc -O3 -pg does, to create data for gprof. But all of these have some overhead, so they're definitely not on by default. gcc -pg is pretty lightweight. I don't use VS, so IDK if it has any of these. A microbenchmark that wraps a known repeat loop around a function creates a natural way to say something about average time for the way it was run in that experiment. It might not be the average time "in the wild" when the caches and branch-history have some competition.
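The repeat-loop idea from the last comment is simple to sketch. BenchmarkDotNet does this (far more carefully) for .NET; the minimal version below is in Python only for illustration, since the idea is language-agnostic:

```python
import time

def average_ms(fn, repeats=10_000):
    """Average wall-clock milliseconds per call over a fixed repeat loop."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) * 1000.0 / repeats

# A toy stand-in for a real hot-path function.
print(average_ms(lambda: sum(range(100))) >= 0.0)  # True
```

As the comment notes, this measures the function warm, in a tight loop, which may flatter it compared to its cost in the real call pattern.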
Migration Prefect1-2 and multithreading Please, I need your help. I am looking for a way to migrate my quite complex Flow code from Prefect 1 to Prefect 2. The Flow worked perfectly on the old Prefect version, but when I reworked it for the 2nd version it started working well only on local runs. So, the initial code that works on Prefect 1 looks like this: with Flow("load-somename-api", schedule=schedule) as flow: working_dir = "/home/…" flow.run_config = LocalRun(working_dir=working_dir) flow.executor = LocalDaskExecutor(scheduler="processes", num_workers=6) # Some parameters date_range_type = Parameter("p01_date_range_type", default="LAST_7_DAYS") start_date = Parameter("p02_start_date", default=None) end_date = Parameter("p03_end_date", default=None) campaign_list = Parameter("p04_campaign_list", default="ALL") date_range = get_dates(date_range_type, start_date, end_date) truncate_loading = truncate_table() campaigns_df = get_campaigns(date_range, campaign_list, truncate_loading) # Here the map starts on 6 workers and the subtasks run in parallel, waiting for the completion of “campaigns_df” # There are about 3-4 hundred subtasks campaign_info = get_campaigns_info.map(campaigns_df[0]) # After “campaign_info” completes, the next task should also start as a parallel 6-thread load # There are about 2 thousand subtasks profile_info = get_profiles_info.map( campaigns_df[1], upstream_tasks=[unmapped(campaign_info)] ) # Then we get a list with a lot of int values banners_list = get_banners_list(upstream_tasks=[profile_info]) # First we map the list and execute about 2-3 thousand subtasks banners_info = get_banners_info.map(banners_list) # Second – we map the list in another task and execute them on 6 threads banners_stats = get_banners_stats.map( banners_list, unmapped(date_range), upstream_tasks=[unmapped(banners_info)] ) Code after the rework for Prefect 2: @flow(name="load_somename_api", task_runner=DaskTaskRunner( cluster_kwargs={"n_workers": 3, "threads_per_worker": 2} )) def
load_somename_api_flow(params: InputParameters): date_range = get_dates(params.date_range_type, params.start_date, params.end_date) truncate_loading = truncate_table() campaigns_df = get_campaigns(date_range, params.campaign_list, truncate_loading) # There is trouble, I think. Locally it creates a lot of (about 2-3k) tasks in queue and execute them – 6 parts in batch and finally all flow complete fine # But on server it is only add all tasks in queue and don’t execute them, so when the next task start all flow dropping into error campaigns_info = [get_campaigns_info.submit(i, wait_for=[campaigns_df]) for i in campaigns_df[0]] profile_info = [get_profiles_info.submit(j, wait_for=[campaigns_info]) for j in campaigns_df[1]] banners_list = get_banners_list(wait_for=[profile_info]) for k in range(len(banners_list)): bn_lst = banners_list_slice(k, banners_list) for bn in bn_lst: banners_info = get_banners_info.submit(bn, wait_for=[bn_lst]) get_banners_stats_tasks = get_banners_stats.submit(bn, unmapped(date_range), wait_for=[banners_info]) Logs info when flow falls in error. It submits 1793 subtasks and starts to run the next task. Any idea how to make it work on Prefect2 server (not only local) just like it worked on Prefect1? Some tasks should be performed in 6 or more threads. Not just to be added to the queue, but to be executed also. Thank you!
What is the force of friction between two bodies given their masses and a force pulling them as a unit across a surface? A force of 200 N pulls two blocks together (as one system) across a horizontal table top (µ = 0.800), with $m_A$ = 5.00 kg and $m_B$ = 10.0 kg. Find the acceleration of the system. Find f$_k$ between B and A. I found a to be 5.485 m/s², which agrees with the textbook's 5.5 m/s². The textbook says b is 173 N, but I can't seem to get a number even close to that. How does one go about solving this kind of problem? Please provide the calculations or formulas in the order they are needed, instead of just abstract steps. If blocks A and B are moving together as a system, there is no kinetic friction between the two of them (because they are stationary relative to each other). Draw your free-body diagram of both blocks individually, and write an expression for all the forces acting on each block. Share what you find by editing your question, so that we may know why you might not be getting the answer you desire. $\sum F = ma$ Edit: You did not explain the question well enough, but I can see that the friction is indeed equal to 172.5 N, assuming that one mass is on top of the other and the force is applied to the top mass. Ok, I figured it out; sorry, the fact that block A is on top of block B is important. F$_a$ = 5.485 m/s² (5 kg) = 27.425 N. $\Sigma F_x = 200\,N ∴ f_s = 200\,N - F_a = 200 - 27.425 = 172.575\,N$
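For reference, the whole chain of reasoning can be written out in two steps (using $g = 9.81\ \mathrm{m/s^2}$ and assuming, as the question's last edit states, that block A is on top of block B with the 200 N force applied to A):

```latex
% Whole system (A+B) sliding on the table:
a = \frac{F - \mu_k (m_A + m_B)\, g}{m_A + m_B}
  = \frac{200 - (0.800)(15.0)(9.81)}{15.0}
  \approx 5.49\ \mathrm{m/s^2}

% Block A alone: the 200 N pull acts on A, and the friction f from B on A
% is the only other horizontal force, so Newton's second law gives
F - f = m_A a
\quad\Longrightarrow\quad
f = 200 - (5.00)(5.485) \approx 172.6\ \mathrm{N} \approx 173\ \mathrm{N}
```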
Residue for quotient of functions Let $f, g$ be holomorphic functions on a disk $\mathbb{D}(z_0,r)$ centered at $z_0$ and of radius $r>0$. Suppose $f$ has a simple zero at $z_0$. I want to find an expression for $Res(g/f, z_0)$, but I'm not sure what this expression should look like. Here's my guess: since $f$ has a simple zero at $z_0$, $\exists h(z)$ holomorphic on $\mathbb{D}(z_0,r)$ such that $f(z)=(z-z_0)h(z)$ and $h(z_0)\ne 0$. So we can represent $g/f = \frac{g(z)}{(z-z_0)h(z)}$, where we observe that $z_0$ is a pole of order 1 of $g/f$. This implies that we can express $$g/f=\frac{a_{-1}}{z-z_0}+\sum\limits_{n=0}^\infty a_n(z-z_0)^n$$ Hence, $$a_{-1}=\frac{g(z)}{f(z)}(z-z_0)-\sum\limits_{n=0}^\infty a_n(z-z_0)^{n+1}$$ Does this look like a correct approach? I think that this expression is too general because of the infinite series on the right-hand side. Is there a clue I'm missing? Update: Another approach might be this: $$g/f = \frac{g(z)}{(z-z_0)h(z)}=\frac{c_0+c_1(z-z_0)+\dots}{d_1(z-z_0)+\dots}=\frac{c_0}{d_1(z-z_0)+\dots}\\ +\frac{c_1(z-z_0)+\dots}{d_1(z-z_0)+\dots}=\frac{a_{-1}}{z-z_0}+\sum a_n(z-z_0)^n$$ But what next? $Res(g/f, z_0) = g(z_0)/f'(z_0)$. The proof of it is in most textbooks and is very straightforward. Think differentiation. Here's what I've finally come up with, with credit to the answer by Zaid Alyafeai. Since $f$ has a simple zero at $z_0$, we can express it as $f(z)=(z-z_0)h(z)$, where $h(z)$ is holomorphic on $\mathbb{D}(z_0, r)$ and $h(z)\ne 0$ on this set, so that $1/h(z)$ is holomorphic on $\mathbb{D}$ and has Taylor series $1/h(z)=\sum\limits_{k=0}^\infty a_k(z-z_0)^k$. Thus $\frac{1}{f}= \frac{1/h(z)}{z-z_0}=\sum\limits_{k=0}^\infty a_k(z-z_0)^{k-1}=\frac{a_0}{z-z_0}+O(1)$. $g(z)$ is holomorphic on $\mathbb{D}$, so it has Taylor series $g(z)=\sum\limits_{k=0}^\infty b_k (z-z_0)^k=b_0+O(z-z_0)$. So that $$\frac{g}{f}=\frac{b_0a_0}{z-z_0}+O(1)$$ where $b_0=g(z_0)$. Now, $f'(z)=h'(z)(z-z_0)+h(z)$, and $1/f'(z_0)=1/h(z_0)=a_0$.
Hence, $$Res(\frac{g}{f}, z_0)=\frac{g(z_0)}{f'(z_0)}$$ Suppose that $z_0$ is a simple pole of the fraction; then $$\mathrm {Res}(g/f,z_0) = \lim_{z\to z_0}(z-z_0) \frac {g(z)}{f(z)}=\frac {g(z_0)}{f'(z_0)} \tag{1}$$ Alternatively, since $f$ has a simple zero at $z_0$, $$\frac{1}{f} = \frac{a_{-1}}{(z-z_0)}+\sum_{k=0}^\infty a_k (z-z_0)^k$$ Suppose that $g$ is analytic in a neighborhood of $z_0$: $$g(z) = \sum_{k=0}^\infty b_k(z-z_0)^k$$ Then $$G(z)=\frac{g(z)}{f(z)} = \left( \sum_{k=0}^\infty b_k(z-z_0)^k \right)\left( \frac{a_{-1}}{(z-z_0)}+\sum_{k=0}^\infty a_k (z-z_0)^k\right)$$ Finally we have the residue at $z_0$: $$\mathrm {Res}(G(z),z_0) = a_{-1}b_0 \tag{2}$$ I can't use this definition. I need to expose the constant $c_{-1}$. That is, I can only use the definition that $Res(f, z_0)=c_{-1}$ in the Laurent series. @sequence, see my edit. I am not sure if this is what you want. How do you determine that $a_{-1}b_0$ is the needed constant in the series of $G(z)$? There doesn't appear to be a constant with the index $-1$ there. @sequence, to evaluate the residue at $z_0$ you need the coefficient of $(z-z_0)^{-1}$, which in this case is $a_{-1}b_0$. Yes, this makes sense, even though this doesn't give the answer as explicitly in terms of the functions and their derivatives. We only know that $b_0=g(z_0)$, but we don't know what $a_{-1}$ is. @sequence, that seems impossible. You have to have some information about $f(z)$. Please see my answer. @ZaidAlyafeai @sequence, (+1) glad that helped you.
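As a quick sanity check of the formula $Res(g/f, z_0) = g(z_0)/f'(z_0)$ (this example is mine, not from the thread): take $g(z) = e^z$ and $f(z) = \sin z$, so that $f$ has a simple zero at $z_0 = 0$:

```latex
\operatorname{Res}\!\left(\frac{e^z}{\sin z},\, 0\right)
  = \frac{g(0)}{f'(0)}
  = \frac{e^{0}}{\cos 0}
  = 1
```

which agrees with the Laurent expansion $\frac{e^z}{\sin z} = \frac{1}{z} + 1 + \dots$ near $0$.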
Remove svn:externals property from a folder I have a branch of the trunk. I need to re-set the externals properties in the branch to a different point. My idea was to remove them all and re-set them with propset. When I type

svn propdel svn:externals http://path-to-branch/externals

I get

svn: E200009: Targets must be working copy paths

What's the problem with my command? OK, I've tried:

svn propset --revprop -r HEAD svn:externals "http://abc /abc" http://svn-server-path-to-branch/Externals

svn: E175002: DAV request failed; it's possible that the repository's pre-revprop-change hook either failed or is non-existent
svn: E175008: At least one property change failed; repository is unchanged
svn: E175002: Error setting property 'externals': Revprop change blocked by pre-revprop-change hook (exit code 1) with output: Changing revision properties other than svn:log is prohibited

Not sure what that means... "Revprop change blocked by pre-revprop-change hoot" - Assuming that's a typo and you meant hook, it means there's a custom pre-rev hook that's preventing you from making the change. You'll need to contact your repository's administrator. Your command is operating on the repository URL, not a working copy. Check out a working copy first:

svn co http://path-to-branch path/to/workingcopy

Then modify the property in your working copy:

svn propdel svn:externals path/to/workingcopy

Commit the change, and you should be all set. I would be remiss not to point out that it is not actually necessary to delete them first; propedit will overwrite whatever the property was beforehand.
I've tried the following to overwrite some properties, but I'm not sure of the syntax:

svn propset svn:externals --revprop -r HEAD "http://abc /abc" "http://target-to-svn-branch/externals"

but it says that at least one property change failed and "Error setting property 'externals': Revprop change blocked by pre-revprop-change hoot... Changing revision properties other than svn:log is prohibited" @Gui - externals is a versioned property of a PATH, not an unversioned property of a revision.
Calculating power using a for loop instead of applying the Math.pow method Trying a method to find the power of a number using a for loop and without using Math.pow. For this I just got 2.0 as the result, as it doesn't seem to go back round through the loop. Help please.

public void test() {
    {
        double t = 1;
        double b = 2; // base number
        double exponent = 2;
        for (int i = 1; i<=exponent; i++);
        t = t*b;
        System.out.println(t);

Remove that semicolon following the for line. Also, why does i start at 2? Thank you, it was just the semi-colon, works now! Yes, just noticed that also. I posted starting at two by mistake. Was experimenting with the code and hadn't noticed that I still had the 2 there, thank you for your help. Try this:

double t = 1;
double b = 2; // base number
double exponent = 2;
for (int i = 1; i<=exponent; i++)
    t = t*b;
System.out.println(t);

@SteveChalmers if this helped you, could you accept it? It's because on the first iteration around you are setting t equal to b; you aren't multiplying by anything just yet, so it needs to iterate one further time than you are expecting. Just decrease the i value in your for loop, e.g.

for(int i = 1; i <= exponent; i++)
    t=t*b;

Hope this helps!
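Pulling the thread's pieces together, a complete version might look like the sketch below (the class and method names are mine, not from the thread):

```java
public class PowerDemo {
    // Multiply the base into the result once per loop iteration.
    // Note there is no semicolon after the for(...) header -- that
    // stray semicolon was the bug in the original question.
    static double power(double base, int exponent) {
        double result = 1;
        for (int i = 1; i <= exponent; i++) {
            result = result * base;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(power(2, 2)); // prints 4.0
    }
}
```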
Map of highway fuel stops in Germany I'm looking for a map of highway fuel stops in Germany. I've seen what looks like the official site, Tank and Rast, but it is very obviously outdated (at least by 10 years, looking at the missing pieces of the A6). I'm planning a long journey and I'd like to know where to stop in advance. I'd also be happy with a good route planner (such as Viamichelin.com), but one that would show the fuel stops as well. Fuel stops are numerous here. There are multiple large companies, as well as many privately owned ones. You do not really need to plan ahead for the fuel stops. maps.adac.de has this feature. The page appears to be German only, but it shouldn't be too difficult to use. The first input box on the left is your starting point, the one below it your destination. Then click on "Weitere auswählen (0/38)" and check the box next to "Tankstellen & Spritpreise". Next click "Weiter" and/or "Route berechnen" and after some calculation time you'll see the fuel stops along your route. That being said: fuel stops on the Autobahn are barely ever more than 60 km apart, and before you drive past one, you can see the distance to the next one on the sign. So I wouldn't worry about them in advance. Yes, the fact that in central Europe you always know in how many kilometers the next stop is can't be stressed enough. I never inform myself beforehand, and I may skip several stops until I find one that works with my benefits card. Of course, you have to start thinking before you start using the fuel reserve. The only way to be in trouble is if there's a fuel stop strike. And a map won't help you around that. When I was stationed over there in the early 2000s, you could buy a map from Esso that had all their other stations located on the map. If you're looking for a non-tech solution to make sure you can get fuel, this is one option.
When I traveled to Germany a few years ago (US citizen) I realized that my cell phone plan wouldn't have data service over there, and as such, Google maps would be useless. I looked around for a GPS app that allowed you to pre-load maps, and eventually decided on CoPilot GPS (I have no association with the app, just a satisfied user.) You do need to pay for maps, but you get the first one for free, to let you see if you even like the app. I chose the pack with Germany (which includes Austria, Switzerland, and Liechtenstein, which is nice.) I found the maps to be as accurate as could be expected, and were very helpful for getting around in a country I'd never been to before (and where my language was rather rusty!) If you have a smart phone with GPS, I cannot suggest it enough. It should certainly help you find your way, without having to worry about roaming charges or foreign data plans. Even if you don't take it with you, maybe their maps will be updated enough to help you plan in advance. Nokia Here & Drive are free and allow map download too. @Formagella Thanks, I've been wondering what other options there are out there. May try that one out in the future to see if I like it better.
Play! framework 1.2.4, dependencies and Netbeans 7 I switched to the "new" dependency system in Play! Framework. I'm using Netbeans 7. In Netbeans, all my code compiles and there are no complaints. My site also runs nicely if I start it from the terminal. However, if I run it from Netbeans, I get some runtime errors (in this case it complains about some Excel 1.2.3 / Apache POI packages):

Compilation error
The file /app/controllers/Admin.java could not be compiled. Error raised is: org.apache.poi.hssf.usermodel.HSSFSheet cannot be resolved

import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.poifs.filesystem.POIFSFileSystem;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;

Has anyone encountered this problem (or similar)? It seems that you have not added your POI lib to the Netbeans project classpath. If you added the lib after doing a netbeansify, you should do a netbeansify again, then reimport the project into Netbeans. Netbeans finds the packages, but there's something about how the site is started from Netbeans, I think. I did a "play dependencies" before "play netbeansify" (so I did not add the lib after doing netbeansify).
TypeError: Expected a job store instance or a string, got NoneType instead Why am I getting this error? I am trying to use APScheduler in my Django project. The function itself works very well; after sending some emails, it turns off automatically. Here is my code:

import logging
from django.conf import settings
from apscheduler.schedulers.blocking import BlockingScheduler
from django.core.management.base import BaseCommand
from django.core.mail import send_mail
from django.core.mail import EmailMessage
from ...models import *
from django_apscheduler.jobstores import DjangoJobStore
from apscheduler.triggers.cron import CronTrigger

logger = logging.getLogger(__name__)

class Command(BaseCommand):
    help = "Running in the dark :)"

    def send_email_to_registered_users(self):
        assessment = Assessment.objects.all()
        mail_subject = "Newsletter"
        message = "Welcome to our newsletter"
        for i in assessment:
            sender = i.email
            email_send = EmailMessage(mail_subject, message, to=[sender])
            email_send.send()
            print("email Sent")

    def handle(self, *args, **kwargs):
        scheduler = BlockingScheduler(timezone=settings.TIME_ZONE)
        scheduler.add_jobstore(DjangoJobStore(), "d")
        scheduler.add_jobstore(
            self.send_email_to_registered_users(),
            trigger=CronTrigger(second="*/10"),
            id="send_email_to_registered_users",
            max_instances=10,
        )
        logger.info("Printing Jobs!!! and sending!!")
        scheduler.start()

After taking a quick glance at the documentation, I think this is the fix:

def handle(self, *args, **kwargs):
    scheduler = BlockingScheduler(timezone=settings.TIME_ZONE)
    scheduler.add_jobstore(DjangoJobStore(), "d")
    scheduler.add_job(  # I'm guessing you want to add a job
        self.send_email_to_registered_users,  # NOTE: no parentheses
        trigger=CronTrigger(second="*/10"),
        id="send_email_to_registered_users",
        max_instances=10,
    )
    logger.info("Printing Jobs!!! and sending!!")
    scheduler.start()

PS. Isn't it easier to create a crontab which runs the python manage.py ... command?
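The distinction the answer points out (passing `self.send_email_to_registered_users` rather than `self.send_email_to_registered_users()`) is easy to demonstrate without APScheduler or Django. A scheduler needs the function object itself so it can call it later; the parentheses call the function immediately and pass its return value instead. This toy stand-in is mine, not APScheduler's API:

```python
def send_newsletter():
    return "sent"

def add_job(func):
    """A toy stand-in for a scheduler's add_job: it stores the callable
    and invokes it later. It rejects anything that is not callable."""
    if not callable(func):
        raise TypeError(f"Expected a callable, got {type(func).__name__}")
    return func()  # "later", the scheduler calls it

print(add_job(send_newsletter))   # correct: pass the function itself; prints "sent"

try:
    add_job(send_newsletter())    # wrong: this calls it now and passes the string "sent"
except TypeError as e:
    print(e)                      # Expected a callable, got str
```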
How to implement a composite event after p:calendar value is set? I have a composite component with a calendar inside. I need to catch an event when the calendar is changed, but in my mainBean (not the composite bean). So I've got 4 files: mainBean.java, main.xhtml and the composite: myCalendar.xhtml with calComp.java. I want to call a 'changed' event in mainBean.java and have no idea how to achieve that. Here is the code:

Composite:

<composite:interface componentType="myComponent">
    <composite:attribute name="value" required="true"/>
    <composite:attribute name="myListener" method-signature="void listener()" />
</composite:interface>
<composite:implementation>
    <h:panelGroup id="container">
        <p:calendar value="#{cc.attrs.value}" valueChangeListener="#{cc.valueChanged}">
            <p:ajax event="dateSelect" update="@this,:buttonpanel" listener="#{cc.attrs.myListener}"/>
        </p:calendar>
    </h:panelGroup>
</composite:implementation>

Main page:

<cc:inputdate value="#{mainBean.item.myDate}" myListener="#{mainBean.event1}"/>

Bean for the main page:

public void event1() {
    log("Event1!!!!");
}

I also have a log in the setMyDate() method, so I know exactly when mainBean has a new date set. In the composite's bean there is only "log code", but it was enough to see that event1 was called before the new value was set. My question is: how can I catch an event that will be called AFTER the value for myDate is set? In that event I want to do "getMyValue" with the new value. Please help. Thanks.
iTunes Connect yellow warning icon What does the yellow warning symbol beside my build number on iTunes Connect mean? Will my app get rejected because of this? I'm submitting an app with an Apple Watch extension. Thanks. If your app gets rejected by Apple, they'll usually provide a description of what you need to fix, and how you need to do it. If it's something small, they might just fix it for you and email you about the changes/fixes they made to it. It's there because TestFlight doesn't support WatchKit extensions right now. You can still submit without any issues. I've done it numerous times. My app doesn't have a WatchKit target, but the warning symbol is still shown for the build, and the warning icon is not clickable to see the description.
vector of struct pointer The following is my problem:

struct point{
    int x;
    int y;
};

struct OuterStruct {
    std::vector<point *> pa;
    std::vector<point *> pb;
};

OuterStruct atest; // global variable

Now in my main I am doing this:

point n;
n.x = 1;
n.y = 2;
atest.pa.push_back(n);
atest.pb.push_back(n);

And in some other function, if I use this global structure, the values of pa and pb are lost and they have some junk values. Am I doing something wrong here? Your code is incomplete, but based on what you've shown, your vectors should be defined as std::vector<point>, not std::vector<point*>. I can't think of a way your code fragments would even form a valid program. Please post a minimal working example. Change the vector of pointers:

struct OuterStruct {
    std::vector<point*> pa;
    std::vector<point*> pb;
};

into a vector of objects:

struct OuterStruct {
    std::vector<point> pa;
    std::vector<point> pb;
};

And also: use meaningful names (pa, pb, atest and OuterStruct don't say much...); be consistent in naming convention (if you name types with UpperCamelCase, then point -> Point); avoid using global variables if possible; create a custom constructor for point to ease its creation:

struct point {
    point(int x_, int y_) : x(x_), y(y_) { }
    int x;
    int y;
};

then you can simply do:

atest.pa.push_back(point(1,2));

With C++11 you could even simplify it to atest.pa.emplace_back(1, 2);.
C++ cannot pass objects of non-POD type This is my code:

#include <iostream>
#include <fstream>
#include <cstdlib>
#include <stdio.h>
#include <curl/curl.h>

using namespace std;

int main ()
{
    ifstream llfile;
    llfile.open("C:/log.txt");
    if(!llfile.is_open()){
        exit(EXIT_FAILURE);
    }
    string word;
    llfile >> word;
    llfile.close();

    string url = "http://example/auth.php?ll=" + word;

    CURL *curl;
    CURLcode res;
    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url);
        res = curl_easy_perform(curl);
        /* always cleanup */
        curl_easy_cleanup(curl);
    }
    return 0;
}

This is my error when compiling:

main.cpp|29|warning: cannot pass objects of non-POD type 'struct std::string' through '...'; call will abort at runtime

+1 for an SSCCE, -1 for not actually asking a question. Uh, +0 I guess... The problem you have is that variable-argument functions do not work on non-POD types, including std::string. That is a limitation of the system and cannot be modified. What you can do, on the other hand, is change your code to pass a POD type (in particular a pointer to a null-terminated character array):

curl_easy_setopt(curl, CURLOPT_URL, url.c_str());

As the warning indicates, std::string is not a POD type, and POD types are required when calling variadic-argument functions (i.e., functions with an ... argument). However, char const* is appropriate here; change

curl_easy_setopt(curl, CURLOPT_URL, url);

to

curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
Right way to fit image to CALayer contents in swift programmatically? I am new to programming.

@IBOutlet weak var myView: UIView!

var l: CALayer {
    return myView.layer
}

override func viewDidLoad() {
    super.viewDidLoad()
    setUpLayer()
}

func setUpLayer() {
    l.contents = UIImage(named: "IMG_3682.jpg")?.CGImage
    l.contentsGravity = kCAGravityCenter
}

The image appears way bigger than myView. I have images with various sizes and ratios, and I want to fit them into UIView.layer. What is the right way to do it? Thanks. Remove l.contentsGravity = kCAGravityCenter. Remove the line:

l.contentsGravity = kCAGravityCenter
Android - Bitmap and memory management? I've seen in a lot of samples that developers call recycle() on a bitmap, and then set it to null. Why is this necessary? Doesn't the garbage collector take care of releasing the bitmap?

Bitmap bitmap = BitmapFactory.decodeStream(inputStream);
bitmap.recycle();
bitmap = null;

Join the club. It kind of does, but not quite. The thing is that in the pre-Honeycomb versions of Android the memory for bitmaps was (is) allocated from unmanaged memory, which creates all sorts of problems. It is still released, but from the finalizer of the bitmap object implementation, which means that it will take at least 2 passes of GC to collect it. Also, if for whatever reason the finalizer fails to execute - you get the picture. Another thing is - it is really difficult to trace - DDMS does not see it and neither does MAT. For Android 3.0 this has been changed and bitmaps are implemented over managed byte arrays, but for the older phones... bitmap.recycle() releases the native heap that is used by bitmaps, and setting it to null is to assist the GC in quickly collecting your reference. @aryaxt: Note that while the finalizer will do a recycle() for you, calling it yourself releases the memory sooner, making it that much less likely you will run out of heap space. Yes... one more thing to note... you need to be sure that the bitmap is no longer in use before you recycle it... otherwise you would run into exceptions when trying to use a recycled bitmap. Also, as of Android 3.0 bitmaps do not use the native heap anymore. From the docs at http://developer.android.com/reference/android/graphics/Bitmap.html#recycle%28%29: Free the native object associated with this bitmap, and clear the reference to the pixel data. This will not free the pixel data synchronously; it simply allows it to be garbage collected if there are no other references. The bitmap is marked as "dead", meaning it will throw an exception if getPixels() or setPixels() is called, and will draw nothing.
This operation cannot be reversed, so it should only be called if you are sure there are no further uses for the bitmap. This is an advanced call, and normally need not be called, since the normal GC process will free up this memory when there are no more references to this bitmap. So it doesn't seem to be necessary to call. The only time I've ever heard of a need to manually set an object to null is if it's a static variable (or some variable that won't go out of scope easily) and you want to force it out of memory. Maybe if you are continuously allocating bitmaps rapidly there may be a need to try and force garbage collection, but for the majority of cases it is probably not needed. All seems well as per the documentation, but there have been so many cases where bitmaps have caused OOM... so if you face this issue in your code, a general way to fix it is to make sure that you set the bitmap to null and call the GC from code... (yes, I know this is not optimal and it is not guaranteed to GC)... but this has been a last resort to get some memory back... You can also try using SoftReferences for bitmap caching. I've loaded hundreds of bitmaps into memory at once and haven't had a problem. The only way I can see this being a problem on modern phones is if you are leaking your bitmaps or you are rapidly allocating and throwing away bitmaps (much more than you can fit on your screen at once). http://code.google.com/p/android/issues/detail?id=11089 Check Romain Guy's response... His response does nothing to back your claim that you need to call recycle regularly. I could not even look at the code file posted, nor does the person mention any details about the emulator settings, such as device RAM size. Romain Guy's response actually seems to back up what I said, that it would only be necessary if you were rapidly allocating and throwing away bitmaps.
Again, I've never done it in any of my apps, and I've loaded up to hundreds of bitmaps (at least 300, around 100x100) at once and never had a problem. "so if you face this issue in your code a general way to fix it is to make sure that we set bitmap to null and call the gc from code" If you check my comment, I do say that if you run into memory issues... we need to then inform the GC to collect the bitmaps... as you need at least 2-3 GC cycles to actually collect the native heap... a faster way would be to notify the GC that the native heap can be collected right away, since my application would not be using it anymore... and in most cases we would not load hundreds of bitmaps at once and keep them for the scope of the app (except in very specific scenarios)... This article from the Android development docs has a lot of information on this topic. While you're at it, also check the article about caching if you'll be using multiple bitmaps.
Update birthdate in case the date is Hijri, with the value of converting the Hijri birthdate to Gregorian The table contains many records with Hijri date values and Gregorian values in the BirthDate column, so please, how do I update BirthDate with the result of converting a Hijri date to Gregorian? Thanks. I tried this script but it doesn't work - I mean no changes are made - although I get (18422 row(s) affected):

UPDATE MEMBER
SET BIRTHDATE = case
    when (SUBSTRING(cast(birthdate as nvarchar), 1, 2) = '14')
      or (SUBSTRING(cast(birthdate as nvarchar), 1, 2) = '13')
    then (SELECT CONVERT(date, birthdate, 131))
    else birthdate
end

What datatype is birthdate? Well yeah, even UPDATE member SET birthday = birthday would give "(18422 row(s) affected)". What is the actual effect of your query? Does it set any records to wrong values? No changes: all records in the birthdate column are as they were, meaning they are still Hijri dates with the same old values. The birthdate type is date. Try the code below; it will convert the birthdate column records:

update MEMBER set BIRTHDATE = CONVERT(VARCHAR(100), birthdate, 131) --- [Hijri date to Gregorian date]
update MEMBER set BIRTHDATE = CONVERT(datetime, birthdate, 131) --- [Gregorian date to Hijri date]

There are several blogs about this, like this one: http://blogs.msdn.com/b/wael/archive/2007/04/29/sql-server-hijri-hijra-dates.aspx I went through this page before I asked this question and it does not solve my problem. By the way, I didn't get any error message; instead I get a "records updated" message. Thanks.
OMD: Thruk doesn't detect nagios config I've installed the latest version of OMD (omd-2.90-labs-edition) on a CentOS 7 system. It installs OK. I use the default naemon monitoring engine and Thruk as the web GUI. I was previously using an old version of OMD with Shinken and Thruk. Naemon seems to work OK, and reuses my host and service definitions. OMD provides several cores, all of them compatible with Nagios. But Thruk shows an empty configuration (no hosts, no services). The log file (~/var/log/thruk.log) is empty. I know the nagios core is working OK by looking at the logs. Any ideas about what could be wrong? I forgot to modify ~/etc/thruk/cgi.cfg in order to grant the current user permissions on all hosts and services. Modifying this line and restarting Thruk is enough:

authorized_for_admin=myuser
SQL ERROR ORA-00923: FROM keyword not found where expected

SELECT Person_ID, CONCAT('First_name','Surname') AS "Person_Name", Next_of_kin,
       '~ No next of kin ~' AS Next_of_kin_name, Next_of_kin_age AS NULL
FROM PERSON
WHERE Next_of_kin IS NULL
UNION
SELECT Childs.Person_ID, CONCAT('Childs.First_name','Childs.Surname') AS "Person_name", Next_of_kin,
       CONCAT('Fathers.First_name','Fathers.Surname') AS "Next_of_kin_name",
       TO_CHAR(sysdate, 'YYYY') - TO_CHAR(Fathers.birth_date, 'YYYY')
       (TO_CHAR(sysdate, 'YYYY') - TO_CHAR(Birth_date, 'YYYY')) AS Next_of_kin_age
FROM Person Childs, Person Fathers
WHERE Childs.next_of_kin = Fathers.Person_ID
  AND TO_CHAR(sysdate, 'YYYY') - TO_CHAR(Fathers.birth_date, 'YYYY') >= 50;

NULL is a reserved word (https://en.wikipedia.org/wiki/SQL_reserved_words) and cannot be used as a column alias. Choose another alias, or delimit it as "NULL". There's also a missing comma, or something else, in the second SELECT list. The last column is aliased AS NULL, but you actually want NULL returned as the value for Next_of_kin_age. Also, you have missed an operator in the last column of the second select; I have assumed a minus operator:

TO_CHAR(sysdate, 'YYYY') - TO_CHAR(Fathers.birth_date, 'YYYY') (TO_CHAR(sysdate, 'YYYY') - TO_CHAR(Birth_date, 'YYYY')) AS Next_of_kin_age

Try this:

SELECT Person_ID, CONCAT('First_name','Surname') AS "Person_Name", Next_of_kin,
       '~ No next of kin ~' AS Next_of_kin_name, NULL AS Next_of_kin_age
FROM PERSON
WHERE Next_of_kin IS NULL
UNION
SELECT Childs.Person_ID, CONCAT('Childs.First_name','Childs.Surname') AS "Person_name", Next_of_kin,
       CONCAT('Fathers.First_name','Fathers.Surname') AS "Next_of_kin_name",
       TO_CHAR(sysdate, 'YYYY') - TO_CHAR(Fathers.birth_date, 'YYYY')
       - (TO_CHAR(sysdate, 'YYYY') - TO_CHAR(Birth_date, 'YYYY')) AS Next_of_kin_age
FROM Person Childs, Person Fathers
WHERE Childs.next_of_kin = Fathers.Person_ID
  AND TO_CHAR(sysdate, 'YYYY') - TO_CHAR(Fathers.birth_date, 'YYYY') >= 50;
Is there a way to prevent Excel from automatically forcing my character string to a date from within R? Within R, I have a character string ID formatted like XX-XX, where XX is any integer number between 01 and 99. However, when the numbers that make up the character string could resemble a date, Excel automatically forces this change. I am writing to a .csv file directly from within R using write.csv(). Unfortunately, I am not able to change the ID format convention, and I also require this to be controlled from within R, as it is a small part of a very large automated process where the people using the software do not necessarily have any understanding of its mechanics. Furthermore, configuring Excel for every person who uses this system's software is not desirable, but I will consider it as a last resort. Is this possible? I am open to using a different writing option like the xlsx package if it can provide a solution. MWE provided:

# Create object with digits that will provoke the problem.
ID <- data.frame(x = '03-15')
# Write object to a csv file within the working directory.
write.csv(ID, file = 'problemFile.csv')
# Now open the .csv file in excel and view the result.

I recommend the openxlsx package. This worked for me:

library(openxlsx)  # provides createWorkbook(), writeData(), saveWorkbook(), openXL()
ID <- data.frame(x = '03-15')
wb <- createWorkbook()
addWorksheet(wb, "Sheet 1")
writeData(wb, "Sheet 1", x = ID)
saveWorkbook(wb, "test.xlsx", overwrite = TRUE)
openXL("test.xlsx")

Check this: characters converted to dates when using write.csv Nevermind, it won't work. Maybe fileEncoding. Check this out: https://stackoverflow.com/questions/29956486/characters-converted-to-dates-when-using-write-csv Answers are supposed to at least give steps or a process to address the question in order to start being complete. An explanation of the steps might also be in order. Using a link to another question is not a suitable answer. Making an additional comment on that answer to another linked question does nothing to improve the answer either.
It was not my first answer; once I realized it had already been answered elsewhere, I edited it so the user could see it, you know, just to make sure.
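A workaround that stays within plain .csv output is to write each value as an Excel text formula, ="03-15", so Excel displays the literal string instead of coercing it to a date. The idea is language-agnostic (the same string wrapping can be done in R before calling write.csv()); here is a minimal sketch in Python, with the column and file names invented for illustration:

```python
import csv

def excel_text(value):
    # Wrap the value in ="..." so Excel treats it as a formula
    # that returns literal text rather than parsing "03-15" as a date.
    return '="{}"'.format(value)

ids = ["03-15", "07-04", "12-25"]

with open("problemFile.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x"])
    for i in ids:
        writer.writerow([excel_text(i)])
```

Note that CSV quoting interacts with this trick: csv.writer will quote the field and double the embedded quotes, and some Excel versions handle the result differently, so test against the Excel versions your users actually run.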
In-App Purchase Android

I am developing an application for Android and have implemented in-app purchase. When I click a button, it works fine. However, I don't want the in-app purchase dialog to be shown again when a user uninstalls the application and then installs it again. Please help me with a suitable example. Thanks in advance.

Once your managed product is purchased and you do not consume it, the user owns it forever. So at startup, check the owned products and draw the UI accordingly. A very good example.
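The pattern the answer describes, treating the store's owned-purchases query as the source of truth rather than anything saved locally, can be sketched language-agnostically. Here is a minimal Python sketch in which BillingStub stands in for the real Google Play Billing query; all names here are invented for illustration, not the actual billing API:

```python
class BillingStub:
    # Stand-in for the store: in a real app this data comes from the
    # billing library's owned-purchases query, which survives reinstalls
    # because it is tied to the user's store account, not local storage.
    def __init__(self, owned_skus):
        self._owned = set(owned_skus)

    def query_owned_skus(self):
        return set(self._owned)

def should_show_purchase_dialog(billing, sku):
    # At startup, consult the store rather than any local state.
    return sku not in billing.query_owned_skus()

# Fresh install, nothing owned yet: show the dialog.
assert should_show_purchase_dialog(BillingStub([]), "premium")

# Reinstall after a purchase: the store still reports ownership,
# so the dialog is skipped.
assert not should_show_purchase_dialog(BillingStub(["premium"]), "premium")
```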
Failed to enable constraints using TableAdapters

I'm trying to check whether a username exists in my table every time a character is entered in the TextBox. Here is my code:

Within the register.aspx.cs file I have a TextChanged event on the TextBox:

protected void username_txt_TextChanged(object sender, EventArgs e)
{
    string check = authentication.checkUsername(username_txt.Text);
    if (check == "false")
    {
        username_lbl.Text = "Available";
    }
    else
    {
        username_lbl.Text = "Not Available";
    }
}

It calls this method:

public static string checkUsername(string Username)
{
    userInfoTableAdapters.usersTableAdapter userInfoTableAdapters = new userInfoTableAdapters.usersTableAdapter();
    DataTable userDataTable = userInfoTableAdapters.checkUsername(Username);
    DataRow row = userDataTable.Rows[0];
    int rowValue = System.Convert.ToInt16(row["Users"]);
    if (rowValue == 0)
    {
        return "false";
    }
    else
    {
        return "true";
    }
}

The query that is being executed is:

SELECT COUNT(username) AS Users FROM users WHERE (username = @Username)

For some reason, it keeps breaking on this line:

DataTable userDataTable = userInfoTableAdapters.checkUsername(Username);

It gives an error that says: "Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints."

Just in case: the username field in my table is Unique and Not Null. I have tried executing the query by itself and it works perfectly, so the problem isn't at the query end. Does anyone understand what I am doing wrong?

Your query doesn't return the row, so a TableAdapter query that returns the DataTable is inappropriate in this case. I'd recommend using your query with something like the function below. I took the liberty of actually returning a boolean...
public static bool CheckUsername(string userName)
{
    // The query returns a single scalar, so use ExecuteScalar rather
    // than filling a DataTable. connectionString is assumed to be
    // defined elsewhere in your data-access class.
    const string sql = "SELECT COUNT(username) AS Users FROM users WHERE (username = @Username)";
    using (var connection = new System.Data.SqlClient.SqlConnection(connectionString))
    using (var command = new System.Data.SqlClient.SqlCommand(sql, connection))
    {
        command.CommandType = System.Data.CommandType.Text;
        command.Parameters.Add(new System.Data.SqlClient.SqlParameter("@Username", System.Data.SqlDbType.VarChar, 16)).Value = userName;
        connection.Open();
        object scalarResult = command.ExecuteScalar();
        if (scalarResult == null || scalarResult == DBNull.Value)
        {
            return false;
        }
        return System.Convert.ToInt32(scalarResult) > 0;
    }
}

A TableAdapter does support scalar queries on the table object: when you add and name your query, check the properties of that query and be sure its ExecuteMode is Scalar. It will then return the integer value, not the row! On the other hand, if you want to keep your structure, change the query to actually return the row, something like

SELECT uu.* FROM users uu WHERE (uu.username = @Username)

and make the result of the checkUsername() function depend on the number of rows returned (which should be one or zero).

I don't understand. I took the EXACT same approach in a project created in Visual Studio 2010; I am currently using 2013, so maybe they've changed a few things? The reason I don't want to select everything is simply that it would query irrelevant data that I don't need...

There is a way to do this with a query added to a TableAdapter, but not if you try to use DataTable userDataTable as the result. The simple reality is that SELECT COUNT(*) does not return a DataRow that fits into userDataTable, hence the constraint exception.
I have updated the answer to try to show the TableAdapter with a scalar query, but that is harder to document here. Looking to see whether this question had a better answer elsewhere, I ran across this MSDN reference, 'How to: Create TableAdapter Queries': https://msdn.microsoft.com/en-us/library/kda44dwy.aspx
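The core of the answer, run the COUNT as a scalar query and branch on the returned integer, looks the same in any data-access API. Here is a minimal sketch using Python's built-in sqlite3 module; the table and column names mirror the question, but the schema itself is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT NOT NULL UNIQUE)")
conn.execute("INSERT INTO users (username) VALUES (?)", ("alice",))

def check_username(conn, username):
    # A COUNT() query returns a single scalar, not a row that has to
    # satisfy the result table's schema constraints, so read it directly.
    row = conn.execute(
        "SELECT COUNT(username) FROM users WHERE username = ?", (username,)
    ).fetchone()
    return row[0] > 0

print(check_username(conn, "alice"))  # True
print(check_username(conn, "bob"))    # False
```

Returning a real boolean (rather than the strings "true"/"false" in the question) also lets the caller test the result directly in an if statement.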
Contao Isotope image size, etc.

I've got a problem regarding the image size in the product overview in Isotope. When I upload an image to a product and then view it on the page, the image covers the whole page. I tried to make it smaller by going to Store configuration > Galleries, but the only thing that gets smaller is the watermark; it's as if the product image is just being ignored. I would upload some screenshots, but I can't because of the reputation requirement on Stack Overflow. I hope to get an answer soon.