Sample loss in Android app when retrieving data via BLE I'm developing an Android app which retrieves data via BLE from a device in real time. As stated in the device's datasheet, BLE uses a 20-ms connection interval. Twenty user-data bytes (equal to 2 samples for each channel plus a 2-byte running counter) are sent in GATT notifications. Data from the device is ping-pong buffered, and up to six BLE notification packets are sent every 14 ms based on an OSAL timer. The sample rate is set to 160 samples/sec. Each sample is 3 bytes, and 3 channels are sent. Each notification packet consists of 20 bytes containing the following: Measurement Sample 1 (raw ADC data): Channel 1 (3 bytes), Channel 2 (3 bytes), Channel 3 (3 bytes); Measurement Sample 2 (raw ADC data): Channel 1 (3 bytes), Channel 2 (3 bytes), Channel 3 (3 bytes). Afterwards I plot this data, but it looks like I am only getting a sample rate of around 105, while there should be 160 samples/sec; judging from the signal, some samples seem to be missing. I'm posting the code I use below. I was wondering what the cause could be: is there a bug or a design flaw in the code? Are there any alternative methods to retrieve the data? 
// This is used to register and get callback methods from Bluetooth adapter private final BluetoothGattCallback mGattCallback = new BluetoothGattCallback() { @Override public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) { String intentAction; if (newState == BluetoothProfile.STATE_CONNECTED) { intentAction = ACTION_GATT_CONNECTED; mConnectionState = STATE_CONNECTED; broadcastUpdate(intentAction); Log.i(TAG, "Connected to GATT server."); Log.i(TAG, "Attempting to start service discovery:" + mBluetoothGatt.discoverServices()); } else if (newState == BluetoothProfile.STATE_DISCONNECTED) { intentAction = ACTION_GATT_DISCONNECTED; mConnectionState = STATE_DISCONNECTED; Log.i(TAG, "Disconnected from GATT server."); broadcastUpdate(intentAction); } } @Override public void onServicesDiscovered(BluetoothGatt gatt, int status) { if (status == BluetoothGatt.GATT_SUCCESS) { broadcastUpdate(ACTION_GATT_SERVICES_DISCOVERED); } else { Log.w(TAG, "onServicesDiscovered received: " + status); } } // This method gets triggered automatically whenever a characteristic is read by BT @Override public void onCharacteristicRead(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic, int status) { if (status == BluetoothGatt.GATT_SUCCESS) { // on successful read of the value, call the update function. 
broadcastUpdate(ACTION_DATA_AVAILABLE, characteristic); Log.d("SOURCE", "characteristics read"); } Log.d("SOURCE", "error reading: " + Integer.toString(status)); } @Override public void onCharacteristicChanged(BluetoothGatt gatt, BluetoothGattCharacteristic characteristic) { broadcastUpdate(ACTION_DATA_AVAILABLE, characteristic); } }; Intent Sampintent; private void broadcastUpdate(final String action, final BluetoothGattCharacteristic characteristic) { if (UUID_HEART_RATE_MEASUREMENT.equals(characteristic.getUuid())) { Sampintent = new Intent(action); final byte[] sigbytes = characteristic.getValue(); Message m = Message.obtain(); Bundle b = new Bundle(); b.putByteArray("bytes", sigbytes); m.setData(b); mHandler.sendMessage(m); } } // handler object used to post-process the sample read. private PostSampleRead mHandler = new PostSampleRead(); public class PostSampleRead extends Handler { @Override public void handleMessage(Message msg) { super.handleMessage(msg); try { // raw byte array read at a time by BT, roughly 20 bytes per packet final byte[] b = msg.getData().getByteArray("bytes"); // ignoring first two bytes b[0] and b[1], which contain the running counter int b1 = b[2] & 0xff; int b2 = b[3] & 0xff; int b3 = b[4] & 0xff; if (b1 == 255 && b1 == b2 && b2 == b3) { } else { // First sample of ch1 byte[] channel1 = {b[2], b[3], b[4]}; // First sample of ch2 byte[] channel2 = {b[5], b[6], b[7]}; // First sample of ch3 byte[] channel3 = {b[8], b[9], b[10]}; // pack three bytes to one value BigInteger ch1 = new BigInteger(channel1); BigInteger ch2 = new BigInteger(channel2); BigInteger ch3 = new BigInteger(channel3); String c1 = ch1.toString(); String c2 = ch2.toString(); String c3 = ch3.toString(); try { // sending all three samples to activity via broadcast. 
Sampintent.putExtra(EXTRA_DATA_CH1, c1); Sampintent.putExtra(EXTRA_DATA_CH2, c2); Sampintent.putExtra(EXTRA_DATA_CH3, c3); sendBroadcast(Sampintent); } catch (Exception e) { e.printStackTrace(); } // Second sample of ch1 byte[] channel1_ = {b[11], b[12], b[13]}; // Second sample of ch2 byte[] channel2_ = {b[14], b[15], b[16]}; // Second sample of ch3 byte[] channel3_ = {b[17], b[18], b[19]}; // packing 3 bytes into one value BigInteger ch1_ = new BigInteger(channel1_); BigInteger ch2_ = new BigInteger(channel2_); BigInteger ch3_ = new BigInteger(channel3_); String c1_ = ch1_.toString(); String c2_ = ch2_.toString(); String c3_ = ch3_.toString(); // sending to activity via broadcast Sampintent.putExtra(EXTRA_DATA_CH1, c1_); Sampintent.putExtra(EXTRA_DATA_CH2, c2_); Sampintent.putExtra(EXTRA_DATA_CH3, c3_); sendBroadcast(Sampintent); } } catch (Exception e) { e.printStackTrace(); } } }
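One small, separate observation on the decoding side: the handler allocates three BigInteger and three String objects per sample and sends an Intent broadcast per sample, at 160 Hz. Whether or not that is the cause of the loss, the 3-byte big-endian sign extension itself needs no allocation at all. A minimal sketch (the class and method names here are invented, not part of the question's code):

```java
// Sketch only: Pack24/unpack24 are invented names. This reproduces what
// `new BigInteger(new byte[]{hi, mid, lo})` computes for a 3-byte
// big-endian two's-complement sample, without any object allocation.
class Pack24 {
    static int unpack24(byte hi, byte mid, byte lo) {
        int v = ((hi & 0xFF) << 16) | ((mid & 0xFF) << 8) | (lo & 0xFF);
        // shift left then arithmetic shift right to sign-extend bit 23
        return (v << 8) >> 8;
    }
}
```

Avoiding per-sample allocation and per-sample broadcasts keeps GC pauses out of a handler that must keep up with a 20-ms connection interval.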
common-pile/stackexchange_filtered
Using dojox.form.PasswordValidation in a Programmatic Way I'm building a registration form and planning to use dojox.form.PasswordValidation to verify that the entered passwords are the same. Is there a way to use dojox.form.PasswordValidation programmatically? If I do this: <div id="sample"> <input type="password" pwType="new" /> <input type="password" pwType="verify" /> </div> <script> var a = new dojox.form.PasswordValidation({}, "sample"); </script> The above code works as expected, but I want to strip off those "pwType" attributes and use pure HTML attributes only. If I do that, where should I put "pwType"? P.S. I'm using Dojo 1.6 Unfortunately it looks like this widget is not really up to date with the recent changes where Dojo tries to move all the invalid HTML attributes into the valid data-* attributes. When looking at the postCreate method of that widget it has this code in the middle of it: dojo.forEach(["old","new","verify"], function(i){ widgets.push(dojo.query("input[pwType=" + i + "]", this.containerNode)[0]); }, this); And just afterwards it makes sure that it found the necessary inputs, otherwise it will throw an error. So if you want to use something other than the pwType attributes, then you will probably have to overwrite the postCreate method of this widget to query something else, for example: dojo.query("input[data-dojo-password-type=" + i + "]") and then you can specify the values in data-dojo-password-type instead of pwType like this: <div id="sample"> <input type="password" data-dojo-password-type="new" /> <input type="password" data-dojo-password-type="verify" /> </div> What about something like: var theDiv = dojo.create("div", {id: "sample"}), theNewPass = dojo.create("input", {type: "password", pwType: "new"}, theDiv, "last"), theVerifPass = dojo.create("input", {type: "password", pwType: "verify"}, theDiv, "last"), a = new dojox.form.PasswordValidation({}, "sample") ;
common-pile/stackexchange_filtered
Understanding 外国でも使える電話を借りて来た I'm having difficulty translating this particular sentence into English, and thus even understanding it fully. 私は成田空港で外国でも使えるけいたい電話を借りて来たから、問題がないよ。 My best attempt at a translation: I had a problem at Narita Airport because I had to rent a mobile phone to use. I don't know how to translate the 外国でも bit and have it make sense, nor am I sure about translating the 来た either. Why did you translate 問題がない to "I had a problem"? You already have a great answer, but to clarify the points you asked about, 外国でも should be interpreted as 外国で + も, where 外国で使える means "able to use in a foreign country", i.e. "works abroad". も is "also", so that 外国でも使える means "also works abroad". As I wrote in my comment, ~てきた is used both metaphorically and literally. In this case, it is used literally and 借りて来た means "I borrowed and came", i.e. "borrowed before I came", or here (Before I came,) I rented a mobile phone, which also works abroad, at Narita Airport, so there's no problem. The English translation is a bit cumbersome, but such is the nature of literal translations. Edit. A different translation (partly due to snailboat) could be I went and got a rental phone which also works abroad at Narita Airport, so there is no problem. I see now, I totally missed that the で and も were separate particles and not together as in でも meaning 'but' or 'even if' etc. But でも "but, even if" can also be understood as で + も...
common-pile/stackexchange_filtered
Auto-complete for the rest of the text I'm making a posting script, a little like Twitter, except I'm limiting the post text to 100 characters only. Now, when a user writes about 50 characters, there are another 50 characters left empty, and I want a PHP method to fill the rest of the text with dots (.). For example, the user's post is: hello my name is Youssef Subehi and in the database I want it to be: hello my name is Youssef Subehi ...... with dots up to 100 characters. Thanks in advance. Please post your current code so we can see what it's doing. str_pad($myString, 100, '.', STR_PAD_RIGHT); should work for you. It will fill the string with dots until the length is 100.
common-pile/stackexchange_filtered
What paths are guaranteed to exist on Windows Server 2008 R2? What paths are guaranteed to exist on a Windows Server 2008 R2 instance? A client is requiring that some instructions specify exact paths in all cases. (The person executing said instructions is not supposed to have to decide on any path themselves, even when the path makes absolutely no difference.) So I need to know what paths I can rely on to be there. It's fine with me if they involve environment variables, but they need to be variables guaranteed to hold an existing path. (That is, there must be no possibility of them resolving to a path that doesn't exist.) Or are there no guaranteed paths? To be honest, I'm hoping the answer is that there are none. Then I can respond to this by telling them they have to guarantee me some paths exist before I can make the change. It is better to use environment variables for paths rather than hard-coded paths, as some of the paths may change slightly, but the variable won't: http://www.technipages.com/list-of-windows-environment-variables.html Fair enough, but are they guaranteed to represent existing locations? That is the point of environment variables - a common variable name that holds specific details of the current system configuration. For instance, %HOMEPATH% will always point to the currently logged-in user's home directory; this will always be a valid path. Better yet, that specific option will always be writable by the currently logged-in user, so you won't have to worry about permissions getting in the way either (unlike %HOMEDRIVE%, which Group Policy may prevent a user from writing to). Yes, many of the various environment variables that point to folders must exist in order for Windows to operate. If someone would post that below and name one or two of them, I will accept it as the answer. You can use environment variables. These are variables that the system uses and so they are required to be valid paths. 
They are also ones that will work across various Windows platforms, so even if the standard hard-coded paths change, the path loaded into the variable by Windows will remain valid. %HOMEPATH% - points to the home directory of the currently logged-in user. This path will always be writable by the user, so you won't have permissions issues if the users will be installing the software themselves. %HOMEDRIVE% - points to the drive that the system was installed on (usually C:, but it can change). This is not the best option for installation; Group Policy often prevents users from writing here. %PROGRAMFILES% - the default Program Files folder, a common place for installations. +1: %APPDATA% and %TEMP% are two good ones as well. :) Thank you. I will probably use HOMEPATH since the directory is intended for a user to place files in. Use the SHGetSpecialFolderPath() Windows API to retrieve the path corresponding to any of the various special-folder symbolic names. For example, calling it on CSIDL_DESKTOPDIRECTORY is guaranteed to give you the localized name of the user's desktop directory. I used this API to build the directory utility included with my Hamilton C shell which, in turn, I use to know where to put things during installation. Useful, but unfortunately, this is going in a document, not code. (Hence why I asked on Super User instead of Stack Overflow.) Thanks, though. Hm. Some documentation says that method is unsupported and to use SHGetFolderPath instead. Does that work just as well? I'll be darned. I wrote my code using SHGetSpecialFolderPath() ten years ago on Win2K. It was definitely documented as supported then, and no matter what the docs say today, it still works on Win7. But it does look like MS would like us to migrate to their newer API, SHGetKnownFolderPath(). Thanks for picking that out.
common-pile/stackexchange_filtered
require_once failed in WordPress I am developing a WordPress plugin for the first time. I have two .php files (file1.php, file2.php). In file1.php I have defined a form with an action pointing to file2.php for uploading images. In file2.php I just do a require_once('./wp-admin/includes/file.php'); but I receive this error NetworkError: 500 Internal Server Error - http://localhost/wordpress/wp-content/plugins/file2.php I tried with require_once(), include(), with the entire path, etc. I checked permissions and paths and they are correct. I tried the require_once() in file1.php and it works, but I need it in file2.php. Any ideas? Thanks in advance. Posting the logs: require_once('./wp-admin/includes/file.php'); Warning: require_once(./wp-admin/includes/file.php): failed to open stream: No such file or directory in /var/www/html/wordpress/wp-content/plugins/file2.php on line 15 Fatal error: require_once(): Failed opening required './wp-admin/includes/file.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/wordpress/wp-content/plugins/file2.php on line 15 require_once('/var/www/html/wordpress/wp-admin/includes/file.php'); Fatal error: Call to undefined function __() in /var/www/html/wordpress/wp-admin/includes/file.php on line 16 This error refers to a WordPress function, so I suppose that is not the real error; it must be something else, but.... 500 is hiding some other error, check logs @Dagon Yes, sorry. I received this error Warning: require_once(./wp-admin/includes/file.php): failed to open stream: No such file or directory in /var/www/html/wordpress/wp-content/plugins/file2.php on line 6 Fatal error: require_once(): Failed opening required './wp-admin/includes/file.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/wordpress/wp-content/plugins/file2.php on line 6 but the file exists, in the same directory, and it has permissions, so I don't understand the error. well that error seems clear @Dagon maybe not, because if I comment out that require, the error doesn't appear. 
I think it is a path error, but as I said, I tried several paths and nothing worked. Thanks for the answer. Seems your require path is incorrect... Check http://wordpress.stackexchange.com/questions/116366/what-is-the-right-way-to-include-a-wp-admin-file-in-your-theme @DpĚN Thx, I've checked several articles, posts, etc. and my problem has persisted for 2 days xD, so the .php is doing something strange, which is why I posted here :( @VicSeedoubleyew Thank you, that was the solution. @VicSeedoubleyew done, but I can't vote yet. I don't have enough reputation :( Oh right :) well, some day
common-pile/stackexchange_filtered
How to debug a preprocessor macro I recently came across this project. The code is largely written in C and the API consists of just a few C functions. Unfortunately the project seems to contain some bugs; in particular I keep getting "double free or corruption" errors. I am trying to use valgrind and gdb to find out what is wrong. The problem seems to be in the memory allocator. Unfortunately the first valgrind error occurs in some ~400-line-long preprocessor macro defined in a header. Unfortunately gdb can't break on the generated code. The stack trace is not very useful either. Is there any technique which can be used to deal with these kinds of errors? If it were me? Convert the macro into an inline function. If the project really uses them that extensively (and that didn't scare me off from using the code), I might try preprocessing to a file, and then compiling and debugging that file. A 400+ line macro Oo ! Try to generate the C source after the preprocessor pass (gcc -E option) and set breakpoints in this file. Well, they define a generic priority queue "the C way" :( Ehm, did I read that right? A 400-line preprocessor macro?? Strong recommendation: instantly forget about that thing! As usual, you should try to produce the simplest possible code that exhibits the problem and either show it here or submit it on the support page for the project if you want external help. If you want to deal with it alone, also write a short example, and if you really need to debug an external macro, let the compiler generate the intermediary step after macro pre-processing (-E option for gcc), remove the #line pragmas from the pre-processed source and debug that pre-processed code. What is the exact source code you are working on? I downloaded its directory and want to work on it. The online compiler Wandbox.org has a "CPP" mode that is very useful for experimenting with the C preprocessor. 
See an example here: https://wandbox.org/permlink/tFUsKMIXaQj8hhte You can do the same thing offline, with gcc -E -P or cl.exe /E
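As a tiny illustration of the first suggestion (convert the macro into an inline function so gdb can step into it), with a made-up macro standing in for the project's 400-line one:

```c
#include <assert.h>

/* Stand-in macro: gdb cannot set a breakpoint inside it, and a crash in
 * the expansion points at the single line where the macro is used. */
#define MAX3(a, b, c) ((a) > (b) ? ((a) > (c) ? (a) : (c)) \
                                 : ((b) > (c) ? (b) : (c)))

/* The same logic as a static inline function: the compiler typically
 * still inlines it at -O2, but built with -O0 -g, gdb can break on
 * max3, inspect the arguments, and step line by line. */
static inline int max3(int a, int b, int c)
{
    int m = a > b ? a : b;
    return m > c ? m : c;
}
```

The behavior is identical; only the debuggability changes.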
common-pile/stackexchange_filtered
Android SQLite Repeated Elements I have an issue with SQLite on Android. Right now, I'm pulling a JSON object from a server, parsing it, and putting each sub-object in a table with things such as the Name, Row_ID, unique ID, etc. using this code: public void fillTable(Object[] detailedList){ for(int i=0;i<detailedList.length;++i){ Log.w("MyApp", "Creating Entry: " + Integer.toString(i)); String[] article = (String[]) detailedList[i]; createEntry(article[0], article[1], article[2], article[3], article[4], article[5]); } } createEntry does what it sounds like. It takes 6 strings and uses cv.put to make an entry. No problems. When I try to order them, however, via: public String[] getAllTitles(int m){ Log.w("MyApp", "getTitle1"); String[] columns = new String[]{KEY_ROWID, KEY_URLID, KEY_URL, KEY_TITLE, KEY_TIME, KEY_TAGS, KEY_STATE}; Log.w("MyApp", "getTitle2"); Cursor c = ourDatabase.query(DATABASE_TABLENAME, columns, null, null, null, null, KEY_TIME); Log.w("MyApp", "getTitle3"); String title[] = new String[m]; Log.w("MyApp", "getTitle4"); int i = 0; int rowTitle = c.getColumnIndex(KEY_TITLE); Log.w("MyApp", "getTitle5"); for(c.moveToFirst();i<m;c.moveToNext()){ title[i++] = c.getString(rowTitle); Log.w("MyApp", "getTitle " + Integer.toString(i)); } return title; } each entry actually has many duplicates. I'm assuming as many duplicates as times I have synced. Is there any way to manually call the onUpgrade method, which drops the table and creates a new one, or a better way to clear out duplicates? Secondary question: is there any way to order in reverse? I'm ordering by time now, and the oldest added entries are first (smallest number). Is there a reverse to that? If you don't want duplicates in one column then create that column with the UNIQUE keyword. Your database will then check that you don't insert duplicates, and you can even specify what should happen in that case. 
I guess this would be good for you: CREATE TABLE mytable ( _id INTEGER PRIMARY KEY AUTOINCREMENT, theone TEXT UNIQUE ON CONFLICT REPLACE ) If you insert something into that table that already exists, it will delete the row that already has that item and then insert your new row. That also means that the replaced row gets a new _id (because _id is set to grow automatically - you must not insert that id yourself or it will not work). Your second question: you can specify the direction of the order if you append ASC (ascending) or DESC (descending). You probably want DESC. Cursor c = ourDatabase.query(DATABASE_TABLENAME, columns, null, null, null, null, KEY_TIME + " DESC"); Thanks. I had seen the "desc" tag before, but it was for people implementing other SQL classes than I am. Thanks!
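Both points of the answer can be seen end to end; sketched here with Python's built-in sqlite3 module for brevity (the table and column names are invented, not the asker's actual schema):

```python
import sqlite3

# In-memory database to demonstrate UNIQUE ON CONFLICT REPLACE + ORDER BY DESC
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE articles (
        _id  INTEGER PRIMARY KEY AUTOINCREMENT,
        url  TEXT UNIQUE ON CONFLICT REPLACE,
        time INTEGER
    )
""")
con.execute("INSERT INTO articles (url, time) VALUES ('a', 10)")
con.execute("INSERT INTO articles (url, time) VALUES ('b', 20)")
# same url again: the old ('a', 10) row is replaced, not duplicated
con.execute("INSERT INTO articles (url, time) VALUES ('a', 30)")
# newest entries first, answering the "reverse order" question
rows = con.execute("SELECT url, time FROM articles ORDER BY time DESC").fetchall()
```

After the three inserts the table holds two rows, and the query returns them newest-first.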
common-pile/stackexchange_filtered
Xcode - how to include a C library and header file in a Cocoa project? How do I add a C library to an Xcode Cocoa project? Or what is the best option? I don't want to copy them into the Cocoa project directory. I have a C project called a, which compiles into the library a.dylib and the header file a.h; the project is located in its own directory. I want to use this library from my Objective-C application in Xcode. How do I add the header file and library to my Xcode project? I can drag a.dylib into other frameworks, but what do I do with a.h? I figured it out. I pointed the search path in the project settings at project a's deployment directory (headers), either: as Header Search Paths, if used as <a/a.h>, or as User Header Search Paths, if used as "a/a.h". As for the library, I just drag it into the Xcode project and set it to refer to the library instead of copying it. Here are the steps for adding a header file test.h to your project. Here is the file's location: root -> Library -> include -> test.h Click on Build Settings. Find User Header Search Paths. Add your header file's location here: add the following value to the Debug, Release and Any Architecture fields: $(SRCROOT)/Library/include. Your project root is the folder that contains your project; it contains the .xcodeproj file. After adding the path you will be able to add the header like this: # include "test.h" You can drag them both, the .a and .h files, to Xcode, but make sure not to check "Copy items to project folder". As for referencing the header file, you'll need to put it in a place where you can add a path to it in your #include/#import statements. Is there a reason you don't want to copy the header to your project file? The reason for not copying it is that it is a separate project and I want to point to it, but I might include it later on as it might be easier to manage, thanks. Do you not want to copy the SOURCE, or the LIBRARY itself? To make #importing easier, I would copy the header file. Preferably I would not copy either. 
I generally leave projects separate and only include them in the path; this is for C/C++ projects using makefiles. I'm figuring out the best way for Cocoa and C libraries.
common-pile/stackexchange_filtered
Why are algebraic schemes called algebraic? In scheme theory, an algebraic scheme is the data of a scheme + a morphism of finite type to the spectrum of a field. Where does the term "algebraic scheme" come from? It does not seem intuitive to me (to me all schemes are equally algebraic, others may have a different opinion). Are there any historical accounts regarding this matter? Here is one mention of the category of algebraic schemes without an explicit reference to the base (though it appears that in that terminology all schemes are understood to come with a morphism to the spectrum of a field). In the Stacks Project, they use "algebraic $k$-schemes" (with $k$ being a field). @NajibIdrissi that is kind of funny, I did not think of that. Maybe so, maybe so. It's not a great terminology, and I actually never saw it (or didn't pay attention). +1 if you prefer "scheme of finite type over a field". @YCor reassuring to see some people of similar opinion. But actually now that I'm looking, I can see in Grothendieck (1960) http://www.numdam.org/article/SB_1958-1960__5__193_0.pdf the use of "schéma algébrique sur $A$" (algebraic scheme over $A$), where $A$ does not have to be a field (yet it seems to mean finite type). In the context of field extensions, finite type does not imply algebraic. finite type schemes over a field are exactly the schemes which correspond to algebraic varieties, so algebraic scheme = algebraic variety. I think that is the reason for the name. @GLe interesting theory indeed. Where I come from, varieties are usually separated, though I understand it can be different elsewhere. Varieties are separated, but so were schemes back when this terminology was introduced. Then, schemes=preschemes+separated. @SamirCanning OK, then change the question. Why are "algebraic preschemes" called algebraic? The reason is that an algebraic scheme over a field can be covered by a finite number of spectra of finitely generated algebras. 
Those affine schemes that arise as the spectrum of a finitely generated algebra over a field are called algebraic, and the intuition behind this is that they are defined by a finite set of polynomials in a finite number of symbols. I think this terminology and point of view is common in the context of group schemes; see for example the notes of J.S. Milne. "finite type schemes over a field are exactly the schemes which correspond to algebraic varieties" - No: they also have to be reduced and separated (and some people also assume irreducible). @Qfwfq Yes, the one you cite appears to be the most common definition. I was following the definition of alg. variety given in Qing Liu's book. It doesn't change anything for the purpose of answering the question. Algebraic varieties (whatever you want them to be) and algebraic schemes share the "algebraic" adjective because they can be covered by finitely many spectra of finitely generated algebras. @GLe: yep, sorry. I personally wouldn't call "variety" something nonreduced, but as you say it's just terminology.. @FrançoisBrunault Is it not a variant of the Nullstellensatz that a finite type field extension is finite, hence algebraic? @red_trumpet It depends on whether "finite type" means as field extension or ring extension. In the second case we have algebraicity by the Nullstellensatz, but not in the first case.
common-pile/stackexchange_filtered
How can I resolve ERR_REQUIRE_ESM error when using this simple node-fetch query? I am trying to do a simple API fetch using node-fetch. Getting the following error: internal/modules/cjs/loader.js:1089 throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath); ^ Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: C:\Users\raben\OneDrive\Work\PowerBI Embedded\Fetch_PBI_Access_Token\node_modules\node-fetch\src\index.js require() of ES modules is not supported. require() of C:\Users\raben\OneDrive\Work\PowerBI Embedded\Fetch_PBI_Access_Token\node_modules\node-fetch\src\index.js from C:\Users\raben\OneDrive\Work\PowerBI Embedded\Fetch_PBI_Access_Token\index.js is an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which defines all .js files in that package scope as ES modules. Instead rename C:\Users\raben\OneDrive\Work\PowerBI Embedded\Fetch_PBI_Access_Token\node_modules\node-fetch\src\index.js to end in .cjs, change the requiring code to use import(), or remove "type": "module" from C:\Users\raben\OneDrive\Work\PowerBI Embedded\Fetch_PBI_Access_Token\node_modules\node-fetch\package.json. 
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1089:13) at Module.load (internal/modules/cjs/loader.js:937:32) at Function.Module._load (internal/modules/cjs/loader.js:778:12) at Module.require (internal/modules/cjs/loader.js:961:19) at require (internal/modules/cjs/helpers.js:92:18) at Object.<anonymous> (C:\Users\raben\OneDrive\Work\PowerBI Embedded\Fetch_PBI_Access_Token\index.js:2:15) at Module._compile (internal/modules/cjs/loader.js:1072:14) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1101:10) at Module.load (internal/modules/cjs/loader.js:937:32) at Function.Module._load (internal/modules/cjs/loader.js:778:12) { code: 'ERR_REQUIRE_ESM' } My node version is v14.17.6 I have installed node-fetch Here is my index.js: const Fetch = require('node-fetch') fetch("https://api.github.com/users") .then((res) => res.json()) .then((res) => console.log(res)); Here is my package.json: { "name": "Fetch_PBI_Access_Token", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [], "author": "", "license": "ISC", "dependencies": { "node-fetch": "^3.0.0" } } If I add "type": "module" I then get the following error: This file is being treated as an ES module because it has a '.js' file extension and 'C:\Users\raben\OneDrive\Work\PowerBI Embedded\Fetch_PBI_Access_Token\package.json' contains "type": "module". To treat it as a CommonJS script, rename it to use the '.cjs' file extension. 
at file:///C:/Users/raben/OneDrive/Work/PowerBI%20Embedded/Fetch_PBI_Access_Token/index.js:2:15 at ModuleJob.run (internal/modules/esm/module_job.js:170:25) at async Loader.import (internal/modules/esm/loader.js:178:24) at async Object.loadESM (internal/process/esm_loader.js:68:5) It will help you to downgrade your node-fetch lib to 2.x, and the error will disappear. The error really says it all. Since node-fetch is an ES module, you shouldn't use the require syntax to import it, but the import syntax: import fetch from 'node-fetch'; Note: Older versions of node-fetch are still CommonJS packages (i.e., can be used with require), so if you downgrade your dependency to some 2.x version, your code should work as-is. I get the error: Cannot use import statement outside a module
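If you want to stay on node-fetch 3.x without renaming files or adding "type": "module", a common workaround is a lazy dynamic import() wrapper, which is legal inside CommonJS code (sketch; the actual fetch call is left commented out so nothing is loaded or fetched here):

```javascript
// index.js stays CommonJS (no "type": "module" in package.json).
// node-fetch v3 is ESM-only, so require() throws ERR_REQUIRE_ESM,
// but dynamic import() is allowed from CommonJS and loads the module
// lazily on the first call.
const fetch = (...args) =>
  import('node-fetch').then(({ default: f }) => f(...args));

// usage, unchanged from the question:
// fetch('https://api.github.com/users')
//   .then((res) => res.json())
//   .then((res) => console.log(res));
```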
common-pile/stackexchange_filtered
Suppose $T,S$ are two non-identity elements in $PSL(2,\mathbb{R})$ and $TS=ST$. Then the numbers of fixed points of $S$ and $T$ are the same. What I can see is that $T$ maps the fixed point set of $S$ to itself, and likewise $S$ maps the fixed point set of $T$ to itself. But I cannot proceed further. I was actually looking at the proof of a stronger result that says the fixed point sets are equal if and only if the elements commute. The proof uses the above statement. I seem to be stuck at this point. You meant $T,S$ are non-trivial. Go to $PGL_2(\Bbb{C})$, if possible diagonalize, find the elements commuting with $\pmatrix{a&0\\ 0&1}, a\in \Bbb{C}^*$; the other case is $\pmatrix{1&1\\ 0&1}$, again find the elements commuting with it. @reuns, Yes, I meant $S,T$ are non-identity. @reuns, I don't understand what you are trying to say. I'm really just spelling out the comment of reuns in greater detail. The fixed points of $S$ (resp. $T$) on $\mathbb P^1_{\mathbb R}$ correspond to eigenvectors of (lifts of) $S$ (resp. $T$). This is because for $p \in \mathbb R^2 \setminus \{0\}$, one has $$[Sp] = [p] \iff \text{there is } \lambda \neq 0 \text { such that } Sp = \lambda p.$$ Here, we use square brackets to denote the corresponding point in $\mathbb P^1_{\mathbb R}$. Can you show that the number of fixed points of $S$ (resp. $T$) is the number of eigenvalues of $S$ (resp. $T$)? (Here, you use that $S$ and $T$ are not the identity.) Afterwards, compare eigenspaces of $S$ and $T$ using $ST = TS$. You should update your answer by taking into account the possibility of complex eigenvalues.
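The eigenspace comparison at the end of the answer can be sketched for the case where a lift of $S$ to $SL(2,\mathbb{R})$ has two distinct real eigenvalues $\lambda \neq \lambda^{-1}$ (the single-eigenvalue, parabolic case needs a separate but similar argument): if $Sp = \lambda p$ and $ST = TS$, then $$S(Tp) = T(Sp) = \lambda\,(Tp),$$ so $Tp$ lies in the one-dimensional $\lambda$-eigenspace of $S$, i.e. $Tp \in \mathbb{R}p$. Hence both eigenlines of $S$ are eigenlines of $T$. A non-identity element of $PSL(2,\mathbb{R})$ cannot have two independent eigenvectors with equal eigenvalues (it would be scalar, hence trivial in $PSL$), so $T$ has exactly these two fixed points, giving $\operatorname{Fix}(T) = \operatorname{Fix}(S)$.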
common-pile/stackexchange_filtered
Integral related to the modified Bessel function I would like to solve the integral $$F_n(\kappa,\theta,\phi)=\int_{-\pi}^{\pi}{\rm e}^{\kappa\cos(x-\theta)}\cos(n\, x-\phi)\,{\rm d}x$$ that appears related to the identity $$I_n(\kappa)=\frac{1}{\pi}\int_{0}^{\pi}{\rm e}^{\kappa\cos(x)}\cos(n\, x)\,{\rm d}x,$$ where $I_n(\kappa)$ is the Modified Bessel Function of the first kind. Any ideas? After elementary trig manipulations it seems that the problem could be solved if the integral $$\int_{-\pi}^{\pi}e^{\kappa \cos(x)}\sin(n\, x)\,{\rm d} x$$ were known. And this integral is zero, so I think the problem is solved... Why is the integral zero for any $\kappa$ and $n$? Reduce to known integral Assume $n$ is a non-negative integer. Then the integrand is periodic with period $2\pi$. Then: $$ F_n= \int_{-\pi}^\pi \exp\left( \kappa \cos(x-\theta) \right) \cos(n x - \phi) \mathrm{d}x = \int_{-\theta-\pi}^{-\theta+\pi} \exp\left( \kappa \cos(x) \right) \cos(n x + n \theta- \phi) \mathrm{d}x $$ The latter integral, using periodicity, can be reduced to pieces of the $(-\pi, \pi)$ interval, which can be rearranged, so that $$ \int_{-\pi}^\pi \exp\left( \kappa \cos(x-\theta) \right) \cos(n x - \phi) \mathrm{d}x = \int_{-\pi}^{+\pi} \exp\left( \kappa \cos(x) \right) \cos(n x + n \theta- \phi) \mathrm{d}x $$ Denote $\varphi = n\theta - \phi$, and use $\cos(n x + \varphi) = \cos(n x) \cos(\varphi) - \sin(n x) \sin(\varphi)$ to write: $$ \begin{eqnarray} F_n &=& \cos(\varphi) \int_{-\pi}^{\pi} \exp\left( \kappa \cos(x) \right) \cos(n x) \mathrm{d} x - \sin(\varphi) \int_{-\pi}^{\pi} \exp\left( \kappa \cos(x) \right) \sin(n x) \mathrm{d} x \\ &=& 2 \pi \cos(\varphi) I_n(\kappa) - \sin(\varphi) \cdot 0 \end{eqnarray} $$ The last integral is zero as the integral of the odd function $\sin(n x) \exp(\kappa \cos(x))$ over a symmetric domain. 
Thus: $$ F_n(\kappa, \theta, \phi) = 2 \pi \cos(n \theta - \phi) I_n(\kappa) $$

Differentiate under the integral sign

Alternatively, we could establish that $F_n$ satisfies an ODE in $\kappa$: $$ \begin{eqnarray} \kappa^2 \partial_\kappa^2 F_n + \kappa \partial_\kappa F_n &=& \int_{-\pi}^\pi \exp(\kappa \cos(x-\theta)) \left(\kappa^2 \underbrace{\cos^2(x-\theta)}_{1-\sin^2(x-\theta)}+ \kappa \cos(x-\theta) \right) \cos(n x - \phi) \mathrm{d} x \\ &=& \kappa^2 F_n - \int_{-\pi}^\pi \left(\frac{\mathrm{d}^2}{\mathrm{d}x^2} \exp(\kappa \cos(x-\theta)) \right) \cos(n x-\phi) \mathrm{d} x \\ &\stackrel{\text{by parts}}{=}& (\kappa^2 + n^2) F_n + \text{boundary terms} \end{eqnarray} $$

where the boundary terms vanish if $n$ is an integer: $$\begin{eqnarray} \text{boundary terms} &=& - \left(\left. \left(\frac{\mathrm{d}}{\mathrm{d}x} \exp(\kappa \cos(x-\theta)) \right) \cos(n x-\phi) \right|_{x=-\pi}^{x=\pi} \right) \\ &\phantom{=}& - \left(\left. n \exp(\kappa \cos(x-\theta) ) \sin(n x-\phi) \right|_{x=-\pi}^{x=\pi} \right) \\ &=& 2 \sin(\pi n) \exp( -\kappa \cos(\theta) ) \left( n \cos(\phi) - \kappa \sin(\theta) \sin(\phi) \right) = 0 \end{eqnarray} $$

Thus $F_n$ satisfies the differential equation of $I_n(\kappa)$: $$ \kappa^2 \frac{\mathrm{d}^2}{\mathrm{d} \kappa^2} F_n + \kappa \frac{\mathrm{d}}{\mathrm{d} \kappa} F_n - (\kappa^2 + n^2) F_n = 0 $$

Because $F_n$ is finite at $\kappa = 0$: $$ \left.F_n \right|_{\kappa = 0} = \int_{-\pi}^\pi \cos(n x -\phi) \mathrm{d} x = 2 \cos(\phi) \frac{\sin(\pi n)}{n} = 2 \pi \cos(\phi) \delta_{n,0} $$ we conclude that $$ F_n = g_n(\theta, \phi) I_n(\kappa) $$ Thus $g_{0}(\theta, \phi) = 2 \pi \cos(\phi)$.
One can similarly establish an ordinary differential equation for $F_n$ as a function of $\theta$: $$ \frac{\mathrm{d}^2}{\mathrm{d} \theta^2} F_n = \int_{-\pi}^\pi \left(\frac{\mathrm{d}^2}{\mathrm{d}x^2} \exp(\kappa \cos(x-\theta)) \right) \cos(n x-\phi) \mathrm{d} x = -n^2 F_n $$ and trivially $$ \frac{\mathrm{d}^2}{\mathrm{d} \phi^2} F_n = - F_n $$ Combining these, with initial conditions, we arrive at the same result: $$ F_n(\kappa, \theta, \phi) = 2\pi \cos\left(n \theta - \phi\right) I_n\left( \kappa \right) $$ Sahsha, thanks for the detailed explanation.
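As a quick sanity check (not part of the derivation), the closed form can be compared against numerical quadrature. This sketch assumes SciPy is available; the parameter values are arbitrary test choices:

```python
# Numerical check of F_n(kappa, theta, phi) = 2*pi*cos(n*theta - phi)*I_n(kappa)
# for integer n. Uses scipy.integrate.quad for the integral and
# scipy.special.iv for the modified Bessel function of the first kind.
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

def F(n, kappa, theta, phi):
    integrand = lambda x: np.exp(kappa * np.cos(x - theta)) * np.cos(n * x - phi)
    value, _ = quad(integrand, -np.pi, np.pi)
    return value

n, kappa, theta, phi = 3, 1.7, 0.4, 1.1           # arbitrary test values
closed_form = 2 * np.pi * np.cos(n * theta - phi) * iv(n, kappa)
print(abs(F(n, kappa, theta, phi) - closed_form))  # agrees to quadrature accuracy
```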
How to know which base OS is used in httpd:latest image of apache

How do I know which base OS is used in the httpd (latest version) image of Apache? Thanks in advance for any reply.

Does this answer your question? How to find out the base image for a docker image

You can also look for various marker files in /etc in the container filesystem. This doesn't really seem like a programming-related question, though.

It's based on debian:buster-slim

Some Docker images on Docker Hub supply links to Dockerfile sources: https://hub.docker.com/_/httpd https://github.com/docker-library/httpd/blob/master/2.4/Dockerfile Alternatively, start a container from the image and run cat /etc/*release

How did you find the link https://github.com/docker-library/httpd/blob/master/2.4/Dockerfile? Also, what is the way to know the base OS of other images like nginx, tomcat etc.? So, my question is generic: I want to know whether there is any standard process laid down to find the base OS used in different images.

Go to https://hub.docker.com/_/httpd and look for anything that hints at where the source of the Dockerfile could be hosted (usually GitHub). Finding out the OS of the image can always be done from inside the running container (see cat /etc/*release).
Crystal radio as user interface problem

If I built a coil like the one in this crystal set, how could I use a micro to detect which point the alligator clip is on? Basically I want to use the crystal set as a user interface for an audio playback system.

Thanks for the responses everyone. As pointed out, I'm partly trying to gauge how easy it would be to do with the coil or whether to just fake it. Although feasible, it sounds like it will be easier to fake it. Resistance is a good solution; it also might be interesting to treat the coil like a resonator.

Try using a uC to give it a rising edge on the end clipped into and measure the rising edge on the other side. I could simulate this and see if it was feasible, but it would probably save hours to use a function generator and an oscilloscope, or just use a uC. A capture pin on a PIC and a PWM output would be my choice: pin goes high from PWM on a PIC at timer overflow, capture is relative to timers. Just an idea; if I think of any others I will let you know. If it could be done, you could have two simple interrupts and just check the variable they wrote to before doing any audio out. -Max

Thinking about it, a Schmitt trigger is probably the piece I left out; by making sure that the trigger level is high enough, I am sure you could accurately measure a delay. Whether the difference in delay would be enough, I am not sure.

If you used a very thin gauge wire you could measure the resistance from one end of the coil to the connection point. However, even with 28 gauge wire you're looking at a total of 6 ohms for the entire coil, so reading that with a micro is going to require additional components. Alternately you can measure the inductance of the coil from one end to the tap, which would vary based on which tap was chosen. However, you're just building an interface. Does it have to be a real coil, or does it just have to look like a coil?
Were I in your shoes, I'd cut the back side of the coil around the taps and use one of many methods for determining which tap was connected. This could be done with one input per tap, or with an analog input if you added resistors to the coil so the resistance significantly increased between each tap.

These are interesting solutions, but I'm not sure the differences will be enough (especially with resistance) to get an accurate measurement with common parts. Inductance may work better with a filter to slow down the discharge of the field. Longer delay = greater inductance, and you could adjust it to your measurement resolution.

Agreed, one would need to include additional components (op-amp, constant current source, for one solution) to measure the small resistance changes. The inductance method also requires additional circuitry to measure the inductance change, and can be significantly affected by other nearby objects (if the user is wearing a watch, it may affect the measurement enough to believe it's on a different tap).

I would have a jumper wire for every possible location, and select from them using... a rotary switch. Since this isn't even a real radio (just an interface), you could use a multi-pole switch to select the audio playback function.
Proxy must be string in package.json in React, create-react-app (2.0)

I found a similar question here: When specified, "proxy" in package.json must be a string, but the solution is not working for me. Please help me set up a proxy in create-react-app (2.0). I use "proxy": "http://localhost:8888" in my package.json, but sometimes it gives an error like: Error connect from http://localhost:3000 to http://localhost:8888 How can I avoid this kind of error?

This means your proxy server is not running. Please make sure it's started before or alongside your application! If your node server is being restarted by Nodemon or something similar, it's expected that you see these failed requests every once in a while.

In the client-side directory: npm i http-proxy-middleware --save

Create a setupProxy.js file in client/src. No need to import this anywhere; create-react-app will look for this file. Add your proxies to it:

const proxy = require("http-proxy-middleware");

module.exports = function (app) {
  app.use(proxy(["/api", "/auth/google"], { target: "http://localhost:8888" }));
};

We are saying: make a proxy, and if anyone tries to visit the route /auth/google or /api (specify your routes) on our React server, automatically forward the request on to localhost:8888.
es6 use ${} for multiple state in react

For some reason I have to do this:

for (let i = 0; i <= 6; i++) {
  price.push({
    min_price: `this.state.special_${i}_min`,
    max_price: `this.state.special_${i}_max`
  });
}

But it's not what I expect; it doesn't work. It becomes a string instead of getting the value of my states.

You are just assigning the string to those properties. Try using square brackets:

price.push({
  min_price: this.state[`special_${i}_min`],
  max_price: this.state[`special_${i}_max`]
});

You can write it like this in ES6:

for (let i = 0; i <= 6; i++) {
  price.push({
    min_price: this.state[`special_${i}_min`],
    max_price: this.state[`special_${i}_max`],
  });
}

For ES5:

for (let i = 0; i <= 6; i++) {
  price.push({
    min_price: this.state["special_" + i + "_min"],
    max_price: this.state["special_" + i + "_max"],
  });
}
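The difference between the two forms can be seen in isolation. This sketch uses a plain object in place of React state (the keys are hypothetical):

```javascript
// A template literal builds the *key name*; bracket notation then looks it up.
// Putting the whole expression inside backticks just produces a string.
const state = { special_0_min: 1, special_0_max: 9 };
const i = 0;

const asString = `state.special_${i}_min`;  // the string "state.special_0_min"
const asValue = state[`special_${i}_min`];  // the number 1

console.log(typeof asString, typeof asValue); // string number
```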
Does this count as a proper derivation of the formula for work and kinetic energy? $v_{av}$ = average velocity, $d$= distance, $a$ = acceleration, $m$ = mass Given: $$v_{av}*t=d$$ $$t=\frac{v_f-v_i}{a}$$ $$v_{av}=\frac{v_f+v_i}{2}$$ Then: $$\frac{v_f+v_i}{2}*\frac{v_f-v_i}{a}=d$$ $$\frac{v_f^2-v_i^2}{2a}=d$$ $$\frac12(v_f^2-v_i^2)=ad$$ multiply by m to adjust for the amount of mass that accelerated $$\frac12m(v_f^2-v_i^2)=mad$$ And if not, why is this not proper as a derivation of kinetic energy and work? What's incorrect, conceptually, about my process? This has much in common with the usual algebra-based approach, my only critique would be that the focus is on symbol manipulation instead of talking about the meaning of what you're doing (and that's a style issue rather than something fundamental). "Serious" treatments use calculus, because it makes it explicit that you don't need periods of constant acceleration, but again that's a nitpick. @dmckee Would it be fine if I post a derivation more focused on the meaning of the symbols then? I'd rather not make a separate thread. This derivation works where $F$ is constant over the path, then $W=\int_{path}Fds=F\Delta s$ but not for the general case where $F$ (and $a$) are not constant. @Gert: That's mostly what dmckee was hinting at with the "usual algebra-based approach" This is not a proper derivation. 
At a fundamental level, there are at least three important points that are not taken into account by this approach:

As soon as you consider a second mass point, it is somewhat difficult to adjust (in a non-arbitrary way) the derivation to obtain the correct energy term related to the angular momentum and/or rigid-body rotation (think about a satellite in orbit around a planet, for example).

This approach, as it is, cannot lead you to the correct answer if you have a force which acts orthogonally to the direction of motion of the mass; basically, it misses the point of vector calculus: would you use a scalar product, a vector product, or a combination of the two, and why? Without any further hypothesis, it is impossible to decide (think about a mass in uniform circular motion, or a charge moving in a magnetic field).

It neglects one of the most important issues: the fact that positions, velocities and potentials are, in the most general case, functions of time. Energy is not about average quantities. Try to write down the energy associated with a falling mass in a uniform gravitational field, differentiate it with respect to time, and see what happens... If you do the same using your approach, you will end up with meaningless results.

I think this is a harsh answer. As mentioned by @dmckee, it is appropriate for an algebra-based approach. It only deals with the one-dimensional case with constant acceleration, using algebra instead of calculus.

@Frédéric: I didn't mean to be rude; I am sorry if it seemed like that. The point I make is not about dimensionality (a uniform circular motion is a one-dimensional problem). What I wanted to say is that using an algebra-based approach can lead to wrong results, even for a 1-d problem, if the proper assumptions are not made. In any case, I am sorry again if the message that came across was a different one, and thank you for pointing it out.
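For contrast with the algebra-only route, here is the calculus version mentioned in the comments, which drops the constant-acceleration assumption (one-dimensional motion, $F$ the net force):

```latex
\begin{align*}
W &= \int_{x_i}^{x_f} F \,\mathrm{d}x
   = \int_{x_i}^{x_f} m \frac{\mathrm{d}v}{\mathrm{d}t} \,\mathrm{d}x
   = \int_{v_i}^{v_f} m v \,\mathrm{d}v
   && \text{using } \mathrm{d}x = v\,\mathrm{d}t \\
  &= \tfrac{1}{2} m v_f^{2} - \tfrac{1}{2} m v_i^{2}.
\end{align*}
```

The change of variables from $x$ to $v$ is what replaces the constant-acceleration kinematic identities used in the algebraic derivation.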
Passing query parameter to props of custom component in NextJS

I'm currently making a profile page wherein the route /profile/displaynamehere will display a page where one of its components is a basic information page that shows the display name of the user. The component is called BasicInfo and accepts a prop called displayName. Here's how it looks:

export default function Profile() {
  const router = useRouter();
  const displayNameQuery = router.query.displayName;
  return (
    ...
    <BasicInfo displayName={displayNameQuery} />
    ...
  )
}

The problem is, displayName (or {displayNameQuery} in this context) is undefined whenever I try to console.log it. Is there a way I can pass the query parameter as props to my component?

Can you please share the code where you are setting the displayNameQuery route? Are you doing that using router.push()?

displayName should match the file name as well. Since it is a dynamic parameter, your folder structure should be pages/profile/[displayName].js. Can you confirm if that's the case?

@AnkitSaxena No, I'm not using router.push. I'm merely using useRouter from next/router.

@PsyGik Yes, that is how my page is structured.

Does this answer your question: Next.js router is returning query parameters as undefined on first render?
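One common cause, as the linked question suggests, is that on the first client render router.query is still empty; Next.js exposes router.isReady for exactly this case. A sketch (component and prop names follow the question; the import path for BasicInfo is a hypothetical placeholder):

```jsx
// Sketch: wait until the router has hydrated the query params before using them.
import { useRouter } from "next/router";
import BasicInfo from "../components/BasicInfo"; // hypothetical path

export default function Profile() {
  const router = useRouter();

  // On the first render router.query is {}; isReady flips to true once the
  // actual query params (including dynamic segments) are available.
  if (!router.isReady) return null;

  return <BasicInfo displayName={router.query.displayName} />;
}
```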
Excel vlookup 3 columns from sheet A and match with the two columns in sheet B, and give 3rd column in sheet B I would like to do a vlookup or any function to match data in two sheets (sheet A and sheet B). This is my sheet A: This is my sheet B (Imagine the column is A, B, C, instead of E,F,G in the image): I want the answer in sheet B, column C. E.g. the result should be like below. I tested the function below, but not working. =VLOOKUP($A1+$B1,SheetA!$A:$C,3,FALSE) yeah. I'm able to do vlookup for 1 column, but not matching two criteria. =INDEX(SheetA!$C$3:$C$6, MATCH(1, (SheetB!E3 = SheetA!$A$3:$A$6) * (SheetB!F3 = SheetA!$B$3:$B$6),0)) and press CTRL+SHIFT+ENTER after typing into formula bar, because it's an array formula. =VLOOKUP(SheetB!$A3, SheetA!$A$3:$A$6,3,0) <- not working for me. These will only work exactly as written if your sheet names are exactly as you indicated, and your data matches your images, with data starting in row 1 in both sheets. Mako212. Yeah! The index works! thanks! You can use an array formula version of INDEX/MATCH to match on multiple criteria (you must press CTRL+SHIFT+ENTER after typing it in the formula bar to make it an array formula): =INDEX(SheetA!$C$3:$C$6, MATCH(1, (SheetB!E3 = SheetA!$A$3:$A$6) * (SheetB!F3 = SheetA!$B$3:$B$6),0)) Each set of criteria goes in parenthesis within MATCH, separated by *, with the value on the left, and the range to match from on the right of the = sign.
R - select cases so that the mean of a variable is some given number I previously worked on a project where we examined some sociological data. I did the descriptive statistics and after several months, I was asked to make some graphs from the stats. I made the graphs, but something seemed odd and when I compared the graph to the numbers in the report, I noticed that they are different. Upon investigating further, I noticed that my cleaning code (which removed participants with duplicate IDs) now results with more rows, e.g. more participants with unique IDs than previously. I now have 730 participants, whereas previously there were 702 I don't know if this was due to updates of some packages and unfortunately I cannot post the actual data here because it is confidential, but I am trying to find out who these 28 participants are and what happened in the data. Therefore, I would like to know if there is a method that allows the user to filter the cases so that the mean of some variables is a set number. Ideally it would be something like this, but of course I know that it's not going to work in this form: iris %>% filter_if(mean(.$Petal.Length) == 1.3) I know that this was an incorrect attempt but I don't know any other way that I would try this, so I am looking for help and suggestions. No. Say you had 3 cases c(1, 2, 3) and filtered to mean of 2. How would it know if it's all cases, just 1 and 3, or just 2? @caldwellst It wouldn't but I know the exact sample size, so if the sample size for your example was 2, it could only be c(1, 3). Other combinations would produce either a mean of 1.5 or 2.5. It sounds as though you want to find which 702 out of 730 participants have the mean that you found previously. In other words, which 702 participants have a sum of (702 * old mean). Since there are 2.8 * 10^50 ways to select 702 elements from a set of 730 you can't do this by an exhaustive search. Have you no other clues to go on? @AllanCameron That is correct! 
I previously tried writing a for loop that would go through all possible combinations of participants, but that would take several trillions of years to compute. Unfortunately I don't have any other clues, just descriptive stats for 20 variables. Is there any other approach I could take to solve this? @J.Doe see my answer below I'm not convinced this is a tractable problem, but you may get somewhere by doing the following. Firstly, work out what the sum of the variable was in your original analysis, and what it is now: old_sum <- 702 * old_mean new_sum <- 730 * new_mean Now work out what the sum of the variable in the extra 28 cases would be: extra_sum <- new_sum - old_sum This allows you to work out the relative proportions of the sum of the variable from the old cases and from the extra cases. Put these proportions in a vector: contributions <- c(extra_sum/new_sum, old_sum/new_sum) Now, using the functions described in my answer to this question, you can find the optimal solution to partitioning your variable to match these two proportions. The rows which end up in the "extra" partition are likely to be the new ones. Even if they aren't the new ones, you will be left with a sample that has a mean that differs from your original by less than one part in a million.
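The bookkeeping in the answer above is just arithmetic on the two reported means. A sketch with invented numbers (the real means are confidential in the question, so these values are placeholders):

```python
# Recover the total contributed by the 28 extra participants from the two
# sample means. The mean values here are invented for illustration only.
old_n, new_n = 702, 730
old_mean, new_mean = 3.40, 3.52   # placeholders, not the real data

old_sum = old_n * old_mean
new_sum = new_n * new_mean
extra_sum = new_sum - old_sum            # sum over the 28 unexplained rows

# Target proportions for the two-way partition described in the answer:
contributions = [extra_sum / new_sum, old_sum / new_sum]
print(extra_sum, contributions)
```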
Time independent Kerr metric

The Kerr metric expressed in terms of polar coordinates $r,\theta,\phi$, such that $x = r\sin(\theta)\cos(\phi)$, $y = r\sin(\theta)\sin(\phi)$, $z = r\cos(\theta)$. Then the Kerr metric is given as \begin{align*} ds^2 = &-\left(1 - \frac{2GMr}{r^2+a^2\cos^2(\theta)}\right) dt^2 + \left(\frac{r^2+a^2\cos^2(\theta)}{r^2-2GMr+a^2}\right) dr^2 + \left(r^2+a^2\cos^2(\theta)\right) d\theta^2\\ &+ \left(r^2+a^2+\frac{2GMra^2}{r^2+a^2\cos^2(\theta)}\right)\sin^2(\theta) d\phi^2 - \left(\frac{4GMra\sin^2(\theta)}{r^2+a^2\cos^2(\theta)}\right) d\phi\, dt \end{align*} where $a \equiv S/M$ is the object's angular momentum per unit mass, and $G$ is the gravitational constant. This is an exact solution of the empty-space Einstein equation.

Say we consider the metric at a constant time $t_0$. Is it then possible to define the Kerr metric on a submanifold of spacetime, say only in space? If so, how can I accomplish this? Is it as simple as dropping the time-dependent terms, i.e. \begin{align*} ds^2 = & \left(\frac{r^2+a^2\cos^2(\theta)}{r^2-2GMr+a^2}\right) dr^2 + \left(r^2+a^2\cos^2(\theta)\right) d\theta^2\\ &+ \left(r^2+a^2+\frac{2GMra^2}{r^2+a^2\cos^2(\theta)}\right)\sin^2(\theta) d\phi^2 \end{align*} or do I need to use the induced metric to describe the metric on the submanifold?

Edit: I solved the geodesic differential equations using a "time independent" Kerr metric with $a = 0$ (this reduces the Kerr metric to the Schwarzschild metric), and used the Schwarzschild radius to define the other parameters. Most plots I got spiraled around a singularity at the origin. Here is a plot where I set $\phi$ to a constant; the z-axis becomes the "time".

Update: I have found the following figure, which seems to agree with my first figure.
Strategies for Direct Visualization of Second-Rank Tensor Fields by Werner Benger and Hans-Christian Hege

At heart, something like this has to be properly understood as a 3+1 decomposition of the spacetime, which can be understood using the ADM decomposition: https://en.wikipedia.org/wiki/ADM_formalism Note that different choices for the time coordinate will give you radically different 3-geometries.

The metric is telling you how to calculate the proper time along a path of your choosing. If you select a path where the time is everywhere constant, then as you integrate along that path $dt = 0$ and any terms involving $dt$ disappear. It is as simple as that.

It's nice to know that. That simplifies things a lot for me. I intend to visualize the metric by solving the geodesic differential equations, and it makes things much easier if I can simply consider the problem in 3D.

@imranal: I don't think you can actually do that. Geodesics in space are not the same as geodesics in spacetime.

@imranal: I think what you're describing is a bit different from what I thought you meant. The hypersurface of constant time is a Riemannian manifold, with a metric obtained by setting $dt=0$. You could solve the geodesic equation for this manifold, and maybe this is a good way to understand the shape of the manifold. However, the curves you get have no physical relevance in the sense that they are not physically meaningful trajectories.

@javier: Can I drop one of the space components instead, say $\phi$?
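As a consistency check on the constant-time slice discussed above: setting $a = 0$ (and $dt = 0$) should recover the spatial part of the Schwarzschild metric, and it does:

```latex
\left. ds^2 \right|_{t = t_0,\; a = 0}
  = \left(1 - \frac{2GM}{r}\right)^{-1} dr^2
  + r^2\, d\theta^2
  + r^2 \sin^2(\theta)\, d\phi^2
```

Here the $dr^2$ coefficient follows from $\frac{r^2}{r^2 - 2GMr} = \left(1 - \frac{2GM}{r}\right)^{-1}$, matching the Schwarzschild spatial geometry that the question's edit uses for its plots.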
Communication between sibling components

I have the following structure:

<master> <frame></frame> <toolbox></toolbox> </master>

master -> master component; frame -> frame component (which loads an iframe); toolbox -> toolbox component

I add items from the toolbox to the iframe with drag & drop, and for each dropped element I add a click event via JS addEventListener. When I click an element I want to return some data, grab it into my toolbox component and display the clicked element. I've tried using a service with EventEmitter (I know it's bad practice), but even if the data is received in the toolbox component, I can't make any changes to component variables. By default the propertiesPanel Input is set to true, therefore my div is hidden.

<div class="selected-item" [class.is-hidden]="propertiesPanel"></div>

Then I have the component:

@Input() propertiesPanel = true;

ngOnInit() {
  // _connector is the Service I'm using for communication
  this._connector.get().subscribe(
    (data) => {
      this.propertiesPanel = false; // it changes the value to false
      this.addElements(); // dummy function - will have to add properties to the panel depending on which element I click
    }
  );
}

So, even if I set the Input to false, there's no action in the actual DOM whatsoever and I don't know why (the is-hidden class is not removed). Any thoughts? Thanks

Solution: Apparently you need to use NgZone inside subscribe() in order to re-render the view... sort of.

this._ngZone.run(() => {
  this.propertiesPanel = false;
});

Did you confuse propertiesButton and propertiesPanel?

No, just copied it wrong... sorry

Where are you getting the @Input data from (I mean the parent component)?

The input is from the toolbox component. I'm using it only there.

It looks fine to me. Can you reproduce the error using Plunker?

It's kind of tricky because of the iframe. But propertiesPanel changes to false and nothing happens in the DOM — the class is not removed. Maybe because I'm inside subscribe? I don't know... It must work.
Have a look at this Plunker: https://plnkr.co/edit/QJLSXcpcELw3ZbMhlbyJ?p=preview

I know it works OK like that, but not inside a service subscription.

Yes, it does. this refers to the current class even there.

Apparently you need to use NgZone inside subscribe() in order to re-render the view... sort of.

this._ngZone.run(() => {
  this.propertiesPanel = false;
});
Nginx proxy server cache images from apache server

I have 2 servers, one Apache and one Nginx. I set up Nginx as a proxy:

location / {
    proxy_pass http://apachesample.com:8080/;
    include /etc/nginx/conf.d/proxy.conf;
    proxy_buffering off;
    proxy_cache web-cache;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;

    # try to add this section to add a header and cache images
    location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        expires max;
        add_header Pragma public;
        #add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    }
}

All images are not found when I try to add the expires header. How can I set a cache expires header for images? Do I have to do it on the Apache server?

FIXED: enable mod_rewrite

I am sorry but the question is a little unclear. Do you want Nginx to cache images? What is the current behaviour you see with the configuration you have posted? What do you mean when you say images are not found?

When I try to open an image via a link like domain.com/image.jpg, I get a 404 Not Found. All the source code is on the Apache server, so I want to cache images on the Nginx server.

I am not able to point out any issues in your configuration. Can you check this question and see if it helps?

I did not enable mod_rewrite on the Apache server.

Is the issue fixed after you enabled mod_rewrite? If yes, can you please update your question with that information?
awk extract a column and output a file named by the column header

I have a .txt file like this:

col1 col2 col3 col4
1 3 4 A
2 4 6 B
3 1 5 D
5 3 7 F

I want to extract every single column (i) after column 1 and output column 1 and column i into a new file named by the header of column i. That means I will have three output files, named "col2.reform.txt", "col3.reform.txt" and "col4.reform.txt" respectively. For example, the output "col2.reform.txt" file will look like this:

col1 col2
1 3
2 4
3 1
5 3

I tried my code like this:

awk '{for (i=1; i <=NF; i++) print $1"\t"$i > ("{awk 'NR==1' $i}"".reform.txt")}' inputfile

And apparently the "{awk 'NR==1' $i}" part does not work, and I got a file named {awk 'NR==1' $i}.reform.txt. How can I get the file name correctly? Thanks!

PS: how can I delete the file "{awk 'NR==1' $i}.reform.txt" in the terminal?

Edited: The above column name is just an example. I would prefer commands that extract the header of the column name, as my file in reality uses different words as the header.

Here's a similar one...

$ awk 'NR==1 {n=split($0,h)} {for(i=2;i<=n;i++) print $1,$i > (h[i]".reform.txt")}' file

==> col2.reform.txt <==
col1 col2
1 3
2 4
3 1
5 3

==> col3.reform.txt <==
col1 col3
1 4
2 6
3 5
5 7

==> col4.reform.txt <==
col1 col4
1 A
2 B
3 D
5 F

Thanks for the concise one! Just a question, as I'm a rookie: what is {n=split($0,h)} for? Also, is h[i] a short form for heading[i]?

It splits the first row into array elements, shorthand for for(i=1;i<=NF;i++) h[i]=$i, but more flexible. It returns NF and you can use a different delimiter(s) than the FS one. h is just the array name; you can rename it to header or colName etc.

That will fail with a "too many open files" error once you get past a dozen or so output files unless you're using GNU awk, and then it'll just start to slow down.

Based on your shown samples, could you please try the following. Written with shown samples in GNU awk.
awk '
FNR==1{
  for(i=1;i<=NF;i++){
    heading[i]=$i
  }
  next
}
{
  for(i=2;i<=NF;i++){
    close(outFile)
    outFile="col"i".reform.txt"
    if(!indVal[i]++){
      print heading[1],heading[i] > (outFile)
    }
    print $1,$i >> (outFile)
  }
}
' Input_file

Output files will be created with names e.g. col2.reform.txt, col3.reform.txt, col4.reform.txt and so on. A sample of col2.reform.txt content will be as follows:

cat col2.reform.txt
col1 col2
1 3
2 4
3 1
5 3

Explanation: Adding detailed explanation for the above.

awk '                              ##Starting awk program from here.
FNR==1{                            ##Checking condition if this is first line then do following.
  for(i=1;i<=NF;i++){              ##Traversing through all fields of current line.
    heading[i]=$i                  ##Creating heading array with index of i and value of current field.
  }
  next                             ##next will skip all further statements from here.
}
{
  for(i=2;i<=NF;i++){              ##Traversing from 2nd field to till last field of all rest of lines.
    close(outFile)                 ##Closing outFile to avoid too many opened files error.
    outFile="col"i".reform.txt"    ##Creating outFile which has output file name in it.
    if(!indVal[i]++){              ##Checking condition if i is NOT present in indVal then do following.
      print heading[1],heading[i] > (outFile)  ##Printing 1st element of heading and current element of heading into outFile.
    }
    print $1,$i >> (outFile)       ##Printing 1st field and current field values to output file here.
  }
}
' Input_file                       ##Mentioning Input_file name here.
$ awk '
NR==1 { split($0,hdrs) }
{
    for (i=2; i<=NF; i++) {
        out = hdrs[i]".reform.txt"
        if (FNR==1) {
            printf "" " > " out     # to erase existing file contents if any
        }
        print $1, $i " >> " out
        close(out)
    }
}
' file
 > col2.reform.txtcol1 col2 >> col2.reform.txt
 > col3.reform.txtcol1 col3 >> col3.reform.txt
 > col4.reform.txtcol1 col4 >> col4.reform.txt
1 3 >> col2.reform.txt
1 4 >> col3.reform.txt
1 A >> col4.reform.txt
2 4 >> col2.reform.txt
2 6 >> col3.reform.txt
2 B >> col4.reform.txt
3 1 >> col2.reform.txt
3 5 >> col3.reform.txt
3 D >> col4.reform.txt
5 3 >> col2.reform.txt
5 7 >> col3.reform.txt
5 F >> col4.reform.txt

Just change " > " to > and " >> " to >> when you're done testing and want to actually generate the output files.
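To see the header-driven split end-to-end, here is a self-contained run on a throwaway two-row input (file names come from the header row; only portable awk features plus close() are used, nothing GNU-specific):

```shell
# Build a tiny input, run the split, and show one of the resulting files.
printf 'col1 col2 col3\n1 3 4\n2 4 6\n' > input.txt

awk 'NR==1 { n = split($0, h); next }
     {
       for (i = 2; i <= n; i++) {
         f = h[i] ".reform.txt"
         if (FNR == 2) print h[1], h[i] > f   # first data row: start the file
         print $1, $i >> f
         close(f)                             # avoid "too many open files"
       }
     }' input.txt

cat col2.reform.txt
```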
Error in geom_bar plot: Must request at least one colour from a hue palette Why am I getting this error with the following code while working through Wickham & Grolemund, 2016 (R for Data Science): library(tidyverse) ggplot(data = diamonds) + geom_bar(mapping = aes(x = cut, color = clarity, fill = NA), position = "identity") The error reads: Must request at least one colour from a hue palette The code gives the expected output when it is run as follows: library(tidyverse) ggplot(data = diamonds, mapping = aes(x = cut, color = clarity) ) + geom_bar(fill = NA, position = "identity") Where am I getting it wrong? I'm not sure whether this is expected behaviour here or not. If you want the bars to be unfilled, you should put the argument fill = NA outside the aes call. For example, the following works fine: ggplot(data = diamonds) + geom_bar(mapping = aes(x = cut, color = clarity), fill = NA, position = "identity") Things are more complicated when you put the NA inside the aes call, because if you use a single value (like NA) as an aesthetic mapping, ggplot internally creates a whole column of NA values in its internal data frame and maps the aesthetic to that. That doesn't matter too much if you have a continuous color scale; in fact, your code works fine if you specify that you want a 'numeric-flavoured' NA: ggplot(data = diamonds) + geom_bar(mapping = aes(x = cut, color = clarity, fill = NA_real_),position = "identity") Note though that rather than the fill = NA_real_ being interpreted as 'no fill', it is interpreted as 'fill with the default NA colour`, which is gray. If you have a discrete type of NA such as NA_character_ or just plain old NA (which is of class 'logical'), you get the error because when ggplot tries to set up the fill scale, it counts the number of unique non-NA values to fetch from its discrete color palette. It is the palette function that complains and throws an error when you ask it for 0 colours. 
In fact, the simplest code that replicates your error is: scales::hue_pal()(0) #> Error: Must request at least one colour from a hue palette. The behaviour of ggplot has changed since 2016 when the tutorial you are following was written, so you would need to look back in the change log to see whether this was an intentional change, but the bottom line is that fill = NA should be used outside aes if you want the bars to be unfilled.
common-pile/stackexchange_filtered
Can I download Ubuntu with my Mac and transfer it via USB? I have a Dell OptiPlex 755 and I have fitted a new hard drive and power supply. There is no operating system, so I am trying to install Ubuntu. I need to download it on a Mac and then transfer it. Will this work? Yes, it will work. How much RAM do you have? Not sure what RAM is... I'm so lost with this... I have been to the computer store twice, and on an online chat with tech support, and can't get this to work... how do I know how much RAM I have? You said you had fitted a new hard drive, but you don't know what RAM is? Don't worry about it. You need to download the file here, and then follow these instructions: The easiest way to burn an ISO, the file you need to install Ubuntu from a DVD, is by using Disk Utility. Launch 'Disk Utility' (Applications → Utilities → Disk Utility). Insert your blank DVD. Drag and drop your .iso file to the left pane in Disk Utility. Now both the blank disc and the .iso should be listed. Select the .iso file, and click on the 'Burn' button in the toolbar. Ensure that the 'Verify burned data' checkbox is ticked (you may need to click on the disclosure triangle to see the checkbox). Click 'Burn'. The data will be burned and verified. Taken from ubuntu.com This didn't work... it's saying the CD/DVD is not compatible... I don't think it's downloading the ISO correctly. I think you need to get a new DVD.
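Since the question title asks about USB rather than DVD: a hedged sketch of writing the downloaded ISO straight to a USB stick with dd from the Mac's Terminal. The device name /dev/rdiskN and the ISO filename are assumptions; check `diskutil list` first, because writing to the wrong device destroys its contents.

```shell
# Hypothetical sketch: copy an ISO image byte-for-byte to a target.
# On a Mac the target would be the USB stick's raw device, e.g. /dev/rdiskN,
# unmounted first with `diskutil unmountDisk /dev/diskN`.
write_iso() {
  iso="$1"
  target="$2"
  # bs=1048576 (1 MiB) is spelled out numerically so it works with both
  # BSD (macOS) and GNU dd; sync flushes buffers before removing the stick.
  dd if="$iso" of="$target" bs=1048576 && sync
}

# Example usage (DANGEROUS if N is wrong -- verify with `diskutil list`):
# write_iso ubuntu.iso /dev/rdiskN
```

The same approach works for any bootable ISO; it sidesteps the "CD/DVD not compatible" burn failure mentioned above entirely.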
Create new variables with a for loop Hi, I have a list of values named: $value1 $value2 $value3 ... and I'd like to assign each value to an array element; something like: $my_array[1]=$value1; $my_array[2]=$value2; $my_array[3]=$value3; How can I do this using a for loop? The array is not a problem but I can't figure out how to write some code for the value, it should be something like: for($i=1; $i<=10000; $i++) { $my_array[$i]=$value$i; } Try this: for ($i=1; $i<=10000; $i++) { $val_name = "value" . $i; $my_array[$i]=$$val_name; } If you're wondering, it's called Variable Variables Thanks everyone! It works, but I should use just one $ instead of two, so $my_array[$i]=$val_name; for ($i = 1; isset(${"value$i"}); $i++) { $my_array[$i] = ${"value$i"}; } This syntax is known as variable variables. You can use the $$ syntax: for($i = 1; $i <= 10000; $i++) { $name = 'value' . $i; $my_array[$i] = $$name; } You are almost there: for($i=1; $i<=10000; $i++) { $my_array[$i] = $value; } Or this, if you want to append the counter as well: for($i=1; $i<=10000; $i++) { $my_array[$i] = $value . $i; } What you are looking for are the {}. $my_array[$i]=${'value'.$i};
What is the range of meaning of ganab (steal) as used in Exodus 20:15 and projected into a modern context? It is fairly common to see Exodus 20:15 ("Thou shalt not steal") cited as Biblical support for the idea that copying creative works without permission is theft, or, in other words, the claim that actions that are considered copyright infringement would be sinful even if they were not illegal. An example of this view can be found in James Russell Lowell's poem "International Copyright": In vain we call old notions fudge, And bend our conscience to our dealing; The Ten Commandments will not budge, And stealing will continue stealing. Is this Biblically accurate? Does Exodus 20:15, properly interpreted, apply to what is today called "intellectual property," or only to tangible property? Because the Bible does not specifically address copyright or any similar concept, I would think that this would be determined by whether or not "theft" was commonly understood to include copying of creative works at the time the Ten Commandments were given. I'm afraid this is beyond the scope of this site. Let me see whether I can rescue this question by rephrasing it, Dan. @DanFefferman how is it off-topic? I'm asking if a common interpretation of a Bible verse is accurate; isn't that hermeneutics? If the last paragraph is the issue, then I could just remove that; same with the Lowell quote. @someone, I've restated the question to make it primarily a question of interpretation (hermeneutics) and I'm working on my answer to you. I like your quoted poem. It also reminds me of "The Gods of the Copybook Headings" by Kipling. I retracted my "close" vote. Not sure if this was because of Dieter's edits. 
The pertinent Hebrew verb in Ex 20:15 is גָּנַב (ganab) which has a range of meanings that include (eg, from the NASB translation): actually stolen (1), brought to me stealthily (1), carries away (1), deceive (1), deceived (1), deceiving (1), fact kidnapped (1), kidnapping (1), kidnaps (1), steal (9), steal away (1), stealing (1), steals (3), steals him away (1), stealth (1), stole (3), stole away (1), stolen (8), stolen you away (1). The word occurs 40 times in the OT, eg, Gen 30:33, 31:19, 20, 26, 27, 30, 32, 39, 40:15, 44:8, Ex 20:15, 21:16, 22:1, 22:7, 12, Lev 19:11, Deut 24:7, Josh 7:11, 2 Sam 15:6, 19:3, etc. Note that, at its heart, the act of stealing something is an act of deception: the thief takes something that does not belong to him and then embarks on an act of extended deception by pretending that the "something" belongs to the thief and not the true owner. That is, stealing not only involves theft (8th commandment) but also deception in contravention of the 9th commandment as well. I see no reason why stealing intellectual property should not contravene this commandment just as stealing any physical object would, whether it is legal or otherwise. After all, stealing intellectual property is also a deception because it represents oneself other than in the true light. Good answer, +1. And with that, congratulations on breaking 100k reputation! Quite the accomplishment. @HoldToTheRod - Ha Ha - many thanks From Strong's Hebrew Concordance: ganab: to steal Original Word: גָּנַב Part of Speech: Verb Transliteration: ganab Phonetic Spelling: (gaw-nab') Definition: to steal NASB Translation actually stolen (1), brought to me stealthily (1), carries away (1), deceive (1), deceived (1), deceiving (1), fact kidnapped (1), kidnapping (1), kidnaps (1), steal (9), steal away (1), stealing (1), steals (3), steals him away (1), stealth (1), stole (3), stole away (1), stolen (8), stolen you away (1). Gen. 30:33; Gen. 31:19; Gen. 31:20; Gen. 31:26; Gen. 31:27; Gen. 31:30; Gen. 
31:32; Gen. 31:39; Gen. 40:15; Gen. 44:8; Exod. 20:15; Exod. 21:16; Exod. 22:1; Exod. 22:7; Exod. 22:12; Lev. 19:11; Deut. 5:19; Deut. 24:7; Jos. 7:11; 2 Sam. 15:6; 2 Sam. 19:3; 2 Sam. 19:41; 2 Sam. 21:12; 2 Ki. 11:2; 2 Chr. 22:11; Job 4:12; Job 21:18; Job 27:20; Prov. 6:30; Prov. 9:17; Prov. 30:9; Jer. 7:9; Jer. 23:30; Hos. 4:2; Obad. 1:5; Zech. 5:3 Exodus 20:15 NKJV 15 “You shall not steal.” Steven Cole - This command acknowledges the right to own private property. It forbids all theft, robbery, extortion, embezzlement, and taking bribes. It prohibits cheating on your income taxes, as well as welfare and Medicare fraud. You violate this command if you steal intellectual property through plagiarism or copyright violations. It’s wrong to steal office supplies or equipment, or to steal time from your employer. It’s sin to incur debt that you know you are unable to pay back. While sometimes bankruptcy is unavoidable, Christians should do their best to pay creditors what is owed. (See my message [4/6/08], “To Cure a Thief.”) (Obeying The Big Ten Exodus 20:1-17) Steal (01589)(ganab) means to carry away, to take that which belongs to another, and generally signifies taking something that belongs to another secretly, without consent. Thus to steal is a nuance distinguished from the concept "to rob" in the sense that stealing is done in secret. There are other Hebrew verbs for the violent aspect of theft. According to Capitol Ministries: “You shall not steal.” Exodus 20:15 presumes that people own something that can be stolen. For instance, I cannot take my neighbor’s donkey because it belongs to my neighbor. A modern example would be intellectual property: you cannot search through my email files and give them to whomever you choose unbeknownst to me; to do so is to steal another’s property. According to Theology of Work: Stealing occurs in many forms besides robbing someone. 
Any time we acquire something of value from its rightful owner without consent, we are engaging in theft. Likewise, profiting by taking advantage of people’s fears, vulnerabilities, powerlessness, or desperation is a form of stealing because their consent is not truly voluntary. Violating patents, copyrights, and other intellectual property laws is stealing because it deprives owners of the ability to profit from their creation under the terms of civil law. I'll conclude with my thoughts: I think the command, You shall not steal, includes all and any type of theft. While man may not have known yet of intellectual property theft and copyrights, God certainly did. Was copying another's work a sin? Well, it was if it in some way stole from them! That's why I like the definition: Any time we acquire something of value from its rightful owner without consent, we are engaging in theft. In this definition, intellectual property would be included. It most certainly is a sin now according to Romans 13:1-2. Would what is considered copyright infringement be covered if there were no copyright laws? For example, for the millennia between the invention of writing and the development of copyright, was copying others' writings without claiming credit a sin? The idea that people own ideas that they develop and should be able to control their use, other than expecting others to give them credit, seems to be a modern concept. I am not aware of any country having a law similar to modern copyright prior to Britain's passage of the Statute of Anne in 1710. Copyright and patent law is intended to protect the work of creative people. Apparently, forgeries (writing in another person's name) occurred in New Testament times, which is why Paul had to sign his letters with a distinctive signature. One could also argue that altering scriptures is creating a "derivative work," using today's legal language, which was condemned by Jesus in Revelation. 
The bottom line is that those who claim to follow scriptures should abide by the laws of the land (Jesus paid taxes to Rome) except where they specifically conflict with scriptures and our consciences. @Someone It's a good question, but one that is difficult to answer. I think the command, You shall not steal, includes all and any type of theft. While man may not have known yet of intellectual property theft and copyrights, God certainly did. Was copying another's work a sin? Well, it was if it in some way stole from them! Which is why I like the definition: "Any time we acquire something of value from its rightful owner without consent, we are engaging in theft." It most certainly is a sin now according to Romans 13:1-2. The ancient world did not have a legal concept of intellectual property, so despite containing over 600 commands, the Torah unsurprisingly does not include an explicit prohibition on taking thy neighbor's IP. However, the verb translated "steal", גָּנַב ("ganab") is a fairly broad concept. Strong's Concordance lists the following English equivalents: carry away, indeed, secretly bring, steal away, get by stealth A primitive root; to thieve (literally or figuratively); by implication, to deceive -- carry away, X indeed, secretly bring, steal (away), get by stealth. When Paul recites a series of commandments to the Romans (see 13:9), including the prohibition on stealing, he uses the verb κλέπτω ("klepto"). Both the Hebrew and the Greek verbs used in "thou shalt not steal" regularly convey a dimension of stealth - the idea being that one is taking something without the owner's knowledge or permission. The manner of the taking is not particularly relevant. The focus is that the owner is being deprived of their right to decide how their possession is used. To properly interpret this commandment, it's important to look at it in context with all the other places in scripture where it's used. 
A long list of references can be reviewed here: https://biblehub.com/hebrew/1589.htm From that list, we can see that the Hebrew word, ganab, as steal is applied in scriptures to • Objects (silver, idols, etc.) • Animals • Humans (kidnapping/man-stealing) • Stealth/stealthy behavior/deceit (2 Samuel 19:3, Genesis 31:20) • Hearts/affections (2 Samuel 15:6) Regarding interpretation, I'm also reminded of the highly rated Polish miniseries The Decalogue, in which each of the 10 Commandments is explored in a modern setting (https://www.imdb.com/title/tt0092337/). In a modern setting, many additional applications of stealing could be considered: • Time from an employer • Ideas from a colleague • Plagiarism • Personal information (selling to cyber-criminals) • Identity as in impersonation • Cheating on taxes • Intellectual property such as artwork, program code, etc. I suppose that even things such as creating and selling tools for theft such as rootkits and renting malware would fall under the general category. But a strict interpretation of this commandment can be problematic. What if someone's work inspires you to create something similar? Musicians often run into this problem. Deuteronomy 25:4 states, "You shall not muzzle an ox when it is treading out the grain." Then, consider 1 Corinthians 9:8-10 ESV, where Paul writes For it is written in the Law of Moses, “You shall not muzzle an ox when it treads out the grain.” Is it for oxen that God is concerned? Does he not certainly speak for our sake? It was written for our sake, because the plowman should plow in hope and the thresher thresh in hope of sharing in the crop. Regarding James Russell Lowell's poem that you quoted, "fudging" is simply rationalizing a behavior. For example, "I'm not stealing something, I'm just borrowing it for a while as long as I 'intend' to return it later." So all this points to hermeneutics as determining the original intent of a commandment rather than choosing a legalistic rationalization.
Installation of scoop times out I want to install scoop on a laptop on which I do not have administrator rights. I use the following commands in PowerShell: PS> Set-ExecutionPolicy RemoteSigned -scope CurrentUser PS> iex (new-object net.webclient).downloadstring('https://get.scoop.sh') Exception calling "DownloadString" with "1" argument(s): "The operation has timed out" At line:1 char:1 + iex (new-object net.webclient).downloadstring('https://get.scoop.sh') + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [], MethodInvocationException + FullyQualifiedErrorId : WebException I believe this is due to it using IPv6 by default instead of IPv4: PS> ping get.scoop.sh Pinging d19x8hxthvia8.cloudfront.net [2600:9000:2002:7200:1f:b80:d400:93a1] with 32 bytes of data: General failure. General failure. PS> ping -4 get.scoop.sh Pinging d19x8hxthvia8.cloudfront.net [<IP_ADDRESS>] with 32 bytes of data: Reply from <IP_ADDRESS>: bytes=32 time=27ms TTL=246 Reply from <IP_ADDRESS>: bytes=32 time=25ms TTL=246 How can I force the iex command to use IPv4 instead of IPv6? Note that generally the OS will only prefer IPv6 if it sees that IPv6 is available to begin with. It's possible that your router has partial configuration, e.g. advertising an address prefix but not a default route. Normally IPv6 should be working. I now noticed I can't ping Google either as it defaults to IPv6. I'll investigate whether this is due to a laptop misconfiguration or a router misconfiguration. I found the following working solution: PS> Set-ExecutionPolicy RemoteSigned -scope CurrentUser PS> $wc = new-object net.webclient PS> $wc.headers.add('host', 'get.scoop.sh') PS> iex $wc.downloadstring('https://<IP_ADDRESS>') Initializing... Downloading... Extracting... Creating shim... Adding ~\scoop\shims to your path. Scoop was installed successfully! Type 'scoop help' for instructions.
How to use Microdata's 'itemref' to reference similar products that are listed outside of the Product scope? I am trying to use Microdata to define my website using the Schema.org definitions. On an ItemPage I am displaying the information about a product. Additionally, I want to link similar products to the current product while the related products are being displayed outside of the current product's scope. I tried to achieve this using the itemref attribute. However, when I review the page in the Structured Data Testing Tool, it does not show that the related products are part of the ItemPage node. <body itemscope itemtype="http://schema.org/ItemPage"> <header itemprop="hasPart" itemscope itemtype="http://schema.org/WPHeader"> </header> <article itemprop="mainEntity" itemscope itemtype="http://schema.org/Product" id="details_10"> <h2 itemprop="name">Product 10</h2> </article> <aside> <article itemref="details_10" itemscope itemtype="http://schema.org/Product"> <h3 itemprop="name">Product 20</h3> </article> <article itemref="details_10" itemscope itemtype="http://schema.org/Product"> <h3 itemprop="name">Product 30</h3> </article> </aside> <footer itemprop="hasPart" itemscope itemtype="http://schema.org/WPFooter"> </footer> </body> It has to be used the other way around: the itemref attribute needs to be on the element that represents the item you want to add properties to. This would be the primary product in your case. The elements you want to add need the itemprop attribute (which you are missing, along with the isSimilarTo property), and an id that gets referenced by the itemref attribute. These would be the similar products in your case. 
So, this would give: <body itemscope itemtype="http://schema.org/ItemPage"> <article itemprop="mainEntity" itemscope itemtype="http://schema.org/Product" itemref="details_20 details_30"></article> <aside> <article id="details_20" itemprop="isSimilarTo" itemscope itemtype="http://schema.org/Product"></article> <article id="details_30" itemprop="isSimilarTo" itemscope itemtype="http://schema.org/Product"></article> </aside> </body> The problem with this markup is that the two isSimilarTo properties get added to the ItemPage, too (because they are nested under it), which would be incorrect. To avoid this, the best solution would be not to specify the ItemPage on the body element, but on a div or similar. <body> <div itemscope itemtype="http://schema.org/ItemPage"> <article itemprop="mainEntity" itemscope itemtype="http://schema.org/Product" itemref="details_20 details_30"></article> </div> <aside> <article id="details_20" itemprop="isSimilarTo" itemscope itemtype="http://schema.org/Product"></article> <article id="details_30" itemprop="isSimilarTo" itemscope itemtype="http://schema.org/Product"></article> </aside> </body> (You could also use itemref to avoid having to nest the primary Product under the ItemPage. This would also allow you to specify the ItemPage on the head element, for example.) The problem is that my ItemPage has a header and footer, which is why the ItemPage is used on the body. The header and footer are linked using hasPart. I have a header on the top, main product details, a similar products list and the footer at the end. Not sure how to get around that. @Junior: You could use itemref to add all these items to the ItemPage (header, footer, main product: itemref="header details_10 footer"). -- There is also an ugly hack to prevent the nested properties for the similar products from being added to the ItemPage, but I wouldn’t recommend it.
How to extract parameter from icalendar event in Ruby? I'm fairly new to Ruby and am working on a small project. I'd like to print out some details from an iCalendar (*.ics file) event. To handle the .ics file I use the icalendar gem. So far, I've managed to extract the correct event but I also need to print out the attendee. My problem: the ATTENDEE field uses several parameters. Here is an example: BEGIN:VEVENT DTSTAMP:20150527T074021Z DTSTART;VALUE=DATE:20150525 DTEND;VALUE=DATE:20150530 SUMMARY:OnCall Duty UID:1234 DESCRIPTION: CREATED:20150512T063102Z LAST-MODIFIED:20150512T063102Z ATTENDEE;X-MYUSER-KEY=dfdfdf;CN=Jon <EMAIL_ADDRESS>SEQUENCE:1 X-CONFLUENCE-SUBCALENDAR-TYPE:other TRANSP:TRANSPARENT STATUS:CONFIRMED END:VEVENT Currently my code looks like this: require 'icalendar' require 'active_support/all' require 'open-uri' ics_file = File.open("ops.ics") cal = Icalendar.parse(ics_file).first events = cal.events now = DateTime.now currentOnDuty = events.select{ |e| e.summary == "OnCall Duty" && e.dtstart.to_time >= now.beginning_of_week && e.dtend.to_time <= now.end_of_week } puts "User: #{currentOnDuty.first.attendee}" which creates the following output: [#<URI::MailTo<EMAIL_ADDRESS> What I rather need is the "CN" parameter for ATTENDEE, so I would like to get the output: User: Jon Doe The docs have only a few examples for creating an event with parameters, but so far I could not figure out how to extract a certain parameter. Any hints on how to extract the "CN" parameter from the ATTENDEE field? Solution: currentOnDuty.first.attendee.ical_params[:cn].first What do you get in the currentOnDuty object? currentOnDuty is an array of Icalendar::Event objects. Since I only need one event I use currentOnDuty.first. Would you mind trying puts currentOnDuty.first.attendee.inspect and sharing the output, please? Plus, try puts currentOnDuty.first.attendee.cn. 
attendee.inspect prints out [#<URI::MailTo<EMAIL_ADDRESS>and attendee.cn gives me an undefined method 'cn' Two more suggestions: print out ...attendee.class and try ...attendee['cn'], ...attendee['CN'] and ...attendee[:cn]. attendee.class gives me Array. The other variants print out '[]': no implicit conversion of String into Integer (TypeError) Icalendar wraps ical with values, so that: ▶ cal.first.events.first.attendee.first.class #⇒ Icalendar::Values::CalAddress < Icalendar::Values::Uri This class has the ical_params attribute exposed: ▶ cal.first.events.first.attendee.first.ical_params[:cn] #⇒ [ [0] "John Doe" ] So (you’d probably like to check for the presence of this attribute): ▶ cal.first.events.first.attendee.first.ical_params[:cn].first #⇒ "John Doe" Hope it helps. That was the final hint! Thanks a lot.
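For context, ical_params is simply exposing the raw iCalendar property-parameter syntax, NAME;PARAM1=V1;PARAM2=V2:VALUE. A gem-free sketch (the helper name and the sample mailto address are made up) of how CN could be pulled out of a raw ATTENDEE line by hand:

```ruby
# Each ATTENDEE line looks like "ATTENDEE;PARAM1=V1;PARAM2=V2:mailto:...".
# Split off the value at the first ':', then split the parameters on ';'.
def attendee_cn(ics_line)
  params_part, _value = ics_line.split(':', 2)   # "ATTENDEE;...;CN=..." / "mailto:..."
  _prop_name, *params = params_part.split(';')   # drop the "ATTENDEE" property name
  params.map { |p| p.split('=', 2) }.to_h['CN']  # {"X-MYUSER-KEY"=>"...", "CN"=>"..."}
end

line = 'ATTENDEE;X-MYUSER-KEY=dfdfdf;CN=Jon Doe:mailto:jon.doe@example.com'
puts attendee_cn(line)  # prints: Jon Doe
```

Real files complicate this (quoted parameter values, folded lines), which is why going through the gem's ical_params, as in the accepted answer, is the safer route.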
Display Outline element by default I would like to see the Outline of a file by default when I open VSCode. Now every time I open VSCode I need to go to View -> Open View... and then type Outline. Is it possible to have Outline displayed by default when I open the program? Have you tried the ... menu in the Explorer bar? Cannot reproduce. VS Code should remember for each workspace the state of the outline view being displayed. Please copy and paste the output of the Help: About command (run in the command palette) into your question post. Potentially related: https://stackoverflow.com/q/77522118/11107541
Why did Dumbledore appoint Firenze to teach Divination? I assume Dumbledore allowed the subject of Divination to continue at Hogwarts in order to protect Professor Trelawney. If that is the case, why would Dumbledore bother to find another Divination teacher after Trelawney was dismissed by Umbridge? Why didn't he just end the class for that year? I think that giving up Divination would be admitting failure. By choosing the replacement teacher, Dumbledore made a point that he is the one to decide how to teach, not Umbridge. Also he annoyed her a great deal by appointing a non-human. Well, the subject existed before Trelawney, and presumably he was going to take her back after Umbridge was gone, so why mess up the students? Lavender Brown IIRC enjoyed the subject; perhaps others did too that we don't see. Firstly, if Professor Dumbledore didn't appoint a new Divination teacher, then Umbridge would have the Ministry appoint someone else as a teacher. Dumbledore suspects that they would get a worse teacher that way. The evidence for this is in Harry Potter and the Order of the Phoenix chapter 26, right after Professor Trelawney's firing is announced. ‘And what,’ she said in a whisper that carried all around the Entrance Hall, ‘are you going to do with her once I appoint a new Divination teacher who needs her lodgings?’ ‘Oh, that won't be a problem,’ said Dumbledore pleasantly. ‘You see, I have already found us a new Divination teacher, and he will prefer lodgings on the ground floor.’ ‘You've found –?’ said Umbridge shrilly. ‘You've found? Might I remind you, Dumbledore, that under Educational Decree Number Twenty-two –’ ‘The Ministry has the right to appoint a suitable candidate if – and only if – the Headmaster is unable to find one,’ said Dumbledore. ‘And I am happy to say that on this occasion I have succeeded. May I introduce you?’ Even apart from that, it may be important that Hogwarts continues education in Divination. The number of O.W.L. or N.E.W.T. exams students get may matter for them getting better jobs. It is also likely that students from third year to fifth year inclusive must take at least two elective classes out of the five offered (arithmancy, divination, muggle studies, care of magical creatures, and ancient runes). It would be inconvenient for most students to change to other subjects at this point. ISTR that Firenze was also a bit of an outcast among the centaurs for having helped the humans; for Dumbledore to hire him was a way of helping Firenze as well. @gowenfawr No, I believe he has become an outcast because he has chosen to teach humans, so that was after. From wikia: "In 1992, Firenze saved Harry Potter in the forest from Lord Voldemort, frightening Voldemort away and carrying Harry on his back to safety. Despite his heroics, his herd saw this as a dishonourable act, as they considered themselves too great to be ridden by humans. About four years later, in March 1996, Albus Dumbledore, Headmaster of Hogwarts, hired him to teach Divination in Sybill Trelawney's stead, after she was sacked by Dolores Umbridge." Yes, it got worse after he became a teacher... but he was on the outs before that. @gowenfawr Yes, the other centaurs already didn't like him, but teaching humans really tipped the scales; he was exiled from his tribe at that point. and I'm just saying that giving a job to a guy who's on the outs with his Cenbros is exactly the kind of thing Dumbledore would do :) @RDFozz argh yes. Thanks for pointing it out. I'm not quite sure about your last paragraph there; Dumbledore was perfectly willing to part with the subject of Divination before. At the end of book 5, Dumbledore says that prior to Trelawney's interview, "it was against [his] inclination to allow the subject of Divination to continue at all." Admittedly, though, in this case it was in the middle of the school year, which would have made things a little more inconvenient. 
Firenze fills the position and allows Trelawney to remain at Hogwarts in her current room. As b_jonas pointed out: Umbridge would have appointed a new Divination teacher. That teacher would undoubtedly be worse than Firenze and also be another Ministry spy. Aside from this, I believe the real reason Dumbledore appointed Firenze was that he was preventing Umbridge from removing Trelawney from Hogwarts entirely. ‘And what,’ she said in a whisper that carried all around the Entrance Hall, ‘are you going to do with her once I appoint a new Divination teacher who needs her lodgings?’ Firenze needs ground-floor accommodations, and so Trelawney's room is still available to her. Dumbledore wished her to remain at Hogwarts for her safety. I cannot ask Firenze to return to the Forest, where he is now an outcast, nor can I ask Sybill Trelawney to leave. Between ourselves, she has no idea of the danger she would be in outside the castle. She does not know — and I think it would be unwise to enlighten her — that she made the prophecy about you and Voldemort, you see. This may have played some part, but I doubt it was the most important motivation. The Hogwarts castle is large. Dumbledore would have found a way to lodge two teachers somehow anyway.
Converting from select to check list? I have a select list like this: <select name="taxonomy[1][]" multiple="multiple" class="form-select" id="edit-taxonomy-1" size="7"> <option value="13">Glaciers</option> <option value="14">Lake Ice</option> <option value="17">Permafrost</option> <option value="16">River Ice</option> <option value="15">Sea Ice</option> <option value="12">Snow</option> </select> This list is being dynamically created and I'm not sure how to change the output display. I have seen tutorials like this: http://www.netvivs.com/convert-regular-select-into-a-drop-down-checkbox-list-using-jquery/ which convert a regular select into a dropdown check list. Is there a way, using PHP, JavaScript, or jQuery, to convert the select to display as a checklist? Preferably in table form so I get 2 rows with 3 columns? Change the HTML with that to checkboxes? Not change the HTML; change the display. Use the link you reference in your question. Are you looking for someone to write the code? @cdburgess no, I'm looking for someone to start me off or point me to a library that does something like that You could do it in a custom way like so: $('#edit-taxonomy-1').parent().append('<table id="checkboxes"><tbody><tr></tr></tbody></table>'); $('#edit-taxonomy-1 option').each(function() { var label = $(this).html(); var value = $(this).attr('value'); $('#checkboxes tbody tr').append('<td><input type="checkbox" value="'+value+'">'+label+'</td>'); }); Or just use the plugin: http://code.google.com/p/dropdown-check-list/ This didn't work the first time, but I tried it again and it worked. Cool! Looks like the example link does it using jQuery... Just make sure you have a reference to the jquery.js library, as well as to the library that site has offered as a download. Then just call $("#edit-taxonomy-1").dropdownchecklist(); I haven't tried it, but that's what the tutorial says to do... Good luck. 
PS - I had a hard time trying to find the download of the actual library from that site, so here it is: http://code.google.com/p/dropdown-check-list/ I'd like the checkboxes without the dropdown, though. Ahh, I see. Can you not set an attribute / use JavaScript to keep the drop down open? I'm not very good with JavaScript and just don't really know how to do that.
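Since none of the answers address the "2 rows with 3 columns" part of the question: a framework-free sketch (the function name is made up) that builds the checkbox table as an HTML string, chunking the options into rows of a given width. It could be wired up by reading the select's options into an array and replacing the select with the returned markup.

```javascript
// Turn [{value, label}, ...] into a checkbox table with `perRow` columns.
function optionsToCheckboxTable(options, name, perRow) {
  const rows = [];
  for (let i = 0; i < options.length; i += perRow) {
    // Each slice of `perRow` options becomes one table row of checkboxes.
    const cells = options
      .slice(i, i + perRow)
      .map(o => `<td><label><input type="checkbox" name="${name}" value="${o.value}"> ${o.label}</label></td>`)
      .join('');
    rows.push(`<tr>${cells}</tr>`);
  }
  return `<table><tbody>${rows.join('')}</tbody></table>`;
}
```

With the six taxonomy options above and perRow set to 3, this yields exactly the two-row, three-column layout the question asks for.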
Lock-Free Data Structures in C++ Compare and Swap Routine In this paper: Lock-Free Data Structures (pdf) the following "Compare and Swap" primitive is shown: template <class T> bool CAS(T* addr, T exp, T val) { if (*addr == exp) { *addr = val; return true; } return false; } And then it says The entire procedure is atomic But how is that so? Is it not possible that some other actor could change the value at addr between the if and the assignment? In which case, assuming all code is using this CAS primitive, it would be found the next time something "expected" it to be a certain way, and it wasn't. However, that doesn't change the fact that it could happen, in which case, is it still atomic? What about the other actor returning true, even when its changes were overwritten by this actor? If that can't possibly happen, then why? I want to believe the author, so what am I missing here? I am thinking it must be obvious. My apologies in advance if this seems trivial. He is describing an atomic operation which is given by the implementation, "somehow." That is pseudo-code for something implemented in hardware. Yes, the author is asserting that the operation is atomic, then giving you some code so you understand what it does. One implementation (that is atomic) is Microsoft's InterlockedCompareExchange() (see http://msdn.microsoft.com/en-us/library/ms683560(VS.85).aspx). If you just compile that code, it is most certainly not atomic. With GCC, you have similar atomic builtins, including CAS, described here: http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html
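Since C++11 (which postdates the paper), the standard library exposes this hardware-level operation portably through std::atomic. A small sketch wrapping it in the paper's signature; on common hardware compare_exchange_strong compiles to a single atomic instruction (e.g. LOCK CMPXCHG on x86), so no other thread can interleave between the comparison and the store:

```cpp
#include <atomic>
#include <cassert>

// CAS with the paper's interface, backed by an actual atomic instruction
// rather than the non-atomic if/assign pair in the pseudo-code.
template <class T>
bool cas(std::atomic<T>& addr, T expected, T desired) {
    // On failure, compare_exchange_strong overwrites 'expected' with the
    // value it actually observed; taking 'expected' by value here means
    // the caller never sees that side effect.
    return addr.compare_exchange_strong(expected, desired);
}
```

A losing CAS simply returns false with the target unchanged, which is exactly the property the lock-free algorithms in the paper rely on: a stale expectation can never silently clobber another actor's successful update.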
reload UITableView inside closure causes strange behaviour I am using CloudKit to fetch CKRecords to populate a UITableView. The operations Array acts as the datasource for the tableView let predicate = NSPredicate(value: true) let query = CKQuery(recordType: "Operation", predicate: predicate) let queryOperation = CKQueryOperation(query: query) operations = [] queryOperation.recordFetchedBlock = { record in self.operations.append(record) } queryOperation.queryCompletionBlock = { cursor, error in if error == nil { dispatch_sync(dispatch_get_main_queue(), {self.tableView.reloadData()}) } } let database = CKContainer.defaultContainer().privateCloudDatabase database.addOperation(queryOperation) If I simply use self.tableView.reloadData() in the closure the tableView will update, but there is a 5-10s delay unless I swipe the tableView, at which point the cells will suddenly show. However, when I dispatch on the main thread (as in the code above) the tableView cells will show, but sometimes the top cell starts flashing or it renders very strangely. I am wondering if anyone has any tips on how to implement the tableView.reloadData() inside this closure? EDIT: This behaviour only seems to appear when I call the above code from viewDidLoad. If I call the above code later, the tableView updates fine. How about dispatch_async to the main queue? Thanks for the input. Unfortunately it did not solve it. Does this happen both on simulator and on device? Good question, don't know how to test it on the simulator as it won't allow iCloud. You can run your app in the simulator with an iCloud account. Just start up your simulator, go to the home screen (menu Hardware, Home), open the Settings app, go to iCloud and log in. You do have to execute the reloadData on the main queue. Looking at your code, I don't think your problem has to do with iCloud. Thanks for that. I ended up changing my approach to include CoreData as a local cache, which also solved the rendering problems.
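Outside of iOS specifics, the rule everyone is circling here — background callbacks hand results over to the main thread, and the main thread does one UI update once everything has arrived — can be sketched roughly in Python (names are hypothetical; threads stand in for CloudKit's callback queue and the main queue):

```python
import queue
import threading

results = queue.Queue()

def fetch_records():
    # Plays the role of recordFetchedBlock: runs on a background
    # thread and hands each record to the main thread via the queue
    # instead of touching shared UI state directly.
    for record in ["op-1", "op-2", "op-3"]:
        results.put(record)
    results.put(None)  # sentinel, like queryCompletionBlock firing

threading.Thread(target=fetch_records).start()

# "Main thread": drain the queue, then do one reload-equivalent step.
operations = []
while (item := results.get()) is not None:
    operations.append(item)
print(operations)  # ['op-1', 'op-2', 'op-3']
```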
ModuleNotFoundError: No module named 'distutils.util' for python 3.9 as it is already installed for 3.8.10? I am attempting to install the package pyminizip, but i get the error stating distutils.util module is missing and when attempting the fixes I found online when updating distutils it simply states I already have the newest version. I am using WSL and would love an answer that can solve my problem. Thanks in advance. See full errors here: sudo pip3 install pyminizip Traceback (most recent call last): File "/usr/bin/pip3", line 11, in <module> load_entry_point('pip==20.0.2', 'console_scripts', 'pip3')() File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 490, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2854, in load_entry_point return ep.load() File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2445, in load return self.resolve() File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2451, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 10, in <module> from pip._internal.cli.autocompletion import autocomplete File "/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py", line 9, in <module> from pip._internal.cli.main_parser import create_main_parser File "/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py", line 7, in <module> from pip._internal.cli import cmdoptions File "/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py", line 19, in <module> from distutils.util import strtobool ModuleNotFoundError: No module named 'distutils.util' teunbergsma@DESKTOP-7IC0BEP:~/ITRL/1ITRL$ sudo apt-get install python3.9-distutils Reading package lists... Done Building dependency tree Reading state information... 
Done Note, selecting 'python3-distutils' instead of 'python3.9-distutils' python3-distutils is already the newest version (3.8.10-0ubuntu1~20.04). 0 upgraded, 0 newly installed, 0 to remove and 400 not upgraded. pip uninstall python3-distutils and then pip install --pre python3.8-distutils should solve it
Implementation of ELO algorithm for rating Hey guys I wanted to implement elo algorithm to rate the players in my lan. My php script contains the following. <?php class Rating { const KFACTOR = 16; protected $_ratingA; protected $_ratingB; protected $_scoreA; protected $_scoreB; protected $_expectedA; protected $_expectedB; protected $_newRatingA; protected $_newRatingB; public function __construct($ratingA,$ratingB,$scoreA,$scoreB) { $this->_ratingA = $ratingA; $this->_ratingB = $ratingB; $this->_scoreA = $scoreA; $this->_scoreB = $scoreB; $expectedScores = $this -> _getExpectedScores($this -> _ratingA,$this -> _ratingB); $this->_expectedA = $expectedScores['a']; $this->_expectedB = $expectedScores['b']; $newRatings = $this ->_getNewRatings($this -> _ratingA, $this -> _ratingB, $this -> _expectedA, $this -> _expectedB, $this -> _scoreA, $this -> _scoreB); $this->_newRatingA = $newRatings['a']; $this->_newRatingB = $newRatings['b']; } public function setNewSettings($ratingA,$ratingB,$scoreA,$scoreB) { $this -> _ratingA = $ratingA; $this -> _ratingB = $ratingB; $this -> _scoreA = $scoreA; $this -> _scoreB = $scoreB; $expectedScores = $this -> _getExpectedScores($this -> _ratingA,$this -> _ratingB); $this -> _expectedA = $expectedScores['a']; $this -> _expectedB = $expectedScores['b']; $newRatings = $this ->_getNewRatings($this -> _ratingA, $this -> _ratingB, $this -> _expectedA, $this -> _expectedB, $this -> _scoreA, $this -> _scoreB); $this -> _newRatingA = $newRatings['a']; $this -> _newRatingB = $newRatings['b']; } public function getNewRatings() { return array ( 'a' => $this -> _newRatingA, 'b' => $this -> _newRatingB ); } protected function _getExpectedScores($ratingA,$ratingB) { $expectedScoreA = 1 / ( 1 + ( pow( 10 , ( $ratingB - $ratingA ) / 400 ) ) ); $expectedScoreB = 1 / ( 1 + ( pow( 10 , ( $ratingA - $ratingB ) / 400 ) ) ); return array ( 'a' => $expectedScoreA, 'b' => $expectedScoreB ); } protected function 
_getNewRatings($ratingA,$ratingB,$expectedA,$expectedB,$scoreA,$scoreB) { $newRatingA = $ratingA + ( self::KFACTOR * ( $scoreA - $expectedA ) ); $newRatingB = $ratingB + ( self::KFACTOR * ( $scoreB - $expectedB ) ); return array ( 'a' => $newRatingA, 'b' => $newRatingB ); } } //first player won $first_player_rating=200; $second_player_rating=200; $n=new Rating($first_player_rating,$second_player_rating,1400,1400); $a=$n->getNewRatings(); var_dump($a); ?> I have given the first_player_rating and second_player_rating both to 200 which is wrong but the question is, how much should be the value of first_player_rating and second_player_rating if the first player has just won the match... I downvoted, because I do not see any code issues. I do not see any problems, any mistakes or anything wrong. I consider this as offtopic. IMO. Of course, I may be not right. Describe your problem better, show solution attempts. See the question properly: the rating given to both players isn't really decided, there must be a value for both player ratings; I am asking for the initial value if the first player has just won the match. the approach was wrong.... <?php class elo { private $Ra,$Rb,$Pa,$Pb; public $won,$NRa,$NRb; function __construct($Ra,$Rb,$won)//$Ra and $Rb are rating given... { $k=24; $this->Ra=$Ra; $this->Rb=$Rb; $Pa=1/(1+pow(10,($Rb-$Ra)/400)); // pow(), not ^: in PHP ^ is bitwise XOR $Pb=1/(1+pow(10,($Ra-$Rb)/400)); $this->won=$won; if($won=="first") { $this->NRa=$this->Ra+($k*$Pa); $this->NRb=$this->Rb-($k*$Pb); }else { $this->NRa=$this->Ra-($k*$Pa); $this->NRb=$this->Rb+($k*$Pb); } } function getNewRatings() { $result=array($this->NRa,$this->NRb); return $result; } } ?> <?php require_once('elo.php'); $n=new elo(1400,1400,"first"); $a=$n->getNewRatings(); var_dump($a); ?> now the given class can be used...
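For comparison, the textbook Elo update differs from the self-answered class above (which gives the winner K*Pa): the standard form gives the winner K*(1 - expected), so an upset win is worth more than a win by the favourite. With two brand-new players you can start both at any common rating — 1400 is a conventional choice — and let the update separate them. A minimal Python sketch:

```python
K = 24  # K-factor, same value as in the class above

def expected_score(rating_a, rating_b):
    # Standard Elo win probability for player A.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a, rating_b, score_a):
    # score_a: 1.0 if A won, 0.0 if A lost, 0.5 for a draw.
    ea = expected_score(rating_a, rating_b)
    eb = 1.0 - ea
    new_a = rating_a + K * (score_a - ea)
    new_b = rating_b + K * ((1.0 - score_a) - eb)
    return new_a, new_b

# Two fresh players, both started at 1400; the first one wins:
print(update(1400, 1400, 1.0))  # (1412.0, 1388.0)
```

So the answer to "what value should the ratings have after the first player's first win" falls out of the formula: equal starting ratings give an expected score of 0.5, and the winner gains K * 0.5.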
Docker without docker I am trying to execute some docker containers, without docker itself !!! Maybe you know CloudSuite Benchmarks. I am trying to run the MediaStreaming one without the Docker (I need the executables and not the containers, because I have to run them via an intel pintool that uses executables) I used the export instruction like that: docker export streaming_server > server.tar.gz then I unzipped the tar files. I am not sure, what should I do on the next step. As you can see on the link, things are getting tricky. I have to execute something like that: docker run -d --name streaming_server --volumes-from streaming_dataset --net streaming_network cloudsuite/media-streaming:server Any ideas how to do it? I did try it first on hello-wolrd but things were more easy, after the unzip, I had just an executable, now I am not sure how to do it. Thanks in advance! https://github.com/ParsaLab/cloudsuite/blob/master/benchmarks/media-streaming/server/Dockerfile, looks like it runs nginx... As far as I can see, you don't have to extract the docker images, the source is just located over here: https://github.com/ParsaLab/cloudsuite/tree/master/benchmarks/media-streaming
Emulating Value Type Polymorphism in C Sharp I'm looking for a way to create some kind of value type hierarchical class structure. I know that enums do not support inheritance since they are value types and are therefore sealed, so I'm probably looking for some kind of static class implementation. My purpose for this is to redefine roles in this ASP.NET application I'm working on. Currently the roles are string constants (ROLE_ADMIN, ROLE_USER, etc.) and are put in the session and checked throughout the application something like so: Person.getRole() == ROLE_ADMIN (which is actually a string comparison). I'd like to refactor this so that I can have type safety, as well as some kind of polymorphic behavior as described below. Consider the following role structure: user1 Group1 user2 Admin User3 If a person's role is set to Group1, but a certain element on the page is only visible to user1, then I want the statement Person.getRole() == user1 to be true. EDIT: Thinking of the above structure as a tree, I wanted to be able to both define a user in terms of a leaf or a node, and check permissions in terms of a leaf or a node. However that raises a dilemma: does the checking of permissions in terms of a node check if the user belongs to a group (group1), or IS a group (Admin)? I think the problem is that naming the root node of my tree 'Admin' is misleading, since nodes should be thought of as groups and not as roles themselves. The root node should be something like 'All', and a user defined as 'All' would inherit all the leaves of the tree and would therefore belong to all the groups. I'm going to cut my losses and drop this issue for now. For future reference, I have marked the answer I found to be most appropriate. Unfortunately there wasn't any business reason for this change, and it's starting to take too much time to complete. I'm not sure what value types have to do with this (string is a reference type).
The common approach here is to use IPrincipal.IsInRole, or something similar. Your 'hierarchical' problem would shift to the part where the roles are filled. I suppose I was looking for objects that don't change state and simply represent a 'value', so I associated that with value types. Someone else has also mentioned immutable objects, which I forgot about. I wasn't aware of the Principal and Identity structure built into .NET, and I will look into it. This solution on its own does not implement type safety, so I'd have to look into options for wrapping the principal / identity system to use enums. I know there are methods to convert strings and enums back and forth. I would also like to wrap it in such a way that I don't have to perform checks like Principal.IsInRole('user1') || Principal.IsInRole('user2') to determine if they are in group1. Usage is more like currentPrincipal.IsInRole("group1") One way to emulate value type semantics with objects would be immutable objects. If that's an unfamiliar concept then the short version is "After being constructed they do not change state", that is: no setters, no public fields (but who would do such a silly thing anyway ;p) and no methods with side effects. Eric Lippert has written a very nice series of articles on the concept of immutability: immutability part one I had forgotten about immutable objects. I will look into this further.
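Language aside, the hierarchical check the question asks for — a person holding Group1 passing a check meant for user1 — is just descendant lookup in a role tree. A minimal Python sketch (the tree shape is one possible reading of the question's indentation-mangled structure, so treat the role names as hypothetical):

```python
# Hypothetical role tree: a parent role implies every role beneath it
# (Admin contains Group1 and User3; Group1 contains user1 and user2).
ROLE_TREE = {
    "Admin": ["Group1", "User3"],
    "Group1": ["user1", "user2"],
}

def implied_roles(role):
    # The role itself plus everything reachable below it in the tree.
    roles = {role}
    for child in ROLE_TREE.get(role, []):
        roles |= implied_roles(child)
    return roles

def is_in_role(held_role, required_role):
    return required_role in implied_roles(held_role)

print(is_in_role("Group1", "user1"))  # True: Group1 implies user1
print(is_in_role("user2", "user1"))   # False: siblings don't imply each other
```

The same shape also answers the "belongs to vs IS a group" dilemma from the EDIT: membership queries walk downward from the held role, so a node role satisfies checks for its descendants but not for its ancestors.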
In Google Analytics, is it possible to track users separately by whether or not they're logged in? For example, a logged-in user will always reach my /customer/info page after logging in, so can I separate my Google Analytics results based on whether or not users of my site reached that page? I've contacted Google support many times, and they all said this is impossible. But I feel like they were just trying not to help me. They always said I need to redirect the customer to some kind of thankyou.html page before they're brought to their profile page, but why do I need this thankyou.html page when a user is always brought to their profile page after logging in, and my user profile page is also an html page (cshtml), so why not start the tracking there? A few results from Google say this is the code I want, but I'm not sure where to implement that. if (isset($userId)) { $gacode = "ga('create', 'UA-XXXX-Y', { 'userId': '%s' });"; echo sprintf($gacode, $userId); } else { $gacode = "ga('create', 'UA-XXXX-Y');"; echo sprintf($gacode); } I expect a way for Google Analytics to give me two separate sets of data based on whether or not a user is logged in. I want to see the different pages logged-in users might visit vs non-logged-in users. You have to set the userId tracking (similar to your example) and then create a view for the userId (from the Property settings). In this view you will get the data only of the users who have logged in (the data where the userId is sent to Analytics). https://support.google.com/analytics/answer/3123666?hl=en&ref_topic=3123660 Hi Michelle, thank you for your reply. I talked to my boss and he said there are no such parameters like user ID that our website tracks, so Google Analytics can't grab that. However, doesn't Google track each user session? Isn't there a way to separate analytics into sessions that contain a visit to the user profile page (meaning they logged in) and sessions where they do not (meaning they do not log in/guest users)??
You can use a custom dimension. Configure a custom dimension at hit level with 'loggedOut' as the default value; then, when the user logs in (and for as long as they stay logged in), change its value to 'loggedIn'. In this way you can see in Analytics the pages visited when a user is logged in and when they are not.
Method after an ActionListener I need to perform a method after the clicking of a JButton in a Java project. I'm making a client-server game, and after the click of a button the client/server needs to start waiting until the opponent performs a click. The problem is that at the end of the action listener code I start a loop, and until the opponent performs another click the JButton stays clicked.. public void actionPerformed(ActionEvent e) { JButton o = (JButton)e.getSource(); String name = o.getName().substring(3); Click(Integer.parseInt(name)); if(isServer) ListenServer(); else ListenClient(); } ListenServer() and ListenClient() are two loop functions... How can I call these methods AFTER the click??? Thanks and sorry for the bad English Read about Concurrency in Swing You can use threads and synchronization. See https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/CountDownLatch.html for examples.
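The suggestions boil down to: run the blocking listen loop on a worker thread so the event-dispatch thread can return immediately, and signal back when the opponent acts. A rough Python sketch of that shape (illustrative only — in Swing you would use a SwingWorker or a plain Thread, marshalling UI updates back with SwingUtilities.invokeLater):

```python
import threading

opponent_clicked = threading.Event()
result = []

def listen_for_opponent(on_done):
    # Stand-in for ListenServer()/ListenClient(): blocks until the
    # opponent acts, then reports back. Because it runs on a worker
    # thread, the "UI thread" below never gets stuck inside the loop.
    opponent_clicked.wait()
    on_done("opponent moved")

worker = threading.Thread(target=listen_for_opponent, args=(result.append,))
worker.start()          # actionPerformed would return right after this

# The UI thread stays responsive; eventually the opponent "clicks":
opponent_clicked.set()
worker.join()
print(result)           # ['opponent moved']
```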
Copy only file that matches variable string I'm trying to copy files based on the hour of Get-Date's time stamp to match it to files that are saved from a different source in a filename_10_00.ext structure. If I use 24 hour time, I can use the hour of Get-Date to determine which file I'm looking for. (This is to run hourly backups) This is what I have: $current_time = Get-Date $hour_var = $current_time.Hour Get-ChildItem -Path \\Path\To\Source | Where-Object { $_.Name -match "$hour_var" } | Copy-Item -Destination \\Path\To\Destination Can somebody tell me why it can't seem to match based on the PowerShell I've provided? I am looking for pure PowerShell with ideally no imported modules. Please show sample output of Get-ChildItem -Path \\Path\To\Source as well as the desired and actual output of Get-ChildItem -Path \\Path\To\Source | Where-Object {$_.Name -match "$hour_var"}. Unfortunately, there is no sample output. The desired output would be that it uses $hour_var as a regular expression to scan all files for the "filename_hh_mm_ss.ext" to compare the returned hour from $hour_var with the hour in hh from the filename. If Get-ChildItem -Path \\Path\To\Source doesn't produce any output, why are you surprised that nothing is being copied? My apologies. I misunderstood the output of Get-ChildItem -Path \Path\To\Source as the output of the copy. I can see the files in that folder when I Write-host $test after I assign the Get-ChildItem command to $test
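One likely cause: $hour_var is an unpadded integer, so at 9 AM it renders as "9", which as a regex fragment happily matches "19", "09" and even the minutes field — but never pins itself to the hh slot of filename_10_00.ext. Zero-padding the hour and matching the surrounding delimiters fixes that; here is the idea sketched in Python (in PowerShell the equivalent padded string would be $current_time.ToString('HH')):

```python
import re
from datetime import datetime

def hour_pattern(now=None):
    # Filenames embed the hour as two digits ("filename_10_00.ext"),
    # so pad the current hour and anchor it between the underscores.
    hour = (now or datetime.now()).hour
    return re.compile(rf"_{hour:02d}_")

pat = hour_pattern(datetime(2024, 1, 1, 9, 30))
print(pat.pattern)                           # _09_
print(bool(pat.search("backup_09_00.ext")))  # True
print(bool(pat.search("backup_19_00.ext")))  # False
```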
Device doesn't interact with Firebase, but emulator does Right now I'm developing a Flutter app using Firebase's Realtime Database. The problem I have is that my device can't seem to connect with Firebase at all. It can't upload any data to the DB nor read anything. The thing got weird when I tried the exact same app and code on an Android emulator, as it worked as expected. Example code This button below is supposed to upload something to the /carreras reference. Well, when I run an app with this code on an emulator, it works perfectly, but on my real device it doesn't. new RaisedButton( child: new Text("upload"), onPressed: () { var ref = FirebaseDatabase.instance.reference().child("carreras"); ref.set("some data").then((_) { Scaffold.of(context).showSnackBar(new SnackBar( content: new Text("data uploaded"), )); }); }, ) My first guess is that it has to do with internet permissions but I've checked the app info on my device and emulator, and it says the app doesn't request anything. I've searched through the web too and I don't seem to find anything related nor useful. Have you checked the rules in firebase...? Right now both read and write are set to true. I have tried to set them to auth === null but it didn't work either way. By any chance did you find a solution? I just runned your code in real device. And it works fine. My guess is that maybe your real device's api level lower than 21. Are you running your app on a pre-Lollipop device? if it's 5.0 or higher, then my answer is wrong and the bug lies somewhere else.
Getting an error using MongoDB aggregate $sample operation. -- MongoError: $sample stage could not find a non-duplicate document MongoError: $sample stage could not find a non-duplicate document after 100 while using a random cursor. This is likely a sporadic failure, please try again. Using MongoDB version 3.2.8 and just upgraded to 3.2.10. I'm following the example set forth by the MongoDB documentation.. There is very little documentation. The only thing I can find doing a search is a bug in an issue queue which doesn't solve the problem for me. My implementation is very simple. function createBattle(done) { // Need to make sure that the yachts are different. async.doWhilst( function(callback) { Yacht.aggregate( { $sample: { size: 2 } }, function (err, results) { console.log(err.toString()); if (err) return callback(err); callback(null, results); } ) }, function(results) { return results.length !== 2 || _.isEqual(results[0]._id, results[1]._id); }, function(err, results) { if (err) return done(err); done(null, results); } ) } I'm not sure what is going on. Any ideas? I am using Mongoose. It is in development. My guess is that something occurred when I changed the schema of the document. It works with the test mock data after creating 100000 fake documents or just 10 documents. So I decided to try a fresh start on the collection on the development instance of the database. Regardless, I exported the collection into a clean JSON file. I deleted the collection in the database. Then I imported the JSON dump. It works fine. It's a bug. This is probably the last straw with MongoDB for me. I'll probably go back to SQL, use Redis, and try PostGRES. fwiw - not an issue with mongoose. appears to be with wiredtiger and still present in mongodb 4.0.4. see https://jira.mongodb.org/browse/SERVER-29446 . See https://github.com/wiredtiger/wiredtiger/pull/2194/files for possible code-comment explanation of "why"
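As a client-side workaround (not a fix for the server-side bug, and only sensible when the candidate documents or their ids fit in memory), sampling without replacement removes the need for the do-while re-draw loop entirely, since it can never return the same document twice. A sketch in Python:

```python
import random

def pick_two_distinct(docs):
    # Sampling WITHOUT replacement: the two picks are always
    # different documents, so no retry loop is needed.
    if len(docs) < 2:
        raise ValueError("need at least two documents")
    return random.sample(docs, 2)

yachts = [{"_id": i} for i in range(10)]
a, b = pick_two_distinct(yachts)
print(a["_id"] != b["_id"])  # True on every run
```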
Cannot click input element within button element Take a look at the below markup & fiddle: http://jsfiddle.net/minlare/oh1mg7j6/ <button id="button" type="button"> <span id="test" style="background:pink;">test element</span> Add File <input type="file" name="file" multiple="multiple" id="upload"> </button> In Chrome, each element within the button can be selected through the developer console and js click events are delegated. In Firefox/IE you cannot select the child elements or pick up js click events. Is there a way around this in Firefox/IE? That's not valid HTML markup. A button cannot contain interactive content such as an input element (except of type hidden). You'd better explain why you need to wrap an input type file inside a button; it doesn't really make sense. This is working in Firefox and Chrome as well; see this codepen http://codepen.io/drewrygh/pen/rCucG An older version of http://fineuploader.com/ required a button element (for init), and it then inserted the input element @A.Wolff @minlare Sounds strange! Any concrete example? Why don't you upgrade the plugin version? Licensing prevents upgrade, I'll opt for using a div (remember having an issue when not using a button, but can't remember what it was... hopefully I'll be okay or it'll crop up again soon!) Thanks @A.Wolff Licensing prevents upgrade Ah ya, this happens sometimes :( Anyway, glad you've fixed it @minlare Fine Uploader never required a <button> element for the file chooser. In fact, the FAQ has stated for quite some time not to use a <button> due to the fact that the click event will never target the child input. It is not suggested to use interactive elements inside a button, so you can use a "div" instead of a "button", which will make it work in both Firefox and Chrome.
Check below <div id="button" type="button"> <span id="test" style="background:pink;">test element</span> Add File <input type="file" name="file" multiple="multiple" id="upload"> </div> http://jsfiddle.net/oh1mg7j6/8/ Thanks, this was to be my next step, just hoping there may have been a way to achieve this with css @minlare if the above solution helped, mark it as answer / vote it It's not good style. I would even say this way is not right. You can set a "click" event on your button to click the input. So if you want to hide the input[file] element but leave it clickable, you can do it like I said. Here is a very good link for events docs and examples. http://www.w3docs.com/learn-javascript/javascript-events.html
What is the most direct proof of $f$ is continuous iff $f\left(\overline{A}\right) \subset \overline{f(A)}$? Here is our definition of continuity: Let $X$ and $Y$ be any topological spaces, let $f \colon X \rightarrow Y$ be a mapping, and let $p$ be a point of $X$. Then $f$ is said to be continuous at point $p$ if, for every open set $V$ of $Y$ such that $f(p) \in V$, there exists an open set $U$ of $X$ such that $p \in U$ and $f(U) \subset V$. Let $S$ be any subset of $X$. If $f$ is continuous at every point of $S$, then $f$ is said to be continuous on set $S$. And, if $f$ is continuous at every point of $X$, then $f$ is said simply to be continuous. Then what is the most direct way of proving the following statement? Let $X$ and $Y$ be any topological spaces. Then a mapping $f \colon X \rightarrow Y$ is continuous (at every point of $X$) if and only if, for every subset $A$ of $X$, $$ f \left( \overline{A} \right) \subset \overline{f(A)}, $$ where on the left-hand-side we have the closure of $A$ in the topological space $X$ and on the right-hand-side we have the closure of $f(A)$ in the topological space $Y$. My Attempt: Suppose that $f \colon X \rightarrow Y$ is continuous. Let $q$ be any point of $f\left(\overline{A}\right)$. We show that this point $q \in \overline{f(A)}$. Let $V$ be any open set of $Y$ such that $q \in V$. In order to show that $q \in \overline{f(A)}$, we need to show that $V \cap f(A) \neq \emptyset$. Now as $q \in f\left( \overline{A} \right)$, so there exists a point $p \in \overline{A}$ such that $q = f(p)$; moreover as $p \in \overline{A}$ and $\overline{A} \subset X$, so $p \in X$; and as $p \in X$ and $f$ is continuous at every point of $X$, so $f$ is continuous at $p$ also. Thus the mapping $f \colon X \rightarrow Y$ is continuous at point $p \in X$ and $V$ is an open set of $Y$ containing $f(p)$. So there exists an open set $U$ of $X$ such that $p \in U$ and $f(U) \subset V$.
Now as $p \in \overline{A}$ and $U$ is an open set of $X$ containing $p$, so we must have $U \cap A \neq \emptyset$; let $a \in U \cap A$. Then $a \in A$ and $a \in U$, which implies that $f(a) \in f(A)$ and $f(a) \in f(U)$, but $f(U) \subset V$, so we can conclude that $f(a) \in V$ also. Thus we have $f(a) \in f(A) \cap V$, which implies that $f(A) \cap V \neq \emptyset$. So far we have shown that, for every open set $V$ of the topological space $Y$ such that $q \in V$, we have $f(A) \cap V \neq \emptyset$. Therefore $q \in \overline{f(A)}$. But $q$ was an arbitrary point of set $f\left(\overline{A}\right)$. Hence we can conclude that $$ f \left( \overline{A} \right) \subset \overline{f(A)}. $$ Am I right? Conversely, suppose that, for every subset $A$ of $X$, we have $$ f \left( \overline{A} \right) \subset \overline{f(A)}. $$ We show that $f$ is continuous (at every point of $X$). Let $p$ be an arbitrary point of $X$. We show that $f$ is continuous at $p$. For this, let $V$ be any open set of $Y$ such that $f(p) \in V$. Then $Y\setminus V$ is a closed set of $Y$ and $f(p) \not\in Y \setminus V$. As $Y \setminus V$ is a closed set of $Y$, so $$ \overline{Y \setminus V} = Y \setminus V, $$ which implies that $$ f^{-1} \left( \overline{Y \setminus V} \right) = f^{-1} (Y \setminus V) = f^{-1}(Y) \setminus f^{-1}(V) = X \setminus f^{-1} (V). $$ Is my work up to this point correct? If so, then how to proceed from here? Or, are there any mistakes in what I have done? https://artofproblemsolving.com/community/q2h1957657p13529050 Will, it be of help? @RalphClausen thank you for intending to be helpful, but the mathematical expressions on that page aren't proprely visible with my High Constrast display settings. I cannot find any mistakes in this, but (just like you) do not know yet how to proceed. Let $X$ and $Y$ be topological spaces and $f \colon X → Y$ be map. We say $f$ is drag-continuous if for $V ⊆ Y$ open $f^{-1}(V)$ is open in $X$. 
$f$ is touch-continuous if for $T ⊆ X$ arbitrary $f(\overline T) ⊆ \overline {f(T)}$. Drag-continuous maps are ones so that for any open $V ⊆ Y$ and any $x ∈ X$ with $f(x) ∈ V$, they drag an entire neighbourhood $U ⊆ X$ of $x$ into $V$, that is $f(U) ⊆ V$. Touch-continuous maps are ones so that if $x ∈ X$ touches a part $T ⊆ X$, that is $x ∈ \overline T$, then $f(x)$ touches $f(T)$, that is $f(x) ∈ \overline {f(T)}$. To prove they are equivalent, remember the basic facts that for arbitrary maps $f \colon X → Y$ and $A ⊆ X$ and $B ⊆ Y$, we have $f(A) ⊆ B \iff A ⊆ f^{-1} (B)$, being drag-continuous is equivalent to preimages of closed sets being closed, for sets $A ⊆ X$ and $T ⊆ X$ closed, $A ⊆ T \iff \overline A ⊆ T$, and for sets $T ⊆ X$, we have $T ⊆ X~\text{is closed} \iff \overline T ⊆ T$. Let $f$ be drag-continuous and $A ⊆ X$. Then $$f(\overline A) ⊆ \overline {f(A)} \iff \overline A ⊆ f^{-1} (\overline {f(A)}) \iff A ⊆ f^{-1}(\overline {f(A)}) \iff f(A) ⊆ \overline {f(A)},$$ so indeed $f(\overline A) ⊆ \overline {f(A)}$, as the latter inclusion is true by extensivity of closing. Let $f$ be touch-continuous and $B ⊆ Y$ closed. Then $$f^{-1}(B) ⊆ X ~\text{is closed} \iff \overline{f^{-1}(B)} ⊆ f^{-1} (B) \iff f(\overline{f^{-1}(B)}) ⊆ B.$$ Now for $A = f^{-1}(B)$ we have $f(A) ⊆ B$, and since $B$ is closed, $\overline {f(A)} ⊆ B$, so $$f(\overline A) ⊆ \overline{f(A)} ⊆ B, \quad\text{so}\quad f(\overline{f^{-1}(B)}) ⊆ B,$$ hence $f^{-1}(B) ⊆ X$ is indeed closed. Did the same thing as drhab, just gave the shortest proof I could come up with. Let $f:X\to Y$ be continuous and let $A\subseteq X$. Then $f^{-1}\left(\overline{f\left(A\right)}\right)$ is closed since it is the preimage of a closed set. This, together with the evident inclusion $A\subseteq f^{-1}\left(\overline{f\left(A\right)}\right)$, allows us to conclude that $\overline{A}\subseteq f^{-1}\left(\overline{f\left(A\right)}\right)$ or equivalently $f\left(\overline{A}\right)\subseteq\overline{f\left(A\right)}$.
Let $f:X\to Y$ be not continuous. Then some closed set $B\subseteq Y$ exists such that $A:=f^{-1}\left(B\right)$ is not closed. Then $\overline{A}-A$ will contain an element $x$. Then $f\left(x\right)\notin B$ because $x\notin A=f^{-1}(B)$. Observe that $f\left(A\right)\subseteq B$ so that - because $B$ is closed - we have: $\overline{f\left(A\right)}\subseteq B$. We conclude that $f\left(x\right)\notin\overline{f\left(A\right)}$. But $x\in\overline A$ so that $f(x)\in f(\overline A)$ so this shows that we do not have $f\left(\overline{A}\right)\subseteq\overline{f\left(A\right)}$. So it has been proved that whenever $f$ is not continuous we can find a set $A$ such that $f\left(\overline{A}\right)\subseteq\overline{f\left(A\right)}$ is not true. thank you for your help, but could you please go through my attempted proof and complete the reasoning of the converse part therein? I must admit that my answer is a respond not so much to the body of your question but to the title (asking for "most direct proof.."). I will have a look but make no further promises :-). Suppose that $f$ is continuous, and let $A \subseteq X$ be any subset. $\overline{f[A]}$ is closed in $Y$ and contains $f[A]$ and so by continuity, $f^{-1}[\overline{f[A]}]$ is closed and it clearly contains $A$. So $\overline{A} \subseteq f^{-1}[\overline{f[A]}]$ (the closure is the smallest closed superset of $A$) and so $f[\overline{A}] \subseteq \overline{f[A]}$ by definition. OTOH if $f$ fulfills the closure condition, let $C \subseteq Y$ be closed. Define $A= f^{-1}[C]$ and by the property, $$f[\overline{A}] \subseteq \overline{f[A]} = \overline{f[f^{-1}[C]]} \subseteq \overline{C}=C$$ as $C$ is closed and this implies that $\overline{A} \subseteq f^{-1}[C]=A$ and thus $A$ is closed and $f$ is continuous (inverse image of a closed set is closed). This is the same proof as mine, but written down slightly shorter. @k.stm The OP's asking for the shortest proof, right? 
True, but it doesn’t really shorten any argument I made, but only condenses the exposition. You could have just as easily commented on mine “Maybe shorten the last two lines by writing $A = f^{-1}(B)$ from the beginning”. So, it’s a difference in style, not in line of thinking. Doesn’t matter, though.
Convexity of Call option prices using Put-Call parity relationship I am trying to price vanilla options using a particular Bayesian approach that I have found in a paper. To do that I need to construct a likelihood function, approximating the tail of the distribution using the derivatives of the call prices with respect to the strike. In the paper the work is carried out using only calls, while my dataset includes both calls and puts. since I want to work with both of them I tried to recover the call prices from the observed put prices simply using the put-call parity relationship. The problem is that I expected to find that the call prices recovered in that way respect the convexity in the strike price, so that the price of the derivative decreases when the strike increases. Here there is an example of what I obtain. These are the initial observed prices for calls and puts (ordered by increasing strike): [723.15, 713.1, 490.95, 432.0, 421.9, 393.6, 391.7, 386.5, 372.0, 341.81, 0.05, 319.42, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.08, 0.05, 0.05, 0.05, 0.1, 0.05, 223.65, 0.08, 214.74, 0.1, 0.05, 0.1, 0.1, 0.1, 194.04, 0.1, 0.1, 0.1, 0.09, 0.08, 173.38, 0.13, 0.15, 0.15, 0.1, 0.25, 147.35, 0.18, 136.92, 0.15, 0.15, 0.2, 123.6, 0.2, 122.45, 0.2, 118.6, 0.25, 108.0, 0.3, 102.15, 0.3, 0.31, 99.0, 0.35, 92.0, 0.4, 0.46, 77.18, 0.48, 79.7, 0.46, 71.85, 0.55, 66.95, 0.66, 65.0, 0.65, 59.33, 0.75, 52.8, 0.93, 48.0, 1.1, 42.51, 1.38, 40.7, 1.7, 33.75, 2.03, 28.88, 2.7, 26.4, 3.4, 22.1, 4.6, 18.4, 6.0, 14.15, 7.12, 11.25, 7.8, 9.0, 5.7, 11.36, 3.0, 14.0, 2.25, 18.4, 1.37, 23.8, 0.8, 26.4, 0.51, 33.2, 0.3, 36.5, 0.22, 0.2, 0.2, 51.4, 0.15, 0.15, 0.1, 0.1, 0.1, 0.05, 89.15, 0.1, 0.05, 0.05, 201.05, 251.05] Here instead I have the same observed price after the use of the put-call parity: [723.15, 713.1, 490.95, 432.0, 421.9, 393.6, 391.7, 386.5, 372.0, 341.81, 340.39, 319.42, 320.39, 310.39, 300.39, 295.39, 290.39, 275.39, 270.39, 265.39, 260.39, 255.42, 250.39, 
245.39, 240.39, 235.44, 230.39, 223.65, 225.42, 214.74, 220.44, 215.39, 210.44, 205.44, 200.44, 194.04, 195.44, 190.44, 185.44, 180.43, 175.42, 173.38, 170.47, 165.49, 160.49, 155.44, 150.59, 147.35, 145.52, 136.92, 140.49, 135.49, 130.54, 123.6, 125.54, 122.45, 120.54, 118.6, 115.59, 108.0, 110.64, 102.15, 105.64, 100.65, 99.0, 95.69, 92.0, 90.74, 85.8, 77.18, 80.82, 79.7, 75.8, 71.85, 70.89, 66.95, 66.0, 65.0, 60.99, 59.33, 56.09, 52.8, 51.27, 48.0, 46.44, 42.51, 41.72, 40.7, 37.04, 33.75, 32.37, 28.88, 28.04, 26.4, 23.74, 22.1, 19.94, 18.4, 16.34, 14.15, 12.46, 11.25, 7.8, 9.34, 5.7, 6.7, 3.0, 4.34, 2.25, 3.74, 1.37, 4.14, 0.8, 1.74, 0.51, 3.54, 0.3, 1.84, 0.22, 0.2, 0.2, 1.74, 0.15, 0.15, 0.1, 0.1, 0.1, 0.05, 9.49, 0.1, 0.05, 0.05, 1.39, 1.39] For completeness these are the respective type of option (call/put) and the strike prices, while the maturity is fixed and very short in this case( the same problem appears also for longer maturities). All options are written on SPX. [1370.0, 1380.0, 1600.0, 1660.0, 1670.0, 1700.0, 1705.0, 1710.0, 1720.0, 1750.0, 1760.0, 1775.0, 1780.0, 1790.0, 1800.0, 1805.0, 1810.0, 1825.0, 1830.0, 1835.0, 1840.0, 1845.0, 1850.0, 1855.0, 1860.0, 1865.0, 1870.0, 1870.0, 1875.0, 1875.0, 1880.0, 1885.0, 1890.0, 1895.0, 1900.0, 1900.0, 1905.0, 1910.0, 1915.0, 1920.0, 1925.0, 1925.0, 1930.0, 1935.0, 1940.0, 1945.0, 1950.0, 1950.0, 1955.0, 1955.0, 1960.0, 1965.0, 1970.0, 1970.0, 1975.0, 1975.0, 1980.0, 1980.0, 1985.0, 1985.0, 1990.0, 1990.0, 1995.0, 2000.0, 2000.0, 2005.0, 2005.0, 2010.0, 2015.0, 2015.0, 2020.0, 2020.0, 2025.0, 2025.0, 2030.0, 2030.0, 2035.0, 2035.0, 2040.0, 2040.0, 2045.0, 2045.0, 2050.0, 2050.0, 2055.0, 2055.0, 2060.0, 2060.0, 2065.0, 2065.0, 2070.0, 2070.0, 2075.0, 2075.0, 2080.0, 2080.0, 2085.0, 2085.0, 2090.0, 2090.0, 2095.0, 2095.0, 2100.0, 2100.0, 2105.0, 2105.0, 2110.0, 2110.0, 2115.0, 2115.0, 2120.0, 2120.0, 2125.0, 2125.0, 2130.0, 2130.0, 2135.0, 2135.0, 2140.0, 2145.0, 2150.0, 2150.0, 2155.0, 2160.0, 2165.0, 
2170.0, 2175.0, 2180.0, 2180.0, 2185.0, 2190.0, 2195.0, 2300.0, 2350.0] ['C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'C', 'P', 'C', 'P', 'P', 'P', 'P', 'P', 'P', 'P', 'P', 'P', 'P', 'P', 'P', 'P', 'P', 'P', 'C', 'P', 'C', 'P', 'P', 'P', 'P', 'P', 'C', 'P', 'P', 'P', 'P', 'P', 'C', 'P', 'P', 'P', 'P', 'P', 'C', 'P', 'C', 'P', 'P', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'P', 'C', 'P', 'C', 'P', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'P', 'C', 'C', 'C', 'P', 'C', 'C', 'C', 'C', 'C', 'C', 'P', 'C', 'C', 'C', 'P', 'P'] The only simplified assumption that I did is assuming interest rate equal to 0, so that I obtain the call prices like C = P + S - K. Could the problem derive from that assumption or there is another reason that brings to violate the convexity in K of such call prices? Is there an easy way to infer the IR from my available data? One example of my problem is showed by this pair: [319.42, 320.39] [1775.0, 1780.0] ['C', 'P'] In my opinion the call price 320.39 obtained by the PCP should be lower than the previous one, because its strike is higher. This repeats many times in my dataset if I apply the PCP. First why are you showing so many decimal places, and where did you get this data? Secondly please show a specific example of where you think the 'convexity ' is violated. Sorry for that. I improved the format and showed one example. This is probably caused by incorrect put/call parity calculation, due to the fact that deep in the money call option prices are not exactly contemporaneous with the underlying futures price you are using. Another possibility is that the deep in the money option prices are stale, since no one trades them. I'm 100pct sure that the prices are not real. 
One last thing: there's no point in studying short-dated options that are more than 20pct in or out of the money, since they have very little time value. Either increase expiration or focus on closer strikes. Actually, when imposing the relationship $C - P = S - K$ you are assuming more than just a zero risk-free rate. Indeed the true call-put parity reads: $$ C - P = DF(0,T) \left( F(0,T) - K \right) $$ Thus you are also saying that $DF(0,T)F(0,T) = S$ while in fact $DF(0,T)F(0,T) = S e^{-(repo+q)T}$, so you are neglecting the effect of dividends (and the repo rate or similar liquidity costs). Now, given the data in your hands, you could easily estimate $F(0,T)$ as the strike where the call/put price curves intersect.
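The line-fit view of the last answer is easy to check with a few lines of code: put-call parity says C − P = DF(0,T)·(F(0,T) − K), which is linear in K, so a least-squares line through matched C − P quotes gives DF as minus the slope and F as intercept/DF. A minimal plain-Python sketch, with synthetic quotes standing in for the SPX data above (the function name and numbers are illustrative assumptions, not from the question):

```python
# Put-call parity: C - P = DF * (F - K), a straight line in K.
# Fit y = a + b*K by least squares on matched call/put quotes; then
# DF = -b and F = a / DF. Strikes and quotes below are synthetic.
def implied_df_and_forward(strikes, call_minus_put):
    n = len(strikes)
    mean_k = sum(strikes) / n
    mean_y = sum(call_minus_put) / n
    cov = sum((k - mean_k) * (y - mean_y)
              for k, y in zip(strikes, call_minus_put))
    var = sum((k - mean_k) ** 2 for k in strikes)
    b = cov / var                 # slope = -DF
    a = mean_y - b * mean_k       # intercept = DF * F
    df = -b
    return df, a / df

strikes = [1900.0, 1950.0, 2000.0, 2050.0, 2100.0]
df_true, fwd_true = 0.999, 2055.0
quotes = [df_true * (fwd_true - k) for k in strikes]  # noiseless C - P values
df_est, fwd_est = implied_df_and_forward(strikes, quotes)
```

On real data the stale deep in-the-money quotes flagged above should be excluded from the fit, since they are exactly the points that break the line.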
read any letter for my combobox I have this simple query that populates a ComboBox with values: void fillCari()//fill Cari-med dropdown with values { try { string connectionString = "Data Source=LPMSW09000012JD\\SQLEXPRESS;Initial Catalog=Carimed_Inventory;Integrated Security=True"; SqlConnection con2 = new SqlConnection(connectionString); con2.Open(); string query = "SELECT * FROM dbo.Carimed"; SqlCommand cmd2 = new SqlCommand(query, con2); SqlDataReader dr2 = cmd2.ExecuteReader(); while (dr2.Read()) { string cari_des = dr2.GetString(dr2.GetOrdinal("Item_Description")); comboBox3.Items.Add(cari_des); comboBox3.Text.Trim(); } } catch (Exception ex) { MessageBox.Show(ex.ToString()); } } It does all it needs to do: when the user types a letter, the dropdown starts to filter accordingly. What I want to do next is to filter the dropdown based on any value typed in, e.g. if an item is, say, "16 cat123", usually the user would have to start off by typing the number "16" or "1" for it to show the results. Instead, if the user starts off by typing "cat123", it should still bring up "16 cat123", which is the original item. How can I achieve this? Could this be done through the LIKE operator within my SELECT query? You should be disposing the SqlConnection, SqlCommand and SqlDataReader objects, either explicitly using .Close() or with using (… statements. Yeah, that was an oversight in my code, done now. @stuartd If you have the option of using Entity Framework, it will make your life much easier. I found a workaround by using the guide of this user from here. What the author did was override the default ComboBox behavior in WinForms. I just found a way to tie it into my code and got it up and running. Hopefully this is of help to someone in the future. 
Within the SELECT query you can ask it like that: string query = "SELECT * FROM dbo.Carimed WHERE Item_Description LIKE @TEXT"; SqlCommand cmd2 = new SqlCommand(query, con2); cmd2.Parameters.Add("@TEXT", SqlDbType.NVarChar).Value = "%" + comboBox3.Text + "%"; SqlDataReader dr2 = cmd2.ExecuteReader(); Note that the query string must be built before the SqlCommand that uses it, and the parameter cannot sit inside a quoted '%@TEXT%' literal (it would be matched as those literal characters), so the % wildcards are prepended and appended to the parameter value instead. This should filter your query to select the rows that contain the text typed.
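For readers outside C#, the same "contains" filter can be sketched in a few lines of Python with sqlite3; the table and item names simply mirror the question, and this illustrates the wildcards-in-the-parameter idea, not the ADO.NET API:

```python
import sqlite3

# In-memory stand-in for dbo.Carimed; names mirror the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Carimed (Item_Description TEXT)")
conn.executemany("INSERT INTO Carimed VALUES (?)",
                 [("16 cat123",), ("20 dog9",), ("11 cat7",)])

typed = "cat123"  # what the user typed into the combo box
# The % wildcards belong in the parameter value, not inside the SQL string.
rows = conn.execute(
    "SELECT Item_Description FROM Carimed WHERE Item_Description LIKE ?",
    ("%" + typed + "%",),
).fetchall()
# rows -> [('16 cat123',)]
```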
How can I protect a GitHub branch and allow only GitHub Actions to push to it and no one else? I am writing a GitHub workflow where I am building documentation from the main branch docstrings and pushing it to gh-pages, and having GitHub Pages deploy off the gh-pages branch. How can I protect that branch so that only GitHub Actions can push to it and no one else? Can you just lock the branch and then only admins can force push? I'm not sure you could protect it from yourself, but if it's locked then it becomes "read only" for everyone except admins. As long as someone random is not pushing to my gh-pages then I would be happy. I'm facing exactly the same use case in my repo; after reading the issue in the GitHub community, here is my workaround: Set two branch rules. main: check "Require a pull request before merging" and "Require 1 approval". gh-pages: check "Allow force pushes" and add yourself or an org account to "Specify who can force push". Create a PAT of yourself or the org account and save it as a repository secret (in Settings/Actions secrets and variables/Actions). Pass the token when checking out code in the GitHub Action: steps: - uses: actions/checkout@v3 with: token: ${{ secrets.GH_TOKEN }} In Settings/Actions permissions/Fork pull request workflows from outside collaborators, check "require approval for first-time contributors" or "require approval for all outside collaborators". A new branch protection rule can be set up at https://github.com/USERNAME/REPO/settings/branch_protection_rules/new. Any branch that has a matching name will be protected. The option "lock branch" will lock the branch and only allow admins to override and commit. Additionally, branch protection rules should be set up on main to require a pull request, so that people cannot just push directly to main. This would also lock out the GH Action, so it does not solve OP's issue.
Put space between digits in C# I need to generate a random number and insert a space or comma between its digits. This value will be spoken using AWS Polly. Without spaces, for 6565, she speaks "six thousand five hundred and sixty five". The part of the code generating the random number I already have, but I don't know how to insert the space between the digits. Can anyone help me? See below: var withoutSpaces = new Random().Next(10000,100000); var withSpaces = ???????? return withSpaces; Awaiting your answer! Must withSpaces be a number, or could it be a string? It can be a string, no problem. It's even better if it's a string. string.Join(" ", withoutSpaces.ToString().ToCharArray()), duplicates e.g. https://stackoverflow.com/questions/33363636/trying-to-add-spaces-between-characters-in-a-string-in-c-sharp If the code is for something security related, don't use System.Random and instead use a secure RNG - at the very least, though, you'll probably want to reuse the new Random() because otherwise two requests at the same time will get the same number. @MostafaTarekYassien a numeric data type cannot contain spaces... Use a format: withoutSpaces.ToString("# # # # # #"). Disadvantage: on a small number, this will give spaces at the beginning. var withoutSpaces = new Random().Next(10000, 100000); string withSpaces = ""; for (int i = 0; i < withoutSpaces.ToString().Length; i++) { string test = withoutSpaces.ToString().ElementAt(i) + " "; withSpaces += test; } string withSpacesFinal = withSpaces.Trim(); This should do it for you. Hi Mostafa! Thank you for your answer. I will try your solution, but Luke's answer solved my doubt. @Luke's answer is way better than mine
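The string.Join idea from the comments translates directly to other languages; a Python sketch of the same approach, using the 6565 example from the question:

```python
import random

# Same range as the C# Random().Next(10000, 100000): always 5 digits.
without_spaces = random.randrange(10000, 100000)
with_spaces = " ".join(str(without_spaces))

# The 6565 example from the question:
spoken_form = " ".join(str(6565))
# spoken_form -> "6 5 6 5"
```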
vue-router - How to get previous page url? I want to get previous page link or url in vue-router. Like a this. How to do it? const link = this.$router.getPrevLink(); // function is not exist? this.$router.push(link); Near concept is this.$router.go(-1). this.$router.go(-1); All of vue-router's navigation guards receive the previous route as a from argument .. Every guard function receives three arguments: to: Route: the target Route Object being navigated to. from: Route: the current route being navigated away from. next: Function: this function must be called to resolve the hook. The action depends on the arguments provided to next As an example you could use beforeRouteEnter, an in-component navigation guard, to get the previous route and store it in your data .. ... data() { return { ... prevRoute: null } }, beforeRouteEnter(to, from, next) { next(vm => { vm.prevRoute = from }) }, ... Then you can use this.prevRoute.path to get the previous URL. Note: That you can only use beforeRouteEnter in the component that the route is defined for and not a child component which is not defined in the router. @hitautodestruct Then adding it as a global mixin wouldn't be a good idea since components that don't have a route would mess up ?! @Husam Ibrahim Also what is the advantage of this method compared to the .go(-1) ? @Ibris The OP asked for a way to get the URL of the previous page (I don't know the details of their use case). If you simply want to go to the previous page then this.$router.go(-1) is the way to go ;) Say I directly navigate to the SPA by copying and pasting the link in the browser, this.$router.go(-1) is not suitable in this case. This does not work for me, I get the error message: [Vue warn]: Error in the mounted hook:" TypeError: this.prevRoute is null " when trying to get this.prevRoute.path from the console. Confusing is the fact that console.log (this) correctly displays prevRoute. 
@HusamIbrahim @Tenarius Seems that this is the expected behavior because next callbacks are invoked after a nextTick. So you can't access properties set by the callback in created/mounted because these hooks come earlier. See this issue. is there beforeRouteEnter in Vue 3? For Vue 3 with Vue Router 4, my solution was: this.$router.options.history.state.back The final code: export default defineComponent({ name: 'Error404', data () { return { lastPath: null } }, created () { this.lastPath = this.$router.options.history.state.back }, computed: { prevRoutePatch () { return this.lastPath ? this.lastPath : '/dashboard' } } }) Regards. For Vuejs 3, First import { useRouter } from "vue-router"; Second invoke the router instance const router = useRouter(); then for example: router.options.history.state.back works like cake! vue-router v4.0.16: router.options.history.state['back'] Though this answer is great, the received route wouldn't always be the route before in history in case of popstate navigations (aka when the user clicks the back button). So if you use Vuex here's a code snippet that respects this behavior: const store = new Vuex.Store({ state: { routerHistory: [], }, }); const router = new VueRouter({ mode: 'history', scrollBehavior(to, from, savedPosition) { const fromHistory = Boolean(savedPosition); if (fromHistory && store.state.routerHistory.length > 0) { store.state.routerHistory.splice(-1, 1); } else { store.state.routerHistory.push(from); } return savedPosition || { x: 0, y: 0 }; }, }); Once implemented, you can define a getter for the previous route inside your store: const store = new Vuex.Store({ // ... getters: { previousRoute: (state) => { const historyLen = state.routerHistory.length; if (historyLen == 0) return null; return state.routerHistory[historyLen - 1]; }, }, }); This code uses the scrollBehavior hook, which only receives the savedPosition argument if it was a popstate navigation. 
Thanks to Vuex we can then store all the routes in an array (over multiple pages). This routing solution will fall back to a static url if a previous url does not exist. <template> <div> <router-link :to="prevRoutePath">Previous Page</router-link> <router-link to="/">Home Page</router-link> </div> </template> <script> export default { beforeRouteEnter(to, from, next) { next(vm => { vm.prevRoute = from; }); }, computed: { prevRoutePath() {return this.prevRoute ? this.prevRoute.path : '/'}, } } </script> <template> <div> <router-link :to="previousPage">Previous Page</router-link> <router-link to="/">Home Page</router-link> </div> </template> <script setup> import { useRouter } from 'vue-router'; import { computed } from 'vue'; const router = useRouter(); const previousPage = computed(() => { const lastPath = router.options.history.state.back; return lastPath ? lastPath : '/'; }); </script> To yank it right out of the $router object, use this in your computed or methods: lastRouteName: function() { let returnVal = ''; const routerStack = this.$router.history.stack; const idx = this.$router.history.index; if (idx > 0) { returnVal = routerStack[idx - 1].name; } return returnVal; } That'll return the route name, but you can also return the .path or whatever other property if you like. To inspect the history object, do console.log(this.$router.history); Property 'history' does not exist on type 'VueRouter' Most likely this only works for VueRouter 4. For Vue 2.6 that works for me: In-App.vue component when you initialize the VueRouter you can get from and to details as below: export const globalState = Vue.observable({ from: {}, to: {} }); const router = new VueRouter({ base: '/', routes: routes, mode: 'history', scrollBehavior(to, from, savedPosition) { Vue.set(globalState, 'from', from); Vue.set(globalState, 'to', to); return { x: 0, y: 0 }; }, }); routes are ur routes and each time u navigate to a different URL you will have all from and to routes details. 
Then u can import the globalState object and use it like that import { globalState } from './app'; goBack() { const lastPath = globalState.from?.fullPath; const currentPath = globalState.to?.fullPath; if (lastPath && currentPath !== lastPath) { this.$router.push(lastPath); } else { this.$router.push('another-path'); } }
How to reformat handsontable positioning? I am using handsontable to display some information in a few tables and it looks like the handsontable is covering my header text. I have them in html next to each other in the DOM. I know this is a CSS thing I have to change to override it. How can I have these tables load nicely on top of each other on the same page? Here is what it looks like. Don't worry about the data. It loads properly. Here are some parts of my JavaScript for one table; there are others in here. document.addEventListener("DOMContentLoaded", function(){ var containerCPhotoCounts = document.getElementById('tableCountryPhotoCounts'); //load handsontables var hot = new Handsontable(containerCPhotoCounts, { data: countryPhotoCounts, minSpareRows: 0, columnSorting: true, colWidths: [200, 200, 200], disableVisualSelection: true, colHeaders: ['Country Code', 'Photo Count', 'Percent of Total'], columns: [ {data: 'country',editor: false}, {data: 'count',editor: false}, {data: 'percent', type: 'numeric',editor: false,format: '0.000 %'} ] }); }); Here is what my html looks like. It's Jade (I'm using Express). h3 Photo Count By Country #tableCountryPhotoCounts h3 Photo Count By Date #tableDatePhotoCounts h3 Profile Count By Country #tableCountryProfileCounts h3 Photo Count By Profile #tableProfilePhotoCounts This code is not complete, and no fiddle link has been provided. I would suggest you clear the bottom margin or add some margin to all the tables. If you can share the code which renders in the browser then we would love to help you. Agreed, this is as simple as adding a margin to your table divs. Just add a class to your table divs and with CSS do something like margin: 5px 0px 5px 0px
directly tapping Create2 battery Since the mini DIN power is limited to 250ma, has anyone found a way to tap the battery to supply power to added electronics? I'd prefer this over adding a secondary battery and separate charger Thanks, Frank We have tried it and have a stably working system using a DC DC voltage converter directly off the battery. We have seen issues though where it will try to run the converter even when the 14-18v battery is at 4 volts so can extremely deep discharge the battery. If you do this, I would suggest calculating your max current load you want to run and validating it won't over-tax the battery in an un-safe way or have significant spikes in power that will be a problem across the system. Buffer voltage + UPS battery Therefore, we introduced a buffer inbetween (OpenUPS or battery backup battery) to buffer it, clean things up better and allow the computer to function without the create being powered. Compute platform shutdown on low power We also set up an automatic shutdown of the laptop if voltage becomes too low and "return to dock" commands after 15 minutes to make sure it re-charges itself as regularly as possible if not being actively used. Check charge regularly on charger Additionally, we found that on the charger, when in trickle charge mode, the system does not actually test the voltage (Expects it to always be full), so the computer will discharge the battery and it won't turn the charger back on. You therefore have to pull it out of passive mode and then trigger it again (Back out then dock and charge) and it will correctly check the voltage and charge. State This is currently not a fully tested system, but have have tried it on multiple create 2's / NUC's / Laptops and have had the system recharging itself for 10+ days in a row while powering lidar, camera, NUC, create, etc. If you try this, would love to hear how it goes for you and improvements you see as valuable. 
Please also just be careful and have a fire extinguisher near-by. ;-) Wow! I am really impressed by al the testing you went through. Thank you! I also had setup a similar approach using a dc-dc converter and was running a smaller load consisting of an iPad and an EZ-Robot. @faengelm would love to hear how it goes for you and any issues you encounter. I know this is several years old at this point, but I'm trying and failing to do the same thing here. Could you elaborate on your procedure to force the Create2 to switch from trickle-charge to full-charge mode? I'm currently using the ROS create_driver and switching from Passive->Full->Passive does not seem to change anything (charge level is still incorrect) This was a long time ago, but my memory is that we created a watchdog timer every N minutes to toggle the robot into active state for a period of time to wake it up and then a few seconds later gave it the command to go back to sleep and start charging again. We had to fork the create driver. Looking back, I would suggest using the Kabuki instead as it is not much more expensive and has stable charging. We spent way too much time debugging a system which wasn't designed for long-term active use.
specify optional parameter windows form control vb.net I need to send a subroutine a Windows Forms control name as an optional parameter, e.g. Sub putdebug(ByVal str As String, Optional ByVal ctrl As ListBox = lbSystem) ..output to different listboxes depending on ctrl name, defaulting to lbSystem if not specified. But lbSystem is getting underlined and the error is "Constant Expression Required". Thanks! Try this instead: Sub putdebug(ByVal str As String, Optional ByVal ctrl As ListBox = Nothing) If IsNothing(ctrl) Then ctrl = lbSystem End If End Sub VB won't allow you to assign a non-constant value to the optional parameter directly. It's so sad that I instantly knew what your user name meant. :) You're one of the only people who does! Spent my whole childhood typing it in :) Had to put "If IsNothing(ctrl) Then" but that worked! Thanks. I'm off to drink some Duff Beer. Haha, glad it helped - I'll update the answer with your changes.
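The same "constant expression required" restriction exists in other languages, and the Nothing-sentinel workaround from the answer is the standard pattern everywhere; a Python sketch of it (the listbox names are illustrative stand-ins, since Python has no WinForms controls):

```python
def put_debug(message, ctrl=None):
    # Default parameter values must be constant expressions, so use a
    # None sentinel and resolve the real default inside the body.
    if ctrl is None:
        ctrl = "lbSystem"  # illustrative stand-in for the default ListBox
    return ctrl, message

default_target = put_debug("boot ok")
explicit_target = put_debug("boot ok", ctrl="lbErrors")
```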
Analytic expression for the Tsirelson bound of the I3322 inequality? Finding Tsirelson bounds for Bell inequalities is a well-loved problem in quantum information theory. A famous case where it is still open is for the I3322 inequality. In this paper Pál and Vértesi conjectured that it is the limit of the sequence $$a_n = \max_{c_i\in[0,1]} \lambda(M_{n})$$ where $\lambda(\cdot)$ is the maximal eigenvalue, and $M_{n}$ is the $(n+1)\times (n+1)$ tridiagonal matrix $$\begin{pmatrix} c_0-c_0^2 & \frac1{\sqrt2}\sqrt{1-c_0^2} & & \\ \frac1{\sqrt2}\sqrt{1-c_0^2} & c_1c_0+\frac{c_1-c_0}{2} & \frac12\sqrt{1-c_1^2}\\ & \frac12\sqrt{1-c_1^2}& \ddots & \ddots\\ & & \ddots & \ddots & \frac12\sqrt{1-c_{n-1}^2}\\ & & & \frac12\sqrt{1-c_{n-1}^2} & c_nc_{n-1}+\frac{c_n-c_{n-1}}{2} \end{pmatrix}$$ Can one get an analytic expression for it? This expression is nice for calculating $a_n$ numerically, but solving it exactly is a nightmare. I managed to do it for $a_1$, Mathematica did it for $a_2$, but after that there are only numerics. The first few values are $a_1 = \frac{1}{16} \left(5 + 5 \sqrt{5}+\sqrt{50 \sqrt{5}-106}\right) \approx 1.161835$ $a_2 \approx 1.224739 $ $a_3 \approx 1.238024 $ $a_{100} \approx 1.250875$ Update: The asymptotic behaviour of the optimal solutions seems to be rather simple. Ignoring boundary effects, numerical evidence suggests that the $c_i$ converge quickly to a limiting value $C \approx 0.878273$, and that the coefficients of the optimal eigenstate decay exponentially with $i$. Assuming that both these behaviours do happen, elementary arguments show that $$\lim_{n\to\infty} a_n = \frac{4C^4-C^2+1}{4C^2-1}$$ so the problem reduces to calculating $C$. What are diagonals? Unclear for me. Sorry, the matrix with properly written diagonals was too big. I rewrote the question to make it more clear.
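For anyone wanting to reproduce the numerics: $a_1$ can be recovered by brute-force grid search over $(c_0, c_1)$ using the closed-form largest eigenvalue of a symmetric $2\times 2$ matrix, and the conjectured limit follows by plugging $C \approx 0.878273$ into the rational expression from the update. A plain-Python sketch (the grid resolution is chosen for speed, not precision):

```python
from math import sqrt

def lam_max_2x2(a, b, d):
    # Largest eigenvalue of the symmetric matrix [[a, b], [b, d]].
    return (a + d) / 2 + sqrt(((a - d) / 2) ** 2 + b ** 2)

def m1_eig(c0, c1):
    # The 2x2 matrix M_1 from the question.
    a = c0 - c0 ** 2
    b = sqrt(1 - c0 ** 2) / sqrt(2)
    d = c1 * c0 + (c1 - c0) / 2
    return lam_max_2x2(a, b, d)

# Brute-force a_1 = max over (c0, c1) in [0, 1]^2 on a grid.
steps = 500
a1 = max(m1_eig(i / steps, j / steps)
         for i in range(steps + 1) for j in range(steps + 1))
# a1 should land near the analytic value 1.161835...

# Conjectured limit, using the limiting value C from the update.
C = 0.878273
limit = (4 * C ** 4 - C ** 2 + 1) / (4 * C ** 2 - 1)
# limit should land near a_100 ~ 1.250875
```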
matplotlib image to base64 without saving I am creating a wordcloud image with this code. wordcloud = WordCloud( background_color='white', random_state=30, width=1500, height=1200, font_path = font_path, prefer_horizontal=1) wordcloud.generate_from_frequencies(frequencies=d) I show the image with matplotlib like this: plt.imshow(wordcloud) plt.axis('off') plt.show() I am using this as part of a web app. I want to convert this image to base64 and store as a string as a value in a dictionary key for a specific instance. I see a lot of posts about how to convert images to base64 but it looks like they involve saving the figure locally before encoding. How do I do this without saving anywhere so I can just go from image to string? This code looks kind of like what I want. import base64 from PIL import Image from io import BytesIO with open("image.jpg", "rb") as image_file: data = base64.b64encode(image_file.read()) im = Image.open(BytesIO(base64.b64decode(data))) im.save('image1.png', 'PNG') If I just did this, would this accomplish my task? data = base64.b64encode(wordcloud) If I just did this, would this accomplish my task? data = base64.b64encode(wordcloud) No. You need to "save" the image, get that bytestream, and encode that to base64. You don't have to save the image to an actual file; you can actually use a buffer. w = WordCloud().generate('Test') buffer = io.BytesIO() w.to_image().save(buffer, 'png') b64 = base64.b64encode(buffer.getvalue()) And to convert that back and display the image img = Image.open(io.BytesIO(base64.b64decode(b64))) plt.imshow(img) plt.show()
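The buffer-based approach works because b64encode only needs bytes; the plotting and image libraries are incidental. Stripped to its essentials, with raw bytes standing in for the PNG payload that PIL's save() would write (so the sketch runs without wordcloud or PIL installed):

```python
import base64
import io

# Stand-in for the PNG bytes that w.to_image().save(buffer, 'png') would write.
buffer = io.BytesIO()
buffer.write(b"\x89PNG...fake image payload...")

# image bytes -> base64 string, no file on disk
b64_string = base64.b64encode(buffer.getvalue()).decode("ascii")

# base64 string -> image bytes, ready for Image.open(BytesIO(...))
restored = base64.b64decode(b64_string)
```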
time based query on hive table My table structure is like this: hive> describe user_data2; OK received_at string message_id string type string version string timestamp_user string user_id string sent_at string channel string time_log string And I am targetting this fields, hive> select received_at, time_log, user_id from user_data2 limit 5; OK 2016-01-08T12:27:05.565Z<PHONE_NUMBER> 836871 2016-01-08T12:27:12.634Z<PHONE_NUMBER> 800798 2016-01-08T12:27:12.632Z<PHONE_NUMBER> 795799 2016-01-08T12:27:13.694Z<PHONE_NUMBER> 820359 2016-01-08T12:27:15.821Z<PHONE_NUMBER> 294141 On this I want to make time based query. like Avg Hours Active; per month; period: last 12 months % of users active between 0-1h / day % of users active between 1-2h / day % of users active between 2-4h / day % of users active between 4-8h / day % of users active between 8-12h / day % of users active between 12-16h / day % of users active between 16-24h / day I got some clue of using Datetime UDF - https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions But I am not aware how to use this function. I tried: select unix_timestamp(received_at) from user_data2 limit 5; OK NULL NULL NULL NULL NULL Which gives none. I appreciate if someone give example of using time UDF and getting records between two hours or some other time frame. RTFM - the documentation for unix_timestamp() is explicit, it expects an ODBC format without time zone while your data uses ISO8601 format with TZ >>> you must first reformat the STRING with some regexp_replace(), then convert to a "local TZ" TIMESTAMP with from_utc_timestamp(), and only then you will be able to play with unix_timestamp() as a LONG. By the way, a "UNIX timestamp" will not be very helpful, since you want to group records based on wall clock time... @SamsonScharfrichter: Thanks, but in time_log I have timestamp, using that can I play what I want? Assuming your local TZ is Rome... 
select from_utc_timestamp(regexp_replace(regexp_replace(RECEIVED_AT, 'T',' '), '\\..*$',''), 'Europe/Rome') as TS_RECEIVED, cast(from_unixtime(cast(TIME_LOG as int)) as timestamp) as TS_LOGGED from WTF ; +------------------------+------------------------+--+ | ts_received | ts_logged | +------------------------+------------------------+--+ | 2016-01-08 13:27:05.0 | 2016-01-08 13:27:05.0 | +------------------------+------------------------+--+ Thanks, my issue is still not solved, but this is very helpful to me
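Outside Hive, these two conversions are ordinary date arithmetic, which makes it easy to verify the expected values before writing the HQL; a Python sketch using the first RECEIVED_AT value from the sample output (UTC is assumed, matching the trailing Z):

```python
from datetime import datetime, timezone

received_at = "2016-01-08T12:27:05.565Z"  # first RECEIVED_AT from the sample rows

# Equivalent of the regexp_replace + conversion: parse ISO 8601, mark as UTC.
dt = datetime.strptime(received_at,
                       "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

unix_ts = int(dt.timestamp())  # the LONG that unix_timestamp() should yield
hour_bucket = dt.hour          # wall-clock hour for the 0-1h / 1-2h ... buckets
```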
How to calculate touched position of ImageView in ScrollView, considering zoom factor? I have an image that should be clickable in certain areas, say 26 areas in that image. What's the best way to do that? I made a second image of the same size, where each clickable area has a specific color on a white background, so when the image is clicked I can get the pixel color of the second image at that position and work out which area was clicked. The problem is, I need to show the picture in a ScrollView, and the image is resized to fit the screen width. I know I can calculate the clicked offset using event.getRawY() + sv.getScrollY();, but how do I calculate the zoom factor? Since both images are the same size, I need to calculate the clicked position considering the zoom factor to find the precise position of that pixel in the second image. You can use the event.getY()-method to retrieve the position relative to your View. What about the zoom factor? The original image is 352700, but the displayed image is stretched to fit the screen width; I need to get the touched position and scale it to a 352700 basis. About the zoom factor.. That is just some math to calculate the correct positions (x,y) depending on the zoom factor :) I think this snippet will help you. 
@Override public boolean onTouch(View v, MotionEvent event) { if (v.getId() == R.id.imageView1 || v.getId() == R.id.image_scrollview) { switch (event.getAction()) { case MotionEvent.ACTION_DOWN: startX = (int) event.getX(); startY = (int) event.getY(); break; case MotionEvent.ACTION_UP: int cordinate_Y = (int) ((event.getY() + v.getScrollY()) / diffH); int cordinate_X = (int) ((event.getX() + v.getScrollX()) / diffW); if ((Math.abs((int) event.getX() - startX) > 10) || (Math.abs((int) event.getY() - startY) > 10)) { break; } int pixelColor = bitmap.getPixel(cordinate_X, cordinate_Y); if (pixelColor != -1) { showDetails(texts[map.get(pixelColor)], images[map.get(pixelColor)], titles[map.get(pixelColor)]); } break; } } return false; } Thanks, it actually helped. I had figured it out long before, though. In your case it may be somewhat easier to use a WebView instead. You can use <map> and <area> for creating clickable areas and send callbacks to your application using JavaScript. Any comments on how to do that? Can you please share example code for the same, using WebView and HTML.
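The coordinate math in the snippet is independent of Android: add the scroll offset to the touch position, then divide by the display-to-bitmap scale factor (the snippet's diffW/diffH). A small sketch with made-up sizes:

```python
def view_to_bitmap(touch_x, touch_y, scroll_x, scroll_y, scale_w, scale_h):
    # scale_w = displayed_width / bitmap_width (the snippet's diffW);
    # likewise scale_h for the height. Offsets come from getScrollX/Y.
    bx = int((touch_x + scroll_x) / scale_w)
    by = int((touch_y + scroll_y) / scale_h)
    return bx, by

# A 1400-px-wide bitmap displayed at 700 px: scale factor 0.5 on both axes.
scale = 700 / 1400
x, y = view_to_bitmap(100, 50, 0, 300, scale, scale)
# (x, y) -> (200, 700): sample the second image's pixel color at this point
```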
How to change default path of wx-config in makefile? I am working on linux server in which wx-widget 2.8 is installed in /usr/local/. I have installed wxWidgets 2.9.5 in my build directory /home/jacob/. How to change path of wx-config to my build dir in Makefile? The default path of wx-config --prefix is /usr/local why do you need that? Just call the appropriate wx-config... I am using MPI and i dont have sudo rights on server. I installed wxWdgets in my build dir to make it available for all nodes that is OK. how do you use wx-config? You seem to be asking about how to change the path returned by /usr/local/bin/wx-config but this is a wrong way to approach the problem. If you installed wxWidgets under /home/jacob, you must now have /home/jacob/bin/wx-config -- simply use this one instead, either explicitly, or by putting /home/jacob/bin before /usr/local/bin in your PATH.
Is "that" a conjunction or relative pronoun in the following sentence? It comes as no surprise that Taiwan has the highest density of convenience stores in the world. Is "that" a conjunction or relative pronoun in this sentence? What do you think and why?
Django form: fields defined by query result, single view to update multiple objects simultaneously I have the following models in a Django 1.8 project: class MeditationType(models.Model): """ Stores user's meditation types and goals """ creation_date = models.DateTimeField(auto_now_add=True) modify_date = models.DateTimeField(auto_now=True) user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) meditation_name = models.CharField(max_length=30) # Meditation goals per weekday, in minutes goal_sun = models.PositiveIntegerField(default=0, validators=[MaxValueValidator(1440)]) goal_mon = models.PositiveIntegerField(default=0, validators=[MaxValueValidator(1440)]) goal_tue = models.PositiveIntegerField(default=0, validators=[MaxValueValidator(1440)]) goal_wed = models.PositiveIntegerField(default=0, validators=[MaxValueValidator(1440)]) goal_thu = models.PositiveIntegerField(default=0, validators=[MaxValueValidator(1440)]) goal_fri = models.PositiveIntegerField(default=0, validators=[MaxValueValidator(1440)]) goal_sat = models.PositiveIntegerField(default=0, validators=[MaxValueValidator(1440)]) def get_absolute_url(self): return reverse('meditation_types_update', kwargs={'pk': self.pk}) class MeditationLog(models.Model): """ Stores user's meditation logs (journal entries) """ creation_date = models.DateTimeField(auto_now_add=True) modify_date = models.DateTimeField(auto_now=True) meditation_date = models.DateTimeField() user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) meditation_type = models.ForeignKey(MeditationType, on_delete=models.CASCADE) # Amount of time (in minutes) this meditation type was practiced on this date meditation_minutes = models.PositiveIntegerField(default=0, validators=[MaxValueValidator(1440)]) def get_absolute_url(self): return reverse('meditation_types_update', kwargs={'pk': self.pk}) I would like to build a form with the following fields: form fields Please note: The form will be used to create 
and update entries. The 3 types of meditation used in the example (Focused-attention, Mindfulness and Loving kindness) were created by the user (MeditationType model), so these fields will vary. For some users there may be a single meditation type; for other users there may be 10 different meditation types. So the form fields have to be defined dynamically according to each user's meditation types. When this form is submitted/posted, we have to save multiple instances of the MeditationLog object (one per meditation type). What would be the simplest and most effective way to do that in Django 1.8? Also, if you can think of a better way to build the models (one that will make the form building easier), please let me know. Thanks in advance. Will there be any other MeditationType objects? Am I correct in assuming you have a ManyToMany field from your User to MeditationType using through='MeditationLog'? Sardorbek: Thanks for your reply. Yes, each user will create her own meditation types. Maybe 1, 3 or 10 different types per user. Rob: Thanks for your reply. I don't have this ManyToMany field. I'm using Django's default user model. One user can have multiple meditation types, but each meditation type only belongs to a single user. Could you please elaborate on your point? @BrunoF: Never mind. I was looking at it wrong. You don't actually need a ManyToMany here. Sorry if I caused any confusion. After some more research, I finally found something to work with: https://jacobian.org/writing/dynamic-form-generation/ (Jacob is one of the original founders of Django) http://www.dougalmatthews.com/2009/Dec/16/nicer-dynamic-forms-in-django/ https://code.djangoproject.com/wiki/CookBookNewFormsDynamicFields http://agiliq.com/blog/2008/10/dynamic-forms-with-django/ http://blog.p3infotech.in/2013/how-to-create-dynamic-forms-in-django/ If you have a similar problem, the links above may help. Thanks to those who replied.
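For reference, a minimal sketch of the dynamic-field pattern those links describe. This is illustrative, not code from the question: the helper names and the "type_<pk>" field-naming convention are assumptions; the 0–1440 bounds mirror the MaxValueValidator on the models.

```python
def make_log_form(meditation_types):
    # Build a Form subclass with one IntegerField per MeditationType.
    # Django is imported lazily here so the pure helpers below stay
    # importable on their own; in a real project this would be a
    # normal top-level import.
    from django import forms
    attrs = {}
    for mt in meditation_types:
        attrs[field_key(mt.pk)] = forms.IntegerField(
            label=mt.meditation_name, min_value=0, max_value=1440, initial=0)
    return type("MeditationLogForm", (forms.Form,), attrs)

def field_key(pk):
    # Encode the MeditationType primary key in the field name so the
    # view can map submitted values back to the right type.
    return "type_%d" % pk

def logs_from_cleaned_data(cleaned_data):
    # Turn form.cleaned_data into (meditation_type_pk, minutes) pairs;
    # the view then creates one MeditationLog per pair on POST.
    return [(int(k.split("_", 1)[1]), v)
            for k, v in sorted(cleaned_data.items())
            if k.startswith("type_")]
```

In the view, `make_log_form(MeditationType.objects.filter(user=request.user))` would produce a per-user form class, and `logs_from_cleaned_data(form.cleaned_data)` the rows to save.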
Using Devise to sign up, sign in and sign out with an iOS app I have a very basic Rails app that uses Devise for user sign up, sign in, etc. I'd like to expose Devise to an iOS app. There's a lot of threads that are high level explanations, but I'm very new to Rails. I'm basically posting a request to users/sign_up like so: AFHTTPClient *client = [AFHTTPClient clientWithBaseURL:[NSURL URLWithString:@"http://localhost:5000"]]; //[client defaultValueForHeader:@"Accept"]; [client setParameterEncoding:AFJSONParameterEncoding]; NSDictionary *params = [NSDictionary dictionaryWithObjects:[NSArray arrayWithObjects: <EMAIL_ADDRESS> @"testtest", @"testtest", nil] forKeys:[NSArray arrayWithObjects: @"email", @"password", @"password_confirmation", nil]]; [client registerHTTPOperationClass:[AFHTTPRequestOperation class]]; NSURLRequest *request = [client requestWithMethod:@"GET" path:@"/users/sign_up" parameters:params]; AFHTTPRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) { NSLog(@"%@", JSON); NSLog(@"%@", [response allHeaderFields]); } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) { }]; [operation setAcceptableContentTypes:[NSSet setWithObjects:@"application/json", @"text/json", @"text/javascript", @"text/plain",@"text/html", nil]]; [operation start]; This seems to be going through on the iOS side as I receive a 200 OK: 2013-01-21 13:38:00.412 Starfish[35773:c07] I restkit.network:RKHTTPRequestOperation.m:152 GET 'http://localhost:5000/signup?password=testtest&password_confirmation=testtest&email=first.last%40gmail.com' 2013-01-21 13:38:01.073 Starfish[35773:c07] (null) 2013-01-21 13:38:01.073 Starfish[35773:7207] I restkit.network:RKHTTPRequestOperation.m:179 GET 'http://localhost:5000/signup?password=testtest&password_confirmation=testtest&email=first.last%40gmail.com' (200 OK) [0.6611 s] 2013-01-21 13:38:01.074 
Starfish[35773:c07] { "Cache-Control" = "max-age=0, private, must-revalidate"; Connection = close; "Content-Type" = "text/html; charset=utf-8"; Etag = "\"5dfe88700e41a015a8f524850446e327\""; Server = "thin 1.5.0 codename Knife"; "Set-Cookie" = "_geo-photo_session=BAh7B0kiD3Nlc3Npb25faWQGOgZFRkkiJTEyZTJjMjFkZjAwZGZlMWIzMTJmNTg5ODNkNjM5OGI0BjsAVEkiEF9jc3JmX3Rva2VuBjsARkkiMTN4eUZGUzVhb1pIdXc1QlJJUnBxSFVrZmYvUDdpeGtaeklxNGhOejFOeEk9BjsARg%3D%3D--5130edc939facbe7004570141c069d40f5f7feca; path=/; HttpOnly"; "X-Request-Id" = a2480798afed4e7aca17f3f1e2d3d44d; "X-Runtime" = "0.219984"; "X-UA-Compatible" = "IE=Edge"; } And on the Rails side: Started GET "/signup?password=[FILTERED]&password_confirmation=[FILTERED]&email=first.last%40gmail.com" for <IP_ADDRESS> at 2013-01-21 13:38:00 -0800 13:43:15 web.1 | Processing by Devise::RegistrationsController#new as */* 13:43:15 web.1 | Parameters: {"password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]"<EMAIL_ADDRESS>13:43:15 web.1 | Rendered devise/shared/_links.erb (2.1ms) 13:43:15 web.1 | Rendered devise/registrations/new.html.erb within layouts/application (23.7ms) 13:43:15 web.1 | Completed 200 OK in 142ms (Views: 64.9ms | ActiveRecord: 6.9ms) Out-of-the-box signup: <h2>Sign up</h2> <%= form_for(resource, :as => resource_name, :url => registration_path(resource_name)) do |f| %> <%= devise_error_messages! %> <div><%= f.label :email %><br /> <%= f.email_field :email, :autofocus => true %></div> <div><%= f.label :password %><br /> <%= f.password_field :password %></div> <div><%= f.label :password_confirmation %><br /> <%= f.password_field :password_confirmation %></div> <div><%= f.submit "Sign up" %></div> <% end %> <%= render "devise/shared/links" %> The problem, which I don't know Rails enough to solve, is that I don't want any views rendered. I simply want to pass the sign up or login credentials as JSON objects and receive a validation that the user was created or signed in. 
What I have above isn't creating the user because it's attempting to render a view. Any help would be greatly appreciated. Thanks. Can you post your signup method from Rails? There's 2 parts to this, the iOS app requesting JSON, and Rails responding to that. That's where I'm stuck. It's the out-of-the-box model/view/controller that Devise installs. I'm talking about your actual controller code (not view). Not everyone memorizes what Devise gives you out-of-the-box. Chances are you just need to edit a respond_to. I'm doing a similar thing, and have the sign up part of it working so far via CURL using JSON (which should translate to iOS easily), and also via the browser. Using a modification of this answer from NeverBe: I modified config/initializers/devise.rb and added the line: config.navigational_formats = ["*/*", "/", :html, :json] I modified config/routes.rb and replace this line: devise_for :users with this line: devise_for :users, :controllers => {:registrations => "registrations"} I created a new file app/controllers/registrations_controller.rb class RegistrationsController < Devise::RegistrationsController def create respond_to do |format| format.html { super } format.json { if params[:user][:email].nil? render :status => 400, :json => {:message => 'User request must contain the user email.'} return elsif params[:user][:password].nil? render :status => 400, :json => {:message => 'User request must contain the user password.'} return end if params[:user][:email] duplicate_user = User.find_by_email(params[:user][:email]) unless duplicate_user.nil? render :status => 409, :json => {:message => 'Duplicate email. 
A user already exists with that email address.'} return end end @user = User.create(params[:user]) if @user.save render :json => {:user => @user} else render :status => 400, :json => {:message => @user.errors.full_messages} end } end end end Additional note: You still get error handling even without all that custom error handling code, but I wanted to return status code errors (400 and 409) instead of 200 with custom error messages which is what I was getting from the default devise behavior.
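The JSON contract this customized controller expects can be prototyped from any HTTP client before wiring up AFNetworking on iOS. A hedged Python sketch: the /users.json path follows Devise's default registration route, and the host and helper names are assumptions, not from the thread.

```python
import json
from urllib import request

def signup_payload(email, password):
    # Devise's RegistrationsController reads params[:user][:email] and
    # friends, so the credentials must be nested under a "user" key.
    return {"user": {"email": email,
                     "password": password,
                     "password_confirmation": password}}

def post_signup(base_url, email, password):
    # POST (not GET) so the create action runs; the .json suffix makes
    # the format.json branch of the respond_to block handle the request.
    req = request.Request(
        base_url + "/users.json",
        data=json.dumps(signup_payload(email, password)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST")
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Note this also highlights the bug in the original iOS snippet: it issued a GET with credentials in the query string, which hits Registrations#new (the sign-up page) rather than #create.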
How can I convert linq object to asp.net object..? Code: Domain ob = new Domain(); [HttpPost] public ActionResult Create(Domain ob) { try { //// TODO: Add insert logic here FirstTestDataContext db = new FirstTestDataContext(); tblSample ord = new tblSample(); ord = ob; db.tblSamples.InsertOnSubmit(ord); db.SubmitChanges(); return RedirectToAction("Index"); } catch { return View(); } } Here I am getting an error like this Cannot implicitly convert type 'mvcInsertLinqForms.Models.Domain' to 'mvcInsertLinqForms.tblSample' What do you expect the ord = ob; statement to do? And why are you creating a new tblSample for no reason? The assignment ord = ob has left side of type "tblSample" while right side of type Domain. Are they assignable types? @Jon The expectation is probably obvious, the statement should convert one object into the other. It is the language/tech which is too constrained to understand and autoimplement such expectation :) What are you trying to achieve? Right now you have said that an apple is an orange which obviously isn't the case. You cannot assign ord to ob because they are not of the same type. You seem to be attempting to map the view model (ob) to your domain model (tblSample). You could do this by setting the corresponding properties of the domain model: [HttpPost] public ActionResult Create(Domain ob) { try { tblSample ord = new tblSample(); // now map the domain model properties from the // view model properties which is passed as action // argument: ord.Prop1 = ob.Prop1; ord.Prop2 = ob.Prop2; ... FirstTestDataContext db = new FirstTestDataContext(); db.tblSamples.InsertOnSubmit(ord); db.SubmitChanges(); return RedirectToAction("Index"); } catch { return View(); } } and to avoid doing this mapping manually you could use a tool such as AutoMapper which could help you mapping back and forth between your view models and your domain models. 
[HttpPost] public ActionResult Create(Domain model) // or (FormCollection form), then use form.Get("phone") { //--- return View(); }
I get a signing error when I add an image to Assets in the SwiftUI Tutorial I get a signing error when I add an image to Assets in the SwiftUI Tutorial I'm working on the SwiftUI Tutorial, but when I add an image or JSON file to Assets, I get the following Sign Error. CodeSign /Users/k~~/Library/Developer/Xcode/DerivedData/SwiftUIPractice-ctdnfoihimrfqeejvxgheakcfzfv/Build/Products/Debug-iphonesimulator/SwiftUIPractice.app (in target 'SwiftUIPractice' from project 'SwiftUIPractice') cd /Users/~~/project/SwiftUIPractice export CODESIGN_ALLOCATE\=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/codesign_allocate Signing Identity: "-" /usr/bin/codesign --force --sign - --entitlements /Users/~~/Library/Developer/Xcode/DerivedData/SwiftUIPractice-ctdnfoihimrfqeejvxgheakcfzfv/Build/Intermediates.noindex/SwiftUIPractice.build/Debug-iphonesimulator/SwiftUIPractice.build/SwiftUIPractice.app.xcent --timestamp\=none --generate-entitlement-der /Users/~~/Library/Developer/Xcode/DerivedData/SwiftUIPractice-ctdnfoihimrfqeejvxgheakcfzfv/Build/Products/Debug-iphonesimulator/SwiftUIPractice.app /Users/~~/Library/Developer/Xcode/DerivedData/SwiftUIPractice-ctdnfoihimrfqeejvxgheakcfzfv/Build/Products/Debug-iphonesimulator/SwiftUIPractice.app: replacing existing signature /Users/~~/Library/Developer/Xcode/DerivedData/SwiftUIPractice-ctdnfoihimrfqeejvxgheakcfzfv/Build/Products/Debug-iphonesimulator/SwiftUIPractice.app: code object is not signed at all In subcomponent: /Users/~~/Library/Developer/Xcode/DerivedData/SwiftUIPractice-ctdnfoihimrfqeejvxgheakcfzfv/Build/Products/Debug-iphonesimulator/SwiftUIPractice.app/Assets.car Command CodeSign failed with a nonzero exit code Welcome to StackOverflow! I would delete the Derived Data and "Clean BuildFolder" under the "Product" menu in Xcode. Instructions for deleting the derived data are all over the web. see if that fixes things. Thank you Comments! 
I have tried what you suggested, but the problem was not solved...
PHP_SELF not equating I'm trying to code a sidebar for my website. For this, I store pairs of values corresponding to the links in a file, say names.txt: Home-index.php Coding-coding.php and my php file will then take these values and turn them into links on the side, i.e. <a href = "index.php">Home</a> <a href = "coding.php">Coding</a> And so forth. Now, I want to make it so that, for the page that i'm on, instead of displaying a hyperlink, it will display a bold version of the text. i.e. if i was on index.php, it outputs <strong>Home</strong> <a href = "coding.php">Coding</a> To do this, i use explode to split my strings from the file, and then PHP_SELF to detect whether it is the same page. On top of this, since the script will be in different pages, I put that into a file (side.php) and then used include('side.php'). Inside side.php there is the sidebar function, which takes the page directory (via PHP_SELF) as an argument. Everything else works fine, except for the bold-text. Help? P.S. When i say PHP_SELF I mean $_SERVER['PHP_SELF']. Thanks! (My code, or part thereof...) In side.php: <?php function sidebar($input){ # echo ($input); $sitenav = fopen("file","r"); $amts=0; do{ $sidebarvals[$amts]=fgets($sitenav); $amts=$amts+1; }while ($sidebarvals[$amts-1]!="EOF"); $amts--; for ($i=0;$i<$amts;$i++){ $parts = explode("-",$sidebarvals[$i]); if ("/" . $parts[1]!=$input){ echo "<a href=" . $parts[1] .">" . $parts[0] . "</a>"; }else{ echo "<strong>" . $parts[0] . "</strong>"; } echo "<br>"; } fclose($sitenav); } ?> and on each page: include ('side.php'); sidebar($_SERVER['PHP_SELF']); You should post the code you are using. The problem is possibly the backslash at the start of $_SERVER['PHP_SELF']. Well, i tried to do a concatenation of the string using "/" . $namefromfile I'll put up my code then. Have you done var_dump()'s of the relevant variables? As i Know PHP_SELF returns file name with full file path as index.php will return /index.php. 
But it looks like you are trying to match index.php against the returned value /index.php. So, to correct this, you must get the file name from PHP_SELF, which can be achieved this way: $url_array = explode('/', $_SERVER['PHP_SELF']); $last = count($url_array) - 1; $file = $url_array[$last]; Well, thanks. I am trying to do it by appending "/" to the other part of the string. Why doesn't that work?
Symfony form does not process my search filters I have created a search form in Symfony that lets me search for a term in the title or content of my Media entity, or search by Category. The form is processed via the MediasController, my findWithSearch function is created in the MediaRepository, and the form is displayed in the Medias/index.html.twig template. In the debugger the query seems to work fine, but the filtering is not applied on the page. Could someone tell me why? Repository of the project: https://github.com/Eosia/blog-symf/blob/main/src/Controller/MediasController.php This is because on line 56 you do: $donnees = $this->getDoctrine()->getRepository(Media::class)->findBy([],['id' => 'desc']); which is what your page uses to display the results, and you never use the data filtered by the form on line 46.
Folding of wave-vector in Band Theory of Metals In the Kronig-Penney model in the Band Theory of Metals, we derive the energy levels as function of wave vector as shown in the figure. But my professor showed that we represent the levels folded from $-\pi$ to $\pi$. This is also represented on the Wolfram Demonstration on the model. Why do we do it? What's the point I am missing? The point you are missing is Bloch's theorem - since the potential is periodic, you can fold everything back into the first Brillouin zone. @JonCuster But it doesn't physically mean anything, does it? My prof's words made it look like it does. So, it's just a way of representing the picture? Garyp's answer below is most of it. While the two representations are equivalent, the reduced scheme will be used for almost everything. In addition, the extended scheme does not make clear which transitions so not require any momentum changes - only the reduced form does. Get used to using and seeing the reduced form. The extended form, a teaching tool, has pretty much done its job at this point. My answer to a related question might help. The short version is that due to the periodic nature of the potential, a wave vector $k$ is identically the same as the wave vector $k+2\pi/a$. There are at least two ways of showing bands on a graph. One is the extended zone scheme which you have shown above. The other is the reduced zone scheme where we take advantage of the periodicity of $k$, and show all the vectors in the interval $(-\pi/a, \pi/a)$. The bands appear folded back in this scheme. More details in the cited answer.
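The equivalence of $k$ and $k+2\pi/a$ that these answers invoke can be written out explicitly; it is the standard Bloch-theorem step, sketched here for completeness:

```latex
% Bloch's theorem for a potential with period a:
\psi_k(x) = e^{ikx}\,u_k(x), \qquad u_k(x+a) = u_k(x).
% Shift the label by a reciprocal-lattice vector G = 2\pi/a:
\psi_{k+2\pi/a}(x) = e^{i(k+2\pi/a)x}\,u_{k+2\pi/a}(x)
                   = e^{ikx}\left[e^{2\pi i x/a}\,u_{k+2\pi/a}(x)\right].
% The bracketed factor is again periodic with period a, since
% e^{2\pi i (x+a)/a} = e^{2\pi i x/a} e^{2\pi i} = e^{2\pi i x/a},
% so it is a legitimate periodic part for wave vector k.
```

Hence $k$ and $k+2\pi/a$ label the same family of states, and every band can be drawn ("folded") inside the first Brillouin zone $(-\pi/a, \pi/a]$; the extended and reduced zone schemes are just two drawings of the same physics.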
AVD Android 30 stuck in boot loop after remount I'm trying to enable write access to an Android emulator in order to push a file into /system, but I hit a boot loop after running adb remount and adb reboot. I'm using the AVD image for Android 30, arch x86-64. Possible duplicate of https://android.stackexchange.com/questions/232234/why-adb-remount-retruns-remount-failed-on-android-emulator Nah, this post seems to be a different issue. I solved this problem by disabling verification on the Android emulator. Command: adb shell avbctl disable-verification
Blank crs attributes in fiona: Can't get proj4 parameters from OpenFileGeodatabase I’m having trouble obtaining crs values for geodatabase feature classes in fiona (version 1.7.5). Specifically, the crs attribute returns empty for many of the geodatabases I try to read. This is not an isolated incident -- it occurs frequently with OpenFileGeodatabase types. For example... $ fio info --indent 2 /Users/felix/Data/VilasTransportation.gdb /usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:205: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__ return f(*args, **kwds) { "driver": "OpenFileGDB", "schema": { "properties": { "Name": "str:100", "NameAlt": "str:100", "Type": "str:20", "Ownership": "str:30", "Municipality": "str:20", "Source": "str:30", "Comment": "str:250", "EditDate": "datetime", "Editor": "str:5", "SHAPE_Length": "float" }, "geometry": "MultiLineString" }, "crs": "", "crs_wkt": "PROJCS[\"NAD_1983_HARN_WISCRS_Vilas_County_Feet\",GEOGCS[\"GCS_North_American_1983_HARN\",DATUM[\"NAD83_High_Accuracy_Reference_Network\",SPHEROID[\"GRS_1980\",6378137.0,298.257222101]],PRIMEM[\"Greenwich\",0.0],UNIT[\"Degree\",0.0174532925199433],AUTHORITY[\"EPSG\",\"4269\"]],PROJECTION[\"Lambert_Conformal_Conic_1SP\"],PARAMETER[\"False_Easting\",441000.0],PARAMETER[\"False_Northing\",165147.666],PARAMETER[\"Central_Meridian\",-89.48888888888889],PARAMETER[\"Scale_Factor\",1.0000730142],PARAMETER[\"Latitude_Of_Origin\",46.07784409055556],UNIT[\"Foot_US\",0.3048006096012192],AUTHORITY[\"Esri\",\"103463\"]]", "bounds": [ <PHONE_NUMBER>191318, 83800.42979672551, <PHONE_NUMBER>880489, <PHONE_NUMBER>2439904 ], "name": "TransportationLines", "count": 3946 } As demonstrated below, GDAL has no problem generating a proj4 string for the same dataset. 
$ gdalsrsinfo /Users/felix/Data/VilasTransportation.gdb PROJ.4 : '+proj=lcc +lat_1=46.07784409055556 +lat_0=46.07784409055556 +lon_0=-89.48888888888889 +k_0=1.0000730142 +x_0=134417.0688341377 +y_0=50337.10927101854 +ellps=GRS80 +units=us-ft +no_defs ' OGC WKT : PROJCS["NAD_1983_HARN_WISCRS_Vilas_County_Feet", GEOGCS["GCS_North_American_1983_HARN", DATUM["NAD83_High_Accuracy_Reference_Network", SPHEROID["GRS_1980",6378137.0,298.257222101]], PRIMEM["Greenwich",0.0], UNIT["Degree",0.0174532925199433]], PROJECTION["Lambert_Conformal_Conic_1SP"], PARAMETER["False_Easting",441000.0], PARAMETER["False_Northing",165147.666], PARAMETER["Central_Meridian",-89.48888888888889], PARAMETER["Scale_Factor",1.0000730142], PARAMETER["Latitude_Of_Origin",46.07784409055556], UNIT["Foot_US",0.3048006096012192], AUTHORITY["ESRI","103463"]] I need proj4 parameters to use in a pyproj.Proj() instance, so simply grabbing the fiona.crs_wkt value doesn’t solve my issue. Similarly, I want to avoid conversions between proj4 and wkt formats with ogr/osr, or parsing the wkt for parameters. Why does fiona miss the crs info, but GDAL doesn't? Is it possible to obtain proj4 parameters in fiona with the same consistency as GDAL command line? The gdb example can be downloaded here (transportation layer) --> http://vcgis.co.vilas.wi.us/vcom/Download.html Afraid I don't have a GDB to test this on, so your mileage may vary :-/ Also using Fiona 1.7.5, and pyproj <IP_ADDRESS> I tried saving a layer from QGIS using a totally bespoke CRS (Orthographic, so the proj4 string would not correspond to an EPSG code, and would force Fiona to try to get the proj4 parameters instead) I saved it as a geoJSON file a shapefile In the case of geoJSON, source.crs gave me {'init': 'epsg:4326'} which is usable as a pyproj initialisation string (even if the projection is wrong) In the case of the shapefile, it did break things out into a dictionary of proj4 parameters... 
{'y_0': 0, 'proj': 'ortho', 'ellps': 'WGS84', 'lon_0': 105.423, 'x_0': 0, 'no_defs': True, 'units': 'm', 'lat_0': -6.102} So although Fiona's .crs should split out the proj4 parameters, this may not work with all drivers. I see your CRS authority is ESRI, so it's possible the driver isn't able to look it up and gives up (but the shapefile driver doesn't). If you can find an equivalent EPSG code which matches your projection, that might help. This issue on the Fiona Github looks as if it might be related. EDIT I see the same thing with your dataset. If you need to do this from Python and gdalsrsinfo works, you could always use subprocess to run gdalsrsinfo against your gdb file, and extract the line with the proj.4 string from the output. Hacky, I admit, but it would work.. import subprocess subprocess.check_output(["gdalsrsinfo", "-o", "proj4", "/path/to.gdb"]) Thanks Steven. The two examples you give would work great in pyproj.Proj(). However, in this case, source.crs is giving me {}, which would throw a RuntimeError: projection not named if I tried to initialize with this NoneType value. Fiona 1.7.5 tends to open my geodatabases with the "OpenFileGDB" driver. In most cases, retrieving the .crs attribute gives me an acceptable output to plug into pyproj, but other times, such as in this case, I get nothing. I edited this post to include the gdb example and a more detailed view of the resulting collection when opening the gdb with fiona. thanks for the clarifications/edits. I see the same thing. You could try answers to this question to see if there's a way to get from the OGC WKT representation (which you do get) to a proj4 string
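Building on the subprocess workaround above, here is a small sketch that shells out to gdalsrsinfo and strips the quoting from its output. The parsing helper assumes the single-quoted `'+proj=...'` form shown in the question's gdalsrsinfo output; gdalsrsinfo must be on PATH.

```python
import subprocess

def proj4_from_gdb(path):
    # Shell out to the GDAL CLI; "-o proj4" prints just the proj4
    # string, wrapped in single quotes.
    out = subprocess.check_output(["gdalsrsinfo", "-o", "proj4", path])
    return parse_proj4(out.decode("utf-8"))

def parse_proj4(text):
    # Find the first line that, after removing surrounding quotes and
    # whitespace, looks like a proj4 definition.
    for raw in text.splitlines():
        line = raw.strip().strip("'").strip()
        if line.startswith("+proj"):
            return line
    raise ValueError("no proj4 string in gdalsrsinfo output")
```

The returned string can be fed straight into `pyproj.Proj(...)`, sidestepping Fiona's empty `crs` for OpenFileGDB sources.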
Menu indentation after full screen Using the JW player, streaming flv via rtmp. I am also using the snell.swf skin. When I click on full screen, then return to normal I am seeing the menu for the currently playing item indent. Any ideas what is causing this? Also sometimes seeing menu items highlighted which are not the currently playing ones. Any ideas here as well?
Web part using 3rd party permissions Is it possible to filter (i.e. hide) web parts from the WebPartAdder based on 3rd party user permissions? We have built-in user permissions in our application that define which web parts a user has permission to add to a page (permissions obtained using web services). In previous versions of SharePoint, we created a custom web part picker that allowed our users to select and add web parts to a web part page based on those permissions. In SP 2010 we'd like to use the SharePoint ribbon to expose these web parts, but we haven't found a way to filter web parts based on these permission sets. I've found virtually no usable documentation on MSDN describing the WebPartAdder control. You can specify permissions on web parts in the web part gallery, but this doesn't remove them from the list of available web parts. The unauthorized user will get an access denied message when attempting to add them to a page.
How do I use Nuclio to read events from Azure Service Bus? I'd like to directly read messages from Azure Service Bus with Nuclio using the Python runtime. Does anyone have experience with this? I'm assuming I need to create the ServiceBusClient inside of an init_context function, but the examples from Azure show that occurring within its own context manager, like so: conn_str = <CRED> queue_name = <NAME> with ServiceBusClient.from_connection_string(conn_str) as client: with client.get_queue_receiver(queue_name, max_wait_time=30) as receiver: for msg in receiver: print(str(msg)) I'm assuming best practice would be to create the ServiceBusClient inside of init_context, then call setattr(context.user_data, 'my_servicebus', my_servicebus.from_connection_string()) Anyone have experience with this? I suggest you explore connecting the Service Bus to Azure Event Hub. Although your idea of initiating the connection in init_context is a good start, you will have the complexity of managing the state and configuration of the Service Bus connection. Nuclio includes an Azure Event Hub trigger. Not only will it simplify your deployment, but you will also take advantage of Nuclio's autoscaling and recovery options. I found this article that seems to guide you through integrating Service Bus with Event Hub: https://techcommunity.microsoft.com/t5/azure-paas-blog/how-to-send-messages-to-or-receive-from-service-bus-event-hub/ba-p/2136244
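For the init_context route the question proposes, a hedged sketch of what that could look like. The queue name, environment variable, and handler body are illustrative assumptions, and azure-servicebus would need to be installed in the function image; this is a pattern sketch, not Nuclio's documented Service Bus support.

```python
import os

def make_client(conn_str):
    # Lazy import: azure-servicebus is assumed present in the deployed
    # function image; the module still loads where it is absent.
    from azure.servicebus import ServiceBusClient
    return ServiceBusClient.from_connection_string(conn_str)

def init_context(context, client_factory=make_client):
    # Called once per worker: create the client a single time and stash
    # it on user_data instead of reconnecting on every event.
    client = client_factory(os.environ.get("SB_CONN_STR", ""))
    setattr(context.user_data, "sb_client", client)

def handler(context, event):
    client = context.user_data.sb_client
    # Drain whatever is currently queued; "my-queue" is a placeholder.
    with client.get_queue_receiver("my-queue", max_wait_time=5) as receiver:
        return [str(msg) for msg in receiver]
```

The trade-off the answer raises still applies: you own the client's lifecycle (reconnects, credential rotation), which the Event Hub trigger would handle for you.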
What are Aggregation, Delegation and Association in Hibernate? In an interview, the interviewer asked me this question. I know association; we achieve associations using mappings. Please help me understand Aggregation and Delegation in Hibernate. I consider this a very interesting question. Aggregation and Composition are very confusing terms, especially in Java (compared to C++). Hibernate adds more meaning to the ownership part of these terms. Also, Composition should probably be used in place of Delegation in the question. Aggregation in SQL usually refers to (aggregate) functions like count(), sum() or avg(), and HQL offers a subset of those functions. Read about it in the documentation. Aggregation in Hibernate (or the mentioned "Composition") refers to the concept of Embeddable Types; read all about it here. Instead of providing a dedicated table for aggregates, you can embed them into the parent type.
Matlab: subtract every value in a row from every other, for every row I have data like this: [... 0...0...0...0 6...0...0...0 8...5...2...0 9...8...3...1 0...0...0...0 Within each row I would like to subtract every value individually from every other in that row. So that I get a new matrix which shows all the differences like this: [... null 0 3...6...3 1...6...8...5...7...2 null I hope you see what I mean. I don't want to subtract 0 from anything (O is null for me-if you have a way to replace 0 with null thats fine). Or at least, if this has to be done, I want those results to be discarded. But I DO want there to be some placeholder when a row is entirely made of 0s. The ordering of the resulting subtractions doesn't matter, except that the row order overall should be maintained. You can use PDIST to calculate the distances: data =[0 0 0 0 6 0 0 0 8 5 2 0 9 8 3 1 0 0 0 0]; nRows = size(data,1); %# for speed, preassign 'out' out = cell(nRows,1); for r = 1:nRows pts = data(r,data(r,:)>0); %# this assumes valid entries are >0 switch length(pts), case 0, out{r} = []; %# empty for 'null' case 1, out{r}=0; %# zero if only one valid number otherwise out{r}=pdist(pts'); %'# calculate all differences end end %# You access elements of a cell array with curly brackets >> out{3} ans = 3 6 3 @jonas. once again a great answer. What is name of the output matrix in that? @Tom: I called it out, for lack of creativity. @jonas. But in this case, it seems I am unable to display them all at once by writing 'out' or xlswrite('data.xls',out)? @Tom: Yes, because out is a cell array, which I use to store arrays of variable length. @Jonas. Yes I see. So then how will I go on to operate on all of the 'out' rows at once? Can I create a new matrix with NaN or something in the null columns?? @Tom: What do you want to do? In many cases, cellfun can be useful. 
Note that if you have N non-zeros in a row, you get N(N-1)/2 differences, so if you want to transform out into a full array, it might get rather large. @jonas. It's because I then need to convert all my values into new values (each individually from a look-up table) and then add them up per row. I don't mind if the full array is large, because my initial array is never more than 13 columns now. @Tom: assuming the LUT allows direct indexing, you can convert the values like this: convertedSum = cellfun(@(x)sum(LUT(x)),out). For every element in the cell array (i.e. every row), this will read the values from the look-up table LUT, and sum the result. Note that if LUT is an array and not a function, you'll need to do something about the cell elements containing 0, since you can't index with 0 unless they're logical 0's. Note that also, you could perform the looking-up and summing directly, without needing to produce the cell array.
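The same per-row pairwise-difference logic can be sketched in Python for comparison (pure standard library; it mirrors the pdist-per-row approach above, with None standing in for the "null" all-zero rows):

```python
from itertools import combinations

def row_differences(rows):
    # For each row: drop zeros, then emit |a - b| for every unordered
    # pair of the remaining values. None marks an all-zero ("null") row,
    # and a single surviving value yields [0], as in the MATLAB answer.
    out = []
    for row in rows:
        vals = [v for v in row if v != 0]
        if not vals:
            out.append(None)
        elif len(vals) == 1:
            out.append([0])
        else:
            out.append([abs(a - b) for a, b in combinations(vals, 2)])
    return out
```

With the data from the question this yields [None, [0], [3, 6, 3], [1, 6, 8, 5, 7, 2], None], matching the pdist output ordering.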
common-pile/stackexchange_filtered
Referencing A New Variable From A Different Method I'm new to C# and am making a text based game. In one method, I ask the player to answer an open ended question, and I store that response in a local variable called response. I would like to access the contents of the string response later on in a different method. I, of course, run into problems here because response is a local variable and cannot be accessed outside of its initial method. I understand the idea of creating a new class and passing the string as an argument to be accessed in any method I choose, but how do I do this with a yet-to-be-filled-in string? Since the string response gets filled in by the player, how can I access the contents of that string in a different method? I might just be missing something obvious here, but I appreciate the help. If you want to see a variable between methods of the same class you declare a global class variable @Steve maybe you mean a member class variable? there is no global (class or not) variable in c# I've actually solved it, but thank you both for your help! Your need is to have "something" that has a wider scope than a single method. You have a few options: You can return the response value and let the caller handle it the way it needs it, e.g. string response = ""; response = MyMethod(); Console.WriteLine(response); You can pass the variable by reference, e.g. void MyMethod (ref string response) and call it with something like string response = ""; MyMethod(ref response); Console.WriteLine(response); This is not the idiomatic way to do it; ref parameters are seldom used, usually only in particular contexts requiring them. You can store the response value in a class member variable, so the caller will be able to access it; see lidqy's answer as an example. As a general rule of thumb, the smaller the scope of a variable, the easier it is to handle in code: it's easier to follow the code and understand when and how the variable is set.
So, if you just need to handle the value in the calling method (or bubble the value a few methods up), I'd go with #1. You could store the response data in a location that both methods can access. They need to share the same scope. For example, if both methods belong to the same class, the natural way to let both methods access this data would be to declare a field in that class. In this little example below response is declared as a field, not a local variable. A (private) field's scope covers all methods and other members of a class. Method SetUserResponse assigns a value to the field response and method ShowUserResponse displays that value. class Program { private string response; static void Main(){ var prog = new Program(); prog.SetUserResponse(); prog.ShowUserResponse(); } private void SetUserResponse() { Console.WriteLine("Enter something you think about"); response = Console.ReadLine(); } private void ShowUserResponse() { Console.WriteLine("You said you would think about: {0}", response); } } Very straightforward and helpful response, thank you so much!
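The field-versus-local distinction is not C#-specific; a quick Python contrast of the same idea (class and method names invented for illustration): a value stored on the instance outlives the method that set it, while a plain local variable does not.

```python
class Game:
    def __init__(self):
        self.response = None        # instance attribute: visible to every method

    def set_user_response(self, text):
        local_copy = text           # local variable: gone when the method returns
        self.response = local_copy  # stored on the instance, so it survives

    def show_user_response(self):
        # reads the attribute set earlier by set_user_response
        return f"You said: {self.response}"

game = Game()
game.set_user_response("hello")
# game.show_user_response() -> "You said: hello"
```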
How to define a Hibernate OneToOne unidirectional mapping in Parent where the FK column is in the child? Background: I'm in the process of upgrading to Hibernate 6.1.4 (from 5.3.x) and have run into problems with OneToOne bidirectional mappings (which appears to be a bug, and I've written up). I'm looking for a workaround that doesn't require changing the schema and am considering making the mapping unidirectional but have run into an issue. Here's a simplified version of the starting point: @Entity @Table(name = "PARENT_T") public class Parent { @Id @Column(name = "PARENT_PK") private Integer id; @OneToOne(targetEntity = Child.class, mappedBy = "parent", cascade = CascadeType.ALL, fetch = FetchType.LAZY) private Child child; // getters and setters... } @Entity @Table(name = "PARENT_T") public class Child { @Id @Column(name = "CHILD_PK") private Integer id; @OneToOne(targetEntity = Parent.class, fetch = FetchType.EAGER) @JoinColumn(name = "PARENT_FK", nullable = false) private Parent parent; // getters and setters... } So, I would like to remove the Child-to-Parent mapping, and just map the attribute: @Column(name = "PARENT_FK", nullable = false) private Long parentFK; However, this means that the mappedBy = "parent" in the Parent is no longer valid. I can add a JoinColumn annotation, but per the docs, the JoinColumn name is in the source entity (here, Parent): The name of the foreign key column. The table in which it is found depends upon the context. If the join is for a OneToOne or ManyToOne mapping using a foreign key mapping strategy, the foreign key column is in the table of the source entity or embeddable. I saw a suggestion to use a OneToMany mapping, since: If the join is for a unidirectional OneToMany mapping using a foreign key mapping strategy, the foreign key is in the table of the target entity. ... and then treat it as a One-to-One. However, this seems like a kludge. 
So: Is there a way to map a OneToOne relationship where the foreign key column lies with the target entity (here: Child), rather than the source (here: Parent)? Conceptually, I'm just looking for a table equivalent of mappedBy in the annotation. Something like: @OneToOne(targetEntity = Child.class, mappedByColumn = "PARENT_FK", cascade = CascadeType.ALL, fetch = FetchType.LAZY) Thanks! Try this mapping here: @MapsId @OneToOne(cascade = CascadeType.ALL, fetch = FetchType.LAZY) @JoinColumn(name = "PARENT_PK", referencedColumnName = "PARENT_FK") private Child child; This doesn't seem to work. If the value is not present, rather than getting null, I get: org.hibernate.ObjectNotFoundException: No row with the given identifier exists: [com.....Child#80000042]. I had thought it was working if a value was present, but since the PK value it is reporting, 80000042, is that of the Parent, I'm guessing it was just coincidence (since the parent and child entities just happen to have the same PK). @christian Try removing the @JoinColumn part Without the JoinColumn, there would be no way to know that PARENT_FK is the column to use in Child. PARENT_FK is not the id of Child, @christian (and why can't I put the mention at the beginning of my comment? It keeps removing it). If PARENT_FK is not the primary key, I hope you made that column at least unique, or otherwise you might run into other trouble. Anyway, try removing @MapsId and use @JoinColumn(name = "PARENT_PK", referencedColumnName = "PARENT_FK", insertable = false, updatable = false) It doesn't work because the generated query contains a join condition PARENT_T.PARENT_FK = CHILD_T.PARENT_FK (I suppose mconner has a typo in the name of the Child table).
How do you split string to display as a list? So I am using textareas in a form to take a list. <tr> <td><label asp-for="Ingredients"></label></td> <td><textarea asp-for="Ingredients" ></textarea></td> <td><span asp-validation-for="Ingredients"></span></td> </tr> In my viewModel I have [Required] public string Ingredients { get; set; } In the details view it is displayed using <div style="margin-left:15px"><u>Instructions:</u> <br /> @Model.Instructions</div> <br /> It ends up displaying like: item 1, item 2, item 3 I want it to display like: item1 item2 item3 So how do I split the ingredients at the commas, drop the commas, and put each item on its own line? Possible duplicate of C# Split A String By Another String Possible duplicate of Split and join C# string Ingredients.Replace(", ", "<br />"); maybe? Wouldn't that get escaped in the Razor view @RufusL ? Possible duplicate of C# Splitting Strings? @rene Very possibly; it's out of my area. Hence the "maybe?" :) Easiest is probably a @foreach in your view over @Model.Instructions.Split(','); and then output the value and a <br /> Note, it really doesn't matter if you're pulling from a database or not - a string is a string is a string. Given that, on any string you can use String.Split to break it into an array based on some delimiter such as ',' in your case. Then use String.Join to make that array back into a string, joining with either markup (such as "<br />") or System.Environment.NewLine for more of a plaintext representation.
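The underlying string manipulation is language-agnostic; here is a quick Python sketch of the two approaches mentioned above (split-and-join versus a direct replace). Variable names are illustrative only, not part of the original code.

```python
ingredients = "item 1, item 2, item 3"

# Approach 1: split on the delimiter, trim each piece, then join with a line break
items = [part.strip() for part in ingredients.split(",")]
as_lines = "\n".join(items)

# Approach 2: replace the delimiter directly (the "<br />" trick from the comments)
as_html = ingredients.replace(", ", "<br />")

# as_lines == "item 1\nitem 2\nitem 3"
# as_html  == "item 1<br />item 2<br />item 3"
```

In the Razor view the same choice applies: split and loop, or join with markup (bearing in mind the escaping caveat raised in the comments).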
WebGL - Is it possible to pass gl into an img.onload function? In my WebGL program that I'm writing in ES6, I have a class called Texture. At first, it draws the texture as 1 red pixel. Once the image loads, I want to replace the red pixel with an image. I'm a bit stuck, as I'm not sure what the best approach for this is, and my current attempt throws an error and doesn't work: Uncaught TypeError: Failed to execute 'texImage2D' on 'WebGLRenderingContext': No function was found that matched the signature provided. My best guess is that this happens because JavaScript is asynchronous, so once the image loads, gl is gone, or maybe I'm wrong about this. Help understanding this and/or a solution would be greatly appreciated. export default class Texture { constructor(gl, src) { // create texture then bind it this.texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, this.texture); // texture settings for the texture gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR); // create it as a red pixel first so it can be quickly drawn then replaced with the actual image gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array([255, 0, 0, 255])); // activate the texture gl.activeTexture(gl.TEXTURE0); // load the image this.img = new Image(); this.img.src = src; // replace the red pixel with the texture once it is loaded this.img.onload = function() { gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, this.img); } } } I did try this: this.img.onload = function(gl) { gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, this.img); } But it throws this error: Uncaught TypeError: t.texImage2D is not a function is this helpful?
https://stackoverflow.com/questions/49187705/what-scope-is-this-of-a-node-js-object-when-it-gets-called-at-the-top-level/49187855#49187855 @gman this is actually informative about how it actually works, thanks! You are creating a new scope inside the function. The easiest way to prevent it in ES6 is an arrow function: this.img.onload = () => { gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, this.img); } This is perfect, I'll accept your answer when I can and I'll read into arrow functions. The keyword this inside your onload function is not the same this context you expect. One way is to use an arrow function as in Sebastian Speitel's answer. You can also save the reference into a self variable, which is a common pattern: var self = this; this.img.onload = function() { gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, self.img); } Any function can access a variable from outer scope; that's why self is accessible inside this.img.onload. I just tested this and it also works. Thanks for the explanation; the arrow function is my preferred solution for this, though it's definitely useful to know this information as I'm in the process of learning ES6. Yes, go with ES6 if you can. Arrow functions don't have their own this; this is tied to the context they're created in, which is useful in many situations - like yours for example :)
Use list-Element in XSLT So below you can see my given XML. I matched the template and I'm already in the Student-node (xsl:template match="Class/Student"): <Class> <Heading>This is a sentence.</Heading> <Student>Alex</Student> <Student>Emilia</Student> <Student>John</Student> </Class> Now I need to get a list out of all Students and what I want to get should look like this: <ul> <li>Alex</li> <li>Emilia</li> <li>John</li> </ul> I think I have a mistake in the way I am thinking, because my XSLT looks like this at the moment: <xsl:template match="Class/Student"> <ul> <xsl:for-each select="../Student"> <li> <xsl:apply-templates/> </li> </xsl:for-each> </ul> </xsl:template> But what I actually get is: <ul> <li>Alex</li> <li>Emilia</li> <li>John</li> <ul> <ul> <li>Alex</li> <li>Emilia</li> <li>John</li> <ul> <ul> <li>Alex</li> <li>Emilia</li> <li>John</li> <ul> I think the problem is the for-each I use but I have no idea what else I should do in this case. Welcome to Stack Overflow. Excellent job making your first question clearly illustrate your issue. Thanks for your positive feedback :) As you have already made the step to use template matching with the template match="Class/Student" I would suggest to stick with that approach and simply write two templates, one for the Class elements, the other for the Student elements <xsl:template match="Class"> <ul> <xsl:apply-templates select="Student"/> </ul> </xsl:template> <xsl:template match="Student"> <li> <xsl:apply-templates/> </li> </xsl:template> For more complex cases this results in cleaner and more modular code. Where my answer focused on explaining why your code wasn't working, Martin's answer shows you the ideal solution. When writing XSLT, it is best to leverage pattern matching as much as possible as Martin does here. (+1) Notice in particular the simplicity and independence of the two templates. Loops are ok, but this declarative representation is elegant. 
I tried both solutions and I have to admit that I prefer this one, because it fits better into my stylesheet. Thank you both for these answers! You want one ul per Class, not per Student, so change <xsl:template match="Class/Student"> to <xsl:template match="Class"> Then change <xsl:for-each select="../Student"> to <xsl:for-each select="Student"> to get one li per Student child element of the Class current node. Of course you are right. It was actually my bad, but thank you so much, it worked!
How to load a html file's content into javascript variable in django? I'd like to insert the contents of abc.html after a div when some event fires. I tried var content = {% include "path/to/abc.html" %}; $(".myDiv").click(function() { $("#anotherDiv").insertAfter(content); }); However, it seems {% include %} fails because of some sort of parsing error. (abc.html is multi-line) abc.html contains django filters/tags. Any thoughts? @dan-klasson: please enlighten me. How? You are missing one double-quote in {% include "path/to/abc.html %}. Should be {% include "path/to/abc.html" %}. Anyway I don't think it will work out; think of other methods then, like asifrc's answer. Maybe try replacing \n in the string as well. It will not work that way. You should make a JS function that reads the file after getting the exact path, before showing the content. This previews imported file content: http://catherinetenajeros.blogspot.com/2013/05/preview-file-before-importing.html. The difference is, that one shows the content when it is imported, and yours shows the content based on the exact path. If you can understand the flow of the code, you can get the solution. One alternative approach would be to put the contents into a hidden div, and just get your click function to unhide it. Possible to detach it and attach it somewhere else? .detach() and .insertAfter() would seem to do the job. If abc.html is pure html, then you could do a simple ajax request to load the file, e.g. $(".myDiv").click(function() { $.get('path/to/abc.html', {}, function(content) { $(content).insertAfter("#anotherDiv"); }); }); Although if you're worried about load time, you could load it earlier into a variable. Let me know if the code makes sense or if you have any questions :) Ah, yeah, I tried it; unfortunately the HTML contains Django tags (not pure HTML). I'll edit my question accordingly
deleting existing files from temp directory in Azure We are using the following to write a file to the temp directory in an Azure webapp for almost a year: string tempName = Path.GetTempFileName(); using (var tempFile = new FileStream(tempName, FileMode.Create)) { await tempMemory.CopyToAsync(tempFile); } We encounter an IOException because the file already exists in the temp directory. Is there any way of deleting the files from that temp directory other than from code? We do not want to deploy a new code version currently, but just to somehow delete the files from the temp directory. We tried using the Azure webapp console from the portal but it didn't delete them. Have you tried Kudu? Its console is available through https://yourappname.scm.azurewebsites.net. Replace yourappname with your Web App's name Restart the WebApp; it will clear the temp directory. Thanks. Worked!!
Please validate my idea to promote the site. I asked this question. After seeing Mr Weiner's tweet about the question on reddit, I asked him to try asking it here to compare quality and response times. Then I remember Jeff saying in a podcast or a blog post that a valid practice would be to ask on behalf of the person, and then present them with the answer, and that this is a great way to promote the network. So that's what I did. Let me know if it's not appropriate for this community. My personal opinion: It is an interesting question, but there are two problems in the current formulation: I don't think that it is reasonable to assume that all mathematicians know what Gerrymandering is, so a short explanation would have been appropriate. It is not clear how the perimeter of an electoral district should be defined, so that should be part of the question. Interesting, I actually didn't know: http://en.wikipedia.org/wiki/Gerrymandering
How to get data by matching only by date from datetime field in php and mysql Value stored under database field datetime: 27-12-2013 11:37:00 I want to get the data by using only the date (27-12-2013), because in the datepicker I am only selecting the date to get the data of that particular date. Is it possible? If yes, please give me a little code to understand. TIA This is the code I am using. include("config.php"); $date = $_POST['date']; // date format comes like 27-12-2013 $result = mysql_query("SELECT * FROM reports where datetime = '$date'"); // actual format is 27-12-2013 11:37:00 can you write your php call? show codes to us check mysql date() function in your query convert your date to dd-mm-yyyy format and then search @IsmailAltunören I just added code please check date_time LIKE '%$date%' can we get from this? "I have a field in database called: 27-12-2013 11:37:00". This is the worst thing I've read today :-( then your day is worse than that.. Use the MySQL DATE function. SELECT whatever FROM tableName WHERE DATE(dateTimeColumn) = dateGoesHere. More specifically, given the code sample you added: $result = mysql_query("SELECT * FROM reports where DATE(datetime) = '$date'"); DATE(dateTimeColumn): what is the field name I have to use here? date_time LIKE '%$date%' can we get from this? It's working fine this way, like a search feature: date_time LIKE '%$date%'. Anyway, thanks for the time. That implies to me that your datetime column is not actually a datetime, but a string. echo date("d/m/Y", strtotime("2014-12-07 23:36:00"));
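The DATE() trick can be demonstrated with SQLite (whose date() function serves the same purpose for this comparison). Note SQLite wants ISO yyyy-mm-dd strings, so this sketch stores the timestamp that way; in the real app, the dd-mm-yyyy value from the datepicker would need converting first. The parameterized query also avoids the SQL injection risk of interpolating $_POST directly, as the original snippet does. Table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany(
    "INSERT INTO reports (created_at) VALUES (?)",
    [("2013-12-27 11:37:00",), ("2013-12-27 18:05:00",), ("2013-12-28 09:00:00",)],
)

# Match on the date part only, ignoring the time of day
rows = conn.execute(
    "SELECT * FROM reports WHERE date(created_at) = ?", ("2013-12-27",)
).fetchall()
# rows contains the two entries from 27 December, not the one from the 28th
```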
Instantiate and load a block programmatically in twig template I created a simple block that shows some text grabbed from an external source, and it renders as expected when I place it from the back-end (Structure->Block Layout). But I want to be able to place it via code only, inside a twig template from the theme directory. I thought I would use {{ drupal_block('id defined in Block.php') }} from the Twig tweak module, but it never works unless the block is first placed from the back-end, which assigns it a machine name; then I can display it anywhere using that machine name in drupal_block() instead of the id I defined in its source code. So how can I instantiate it programmatically first and then place it in the twig template I want? During my search I came across Plugin Derivatives; I don't know if it is the solution I want, or how to use it. I am sure there are other ways to solve this problem, but here is how I would do it. Create your block, then go to admin/structure/block and place your block under the "Disabled" list. You can then implement hook_preprocess_node in your .theme file. Theme file: /** * Implements hook_preprocess_node() */ function mytheme_preprocess_node(&$variables) { $block = \Drupal\block\Entity\Block::load('block_id'); if ($block) { $variables['my_custom_block'] = \Drupal::entityTypeManager()->getViewBuilder('block')->view($block); } } In your template twig file: {% if my_custom_block %} {{ my_custom_block }} {% endif %} You just have to make sure that your block_id is correct. I like the approach, though I never saw a 'disabled' section in Drupal 8. Unfortunately it doesn't work... If I disable the block it disappears, whether using drupal_block() or preprocess_node(). Your twig code is missing a % at the opening if block and there is an extra space at the closing block.
How to create dynamic router for nested items in a sidebar with multilevel dropdown? This is my code currently, but the path is not dynamic: every time there is a new section in the dropdown I have to hard-code the path. Is there any way that I can write a function to auto-generate those paths? (The first file generates the multilevel sidebar, the second file is the router, and the third file is the content of the pages shown when you click on a section in the sidebar.) import React from "react"; import * as AiIcons from "react-icons/ai"; import * as IoIcons from "react-icons/io"; export const SidebarData = [ { label: "Vertical App", path: "/verticalapp", icon: <AiIcons.AiFillHome />, id: 1, branches: [ { label: "Weather App", path: "/verticalapp/weatherapp", icon: <IoIcons.IoIosPaper />, id: 2, branches: [], }, { label: "Occupancy App", path: "/verticalapp/occupancyapp", icon: <IoIcons.IoIosPaper />, id: 3, branches: [], }, ], }, { label: "Company", path: "/company", icon: <IoIcons.IoIosPaper />, id: 4, branches: [ { label: "Reseller", path: "/company/reseller", icon: <IoIcons.IoIosPaper />, id: 5, branches: [ { label: "Client 1", path: "/company/reseller/client1", icon: <IoIcons.IoIosPaper />, id: 6, branches: [ { label: "Client 11", path: "/company/reseller/client11", icon: <IoIcons.IoIosPaper />, id: 7, branches: [], }, ], }, { label: "Client 2", path: "/company/reseller/client2", icon: <IoIcons.IoIosPaper />, id: 8, branches: [ { label: "Client 21", path: "/company/reseller/client21", icon: <IoIcons.IoIosPaper />, id: 9, branches: [], }, { label: "Client 22", path: "/company/reseller/client22", icon: <IoIcons.IoIosPaper />, id: 10, branches: [], }, ], }, ], }, { label: "Client", path: "/company/client", icon: <IoIcons.IoIosPaper />, id: 11, branches: [ { label: "Client 3", path: "/company/client/client3", icon: <IoIcons.IoIosPaper />, id: 12, branches: [], }, { label: "Client 4", path: "/company/client/client4", icon: <IoIcons.IoIosPaper />, id: 13, branches:
[], }, ], }, { label: "Consumer", path: "/company/consumer", icon: <IoIcons.IoIosPaper />, id: 14, branches: [], }, ], }, { label: "Contact Us", path: "/contactus", icon: <IoIcons.IoMdHelpCircle />, id: 20, branches: [], }, ]; import './App.css'; import Sidebar from './components/Sidebar'; import { BrowserRouter as Router, Switch, Route } from 'react-router-dom'; import VerticalApp from './pages/VerticalApp'; //import { Company, Reseller, Client, Consumer, Client1, Client2 } from './pages/Company'; import {Company} from './pages/Company'; function App() { return ( <Router> <Sidebar /> <Switch> <Route path='/verticalapp' exact component={VerticalApp} /> <Route path='/company' exact component={Company} /> {/* <Route path='/company/:label' component={Company} /> */} {/* <Route path='/company/reseller' exact component={Reseller} /> <Route path='/company/client' exact component={Client} /> <Route path='/company/consumer' exact component={Consumer} /> <Route path='/company/reseller/client1' exact component={Client1} /> <Route path='/company/reseller/client2' exact component={Client2} /> */} </Switch> </Router> ); } export default App; import React from "react"; export const Company = () => { return ( <div className="reports"> <h1>Company</h1> </div> ); }; export const Reseller = () => { return ( <div className="reports"> <h1>Company/Reseller</h1> </div> ); }; export const Client = () => { return ( <div className="reports"> <h1>Company/Client</h1> </div> ); }; export const Consumer = () => { return ( <div className="reports"> <h1>Company/Consumer</h1> </div> ); }; export const Client1 = () => { return ( <div className="reports"> <h1>Company/Reseller/Client1</h1> </div> ); }; export const Client2 = () => { return ( <div className="reports"> <h1>Company/Reseller/Client2</h1> </div> ); }; You could add route props to the sidebarData and recursively build an array of routes. 
Example: const sidebarData = [ { label: "Vertical App", path: "/verticalapp", icon: <AiIcons.AiFillHome />, id: 1, routeProps: { render: () => <h1>Vertical App</h1> }, branches: [ { label: "Weather App", path: "/verticalapp/weatherapp", icon: <IoIcons.IoIosPaper />, id: 2, branches: [] }, { label: "Occupancy App", path: "/verticalapp/occupancyapp", icon: <IoIcons.IoIosPaper />, id: 3, branches: [] } ] }, { label: "Company", path: "/company", icon: <IoIcons.IoIosPaper />, id: 4, routeProps: { component: Company }, branches: [ { label: "Reseller", path: "/company/reseller", icon: <IoIcons.IoIosPaper />, id: 5, routeProps: { component: Reseller }, branches: [ { label: "Client 1", path: "/company/reseller/client1", icon: <IoIcons.IoIosPaper />, id: 6, routeProps: { component: Client1 }, branches: [ { label: "Client 11", path: "/company/reseller/client11", icon: <IoIcons.IoIosPaper />, id: 7, branches: [] } ] }, { label: "Client 2", path: "/company/reseller/client2", icon: <IoIcons.IoIosPaper />, id: 8, routeProps: { component: Client2 }, branches: [ { label: "Client 21", path: "/company/reseller/client21", icon: <IoIcons.IoIosPaper />, id: 9, branches: [] }, { label: "Client 22", path: "/company/reseller/client22", icon: <IoIcons.IoIosPaper />, id: 10, branches: [] } ] } ] }, { label: "Client", path: "/company/client", icon: <IoIcons.IoIosPaper />, id: 11, routeProps: { component: Client }, branches: [ { label: "Client 3", path: "/company/client/client3", icon: <IoIcons.IoIosPaper />, id: 12, branches: [] }, { label: "Client 4", path: "/company/client/client4", icon: <IoIcons.IoIosPaper />, id: 13, branches: [] } ] }, { label: "Consumer", path: "/company/consumer", icon: <IoIcons.IoIosPaper />, id: 14, routeProps: { component: Consumer }, branches: [] } ] }, { label: "Contact Us", path: "/contactus", icon: <IoIcons.IoMdHelpCircle />, id: 20, routeProps: { render: () => <h1>About Us</h1> }, branches: [] } ]; Compute the routes const routes = (data) => data // include 
// only data items with route props .filter(({ routeProps }) => !!routeProps) // recursively get "nested" routes, flatten to single array .flatMap(({ branches, id, path, routeProps }) => [ ...routes(branches), { id, path, ...routeProps } ]); const getRoutes = (data) => routes(data) // sort more specific routes before less specific routes .sort((a, b) => b.path.localeCompare(a.path)) // map to Route component .map(({ id, path, ...routeProps }) => ( <Route key={id} path={path} {...routeProps} /> )); ... <Switch> {getRoutes(sidebarData)} </Switch> Thank you for answering! But then I will still need to add a path for every new branch and label; is there a way to not do that? @Yuki You need to declare the paths you want to match and render somewhere so they can be matched and render routed content. If this isn't what you are wanting to do then it's completely unclear what you are trying to do or expecting. Can you clarify more precisely the use case you are trying to capture?
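The core idea of the answer above — recursively flatten the tree, keep only nodes that declare route props, and sort longer (more specific) paths first — is independent of React. A compact Python rendering of the same traversal, with field names copied from the sidebar data and a trimmed sample tree for brevity:

```python
def flatten_routes(data):
    """Collect (path, id) pairs for every node carrying routeProps, children included."""
    routes = []
    for node in data:
        routes.extend(flatten_routes(node.get("branches", [])))
        if "routeProps" in node:
            routes.append((node["path"], node["id"]))
    # Sort descending so more specific paths come before their prefixes,
    # mirroring the localeCompare sort in the JavaScript version
    return sorted(routes, key=lambda r: r[0], reverse=True)

sidebar = [
    {"path": "/company", "id": 4, "routeProps": {}, "branches": [
        {"path": "/company/reseller", "id": 5, "routeProps": {}, "branches": []},
    ]},
    {"path": "/contactus", "id": 20, "routeProps": {}, "branches": []},
]

paths = [p for p, _ in flatten_routes(sidebar)]
# paths == ["/contactus", "/company/reseller", "/company"]
```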
Having an issue when playing Smash Ultimate in dock I would be playing Super Smash Bros Ultimate docked and during gameplay the TV screen goes a greyish-black color as if it disconnected, BUT the music is still playing, and going to the home page doesn't fix it. Please note this "bug" only happens while playing SSBU, as I've tested with other games, and it only happens when playing docked (TV mode). The TV screen doesn't show the "no signal" screen saver either, and I'm not using a monitor. I'm suspecting an issue with my game card, maybe. Do you have any friends with Smash Ultimate whose game card you could borrow to test it? Perhaps a rental service nearby to grab a copy of the game? I currently don't have any rental services nearby or friends with this game, although I've tested playing all the other games I have and cannot find an issue. I am currently testing whether or not it only happens during multiplayer, since I only play Smash on the dock when with friends. Also, whoever fixed the grammar on my original post: thanks.
git diff ignoring whitespace unless a space is deleted If I use git diff --ignore-space-change --ignore-all-space to get only relevant changes, I miss changes where a space between two words was completely deleted. Example echo "the space before this string is irrelevant" echo "foo bar are two words" changes to echo "the space before this string is irrelevant" echo "foobar are two words" I wouldn't see the change gluing foo and bar together, creating a new single word. I want to see those (sometimes really relevant) changes in the git diff output, something like git diff --ignore-space-change-unless-no-space-left Is that diff reported with --word-diff? (the output wouldn't be a regular patch, though) Just take out --ignore-all-space. $ cat f1 f2 echo "the space before this string is irrelevant" echo "foo bar are two words" echo "the space before this string is irrelevant" echo "foobar are two words" $ git diff --ignore-space-change f1 f2 diff --git a/f1 b/f2 index c9f73d1..fba75e4 100644 --- a/f1 +++ b/f2 @@ -1,2 +1,2 @@ echo "the space before this string is irrelevant" -echo "foo bar are two words" +echo "foobar are two words"
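The difference between the two flags can be mimicked outside git: --ignore-space-change roughly compares lines after collapsing runs of whitespace to a single space (so deleting a space entirely still registers), while --ignore-all-space strips whitespace altogether (so "foo bar" becoming "foobar" vanishes). A Python sketch of those two normalizations — an approximation of git's behavior, not its actual implementation:

```python
import re

def norm_space_change(line):
    # Collapse whitespace runs to one space: "foo  bar" matches "foo bar",
    # but "foobar" remains different — the deleted space is still visible
    return re.sub(r"\s+", " ", line.strip())

def norm_all_space(line):
    # Remove all whitespace: "foo bar" matches "foobar" — the change is hidden
    return re.sub(r"\s+", "", line)

old = 'echo "foo bar are two words"'
new = 'echo "foobar are two words"'

# --ignore-space-change-like comparison: still reports a difference
assert norm_space_change(old) != norm_space_change(new)
# --ignore-all-space-like comparison: the difference disappears
assert norm_all_space(old) == norm_all_space(new)
```

This is why dropping --ignore-all-space, as the answer suggests, is enough to surface the glued-together word.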
Why doesn't FileStream as an argument to Streamwriter write to text file? In the code included below, I am able to write the contents of the string 'fullname' to a text file in the specified directory when using the following statement: System.IO.File.WriteAllText(path, fullname); However, if I pass the string path to a FileStream object (with the arguments specified), and then pass that FileStream object as an argument to the StreamWriter object, the file is created, but no contents are written. First attempt: Comment out System.IO.File.WriteAllText(path, fullname); and use the three lines above it. This creates the file but no contents are written into the file. Second attempt: Un-comment the System.IO.File.WriteAllText(path, fullname); statement and comment out the three lines above it. This executes as desired. Here is the full block of code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; namespace FileInputOutput { class Program { static void Main(string[] args) { // Use the Split() method of the String Class string fullname = " Robert Gordon Orr "; fullname = fullname.Trim(); string[] splitNameArray = fullname.Split(' '); Console.WriteLine("First Name is: {0}", splitNameArray[0]); Console.WriteLine("Middle Name is: {0}", splitNameArray[1]); Console.WriteLine("Last Name is: {0}", splitNameArray[2]); Console.WriteLine("Full name is: {0}", fullname); string path = @"C:\Programming\C#\C# Practice Folder\Console Applications\FileInputOutput\textfile.txt"; FileStream fs = new FileStream(path, FileMode.Create, FileAccess.ReadWrite); StreamWriter toFile = new StreamWriter(fs); toFile.Write(fullname); //System.IO.File.WriteAllText(path, fullname); Console.ReadLine(); } } } You can use File.WriteAllText instead.. It is because you look at the file at the wrong time. The FileStream didn't flush its buffer yet. It didn't have any reason to do so; you didn't close the file.
Using the using statement is not optional here. Welcome! Questions usually seem better when you don't preface them with statements about your experience or your research. It would be better to begin with your actual question (everything after the exclamation point), and then include relevant quotes from your research and links to your sources at the end.

As others have said: streams must be flushed in .NET in order for them to write to disk. This can be done manually; however, I would simply change your code to have using statements on your streams:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace FileInputOutput
{
    class Program
    {
        static void Main(string[] args)
        {
            // Use the Split() method of the String Class
            string fullname = " Robert Gordon Orr ";
            fullname = fullname.Trim();
            string[] splitNameArray = fullname.Split(' ');

            Console.WriteLine("First Name is: {0}", splitNameArray[0]);
            Console.WriteLine("Middle Name is: {0}", splitNameArray[1]);
            Console.WriteLine("Last Name is: {0}", splitNameArray[2]);
            Console.WriteLine("Full name is: {0}", fullname);

            string path = @"C:\textfile.txt";
            using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.ReadWrite))
            {
                using (StreamWriter toFile = new StreamWriter(fs))
                {
                    toFile.Write(fullname);
                }
            }
            //System.IO.File.WriteAllText(path, fullname);

            Console.ReadLine();
        }
    }
}

Calling Dispose() on a stream (as using implicitly does) causes the stream to be flushed and closed at the end of the using block. Thank you to all. I am somewhat familiar with the 'using' statement from applying it in ADO database calls. Upon using the 'using' code block above, my file was successfully written. I will investigate this further to better understand the execution. Thanks again to all.
I think you are just forgetting to flush your file stream: fs.Flush(); This is needed because, according to MSDN, this is what makes the FileStream actually write the buffer to the file. Flush: Clears buffers for this stream and causes any buffered data to be written to the file. (Overrides Stream.Flush().) Regards.

From MSDN on StreamWriter: You must call Close to ensure that all data is correctly written out to the underlying stream. So the problem here is mainly that, since you don't actually close the StreamWriter, the data gets backed up but doesn't push to the file, even though the FileStream immediately created the file in its constructor. Never ever forget to close your stream, as failing to do so could lead to major problems down the line.
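The flush/close behavior discussed above is not specific to .NET; any buffered stream works this way. Here is a minimal Python sketch of the same effect (an illustration, not the C# API), showing that the file exists but stays empty until the buffer is flushed:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

f = open(path, "w")           # a buffered text stream, like StreamWriter
f.write("Robert Gordon Orr")  # lands in the in-memory buffer only
print(os.path.getsize(path))  # 0 -- the file was created but is still empty

f.flush()                     # push the buffer to disk, like fs.Flush()
print(os.path.getsize(path))  # 17 -- the bytes are on disk now

f.close()                     # closing also flushes, like Dispose()/using
```

The same reasoning explains both fixes in this thread: `using` blocks work because Dispose() flushes and closes, and File.WriteAllText works because it closes the file internally.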
Java OpenCV: HOGDescriptor.detectMultiScale parameters I'm still new to image/video processing and a part of our project includes detecting people in a video. For this I plan to use OpenCV with HOG and SVM. Can somebody describe what the parameters for the method HOGDescriptor.detectMultiScale(...) do? Specifically the hit threshold, padding, scale, final threshold, and use mean shift grouping. The javadoc of OpenCV doesn't really help. Thanks! Did you look at the doc? If you don't understand a specific parameter, ask it here. @Pradheep Yes, it doesn't include any description for the parameters. The javadoc I'm looking at is this: http://docs.opencv.org/java/2.4.2/org/opencv/objdetect/HOGDescriptor.html
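Since the javadoc is thin, it can help to reason about what one of these parameters controls. The scale parameter sets the step between levels of the image pyramid over which the detection window slides: each level shrinks the image by that factor until the window no longer fits. The sketch below is a rough pure-Python illustration of that idea only, not OpenCV's exact level computation (which also honors things like maxLevels and the grouping thresholds):

```python
def pyramid_scales(img_size, win_size, scale_step):
    """Scales at which a fixed detection window is slid over the image.
    Each level shrinks the image by scale_step until the window no
    longer fits -- this is what detectMultiScale's scale controls."""
    scales = []
    s = 1.0
    while img_size / s >= win_size:
        scales.append(round(s, 4))
        s *= scale_step
    return scales

# a 640-pixel-wide image and a 64-pixel-wide detection window
print(len(pyramid_scales(640, 64, 1.05)))  # 48 levels with a fine 1.05 step
print(len(pyramid_scales(640, 64, 1.2)))   # 13 levels: faster but coarser
```

The trade-off is the usual one: a smaller scale step finds people at more sizes but costs proportionally more detection passes.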
Access Hive table from HDInsight cluster I am using pyspark to access Hive inside my HDInsight cluster. When I query Hive directly it shows me all the databases, but when I query from Spark it just shows the default database. I believe Spark just queries its own catalog by default. The workaround I found was to use the Hive Warehouse Connector to connect to Hive from Spark. Is there any other way to do it? Code:

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL Hive integration example") \
    .config("hive.metastore.uris", "thrift://hn0-mytestua.abc.dxbx.internal.cloudapp.net:9083") \
    .config("spark.sql.warehouse.dir", '/hive/warehouse/external') \
    .enableHiveSupport() \
    .getOrCreate()

spark.sql("show databases").show()

If you don't want to specify Hive-related configuration in Spark code then you can simply copy the hive-site.xml file from the $HIVE_HOME/conf folder to the $SPARK_HOME/conf folder. If it's not possible to do the file copy then you can use the configurations below while creating the SparkSession or while launching Spark jobs to connect to Hive:

spark.sql.hive.metastore.jars = $HIVE_HOME/lib/*   // No need to specify this if it's already in CLASSPATH
spark.hadoop.hive.metastore.uris = thrift://<host>:9083
spark.sql.hive.metastore.version = <hive version>

No need to specify the metastore version if your Hive version matches the default version specified in the documentation. Azure HDInsight provides the HWC connector to integrate Hive with Spark. I don't have access to files; there are just properties in Ambari. Okay, then you can go ahead with the programmatic approach only. And what is that? But this is not working; it is by default pointing to the Spark catalog. How do I change it? Updated the answer, check with those configurations. I have added .config("spark.sql.hive.metastore.jars", '$HIVE_HOME/lib/*') and .config("spark.sql.warehouse.dir", '/hive/warehouse/external') to the configs; it still doesn't show Hive tables. You are using the spark-submit command and launching your Spark app, right?
By the way, what are the Hive and Spark versions? Hive <IP_ADDRESS>.1.5.19, Spark 2.4.4. Let us continue this discussion in chat.
Span is not aligning properly inside Div CSS <span> is only aligning horizontally but not vertically inside <div> CSS:

.upload-cont {
    cursor: pointer;
    margin-left: 130px;
    display: inline-block;
    border: 2px dashed #a8a8a8;
    max-width: 220px;
    max-height: 180px;
    min-width: 220px;
    min-height: 180px;
}
.add-text {
    display: block;
    font-size: 10px;
    font-weight: bold;
    color: #999;
    word-wrap: break-word;
    text-align: center;
    width: 100px;
    margin: auto;
    vertical-align: middle;
}

HTML:

<div class="upload-cont">
    <span class="add-text">Something</span>
</div>

What should I do to align the <span> vertically so that it is in the middle of the <div>? Check this jsfiddle: http://jsfiddle.net/xdYUs/1/

Try this: http://jsfiddle.net/xdYUs/2/ Use position:relative; on the container and position:absolute; on the span element. As I've seen, your container has a fixed width and height. You can use that by setting the top and left properties of the span element.

Try this...

.add-text {
    display: block;
    font-size: 10px;
    font-weight: bold;
    color: #999;
    text-align: center;
    width: 100px;
    margin: 40% auto;
}

jsFiddle Example

Greetings...

.add-text {
    display: block;
    font-size: 10px;
    font-weight: bold;
    color: #999;
    word-wrap: break-word;
    text-align: center;
    width: 100px;
    margin: auto;
    vertical-align: middle;
    line-height: 170px;
}

line-height => 180px (container) - 10px (font size) = 170px

In my actual code the content wraps onto multiple lines, so with this approach each line would get a height of 170px.
Sudden change of class from "matrix" to "integer" Apparently some matrices, when indexed in a certain way, stay matrices; others, indexed in the same way, become plain vectors. Example:

> test = matrix(1:10, nrow = 2)
> class(test)
[1] "matrix"
> class(test[,1:2])
[1] "matrix"
> test = matrix(1:10, nrow = 1)
> class(test)
[1] "matrix"
> class(test[,1:2])
[1] "integer"

I would like to know: Why is this a feature and not a bug? Is there a polished way to make it stay a matrix? (I mean, sure, I could store nrow(test) somewhere and then use matrix(test, nrow = somewhere), but I hope there's a neat one-liner for something that in principle should have worked automatically.)

P.S. Why do I find this an issue? Because if I then transpose the vector with t() (the vector that lost the matrix structure), it gives me a row vector; I wanted a column vector, as it should have been in theory. Also, rowMeans() doesn't work anymore with that vector.

Yes, when [ applied to a matrix gets a single row or column back, the extra dimensions are silently "dropped" and it becomes a vector. This can be disabled by using the drop = FALSE argument: class(test[, 1:2, drop = FALSE]). Regarding 1: it is widely considered a bug (or at least a mistake), which is why e.g. 'tibble' explicitly breaks inheritance rules to change this behaviour in the tibble class, which inherits from data.frame. This is also one of the top answers in the old FAQ about the biggest R "gotchas".
How to see proxy configuration on mac (not UI) to use corporate git How can I see the proxy configuration on my Mac from the command line (not the UI)? My friend has a problem using our corporate git: when he does git clone he gets a proxy error, but on my Mac I was able to clone the same repo. In the UI both configurations are the same: https://kb.netgear.com/25191/Configuring-TCP-IP-and-Proxy-Settings-on-Mac-OSX And maybe we are missing something else... We use in the command line export http_proxy=http://proxy.mycompany.corp:8080 and also https_proxy=http://proxy.mycompany.corp:8080 without success. Any idea?

Try curl -v google.co.uk. That will show you the proxy server and proxy port you are using. Then you can compare with your friend's output.
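If you'd rather inspect this programmatically than eyeball curl output, Python's standard library can report the proxies it sees; on macOS, urllib consults the system proxy configuration as well as the environment variables. A minimal sketch (run the same script on both Macs and diff the output):

```python
import os
import urllib.request

# Proxies urllib would use: *_proxy environment variables, or on macOS
# the system proxy configuration when no environment variables are set.
proxies = urllib.request.getproxies()
for scheme, url in sorted(proxies.items()):
    print(f"{scheme}: {url}")

# Environment variables alone (these are what git honors, along with
# any http.proxy setting in git config):
for var in ("http_proxy", "https_proxy", "no_proxy"):
    print(var, "=", os.environ.get(var, "<unset>"))
```

Also worth comparing on both machines: git config --get http.proxy, since a per-repo or global git setting overrides the environment.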
Computer inoperational but motherboard LED on My friend's computer switched off suddenly and since then nothing happens when you press the power button - no spinup, fans or anything. The only sign of life is that the motherboard LED is on. Is this a motherboard or a PSU issue? We can't find any blown capacitors. The model of motherboard would be helpful. Is the light labeled? Some boards have lights to indicate that there's any power, some have lights that indicate there's good power. It's impossible to tell without testing. Short the trigger pins on the PSU to see if it starts up, and try a fresh PSU with the motherboard as well.
Passing variable from PHP to Python on Windows 10 affected by import pandas I am trying to pass a variable from PHP to Python on Windows, but the line "import pandas" is causing an issue. All my code below is a bare-bones version of the actual process I am trying to create, for simplicity. The first chunk of code is my Index, the second is the PHP code called by Index.php, and the last chunk is Python.

Index.php

<!DOCTYPE html>
<html>
<head>
<b>Enter a folder path </b>
</head>
<body>
<form action="BlastParse.php" method="post">
Path: <input type ="text" name="path"><br>
<input type="submit">
</form>
</body>
</html>

BlastParse.php

<html>
<body>
<?php
#getting path passed from index.php
$masterpath = $_POST["path"];
echo 'The path requested to be passed is: ' . $masterpath . '<br>';

#my directories
$python = 'C:/Users/Garrett/Anaconda3/python.exe';
$pyscript = 'C:/Users/Garrett/Documents/Python/temp.py';
$pyscriptPrimed = $pyscript . ' ';

#creating the command
$command = "$python $pyscriptPrimed";

#executing the command to call temp.py; adding passed path to command
exec($command . $masterpath, $output, $return_var);
?>
</body>
</html>

temp.py

import os
import sys

#path passed into python from php
file_path = sys.argv[1]
#file_path = 'Write this string to file'

with open("C:/Users/Garrett/Documents/Python/copy.txt", 'w') as file:
    file.write(file_path)

#PROBLEM HERE
import pandas as pd

with open("C:/Users/Garrett/Documents/Python/copy2.txt", 'w') as file:
    file.write(file_path)

I am using the writes to copy.txt and copy2.txt roughly for debugging purposes, since nothing is produced on the terminal. When I comment out the import pandas line, the copy2.txt file is created and written to properly. If not, the copy2.txt file is not created and the $return_var variable returns a 1 in PHP (I'm not sure what that error code represents yet). I am running on Windows 10 with Python 3.7, and using VS Code through Anaconda. So, it looks like pandas is failing to import.
Likely due to path issues. I'm not an expert on PHP exec, but if I had to guess, that would be the reason. In all likelihood this is because pandas is not installed where you are trying to run. This could be because you have not activated your Anaconda environment before calling your Python script. I haven't tested the code below but it should point you in the right direction:

$command = "source activate environment-name && $python $pyscriptPrimed && source deactivate";

In order to help debug this, the first thing I'd try is wrapping the import statement in a try/except and printing the error:

try:
    import pandas as pd
except Exception as e:
    print(str(e))

If that fails to print to the console, try this to write to a file:

try:
    import pandas as pd
except Exception as e:
    with open("C:/Users/Garrett/Documents/Python/error.txt", 'w') as file:
        file.write(str(e))

Just as an aside on your comment in the question: error code 1 from a script is a catch-all for general errors; an exit status of 0 is a success. Thanks so much! I added the try/except and it spit out that I needed to also import numpy... so I did. It then spit out that "Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy." I then uninstalled numpy, reinstalled, and it worked!
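A complementary debugging step for this class of problem (a generic sketch, not tied to the asker's exact paths): have the script log which interpreter it is actually running under. If PHP's exec() ends up launching a different python.exe than the Anaconda one, pandas and numpy can be "missing" even though they are installed in the Anaconda environment.

```python
import sys

# Record which interpreter and module search path the script runs under.
# Comparing this against the python.exe you *meant* to launch quickly
# exposes "module installed in a different environment" problems.
report = []
report.append("interpreter: " + sys.executable)
report.append("version: " + sys.version.split()[0])
for p in sys.path:
    report.append("path: " + p)

print("\n".join(report))
```

Writing this report to a file (as the asker already does for copy.txt) works when nothing reaches the terminal.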
The main contribution is (proposing / to propose) a new method The main contribution of this paper is (proposing/to propose) a new method. I'm confused about using the gerund or the infinitive in this sentence. I searched for this sentence on the internet, but I didn't find anything that solves my problem. Which one should I choose? Either of these is perfectly acceptable, and there is no significant difference in meaning. "The contribution is {gerund form} X" and "The contribution is {infinitive form} X" should be interchangeable for any verb and any value of X, without significant difference in meaning. There are four things that can happen with this sentence. Three arrive at the same meaning.

Possibility 1: Is proposing is an expression of a continuous aspect. This states what the paper "is doing" right now. A speaker/writer might intend this when communicating what they believe the contribution is while reading paragraphs from it.

Possibility 2: Proposing is a gerund. This means we're talking about an activity abstractly, meaning who/what does it isn't known or important, and we don't mean the effect or result, but the actual process/activity. The paper itself talks about proposing, but isn't doing the proposing itself, nor is it telling anyone specific to do it, so this is a good use case for a gerund.

Possibility 3: To propose, the infinitive. Infinitives are verbs that aren't "finited" by subjects or objects. An infinitive can be used like a gerund in many cases, and it works here to mean the same thing.

And:

Possibility 4: Phrasal variation of to be: to be to X. To be to X is a variation of to be that is a stronger version of supposed to be X, with an implication of obligation. For example, "He is to be at the park at 6pm" means he's supposed to be at the park at 6pm, implying he was told or obligated to be there. Unless you are writing the paper, or having it written, and want to make clear what the contribution's purpose is, this wouldn't apply.
You should be fine either way, but if you wish to avoid this entirely, use a noun. The main contribution of this paper is a proposal of a new method.
Split using regex Trying to build a search-criteria query parser using StringTokenizer. The query goes like (key:operator:value) (key:operator:value) and (key:operator:value). I have tried:

Stack<Object> stack = new Stack<>();
StringTokenizer tokenizer = new StringTokenizer(filterString, "()", true);
while (tokenizer.hasMoreElements()) {
    String token = tokenizer.nextToken();
    if (isOpenBrace(token)) {
        stack.push(token);
    } else if (isCloseBrace(token)) {
        Object preVal = null;
        Criteria criteria = null;
        String operator = null;
        while (!isOpenBrace(stack.peek())) {
            Object top = stack.pop();
            if (isString(top)) {
                String op = (String) top;
                if (isAnd(op) || isOr(op)) {
                    operator = (String) top;
                } else {
                    throw new Exception("Invalid operand");
                }
            } else if (isCriteria(top)) {
                if (preVal != null) {
                    if (isAnd(operator)) {
                        top = ((Criteria) top).and((Criteria) preVal);
                        criteria = (Criteria) top;
                        preVal = top;
                    } else if (isOr(operator)) {
                        top = ((Criteria) top).or((Criteria) preVal);
                        criteria = (Criteria) top;
                        preVal = top;
                    }
                } else {
                    preVal = top;
                    criteria = (Criteria) preVal;
                }
            }
        }
        if (criteria != null) {
            stack.pop();
            stack.push(criteria);
        }
    } else if (isAnd(token) || isOr(token)) {
        stack.push(token);
    } else {
        String[] parts = token.split(COLON);
        // do rest stuffs
    }
}
if (stack.size() != 1) {
    throw new Exception("Invalid filter");
}

Note: Criteria is an internal, domain-specific class. This works fine as long as there are no () in the value part of the expression. I'm trying to convert the same using split() and regexes so that the tokens I get are "(", "key:operator:va(lue)", ")" for the string "(key:operator:va(lue))". I'm not sure that you should be using regex for this. A parser, what you already have, seems more appropriate. Is there a way to split based on a regex like "string:string:string", splitting on this regex and all the other characters? Hardly a chance to get it right with nested parentheses and regex.
With regex you would not be able to count, and especially if tokens can have similar start characters this will get unmaintainable pretty fast. Have a look at https://www.antlr.org/
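The "regex can't count" point above is easy to demonstrate: splitting only at the outermost parentheses requires tracking nesting depth, which plain split()/regex cannot do. A small depth-counting scanner handles it in a few lines; this is a language-neutral sketch in Python of the idea (not the asker's Java code), and the same loop translates directly to Java:

```python
def tokenize(s):
    """Split s into "(", ")", and inner expressions, but only at the
    outermost nesting level: a parenthesis inside a value stays part
    of its token. This needs a depth counter, which is exactly what a
    regular expression cannot maintain."""
    tokens, depth, buf = [], 0, []

    def flush():
        if buf:
            tokens.append("".join(buf))
            buf.clear()

    for ch in s:
        if ch == "(" and depth == 0:      # top-level open: emit as its own token
            flush()
            tokens.append(ch)
            depth += 1
        elif ch == "(":                   # nested open: part of the value
            buf.append(ch)
            depth += 1
        elif ch == ")" and depth == 1:    # matching top-level close
            flush()
            tokens.append(ch)
            depth -= 1
        elif ch == ")":                   # nested close: part of the value
            buf.append(ch)
            depth -= 1
        else:
            buf.append(ch)
    flush()
    return tokens

print(tokenize("(key:operator:va(lue))"))
# ['(', 'key:operator:va(lue)', ')']
```

Connectives between groups also come out as their own tokens, e.g. tokenize("(a) and (b)") yields the " and " token between the two groups, which slots into the existing stack logic.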
An ellipsis with N dots How might one define a lualatex macro \lip, taking optional arguments * and [N], such that, with no argument, the expansion is an ellipsis consisting of three dots, with [N] it has N dots, and with * it ends a sentence? It should be possible to use it like \lip this, i.e. without closing it with {}. PS. Ideally, if the ellipsis is followed by a comma, the space between the last dot and the comma should be the same as the space between the dots. What do you mean by "end a sentence"? @egreg, spacefactor 3000.

You do not need LuaLaTeX for this; it can be done quite easily with expl3:

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{expl3,xparse}

\ExplSyntaxOn
\NewDocumentCommand\lip{s O{3}}{
  $ \prg_replicate:nn{#2}{\ldotp} $
  \IfBooleanT{#1}{\spacefactor\sfcode`\.\relax}
}
\ExplSyntaxOff

\begin{document}
Hello \lip, I know how to write dots: \lip*[10]

Anyway \lip[3] not all dots end sentences.
\end{document}

Of course, if you use it like some \lips words, TeX will gobble the space. This is very hard to avoid even with LuaTeX, because LuaTeX does not change the TeX parsing rules. Of course this problem does not exist if you use a star or the optional argument. If you really need it, there are three options I can think of:

Always add the space if no argument has been given. This would break e.g. at \lip, in the example above.

Use xspace, but remember the drawbacks:

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{expl3,xparse,xspace}

\ExplSyntaxOn
\NewDocumentCommand\lip{s o}{
  $ \prg_replicate:nn{\IfValueTF{#2}{#2}{3}}{\ldotp} $
  \IfBooleanTF{#1}{
    \spacefactor\sfcode`\.\relax
  }{
    \IfValueF{#2}{\xspace}
  }
}
\ExplSyntaxOff

\begin{document}
Hello \lip, I know how to write dots: \lip*[10]

Anyway \lip not all dots end sentences.
\end{document}

The "LuaTeX solution": use a process_input_buffer callback to detect all input lines where \lip is used and always add an explicit space afterwards.
This would be extremely fragile and will not interact properly with macros etc. So it is much more reliable to just manually add \ or [3] when this occurs. @jfbu That would work, but I think \prg_replicate:nn looks much cleaner. @jfbu Just loading expl3 adds about 0.5 seconds of startup time to my LaTeX run. I try to avoid loading it where possible.

\documentclass{article}
\makeatletter
\def\lip{\@ifstar{\let\@liptmp\relax\@lip}{\let\@liptmp\@\@lip}}
\newcommand\@lip[1][3]{{\uccode`m=`.\uppercase\expandafter{\romannumeral#1000}}\@liptmp\space
  \ignorespaces}
\begin{document}
aaa\lip[5] bbb
aaa\lip bbb
aaa\lip[7] bbb
aaa\lip*[5] bbb
aaa\lip* bbb
aaa\lip*[7] bbb
\end{document}

I guess that after \lips* you want a normal space (subject to the space factor):

\documentclass{article}
\usepackage{xparse}

% the definition of \textellipsis is
%   .\kern\fontdimen3\font
% repeated three times
\ExplSyntaxOn
\NewDocumentCommand{\lips}{sO{3}}
  {
    \prg_replicate:nn { #2 } { .\kern\fontdimen3\font }
    \IfBooleanT{#1}
      {
        \unkern\spacefactor 3000 \scan_stop: \c_space_tl
      }
    \ignorespaces
  }
\ExplSyntaxOff

\begin{document}
Three \lips
Four \lips[4]
Five \lips[5], with a comma
Here is \lips It was not end of sentence.
Here is \lips* It was end of sentence.
Here is \mbox{\lips\unkern} It was not end of sentence
\end{document}

The last line emulates a normal space not subject to the space factor, so as to appreciate the difference. It looks as if the space before the ellipsis is larger than after. @Toothrot It's a normal space. You didn't specify what you want. Oh, I see what is going on: the space after the ellipsis is unstretchable. @Toothrot I tried to guess; maybe you can be more precise.