qid: int64 (1 – 74.7M)
question: string, lengths 15 – 58.3k
date: string, lengths 10 – 10
metadata: list
response_j: string, lengths 4 – 30.2k
response_k: string, lengths 11 – 36.5k
56,201,386
I am trying to update my Android project. I initially got this error: > > ERROR: Gradle DSL method not found: 'classpath()' > Possible causes: > The project 'xxx' may be using a version of the Android Gradle plug-in that does not contain the method (e.g. 'testCompile' was added in 1.1.0). > Upgrade plugin to version 3.4.1 and sync project > > > So I tried this in my app's `build.gradle`: ``` buildscript { repositories { maven { url 'https://maven.fabric.io/public' } } dependencies { classpath 'com.android.tools.build:gradle:3.4.1' classpath 'io.fabric.tools:gradle:1.29.0' } } ``` But now I'm getting this error: > > ERROR: Could not find com.android.tools.build:gradle:3.4.1. > > > What to do? Thanks for your help.
2019/05/18
[ "https://Stackoverflow.com/questions/56201386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2885727/" ]
It's going to be easiest for you to use a [callable function](https://firebase.google.com/docs/functions/callable), since that lets you: 1. Automatically send the current user's UID in the request. 2. Know very easily on the function side whether a UID was provided in the request, and refuse service if none was provided. The Flutter plugin is [here](https://pub.dev/packages/cloud_functions). You should be able to do the equivalent work yourself, though, since callable functions are just a wrapper around normal HTTP connections. It's possible for you to get the ID token of the logged-in user.
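Since callable functions are just a wrapper around normal HTTP connections, the "equivalent work" amounts to building one specific request shape. The sketch below (Python rather than Dart, with a hypothetical helper name and placeholder URL/token) shows that shape: a POST whose JSON body is `{"data": ...}` and whose `Authorization` header carries the user's Firebase ID token.

```python
import json

def build_callable_request(function_url, id_token, data):
    """Assemble an HTTP request in the shape a callable function expects
    (hypothetical helper): a JSON body of the form {"data": ...}, plus the
    signed-in user's Firebase ID token as a Bearer token. The function side
    verifies this token and derives the uid from it."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + id_token,
    }
    body = json.dumps({"data": data})
    return function_url, headers, body

# Usage sketch -- the URL and token are placeholders:
url, headers, body = build_callable_request(
    "https://us-central1-my-project.cloudfunctions.net/myFunction",
    "<ID_TOKEN>",
    {"text": "hello"},
)
```

Any HTTP client can then send this request; the callable SDK simply automates the token refresh and this envelope for you.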
``` import 'package:firebase_messaging/firebase_messaging.dart'; . . . final FirebaseMessaging _firebaseMessaging = FirebaseMessaging(); @override void initState() { super.initState(); _firebaseMessaging.getToken().then((token) { assert(token != null); print("token is: " + token); }); } ```
Get your token from FirebaseAuth and put it in a string. ``` Future<Details> getDetails() async { String bearer = await FirebaseAuth.instance.currentUser!.getIdToken(); print("Bearer: " + bearer); String token = "Bearer $bearer"; var apiUrl = Uri.parse('Your url here'); final response = await http.get(apiUrl, headers: { 'Authorization': token }); final responseJson = jsonDecode(response.body); return Details.fromJson(responseJson); } ```
I agree with @Doug on this one - callable wraps this for you and will be easier - but my use case required me to make HTTPS calls (`onRequest` in Functions). Also, I think you're on the right path - but you're **possibly** not checking the token in your Cloud Function. In your app, you'll call: ```dart _httpsCall() async { // Fetch the currentUser, and then get its id token final user = await FirebaseAuth.instance.currentUser(); final idToken = await user.getIdToken(); final token = idToken.token; // Create authorization header final header = { "authorization": 'Bearer $token' }; get("http://YOUR_PROJECT_BASE_URL/httpsFunction", headers: header) .then((response) { final status = response.statusCode; print('STATUS CODE: $status'); }) .catchError((e) { print(e); }); } ``` In your function, you'll check for the token: ```js export const httpsFunction = functions.https.onRequest((request, response) => { const authorization = request.header("authorization") if (authorization) { const idToken = authorization.split('Bearer ')[1] if (!idToken) { response.status(400).send({ response: "Unauthenticated request!" }) return } return admin.auth().verifyIdToken(idToken) .then(decodedToken => { // You can check for your custom claims here as well response.status(200).send({ response: "Authenticated request!" }) }) .catch(err => { response.status(400).send({ response: "Unauthenticated request!" }) }) } response.status(400).send({ response: "Unauthenticated request!" }) }) ``` Keep in mind: if I'm not mistaken, those tokens are valid for 1 hour; if you are going to store them somewhere, just be aware of this. I've tested locally and it **takes around 200~500 ms** - every time - just to get the ID token, which in most cases is not that big of an overhead - but it is significant.
62,750,012
I have a Mule 4 flow which connects to an SFTP location to read files and perform a set of operations. This works fine when I run the project from Anypoint Studio. However, when I deploy this, I see the following error when the project gets deployed to CloudHub: **Connectivity test failed for config 'SFTP\_Config'. Application deployment will continue. Error was: Could not establish SFTP connection with host: '**.**.**.***' at port: '22' - timeout: socket is not established org.mule.runtime.api.connection. ConnectionException: Could not establish SFTP connection with host: '**.**.**.***' at port: '22' - timeout: socket is not established at org.mule.runtime.core.internal.connection.ErrorTypeHandlerConnectionProviderWrapper.lambda$connect$0(ErrorTypeHandlerConnectionProviderWrapper.java:70) at java.util.Optional.map(Optional.java:215)** My SFTP connection config is shown below: ``` <sftp:config name="SFTP_Config" doc:name="SFTP Config" doc:id="3cac96f7-4985-48eb-a4fc-77a312a6dc22" > <sftp:connection workingDir="${sftp.basepath}" host="${sftp.host}" username="${sftp.user}" password="${sftp.password}" port="${sftp.port}"> <reconnection > <reconnect frequency="20000" count="5" /> </reconnection> </sftp:connection> </sftp:config> ``` Am I missing something in the connection configuration, or is the error due to something else? Any help would be highly appreciated.
2020/07/06
[ "https://Stackoverflow.com/questions/62750012", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1982597/" ]
Communication from CloudHub usually fails when the server is private and not accessible from the public Internet, or when access requires whitelisting on the server side. You can test whether there is network connectivity by deploying a test application that performs the checks described here: <https://help.mulesoft.com/s/article/Network-connectivity-testing> In the first case, you need to [set up a VPN from CloudHub to the network](https://docs.mulesoft.com/runtime-manager/vpn-about) where the SFTP server resides. You may also need to [set up internal DNS](https://docs.mulesoft.com/runtime-manager/resolve-private-domains-vpc-task) if the host name is not public or not an IP. For the second case, you can set up a [public static IP](https://docs.mulesoft.com/runtime-manager/deploying-to-cloudhub#static-ips-tab-settings) for the worker where the application is deployed.
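The connectivity check that the test application performs boils down to whether a plain TCP socket can be opened to the SFTP host and port. A minimal sketch of that probe (Python, with a hypothetical host name; to be meaningful for CloudHub it must run from inside the worker's network, which is what the linked test application is for):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds.
    This only proves the socket can be established -- the very step that
    fails with 'timeout: socket is not established' in the error above."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refusals, and DNS failures
        return False

# Hypothetical host name -- replace with the value of ${sftp.host}:
print(is_reachable("sftp.internal.example", 22, timeout=2.0))
```

If this returns False from the public Internet but True from inside the corporate network, that points to the VPN/whitelisting scenarios described above.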
Double-check that your `sftp.host` is reachable from CloudHub.
54,291
I have seen other discussions related to this topic, but I did not find the answer to my question. It comes down to a simple doubt: are the two sentences below interchangeable, or do they have different meanings? "Er macht Urlaub in Spanien, **um** Spanisch **zu** lernen." vs. "Er macht Urlaub in Spanien, Spanisch **zu** lernen."
2019/09/10
[ "https://german.stackexchange.com/questions/54291", "https://german.stackexchange.com", "https://german.stackexchange.com/users/39728/" ]
It seems your (only) problem is the part "*Nichts wie...*". This is used in a number of phrases in casual oral communication. Thieves, after noticing that they have been spotted by the home owner: > > Nix wie weg! > > > (*Nix* is a popular casual/oral short form of *nichts*) People in a house that caught fire: > > Nichts wie raus hier! > > > People in the garden, surprised by a sudden thunderstorm, and finding it the best option to quickly retreat into the house: > > Herrje, ein Gewitter! Nichts wie rein! > > > The full meaning would be something like: > > Es bleibt uns nichts anderes übrig, als hier schnell zu verschwinden (or whatever action is needed) > > > Or if you insist on having the *wie* used in the long sentence, take > > **Nichts** ist jetzt so geraten **wie** hier schnell **weg**zugehen. > > > But of course in cases of emergency you prefer shorter (and less twisted) expressions, hence *Nichts wie...* A bit less expressively, you could also say: > > Schnell weg! > > > Schnell rein! > > > Schnell raus! > > >
It literally means "Nothing (is (as good)) as/like (going) out (of) here". Similar constructs can also be found in English, e.g. "Nothing like a hot bath now!", or Latin "Nihil nisi..." ("nothing if not..."). Alternatively, it could be a contraction of "nichts (zu tun) wie (=als) raus hier (zu gehen)", "nothing (to do) than out of here (walk/go)". Very strictly speaking, all of these are subtly incorrect: "raus" is a simplification of "heraus" (out TO HERE = out of there), not "hinaus" (out TO THERE = out of here). In practice, the two are used interchangeably.
51,406,540
I have two matrices, `t1` and `t2`: ``` > t1 aaa bbb ccc ddd [1,] 1 2 3 4 > t2 e1 e2 e3 e4 e5 e6 e7 e8 e9 [1,] "aaa" "ddd" "aaa" "bbb" "ccc" "bbb" "ddd" "aaa" "ccc" ``` Is there a way to obtain a matrix that replaces the data of `t2` according to the table `t1`, without a loop? The matrix I would like to have at the end is: ``` > t3 e1 e2 e3 e4 e5 e6 e7 e8 e9 [1,] 1 4 1 2 3 2 4 1 3 ``` I tried with `%in%` matching, but since the two matrices don't have the same length, of course it doesn't work.
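For what it's worth, the loop-free replacement asked for here is an elementwise table lookup: map each name in `t2` through the name→value table `t1`. The idea can be sketched in Python (not R, which the question uses; in R itself, indexing by column name, e.g. `t1[, t2]`, achieves a similar effect):

```python
# The lookup table t1 and the vector of names t2, as in the question:
t1 = {"aaa": 1, "bbb": 2, "ccc": 3, "ddd": 4}
t2 = ["aaa", "ddd", "aaa", "bbb", "ccc", "bbb", "ddd", "aaa", "ccc"]

# Replace each element of t2 by its value in t1 -- a vectorised-style
# lookup with no explicit index bookkeeping:
t3 = [t1[name] for name in t2]
print(t3)  # [1, 4, 1, 2, 3, 2, 4, 1, 3]
```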
2018/07/18
[ "https://Stackoverflow.com/questions/51406540", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7064929/" ]
I think you'd be better off using the enum in your match like: ``` SortCountryField withName <your_string> match { case SortCountryField.countryName => //Some operations case SortCountryField.countryStatus => //another operation } ``` If your string sometimes doesn't match any field then you can easily wrap this in a `Try` like in the following code: ``` Try(SortCountryField withName <your_string>) match { case Success(SortCountryField.countryName) => //Some operations case Success(SortCountryField.countryStatus) => //another operation case _ => //another operation } ```
You can also do: ``` 'myString' match{ case x if x == SortCountryField.countryName.toString => //Some operations case x if x == SortCountryField.countryStatus.toString => //another operation } ```
What does your implicit converter look like? My guess is that you have a converter `SortCountryField => String`, but you need a `SortCountryField.Value => String` converter.
Add the function below to your enum: ``` implicit def toString(value: Value): String = value.toString ``` Then use the following when matching: ``` val countryStatus: String = SortCountryField.countryStatus "myString" match { case `countryStatus` => //Some operations case _ => // Another operation } ```
23,356,774
I am struggling with an issue where my company is attempting to white-label its solution. We have Solution A inside Repository A, like this: ``` Repository A Solution A Project A Project B Project C Project D Project E (not shared and dependent on project A/B/C/D) ``` We wish to share a number of projects from Solution A into another Solution B, which exists in Repository B. For example: ``` Repository B Solution B Project A (From Solution A and Repository A) Project B (From Solution A and Repository A) Project C (From Solution A and Repository A) Project D (From Solution A and Repository A) Project F (not shared and dependent on project A/B/C/D) ``` This way, when any of Projects A/B/C/D is updated and committed, Solution B will just need to be updated and Projects A/B/C/D will be refreshed from Repository A. Project F will then be built against the new versions of Projects A/B/C/D. Is this even possible with TFS? Please note we require the actual projects in the solutions, not the assemblies. This is for debugging purposes.
2014/04/29
[ "https://Stackoverflow.com/questions/23356774", "https://Stackoverflow.com", "https://Stackoverflow.com/users/578059/" ]
This is possible! But do you know NuGet.org? I think NuGet is the best solution for your case. In this presentation, Scott Hanselman shows how to use NuGet as an enterprise solution: [NuGet for the Enterprise: NuGet in a Continuous Integration Automated Build System](http://www.hanselman.com/blog/NuGetForTheEnterpriseNuGetInAContinuousIntegrationAutomatedBuildSystem.aspx)
Considering only the following clear requirement > > Please note we require the actual projects in the solutions not the > assemblies. > > > You could just keep one solution - with project E, F, ..., N being the white labeled applications. Like such: ``` Repository A Solution A Project Library A Project Library B Project Library C Project Library D Project Whitelabel E Project Whitelabel F Project Whitelabel N ```
42,332,659
I am working on an SPA with ReactJS. I have a root component App and then several child components. In the App component I am trying to store some application-level state, such as the logged-in user id and other data. However, I am not seeing my state propagated down to the child components. App ``` import { Router, Route, Link, IndexRoute, browserHistory, hashHistory } from 'react-router'; import ParameterContainer from './components/parameter/parameter-container'; import NavMenu from './components/navigation/nav-menu'; import {Alert} from 'react-bootstrap'; import SelectFilter from './components/sample/sample-container'; // Main component and root component export default class App extends React.Component { constructor(props) { super(props); this.state = { userId: null, roles: null, parameterTypes: { 'STRING': 'STRING', 'BOOLEAN': 'BOOLEAN', 'INTEGER': 'INTEGER', 'DECIMAL': 'DECIMAL' } }; } render() { return ( <div> <NavMenu /> <div className="container"> {this.props.children} </div> </div> ) } } // page for 404 class NoMatch extends React.Component { render() { return ( <div className="container"> <Alert bsStyle="danger"> <h1>404: Not Found</h1> <h3>The requested resource does not exist!</h3> </Alert> <img src="images/404.png" style={{display: 'block', margin: '0 auto', width: 300, height: '*'}} /> </div> ) } } // render the application ReactDOM.render(( <Router history={hashHistory}> <Route path="/" component={App}> <Route path="parameter" component={ParameterContainer} /> <Route path="sample" component={SelectFilter} /> <Route path="*" component={NoMatch}/> </Route> </Router> ), document.getElementById('react')) ``` Child Component ``` import React from 'react'; export default class ParameterContainer extends React.Component { constructor(props) { super(props); this.state = { parameters: [] }; this.client = rest.wrap(mime); this.fetchFromApi = this.fetchFromApi.bind(this); console.log('Props:' + props); } render() { .... } ``` The `this.props` does not contain what I expected. I need to pass data down to the child components.
2017/02/19
[ "https://Stackoverflow.com/questions/42332659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4594940/" ]
Have you checked filenames and permissions? File names are case-sensitive on Linux (which your hosting provider uses).
You have to change directory permissions on your server. Directories and files within the `storage` and `bootstrap/cache` directories should be writable (e.g. chmod 777).
57,792,257
Morning! So I have a PowerShell server build script that does the following as part of it: Asks if you want to add an AD group to Administrators on the server you're building, or if you want to search a different server for the groups and users in Administrators and use one of those. 1. If you want to add an AD group to Administrators, it asks you what AD group and saves that in a variable ($OSAdministrators) 2. If you want to search a server for the groups and users, you put in the server, it searches, and displays the results of all groups and users in Administrators. It then asks you to type out which group you want to use, and saves that in the same variable ($OSAdministrators). Example code for #2: ``` $OSAdministratorsSearchHost = Read-Host "Enter the hostname of the server to search for Administrators groups" function Get-LocalAdmin { $admins = Gwmi win32_groupuser -Computer $OSAdministratorsSearchHost $admins = $admins |? {$_.GroupComponent -like '*"Administrators"'} $admins |% { $_.partcomponent -match ".+Domain\=(.+)\,Name\=(.+)$" > $nul $matches[1].trim('"') + "\" + $matches[2].trim('"') } } Get-LocalAdmin $OSAdministrators = Read-Host "Enter the name of the AD group from the list above to add to Administrators on the new server; press Enter to skip" ``` This works great if you only want to add 1 group. The problem is that sometimes you may have a couple of groups you'd like to add to a server, and I'm not sure how to deal with that. For example, for #2 above I'd love to have it like this: ``` $OSAdministrators = Read-Host 'Enter the name(s) of the AD group(s) from the list above to add to Administrators on the new server. If entering multiple, separate them with a comma (e.g. "Server Group 1,Server Group 2")' ``` But I'm not sure how to break out "Server Group 1" and "Server Group 2" and use that later in my code where it actually adds the group to Administrators on the server you're building: ``` $DomainName = "[where the domain FQDN would be]" $AdminGroup = [ADSI]"WinNT://$HostName/Administrators,group" $Group = [ADSI]"WinNT://$DomainName/$OSAdministrators,group" $AdminGroup.Add($Group.Path) ``` I've tried searching online, but the way I'm searching it's not finding anything for this specific use-case, or the solutions seem to be overly complicated for what I'm trying to do (I'm talking 30 lines of code just to parse through inputs). I would think there'd be a simpler way I'm just missing. Any direction would be greatly appreciated. Thanks!
2019/09/04
[ "https://Stackoverflow.com/questions/57792257", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4425426/" ]
If you can add some controls as to how the input is entered, you could split the input by `,`, then process the split items in a loop. ``` $OSAdministrators = Read-Host 'Enter the name(s) of the AD group(s) from the list above to add to Administrators on the new server. If entering multiple, separate them with a comma (e.g. "Server Group 1,Server Group 2")' $OSAdmins = ($OSAdministrators -split ",").Trim() foreach ($OSAdministrator in $OSAdmins) { $DomainName = "[where the domain FQDN would be]" $AdminGroup = [ADSI]"WinNT://$HostName/Administrators,group" $Group = [ADSI]"WinNT://$DomainName/$OSAdministrator,group" $AdminGroup.Add($Group.Path) } ``` Within the current loop iteration, you can reuse `$OSAdministrator` wherever you need it. Outside of the loop, you can access elements in the `$OSAdmins` array.
If you can use a GUI at that point, this may do what you need ... ``` function Get-LocalAdminList ($ComputerName) { (Get-CimInstance -ClassName Win32_GroupUser -ComputerName $ComputerName | Where-Object { $_.GroupComponent -match 'administrators' }).PartComponent.Name } $ChosenItemList = Get-LocalAdminList -ComputerName $env:COMPUTERNAME | Out-GridView -OutputMode Multiple -Title 'Please select the desired Local Admin members & click OK' $ChosenItemList ``` When I select one item, I get that one in the list. With two ... I get two strings back. [*grin*]
13,358,680
There are some JS files in static/js/: ``` 1. a.js 2. b.js 3. c.js ``` How do I configure grunt.js to get the files below? ``` 1. a.min.js 2. b.min.js 3. c.min.js ``` So far, I have to type a specific file name: ``` min: { dist: { src: 'js/**/*.js', dest: 'js/min/xxx.min.js' } } ```
2012/11/13
[ "https://Stackoverflow.com/questions/13358680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047567/" ]
To explicitly export some files into separate output files (in this case **all.min.js** and **all.jquery.js**), use: ``` uglify: { js: { files : { 'js/all.min.js' : [ 'js/modernizr.js', 'js/vendor/modernizr-2.6.2-respond-1.1.0.min.js', 'js/bootstrap.min.js', 'js/main.js', 'js/ZeroClipboard.min.js', 'js/bootstrap-datepicker/bootstrap-datepicker.js' ], 'js/all.jquery.js' : [ 'js/vendor/jquery-1.9.1.js', 'js/vendor/jquery-migrate-1.2.1.js', 'js/vendor/jquery-ui.js' ] } }, options: { banner: '\n/*! <%= pkg.name %> <%= grunt.template.today("dd-mm-yyyy") %> */\n', preserveComments: 'some', report: 'min' } }, ```
To help others who come to this page in the future - I came across a video which explains how to minify JS files using Grunt here: <https://www.youtube.com/watch?v=Gkv7pA0PMJQ> The source code is made available here: <http://www.techcbt.com/Post/359/Grunt-JS/how-to-minify-uglify-javascript-files-using-grunt-js> Just in case the above links stop working: 1. You can minify all JavaScript files and combine/concat them into one file using the following script: ```js module.exports = function(grunt){ grunt.loadNpmTasks('grunt-contrib-uglify'); grunt.initConfig({ pkg: grunt.file.readJSON('package.json'), uglify:{ t1:{ files:{ 'dest/all.min.js': ['src/app.js', 'src/one.js', 'src/t/two.js'] } } } }); }; ``` 2. If you would also like source maps to be generated, you can enable the "sourceMap" option as follows: ```js module.exports = function(grunt){ grunt.loadNpmTasks('grunt-contrib-uglify'); grunt.initConfig({ pkg: grunt.file.readJSON('package.json'), uglify:{ t1:{ options : { sourceMap : true, }, files:{ 'dest/all.min.js': ['src/app.js', 'src/one.js', 'src/t/two.js'] } } } }); }; ``` 3. In order to retain the entire folder structure while minifying JS files, you can use the following script: ```js module.exports = function(grunt){ grunt.loadNpmTasks('grunt-contrib-uglify'); grunt.initConfig({ pkg: grunt.file.readJSON('package.json'), uglify:{ t1:{ files: [{ cwd: 'src/', src: '**/*.js', dest: 'dest/', expand: true, flatten: false, ext: '.min.js' }] } } }); }; ```
Had the same problem and found a solution that would automatically minify all my scripts separately: ``` uglify: { build: { files: [{ expand: true, src: '**/*.js', dest: 'build/scripts', cwd: 'app/scripts' }] } } ```
The Grunt config below works for me for creating minified files for all the JS files under a directory: ``` module.exports = function(grunt) { // Project configuration. grunt.initConfig({ pkg: grunt.file.readJSON('package.json'), uglify: { build: { files: [{ expand: true, src: '**/*.js', dest: 'build/scripts', cwd: 'public_html/app', ext: '.min.js' }] } } }); // Load the plugin that provides the "uglify" task. grunt.loadNpmTasks('grunt-contrib-uglify'); // Default task(s). grunt.registerTask('default', ['uglify']); }; ```
From the grunt docs for min: > > This task is a multi task, meaning that grunt will automatically > iterate over all min targets if a target is not specified. > > > So you can do this: ``` min: { min_a: { src: 'a.js', dest: 'a.min.js' }, min_b: { src: 'b.js', dest: 'b.min.js' }, min_c: { src: 'c.js', dest: 'c.min.js' } } ``` There's nothing special about the name 'dist' for these tasks.
13,358,680
there are some js files in static/js/ ``` 1. a.js 2. b.js 3. c.js ``` how to config grunt.js to get below files: ``` 1. a.min.js 2. b.min.js 3. c.min.js ``` as far, I have to type specific file name: ``` min: { dist: { src: 'js/**/*.js', dest: 'js/min/xxx.min.js' } } ```
2012/11/13
[ "https://Stackoverflow.com/questions/13358680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047567/" ]
From the grunt docs for min: > > This task is a multi task, meaning that grunt will automatically > iterate over all min targets if a target is not specified. > > > So you can do this: ``` min: { min_a: { src: 'a.js', dest: 'a.min.js' }, min_b: { src: 'b.js', dest: 'b.min.js' }, min_c: { src: 'c.js', dest: 'c.min.js' } ``` There's nothing special about the name 'dist' for these tasks.
To explicitly export some files into separate output files (in this case **all.min.js** and **all.jquery.js**), use: ``` uglify: { js: { files : { 'js/all.min.js' : [ 'js/modernizr.js', 'js/vendor/modernizr-2.6.2-respond-1.1.0.min.js', 'js/bootstrap.min.js', 'js/main.js', 'js/ZeroClipboard.min.js', 'js/bootstrap-datepicker/bootstrap-datepicker.js' ], 'js/all.jquery.js' : [ 'js/vendor/jquery-1.9.1.js', 'js/vendor/jquery-migrate-1.2.1.js', 'js/vendor/jquery-ui.js' ] } }, options: { banner: '\n/*! <%= pkg.name %> <%= grunt.template.today("dd-mm-yyyy") %> */\n', preserveComments: 'some', report: 'min' } }, ```
13,358,680
there are some js files in static/js/ ``` 1. a.js 2. b.js 3. c.js ``` how to config grunt.js to get below files: ``` 1. a.min.js 2. b.min.js 3. c.min.js ``` as far, I have to type specific file name: ``` min: { dist: { src: 'js/**/*.js', dest: 'js/min/xxx.min.js' } } ```
2012/11/13
[ "https://Stackoverflow.com/questions/13358680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047567/" ]
Or you can use expandMapping, like this: ``` min: { files: grunt.file.expandMapping(['path/*.js', 'path2/*.js'], 'destination/', { rename: function(destBase, destPath) { return destBase+destPath.replace('.js', '.min.js'); } }) } ``` And the output: path/test.js => destination/path/test.min.js path2/foo.js => destination/path2/foo.min.js
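The `expandMapping` rename above is a pure path transformation, so it can be sanity-checked outside grunt. A minimal Python sketch of the same mapping; the `destination/` base and the `.min.js` suffix come from the answer above, and the function name `min_mapping` is purely illustrative, not grunt's implementation:

```python
import os

def min_mapping(src_paths, dest_base='destination/'):
    # Mirror the rename callback: destBase + destPath with '.js' -> '.min.js'.
    mapping = {}
    for src in src_paths:
        root, ext = os.path.splitext(src)
        if ext == '.js':
            mapping[src] = dest_base + root + '.min' + ext
    return mapping
```

Feeding it the paths from the answer's example reproduces the same `path/test.js => destination/path/test.min.js` pairs.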
I guess it only matters for watch tasks. In grunt 0.4 you can do this ``` var filesA = 'a.js', filesB = 'b.js', filesC = 'c.js'; ... min: { min_a: { src: filesA, dest: 'a.min.js' }, min_b: { src: filesB, dest: 'b.min.js' }, min_c: { src: filesC, dest: 'c.min.js' } watch: { min_a: { files: filesA, tasks: ['min:min_a'] }, min_b: { files: filesB, tasks: ['min:min_b'] }, min_c: { files: filesC, tasks: ['min:min_c'] } } ``` After that just start `grunt watch` and all will be fine automagically.
13,358,680
there are some js files in static/js/ ``` 1. a.js 2. b.js 3. c.js ``` how to config grunt.js to get below files: ``` 1. a.min.js 2. b.min.js 3. c.min.js ``` as far, I have to type specific file name: ``` min: { dist: { src: 'js/**/*.js', dest: 'js/min/xxx.min.js' } } ```
2012/11/13
[ "https://Stackoverflow.com/questions/13358680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047567/" ]
Or you can use expandMapping, like this: ``` min: { files: grunt.file.expandMapping(['path/*.js', 'path2/*.js'], 'destination/', { rename: function(destBase, destPath) { return destBase+destPath.replace('.js', '.min.js'); } }) } ``` And the output: path/test.js => destination/path/test.min.js path2/foo.js => destination/path2/foo.min.js
To explicitly export some files into separate output files (in this case **all.min.js** and **all.jquery.js**), use: ``` uglify: { js: { files : { 'js/all.min.js' : [ 'js/modernizr.js', 'js/vendor/modernizr-2.6.2-respond-1.1.0.min.js', 'js/bootstrap.min.js', 'js/main.js', 'js/ZeroClipboard.min.js', 'js/bootstrap-datepicker/bootstrap-datepicker.js' ], 'js/all.jquery.js' : [ 'js/vendor/jquery-1.9.1.js', 'js/vendor/jquery-migrate-1.2.1.js', 'js/vendor/jquery-ui.js' ] } }, options: { banner: '\n/*! <%= pkg.name %> <%= grunt.template.today("dd-mm-yyyy") %> */\n', preserveComments: 'some', report: 'min' } }, ```
13,358,680
there are some js files in static/js/ ``` 1. a.js 2. b.js 3. c.js ``` how to config grunt.js to get below files: ``` 1. a.min.js 2. b.min.js 3. c.min.js ``` as far, I have to type specific file name: ``` min: { dist: { src: 'js/**/*.js', dest: 'js/min/xxx.min.js' } } ```
2012/11/13
[ "https://Stackoverflow.com/questions/13358680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047567/" ]
From the grunt docs for min: > > This task is a multi task, meaning that grunt will automatically > iterate over all min targets if a target is not specified. > > > So you can do this: ``` min: { min_a: { src: 'a.js', dest: 'a.min.js' }, min_b: { src: 'b.js', dest: 'b.min.js' }, min_c: { src: 'c.js', dest: 'c.min.js' } ``` There's nothing special about the name 'dist' for these tasks.
Use the `ext` option to name the files `.min.js` instead of `.js` ``` uglify: { build: { files: [{ expand: true, src: '**/*.js', dest: 'build/scripts', cwd: 'app/scripts', ext: '.min.js' }] } } ```
13,358,680
there are some js files in static/js/ ``` 1. a.js 2. b.js 3. c.js ``` how to config grunt.js to get below files: ``` 1. a.min.js 2. b.min.js 3. c.min.js ``` as far, I have to type specific file name: ``` min: { dist: { src: 'js/**/*.js', dest: 'js/min/xxx.min.js' } } ```
2012/11/13
[ "https://Stackoverflow.com/questions/13358680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047567/" ]
From the grunt docs for min: > > This task is a multi task, meaning that grunt will automatically > iterate over all min targets if a target is not specified. > > > So you can do this: ``` min: { min_a: { src: 'a.js', dest: 'a.min.js' }, min_b: { src: 'b.js', dest: 'b.min.js' }, min_c: { src: 'c.js', dest: 'c.min.js' } ``` There's nothing special about the name 'dist' for these tasks.
You also can use copy and [grunt-mindirect](https://github.com/buunguyen/grunt-mindirect). ``` copy: { dist: { src: 'a.js', dest: 'a.min.js' } }, minidirect: { all: 'js/min/*.min.js' } ``` This should work.
13,358,680
there are some js files in static/js/ ``` 1. a.js 2. b.js 3. c.js ``` how to config grunt.js to get below files: ``` 1. a.min.js 2. b.min.js 3. c.min.js ``` as far, I have to type specific file name: ``` min: { dist: { src: 'js/**/*.js', dest: 'js/min/xxx.min.js' } } ```
2012/11/13
[ "https://Stackoverflow.com/questions/13358680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047567/" ]
Use the `ext` option to name the files `.min.js` instead of `.js` ``` uglify: { build: { files: [{ expand: true, src: '**/*.js', dest: 'build/scripts', cwd: 'app/scripts', ext: '.min.js' }] } } ```
I like to keep the original files and also create uglified ones: ``` uglify: { dist: { files: [{ expand: true, src: '**/*.js', dest: 'destdir', cwd: 'srcdir', rename: function(dest, src) { return dest + '/' + src.replace('.js', '.min.js'); } }] } }, ```
13,358,680
there are some js files in static/js/ ``` 1. a.js 2. b.js 3. c.js ``` how to config grunt.js to get below files: ``` 1. a.min.js 2. b.min.js 3. c.min.js ``` as far, I have to type specific file name: ``` min: { dist: { src: 'js/**/*.js', dest: 'js/min/xxx.min.js' } } ```
2012/11/13
[ "https://Stackoverflow.com/questions/13358680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1047567/" ]
Or you can use expandMapping, like this: ``` min: { files: grunt.file.expandMapping(['path/*.js', 'path2/*.js'], 'destination/', { rename: function(destBase, destPath) { return destBase+destPath.replace('.js', '.min.js'); } }) } ``` And the output: path/test.js => destination/path/test.min.js path2/foo.js => destination/path2/foo.min.js
You also can use copy and [grunt-mindirect](https://github.com/buunguyen/grunt-mindirect). ``` copy: { dist: { src: 'a.js', dest: 'a.min.js' } }, minidirect: { all: 'js/min/*.min.js' } ``` This should work.
692,351
Say I have a range of files named: "blah-10-blah", "blah-11-blah" etc...up to "blah-30-blah". I would like to change the names to "blah-20-blah", "blah-21-blah" etc...up to "blah-40-blah". Is there a way of doing this in the terminal?
2015/10/31
[ "https://askubuntu.com/questions/692351", "https://askubuntu.com", "https://askubuntu.com/users/465987/" ]
It's important to process the files in an inverse numerical order, otherwise the task will fail due to already existing files with the target filename: ``` find . -maxdepth 1 -type f -name 'blah-??-blah' -print0 | sort -zr | xargs -0 rename 's/-\K([0-9]{2})/$1+10/e' ``` * `find . -maxdepth 1 -type f -name 'blah-??-blah' -print0`: prints a NULL-separated list of the files in the current working directory matching the globbing pattern `blah-??-blah`; * `sort -zr`: sorts the list in an inverse numerical order; * `xargs -0 rename 's/-\K([0-9]{2})/$1+10/e'`: renames the files, substituting the first couple of digits after a dash with the corresponding value incremented by 10; ```none % tree . ├── blah-10-blah ├── blah-11-blah ├── blah-12-blah ├── blah-13-blah ├── blah-14-blah ├── blah-15-blah ├── blah-16-blah ├── blah-17-blah ├── blah-18-blah ├── blah-19-blah ├── blah-20-blah ├── blah-21-blah ├── blah-22-blah ├── blah-23-blah ├── blah-24-blah ├── blah-25-blah ├── blah-26-blah ├── blah-27-blah ├── blah-28-blah ├── blah-29-blah └── blah-30-blah 0 directories, 21 files % find . -maxdepth 1 -type f -name 'blah-??-blah' -print0 | sort -zr | xargs -0 rename 's/-\K([0-9]{2})/$1+10/e' % tree . ├── blah-20-blah ├── blah-21-blah ├── blah-22-blah ├── blah-23-blah ├── blah-24-blah ├── blah-25-blah ├── blah-26-blah ├── blah-27-blah ├── blah-28-blah ├── blah-29-blah ├── blah-30-blah ├── blah-31-blah ├── blah-32-blah ├── blah-33-blah ├── blah-34-blah ├── blah-35-blah ├── blah-36-blah ├── blah-37-blah ├── blah-38-blah ├── blah-39-blah └── blah-40-blah 0 directories, 21 files ``` If using Zsh, the task can be heavily simplified, as Zsh allows expanding filenames in an inverse numerical order: ``` rename 's/-\K([0-9]{2})/$1+10/e' blah-??-blah(On) ``` ```none % tree . ├── blah-10-blah ├── blah-11-blah ├── blah-12-blah ├── blah-13-blah ├── blah-14-blah ├── blah-15-blah ├── blah-16-blah ├── blah-17-blah ├── blah-18-blah ├── blah-19-blah ├── blah-20-blah ├── blah-21-blah ├── blah-22-blah ├── blah-23-blah ├── blah-24-blah ├── blah-25-blah ├── blah-26-blah ├── blah-27-blah ├── blah-28-blah ├── blah-29-blah └── blah-30-blah 0 directories, 21 files % rename 's/-\K([0-9]{2})/$1+10/e' blah-??-blah(On) % tree . ├── blah-20-blah ├── blah-21-blah ├── blah-22-blah ├── blah-23-blah ├── blah-24-blah ├── blah-25-blah ├── blah-26-blah ├── blah-27-blah ├── blah-28-blah ├── blah-29-blah ├── blah-30-blah ├── blah-31-blah ├── blah-32-blah ├── blah-33-blah ├── blah-34-blah ├── blah-35-blah ├── blah-36-blah ├── blah-37-blah ├── blah-38-blah ├── blah-39-blah └── blah-40-blah 0 directories, 21 files ```
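The collision argument (rename the highest numbers first) can be checked without touching a real filesystem. A small Python sketch that simulates the directory as a name-to-content dict; the `blah-NN-blah` pattern and the +10 shift come from the question, everything else (the `shift_names` helper, the dict simulation) is illustrative:

```python
import re

def shift_names(files, delta=10):
    """Simulate renaming: bump the first two-digit number in each name by
    delta, visiting names in descending order so a renamed file never
    lands on a name that has not been processed yet (the sort -zr trick)."""
    def bump(name):
        return re.sub(r'-(\d{2})',
                      lambda m: '-%d' % (int(m.group(1)) + delta),
                      name, count=1)
    out = dict(files)
    for name in sorted(files, reverse=True):  # highest numbers first
        out[bump(name)] = out.pop(name)
    return out
```

Processing in ascending order instead would move `blah-10-blah` onto the still-existing `blah-20-blah`, which is exactly the failure the inverse ordering avoids.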
The basic idea of this approach is to throw all the files into a temporary "basket" directory, and then pick them out one by one, create a new name, and move each one back into the original directory under its new name. The script below takes a single argument ($1), which is the directory where the files you want to rename are located. **Demo** ``` xieerqi:$ ls testdir blah-10-blah blah-20-blah blah-30-blah blah-40-blah xieerqi:$ cat testdir/* I am file 10 I am file 20 I am file 30 I am file 40 xieerqi:$ ./incrementNames.sh testdir blah-10-blah ../blah-20-blah blah-20-blah ../blah-30-blah blah-30-blah ../blah-40-blah blah-40-blah ../blah-50-blah xieerqi:$ ls testdir blah-20-blah blah-30-blah blah-40-blah blah-50-blah TMP/ xieerqi:$ cat testdir/blah blah-20-blah blah-30-blah blah-40-blah blah-50-blah xieerqi:$ cat testdir/blah-20-blah I am file 10 xieerqi:$ cat testdir/blah-30-blah I am file 20 ``` **Script** ``` #!/bin/bash if [ "$#" -ne 1 ]; then echo "Usage: incrementNames.sh /path/to/dir" && exit fi # navigate to target directory # create temporary directory cd "$1" mkdir TMP # move everything to TMP directory find . -maxdepth 1 -type f -iname "*-*" -exec mv -t TMP {} \+ # drop down to TMP directory, pick files back into the directory, and rename them as we go cd TMP find . -maxdepth 1 -type f -iname "*-*" -printf "%f\n" | sort | while IFS= read FILENAME do NEW="$( awk -F '-' '{print $1FS$2+10FS$3 }' <<< "$FILENAME")" echo "$FILENAME" "../$NEW" mv "$FILENAME" ../"$NEW" done ``` **Limitation** This script is specifically for files following the pattern `text-number-text`, or at least `text-number`. It won't work for others.
692,351
Say I have a range of files named: "blah-10-blah", "blah-11-blah" etc...up to "blah-30-blah". I would like to change the names to "blah-20-blah", "blah-21-blah" etc...up to "blah-40-blah". Is there a way of doing this in the terminal?
2015/10/31
[ "https://askubuntu.com/questions/692351", "https://askubuntu.com", "https://askubuntu.com/users/465987/" ]
You can do: ```bsh #!/bin/bash files=( blah-??-blah ) for ((i=${#files[@]}-1; i>=0; i--)); do first="${files[$i]%%-*}" num="$(grep -o '[0-9]\+' <<<"${files[$i]}")" last="${files##*-}" echo mv "$first-$num-$last" "$first-$((num+10))-$last" done ``` If you are satisfied with everything, add `| bash` at the end to let the `mv` operation take place. * We have put the relevant file names into an array `files` * Then we have iterated over the elements of the array from the end i.e. from last to first * `first` will have the first part of file name i.e. part prior to the first `-` * `last` will have the last part of the file name i.e. the part after last `-` * `num` will have the number in between two `-` * `mv "$first-$num-$last" "$first-$((num+10))-$last"` will do the rename operation accordingly **Example:** ```bsh $ ls -1 blah-10-blah blah-11-blah blah-12-blah blah-13-blah blah-14-blah blah-15-blah blah-16-blah blah-17-blah blah-18-blah blah-19-blah blah-20-blah blah-21-blah blah-22-blah blah-23-blah blah-24-blah blah-25-blah blah-26-blah blah-27-blah blah-28-blah blah-29-blah blah-30-blah blah-foo-1 blah-foo-2 $ for ((i=${#files[@]}-1; i>=0; i--)); do first="${files[$i]%%-*}" \ num="$(grep -o '[0-9]\+' <<<"${files[$i]}")" last="${files##*-}"; \ echo mv "$first-$num-$last" "$first-$((num+10))-$last"; done mv blah-30-blah blah-40-blah mv blah-29-blah blah-39-blah mv blah-28-blah blah-38-blah mv blah-27-blah blah-37-blah mv blah-26-blah blah-36-blah mv blah-25-blah blah-35-blah mv blah-24-blah blah-34-blah mv blah-23-blah blah-33-blah mv blah-22-blah blah-32-blah mv blah-21-blah blah-31-blah mv blah-20-blah blah-30-blah mv blah-19-blah blah-29-blah mv blah-18-blah blah-28-blah mv blah-17-blah blah-27-blah mv blah-16-blah blah-26-blah mv blah-15-blah blah-25-blah mv blah-14-blah blah-24-blah mv blah-13-blah blah-23-blah mv blah-12-blah blah-22-blah mv blah-11-blah blah-21-blah mv blah-10-blah blah-20-blah $ for ((i=${#files[@]}-1; i>=0; i--)); do first="${files[$i]%%-*}" \ num="$(grep -o '[0-9]\+' <<<"${files[$i]}")" last="${files##*-}";\ echo mv "$first-$num-$last" "$first-$((num+10))-$last"; done | bash $ ls -1 blah-20-blah blah-21-blah blah-22-blah blah-23-blah blah-24-blah blah-25-blah blah-26-blah blah-27-blah blah-28-blah blah-29-blah blah-30-blah blah-31-blah blah-32-blah blah-33-blah blah-34-blah blah-35-blah blah-36-blah blah-37-blah blah-38-blah blah-39-blah blah-40-blah blah-foo-1 blah-foo-2 ```
The basic idea of this approach is to throw all the files into a temporary "basket" directory, and then pick them out one by one, create a new name, and move each one back into the original directory under its new name. The script below takes a single argument ($1), which is the directory where the files you want to rename are located. **Demo** ``` xieerqi:$ ls testdir blah-10-blah blah-20-blah blah-30-blah blah-40-blah xieerqi:$ cat testdir/* I am file 10 I am file 20 I am file 30 I am file 40 xieerqi:$ ./incrementNames.sh testdir blah-10-blah ../blah-20-blah blah-20-blah ../blah-30-blah blah-30-blah ../blah-40-blah blah-40-blah ../blah-50-blah xieerqi:$ ls testdir blah-20-blah blah-30-blah blah-40-blah blah-50-blah TMP/ xieerqi:$ cat testdir/blah blah-20-blah blah-30-blah blah-40-blah blah-50-blah xieerqi:$ cat testdir/blah-20-blah I am file 10 xieerqi:$ cat testdir/blah-30-blah I am file 20 ``` **Script** ``` #!/bin/bash if [ "$#" -ne 1 ]; then echo "Usage: incrementNames.sh /path/to/dir" && exit fi # navigate to target directory # create temporary directory cd "$1" mkdir TMP # move everything to TMP directory find . -maxdepth 1 -type f -iname "*-*" -exec mv -t TMP {} \+ # drop down to TMP directory, pick files back into the directory, and rename them as we go cd TMP find . -maxdepth 1 -type f -iname "*-*" -printf "%f\n" | sort | while IFS= read FILENAME do NEW="$( awk -F '-' '{print $1FS$2+10FS$3 }' <<< "$FILENAME")" echo "$FILENAME" "../$NEW" mv "$FILENAME" ../"$NEW" done ``` **Limitation** This script is specifically for files following the pattern `text-number-text`, or at least `text-number`. It won't work for others.
8,673,016
I've been trying to build a Terminal Emulator for Android. Being pretty new to this, my idea was to execute each command and store the output in a file, whose contents would be displayed after each execution. *Pseudo Code :* ``` public Boolean execCommands(String command) { try { rt = Runtime.getRuntime(); process = rt.exec("su"); DataOutputStream os = new DataOutputStream(process.getOutputStream()); os.writeBytes("echo $ \""+command+ "\" >> /sdcard/Android/data/terminalemulatorlog.txt\n\n\n"); /**** Note : String command = (EditText)findViewById(R.id.command).getText().toString(); ****/ os.flush(); os.writeBytes("exit\n"); os.flush(); process.waitFor(); } // Error Handling displayOutput(); //Loads and displays the Text File (/sdcard/Android/data/terminalemulatorlog.txt) return true; } ``` This piece of code works except for a few special commands (Eg. 'clear'). But what I'm more concerned about are the following problems : 1. Each time a command is to be executed, I end up seeking SuperUser permissions (second line of code). And I'd like to do away with this. 2. In cases when the user enters one command followed by another, Such as : ``` cd /sdcard touch File.txt ``` The *File.txt* is created in '/' and not in '/sdcard'. As of now to avoid this, I'm keeping a track of all the 'cd' commands to figure out what the present working directory is. And I'm hoping that there is a better way around this. I'd be grateful if someone could help me out here.
2011/12/29
[ "https://Stackoverflow.com/questions/8673016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1069405/" ]
Not sure if you are still needing this or not, but here is how I am issuing multiple commands at one time and not using "su" to have them run. ``` try { String[] commands = { "dumpstate > /sdcard/LogFiles/dumpstate.txt", "dumpsys > /sdcard/LogFiles/dumpsys.txt", "logcat -d > /sdcard/LogFiles/log.txt", "cat /sdcard/LogFiles/dumpstate.txt /sdcard/LogFiles/dumpsys.txt /sdcard/LogFiles/log.txt > /sdcard/LogFiles/bugreport.rtf" }; Process p = Runtime.getRuntime().exec("/system/bin/sh -"); DataOutputStream os = new DataOutputStream(p.getOutputStream()); for (String tmpCmd : commands) { os.writeBytes(tmpCmd + "\n"); } os.flush(); // make sure the buffered commands actually reach the shell } catch (IOException e) { e.printStackTrace(); } ```
This is a bit late, but here are a few ways of doing this. 1) Instead of using su as a starting point, use /system/bin/sh, and after calling ``` rt.exec("/system/bin/sh"); ``` you should hold onto the OutputStream and InputStream to give further commands. After you issue a command you should echo a magic line like "---EOF---" and stop reading input after reading that line. If you don't do this you'll end up with the read function from the InputStream blocking. 2) Pipe the data to a native process you've written that simply moves the data on to your Android application with a terminating character or string attached to the end. I am not entirely sure how to do this, but it is essentially the same as the previous method; it just relies on your native application as a middle man. This will get you close to a functioning "Terminal Emulator". 3) If you want a true Terminal Emulator then there's no other way to do it than using a native application that opens a connection to a pseudoterminal. Here's some basic information on how to open a pty: [link](http://www.kernel.org/doc/man-pages/online/pages/man3/openpty.3.html) Terminal Emulator is an open source project that uses this technique. Have a look [here](http://www.google.com/url?sa=t&rct=j&q=android%20terminal%20emulator%20git&source=web&cd=1&cad=rja&ved=0CCcQFjAA&url=https://github.com/jackpal/Android-Terminal-Emulator&ei=CUuGUMKFCdOFhQeRuYDICA&usg=AFQjCNEEkG6vlDB_xRBgVVxfE6QOOBER0w)
8,673,016
I've been trying to build a Terminal Emulator for Android. Being pretty new to this, my idea was to execute each command and store the output in a file, whose contents would be displayed after each execution. *Pseudo Code :* ``` public Boolean execCommands(String command) { try { rt = Runtime.getRuntime(); process = rt.exec("su"); DataOutputStream os = new DataOutputStream(process.getOutputStream()); os.writeBytes("echo $ \""+command+ "\" >> /sdcard/Android/data/terminalemulatorlog.txt\n\n\n"); /**** Note : String command = (EditText)findViewById(R.id.command).getText().toString(); ****/ os.flush(); os.writeBytes("exit\n"); os.flush(); process.waitFor(); } // Error Handling displayOutput(); //Loads and displays the Text File (/sdcard/Android/data/terminalemulatorlog.txt) return true; } ``` This piece of code works except for a few special commands (Eg. 'clear'). But what I'm more concerned about are the following problems : 1. Each time a command is to be executed, I end up seeking SuperUser permissions (second line of code). And I'd like to do away with this. 2. In cases when the user enters one command followed by another, Such as : ``` cd /sdcard touch File.txt ``` The *File.txt* is created in '/' and not in '/sdcard'. As of now to avoid this, I'm keeping a track of all the 'cd' commands to figure out what the present working directory is. And I'm hoping that there is a better way around this. I'd be grateful if someone could help me out here.
2011/12/29
[ "https://Stackoverflow.com/questions/8673016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1069405/" ]
Not sure if you are still needing this or not, but here is how I am issuing multiple commands at one time and not using "su" to have them run. ``` try { String[] commands = { "dumpstate > /sdcard/LogFiles/dumpstate.txt", "dumpsys > /sdcard/LogFiles/dumpsys.txt", "logcat -d > /sdcard/LogFiles/log.txt", "cat /sdcard/LogFiles/dumpstate.txt /sdcard/LogFiles/dumpsys.txt /sdcard/LogFiles/log.txt > /sdcard/LogFiles/bugreport.rtf" }; Process p = Runtime.getRuntime().exec("/system/bin/sh -"); DataOutputStream os = new DataOutputStream(p.getOutputStream()); for (String tmpCmd : commands) { os.writeBytes(tmpCmd + "\n"); } os.flush(); // make sure the buffered commands actually reach the shell } catch (IOException e) { e.printStackTrace(); } ```
Regarding problem 1: > > Each time a command is to be executed, I end up seeking SuperUser permissions (second line of code). And I'd like to do away with this. > > > Thanks to Xonar's suggestion from another answer: > > After you issued a command you should echo a magic line like "---EOF---" and stop reading input after reading that line. > > > Solution in Kotlin: ``` private lateinit var suProcess: Process private lateinit var outputStream: DataOutputStream private fun getSu(): Boolean { return try { suProcess = Runtime.getRuntime().exec("su") outputStream = DataOutputStream(suProcess.outputStream) true } catch (e: Exception) { e.printStackTrace() false } } private fun sudo(command: String): List<String>? { return try { outputStream.writeBytes("$command\n") outputStream.flush() outputStream.writeBytes("echo ---EOF---\n") outputStream.flush() val reader = suProcess.inputStream.bufferedReader() val result = mutableListOf<String>() while (true) { val line = reader.readLine() if (line == "---EOF---") break result += line } result } catch (e: Exception) { e.printStackTrace() null } } private fun exitTerminal() { try { outputStream.writeBytes("exit\n") outputStream.flush() suProcess.waitFor() } catch (e: Exception) { e.printStackTrace() } finally { outputStream.close() } } //Activity method override fun onDestroy() { super.onDestroy() exitTerminal() } ```
8,673,016
I've been trying to build a Terminal Emulator for Android. Being pretty new to this, my idea was to execute each command and store the output in a file, whose contents would be displayed after each execution. *Pseudo Code :* ``` public Boolean execCommands(String command) { try { rt = Runtime.getRuntime(); process = rt.exec("su"); DataOutputStream os = new DataOutputStream(process.getOutputStream()); os.writeBytes("echo $ \""+command+ "\" >> /sdcard/Android/data/terminalemulatorlog.txt\n\n\n"); /**** Note : String command = (EditText)findViewById(R.id.command).getText().toString(); ****/ os.flush(); os.writeBytes("exit\n"); os.flush(); process.waitFor(); } // Error Handling displayOutput(); //Loads and displays the Text File (/sdcard/Android/data/terminalemulatorlog.txt) return true; } ``` This piece of code works except for a few special commands (Eg. 'clear'). But what I'm more concerned about are the following problems : 1. Each time a command is to be executed, I end up seeking SuperUser permissions (second line of code). And I'd like to do away with this. 2. In cases when the user enters one command followed by another, Such as : ``` cd /sdcard touch File.txt ``` The *File.txt* is created in '/' and not in '/sdcard'. As of now to avoid this, I'm keeping a track of all the 'cd' commands to figure out what the present working directory is. And I'm hoping that there is a better way around this. I'd be grateful if someone could help me out here.
2011/12/29
[ "https://Stackoverflow.com/questions/8673016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1069405/" ]
This is a bit late, but here are a few ways of doing this. 1) Instead of using su as a starting point, use /system/bin/sh, and after calling ``` rt.exec("/system/bin/sh"); ``` you should hold onto the OutputStream and InputStream to give further commands. After you issue a command you should echo a magic line like "---EOF---" and stop reading input after reading that line. If you don't do this you'll end up with the read function from the InputStream blocking. 2) Pipe the data to a native process you've written that simply moves the data on to your Android application with a terminating character or string attached to the end. I am not entirely sure how to do this, but it is essentially the same as the previous method; it just relies on your native application as a middle man. This will get you close to a functioning "Terminal Emulator". 3) If you want a true Terminal Emulator then there's no other way to do it than using a native application that opens a connection to a pseudoterminal. Here's some basic information on how to open a pty: [link](http://www.kernel.org/doc/man-pages/online/pages/man3/openpty.3.html) Terminal Emulator is an open source project that uses this technique. Have a look [here](http://www.google.com/url?sa=t&rct=j&q=android%20terminal%20emulator%20git&source=web&cd=1&cad=rja&ved=0CCcQFjAA&url=https://github.com/jackpal/Android-Terminal-Emulator&ei=CUuGUMKFCdOFhQeRuYDICA&usg=AFQjCNEEkG6vlDB_xRBgVVxfE6QOOBER0w)
Regarding problem 1: > > Each time a command is to be executed, I end up seeking SuperUser permissions (second line of code). And I'd like to do away with this. > > > Thanks to Xonar's suggestion from another answer: > > After you issued a command you should echo a magic line like "---EOF---" and stop reading input after reading that line. > > > Solution in Kotlin: ``` private lateinit var suProcess: Process private lateinit var outputStream: DataOutputStream private fun getSu(): Boolean { return try { suProcess = Runtime.getRuntime().exec("su") outputStream = DataOutputStream(suProcess.outputStream) true } catch (e: Exception) { e.printStackTrace() false } } private fun sudo(command: String): List<String>? { return try { outputStream.writeBytes("$command\n") outputStream.flush() outputStream.writeBytes("echo ---EOF---\n") outputStream.flush() val reader = suProcess.inputStream.bufferedReader() val result = mutableListOf<String>() while (true) { val line = reader.readLine() if (line == "---EOF---") break result += line } result } catch (e: Exception) { e.printStackTrace() null } } private fun exitTerminal() { try { outputStream.writeBytes("exit\n") outputStream.flush() suProcess.waitFor() } catch (e: Exception) { e.printStackTrace() } finally { outputStream.close() } } //Activity method override fun onDestroy() { super.onDestroy() exitTerminal() } ```
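The same end-of-output marker trick can be sketched in Python for comparison; here plain `sh` stands in for `su` so the sketch runs unprivileged, and the marker string and helper names are arbitrary, not part of any Android API:

```python
import subprocess

MARKER = '---EOF---'

def open_shell(cmd='sh'):
    # One long-lived shell; write commands to its stdin, read its stdout.
    return subprocess.Popen([cmd], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)

def run(shell, command):
    """Send one command, echo the marker, then read lines until the
    marker shows up, so the read never blocks waiting for more output."""
    shell.stdin.write(command + '\n')
    shell.stdin.write('echo ' + MARKER + '\n')
    shell.stdin.flush()
    lines = []
    while True:
        raw = shell.stdout.readline()
        if not raw:          # shell exited unexpectedly
            break
        line = raw.rstrip('\n')
        if line == MARKER:
            break
        lines.append(line)
    return lines
```

Each `run` call reuses the single shell process, which is the point of the technique: no new `su`/`sh` prompt per command, and the working directory survives between commands.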
57,808,701
I have setup a cucumber project in java, in my Eclipse IDE I am able to run my feature file directly and the tests will complete. However when I run them as JUnit tests they don't run, in the console they appear as ``` @When("^user navigates to Login Page$") public void user_navigates_to_Login_Page() throws Throwable { // Write code here that turns the phrase above into concrete actions throw new PendingException(); } ``` and if I double click the step in the JUnit tab I get the following message "Test class not found in selected Project" My test runner class looks like this, ``` package com.bsautoweb.runner; import java.io.File; import org.junit.AfterClass; import org.junit.runner.RunWith; import com.cucumber.listener.Reporter; import cucumber.api.CucumberOptions; import cucumber.api.junit.Cucumber; @RunWith(Cucumber.class) @CucumberOptions(glue = {"src/test/java/com/bsautoweb/stepdefinitions"}, features = {"src/test/resources/features/"}, plugin = "com.cucumber.listener.ExtentCucumberFormatter:target/cucumber-reports/report.html", monochrome = true ) public class Testrunner { @AfterClass public static void writeExtentReport() { Reporter.loadXMLConfig(new File("config/report.xml")); } } ``` My folder structure looks like this [![folder structure](https://i.stack.imgur.com/8yr7Y.png)](https://i.stack.imgur.com/8yr7Y.png) It seems that JUnit is ignoring my glue code. Even if I enter an invalid path, it doesn't complain.
2019/09/05
[ "https://Stackoverflow.com/questions/57808701", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3542013/" ]
Set the glue option to `com/bsautoweb/stepdefinitions`, or in Java package style, `com.bsautoweb.stepdefinitions`.
Give glue = "com.bsautoweb.stepdefinitions"
2,543,272
> > $\mathbb{Z}\_2\times\mathbb{Z}\_2$ has order $4$, the neutral element is $(0\_2,0\_2)$ and the other elements have order $2$. > Therefore, $\mathbb{Z}\_2\times\mathbb{Z}\_2$ is not cyclic,so it is the Klein group. > > > The elements of $\mathbb{Z\_2}$ are $\{0\_2,1\_2\}$ and the order of $1\_2$ is 2. If we take the direct product $\mathbb{Z}\_2\times\mathbb{Z}\_2$ then we have a generator of the Group product $\langle 1\_2,1\_2\rangle$. Question: How can the author state $\mathbb{Z}\_2\times\mathbb{Z}\_2$ has order $4$? Is $\langle 1\_2,1\_2\rangle$ not a generator of order 2 of $\mathbb{Z}\_2\times\mathbb{Z}\_2$?
2017/11/29
[ "https://math.stackexchange.com/questions/2543272", "https://math.stackexchange.com", "https://math.stackexchange.com/users/400711/" ]
The order of a group is the number of elements in it. $\mathbb{Z}\_2\times\mathbb{Z}\_2$ has four elements: $(0,0), (0,1), (1,0), (1,1)$. But as you have seen, each element has order $\le 2$. (This, in particular, proves that $\mathbb{Z}\_2\times\mathbb{Z}\_2$ is not isomorphic to $\mathbb{Z}\_4$.)
The elements of $\mathbb{Z}\_2 \times \mathbb{Z}\_2$ are (0,0), (0,1), (1,0), (1,1). There are 4 of them; thus the order of the group is 4.
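Written out, the check behind the order-4 and order-2 claims is just three additions (a sketch in the same additive notation as above): $(1,0)+(1,0)=(0,0)$, $(0,1)+(0,1)=(0,0)$, $(1,1)+(1,1)=(0,0)$. So each of the three non-identity elements has order $2$, and no single element generates all four elements, which is why the group has order $4$ without being cyclic.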
44,497,718
I'm just practising with python. I have a dictionary in the form: ``` my_dict = [{'word': 'aa', 'value': 2}, {'word': 'aah', 'value': 6}, {'word': 'aahed', 'value': 9}] ``` How would I go about ordering this dictionary such that if I had thousands of words I would then be able to select the top 100 based on their value ranking? e.g., from just the above example: ``` scrabble_rank = [{'word': 'aahed', 'rank': 1}, {'word': 'aah', 'rank': 2}, {'word': 'aa', 'rank': 3}] ```
2017/06/12
[ "https://Stackoverflow.com/questions/44497718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Firstly, that's not a dictionary; it's a list of dictionaries. Which is good, because dictionaries are unordered, but lists are ordered. You can sort the list by each entry's `value` element by using it as a key to the sort function (with `reverse=True` to put the highest values first): ``` my_dict.sort(key=lambda x: x['value'], reverse=True) ```
Is this what you are looking for: ``` scrabble_rank = [{'word':it[1], 'rank':idx+1} for idx,it in enumerate(sorted([[item['value'],item['word']] for item in my_dict],reverse=True))] ```
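Putting the sort and the ranking together, here is a minimal sketch of the top-N selection the question asks for (the names `ranked` and `top_n` are illustrative, not from the answers):

```python
my_dict = [{'word': 'aa', 'value': 2},
           {'word': 'aah', 'value': 6},
           {'word': 'aahed', 'value': 9}]

# Sort descending by 'value', keep the top N, then attach 1-based ranks.
top_n = 100
ranked = sorted(my_dict, key=lambda d: d['value'], reverse=True)[:top_n]
scrabble_rank = [{'word': d['word'], 'rank': i}
                 for i, d in enumerate(ranked, start=1)]

print(scrabble_rank)
# [{'word': 'aahed', 'rank': 1}, {'word': 'aah', 'rank': 2}, {'word': 'aa', 'rank': 3}]
```

With thousands of words, `sorted(...)[:100]` stays fast; `heapq.nlargest` (shown in another answer) avoids sorting the full list when N is much smaller than the input.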
44,497,718
I'm just practising with python. I have a dictionary in the form: ``` my_dict = [{'word': 'aa', 'value': 2}, {'word': 'aah', 'value': 6}, {'word': 'aahed', 'value': 9}] ``` How would I go about ordering this dictionary such that if I had thousands of words I would then be able to select the top 100 based on their value ranking? e.g., from just the above example: ``` scrabble_rank = [{'word': 'aahed', 'rank': 1}, {'word': 'aah', 'rank': 2}, {'word': 'aa', 'rank': 3}] ```
2017/06/12
[ "https://Stackoverflow.com/questions/44497718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Firstly, that's not a dictionary; it's a list of dictionaries. Which is good, because dictionaries are unordered, but lists are ordered. You can sort the list by each entry's `value` element by using it as a key to the sort function (with `reverse=True` to put the highest values first): ``` my_dict.sort(key=lambda x: x['value'], reverse=True) ```
Using `Pandas` Library: ``` import pandas as pd ``` There is this one-liner: ``` scrabble_rank = pd.DataFrame(my_dict).sort_values('value', ascending=False).reset_index(drop=True).reset_index().to_dict(orient='records') ``` It outputs: ``` [{'index': 0, 'value': 9, 'word': 'aahed'}, {'index': 1, 'value': 6, 'word': 'aah'}, {'index': 2, 'value': 2, 'word': 'aa'}] ``` Basically it reads your records into a DataFrame, then it sorts by `value` in descending order, then it drops the original index (order), and it exports as records (your previous format).
44,497,718
I'm just practising with python. I have a dictionary in the form: ``` my_dict = [{'word': 'aa', 'value': 2}, {'word': 'aah', 'value': 6}, {'word': 'aahed', 'value': 9}] ``` How would I go about ordering this dictionary such that if I had thousands of words I would then be able to select the top 100 based on their value ranking? e.g., from just the above example: ``` scrabble_rank = [{'word': 'aahed', 'rank': 1}, {'word': 'aah', 'rank': 2}, {'word': 'aa', 'rank': 3}] ```
2017/06/12
[ "https://Stackoverflow.com/questions/44497718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Firstly, that's not a dictionary; it's a list of dictionaries. Which is good, because dictionaries are unordered, but lists are ordered. You can sort the list by each entry's `value` element by using it as a key to the sort function (with `reverse=True` to put the highest values first): ``` my_dict.sort(key=lambda x: x['value'], reverse=True) ```
You can use [heapq](https://docs.python.org/3/library/heapq.html): ``` import heapq my_dict = [{'word': 'aa', 'value': 2}, {'word': 'aah', 'value': 6}, {'word': 'aahed', 'value': 9}] # Select the top 3 records based on `value` values_sorted = heapq.nlargest(3, # fetch top 3 my_dict, # dict to be used key=lambda x: x['value']) # Key definition print(values_sorted) [{'word': 'aahed', 'value': 9}, {'word': 'aah', 'value': 6}, {'word': 'aa', 'value': 2}] ```
26,480,289
I run a debian 7. When I launch > > cat /etc/debian\_version > > > I have > > 7.7 > > > and when I launch > > cat /etc/issue > > > the response is > > Debian GNU/Linux 7.6 > > > Do I run 7.7 or 7.6?
2014/10/21
[ "https://Stackoverflow.com/questions/26480289", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3744357/" ]
By default, PHP's `json_encode` will escape `/` characters in strings. This: * Does not change the data. `"\/"` and `"/"` are two different, but valid, and equal, JSON representations of a `/` character. * Lets you use the output as a JavaScript literal, inline in an HTML document, without `</script>` ending the script element in the middle of the string. Whatever your problem is, it has nothing to do with the escaped slashes. If it was, then your generated HTML would have some invalid end tags in it, which would be ignored or treated as text. So you would get more text content in your button than you intended. --- Your problem is this mismatch: ``` $arr['htmlnav']=$htmlnav; response.nav ``` You are writing to `htmlnav` but trying to read from `nav`.
Slashes are being escaped. Simply unescape the string once you receive it on the front-end. Or if you're feeling lazy, you can disable the escaping function in the PHP script: ``` echo json_encode($arr, JSON_UNESCAPED_SLASHES); ```
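The claim that `"\/"` and `"/"` denote the same JSON value can be verified with any conforming JSON parser. A quick sketch in Python (chosen only for illustration; the question's code is PHP/JS, and the sample payload below is hypothetical):

```python
import json

# "\/" and "/" are two spellings of the same JSON string value.
assert json.loads(r'"\/"') == json.loads('"/"') == '/'

# An escaped payload round-trips to identical data: the backslashes
# exist only in the serialized form, never in the decoded string.
payload = json.loads('{"htmlnav": "<a href=\\"\\/home\\">Home<\\/a>"}')
print(payload['htmlnav'])  # <a href="/home">Home</a>
```

So any front-end `JSON.parse` (or jQuery's automatic decoding) already removes the escapes; no manual unescaping of the decoded string is needed.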
10,744,905
``` class User extends ActiveRecord\Model { pubic static $primary_key = 'userId'; private function isUserLoggedIn() {} } ``` The error I get: > > A PHP Error was encountered > > > Severity: Notice > > > Message: Trying to get property of non-object > > > Filename: lib/Model.php > > > Line Number: 830 > > >
2012/05/24
[ "https://Stackoverflow.com/questions/10744905", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1096953/" ]
This is a few months late, but I'm just getting into php-activerecord myself. Your problem might be that you typed "pubic" instead of "public", and php by default doesn't support any pubic variables.
Your problem is going to be in your Users model. It's possible you didn't extend CI\_Model, you didn't call `parent::__construct()` in the Users constructor, or there is some other error in there.
10,744,905
``` class User extends ActiveRecord\Model { pubic static $primary_key = 'userId'; private function isUserLoggedIn() {} } ``` The error I get: > > A PHP Error was encountered > > > Severity: Notice > > > Message: Trying to get property of non-object > > > Filename: lib/Model.php > > > Line Number: 830 > > >
2012/05/24
[ "https://Stackoverflow.com/questions/10744905", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1096953/" ]
This is a few months late, but I'm just getting into php-activerecord myself. Your problem might be that you typed "pubic" instead of "public", and php by default doesn't support any pubic variables.
This is caused by the absence of an **auto-increment** field in your table. Please add an **auto-increment** field. I had faced the same issue.
61,310
I have been a GIS Analyst for over 10 years and am currently studying java (50% through the course). My plan is to develop GIS applications mainly focused towards open source solutions. I would like to develop some form of a portfolio to gain more experience, but I am not sure what the best way to do this would be. I have thought about joining some groups/communities like Geotools, but I am concerned with my lack of experience. I feel I would be more of a hindrance than a benefit. The other idea, is that I could look at developing some in house applications or even some Android application to build up some credibility. If anyone could provide some suggestions, or share their similar experiences, it would be greatly be appreciated. **How should I go about building a portfolio from scratch to further my open-source GIS App career aspirations?**
2013/05/20
[ "https://gis.stackexchange.com/questions/61310", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/18320/" ]
A few ideas come to mind for building your geospatial programming credentials: 1. Create a legacy of solutions and answers on GISse and Stack Overflow. You will notice that many people on GISse creatively and wisely use this forum to further their freelance work. 2. Create a web page or blog to show potential employers what you know. Some of my favorites, and good examples in the GIS world, include [Smathermathers Weblog](http://smathermather.wordpress.com/) and [Spatial Thoughts](http://qgis.spatialthoughts.com/). 3. Do pro bono work for NPOs. Not only is this a great way to give back by helping cash-strapped non-profits, but the work you do for these willing clients looks great on your portfolio/resume. 4. Find programming jobs on freelance sites such as [Elance](https://www.elance.com/r/contractors/q-GIS). Then add these jobs to your portfolio.
I think the way that we create mapping applications is changing fast and the key to success in this industry is being ahead of that curve. For example, 10+ years ago when we wanted a blog we'd get a shared hosting solution, download a blogging platform like Wordpress or Movable Type, install it on the server, buy a domain name, install a theme, bang our heads against a wall, etc. Now we just sign up for a Tumblr or Wordpress hosted account and off we go. The same thing is happening with online GIS. The days of building your own stack from scratch and deploying it to your own server are numbered. A few years from now it will only be the big guns with big budgets and very specific requirements that will still be doing it. I would focus on learning the new cloud based mapping platforms such as ArcGIS Online, [CartoDB](http://www.cartodb.com), [MangoMap](http://www.mangomap.com) and [MapBox](http://www.mapbox.com) inside out. If someone said to me that in 2014 I had the choice between my CV saying that I know how to roll out and tweak web map servers or that I know all of the cloud based GIS systems inside out, then I know which one I would choose by a country mile. I'm a programmer by trade and can tell you that the guys that in 2010 were busy becoming experts on cloud systems such as Amazon EC2 are now the hottest property in the industry whilst DB admins and Java developers are ten a penny.
61,310
I have been a GIS Analyst for over 10 years and am currently studying java (50% through the course). My plan is to develop GIS applications mainly focused towards open source solutions. I would like to develop some form of a portfolio to gain more experience, but I am not sure what the best way to do this would be. I have thought about joining some groups/communities like Geotools, but I am concerned with my lack of experience. I feel I would be more of a hindrance than a benefit. The other idea, is that I could look at developing some in house applications or even some Android application to build up some credibility. If anyone could provide some suggestions, or share their similar experiences, it would be greatly be appreciated. **How should I go about building a portfolio from scratch to further my open-source GIS App career aspirations?**
2013/05/20
[ "https://gis.stackexchange.com/questions/61310", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/18320/" ]
A few ideas come to mind for building your geospatial programming credentials: 1. Create a legacy of solutions and answers on GISse and Stack Overflow. You will notice that many people on GISse creatively and wisely use this forum to further their freelance work. 2. Create a web page or blog to show potential employers what you know. Some of my favorites, and good examples in the GIS world, include [Smathermathers Weblog](http://smathermather.wordpress.com/) and [Spatial Thoughts](http://qgis.spatialthoughts.com/). 3. Do pro bono work for NPOs. Not only is this a great way to give back by helping cash-strapped non-profits, but the work you do for these willing clients looks great on your portfolio/resume. 4. Find programming jobs on freelance sites such as [Elance](https://www.elance.com/r/contractors/q-GIS). Then add these jobs to your portfolio.
> > but I am concerned with my lack of experience. I feel I would be more of a hindrance than a benefit. > > > My experience is that Open Source communities do not look at things this way at all. While your Java experience might not exactly meet the standards of the project, there are always tasks to be done. Mundane tasks like filing bugs, testing etc. are things that anyone can do, and this is a great way to contribute. And, while your expertise in Java might not be that good, you are in this to learn, so try some simple tasks (look through their issue tracker), solve them and submit a patch. Most teams welcome new contributors, and if they have the time they will probably guide you in the right direction. Your GIS experience can also be a great benefit to open source projects: try to look at the project from a "professional GIS analyst" viewpoint and suggest new features. You could also try implementing them and then ask for ideas for improvement. This may be a great way to get to know the core developers. In general: do not let your lack of experience stop you from contributing. I think this mentality is a big "threat" to open source projects; people feel they have to be experts in order to contribute. In most cases, all that is needed is the will to contribute and to learn. And yes, after some time you will get the experience, and being an active developer on an open source project (of some size) is a great asset when applying for jobs in software development. Good luck!
61,310
I have been a GIS Analyst for over 10 years and am currently studying java (50% through the course). My plan is to develop GIS applications mainly focused towards open source solutions. I would like to develop some form of a portfolio to gain more experience, but I am not sure what the best way to do this would be. I have thought about joining some groups/communities like Geotools, but I am concerned with my lack of experience. I feel I would be more of a hindrance than a benefit. The other idea, is that I could look at developing some in house applications or even some Android application to build up some credibility. If anyone could provide some suggestions, or share their similar experiences, it would be greatly be appreciated. **How should I go about building a portfolio from scratch to further my open-source GIS App career aspirations?**
2013/05/20
[ "https://gis.stackexchange.com/questions/61310", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/18320/" ]
I think the way that we create mapping applications is changing fast and the key to success in this industry is being ahead of that curve. For example 10+ years ago when we wanted a blog we get a shared hosting solution, download a blogging platform like Wordpress or Movable Type install it on the server, buy a domain name, install a theme, bang our head against a wall etc, etc. Now we just sign-up for a Tumblr or Wordpress hosted account and off we go. The same thing is happening with online GIS. The days of building your own stack from scratch and deploying it to your own server are numbered. A few years from now it will only be the big guns with big budgets and very specific requirements that will still be doing it. I would focus on learning the new cloud based mapping platforms such as ArcGIS Online, [CartoDB](http://www.cartodb.com), [MangoMap](http://www.mangomap.com) and [MapBox](http://www.mapbox.com) inside out. If someone said to me that in 2014 I had the choice between my CV saying that I know how to roll out and tweak web map servers or I know all of the cloud based GIS systems inside out then I know which one I would choose by a country mile. I'm a programmer by trade and can tell you that the guys that in 2010 were busy becoming experts on cloud systems such as Amazon EC2 are now the hottest property in the industry whilst DB admins and Java developers are ten a penny.
> > but I am concerned with my lack of experience. I feel I would be more of a hindrance than a benefit. > > > My experience is that Open Source communities do not look at things this way at all. While your Java experience might not exactly meet the standards of the project, there are always tasks to be done. Mundane tasks like filing bugs, testing etc. are things that anyone can do, and this is a great way to contribute. And, while your expertise in Java might not be that good, you are in this to learn, so try some simple tasks (look through their issue tracker), solve them and submit a patch. Most teams welcome new contributors, and if they have the time they will probably guide you in the right direction. Your GIS experience can also be a great benefit to open source projects: try to look at the project from a "professional GIS analyst" viewpoint and suggest new features. You could also try implementing them and then ask for ideas for improvement. This may be a great way to get to know the core developers. In general: do not let your lack of experience stop you from contributing. I think this mentality is a big "threat" to open source projects; people feel they have to be experts in order to contribute. In most cases, all that is needed is the will to contribute and to learn. And yes, after some time you will get the experience, and being an active developer on an open source project (of some size) is a great asset when applying for jobs in software development. Good luck!
365,482
I looked it up and most forums link to <http://semarch.linguistics.fas.nyu.edu/barker/Syllables/index.txt>, an NYU site that no longer works. I would like to know how many unique syllables are used in the English dictionary (not possible syllables, but actually used syllables).
2016/12/28
[ "https://english.stackexchange.com/questions/365482", "https://english.stackexchange.com", "https://english.stackexchange.com/users/212663/" ]
To answer your question, according to the paper at that URL: How many unique syllables are used in the English language? 15,831. I visited that URL 5 years ago in a wild search for a syllabary of the English language. Anyway, you can still view the paper using the Wayback Machine on Archive.org. It went offline somewhere between Sept 23 and Oct 8th, 2016. Here's a [link to the last snapshot](https://web.archive.org/web/20160923005626/http://semarch.linguistics.fas.nyu.edu:80/barker/Syllables/index.txt).
The main question is only really answerable for a specific dialect and accent of English. "American English" has a lot of regional variations, and also variations by [register](https://en.wikipedia.org/wiki/Register_(sociolinguistics)). The basic reason for this is that English is a [pluricentric language](https://en.wikipedia.org/wiki/Pluricentric_language), and even within a supposedly standardised version, it is very much permissible to borrow words from other languages and coin new words (often from roots in other languages). These new words often have pronunciations that are mangled versions of the original word or root, so the inventory of syllables is subject to random enlargement by users of the language. English dictionaries are descriptive, not prescriptive.
53,706
> > By faith **Abel** offered to God a better sacrifice than Cain, through > which he obtained the testimony that he was righteous, God testifying > about his gifts, and **through faith**, **though he is dead, he still > speaks.** Hebrews 11:4 (NASB) > > > 1. How does he speak even while he's dead?
2020/12/16
[ "https://hermeneutics.stackexchange.com/questions/53706", "https://hermeneutics.stackexchange.com", "https://hermeneutics.stackexchange.com/users/2577/" ]
The author of Hebrews 11:4 is very likely referring/alluding to Gen 4:10 - > > “What have you done?” replied the LORD. “The voice of your brother’s > blood cries out to Me from the ground. > > > This "voice" is clearly not a literal voice but the exemplary life of Abel, though dead, remains in people's memories to show the glory of God. In fact, the NT continues to use the example of Abel's righteousness to teach us because he is always called "righteous": * Matt 23:35 - And so upon you will come all the righteous blood shed on earth, from the **blood of righteous Abel** to the blood of Zechariah son of Berechiah, whom you murdered between the temple and the altar. * 1 John 3:12 - Do not be like Cain, who belonged to the evil one and murdered his brother. And why did Cain slay him? Because his own deeds were evil, while those of his brother were **righteous**. * Heb 11:4 - By faith Abel offered to God a more excellent sacrifice than Cain, by which he obtained witness that **he was righteous**, God testifying of his gifts: and by it he being dead yet speaks. Thus, Hebrews speaks of the example of the righteousness of Abel still teaching us about the benefits of faith in God. Finally, note that the same author of Hebrews also mentions Abel in a similar context in Heb 12:24 - > > to Jesus the mediator of a new covenant, and to the sprinkled blood > that speaks a better word than the blood of Abel. > > > Thus, again, an inanimate object, "the sprinkled blood of the new covenant", speaks. Thus, we have another example of the personification of the inanimate. That is, the Bible authors were capable of figurative language. ==================================== APPENDIX - text matter There is a slight variation in the Greek text of Heb 11:4 - the final word in most versions is λαλεῖ but some have λαλεῖται. Barnes comments on this as follows: > > And by it he, being dead, yet speaketh - Margin, "Is yet spoken of." 
> This difference of translation arises from a difference of reading in > the mss. That from which the translation in the text is derived, is > λαλεῖ lalei - "he speaketh." That from which the rendering in the > margin is derived, is λαλεῖται laleitai - "is being spoken of;" that > is, is "praised or commended." The latter is the common reading in the > Greek text, and is found in Walton, Wetstein, Matthzei, Titman, and > Mill; the former is adopted by Griesbach, Koppe, Knapp, Grotius, > Hammond, Storr, Rosenmuller, Prof. Stuart, Bloomfield, and Hahn, and > is found in the Syriac and Coptic, and is what is favored by most of > the Fathers. See "Wetstein." The authority of manuscripts is in favor > of the reading λαλεῖται laleitai - "is spoken of." It is impossible, > in this variety of opinion, to determine which is the true reading, > and this is one of the cases where the original text must probably be > forever undecided. > > >
**What does it mean that ''though [Abel] is dead, he still speaks'' in Hebrews 11:4?** You're right: Abel has been dead for thousands of years and not a single word that he may have said is recorded in the scripture. So then how does he speak to us? Abel, although dead, speaks to us through his faith; he is the first human to develop this excellent quality. His faith must indeed have been very powerful, so that God, by means of his spirit, inspired Paul to write about it and have it recorded for us to read today. If we learn from his faith and seek to imitate it, then the record of Abel is speaking to us in a very real and effective way.
13,059,096
Z3 currently supports the DIMACS format for input. Is there any way to output the DIMACS format for the problem before solution? I mean converting the problem to a system CNFs and output it in a DIMACS format. If not, any ideas towards this direction would be more than helpful.
2012/10/24
[ "https://Stackoverflow.com/questions/13059096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1772662/" ]
The DIMACS format is very primitive, it supports only Boolean variables. Z3 does not reduce every problem into SAT. Some problems are solved using a propositional SAT solver, but this is not the rule. This usually only happens if the input contains only Boolean and/or Bit-vector variables. Moreover, even if the input problem contains only Boolean and Bit-vector variables, there is no guarantee that Z3 will use a pure SAT solver to solve it. That being said, you can use the [tactic framework](http://rise4fun.com/Z3Py/tutorial/strategies) to control Z3. For example, for Bit-vector problems, the following tactic will convert it into a propositional formula in CNF format. It should be straightforward to convert it into DIMACS. Here is the example. You can try it online at: <http://rise4fun.com/Z3Py/E1s> ``` x, y, z = BitVecs('x y z', 16) g = Goal() g.add(x == y, z > If(x < 0, x, -x)) print g # t is a tactic that reduces a Bit-vector problem into propositional CNF t = Then('simplify', 'bit-blast', 'tseitin-cnf') subgoal = t(g) assert len(subgoal) == 1 # Traverse each clause of the first subgoal for c in subgoal[0]: print c ```
Thanks to Leonardo's answer I came up with this code that will do what you want: ``` private static void Output(Context ctx,Solver slv) { var goal = ctx.MkGoal(); goal.Add(slv.Assertions); var applyResult = ctx.Then(ctx.MkTactic("simplify"), ctx.MkTactic("bit-blast"), ctx.MkTactic("tseitin-cnf")).Apply(goal); Debug.Assert(applyResult.Subgoals.Length==1); var map = new Dictionary<BoolExpr,int>(); foreach (var f in applyResult.Subgoals[0].Formulas) { Debug.Assert(f.IsOr); foreach (var e in f.Args) if (e.IsNot) { Debug.Assert(e.Args.Length==1); Debug.Assert(e.Args[0].IsConst); map[(BoolExpr)e.Args[0]] = 0; } else { Debug.Assert(e.IsConst); map[(BoolExpr)e] = 0; } } var id = 1; foreach (var key in map.Keys.ToArray()) map[key] = id++; using (var fos = File.CreateText("problem.cnf")) { fos.WriteLine("c DIMACS file format"); fos.WriteLine($"p cnf {map.Count} {applyResult.Subgoals[0].Formulas.Length}"); foreach(var f in applyResult.Subgoals[0].Formulas) { foreach (var e in f.Args) if (e.IsNot) fos.Write($"-{map[(BoolExpr)e.Args[0]]} "); else fos.Write($"{map[(BoolExpr)e]} "); fos.WriteLine("0"); } } } ``` For it to work you should add all your constraints to the solver directly, by calling `slv.Assert(...)`. Note that negated literals are written with a leading minus sign, as the DIMACS format requires.
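The final serialization step both answers end with is independent of Z3 itself. Here is a minimal Python sketch (the helper `to_dimacs` is hypothetical, not part of Z3's API) that writes clauses given as lists of signed integers, where a negative integer denotes a negated literal:

```python
def to_dimacs(num_vars, clauses):
    """Serialize CNF clauses into DIMACS text.

    clauses: list of clauses, each a list of signed ints;
    negative = negated literal. Every clause line ends with 0.
    """
    lines = ['c generated CNF', f'p cnf {num_vars} {len(clauses)}']
    for clause in clauses:
        lines.append(' '.join(str(lit) for lit in clause) + ' 0')
    return '\n'.join(lines)

# (x1 or not x2) and (x2 or x3)
print(to_dimacs(3, [[1, -2], [2, 3]]))
```

Mapping each Boolean constant from the bit-blasted subgoal to a positive integer (as the C# snippet's `map` does) and feeding the result to this writer produces a file any DIMACS SAT solver accepts.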
14,302
Before I installed Ubuntu on my old computer, I took backups of both partitions with dd. Long story short: I now want to restore these two files, `XP.dd` and `Storage.dd`, to the computer again. How do I do this? Thanks! This question is related: [NTFS backup image wont mount](https://askubuntu.com/questions/4584/ntfs-backup-image-wont-mount)
2010/11/21
[ "https://askubuntu.com/questions/14302", "https://askubuntu.com", "https://askubuntu.com/users/1439/" ]
When people use `dd` to backup an image of a partition, they often fail to backup the partition table as well. If you didn't, you can try creating partitions of similar sizes with `fdisk` and set the partition flags appropriately. Then `dd` them back and run whatever disk checking utility suits the partition in question from a bootable CD. Slightly more detailed: ``` $ ls -l imagename #to get the size $ sudo fdisk /dev/sdX ``` **Options:** * `n`: adds a new partition (follow the prompts, make a modestly larger partition and resize if the fix works). * `t`: sets the partition type in the partition table. * `w`: writes the changes to the disk. * `m`: lists options as the program informs you. Then run: ``` sudo dd if=imagename of=/dev/sdXY ``` Run the appropriate disk check utility after the `dd` restore is complete. There are better ways to do this, they aren't horribly relevant and are overly complicated.
I think you should just *dd back* the image data to the partition. Example (boot with a live Ubuntu; here /dev/hda1 is expected to be the restorable partition; unmount it first if it is mounted): ``` # umount /dev/hda1 # dd if=/path/to/image.dd of=/dev/hda1 # mount /dev/hda1 /path/to/mount ``` After you have checked the data, reboot, or set your fstab correctly. Please give more info for more help!
74,146,558
How can I multiply two elements in a list? I am reading a text file, and printing the following: ``` for i in range(len(listeisotoper)): print("Isotop type:"+listeisotoper[i][0]) print("Isotopisk masse u: "+listeisotoper[i][1]) print("Naturlig forekomst: "+listeisotoper[i][2]) print("xxx"+"g/mol") print("\n") ``` However I cannot fathom how I can multiply `listeisotoper[i][1] * listeisotoper[i][2]` and then have it print the number with decimal points. Any suggestions?
2022/10/20
[ "https://Stackoverflow.com/questions/74146558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9861217/" ]
Probably you are using Selenium 4. If so, `find_element_by_xpath` and all the other `find_element_by_*` methods are not supported by Selenium 4; you have to use the new syntax and add an essential import, as follows: ```py from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By driver_service = Service(executable_path="C:\Program Files (x86)\chromedriver.exe") driver = webdriver.Chrome(service=driver_service) PASSWORD = 'testtes' login_page = 'http://192.168.2.1/login.html' driver.get(login_page) driver.find_element(By.XPATH, "//input[@placeholder='Password']").send_keys(PASSWORD) ```
Try this: ``` from selenium.webdriver.common.by import By driver.find_element(By.XPATH, "//input[@placeholder='Password']").send_keys(PASSWORD) ```
71,674
Ezekiel 20:25-26 > > So I gave them other statutes that were not good and laws through > which they could not live; 26 and I defiled them through their gifts—the > sacrifice of every firstborn—that I might fill them with horror so > they would know that I am the Lord.’ > > > וְגַם-אֲנִי נָתַתִּי לָהֶם, חֻקִּים לֹא טוֹבִים; וּמִשְׁפָּטִים--לֹא > יִחְיוּ, בָּהֶם. וָאֲטַמֵּא אוֹתָם בְּמַתְּנוֹתָם, בְּהַעֲבִיר > כָּל-פֶּטֶר רָחַם: לְמַעַן אֲשִׁמֵּם > > > It is generally understood that the "statutes that were not good and laws through which they could not live" that God gave them are what follows in the next verse "the sacrifice of every firstborn". In other words, God commanded them to sacrifice the firstborn, and gave them horrible laws so as to fill them with horror, because they rebelled against God in Egypt by worshipping idols. This is also supported by the word ואטמא "and I defiled them", i.e., God himself defiled them by commanding them to sacrifice. How are we to understand this shocking claim? Some scholars believe this is a reference to Ex. 13:12, and 22:28. They claim that it was taken literally by Ezekiel (and most Israelites), the human firstborn was to be sacrificed to God (Ex. 13:13 then must be a later modification of the law). Greenberg (The Anchor Bible, perhaps also Zimmerli) however maintains that this is not a reference to Exodus, it is just a reference to a popular belief in Israel that God commanded child sacrifice. Greenberg also writes that according to Ezekiel God directly misleads the people by giving them ambiguous laws so that they can twist it to their liking, and make child sacrifice seem as if God commands it. Is this a correct interpretation of v. 26? How else would you interpret v. 26 that strongly implies that God himself commanded them to sacrifice children? Is it meant to be taken literally? Is it some form of sarcasm perhaps? 
--- Please do not post one line answers, or copy paste bible commentaries, I can look them up myself if I want to. Only post well researched answers and cite your sources or evidence, and if you don't have anything to add please don't feel the need to post anything (feel free to comment though), I'll be happy if my question stays unanswered than to have a bunch of low quality answers associated with it.
2021/12/06
[ "https://hermeneutics.stackexchange.com/questions/71674", "https://hermeneutics.stackexchange.com", "https://hermeneutics.stackexchange.com/users/15819/" ]
Based on context alone, the interpretation stated in the OP’s question is problematic. Ezekiel 20:25-26 itself is a puzzling anomaly against the broader context of text. Apart from these two verses, the text otherwise forms one cohesive message that is first meant for the house of Israel (v.27) then to its children (v. 18). The two major points regarding what God desires from Israel are: 1/ They should abide by God’s commandments and keep holy his Sabbath (vv. 11, 16, 19-20) 2/ They should refrain from all idol worship and practices (vv. 7, 16, 18). Specifically, they are not to engage in child-sacrifice (v. 31) Verses 25-26, however, break the flow of the text and present a problem for both translators and interpreters alike. An article by Jewish scholar Hyam Maccoby offers some valuable insight. Maccoby presents the problem in this way: > > The difficulties of these translations are obvious. Ezekiel has just > been complaining that the Israelites have not kept the statutes and > laws. Now he says, apparently, that the statutes and laws were not > good. In that case, why complain that the Israelites did not keep > them? Or were there two sets of laws, one good, which the Israelites > did not keep, and the other bad, given to them as a punishment for not > keeping the first set? Where in the Torah or elsewhere is there any > evidence for two such sets of laws? > > > <http://jtr.shanti.virginia.edu/statutes-that-were-not-good-ezekiel-2025-26-traditional-interpretations/#content> > > > The various interpretations offer different solutions for reconciling this problem. Commentaries generally take the words of verse 25 literally. The problem with this approach lies in the challenge of explaining how God could give bad statutes and ordinances. Though commentators take pains to differentiate these laws from those of God (v.11), it does not resolve the problem that they are said to be given by God. 
In the article above, Maccoby outlines an alternate interpretation belonging to Meir Loeb Malbim (1809-1879). Malbim interprets verse 25 as being sarcastic and as representing the views of those who rebelled against the laws of God. > > Malbim’s general approach to the text, investing it with fierce > sarcasm, is surely far more convincing than the standard translations. > The notion of a God who deliberately gives bad laws is surely > nonsensical, but that Ezekiel should attribute to the rebels the view > that the laws of God, as conveyed by the prophet, are bad is perfectly > understandable. > > > Malbim’s interpretation hinges on an important textual issue: > > It is in fact an important problem of the text whether the words > beha'avir kol peter racham refer to idolatrous human sacrifice or to > the Torah practice of sacrificing the firstborn of animals only. The > translators of AV and NEB have plumped for the former alternative, > while JPS leaves the matter indeterminate. In favour of the idolatry > alternative is the use of the same verb in a clearly idolatrous > context in v. 31. Also the use of the verb ha`avir in almost all cases > refers to idolatrous worship. > > > But there is an important exception, and this is certainly what > determined Malbim to adopt his interpretation. In Exodus 13:12, we > find not only the verb, but the whole phrase. Malbim was well aware > that Ezekiel is here repeating a liturgical phrase from Israelite > worship, and such a phrase cannot be ascribed to idolatrous procedure, > in reference to which the expression kol peter rechem is never used. > He therefore felt forced to interpret the rebellious Israelites as > complaining about the Torah law as an impediment to the performance of > idolatrous rites. 
> > > Actually, modern scholarship confirms the rebels’ sense of history, if > not their morality, for the biblical denunciation of human firstborn > sacrifice is now seen by scholars as a reform of previous Israelite > practice. The text of Exodus 13:12-13, while it rules out sacrifice of > the human firstborn, shows a law that has been subject to evolution. > The sanctification of the firstborn requiring redemption, the sparing > of the Israelite firstborn at the time of the death of the Egyptian > firstborn, even the aborted sacrifice of Isaac by Abraham, all show a > process of accommodation and reform bespeaking an original, primitive > pre-Biblical rite of firstborn sacrifice. The very fact that the term > ha`avir has survived in Exodus for non-idolatrous practice, though > elsewhere this term is used exclusively in a context of idolatry, > shows that there is more continuity between the two practices than was > later acknowledged. The biblical writers, including Ezekiel, denounced > human sacrifice as idolatrous (see especially the denunciation of the > Canaanites in Leviticus 21), but they were struggling with a mode of > worship that had an aura of ancient authority as well as a mystical > rationale of its own. > > > Maccoby’s article has helped me to come to my own understanding of Ez 20:25-26, one that deviates from those presented in the article. An additional excerpt from that article plays an important role in shaping my thought: > > Malbim realised that Ezekiel was disputing with people who had their > own critique of the commandments of the Torah, rather than with mere > idolaters. But Malbim may have overlooked the extent to which > Ezekiel’s opponents were concerned with exegesis rather than criticism > of the Torah. There is also a question about how far the text of > Exodus was available to Ezekiel and to his opponents. 
This question > leads to the possibility that their dispute was not merely exegetical > but redactional: they may have been arguing about different versions > of Exodus current at that time, only one of which explicitly banned > human firstborn sacrifice (i.e. one contained Exodus 13: 13b, `and > every firstborn of your sons you shall redeem’, while another, cited > by Ezekiel’s opponents, did not). > > > “Ezekiel was disputing with people who had their own critique of the commandments of the Torah.” This point is important, I think, not just to the verses in question but to the chapter in general, the first verse of which states that the elders of Israel came to inquire of Yahweh. What they wanted to discuss may very well be the issue of child-sacrifice. This connection between their inquiry and the practice of child-sacrifice is more directly made in v. 31: > > And when you offer your gifts, when you make your sons pass through > the fire, you are defiling yourselves with all your idols to this day. > So shall I be inquired of by you, house of Israel? As I live,” > declares the Lord God, “I certainly will not be inquired of by you. – > v .31 > > > Here is my own creative reconstruction of the meeting between the elders and Ezekiel – The elders came to Ezekiel to challenge his teachings and interpretation of Scripture. Specifically, they questioned whether it was not Ezekiel who was wrong for denouncing the practice of child-sacrifice. Basing their arguments on certain verses from Exodus (13:12, 22:29) and possibly other versions of the Torah, they inquired whether this practice was not commanded by God himself. This is then how the “bad” laws came to be attributed to God. > > “You shall not hold back the offering from your entire harvest and > your wine. The firstborn of your sons you shall give to Me." 
– Ex 22:29 > > > God’s refusal to “be inquired of” by the elders is an indication that they did not inquire in good faith but were only trying to justify their own position. Despite his refusal, the whole of chapter 20 in a way serves as God’s answer to their inquiry. Instead of debating the text, God’s answer lays out the history of his journey with the people of Israel, the focus of which is on God’s unwavering faithfulness and mercy despite Israel’s persistent unfaithfulness and rebellion (vv. 6-8, 13-17, 21-22). Against all the evidence of his holiness and goodness, God expressed his frustration that they still did not know or understand who he is. > > Then you will know that I am the Lord, when I have dealt with you in > behalf of My name, not according to your evil ways or according to > your corrupt deeds, house of Israel,” declares the Lord God.’” – v. 44 > > > Despite everything that I have written, however, I don’t think we can rule out the possibility that Ez 20:25-26 can be understood in its most literal sense. Though God, for the sake of his name, wills only that which is good, there is still a sense in which everything that happens, whether good or bad, must serve God’s good purposes in the end. Thus, even when men rebel against God in the most egregious way, as when they offer up child-sacrifices to their idols, and even though sin is rooted in man’s own nature, intentions, and choices, their actions can still be said to be in accordance with what God has decreed. Herein lies the mystery beyond what the human mind can grasp - that of how God’s omniscience and omnipotence coexist with man’s free will. Easier to understand, perhaps, is how the consequences of men’s actions, the desolation that results from sin (v. 26), serve God’s will and purposes. In the end God proclaims his ultimate sovereignty over all things, so that all would “know that I am the Lord” (vv. 12, 26, 38, 42, 44). 
> > “As for you, house of Israel,” this is what the Lord God says: “Go, > serve, everyone of you his idols; but later you will certainly listen > to Me, and My holy name you will no longer defile with your gifts and > your idols.” – Ez 20:39 > > > As a final thought, I cannot help but notice how relevant the message of Ezekiel 20 is to answering the OP’s question. It serves as a reminder that, in order to properly understand and apply Scripture, we must first be grounded in the memory and knowledge of God’s goodness and mercy.
Not so fast! Note the context in the previous verses of Eze 20:18-24 - > > **18** In the wilderness I said to their children: ‘Do not walk in the statutes of your fathers or keep their ordinances or defile yourselves > with their idols. **19** I am the LORD your God; walk in My statutes, > keep My ordinances, and practice them. **20** Keep My Sabbaths holy, > that they may be a sign between us, so that you may know that I am the > LORD your God.’ > > > **21** But the children rebelled against Me. They did not walk in My statutes or carefully observe My ordinances—though the man who does > these things will live by them—and they profaned My Sabbaths. So I > resolved to pour out My wrath upon them and vent My anger against them > in the wilderness. **22** But I withheld My hand and acted for the > sake of My name, so that it would not be profaned in the eyes of the > nations in whose sight I had brought them out. > > > **23** However, with an uplifted hand I swore to them in the wilderness that I would scatter them among the nations and disperse > them throughout the lands. **24** For they did not practice My > ordinances, but they rejected My statutes and profaned My Sabbaths, > fixing their eyes on the idols of their fathers. > > > Thus, God makes this situation very clear - because the people rejected God and His covenant laws, God allowed them ("gave them over to," v. 25) to pursue profane pagan practices, including child sacrifice. Thus, it was **NOT** the LORD's will that this occur; it was the choice of the people! We see this again in Rom 1:21-24 - > > **21** For although they knew God, they neither glorified Him as God nor gave thanks to Him, but they became futile in their thinking and > darkened in their foolish hearts. **22** Although they claimed to be > wise, they became fools, **23** and exchanged the glory of the > immortal God for images of mortal man and birds and animals and > reptiles. 
**24** Therefore **God gave them over** in the desires of > their hearts to impurity for the dishonoring of their bodies with one > another. > > > This is a perfect example of God being attributed as the cause of something that He does not actually instigate, i.e., the Divine Passive - see appendix below. Thus, in Eze 20:26 we have God being attributed as the cause of something that was actually the choice of the people. **APPENDIX - Divine Passive** Lam 3:38 - Do not both adversity and good come from the mouth of the Most High? The Divine Passive says that because God is omniscient and omnipotent, He is the ultimate cause of all things, even those He does not directly instigate, because God allows them to happen. The idea of the Divine Passive doctrine (as distinct from the grammatical divine passive construction) is one that is not explicit in the Bible but was created to explain the available, apparently contradictory, facts. Here are some examples: * 2 Sam 24:1 vs 1 Chron 21:1 – Who tempted King David to have a census? God or Satan? Both are correct because to the Hebrew mind, God is omniscient and omnipotent and thus events only occur if He allows them. James 1:13 explicitly states that God tempts no one. * Job 2:3 - God says that Satan "incited" God to ruin Job, even though it was Satan that was the direct cause of Job's ruin * 1 Sam 16:14, 16, 18:10, 19:9 – God sent an evil (literally, unclean) spirit on Saul? God does not have an evil spirit to send! Again, the omnipotent God is deemed responsible for that which He does not prevent. * Judges 9:23 has an identical idea of an evil spirit from God. * 1 Kings 22:22, 23 and 2 Chron 18:21, 22 both have a "lying spirit" from the LORD. * Ex 9:12, 10:1, 20, 27, 11:10, 14:8 – God causes Pharaoh to harden his heart??? Clearly not! Compare Ex 8:15, 32, 9:34 where Pharaoh hardens his own heart. 
* Compare Rev 17:1 where God judges the great prostitute, with Rev 17:16, 17 where the great prostitute becomes a victim of her own wicked ways. * Eze 14:9 says, “I the LORD have enticed/deceived that prophet”; whereas James 1:13 says that God does not tempt anyone. This principle can be readily extrapolated to many (not all) other passages where the passive voice is used; e.g., the beatitudes of Matt 5, Rom 3:28, 1 Cor 7:23, Gal 5:13, Eph 2:5, Matt 9:2, 1 Peter 1:18.
2,099,602
Show that the equation below has exactly one root: $$2x+\cos x=0$$ How would I find the root?
2017/01/16
[ "https://math.stackexchange.com/questions/2099602", "https://math.stackexchange.com", "https://math.stackexchange.com/users/375665/" ]
If $f(x) = 2x+\cos x$ has two zeros, $a$ and $b$, then we have, by MVT (or Rolle's Theorem) $$0=\frac{f(b)-f(a)}{b-a} = f'(c)$$ for some $c$ between $a$ and $b$. However, $f'(x) = 2-\sin x$ is always positive, so no such $c$ can exist. To see that there is at least one root, we just plug in $x=10$ and $x=-10$ and apply IVT.
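As a quick numerical sanity check (an added illustration in Python, not part of the original argument), the sign change that the IVT needs is easy to verify:

```python
import math

# f(x) = 2x + cos(x), the function from the question
def f(x):
    return 2 * x + math.cos(x)

# f(-10) < 0 < f(10): by the IVT there is at least one root in (-10, 10).
# f'(x) = 2 - sin(x) >= 1 > 0 everywhere, so there cannot be a second one.
print(f(-10), f(10))
```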
Suppose $2x+\cos x$ had [at least] two roots. Then the mean value theorem implies that the derivative is zero somewhere between those two roots. However, the derivative is $2-\sin x$, which is strictly positive. So this shows the number of roots is either $0$ or $1$. But this doesn't completely answer the question... There is a more direct way. Since the derivative is strictly positive, $2x+\cos x$ is strictly increasing. Using the IVT will give you one root.
2,099,602
Show that the equation below has exactly one root: $$2x+\cos x=0$$ How would I find the root?
2017/01/16
[ "https://math.stackexchange.com/questions/2099602", "https://math.stackexchange.com", "https://math.stackexchange.com/users/375665/" ]
If $f(x) = 2x+\cos x$ has two zeros, $a$ and $b$, then we have, by MVT (or Rolle's Theorem) $$0=\frac{f(b)-f(a)}{b-a} = f'(c)$$ for some $c$ between $a$ and $b$. However, $f'(x) = 2-\sin x$ is always positive, so no such $c$ can exist. To see that there is at least one root, we just plug in $x=10$ and $x=-10$ and apply IVT.
Let $f(x) = 2x + \cos(x)$. Then $f'(x) = 2 - \sin(x) \gt 0$ for all $x$, since $|\sin(x)| \le 1$. This means it crosses the $x$ axis once and never looks back.
2,099,602
Show that the equation below has exactly one root: $$2x+\cos x=0$$ How would I find the root?
2017/01/16
[ "https://math.stackexchange.com/questions/2099602", "https://math.stackexchange.com", "https://math.stackexchange.com/users/375665/" ]
If $f(x) = 2x+\cos x$ has two zeros, $a$ and $b$, then we have, by MVT (or Rolle's Theorem) $$0=\frac{f(b)-f(a)}{b-a} = f'(c)$$ for some $c$ between $a$ and $b$. However, $f'(x) = 2-\sin x$ is always positive, so no such $c$ can exist. To see that there is at least one root, we just plug in $x=10$ and $x=-10$ and apply IVT.
If you differentiate the function you get: $$2-\sin x>0\ \forall x \in \mathbb R$$ So the function is strictly increasing. If you take the limit of $2x+\cos x$ as $x\to\pm\infty$ you get $\pm\infty$; hence, since the function is continuous and strictly increasing, it must intersect the $x$-axis at exactly one point.
2,099,602
Show that the equation below has exactly one root: $$2x+\cos x=0$$ How would I find the root?
2017/01/16
[ "https://math.stackexchange.com/questions/2099602", "https://math.stackexchange.com", "https://math.stackexchange.com/users/375665/" ]
If $f(x) = 2x+\cos x$ has two zeros, $a$ and $b$, then we have, by MVT (or Rolle's Theorem) $$0=\frac{f(b)-f(a)}{b-a} = f'(c)$$ for some $c$ between $a$ and $b$. However, $f'(x) = 2-\sin x$ is always positive, so no such $c$ can exist. To see that there is at least one root, we just plug in $x=10$ and $x=-10$ and apply IVT.
Previous answers and comments explained that the root is unique. Now, the question is: what is the zero of $$f(x)=2x+\cos(x)$$ Equations like this, which mix polynomial and trigonometric terms, do not have closed-form solutions, so numerical methods should be used. The simplest is probably Newton's method which, starting from a guess $x\_0$, will update it according to $$x\_{n+1}=x\_n-\frac{f(x\_n)}{f'(x\_n)}$$ In the present case, this becomes $$x\_{n+1}=\frac{x\_n \sin (x\_n)+\cos (x\_n)}{\sin (x\_n)-2}$$ Being lazy, let us choose $x\_0=0$; then the method will generate the following iterates $$\left( \begin{array}{cc} n & x\_n \\ 0 & 0 \\ 1 & -0.50000000000000000000 \\ 2 & -0.45062669307724304657 \\ 3 & -0.45018364757777474250 \\ 4 & -0.45018361129487381641 \\ 5 & -0.45018361129487357304 \end{array} \right)$$
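The iteration above is easy to reproduce in a few lines of Python (a sketch, using the same lazy starting guess $x\_0=0$):

```python
import math

# One Newton step for f(x) = 2x + cos(x):
# x_{n+1} = x_n - f(x_n)/f'(x_n) = (x_n*sin(x_n) + cos(x_n)) / (sin(x_n) - 2)
def newton_step(x):
    return (x * math.sin(x) + math.cos(x)) / (math.sin(x) - 2)

x = 0.0  # lazy starting guess
for n in range(1, 6):
    x = newton_step(x)
    print(n, x)

# x converges to about -0.45018361129487357, the unique root.
```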
64,296,423
I have a Node (14.3.0) server where I enabled ES6 module imports in my package.json by adding the following line: package.json: `"type": "module",` According to the firebase-admin docs here: <https://firebase.google.com/docs/admin/setup/#node.js> > > If you are using ES2015, you can import the module instead: > > > import \* as admin from 'firebase-admin'; > > > When I use `import * as admin from 'firebase-admin';` I get the following error: > > credential: admin.credential.applicationDefault(), > > TypeError: Cannot read property 'applicationDefault' of undefined > > > It seems that `firebase-admin` isn't imported properly - I have tried removing the `"type": "module"` line in package.json and importing firebase-admin with require: `const admin = require(firebase-admin)` and it works, so my question is - is it possible to import `firebase-admin` in Node using ES6, and if so, how? Below is a complete, minimal, reproduction: server.js ``` import express from 'express'; import * as admin from 'firebase-admin'; const app = express(); const PORT = process.env.PORT || 5000; app.use(express.json()); admin.initializeApp({ credential: admin.credential.applicationDefault(), databaseURL: process.env.FIREBASE_DB_URL, }); app.listen(PORT, () => console.log(`listening on ${PORT}`)); export default app; ``` package.json ``` { "name": "server", "version": "1.0.0", "main": "index.js", "license": "MIT", "dependencies": { "express": "^4.17.1", "firebase-admin": "^9.2.0", }, "type": "module", "scripts": { "dev": "node server.js" }, "engines": { "node": "14.x" } } ``` NOTE: Before running the server, make sure to do in your shell (Mac/Linux): `export GOOGLE_APPLICATION_CREDENTIALS="/your/path/to/service-account-file.json"`
2020/10/10
[ "https://Stackoverflow.com/questions/64296423", "https://Stackoverflow.com", "https://Stackoverflow.com/users/973862/" ]
I asked this question on the `firebase-admin-node` github. Apparently, they hadn't tested imports with Node 14 yet. The answer is simply: ``` import admin from 'firebase-admin' ``` You can see an explanation here: <https://github.com/firebase/firebase-admin-node/issues/1061#event-3868300300>
Use it like this: ``` import * as admin from 'firebase-admin'; const {credential} = admin; ``` This way you will be able to use the functions.
52,049,251
We are starting a new full stack project. It has been decided we are going to use React for the front end which should consume a GraphQL API. We are contemplating two scenarios for the development of this API: * The first one is to build a GraphQL API that uses a REST API as data source using Apollo. * The second one is to build a GraphQL API that uses a database as data source skipping the REST API. What are the advantages and disadvantages of each scenario? Is one better than the other?
2018/08/28
[ "https://Stackoverflow.com/questions/52049251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/535967/" ]
Based on our discussion on gitter, Since you are building your platform from scratch, you don't need to build your GraphQL layer over REST APIs. That will be double work and is not needed. I recommend that you set up [Apollo-Server](https://www.apollographql.com/docs/apollo-server/) which will directly interact with your database(s). You said that you might need to use some legacy APIs for which I recommended Microservices which can easily work with graphql. Regarding PRISMA, it doesn't give you as much flexibility. It also does not allow ACID compliant updates for fields such as increment or decrement a value. [This issue](https://github.com/prisma/prisma/issues/1349) on github can be used to track updates on this feature. PRISMA is good for getting you set up quickly but I(personal opinion) feel that it is not as flexible. Your final question was how do you query database/databases without REST API, for which I suggested querying directly from your server. GraphQL is client side and Apollo will help you to extend it and provide much more features. You don't need to do things much different than what you'd do when creating a REST API other than the fact that you don't need to define separate endpoints for each request. The database interactions remain almost the same other than the fact that you only return/query for the data that you need and this can be done dynamically. From [graphql website](https://graphql.org/) > > Send a GraphQL query to your API and get exactly what you need, nothing more and nothing less. GraphQL queries always return predictable results. Apps using GraphQL are fast and stable because they control the data they get, not the server. > > > Other mentions: [AWS Appsync](https://aws.amazon.com/appsync/)
Using REST to power your GraphQL is like using a to power your Tesla. Simply dumb!! Use a database but only select the fields and records asked by GraphQL query, NOT "select \*". Use Redis cache if you need the speed.
52,049,251
We are starting a new full stack project. It has been decided we are going to use React for the front end which should consume a GraphQL API. We are contemplating two scenarios for the development of this API: * The first one is to build a GraphQL API that uses a REST API as data source using Apollo. * The second one is to build a GraphQL API that uses a database as data source skipping the REST API. What are the advantages and disadvantages of each scenario? Is one better than the other?
2018/08/28
[ "https://Stackoverflow.com/questions/52049251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/535967/" ]
The second scenario is best > > The second one is to build a GraphQL API that uses a database as data > source skipping the REST API. > > > REST and GraphQL can both be operated over HTTP, though GraphQL is protocol agnostic. GraphQL is the new kid on the block, with lots of cool features, and it solves most of the problems faced with REST, such as: 1.) Multiple endpoints to fetch data ``` In REST we have to create multiple API endpoints ``` 2.) Over/Under Fetching ``` Sometimes the API returns extra data that we don't need, or less data than we need, in the response ``` 3.) Network Requests ``` Multiple endpoints lead to more network requests ``` 4.) Error Handling Error handling in REST is pretty straightforward: we simply check the HTTP headers to get the status of a response. Depending on the HTTP status code (404, 503, 500, etc.) we get, we can easily tell what the error is and how to go about resolving it. With GraphQL, on the other hand, when operated over HTTP, we will always get a 200 OK response status. When an error occurs while processing GraphQL queries, the complete error message is sent to the client with the response. 5.) Versioning Often when consuming third-party REST APIs, we see things like v1, v2, v3 etc. which simply indicate the version of the REST API we are using. This leads to code redundancy and less maintainable code. With GraphQL, there is no need for versioning, as we can easily add new fields and types to our GraphQL API without impacting existing queries. Also, we can easily mark fields as deprecated, and those fields will be excluded from the server's response. For a detailed comparison please visit [GraphQl vs Rest](https://blog.pusher.com/rest-versus-graphql/) [or this post](https://medium.com/codingthesmartway-com-blog/rest-vs-graphql-418eac2e3083) [Or Official GraphQL site](https://graphql.org/)
Using REST to power your GraphQL is like using a to power your Tesla. Simply dumb!! Use a database but only select the fields and records asked by GraphQL query, NOT "select \*". Use Redis cache if you need the speed.
52,049,251
We are starting a new full stack project. It has been decided we are going to use React for the front end which should consume a GraphQL API. We are contemplating two scenarios for the development of this API: * The first one is to build a GraphQL API that uses a REST API as data source using Apollo. * The second one is to build a GraphQL API that uses a database as data source skipping the REST API. What are the advantages and disadvantages of each scenario? Is one better than the other?
2018/08/28
[ "https://Stackoverflow.com/questions/52049251", "https://Stackoverflow.com", "https://Stackoverflow.com/users/535967/" ]
Based on our discussion on gitter, Since you are building your platform from scratch, you don't need to build your GraphQL layer over REST APIs. That will be double work and is not needed. I recommend that you set up [Apollo-Server](https://www.apollographql.com/docs/apollo-server/) which will directly interact with your database(s). You said that you might need to use some legacy APIs for which I recommended Microservices which can easily work with graphql. Regarding PRISMA, it doesn't give you as much flexibility. It also does not allow ACID compliant updates for fields such as increment or decrement a value. [This issue](https://github.com/prisma/prisma/issues/1349) on github can be used to track updates on this feature. PRISMA is good for getting you set up quickly but I(personal opinion) feel that it is not as flexible. Your final question was how do you query database/databases without REST API, for which I suggested querying directly from your server. GraphQL is client side and Apollo will help you to extend it and provide much more features. You don't need to do things much different than what you'd do when creating a REST API other than the fact that you don't need to define separate endpoints for each request. The database interactions remain almost the same other than the fact that you only return/query for the data that you need and this can be done dynamically. From [graphql website](https://graphql.org/) > > Send a GraphQL query to your API and get exactly what you need, nothing more and nothing less. GraphQL queries always return predictable results. Apps using GraphQL are fast and stable because they control the data they get, not the server. > > > Other mentions: [AWS Appsync](https://aws.amazon.com/appsync/)
The second scenario is best > > The second one is to build a GraphQL API that uses a database as data > source skipping the REST API. > > > REST and GraphQL can both be operated over HTTP, though GraphQL is protocol agnostic. GraphQL is the new kid on the block, with lots of cool features, and it solves most of the problems faced with REST, such as: 1.) Multiple endpoints to fetch data ``` In REST we have to create multiple API endpoints ``` 2.) Over/Under Fetching ``` Sometimes the API returns extra data that we don't need, or less data than we need, in the response ``` 3.) Network Requests ``` Multiple endpoints lead to more network requests ``` 4.) Error Handling Error handling in REST is pretty straightforward: we simply check the HTTP headers to get the status of a response. Depending on the HTTP status code (404, 503, 500, etc.) we get, we can easily tell what the error is and how to go about resolving it. With GraphQL, on the other hand, when operated over HTTP, we will always get a 200 OK response status. When an error occurs while processing GraphQL queries, the complete error message is sent to the client with the response. 5.) Versioning Often when consuming third-party REST APIs, we see things like v1, v2, v3 etc. which simply indicate the version of the REST API we are using. This leads to code redundancy and less maintainable code. With GraphQL, there is no need for versioning, as we can easily add new fields and types to our GraphQL API without impacting existing queries. Also, we can easily mark fields as deprecated, and those fields will be excluded from the server's response. For a detailed comparison please visit [GraphQl vs Rest](https://blog.pusher.com/rest-versus-graphql/) [or this post](https://medium.com/codingthesmartway-com-blog/rest-vs-graphql-418eac2e3083) [Or Official GraphQL site](https://graphql.org/)
14,295,122
Is there a single step way using awk, sort or something equal to sort or reverse a single column of a multi-column CSV table while maintaining the rest in the same order they are? For example,I have: ``` 6, 45, 9 5, 47, 6 4, 46, 7 3, 48, 4 2, 10, 5 1, 11, 1 ``` and would like to have: ``` 1, 45, 9 2, 47, 6 3, 46, 7 4, 48, 4 5, 10, 5 6, 11, 1 ``` So, only the first column is sorted and the rest are in their previous order.
2013/01/12
[ "https://Stackoverflow.com/questions/14295122", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1972610/" ]
This might work for you: ``` paste -d, <(cut -d, -f1 file | sort) <(cut -d, -f2- file) ```
awk one-liner ``` awk -F, '{c[NR]=$1;l[NR]=$2", "$3}END{for(i=1;i<=NR;i++) print c[NR-i+1]", "l[i]}' file ``` test ``` kent$ echo "6, 45, 9 5, 47, 6 4, 46, 7 3, 48, 4 2, 10, 5 1, 11, 1"|awk -F, '{c[NR]=$1;l[NR]=$2", "$3}END{for(i=1;i<=NR;i++) print c[NR-i+1]", "l[i]}' 1, 45, 9 2, 47, 6 3, 46, 7 4, 48, 4 5, 10, 5 6, 11, 1 ```
14,295,122
Is there a single step way using awk, sort or something equal to sort or reverse a single column of a multi-column CSV table while maintaining the rest in the same order they are? For example,I have: ``` 6, 45, 9 5, 47, 6 4, 46, 7 3, 48, 4 2, 10, 5 1, 11, 1 ``` and would like to have: ``` 1, 45, 9 2, 47, 6 3, 46, 7 4, 48, 4 5, 10, 5 6, 11, 1 ``` So, only the first column is sorted and the rest are in their previous order.
2013/01/12
[ "https://Stackoverflow.com/questions/14295122", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1972610/" ]
This might work for you: ``` paste -d, <(cut -d, -f1 file | sort) <(cut -d, -f2- file) ```
If you have `GNU awk` here's a one liner: ``` $ gawk '{s[NR]=$1;c[NR]=$2 $3}END{for(i=0;++i<=asort(s);)print s[i] c[i]}' file 1,45,9 2,47,6 3,46,7 4,48,4 5,10,5 6,11,1 ``` If not, here's an `awk` script that implements a simple bubble sort: ``` { # read col1 in sort array, read others in col array sort[NR] = $1 cols[NR] = $2 $3 } END { # sort it with bubble sort do { haschanged = 0 for(i=1; i < NR; i++) { if ( sort[i] > sort[i+1] ) { t = sort[i] sort[i] = sort[i+1] sort[i+1] = t haschanged = 1 } } } while ( haschanged == 1 ) # print it for(i=1; i <= NR; i++) { print sort[i] cols[i] } } ``` Save it to a file `sort.awk` and do `awk -f sort.awk file`: ``` $ awk -f sort.awk file 1,45,9 2,47,6 3,46,7 4,48,4 5,10,5 6,11,1 ```
14,295,122
Is there a single step way using awk, sort or something equal to sort or reverse a single column of a multi-column CSV table while maintaining the rest in the same order they are? For example,I have: ``` 6, 45, 9 5, 47, 6 4, 46, 7 3, 48, 4 2, 10, 5 1, 11, 1 ``` and would like to have: ``` 1, 45, 9 2, 47, 6 3, 46, 7 4, 48, 4 5, 10, 5 6, 11, 1 ``` So, only the first column is sorted and the rest are in their previous order.
2013/01/12
[ "https://Stackoverflow.com/questions/14295122", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1972610/" ]
awk one-liner ``` awk -F, '{c[NR]=$1;l[NR]=$2", "$3}END{for(i=1;i<=NR;i++) print c[NR-i+1]", "l[i]}' file ``` test ``` kent$ echo "6, 45, 9 5, 47, 6 4, 46, 7 3, 48, 4 2, 10, 5 1, 11, 1"|awk -F, '{c[NR]=$1;l[NR]=$2", "$3}END{for(i=1;i<=NR;i++) print c[NR-i+1]", "l[i]}' 1, 45, 9 2, 47, 6 3, 46, 7 4, 48, 4 5, 10, 5 6, 11, 1 ```
If you have `GNU awk` here's a one liner: ``` $ gawk '{s[NR]=$1;c[NR]=$2 $3}END{for(i=0;++i<=asort(s);)print s[i] c[i]}' file 1,45,9 2,47,6 3,46,7 4,48,4 5,10,5 6,11,1 ``` If not, here's an `awk` script that implements a simple bubble sort: ``` { # read col1 in sort array, read others in col array sort[NR] = $1 cols[NR] = $2 $3 } END { # sort it with bubble sort do { haschanged = 0 for(i=1; i < NR; i++) { if ( sort[i] > sort[i+1] ) { t = sort[i] sort[i] = sort[i+1] sort[i+1] = t haschanged = 1 } } } while ( haschanged == 1 ) # print it for(i=1; i <= NR; i++) { print sort[i] cols[i] } } ``` Save it to a file `sort.awk` and do `awk -f sort.awk file`: ``` $ awk -f sort.awk file 1,45,9 2,47,6 3,46,7 4,48,4 5,10,5 6,11,1 ```
2,355,607
How can I post data from a select element to PHP using jQuery? HTML code: ``` <select id="test" title="test" class="test"> <option value="1">1 <option value="2">2 <option value="3">3 <option value="4">4 <option value="5">5 </select> ``` jQuery: ``` $('.18').click(function(event) { var test_s = $('#test').val(); $('.opc_mfont').load("index.php?x=18&fid=<?=$fid;?>&mid=<?=$mid;?>&data=' + test_s +'"); event.stopPropagation(); }); ``` All the data is successfully posted to PHP and the MySQL query except ' + test\_s + ' at the end of the jQuery load(). I tested that the data is received from the select field with "alert(test\_s);", and the alert message shows up with the correct data in it. But I can't send "var test\_s" to PHP using jQuery load(). How can I do that?
2010/03/01
[ "https://Stackoverflow.com/questions/2355607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/268338/" ]
### read from jQuery .load(): *The POST method is used if data is provided as an object; otherwise, GET is assumed.* read here: <http://api.jquery.com/load/> ### code untested, probably something like this: ``` $('.opc_mfont').load("index.php?x=18&fid=<?=$fid;?>&mid=<?=$mid;?>", {'data':test_s}); ```
You mixed up single and double quotes: ``` $('.18').click(function(event) { var test_s = $('#test').val(); $('.opc_mfont').load('index.php?x=18&fid=<?=$fid;?>&mid=<?=$mid;?>&data=' + test_s); event.stopPropagation(); }); ```
2,355,607
How can I post data from a select element to PHP using jQuery? HTML code: ``` <select id="test" title="test" class="test"> <option value="1">1 <option value="2">2 <option value="3">3 <option value="4">4 <option value="5">5 </select> ``` jQuery: ``` $('.18').click(function(event) { var test_s = $('#test').val(); $('.opc_mfont').load("index.php?x=18&fid=<?=$fid;?>&mid=<?=$mid;?>&data=' + test_s +'"); event.stopPropagation(); }); ``` All the data is successfully posted to PHP and the MySQL query except ' + test\_s + ' at the end of the jQuery load(). I tested that the data is received from the select field with "alert(test\_s);", and the alert message shows up with the correct data in it. But I can't send "var test\_s" to PHP using jQuery load(). How can I do that?
2010/03/01
[ "https://Stackoverflow.com/questions/2355607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/268338/" ]
### read from jQuery .load(): *The POST method is used if data is provided as an object; otherwise, GET is assumed.* read here: <http://api.jquery.com/load/> ### code untested, probably something like this: ``` $('.opc_mfont').load("index.php?x=18&fid=<?=$fid;?>&mid=<?=$mid;?>", {'data':test_s}); ```
Your problem is your string: ``` "index.php?x=18&fid=<?=$fid;?>&mid=<?=$mid;?>&data=' + test_s +'" ``` That's not string concatenation - you're mixing your string delimiters. You want: ``` "index.php?x=18&fid=<?=$fid;?>&mid=<?=$mid;?>&data=" + test_s ```
37,119,683
I am very new to Python and have to scrape a website for some data for a course at university: [Xrel](https://www.xrel.to/games-release-list.html?archive=2016-01) I am able to get the information I need. The problem is I need it for every entry (page, month, year). The number of pages differs for every month. Is there any way to extract the maximum page number so I can store it and use it in a loop? I would appreciate any help. Thanks!
2016/05/09
[ "https://Stackoverflow.com/questions/37119683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5839166/" ]
> > One of the main tasks of a compiler is to check for valid syntax > > > The last method has a compilation error because after a `return` statement no other statement is possible. The previous methods compile because the syntax is correct, even though there is a clear dead-code warning. So try it another way.
It's simply because in the first 2 there is no guarantee that the condition will not change at runtime (as far as the compiler is concerned), while in the last 2 there is no way the condition will ever change at runtime. What's weird is that putting a `final` there didn't help the compiler realize the code is dead, although it is guaranteed that b will never change. It seems that the compiler doesn't make any effort to evaluate your code at any level to find dead code...
37,119,683
I am very new to Python and have to scrape a website for some data for a course at university: [Xrel](https://www.xrel.to/games-release-list.html?archive=2016-01) I am able to get the information I need. The problem is I need it for every entry (page, month, year). The number of pages differs for every month. Is there any way to extract the maximum page number so I can store it and use it in a loop? I would appreciate any help. Thanks!
2016/05/09
[ "https://Stackoverflow.com/questions/37119683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5839166/" ]
> > One of the main tasks of a compiler is to check for valid syntax > > > The last method has a compilation error because after a `return` statement no other statement is possible. The previous methods compile because the syntax is correct, even though there is a clear dead-code warning. So try it another way.
Code analysis runs into the halting problem in general. To obtain some information about the code, you usually have to run the code (if the code contains infinite loops, the analyser could hang during analysis). Because of this, in foo and foo2 the code analyser doesn't predict future code behaviour. foo4: It is simply a Java syntax error; it is not allowed to write code after a return statement. foo3: The code is syntactically correct, but as RC mentioned, the code analyser integrated with the IDE is able to perform simple detection of a branch that will never be fired.
52,169,430
Is there an angular CLI command that displays all the existing routes of the app?
2018/09/04
[ "https://Stackoverflow.com/questions/52169430", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7281814/" ]
**No** - there is not an Angular CLI command as of today that will display all your existing routes. You can take a look at the [Official Angular CLI documentation](https://github.com/angular/angular-cli/wiki) which lists all the available commands.
The Angular router supports `forRoot` and `forChild`: one parent route state and multiple child route states. The parent route state is usually set in app.module.ts; child route states are added in the lazy-loaded modules. When the user accesses a URL, the corresponding configured component is rendered, and those routes are added in the main router configuration. Maybe you can add the `enableTracing` property (note it is the second argument to `forRoot`, not part of the routes array): ``` RouterModule.forRoot(ROUTES, { enableTracing: true }) ```
66,912,178
I've created an Oracle procedure with 2 parameters, one of them an OUT parameter of type `TABLE OF VARCHAR2`. How do I call it in Java and get the result? My test procedure is created below: ``` /* creating package with specs */ create or replace PACKAGE PACK1 AS TYPE name_array IS TABLE OF VARCHAR2(50) INDEX BY BINARY_INTEGER; PROCEDURE proc_filter_and_return_array( p_name_in IN VARCHAR2, p_name_out_array OUT name_array ); END PACK1; /* creating package body with procedure */ create or replace PACKAGE BODY PACK1 as PROCEDURE proc_filter_and_return_array( p_name_in IN VARCHAR2, p_name_out_array OUT name_array )IS CURSOR c_table1_select is select name FROM table1_test where name like '%' || p_name_in || '%'; v_index NUMBER := 0; BEGIN FOR x IN c_table1_select LOOP p_name_out_array( v_index ) := x.name; v_index := v_index + 1; END LOOP; END proc_filter_and_return_array; END PACK1; ``` When I test it in Oracle it works, using the code below: ``` DECLARE p_name_array pack1.name_array; BEGIN pack1.proc_filter_and_return_array(p_name_in => 'name_to_filter', p_name_out_array => p_name_array); dbms_output.put_line(' number from table: ' || p_name_array(1) ); END; ``` But in Java I get some errors. This is how I call the procedure: ``` SimpleJdbcCall simpleJdbcCall = new SimpleJdbcCall(jdbcTemplate) .withCatalogName("PACK1") .withProcedureName("PROC_FILTER_AND_RETURN_ARRAY") .declareParameters( new SqlParameter("P_NAME_IN", Types.VARCHAR) ) .declareParameters( new SqlOutParameter("P_NAME_OUT_ARRAY", Types.ARRAY, "PACK1.NAME_ARRAY" )); MapSqlParameterSource map = new MapSqlParameterSource(); map.addValue("P_NAME_IN", "name_to_filter"); Map<String, Object> result = simpleJdbcCall.execute(map); ``` So I get this when running from Java: ``` org.springframework.jdbc.UncategorizedSQLException: CallableStatementCallback; uncategorized SQLException for SQL [{call PACK1.PROC_FILTER_AND_RETURN_ARRAY(?, ?)}]; SQL state [99999]; error code [17074]; invalid name pattern: 
PACK1.NAME_ARRAY; nested exception is java.sql.SQLException: invalid name pattern: PACK1.NAME_ARRAY] with root cause java.sql.SQLException: invalid name pattern: PACK1.NAME_ARRAY at oracle.jdbc.oracore.OracleTypeADT.initMetadata11_2(OracleTypeADT.java:764) at oracle.jdbc.oracore.OracleTypeADT.initMetadata(OracleTypeADT.java:479) at oracle.jdbc.oracore.OracleTypeADT.init(OracleTypeADT.java:443) at oracle.sql.ArrayDescriptor.initPickler(ArrayDescriptor.java:1499) at oracle.sql.ArrayDescriptor.<init>(ArrayDescriptor.java:274) at oracle.sql.ArrayDescriptor.createDescriptor(ArrayDescriptor.java:127) at oracle.sql.ArrayDescriptor.createDescriptor(ArrayDescriptor.java:79) at oracle.jdbc.driver.NamedTypeAccessor.otypeFromName(NamedTypeAccessor.java:83) at oracle.jdbc.driver.TypeAccessor.initMetadata(TypeAccessor.java:76) at oracle.jdbc.driver.T4CCallableStatement.allocateAccessor(T4CCallableStatement.java:599) at oracle.jdbc.driver.OracleCallableStatement.registerOutParameterInternal(OracleCallableStatement.java:201) at oracle.jdbc.driver.OracleCallableStatement.registerOutParameter(OracleCallableStatement.java:240) at oracle.jdbc.driver.OracleCallableStatementWrapper.registerOutParameter(OracleCallableStatementWrapper.java:1243) at com.zaxxer.hikari.pool.HikariProxyCallableStatement.registerOutParameter(HikariProxyCallableStatement.java) at org.springframework.jdbc.core.CallableStatementCreatorFactory$CallableStatementCreatorImpl.createCallableStatement(CallableStatementCreatorFactory.java:188) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1090) at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1147) at org.springframework.jdbc.core.simple.AbstractJdbcCall.executeCallInternal(AbstractJdbcCall.java:412) at org.springframework.jdbc.core.simple.AbstractJdbcCall.doExecute(AbstractJdbcCall.java:372) at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) ``` unfortunately, I couldn't change anything 
in the client's database :( so I can't change the declaration `TYPE name_array IS TABLE OF VARCHAR2(50) INDEX BY BINARY_INTEGER;` and I need to build an application in Java with Spring Boot. Is there some way to do this without changing the procedure and package on Oracle? What am I doing wrong? Thanks in advance.
2021/04/01
[ "https://Stackoverflow.com/questions/66912178", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7505687/" ]
Assumptions: * always skip the first 2 lines of the input file * the second field contains no white space * there are no blank lines Sample data file: ``` $ cat input.dat This is a totally extraneous introduction and does not have anything to do with the data. It is here as a facsimile of what the output file looks like. df bank.com 10.10.10.1 sdfdg store.com 10.10.10.2 s church.com 10.10.10. ``` One `awk` solution: ``` $ awk 'FNR>2 {printf "%s%s", pfx, $2; pfx=","} END {printf "\n"}' input.dat bank.com,store.com,church.com ``` Explanation: * `FNR>2` - for record (row) numbers greater than 2 ... * `printf "%s%s", pfx, $2` - print our prefix (initially blank) plus field #2; because there is no `\n` in the format the cursor is left on the current line * `pfx=","` - set prefix to a comma (`,`) for the rest of the file * `END {printf "\n"}` - add a `\n` to the end of the line
This should work: ``` tail -n +3 filename | awk '{print $2}' | sed -z 's/\n/,/g;s/,$/\n/' > newfile.txt ``` The `tail -n +3` will skip the first 2 lines of the file (so change this to however many lines the intro is, plus one). The next part prints out only the second column, the one you're interested in. The third replaces the newlines with commas. The last part places the output into a new file.
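As a cross-check, the same skip-the-intro / second-field / comma-join pipeline can be sketched in plain Python. The sample text is the file from the answer above, and the hard-coded `2` is an assumption about how many intro lines the file has:

```python
# Python sketch of: tail -n +3 file | awk '{print $2}' joined with commas.
text = """This is a totally extraneous introduction and does not have
anything to do with the data.
df bank.com 10.10.10.1
sdfdg store.com 10.10.10.2
s church.com 10.10.10.
"""

lines = text.splitlines()[2:]                       # skip the 2 intro lines
fields = [ln.split()[1] for ln in lines if ln.strip()]
joined = ",".join(fields)
print(joined)  # bank.com,store.com,church.com
```

The same assumptions apply as for the shell versions: the second field contains no whitespace and there are no blank lines in the data section.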
371,503
**Version:** Minecraft Java Edition, 1.15.2, server Is there a [console command](https://minecraft.gamepedia.com/Commands), or an area on the [Debug screen](https://minecraft.gamepedia.com/Debug_screen), for querying the total number of already-generated and stored chunks that make up the entire current Minecraft world? I know there are [Minecraft Commands](https://minecraft.gamepedia.com/Commands) for querying chunks which are *currently loaded*, but not the total generated.
2020/06/18
[ "https://gaming.stackexchange.com/questions/371503", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/152613/" ]
The problem might be that you already have a wandering trader spawned somewhere but you don't know it. > > At any time, there can be only ***one*** wandering trader naturally spawned in loaded chunks. After 24000 ticks (20 real-life minutes, or 1 Minecraft day) have passed since the world is created, the game attempts to spawn a wandering trader. If there are no wandering traders currently in any loaded chunks, the game tries to spawn a new wandering trader after every following 24000 ticks, within a 48-block radius of a player. > > > Source: [Official Minecraft Wiki](https://minecraft.gamepedia.com/Wandering_Trader#:%7E:text=If%20there%20are%20no%20wandering,block%20radius%20of%20a%20player.) There is some RNG involved here. Whenever the game tries to spawn a trader (every 24000 ticks) there is a 2.5% chance of success. If it fails, it is 5% for the next day, 7.5% for the next, etc. On average, you should get a trader within ~10 days, and even if you're really unlucky the chance will eventually climb to 100%. So if you're in the same area for a while you should get one. If it's just the blocks and items you're after, you could try this [biome finder](https://www.chunkbase.com/apps/biome-finder) to get the biomes you need.
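The rising-chance model described above can be sanity-checked numerically. This is only a sketch under the stated assumptions (2.5% on the first attempt, +2.5% per failed daily attempt, capped at 100%); the real game odds may differ by version:

```python
# Expected number of in-game days until the wandering trader spawns,
# under the model: day n succeeds with probability min(0.025 * n, 1).
expected_days = 0.0
p_no_trader_yet = 1.0  # probability that every previous attempt failed
day = 0
while p_no_trader_yet > 0:
    day += 1
    p = min(0.025 * day, 1.0)
    expected_days += day * p_no_trader_yet * p
    p_no_trader_yet *= 1.0 - p  # drops to exactly 0 once p reaches 1.0

print(round(expected_days, 1))
```

Under these exact assumptions the mean comes out around 7.6 in-game days, in the same ballpark as the ~10 days quoted above.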
I have the same issue. He appeared at the beginning of my world, but once I created an underground base the trader stopped spawning with the patrols. When I got back on the surface, after around 20 minutes he came back with the patrols.
32,538
I was wondering if there is any hard numerical evidence on upscaling or downscaling footage to a certain raster size, and what happens to the quality, i.e. whether any degradation or improvement is made to the output. I know anecdotally that if you take a `320x240` video file and upscale it to `1920x1080`, quality is lost - mainly from stretching pixels. I know anecdotally that if you take a `3840x2160` video file and downscale it to `1024x576`, the quality in the larger areas remains, but the finer details can be lost when compressed into a smaller pixel area. I'm currently in a workplace environment that uses only SD outputs, but for some reason there is a push to take the incoming media (which can range from `320x240` all the way to 8K) and conform it to `1920x1080`. When it is imported into the editor, it is an SD project, and burnt to DVD-Video. For some reason the explanation of matching our output raster or device (DVD-Video) doesn't seem to be sufficient evidence, so I wanted to know if there were any numbers or studies or factual papers I could use! --- For the most part assume the conversions are performed with "off the shelf" converters, which most likely run `ffmpeg` in their processing. All resizes within the applications would also use defaults - so *bicubic interpolation* is most common.
2020/10/27
[ "https://avp.stackexchange.com/questions/32538", "https://avp.stackexchange.com", "https://avp.stackexchange.com/users/14221/" ]
... Depends... There are many concepts that need to be differentiated. Let me explore a bit. **Information** The first one is that we need to think about an image as information. So we need to ask first: Is the information lost? Is the information modified? Is it improved? Keep this concept in mind. **Scaling** And now let me apply this idea to another word: scaling. Scaling "per se" does not modify the initial information. It only adapts it to conform to another viewing size or display. Take your 320x240 video file and do nothing with it. Just play it full screen on any monitor. The video is exactly the same. If viewed on a large screen you will surely notice the pixels, but that is your perception. The information in the video file is exactly the same. --- Now let us actually scale the image so it is no longer a 320x240 file. Normally you would want to use round multiples: 2x, 3x, 4x. If you use a "nearest neighbor" scaling method, the information will be **exactly the same**. A 240p image scaled 4 times would be a 960p image. But scaling it to 1080p involves a compromise in how you scale some pixels: the factor is 4.5x, so either some pixels are scaled 4 times and others 5 times, or you average some values. **Resampling** And here is the key. The operation you are now performing is not scaling, it is resampling: taking some values and converting them into other values. If I say that John has $20 and Mike has $10, I have the exact information. If I say that the average they have is $15, it destroys the original information. It could be that one has $30 and the other has nothing. Or that the amounts are inverted. So, resampling is what compromises the information. --- > > mainly from stretching pixels. > > > Here you are probably referring to the aspect ratio, which is another thing to consider. It can be a resampling that maintains the aspect ratio. Keep that in mind. 
> > I know anecdotally that if you take a 320x240 video file and upscale it to 1920x1080 that the quality is lost > > > Not really. First of all, you *perceive* it as *low quality* because you are now used to seeing HD images, so you are adding how you perceive the resulting image. Again, see that image on a watch, or use it on a nanotechnology screen on a label on a soda can, and you would find it impressive. Quality is a process, not necessarily an end result. --- **The algorithm** Our capacity to perceive information depends on the information itself. Our eyes can detect detail if there is contrast. And we perceive this contrast on the edges of a shape. Some resampling algorithms try to enhance the detail on the borders of the shapes. Some try to enhance it, some try to generate new information by guessing what values should be there. --- In the end there is always a compromise between what you expect to see, what tools are at your disposal, and how much of the original information you can, or want to, keep. --- P.S. Downsampling always reduces the amount of information, but that only matters if the information is relevant. A 4K video file of a completely white wall will probably not have relevant information, so we can safely resample to SD. But if there is small text painted there, that text will probably be lost in the downsampling.
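The point that integer-factor nearest-neighbour scaling keeps the information exactly can be shown with a toy sketch (a hypothetical 2x2 "image" of numbers as nested lists, not real video frames):

```python
# Nearest-neighbour scaling by a whole-number factor only repeats
# pixels, so the original information can be recovered exactly.
def upscale_nn(img, factor):
    """Repeat every pixel `factor` times horizontally and vertically."""
    return [
        [px for px in row for _ in range(factor)]
        for row in img
        for _ in range(factor)
    ]

def downscale_nn(img, factor):
    """Inverse of the integer upscale: keep every `factor`-th pixel."""
    return [row[::factor] for row in img[::factor]]

original = [[10, 20],
            [30, 40]]

big = upscale_nn(original, 4)            # 2x2 -> 8x8, no new values invented
assert downscale_nn(big, 4) == original  # round trip: nothing was lost
```

A non-integer factor (like the 4.5x in the answer) cannot work this way: some pixels must be repeated more often than others, or averaged, which is exactly the resampling compromise described above.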
May I understand your question as > > How do I have someone/something that understands the missing puzzle pieces from > my upscaled images? > > > If so, you are talking about some intelligence/experience in upscaling, i.e. AI upscaling. Then, please take a look at [AI Upscalers](https://sites.google.com/view/aiupscalingtutorials/ai-upscaling); it mentions * An open source AI upscaler (super sampler) based on a generative adversarial network architecture. * An open source image enhancer that seeks to restore lost texture details from known types * A commercial super sampler that uses artificial intelligence Good luck!
29,993,602
When the application is run, WCF service works as expected. When the exact same code is called from a unit test project, the error below occurs. Is there something special needed for windows 8 test projects to access the WCF service? > > Result Message: Test method > DataServiceTests.MyTest > threw exception: > > > System.AggregateException: One or more errors occurred. ---> > System.ServiceModel.EndpointNotFoundException: Could not connect to > net.tcp://localhost:56478/MyService/DataAccessService. The connection > attempt lasted for a time span of 00:00:42.0131535. TCP error code > 10060: A connection attempt failed because the connected party did not > properly respond after a period of time, or established connection > failed because connected host has failed to respond > 127.0.0.1:56478. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly > respond after a period of time, or established connection failed > because connected host has failed to respond 127.0.0.1:56478 > > > Integration test code: ``` DataAccess service = new DataAccess( new Uri(@"net.tcp://127.0.0.1:56478/MyServices/DataAccessService")); var bob = service.GetData(); ```
2015/05/01
[ "https://Stackoverflow.com/questions/29993602", "https://Stackoverflow.com", "https://Stackoverflow.com/users/322518/" ]
With a little CSS maybe: ``` .slick-prev { left: 10px } .slick-next { right: 10px } ```
If you would just like to use CSS you can just target the classes .slick-prev and .slick-next. Posting your code might help, but here is what worked on slick carousel's demo page (I had to add a background-color to for the white icon to show up on the white slide). ``` .slick-prev {left:10px;} .slick-next {right: 10px;} ```
58,906,382
Consider an example where a person can have a lot of cars. That's why we are using the @OneToMany annotation: ``` @Entity @Table(name = "person") public class Person { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private int id; @Column(name = "first_name") private String firstName; @Column(name = "last_name") private String lastName; @Column(name = "phone") private String phone; @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL, mappedBy = "owner") private List<Car> cars = new ArrayList<Car>(); public Person() { } // Getters setters } ``` --- ``` @Entity @Table(name = "cars") public class Car { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private int id; @ManyToOne private Person owner; @Column(name = "description") private String description; public Car() { } // Getters setters } ``` Persistence and everything works fine, but I would like to change something to fit my needs. When the program starts I want all persons to be loaded into memory, let's say into a `List<Person>`. So, I do this: ``` List<Person> persons = session.createQuery("from Person", Person.class).list(); ``` But, as expected, Hibernate loads the cars too. **Is there a way to stop it from doing that?** The reason is that it is faster to do something like the following: ``` List<Person> persons = loadAllPersonsFromDatabase(); List<Car> allCars = loadAllCarsFromDatabase(); for (Person p : persons) { List<Car> carsOfThisPerson = findCarsOfThisPerson(allCars,p); p.setCars(carsOfThisPerson); } ``` since I want everything in memory. I know that loading all the records of a database into memory is not common practice, the reason being that one is a database and the other is memory. However, my environment is a desktop connection, where queries are kind of slow (since the database is standalone), and I'm sure that these records will never be so many that they cause an `OutOfMemoryError`. 
I think it is also important to mention that changing `FetchType.EAGER` to `FetchType.LAZY` will not work for me. It would be the same, since after I load the data I call `getCars()` for all persons. @crizis in the comments has a good point. The reason I don't simply delete the relationship between the two (Person, Car) (i.e. make it `@Transient List<Car> cars`) is because I want the other functionality persistence offers. I mean, if I do this: ``` Car car = getCarByPerson("John"); car.setDescription("John"); ``` I will be able to save this car with: ``` hibernateSession.save(john); ``` and not with: `hibernateSession.save(car);` Also, in case I delete John, I want all his cars to be deleted as well (that's where `CASCADE` comes in), without me being forced to: ``` for (Car c : john.getCars()) hibernateSession.delete(c); ``` or whatever.
2019/11/17
[ "https://Stackoverflow.com/questions/58906382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6579265/" ]
After some experiments and searching the web, I found that using `FetchMode.SUBSELECT` does exactly what I want; it is exactly what I was looking for. I changed: ``` @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL, mappedBy = "owner") private List<Car> cars = new ArrayList<Car>(); ``` to: ``` @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL, mappedBy = "owner") @Fetch(value = FetchMode.SUBSELECT) private List<Car> cars = new ArrayList<Car>(); ``` and indeed it performs only 2 selects.
Create a memory dump and analyze it to see what is using all that memory. Maybe you just need to increase the max heap size. You could also make the collection lazy and use `Hibernate.initialize` on the cars collection. That way, I think it's even possible to use a fetch mode like e.g. SUBSELECT. If nothing helps, I can recommend using Blaze-Persistence Entity Views to reduce the loaded state to a minimum and use a SUBSELECT fetch strategy for sure.
2,710,373
Although the HTTP spec says that headers are case-insensitive, PayPal, with their new Adaptive Payments API, requires the headers to be case-sensitive. Using the paypal adaptive payments extension for ActiveMerchant (<http://github.com/lamp/paypal_adaptive_gateway>) it seems that although the headers are set in all caps, they are sent in mixed case. Here is the code that sends the HTTP request: ``` headers = { "X-PAYPAL-REQUEST-DATA-FORMAT" => "XML", "X-PAYPAL-RESPONSE-DATA-FORMAT" => "JSON", "X-PAYPAL-SECURITY-USERID" => @config[:login], "X-PAYPAL-SECURITY-PASSWORD" => @config[:password], "X-PAYPAL-SECURITY-SIGNATURE" => @config[:signature], "X-PAYPAL-APPLICATION-ID" => @config[:appid] } build_url action request = Net::HTTP::Post.new(@url.path) request.body = @xml headers.each_pair { |k,v| request[k] = v } request.content_type = 'text/xml' proxy = Net::HTTP::Proxy("127.0.0.1", "60723") server = proxy.new(@url.host, 443) server.use_ssl = true server.start { |http| http.request(request) }.body ``` (I added the proxy line so I could see what was going on with Charles - <http://www.charlesproxy.com/>) When I look at the request headers in Charles, this is what I see: ``` X-Paypal-Application-Id ... X-Paypal-Security-Password... X-Paypal-Security-Signature ... X-Paypal-Security-Userid ... X-Paypal-Request-Data-Format XML X-Paypal-Response-Data-Format JSON Accept */* Content-Type text/xml Content-Length 522 Host svcs.sandbox.paypal.com ``` I verified that it is not Charles doing the case conversion by running a similar request using curl. In that test the case was preserved.
2010/04/25
[ "https://Stackoverflow.com/questions/2710373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/241326/" ]
The RFC does specify that header keys are [case-insensitive](http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2), so unfortunately you seem to have hit an annoying requirement with the PayPal API. Net::HTTP is what is changing the case, although I'm surprised they're not all getting downcased: ``` # File net/http.rb, line 1160 def []=(key, val) unless val @header.delete key.downcase return val end @header[key.downcase] = [val] end ``` "Sets the header field corresponding to the case-insensitive key." As the above is a simple class it could be monkey-patched. I will think further for a nicer solution.
I ran into several issues with the code proposed by @kaplan-ilya because the Net::HTTP library tries to detect the post content-type, and I ended up with 2 content-type and other fields repeated with different cases. So the code below should ensure that once a particular case has been chosen, it sticks to it. ``` class Post < Net::HTTP::Post def initialize_http_header(headers) @header = {} headers.each { |k, v| @header[k.to_s] = [v] } end def [](name) _k, val = header_insensitive_match name val end def []=(name, val) key, _val = header_insensitive_match name key = name if key.nil? if val @header[key] = [val] else @header.delete(key) end end def capitalize(name) name end def header_insensitive_match(name) @header.find { |key, _value| key.match Regexp.new(name.to_s, Regexp::IGNORECASE) } end end ```
2,710,373
Although the HTTP spec says that headers are case-insensitive, PayPal, with their new Adaptive Payments API, requires the headers to be case-sensitive. Using the paypal adaptive payments extension for ActiveMerchant (<http://github.com/lamp/paypal_adaptive_gateway>) it seems that although the headers are set in all caps, they are sent in mixed case. Here is the code that sends the HTTP request: ``` headers = { "X-PAYPAL-REQUEST-DATA-FORMAT" => "XML", "X-PAYPAL-RESPONSE-DATA-FORMAT" => "JSON", "X-PAYPAL-SECURITY-USERID" => @config[:login], "X-PAYPAL-SECURITY-PASSWORD" => @config[:password], "X-PAYPAL-SECURITY-SIGNATURE" => @config[:signature], "X-PAYPAL-APPLICATION-ID" => @config[:appid] } build_url action request = Net::HTTP::Post.new(@url.path) request.body = @xml headers.each_pair { |k,v| request[k] = v } request.content_type = 'text/xml' proxy = Net::HTTP::Proxy("127.0.0.1", "60723") server = proxy.new(@url.host, 443) server.use_ssl = true server.start { |http| http.request(request) }.body ``` (I added the proxy line so I could see what was going on with Charles - <http://www.charlesproxy.com/>) When I look at the request headers in Charles, this is what I see: ``` X-Paypal-Application-Id ... X-Paypal-Security-Password... X-Paypal-Security-Signature ... X-Paypal-Security-Userid ... X-Paypal-Request-Data-Format XML X-Paypal-Response-Data-Format JSON Accept */* Content-Type text/xml Content-Length 522 Host svcs.sandbox.paypal.com ``` I verified that it is not Charles doing the case conversion by running a similar request using curl. In that test the case was preserved.
2010/04/25
[ "https://Stackoverflow.com/questions/2710373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/241326/" ]
The RFC does specify that header keys are [case-insensitive](http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2), so unfortunately you seem to have hit an annoying requirement with the PayPal API. Net::HTTP is what is changing the case, although I'm surprised they're not all getting downcased: ``` # File net/http.rb, line 1160 def []=(key, val) unless val @header.delete key.downcase return val end @header[key.downcase] = [val] end ``` "Sets the header field corresponding to the case-insensitive key." As the above is a simple class it could be monkey-patched. I will think further for a nicer solution.
If you are still looking for an answer that works: newer versions have introduced some changes to the underlying `capitalize` method by using `to_s`. The fix is to make `to_s` and `to_str` return `self` so that the returned object is an instance of `ImmutableKey` instead of the base string class. ``` class ImmutableKey < String def capitalize self end def to_s self end alias_method :to_str, :to_s end ``` Ref: <https://jatindhankhar.in/blog/custom-http-header-and-ruby-standard-library/>
2,710,373
Although the HTTP spec says that headers are case-insensitive, PayPal, with their new Adaptive Payments API, requires the headers to be case-sensitive. Using the paypal adaptive payments extension for ActiveMerchant (<http://github.com/lamp/paypal_adaptive_gateway>) it seems that although the headers are set in all caps, they are sent in mixed case. Here is the code that sends the HTTP request: ``` headers = { "X-PAYPAL-REQUEST-DATA-FORMAT" => "XML", "X-PAYPAL-RESPONSE-DATA-FORMAT" => "JSON", "X-PAYPAL-SECURITY-USERID" => @config[:login], "X-PAYPAL-SECURITY-PASSWORD" => @config[:password], "X-PAYPAL-SECURITY-SIGNATURE" => @config[:signature], "X-PAYPAL-APPLICATION-ID" => @config[:appid] } build_url action request = Net::HTTP::Post.new(@url.path) request.body = @xml headers.each_pair { |k,v| request[k] = v } request.content_type = 'text/xml' proxy = Net::HTTP::Proxy("127.0.0.1", "60723") server = proxy.new(@url.host, 443) server.use_ssl = true server.start { |http| http.request(request) }.body ``` (I added the proxy line so I could see what was going on with Charles - <http://www.charlesproxy.com/>) When I look at the request headers in Charles, this is what I see: ``` X-Paypal-Application-Id ... X-Paypal-Security-Password... X-Paypal-Security-Signature ... X-Paypal-Security-Userid ... X-Paypal-Request-Data-Format XML X-Paypal-Response-Data-Format JSON Accept */* Content-Type text/xml Content-Length 522 Host svcs.sandbox.paypal.com ``` I verified that it is not Charles doing the case conversion by running a similar request using curl. In that test the case was preserved.
2010/04/25
[ "https://Stackoverflow.com/questions/2710373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/241326/" ]
Use the following code to force case-sensitive headers. ``` class CaseSensitivePost < Net::HTTP::Post def initialize_http_header(headers) @header = {} headers.each{|k,v| @header[k.to_s] = [v] } end def [](name) @header[name.to_s] end def []=(name, val) if val @header[name.to_s] = [val] else @header.delete(name.to_s) end end def capitalize(name) name end end ``` Usage example: ``` post = CaseSensitivePost.new(url, {myCasedHeader: '1'}) post.body = body http = Net::HTTP.new(host, port) http.request(post) ```
I ran into several issues with the code proposed by @kaplan-ilya because the Net::HTTP library tries to detect the post's content type, and I ended up with two Content-Type headers and other fields repeated with different cases. The code below ensures that once a particular case has been chosen, the request sticks to it. ``` class Post < Net::HTTP::Post def initialize_http_header(headers) @header = {} headers.each { |k, v| @header[k.to_s] = [v] } end def [](name) _k, val = header_insensitive_match name val end def []=(name, val) key, _val = header_insensitive_match name key = name if key.nil? if val @header[key] = [val] else @header.delete(key) end end def capitalize(name) name end def header_insensitive_match(name) @header.find { |key, _value| key.match Regexp.new(name.to_s, Regexp::IGNORECASE) } end end ```
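The case-insensitive matching with case-preserving storage in the Ruby class above is language-agnostic bookkeeping. A minimal Python sketch of the same idea (the `CasePreservingHeaders` class and its methods are hypothetical, not part of any HTTP library):

```python
class CasePreservingHeaders:
    """Stores header names with their original casing, but matches
    lookups and re-assignments case-insensitively, so a header is
    never duplicated under two different spellings."""

    def __init__(self):
        self._items = {}  # original-case name -> value

    def _find_key(self, name):
        # Return the stored key matching `name` case-insensitively, or None.
        for key in self._items:
            if key.lower() == name.lower():
                return key
        return None

    def __setitem__(self, name, value):
        # Reuse the existing spelling if one matches; otherwise keep
        # the caller's casing verbatim.
        key = self._find_key(name) or name
        self._items[key] = value

    def __getitem__(self, name):
        key = self._find_key(name)
        if key is None:
            raise KeyError(name)
        return self._items[key]

    def as_list(self):
        # Header pairs in their original casing, ready to be sent as-is.
        return list(self._items.items())
```

Setting `X-PAYPAL-REQUEST-DATA-FORMAT` and later `x-paypal-request-data-format` updates the same entry, so only the first spelling is ever sent on the wire.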
2,710,373
Although the HTTP spec says that headers are case insensitive, PayPal, with their new Adaptive Payments API, require their headers to be case-sensitive. Using the PayPal adaptive payments extension for ActiveMerchant (<http://github.com/lamp/paypal_adaptive_gateway>) it seems that although the headers are set in all caps, they are sent in mixed case. Here is the code that sends the HTTP request: ``` headers = { "X-PAYPAL-REQUEST-DATA-FORMAT" => "XML", "X-PAYPAL-RESPONSE-DATA-FORMAT" => "JSON", "X-PAYPAL-SECURITY-USERID" => @config[:login], "X-PAYPAL-SECURITY-PASSWORD" => @config[:password], "X-PAYPAL-SECURITY-SIGNATURE" => @config[:signature], "X-PAYPAL-APPLICATION-ID" => @config[:appid] } build_url action request = Net::HTTP::Post.new(@url.path) request.body = @xml headers.each_pair { |k,v| request[k] = v } request.content_type = 'text/xml' proxy = Net::HTTP::Proxy("127.0.0.1", "60723") server = proxy.new(@url.host, 443) server.use_ssl = true server.start { |http| http.request(request) }.body ``` (I added the proxy line so I could see what was going on with Charles - <http://www.charlesproxy.com/>) When I look at the request headers in Charles, this is what I see: ``` X-Paypal-Application-Id ... X-Paypal-Security-Password... X-Paypal-Security-Signature ... X-Paypal-Security-Userid ... X-Paypal-Request-Data-Format XML X-Paypal-Response-Data-Format JSON Accept */* Content-Type text/xml Content-Length 522 Host svcs.sandbox.paypal.com ``` I verified that it is not Charles doing the case conversion by running a similar request using curl. In that test the case was preserved.
2010/04/25
[ "https://Stackoverflow.com/questions/2710373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/241326/" ]
Use the following code to force case-sensitive headers. ``` class CaseSensitivePost < Net::HTTP::Post def initialize_http_header(headers) @header = {} headers.each{|k,v| @header[k.to_s] = [v] } end def [](name) @header[name.to_s] end def []=(name, val) if val @header[name.to_s] = [val] else @header.delete(name.to_s) end end def capitalize(name) name end end ``` Usage example: ``` post = CaseSensitivePost.new(url, {myCasedHeader: '1'}) post.body = body http = Net::HTTP.new(host, port) http.request(post) ```
If you are still looking for an answer that works: newer Ruby versions changed the underlying `capitalize` method to call `to_s`. The fix is to make `to_s` and `to_str` return `self`, so that the returned object is an instance of `ImmutableKey` instead of the base string class. ``` class ImmutableKey < String def capitalize self end def to_s self end alias_method :to_str, :to_s end ``` Ref: <https://jatindhankhar.in/blog/custom-http-header-and-ruby-standard-library/>
2,710,373
Although the HTTP spec says that headers are case insensitive, PayPal, with their new Adaptive Payments API, require their headers to be case-sensitive. Using the PayPal adaptive payments extension for ActiveMerchant (<http://github.com/lamp/paypal_adaptive_gateway>) it seems that although the headers are set in all caps, they are sent in mixed case. Here is the code that sends the HTTP request: ``` headers = { "X-PAYPAL-REQUEST-DATA-FORMAT" => "XML", "X-PAYPAL-RESPONSE-DATA-FORMAT" => "JSON", "X-PAYPAL-SECURITY-USERID" => @config[:login], "X-PAYPAL-SECURITY-PASSWORD" => @config[:password], "X-PAYPAL-SECURITY-SIGNATURE" => @config[:signature], "X-PAYPAL-APPLICATION-ID" => @config[:appid] } build_url action request = Net::HTTP::Post.new(@url.path) request.body = @xml headers.each_pair { |k,v| request[k] = v } request.content_type = 'text/xml' proxy = Net::HTTP::Proxy("127.0.0.1", "60723") server = proxy.new(@url.host, 443) server.use_ssl = true server.start { |http| http.request(request) }.body ``` (I added the proxy line so I could see what was going on with Charles - <http://www.charlesproxy.com/>) When I look at the request headers in Charles, this is what I see: ``` X-Paypal-Application-Id ... X-Paypal-Security-Password... X-Paypal-Security-Signature ... X-Paypal-Security-Userid ... X-Paypal-Request-Data-Format XML X-Paypal-Response-Data-Format JSON Accept */* Content-Type text/xml Content-Length 522 Host svcs.sandbox.paypal.com ``` I verified that it is not Charles doing the case conversion by running a similar request using curl. In that test the case was preserved.
2010/04/25
[ "https://Stackoverflow.com/questions/2710373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/241326/" ]
If you are still looking for an answer that works: newer Ruby versions changed the underlying `capitalize` method to call `to_s`. The fix is to make `to_s` and `to_str` return `self`, so that the returned object is an instance of `ImmutableKey` instead of the base string class. ``` class ImmutableKey < String def capitalize self end def to_s self end alias_method :to_str, :to_s end ``` Ref: <https://jatindhankhar.in/blog/custom-http-header-and-ruby-standard-library/>
I ran into several issues with the code proposed by @kaplan-ilya because the Net::HTTP library tries to detect the post's content type, and I ended up with two Content-Type headers and other fields repeated with different cases. The code below ensures that once a particular case has been chosen, the request sticks to it. ``` class Post < Net::HTTP::Post def initialize_http_header(headers) @header = {} headers.each { |k, v| @header[k.to_s] = [v] } end def [](name) _k, val = header_insensitive_match name val end def []=(name, val) key, _val = header_insensitive_match name key = name if key.nil? if val @header[key] = [val] else @header.delete(key) end end def capitalize(name) name end def header_insensitive_match(name) @header.find { |key, _value| key.match Regexp.new(name.to_s, Regexp::IGNORECASE) } end end ```
52,250,480
My question is about composite types. I can't seem to find anywhere that explains what compound types are in C++. Are they different from composite types?
2018/09/10
[ "https://Stackoverflow.com/questions/52250480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8455470/" ]
From the C++ working draft (N4713): > > **6.7 Types [basic.types]** > > > 1. There are two kinds of types: fundamental types and compound types. > > > There is no specific definition of compound types in the said draft. All we are told is how these compound types are constructed. > > **6.7.2 Compound types [basic.compound]** > > > 1. Compound types can be constructed in the following ways: > > (1.1) — arrays of objects of a given type; > > (1.2) — functions, which have parameters of given types and return void or references or objects of a given type; > > (1.3) — pointers to cv void or objects or functions (including static members of classes) of a given type; > > (1.4) — references to objects or functions of a given type. There are two types of references: > > (1.4.1) — lvalue reference > > (1.4.2) — rvalue reference > > (1.5) — classes containing a sequence of objects of various types, a set of types, enumerations and functions for manipulating these objects, and a set of restrictions on the access to these entities; > > (1.6) — unions, which are classes capable of containing objects of different types at different times; > > (1.7) — enumerations, which comprise a set of named constant values. Each distinct enumeration constitutes a different enumerated type; > > (1.8) — pointers to non-static class members, which identify members of a given type within objects of a given class. Pointers to data members and pointers to member functions are collectively called pointer-to-member types. > > > In the same draft composite types refer to composition of primary types in the form of templates. > > **23.15.4.2 Composite type traits [meta.unary.comp]** > > > 1. These templates provide **convenient compositions of the primary type categories**, corresponding to the descriptions given in subclause 6.7. > > >
Any type that is not a [fundamental type](https://en.cppreference.com/w/cpp/language/types) (a type in the core language) is a compound type. Fundamental types: void, nullptr\_t, bool, integer/character/floating-point types, ranges (C++20) Compound types: All other types Another way to think about this is that a type is compound if you have to use another type when writing its declaration: ``` std::vector<T> myVec; // type T used in type std::vector<T> => compound int* myPtr; // type int used in type int* => compound double myVal = 10.3; // double is a fundamental type => fundamental double& myRef = myVal; // type double is used in type double& => compound ```
52,250,480
My question is about composite types. I can't seem to find anywhere that explains what compound types are in C++. Are they different from composite types?
2018/09/10
[ "https://Stackoverflow.com/questions/52250480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8455470/" ]
From the C++ working draft (N4713): > > **6.7 Types [basic.types]** > > > 1. There are two kinds of types: fundamental types and compound types. > > > There is no specific definition of compound types in the said draft. All we are told is how these compound types are constructed. > > **6.7.2 Compound types [basic.compound]** > > > 1. Compound types can be constructed in the following ways: > > (1.1) — arrays of objects of a given type; > > (1.2) — functions, which have parameters of given types and return void or references or objects of a given type; > > (1.3) — pointers to cv void or objects or functions (including static members of classes) of a given type; > > (1.4) — references to objects or functions of a given type. There are two types of references: > > (1.4.1) — lvalue reference > > (1.4.2) — rvalue reference > > (1.5) — classes containing a sequence of objects of various types, a set of types, enumerations and functions for manipulating these objects, and a set of restrictions on the access to these entities; > > (1.6) — unions, which are classes capable of containing objects of different types at different times; > > (1.7) — enumerations, which comprise a set of named constant values. Each distinct enumeration constitutes a different enumerated type; > > (1.8) — pointers to non-static class members, which identify members of a given type within objects of a given class. Pointers to data members and pointers to member functions are collectively called pointer-to-member types. > > > In the same draft composite types refer to composition of primary types in the form of templates. > > **23.15.4.2 Composite type traits [meta.unary.comp]** > > > 1. These templates provide **convenient compositions of the primary type categories**, corresponding to the descriptions given in subclause 6.7. > > >
There are two kinds of types. Base types: int, double, bool, and so on; in addition, a base type can be qualified by const: `const int c1=89;` Compound types: types built from base types: 1. pointers: `int v1; int *p1=&v1;` 2. references: `int &r1=v1;` 3. arrays based on int or char or whatever 4. strings This is how I understand it; I will be happy if someone can add to or correct this.
52,250,480
My question is about composite types. I can't seem to find anywhere that explains what compound types are in C++. Are they different from composite types?
2018/09/10
[ "https://Stackoverflow.com/questions/52250480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8455470/" ]
From the book *C++ Primer, 5th edition*: > > A compound type is a type that is defined in terms of another type. C++ has several compound types, two of which, **references and pointers...** > > > I think it means compound types are types like references and pointers. Do correct me if I'm wrong.
Any type that is not a [fundamental type](https://en.cppreference.com/w/cpp/language/types) (a type in the core language) is a compound type. Fundamental types: void, nullptr\_t, bool, integer/character/floating-point types, ranges (C++20) Compound types: All other types Another way to think about this is that a type is compound if you have to use another type when writing its declaration: ``` std::vector<T> myVec; // type T used in type std::vector<T> => compound int* myPtr; // type int used in type int* => compound double myVal = 10.3; // double is a fundamental type => fundamental double& myRef = myVal; // type double is used in type double& => compound ```
52,250,480
My question is about composite types. I can't seem to find anywhere that explains what compound types are in C++. Are they different from composite types?
2018/09/10
[ "https://Stackoverflow.com/questions/52250480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8455470/" ]
From the book *C++ Primer, 5th edition*: > > A compound type is a type that is defined in terms of another type. C++ has several compound types, two of which, **references and pointers...** > > > I think it means compound types are types like references and pointers. Do correct me if I'm wrong.
There are two kinds of types. Base types: int, double, bool, and so on; in addition, a base type can be qualified by const: `const int c1=89;` Compound types: types built from base types: 1. pointers: `int v1; int *p1=&v1;` 2. references: `int &r1=v1;` 3. arrays based on int or char or whatever 4. strings This is how I understand it; I will be happy if someone can add to or correct this.
133,308
I have an external hard drive that was making an odd sound, and I wanted it to stop. I found out that navigating the contents of the HD in the Finder would make it stop temporarily. I set up a shell geeklet that ran `ls /Volumes/ExtraStorage` and that fixed the problem. My question is if this can harm the hard drive if I use it over an extended period of time.
2014/06/08
[ "https://apple.stackexchange.com/questions/133308", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/71220/" ]
This is likely a hard drive power management and sleep issue rather than an issue with the drive. When unused for a few minutes, the drive goes to sleep, possibly making a noise while parking the heads for safety. Your script accessing it in shorter intervals prevents it from going to sleep. You can confirm if this is the issue by observing how long it takes to access something on the drive after it makes the sound (and goes to sleep). If it seems slower than usual, it's because of waking up from sleep. Firstly, you should confirm if OS X is making the drive sleep and/or if the drive's firmware is the culprit: * Disable your script. * Go to **System Preferences > Energy Saver** and see if **Put the hard disk(s) to sleep when possible** is enabled. If it is, disable it and observe the behavior of the drive. * If the OS X configuration doesn't help, check the drive's make and model and see how you can disable sleep from the manufacturer's support site. On one hand, disabling the sleep behavior would increase the amount of power consumed (even if it seems insignificant). On the other hand, if you would be using the drive somewhat often while leaving it connected permanently, it may be a good idea to just keep it running and avoid the lag of a sleep/wake cycle.
If your script fixes the odd sound and you have good back-ups, your solution is unlikely to cause worse problems. That the drive is making an odd sound anyway suggests a problem. Make sure your back-ups are current and maintained. Hard disk drives can be kept spinning and in use constantly. Your `ls` script run every six minutes is not placing a huge burden on the drive. It may be that OS X, or the disk itself, will cache the repeated read and thus `ls` will not be triggering physical activity each time. As for harm or reliability, the articles by Backblaze are reassuring. [What Hard Drive Should I Buy](http://blog.backblaze.com/2014/01/21/what-hard-drive-should-i-buy/) implies consumer drives are surprisingly reliable even when used aggressively.
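The asker's `ls` geeklet boils down to reading the volume at a fixed interval so the drive never idles long enough to spin down. A hedged Python sketch of the same idea (the `keep_awake` function is hypothetical; on a real machine the path would be something like `/Volumes/ExtraStorage` and the interval a few minutes):

```python
import os
import time

def keep_awake(path, interval_seconds, iterations=None):
    """List the volume's directory at a fixed interval; the cheap read
    is enough to reset the drive's idle timer. `iterations` caps the
    loop (pass None to run forever). Returns the number of listings."""
    done = 0
    while iterations is None or done < iterations:
        os.listdir(path)  # touch the volume
        done += 1
        if iterations is None or done < iterations:
            time.sleep(interval_seconds)
    return done
```

As the answer notes, the listing may be served from the OS cache, in which case it would not actually spin the platters; disabling disk sleep in Energy Saver is the more direct fix.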
5,094
I'm looking for a tool where I can quickly annotate my Photoshop/Illustrator designs with the distances in pixels between different elements and / or the edges, so I can then transform the layout into code without having to switch between programs and measure distances all day. I've been searching Google for a while, without any results. Is there something like that?
2011/12/15
[ "https://graphicdesign.stackexchange.com/questions/5094", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/3186/" ]
**None; use an additional layer in Photoshop/Illustrator to contain all of your annotations.** This way you can easily turn it on/off when you need to, and it is all contained within the same document. If for some reason you are not the creator of these files and you do not have access to Photoshop or Illustrator, try out [Skim](http://www.macupdate.com/app/mac/24590/skim).
I only know of screen annotation apps like Skitch, Voila or Little Snapper; these apps do not support Photoshop or Illustrator formats. There are also some desktop sketching apps like Desktastic or FlySketch.
5,094
I'm looking for a tool where I can quickly annotate my Photoshop/Illustrator designs with the distances in pixels between different elements and / or the edges, so I can then transform the layout into code without having to switch between programs and measure distances all day. I've been searching Google for a while, without any results. Is there something like that?
2011/12/15
[ "https://graphicdesign.stackexchange.com/questions/5094", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/3186/" ]
You don't mention how much you want to spend for this tool, but OmniGraffle can do this. Here's an example using a portion of this page. I set the page ruler units to pixels, then created line objects with labels that contain dynamic text. I double-clicked the bottom label to show the text variable used to show the pixel dimensions. ![OmniGraffle image annotation](https://i.stack.imgur.com/0dSPG.png) I'm using OmniGraffle Pro, but I imagine regular OmniGraffle has this feature as well; I don't remember dynamic text being one of the Pro features. OmniGraffle is available on the App Store for $99.99; Omnigraffle Pro for $199.99. I like the software a lot - I imagine you'd find many other uses for it as well.
**None; use an additional layer in Photoshop/Illustrator to contain all of your annotations.** This way you can easily turn it on/off when you need to, and it is all contained within the same document. If for some reason you are not the creator of these files and you do not have access to Photoshop or Illustrator, try out [Skim](http://www.macupdate.com/app/mac/24590/skim).
5,094
I'm looking for a tool where I can quickly annotate my Photoshop/Illustrator designs with the distances in pixels between different elements and / or the edges, so I can then transform the layout into code without having to switch between programs and measure distances all day. I've been searching Google for a while, without any results. Is there something like that?
2011/12/15
[ "https://graphicdesign.stackexchange.com/questions/5094", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/3186/" ]
**None; use an additional layer in Photoshop/Illustrator to contain all of your annotations.** This way you can easily turn it on/off when you need to, and it is all contained within the same document. If for some reason you are not the creator of these files and you do not have access to Photoshop or Illustrator, try out [Skim](http://www.macupdate.com/app/mac/24590/skim).
RasterEdge provides [annotation](http://www.rasteredge.com/how-to/vb-net-imaging/image-annotating/) capabilities for users who want to annotate or redact information on images and documents; it is worth trying.
5,094
I'm looking for a tool where I can quickly annotate my Photoshop/Illustrator designs with the distances in pixels between different elements and / or the edges, so I can then transform the layout into code without having to switch between programs and measure distances all day. I've been searching Google for a while, without any results. Is there something like that?
2011/12/15
[ "https://graphicdesign.stackexchange.com/questions/5094", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/3186/" ]
You don't mention how much you want to spend for this tool, but OmniGraffle can do this. Here's an example using a portion of this page. I set the page ruler units to pixels, then created line objects with labels that contain dynamic text. I double-clicked the bottom label to show the text variable used to show the pixel dimensions. ![OmniGraffle image annotation](https://i.stack.imgur.com/0dSPG.png) I'm using OmniGraffle Pro, but I imagine regular OmniGraffle has this feature as well; I don't remember dynamic text being one of the Pro features. OmniGraffle is available on the App Store for $99.99; Omnigraffle Pro for $199.99. I like the software a lot - I imagine you'd find many other uses for it as well.
I only know of screen annotation apps like Skitch, Voila or Little Snapper; these apps do not support Photoshop or Illustrator formats. There are also some desktop sketching apps like Desktastic or FlySketch.
5,094
I'm looking for a tool where I can quickly annotate my Photoshop/Illustrator designs with the distances in pixels between different elements and / or the edges, so I can then transform the layout into code without having to switch between programs and measure distances all day. I've been searching Google for a while, without any results. Is there something like that?
2011/12/15
[ "https://graphicdesign.stackexchange.com/questions/5094", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/3186/" ]
I only know of screen annotation apps like Skitch, Voila or Little Snapper; these apps do not support Photoshop or Illustrator formats. There are also some desktop sketching apps like Desktastic or FlySketch.
RasterEdge provides [annotation](http://www.rasteredge.com/how-to/vb-net-imaging/image-annotating/) capabilities for users who want to annotate or redact information on images and documents; it is worth trying.
5,094
I'm looking for a tool where I can quickly annotate my Photoshop/Illustrator designs with the distances in pixels between different elements and / or the edges, so I can then transform the layout into code without having to switch between programs and measure distances all day. I've been searching Google for a while, without any results. Is there something like that?
2011/12/15
[ "https://graphicdesign.stackexchange.com/questions/5094", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/3186/" ]
You don't mention how much you want to spend for this tool, but OmniGraffle can do this. Here's an example using a portion of this page. I set the page ruler units to pixels, then created line objects with labels that contain dynamic text. I double-clicked the bottom label to show the text variable used to show the pixel dimensions. ![OmniGraffle image annotation](https://i.stack.imgur.com/0dSPG.png) I'm using OmniGraffle Pro, but I imagine regular OmniGraffle has this feature as well; I don't remember dynamic text being one of the Pro features. OmniGraffle is available on the App Store for $99.99; Omnigraffle Pro for $199.99. I like the software a lot - I imagine you'd find many other uses for it as well.
RasterEdge provides [annotation](http://www.rasteredge.com/how-to/vb-net-imaging/image-annotating/) capabilities for users who want to annotate or redact information on images and documents; it is worth trying.
50,477,220
How can one join 2 pandas DataFrames on MultiIndex with different number of levels? ```py import pandas as pd t1 = pd.DataFrame(data={'a1':[0,0,1,1,2,2], 'a2':[0,1,0,1,0,1], 'x':[1.,2.,3.,4.,5.,6.]}) t1.set_index(['a1','a2'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,1,2], 'y':[20.,40.,60.]}) t2.set_index(['b1'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 0 0 1.0 1 2.0 1 0 3.0 1 4.0 2 0 5.0 1 6.0 >>> t2 y b1 0 20.0 1 40.0 2 60.0 ``` Expected result for joining on 'a1' => 'b1': ```none x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` Another example: joining on ['a1','a2'] => ['b1','b2']: ```py import pandas as pd, numpy as np t1 = pd.DataFrame(data={'a1':[0,0,0,0,1,1,1,1,2,2,2,2], 'a2':[3,3,4,4,3,3,4,4,3,3,4,4], 'a3':[7,8,7,8,7,8,7,8,7,8,7,8], 'x':[1.,2.,3.,4.,5.,6.,7.,8.,9.,10.,11.,12.]}) t1.set_index(['a1','a2','a3'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,0,1,1,2,2], 'b2':[3,4,3,4,3,4], 'y':[10.,20.,30.,40.,50.,60.]}) t2.set_index(['b1','b2'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 a3 0 3 7 1.0 8 2.0 4 7 3.0 8 4.0 1 3 7 5.0 8 6.0 4 7 7.0 8 8.0 2 3 7 9.0 8 10.0 4 7 11.0 8 12.0 >>> t2 y b1 b2 0 3 10.0 4 20.0 1 3 30.0 4 40.0 2 3 50.0 4 60.0 ``` Expected result for joining on ['a1','a2'] => ['b1','b2']: ```none x y a1 a2 a3 0 3 7 1.0 10.0 8 2.0 10.0 4 7 3.0 20.0 8 4.0 20.0 1 3 7 5.0 30.0 8 6.0 30.0 4 7 7.0 40.0 8 8.0 40.0 2 3 7 9.0 50.0 8 10.0 50.0 4 7 11.0 60.0 8 12.0 60.0 ``` The solution should work joining on multiple index levels. Thank you for your help!
2018/05/22
[ "https://Stackoverflow.com/questions/50477220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6461882/" ]
You can use [`pd.Index.get_level_values`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html) and map a series from `t2`: ``` t1['y'] = t1.index.get_level_values(0).map(t2['y'].get) print(t1) x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ```
Use [`reindex`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reindex.html) on `t2`, setting the `level` parameter as appropriate, and directly assign to `t1`: ``` t1['y'] = t2['y'].reindex(t1.index, level='a1') x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` To reindex on multiple levels, simply pass a list as the `level` parameter, e.g. `['a1', 'a2']`.
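A runnable version of the level-mapping approach from the answers, on the question's first example (only pandas is assumed):

```python
import pandas as pd

# Rebuild the question's first example
t1 = pd.DataFrame({'a1': [0, 0, 1, 1, 2, 2],
                   'a2': [0, 1, 0, 1, 0, 1],
                   'x':  [1., 2., 3., 4., 5., 6.]}).set_index(['a1', 'a2']).sort_index()
t2 = pd.DataFrame({'b1': [0, 1, 2],
                   'y':  [20., 40., 60.]}).set_index('b1').sort_index()

# Map t2['y'] onto t1 via the values of MultiIndex level 'a1'
t1['y'] = t1.index.get_level_values('a1').map(t2['y'].get)
print(t1)
```

Each row of `t1` looks up its `a1` value in `t2`'s index, so `y` is broadcast down the extra `a2` level.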
50,477,220
How can one join 2 pandas DataFrames on MultiIndex with different number of levels? ```py import pandas as pd t1 = pd.DataFrame(data={'a1':[0,0,1,1,2,2], 'a2':[0,1,0,1,0,1], 'x':[1.,2.,3.,4.,5.,6.]}) t1.set_index(['a1','a2'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,1,2], 'y':[20.,40.,60.]}) t2.set_index(['b1'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 0 0 1.0 1 2.0 1 0 3.0 1 4.0 2 0 5.0 1 6.0 >>> t2 y b1 0 20.0 1 40.0 2 60.0 ``` Expected result for joining on 'a1' => 'b1': ```none x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` Another example: joining on ['a1','a2'] => ['b1','b2']: ```py import pandas as pd, numpy as np t1 = pd.DataFrame(data={'a1':[0,0,0,0,1,1,1,1,2,2,2,2], 'a2':[3,3,4,4,3,3,4,4,3,3,4,4], 'a3':[7,8,7,8,7,8,7,8,7,8,7,8], 'x':[1.,2.,3.,4.,5.,6.,7.,8.,9.,10.,11.,12.]}) t1.set_index(['a1','a2','a3'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,0,1,1,2,2], 'b2':[3,4,3,4,3,4], 'y':[10.,20.,30.,40.,50.,60.]}) t2.set_index(['b1','b2'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 a3 0 3 7 1.0 8 2.0 4 7 3.0 8 4.0 1 3 7 5.0 8 6.0 4 7 7.0 8 8.0 2 3 7 9.0 8 10.0 4 7 11.0 8 12.0 >>> t2 y b1 b2 0 3 10.0 4 20.0 1 3 30.0 4 40.0 2 3 50.0 4 60.0 ``` Expected result for joining on ['a1','a2'] => ['b1','b2']: ```none x y a1 a2 a3 0 3 7 1.0 10.0 8 2.0 10.0 4 7 3.0 20.0 8 4.0 20.0 1 3 7 5.0 30.0 8 6.0 30.0 4 7 7.0 40.0 8 8.0 40.0 2 3 7 9.0 50.0 8 10.0 50.0 4 7 11.0 60.0 8 12.0 60.0 ``` The solution should work joining on multiple index levels. Thank you for your help!
2018/05/22
[ "https://Stackoverflow.com/questions/50477220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6461882/" ]
You can use [`pd.Index.get_level_values`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html) and map a series from `t2`: ``` t1['y'] = t1.index.get_level_values(0).map(t2['y'].get) print(t1) x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ```
A slow way to do the join in the 2nd example: ```py for col in t2.columns: for i2 in t2.index: t1.loc[i2+(slice(None),),col] = t2.loc[i2,col] ``` The task is to vectorize it and to put slice(None) automatically in the correct locations while creating a t1 index item. Vectorized version for the 2nd example: ```py m = list(zip(t1.index.get_level_values('a1'), t1.index.get_level_values('a2'))) t1 = t1.assign(**dict(zip(t2.columns,[np.nan]*len(t2.columns)))) t1[t2.columns] = t2.loc[m,:].values ``` Vectorized version for the 1st example: ```py m = t1.index.get_level_values('a1') t1 = t1.assign(**dict(zip(t2.columns,[np.nan]*len(t2.columns)))) t1[t2.columns] = t2.loc[m,:].values ```
50,477,220
How can one join 2 pandas DataFrames on MultiIndex with different number of levels? ```py import pandas as pd t1 = pd.DataFrame(data={'a1':[0,0,1,1,2,2], 'a2':[0,1,0,1,0,1], 'x':[1.,2.,3.,4.,5.,6.]}) t1.set_index(['a1','a2'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,1,2], 'y':[20.,40.,60.]}) t2.set_index(['b1'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 0 0 1.0 1 2.0 1 0 3.0 1 4.0 2 0 5.0 1 6.0 >>> t2 y b1 0 20.0 1 40.0 2 60.0 ``` Expected result for joining on 'a1' => 'b1': ```none x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` Another example: joining on ['a1','a2'] => ['b1','b2']: ```py import pandas as pd, numpy as np t1 = pd.DataFrame(data={'a1':[0,0,0,0,1,1,1,1,2,2,2,2], 'a2':[3,3,4,4,3,3,4,4,3,3,4,4], 'a3':[7,8,7,8,7,8,7,8,7,8,7,8], 'x':[1.,2.,3.,4.,5.,6.,7.,8.,9.,10.,11.,12.]}) t1.set_index(['a1','a2','a3'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,0,1,1,2,2], 'b2':[3,4,3,4,3,4], 'y':[10.,20.,30.,40.,50.,60.]}) t2.set_index(['b1','b2'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 a3 0 3 7 1.0 8 2.0 4 7 3.0 8 4.0 1 3 7 5.0 8 6.0 4 7 7.0 8 8.0 2 3 7 9.0 8 10.0 4 7 11.0 8 12.0 >>> t2 y b1 b2 0 3 10.0 4 20.0 1 3 30.0 4 40.0 2 3 50.0 4 60.0 ``` Expected result for joining on ['a1','a2'] => ['b1','b2']: ```none x y a1 a2 a3 0 3 7 1.0 10.0 8 2.0 10.0 4 7 3.0 20.0 8 4.0 20.0 1 3 7 5.0 30.0 8 6.0 30.0 4 7 7.0 40.0 8 8.0 40.0 2 3 7 9.0 50.0 8 10.0 50.0 4 7 11.0 60.0 8 12.0 60.0 ``` The solution should work joining on multiple index levels. Thank you for your help!
2018/05/22
[ "https://Stackoverflow.com/questions/50477220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6461882/" ]
Solution to the 1st example: ```py t1.reset_index('a2', drop=False).join(t2 ).rename_axis('a1').set_index('a2', append=True) ``` Solution to the 2nd example: ```py t1.reset_index('a3', drop=False).join( t2.rename_axis(index={'b1':'a1', 'b2':'a2'}) ).set_index('a3', append=True) ```
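A note on the solution above: on newer pandas versions, `join` can align directly on shared index level names when the right frame's index levels are a subset of the left's, so the `reset_index`/`set_index` round-trip for the 2nd example may be avoidable. A sketch under that assumption, with the data rebuilt from the question (shortened to 4 rows):

```python
import pandas as pd

# a shortened version of the 2nd example's frames
t1 = pd.DataFrame({'a1': [0, 0, 1, 1], 'a2': [3, 4, 3, 4],
                   'a3': [7, 8, 7, 8], 'x': [1., 2., 3., 4.]}
                  ).set_index(['a1', 'a2', 'a3']).sort_index()
t2 = pd.DataFrame({'b1': [0, 0, 1, 1], 'b2': [3, 4, 3, 4],
                   'y': [10., 20., 30., 40.]}
                  ).set_index(['b1', 'b2']).sort_index()

# rename t2's index levels to match t1's, then join on the overlapping names
result = t1.join(t2.rename_axis(index={'b1': 'a1', 'b2': 'a2'}))
print(result['y'].tolist())  # [10.0, 20.0, 30.0, 40.0]
```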
You can use [`pd.Index.get_level_values`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html) and map a series from `t2`: ``` t1['y'] = t1.index.get_level_values(0).map(t2['y'].get) print(t1) x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ```
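For the multi-level case in the 2nd example, the same lookup idea can be extended by building tuple keys from the relevant level values. A sketch assuming pandas, with the data rebuilt (and shortened) from the question; `.loc` is used for the lookup here, which is equivalent in spirit to the `.get` mapping but raises on missing keys instead of returning a default:

```python
import pandas as pd

t1 = pd.DataFrame({'a1': [0, 0, 1, 1], 'a2': [3, 4, 3, 4],
                   'a3': [7, 8, 7, 8], 'x': [1., 2., 3., 4.]}
                  ).set_index(['a1', 'a2', 'a3']).sort_index()
t2 = pd.DataFrame({'b1': [0, 0, 1, 1], 'b2': [3, 4, 3, 4],
                   'y': [10., 20., 30., 40.]}
                  ).set_index(['b1', 'b2']).sort_index()

# build an (a1, a2) tuple key for every row of t1 and look each one up in t2['y']
keys = zip(t1.index.get_level_values('a1'), t1.index.get_level_values('a2'))
t1['y'] = [t2['y'].loc[k] for k in keys]
print(t1['y'].tolist())  # [10.0, 20.0, 30.0, 40.0]
```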
50,477,220
How can one join 2 pandas DataFrames on MultiIndex with different number of levels? ```py import pandas as pd t1 = pd.DataFrame(data={'a1':[0,0,1,1,2,2], 'a2':[0,1,0,1,0,1], 'x':[1.,2.,3.,4.,5.,6.]}) t1.set_index(['a1','a2'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,1,2], 'y':[20.,40.,60.]}) t2.set_index(['b1'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 0 0 1.0 1 2.0 1 0 3.0 1 4.0 2 0 5.0 1 6.0 >>> t2 y b1 0 20.0 1 40.0 2 60.0 ``` Expected result for joining on 'a1' => 'b1': ```none x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` Another example: joining on ['a1','a2'] => ['b1','b2']: ```py import pandas as pd, numpy as np t1 = pd.DataFrame(data={'a1':[0,0,0,0,1,1,1,1,2,2,2,2], 'a2':[3,3,4,4,3,3,4,4,3,3,4,4], 'a3':[7,8,7,8,7,8,7,8,7,8,7,8], 'x':[1.,2.,3.,4.,5.,6.,7.,8.,9.,10.,11.,12.]}) t1.set_index(['a1','a2','a3'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,0,1,1,2,2], 'b2':[3,4,3,4,3,4], 'y':[10.,20.,30.,40.,50.,60.]}) t2.set_index(['b1','b2'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 a3 0 3 7 1.0 8 2.0 4 7 3.0 8 4.0 1 3 7 5.0 8 6.0 4 7 7.0 8 8.0 2 3 7 9.0 8 10.0 4 7 11.0 8 12.0 >>> t2 y b1 b2 0 3 10.0 4 20.0 1 3 30.0 4 40.0 2 3 50.0 4 60.0 ``` Expected result for joining on ['a1','a2'] => ['b1','b2']: ```none x y a1 a2 a3 0 3 7 1.0 10.0 8 2.0 10.0 4 7 3.0 20.0 8 4.0 20.0 1 3 7 5.0 30.0 8 6.0 30.0 4 7 7.0 40.0 8 8.0 40.0 2 3 7 9.0 50.0 8 10.0 50.0 4 7 11.0 60.0 8 12.0 60.0 ``` The solution should work joining on multiple index levels. Thank you for your help!
2018/05/22
[ "https://Stackoverflow.com/questions/50477220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6461882/" ]
You could merge `t1` and `t2` directly on the index level named `a1` in `t1`, and the single index of `t2`: ``` t1.merge(t2, left_on = t1.index.get_level_values('a1').values, right_index=True) x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ```
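For the 2nd example, the same merge can be written with a list of level-value arrays as `left_on`. A sketch (assuming pandas; the row order and index of a merge result can differ between versions, so only the joined values are checked here):

```python
import pandas as pd

t1 = pd.DataFrame({'a1': [0, 0, 1, 1], 'a2': [3, 4, 3, 4],
                   'a3': [7, 8, 7, 8], 'x': [1., 2., 3., 4.]}
                  ).set_index(['a1', 'a2', 'a3']).sort_index()
t2 = pd.DataFrame({'b1': [0, 0, 1, 1], 'b2': [3, 4, 3, 4],
                   'y': [10., 20., 30., 40.]}
                  ).set_index(['b1', 'b2']).sort_index()

# merge on two arrays of level values against t2's two-level index
result = t1.merge(t2,
                  left_on=[t1.index.get_level_values('a1'),
                           t1.index.get_level_values('a2')],
                  right_index=True)
```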
Use [`reindex`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reindex.html) on `t2`, setting the `level` parameter as appropriate, and directly assign to `t1`: ``` t1['y'] = t2['y'].reindex(t1.index, level='a1') x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` To reindex on multiple levels, simply pass a list as the `level` parameter, e.g. `['a1', 'a2']`.
50,477,220
How can one join 2 pandas DataFrames on MultiIndex with different number of levels? ```py import pandas as pd t1 = pd.DataFrame(data={'a1':[0,0,1,1,2,2], 'a2':[0,1,0,1,0,1], 'x':[1.,2.,3.,4.,5.,6.]}) t1.set_index(['a1','a2'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,1,2], 'y':[20.,40.,60.]}) t2.set_index(['b1'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 0 0 1.0 1 2.0 1 0 3.0 1 4.0 2 0 5.0 1 6.0 >>> t2 y b1 0 20.0 1 40.0 2 60.0 ``` Expected result for joining on 'a1' => 'b1': ```none x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` Another example: joining on ['a1','a2'] => ['b1','b2']: ```py import pandas as pd, numpy as np t1 = pd.DataFrame(data={'a1':[0,0,0,0,1,1,1,1,2,2,2,2], 'a2':[3,3,4,4,3,3,4,4,3,3,4,4], 'a3':[7,8,7,8,7,8,7,8,7,8,7,8], 'x':[1.,2.,3.,4.,5.,6.,7.,8.,9.,10.,11.,12.]}) t1.set_index(['a1','a2','a3'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,0,1,1,2,2], 'b2':[3,4,3,4,3,4], 'y':[10.,20.,30.,40.,50.,60.]}) t2.set_index(['b1','b2'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 a3 0 3 7 1.0 8 2.0 4 7 3.0 8 4.0 1 3 7 5.0 8 6.0 4 7 7.0 8 8.0 2 3 7 9.0 8 10.0 4 7 11.0 8 12.0 >>> t2 y b1 b2 0 3 10.0 4 20.0 1 3 30.0 4 40.0 2 3 50.0 4 60.0 ``` Expected result for joining on ['a1','a2'] => ['b1','b2']: ```none x y a1 a2 a3 0 3 7 1.0 10.0 8 2.0 10.0 4 7 3.0 20.0 8 4.0 20.0 1 3 7 5.0 30.0 8 6.0 30.0 4 7 7.0 40.0 8 8.0 40.0 2 3 7 9.0 50.0 8 10.0 50.0 4 7 11.0 60.0 8 12.0 60.0 ``` The solution should work joining on multiple index levels. Thank you for your help!
2018/05/22
[ "https://Stackoverflow.com/questions/50477220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6461882/" ]
You could merge `t1` and `t2` directly on the index level named `a1` in `t1`, and the single index of `t2`: ``` t1.merge(t2, left_on = t1.index.get_level_values('a1').values, right_index=True) x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ```
A slow way to do the join in the 2nd example: ```py for col in t2.columns: for i2 in t2.index: t1.loc[i2+(slice(None),),col] = t2.loc[i2,col] ``` The task is to vectorize it and to put slice(None) automatically in the correct locations while creating a t1 index item. Vectorized version for the 2nd example: ```py m = list(zip(t1.index.get_level_values('a1'), t1.index.get_level_values('a2'))) t1 = t1.assign(**dict(zip(t2.columns,[np.nan]*len(t2.columns)))) t1[t2.columns] = t2.loc[m,:].values ``` Vectorized version for the 1st example: ```py m = t1.index.get_level_values('a1') t1 = t1.assign(**dict(zip(t2.columns,[np.nan]*len(t2.columns)))) t1[t2.columns] = t2.loc[m,:].values ```
50,477,220
How can one join 2 pandas DataFrames on MultiIndex with different number of levels? ```py import pandas as pd t1 = pd.DataFrame(data={'a1':[0,0,1,1,2,2], 'a2':[0,1,0,1,0,1], 'x':[1.,2.,3.,4.,5.,6.]}) t1.set_index(['a1','a2'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,1,2], 'y':[20.,40.,60.]}) t2.set_index(['b1'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 0 0 1.0 1 2.0 1 0 3.0 1 4.0 2 0 5.0 1 6.0 >>> t2 y b1 0 20.0 1 40.0 2 60.0 ``` Expected result for joining on 'a1' => 'b1': ```none x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` Another example: joining on ['a1','a2'] => ['b1','b2']: ```py import pandas as pd, numpy as np t1 = pd.DataFrame(data={'a1':[0,0,0,0,1,1,1,1,2,2,2,2], 'a2':[3,3,4,4,3,3,4,4,3,3,4,4], 'a3':[7,8,7,8,7,8,7,8,7,8,7,8], 'x':[1.,2.,3.,4.,5.,6.,7.,8.,9.,10.,11.,12.]}) t1.set_index(['a1','a2','a3'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,0,1,1,2,2], 'b2':[3,4,3,4,3,4], 'y':[10.,20.,30.,40.,50.,60.]}) t2.set_index(['b1','b2'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 a3 0 3 7 1.0 8 2.0 4 7 3.0 8 4.0 1 3 7 5.0 8 6.0 4 7 7.0 8 8.0 2 3 7 9.0 8 10.0 4 7 11.0 8 12.0 >>> t2 y b1 b2 0 3 10.0 4 20.0 1 3 30.0 4 40.0 2 3 50.0 4 60.0 ``` Expected result for joining on ['a1','a2'] => ['b1','b2']: ```none x y a1 a2 a3 0 3 7 1.0 10.0 8 2.0 10.0 4 7 3.0 20.0 8 4.0 20.0 1 3 7 5.0 30.0 8 6.0 30.0 4 7 7.0 40.0 8 8.0 40.0 2 3 7 9.0 50.0 8 10.0 50.0 4 7 11.0 60.0 8 12.0 60.0 ``` The solution should work joining on multiple index levels. Thank you for your help!
2018/05/22
[ "https://Stackoverflow.com/questions/50477220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6461882/" ]
Solution to the 1st example: ```py t1.reset_index('a2', drop=False).join(t2 ).rename_axis('a1').set_index('a2', append=True) ``` Solution to the 2nd example: ```py t1.reset_index('a3', drop=False).join( t2.rename_axis(index={'b1':'a1', 'b2':'a2'}) ).set_index('a3', append=True) ```
You could merge `t1` and `t2` directly on the index level named `a1` in `t1`, and the single index of `t2`: ``` t1.merge(t2, left_on = t1.index.get_level_values('a1').values, right_index=True) x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ```
50,477,220
How can one join 2 pandas DataFrames on MultiIndex with different number of levels? ```py import pandas as pd t1 = pd.DataFrame(data={'a1':[0,0,1,1,2,2], 'a2':[0,1,0,1,0,1], 'x':[1.,2.,3.,4.,5.,6.]}) t1.set_index(['a1','a2'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,1,2], 'y':[20.,40.,60.]}) t2.set_index(['b1'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 0 0 1.0 1 2.0 1 0 3.0 1 4.0 2 0 5.0 1 6.0 >>> t2 y b1 0 20.0 1 40.0 2 60.0 ``` Expected result for joining on 'a1' => 'b1': ```none x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` Another example: joining on ['a1','a2'] => ['b1','b2']: ```py import pandas as pd, numpy as np t1 = pd.DataFrame(data={'a1':[0,0,0,0,1,1,1,1,2,2,2,2], 'a2':[3,3,4,4,3,3,4,4,3,3,4,4], 'a3':[7,8,7,8,7,8,7,8,7,8,7,8], 'x':[1.,2.,3.,4.,5.,6.,7.,8.,9.,10.,11.,12.]}) t1.set_index(['a1','a2','a3'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,0,1,1,2,2], 'b2':[3,4,3,4,3,4], 'y':[10.,20.,30.,40.,50.,60.]}) t2.set_index(['b1','b2'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 a3 0 3 7 1.0 8 2.0 4 7 3.0 8 4.0 1 3 7 5.0 8 6.0 4 7 7.0 8 8.0 2 3 7 9.0 8 10.0 4 7 11.0 8 12.0 >>> t2 y b1 b2 0 3 10.0 4 20.0 1 3 30.0 4 40.0 2 3 50.0 4 60.0 ``` Expected result for joining on ['a1','a2'] => ['b1','b2']: ```none x y a1 a2 a3 0 3 7 1.0 10.0 8 2.0 10.0 4 7 3.0 20.0 8 4.0 20.0 1 3 7 5.0 30.0 8 6.0 30.0 4 7 7.0 40.0 8 8.0 40.0 2 3 7 9.0 50.0 8 10.0 50.0 4 7 11.0 60.0 8 12.0 60.0 ``` The solution should work joining on multiple index levels. Thank you for your help!
2018/05/22
[ "https://Stackoverflow.com/questions/50477220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6461882/" ]
Solution to the 1st example: ```py t1.reset_index('a2', drop=False).join(t2 ).rename_axis('a1').set_index('a2', append=True) ``` Solution to the 2nd example: ```py t1.reset_index('a3', drop=False).join( t2.rename_axis(index={'b1':'a1', 'b2':'a2'}) ).set_index('a3', append=True) ```
Use [`reindex`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reindex.html) on `t2`, setting the `level` parameter as appropriate, and directly assign to `t1`: ``` t1['y'] = t2['y'].reindex(t1.index, level='a1') x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` To reindex on multiple levels, simply pass a list as the `level` parameter, e.g. `['a1', 'a2']`.
50,477,220
How can one join 2 pandas DataFrames on MultiIndex with different number of levels? ```py import pandas as pd t1 = pd.DataFrame(data={'a1':[0,0,1,1,2,2], 'a2':[0,1,0,1,0,1], 'x':[1.,2.,3.,4.,5.,6.]}) t1.set_index(['a1','a2'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,1,2], 'y':[20.,40.,60.]}) t2.set_index(['b1'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 0 0 1.0 1 2.0 1 0 3.0 1 4.0 2 0 5.0 1 6.0 >>> t2 y b1 0 20.0 1 40.0 2 60.0 ``` Expected result for joining on 'a1' => 'b1': ```none x y a1 a2 0 0 1.0 20.0 1 2.0 20.0 1 0 3.0 40.0 1 4.0 40.0 2 0 5.0 60.0 1 6.0 60.0 ``` Another example: joining on ['a1','a2'] => ['b1','b2']: ```py import pandas as pd, numpy as np t1 = pd.DataFrame(data={'a1':[0,0,0,0,1,1,1,1,2,2,2,2], 'a2':[3,3,4,4,3,3,4,4,3,3,4,4], 'a3':[7,8,7,8,7,8,7,8,7,8,7,8], 'x':[1.,2.,3.,4.,5.,6.,7.,8.,9.,10.,11.,12.]}) t1.set_index(['a1','a2','a3'], inplace=True) t1.sort_index(inplace=True) t2 = pd.DataFrame(data={'b1':[0,0,1,1,2,2], 'b2':[3,4,3,4,3,4], 'y':[10.,20.,30.,40.,50.,60.]}) t2.set_index(['b1','b2'], inplace=True) t2.sort_index(inplace=True) ``` ```none >>> t1 x a1 a2 a3 0 3 7 1.0 8 2.0 4 7 3.0 8 4.0 1 3 7 5.0 8 6.0 4 7 7.0 8 8.0 2 3 7 9.0 8 10.0 4 7 11.0 8 12.0 >>> t2 y b1 b2 0 3 10.0 4 20.0 1 3 30.0 4 40.0 2 3 50.0 4 60.0 ``` Expected result for joining on ['a1','a2'] => ['b1','b2']: ```none x y a1 a2 a3 0 3 7 1.0 10.0 8 2.0 10.0 4 7 3.0 20.0 8 4.0 20.0 1 3 7 5.0 30.0 8 6.0 30.0 4 7 7.0 40.0 8 8.0 40.0 2 3 7 9.0 50.0 8 10.0 50.0 4 7 11.0 60.0 8 12.0 60.0 ``` The solution should work joining on multiple index levels. Thank you for your help!
2018/05/22
[ "https://Stackoverflow.com/questions/50477220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6461882/" ]
Solution to the 1st example: ```py t1.reset_index('a2', drop=False).join(t2 ).rename_axis('a1').set_index('a2', append=True) ``` Solution to the 2nd example: ```py t1.reset_index('a3', drop=False).join( t2.rename_axis(index={'b1':'a1', 'b2':'a2'}) ).set_index('a3', append=True) ```
A slow way to do the join in the 2nd example: ```py for col in t2.columns: for i2 in t2.index: t1.loc[i2+(slice(None),),col] = t2.loc[i2,col] ``` The task is to vectorize it and to put slice(None) automatically in the correct locations while creating a t1 index item. Vectorized version for the 2nd example: ```py m = list(zip(t1.index.get_level_values('a1'), t1.index.get_level_values('a2'))) t1 = t1.assign(**dict(zip(t2.columns,[np.nan]*len(t2.columns)))) t1[t2.columns] = t2.loc[m,:].values ``` Vectorized version for the 1st example: ```py m = t1.index.get_level_values('a1') t1 = t1.assign(**dict(zip(t2.columns,[np.nan]*len(t2.columns)))) t1[t2.columns] = t2.loc[m,:].values ```
13,354,169
I am doing direct queries to Wikipedia's website via an Android app. However, sometimes I get "REDIRECT" responses, and when the redirect suggestion has a # sign in it, I don't really know how to handle it. I know I can just throw everything out past the # sign, but that means I'm discarding something that may be important. Here is an example link: <http://en.wikipedia.org/w/api.php?action=query&prop=extracts&titles=Poof_(The_Fairly_OddParents)&format=json> I can just redirect to List\_of\_The\_Fairly\_OddParents\_characters, but I want to know if there is anything useful I can do with the "#Poof", because I'm sure that the information is there for a reason. I could not find anything on the MediaWiki APIs. Please help! Thanks
2012/11/13
[ "https://Stackoverflow.com/questions/13354169", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1387888/" ]
Most of the time the part after the hash is a section title (though it is simply appended to the new URL, so it can be, and occasionally is, used to point to other kinds of anchors), so you can locate the section (with the `mobileview` action for example) and with the section index you can query the HTML for that section. (`extracts` does not support sections, so you must use something else and enforce length limits manually.)
The hash points to a named anchor or ID in the target document. So if you're parsing the page content, then ideally you'd find the anchor tag or ID and return the content that follows that element, rather than starting at the beginning of the content.
4,847,558
I've searched Google and other places for a while, but I can't find out how to generate or create dotted pairs. I ask because I need to turn a list that looks like this: ``` (X Y Z) ``` into a list that is in this format: ``` ((X . 1) (Y . 2) (Z . 3)) ``` where the numbers represent the index. I have a function that turns the list into the format of ``` (X 1 Y 2 Z 3) ``` Here is that function: ``` (defun listFormat (l) (defun place-index (idx l) (if (null l) nil (append (list (first l)) (list idx) (place-index (+ idx 1) (rest l))))) (place-index 1 l)) ``` But I'm not sure how to get dotted pairs. Thanks in advance.
2011/01/31
[ "https://Stackoverflow.com/questions/4847558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1188934/" ]
Your code has a very basic mistake: ``` (defun listFormat (l) (defun place-index (idx l) ; <<<---- This DEFUN is wrong (if (null l) nil (append (list (first l)) (list idx) (place-index (+ idx 1) (rest l))))) (place-index 1 l)) ``` Don't nest DEFUN. That's just **wrong**. DEFUN defines a global function. Whenever you run listFormat, it redefines the **GLOBAL** function PLACE-INDEX. You may have seen similarly nested functions in SCHEME using DEFINE. In Common Lisp you should not use DEFUN for a nested local function. In Lisp local functions are defined with FLET or LABELS (for recursive functions). ``` (defun listFormat (l) (labels ((place-index (idx l) (if (null l) nil (append (list (first l)) (list idx) (place-index (+ idx 1) (rest l)))))) (place-index 1 l))) ``` Also, Stack Overflow is the wrong place to solve your homework. Google searches are also the wrong way to learn Lisp programming. I propose the good old way of reading an introductory book and using a reference. Here is a basic **introductory Lisp book** for download: [Common Lisp: A Gentle Introduction to Symbolic Computation](http://www.cs.cmu.edu/~dst/LispBook/). **Reference sheets**: a small [Common Lisp Quick Reference](http://www.ic.unicamp.br/~zanoni/mc336/2008-2s/lisp/CommonLisp/quick-reference.pdf) (PDF) and a more detailed [Common Lisp Quick Reference](http://clqr.berlios.de/). *Dotted pairs* are called *conses* in Lisp. See the real online reference for Common Lisp, the [Common Lisp HyperSpec](http://www.lispworks.com/documentation/HyperSpec/Body/14_.htm).
You want the else-branch to read: ``` (cons (cons (first l) idx) (place-index (+ idx 1) (rest l))) ```
4,847,558
I've searched Google and other places for a while, but I can't find out how to generate or create dotted pairs. I ask because I need to turn a list that looks like this: ``` (X Y Z) ``` into a list that is in this format: ``` ((X . 1) (Y . 2) (Z . 3)) ``` where the numbers represent the index. I have a function that turns the list into the format of ``` (X 1 Y 2 Z 3) ``` Here is that function: ``` (defun listFormat (l) (defun place-index (idx l) (if (null l) nil (append (list (first l)) (list idx) (place-index (+ idx 1) (rest l))))) (place-index 1 l)) ``` But I'm not sure how to get dotted pairs. Thanks in advance.
2011/01/31
[ "https://Stackoverflow.com/questions/4847558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1188934/" ]
You want the else-branch to read: ``` (cons (cons (first l) idx) (place-index (+ idx 1) (rest l))) ```
And by the way, for the question itself, this code will do: ``` (defun listFormat (lst) (loop for idx from 1 for item in lst collect (cons item idx))) ```
4,847,558
I've searched Google and other places for a while, but I can't find out how to generate or create dotted pairs. I ask because I need to turn a list that looks like this: ``` (X Y Z) ``` into a list that is in this format: ``` ((X . 1) (Y . 2) (Z . 3)) ``` where the numbers represent the index. I have a function that turns the list into the format of ``` (X 1 Y 2 Z 3) ``` Here is that function: ``` (defun listFormat (l) (defun place-index (idx l) (if (null l) nil (append (list (first l)) (list idx) (place-index (+ idx 1) (rest l))))) (place-index 1 l)) ``` But I'm not sure how to get dotted pairs. Thanks in advance.
2011/01/31
[ "https://Stackoverflow.com/questions/4847558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1188934/" ]
Your code has a very basic mistake: ``` (defun listFormat (l) (defun place-index (idx l) ; <<<---- This DEFUN is wrong (if (null l) nil (append (list (first l)) (list idx) (place-index (+ idx 1) (rest l))))) (place-index 1 l)) ``` Don't nest DEFUN. That's just **wrong**. DEFUN defines a global function. Whenever you run listFormat, it redefines the **GLOBAL** function PLACE-INDEX. You may have seen similarly nested functions in SCHEME using DEFINE. In Common Lisp you should not use DEFUN for a nested local function. In Lisp local functions are defined with FLET or LABELS (for recursive functions). ``` (defun listFormat (l) (labels ((place-index (idx l) (if (null l) nil (append (list (first l)) (list idx) (place-index (+ idx 1) (rest l)))))) (place-index 1 l))) ``` Also, Stack Overflow is the wrong place to solve your homework. Google searches are also the wrong way to learn Lisp programming. I propose the good old way of reading an introductory book and using a reference. Here is a basic **introductory Lisp book** for download: [Common Lisp: A Gentle Introduction to Symbolic Computation](http://www.cs.cmu.edu/~dst/LispBook/). **Reference sheets**: a small [Common Lisp Quick Reference](http://www.ic.unicamp.br/~zanoni/mc336/2008-2s/lisp/CommonLisp/quick-reference.pdf) (PDF) and a more detailed [Common Lisp Quick Reference](http://clqr.berlios.de/). *Dotted pairs* are called *conses* in Lisp. See the real online reference for Common Lisp, the [Common Lisp HyperSpec](http://www.lispworks.com/documentation/HyperSpec/Body/14_.htm).
And by the way, for the question itself, this code will do: ``` (defun listFormat (lst) (loop for idx from 1 for item in lst collect (cons item idx))) ```
2,843
As the core and mantle of the earth cools, it will reach a point where new crust cannot be produced. How can this point be calculated? If we can, has anyone done such calculations? Thanks!
2014/11/19
[ "https://earthscience.stackexchange.com/questions/2843", "https://earthscience.stackexchange.com", "https://earthscience.stackexchange.com/users/1061/" ]
One permanent threat to plate tectonics is the oceans vanishing. The scientific jury may still be out on this matter, but most geologists and geophysicists consider water to be the lubricant that makes plate tectonics possible. In a billion years or so, the Sun will have become 10% more luminous. This is conjectured to make the Earth undergo an unstoppable moist greenhouse / runaway greenhouse, and the oceans will vanish. The core is still emitting residual heat from the formation of the Earth. As the core cools, iron in the outer core freezes onto the inner core. This freezing is an additional source of heat. The solidification of the Earth's core thus represents another permanent threat to plate tectonics. The liquid outer core is conjectured to freeze solid less than three billion years from now. A nearer-term threat is the formation of the next supercontinent. Plate tectonics may operate in fits and starts. What makes subduction zones form is not known. What is known is that a major subduction zone vanished when India collided with Asia, and no new subduction zones formed elsewhere to take its place. If this conjecture is true, plate tectonics will temporarily stop in a few hundred million years when the next supercontinent forms, only to restart later when too much heat stress builds up inside the planet. This is all highly conjectural. No scientist will live to see their conjectures falsified. **References:** [Kasting (1988), "Runaway and moist greenhouse atmospheres and the evolution of Earth and Venus," *Icarus* 74.3 : 472-494](http://www.chriscunnings.com/uploads/2/0/7/7/20773630/runaway_greenhouse_venus.pdf) [McDonough (2003), "Compositional model for the Earth's core." In *Treatise on geochemistry 2* 547-568](http://adsabs.harvard.edu/abs/2003TrGeo...2..547M). [Silver & Behn (2008), "Intermittent plate tectonics?" *Science* 319.5859](http://www.sciencemag.org/content/319/5859/85).
The whole plate tectonic system is good for a few billion years yet. David is correct that water, as a lubricant, is needed to keep the system moving, but a 10% warming of the sun isn't enough to completely destroy the oceans. That will occur, but not until the sun moves close to, or into its red giant phase several billion years from now. The other factor is uncertainty as to how long there will be sufficient radioactive heat to overcome viscous drag of the convective cells which underlie the crustal plates. There has been much speculation, but I am not aware of any convincing analysis of how long this will take.
41,760,813
Here I want to join these three tables, namely `salary`, `payroll`, and `attendance`. First I want to join `salary.salary_template_id` with `employee.salary_template_id`, and after that I want to join `employee.user_id` with `attendance.user_id` in order to get the basic salary of each `user_id`. How can I join them? I am getting this error: > Unknown column 'tbl\_employee\_payroll.salary\_template\_id' in 'on clause' My salary table looks like this: ``` salary_template_id salary_grade basic_salary overtime_salary 1 a 10000 2 b 15000 ``` My employee table looks like this: ``` payroll_id user_id salary_template_id 1 1 NULL 2 36 2 3 43 1 ``` My attendance table looks like this: ``` attendance_id user_id leave_category_id date_in date_out attendance_status 13 36 0 2017-01-02 2017-01-02 1 14 36 3 2017-01-04 2017-01-04 3 ``` Here is my code: ``` public function attendance_report_by_empid($user_id = null, $sdate = null) { $this->db->select('attendance.*', FALSE); $this->db->select('employee.*', FALSE); $this->db->select('salary.*', FALSE); $this->db->from('attendance'); $this->db->join('salary', 'salary.salary_template_id = employee.salary_template_id', 'inner'); $this->db->join('employee', 'employee.user_id = attendance.user_id', 'left'); $this->db->where('attendance.user_id', $user_id); $this->db->where('attendance.date_in', $sdate); $this->db->where('attendance.date_out <=', $sdate); $query_result = $this->db->get(); $result = $query_result->result(); return $query; } ``` My result should be like this: ``` attendance_id user_id leave_category_id date_in date_out attendance_status salary 13 36 0 2017-01-02 2017-01-02 1 1000 14 36 3 2017-01-04 2017-01-04 3 1000 ```
2017/01/20
[ "https://Stackoverflow.com/questions/41760813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5064637/" ]
You can use LatLngBounds to make a fixed bound focus. ``` /**Latlng's to get focus*/ LatLng Delhi = new LatLng(28.61, 77.2099); LatLng Chandigarh = new LatLng(30.75, 76.78); LatLng SriLanka = new LatLng(7.000, 81.0000); LatLng America = new LatLng(38.8833, 77.0167); LatLng Arab = new LatLng(24.000, 45.000); /**create for loop/manual to add LatLng's to the LatLngBounds.Builder*/ LatLngBounds.Builder builder = new LatLngBounds.Builder(); builder.include(Delhi); builder.include(Chandigarh); builder.include(SriLanka); builder.include(America); builder.include(Arab); /**initialize the padding for map boundary*/ int padding = 50; /**create the bounds from latlngBuilder to set into map camera*/ LatLngBounds bounds = builder.build(); /**create the camera with bounds and padding to set into map*/ final CameraUpdate cu = CameraUpdateFactory.newLatLngBounds(bounds, padding); /**call the map call back to know map is loaded or not*/ map.setOnMapLoadedCallback(new GoogleMap.OnMapLoadedCallback() { @Override public void onMapLoaded() { /**set animated zoom camera into map*/ map.animateCamera(cu); } }); ``` As in the above code, if you have a list of coordinates `List<Coordinates>` (I am adding manual LatLngs in the code), add each coordinate's LatLng object to the LatLngBounds.Builder and then animate the camera; it will automatically zoom to cover all the included LatLngs.
``` mMap = googleMap; Log.d("mylog", "Added Markers"); mMap.addMarker(place1); mMap.addMarker(place2); int padding = 50; /**create the bounds from latlngBuilder to set into map camera*/ LatLngBounds.Builder builder = new LatLngBounds.Builder(); builder.include(new LatLng(28.429730, 77.055400)); builder.include(new LatLng(28.403000, 77.318800)); LatLngBounds bounds = builder.build(); /**create the camera with bounds and padding to set into map*/ final CameraUpdate cu = CameraUpdateFactory.newLatLngBounds(bounds, padding); /**call the map call back to know map is loaded or not*/ mMap.setOnMapLoadedCallback(new GoogleMap.OnMapLoadedCallback() { @Override public void onMapLoaded() { /**set animated zoom camera into map*/ mMap.animateCamera(cu); } }); ```
38,172,513
What is the best way to play a sound with a delay of 50ms or 100ms? Here is something I tried: ``` var beat = new Audio('/sound/BEAT.wav'); var time = 300; playbeats(); function playbeats(){ beat.cloneNode().play(); setTimeout(playbeats, time); } ``` This is working correctly, but my goal is to play BEAT.wav every 100ms. When I change the "time" variable to 100, it is so "laggy". My BEAT.wav is 721ms long (that's why I'm using cloneNode()). What alternatives are there to solve this?
2016/07/03
[ "https://Stackoverflow.com/questions/38172513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6524615/" ]
You can use setInterval(); the arguments are the same. ``` setInterval(function() { playbeats(); }, 100); ``` and your playbeats function should be: ``` function playbeats(){ var tempBeat = beat.cloneNode(); tempBeat.play(); } ``` Your whole program should be like this: ``` var beat = new Audio('/sound/BEAT.wav'); setInterval(function() { playbeats(); }, 100); function playbeats(){ var tempBeat = beat.cloneNode(); tempBeat.play(); } ```
You can use the Web Audio API, but the code will be a bit different. If you want the Web Audio API's timing and loop capabilities, you will need to load the file into a buffer first. It also requires that your code is run on a server. Here is an example: ``` var audioContext = new AudioContext(); var audioBuffer; var getSound = new XMLHttpRequest(); getSound.open("get", "sound/BEAT.wav", true); getSound.responseType = "arraybuffer"; getSound.onload = function() { audioContext.decodeAudioData(getSound.response, function(buffer) { audioBuffer = buffer; }); }; getSound.send(); function playback() { var playSound = audioContext.createBufferSource(); playSound.buffer = audioBuffer; playSound.loop = true; playSound.connect(audioContext.destination); playSound.start(audioContext.currentTime, 0, 0.3); } window.addEventListener("mousedown", playback); ```
38,172,513
What is the best way to play a sound with a delay of 50ms or 100ms? Here is something I tried: ``` var beat = new Audio('/sound/BEAT.wav'); var time = 300; playbeats(); function playbeats(){ beat.cloneNode().play(); setTimeout(playbeats, time); } ``` This is working correctly, but my goal is to play BEAT.wav every 100ms. When I change the "time" variable to 100, it is so "laggy". My BEAT.wav is 721ms long (that's why I'm using cloneNode()). What alternatives are there to solve this?
2016/07/03
[ "https://Stackoverflow.com/questions/38172513", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6524615/" ]
You can use setInterval(); the arguments are the same. ``` setInterval(function() { playbeats(); }, 100); ``` and your playbeats function should be: ``` function playbeats(){ var tempBeat = beat.cloneNode(); tempBeat.play(); } ``` Your whole program should be like this: ``` var beat = new Audio('/sound/BEAT.wav'); setInterval(function() { playbeats(); }, 100); function playbeats(){ var tempBeat = beat.cloneNode(); tempBeat.play(); } ```
I would also recommend using the Web Audio API. From there, you can simply loop a buffer source node every 100ms or 50ms or whatever time you want. To do this, as stated in other responses, you'll need to use an XMLHttpRequest to load the sound file via a server: ``` // set up the Web Audio context var audioCtx = new AudioContext(); // create a new buffer // 2 channels, 4410 samples (100 ms at 44100 samples/sec), 44100 samples per sec var buffer = audioCtx.createBuffer(2, 4410, 44100); // load the sound file via an XMLHttpRequest from a server var request = new XMLHttpRequest(); request.open('GET', '/sound/BEAT.wav', true); request.responseType = 'arraybuffer'; request.onload = function () { var audioData = request.response; audioCtx.decodeAudioData(audioData, function (newBuffer) { buffer = newBuffer; }); } request.send(); ``` Now you can make a Buffer Source Node to loop the playback: ``` // create the buffer source var bufferSource = audioCtx.createBufferSource(); // set the buffer we want to use bufferSource.buffer = buffer; // set the buffer source node to loop bufferSource.loop = true; // specify the loop points in seconds (0.1s = 100ms) // this is a little redundant since we already set our buffer to be 100ms // so by default it would loop when the buffer comes to an end (at 100ms) bufferSource.loopStart = 0; bufferSource.loopEnd = 0.1; // connect the buffer source to the Web Audio sound output bufferSource.connect(audioCtx.destination); // play! bufferSource.start(); ``` Note that if you stop the playback via `bufferSource.stop()`, you will not be able to start it again. You can only call `start()` once, so you'll need to create a new source node if you want to start playback again. Note that because of the way the sound file is loaded via an `XMLHttpRequest`, if you try to test this on your machine without running a server, you'll get a cross-origin request error on most browsers. So the simplest way to get around this, if you want to test this on your machine, is to run a Python SimpleHTTPServer.
2,203,455
There are 7 balls of different colors and 4 identical boxes; a box can be empty. How many ways are there to distribute the balls? What kind of counting problem is this, and how do we count it?
2017/03/26
[ "https://math.stackexchange.com/questions/2203455", "https://math.stackexchange.com", "https://math.stackexchange.com/users/387221/" ]
We have $7$ objects (think of them as the elements of a set) and we want to partition these objects into $4$ subsets. We can have $1$ nonempty subset (and the other $3$ consequently empty), $2$ nonempty subsets (and the rest empty), $3$ nonempty subsets (and the rest empty), or $4$ nonempty subsets. We can't have $0$ nonempty subsets, because that would mean there are no balls in any of the boxes. The Stirling number of the second kind $S(n,k)$ counts the number of ways to partition a set of $n$ objects into $k$ non-empty subsets. So the answer is, as you claim, $$S(7,1)+S(7,2)+S(7,3)+S(7,4)$$
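As a quick numeric check of the sum above, here is a minimal sketch (in JavaScript, the language used elsewhere in this dump) computing Stirling numbers of the second kind with the standard recurrence $S(n,k) = k\,S(n-1,k) + S(n-1,k-1)$:

```javascript
// Stirling numbers of the second kind via the standard recurrence
// S(n, k) = k * S(n-1, k) + S(n-1, k-1), with S(0, 0) = 1.
function stirling2(n, k) {
  if (n === 0 && k === 0) return 1;
  if (n === 0 || k === 0) return 0;
  return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1);
}

// sum over 1..4 non-empty boxes
const total = [1, 2, 3, 4]
  .map(k => stirling2(7, k))
  .reduce((a, b) => a + b, 0);

console.log(total); // 715
```

This gives $S(7,1)=1$, $S(7,2)=63$, $S(7,3)=301$, $S(7,4)=350$, summing to $715$.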
The answer can be obtained without using Stirling numbers as $1+63+\dfrac{4^7 -1\cdot4 - 63\cdot12}{4!}= 715$, as explained below.

If all configurations had $4!$ permutations, we'd get $\;\dfrac{4^7}{4!}$, but two types don't, and we adjust for them:

* There is just $1$ way to place all the balls in one box, with only $4$ ways to permute it, so we add $1$ outside the fraction and subtract $1\cdot4$ in the numerator.
* There are $\binom71 +\binom72+\binom73 = 63$ ways to place the balls in exactly two boxes, with $12$ permutations each, so we again adjust similarly: add $63$ and subtract $63\cdot12$.
* Note that all four boxes filled will, of course, have $4!$ permutations, but so will $3$ boxes filled and one blank, because the blank box also assumes a distinct identity.
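The arithmetic in this adjusted count is easy to verify directly; a short sketch:

```javascript
// Check the direct count: 1 + 63 + (4^7 - 1*4 - 63*12) / 4!
const factorial4 = 4 * 3 * 2 * 1; // 24

// configurations with the full 4! permutations, after removing the
// one-box case (4 permutations) and the two-box cases (12 each)
const adjusted = (Math.pow(4, 7) - 1 * 4 - 63 * 12) / factorial4; // 651

const ways = 1 + 63 + adjusted;
console.log(ways); // 715
```

This agrees with the Stirling-number sum in the other answer.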
2,203,455
There are 7 balls of different colors and 4 identical boxes; a box can be empty. How many ways are there to distribute the balls? What kind of counting problem is this, and how do we count it?
2017/03/26
[ "https://math.stackexchange.com/questions/2203455", "https://math.stackexchange.com", "https://math.stackexchange.com/users/387221/" ]
From set $A$ (the $m$ balls) to set $B$ (the $n$ boxes), the number of onto functions is $n!\,S(m,n)$. If the boxes are identical, the order no longer matters, so there are $n!\,S(m,n)/n! = S(m,n)$ ways to distribute the $7$ balls. We can use $4$ boxes, $3$ boxes, $2$ boxes, or $1$ box: $S(7,4)+S(7,3)+S(7,2)+S(7,1)$
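Since the boxes are identical, each distribution is exactly a set partition of the balls into at most $4$ blocks. A brute-force sketch that enumerates these partitions canonically (each block is opened by its smallest ball, so every partition is generated once) confirms the total:

```javascript
// Brute-force check: count set partitions of n labelled balls into at most
// maxBlocks unlabelled, non-empty blocks (empty boxes contribute nothing).
function countPartitions(n, maxBlocks) {
  function place(ball, blocks) {
    if (ball === n) return 1;
    let total = 0;
    // put the next ball into an existing block...
    for (const block of blocks) {
      block.push(ball);
      total += place(ball + 1, blocks);
      block.pop();
    }
    // ...or open a new block, if we still may
    if (blocks.length < maxBlocks) {
      blocks.push([ball]);
      total += place(ball + 1, blocks);
      blocks.pop();
    }
    return total;
  }
  return place(0, []);
}

console.log(countPartitions(7, 4)); // 715
```

With `maxBlocks = n` this counts all set partitions (the Bell numbers), which is a handy sanity check.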
The answer can be obtained without using Stirling numbers as $1+63+\dfrac{4^7 -1\cdot4 - 63\cdot12}{4!}= 715$, as explained below.

If all configurations had $4!$ permutations, we'd get $\;\dfrac{4^7}{4!}$, but two types don't, and we adjust for them:

* There is just $1$ way to place all the balls in one box, with only $4$ ways to permute it, so we add $1$ outside the fraction and subtract $1\cdot4$ in the numerator.
* There are $\binom71 +\binom72+\binom73 = 63$ ways to place the balls in exactly two boxes, with $12$ permutations each, so we again adjust similarly: add $63$ and subtract $63\cdot12$.
* Note that all four boxes filled will, of course, have $4!$ permutations, but so will $3$ boxes filled and one blank, because the blank box also assumes a distinct identity.
40,698,535
Disclaimer: I am not going to use any other way; I have a specific requirement. All I want is to join from the posts table to the posts meta table so that I can get the featured image per post. I am able to get the posts via a where clause (post type is post, and published), but I don't know how to write the MySQL join to get the featured image of each post.
2016/11/19
[ "https://Stackoverflow.com/questions/40698535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4582545/" ]
You mention "i don't want to mix up js and jQuery on the following code", but you are actually mixing vanilla DOM APIs with jQuery methods: `.parent` and `.addClass` are jQuery functions. You can code:
```
btn.parentNode.classList.add("active");
```
You need `.parentNode` on the element. Something like this:
```
var clickedPath = this.getElement();
clickedPath.classList.add("active");

var classes = clickedPath.getAttribute('class').match(/\d+/g) || [];

buttons.forEach(function(btn) {
  var method = classes.indexOf(btn.getAttribute('data-date')) > -1 ? 'add' : 'remove';
  btn.classList[method]('active');
  btn.parentNode.classList.add("active");
});
```
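The key trick in this answer is picking the classList method name (`'add'` or `'remove'`) with a ternary and invoking it via bracket notation. A minimal runnable sketch of that pattern, using a hypothetical stand-in object (`fakeBtn`, `toggleActive` are illustration names, not DOM APIs) so it works outside a browser:

```javascript
// Choose 'add' or 'remove' by name, then call it via bracket notation.
function toggleActive(btn, classes) {
  const method = classes.indexOf(btn.date) > -1 ? "add" : "remove";
  btn.classList[method]("active");
}

// stand-in for a DOM element's classList, backed by a Set
const fakeBtn = {
  date: "2017",
  classList: {
    names: new Set(),
    add(c) { this.names.add(c); },
    remove(c) { this.names.delete(c); },
  },
};

toggleActive(fakeBtn, ["2016", "2017"]);
console.log(fakeBtn.classList.names.has("active")); // true

toggleActive(fakeBtn, ["2016"]);
console.log(fakeBtn.classList.names.has("active")); // false
```

On a real element, `btn.classList[method]('active')` behaves the same way, since `Element.classList` exposes `add` and `remove` as ordinary methods.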