qid
int64
1
74.7M
question
stringlengths
15
58.3k
date
stringlengths
10
10
metadata
list
response_j
stringlengths
4
30.2k
response_k
stringlengths
11
36.5k
114,392
I have two Ghost 14 backups of my machine: one of the machine fully configured with apps after an XP install, and one of the last update before I re-imaged it (it's XP, and I re-image about once every six months). I recently wanted to try simply using my initial image in a virtual environment to do the testing that generally causes me to need to re-image. I used the VMware Converter to convert the Ghost images to a virtual machine to use in VirtualBox, but they fail to properly boot. They get stuck after the BIOS loads and Windows begins loading. If I power down the machine and restart it, it goes to the error screen in Windows that asks if you would like to boot to a different mode; none of the modes makes any difference. What are some possible errors I should look for in the conversion process or in my settings for the converter?
2010/02/28
[ "https://superuser.com/questions/114392", "https://superuser.com", "https://superuser.com/users/23312/" ]
**SphereXP** - the world's number one three-dimensional desktop. ![alt text](https://i.stack.imgur.com/OJxbp.jpg) **[YODM 3D](http://yodm-3d.uptodown.com/en/)** - Virtual Desktop Manager featuring the Cube 3D effect ![alt text](https://i.stack.imgur.com/gCnS9.png) **[Madotate](http://madotate.en.softonic.com/)** - Manage windows more easily in a 3D desktop ![alt text](https://i.stack.imgur.com/rBey4.jpg) **[BumpTop™](http://bumptop.com/)** is a fun, intuitive 3D desktop that keeps you organized and makes you more productive. ![alt text](https://i.stack.imgur.com/B1mJK.jpg)
You should try the best: [BumpTop](http://bumptop.com/). ![enter image description here](https://i.stack.imgur.com/Hl6MY.jpg) It's been recommended a lot!
45,942,074
I am a new user to Android Studio. I have a zip package from Udacity that I am trying to open/run in Android Studio. The package is here - [Starter Code Zip File](https://d17h27t6h515a5.cloudfront.net/topher/2017/April/58e82999_firebase-analytics-green-thumb-android/firebase-analytics-green-thumb-android.zip "Starter Code Zip File"). I have no instructions as to how to get this into Android Studio. Left to my own devices, I am currently stuck at a place where, if I try to run the app, I get an error saying **"Please select Android SDK"**. I have checked everywhere for a solution to this, and the only two things I've found were: 1. Making sure the path to my Android SDK folder is correct within the Android Studio settings (it is) 2. Making sure the same path is matched in the project's **local.properties** file (it is) The fact that these are both set properly (I think) leads me to believe that I just did the whole import wrong and should start from scratch. But I don't really know what to do with that zip file to get it (properly) into Android Studio. Initially I just went to **File > Open** and then selected the project's **build.gradle** file from the zip package. Was that even the right thing to do to get this started?
2017/08/29
[ "https://Stackoverflow.com/questions/45942074", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8533181/" ]
“Please select Android SDK” is a problem with the configuration, not with the project. Some Android Studio updates can cause this problem. Solution: ``` File -> Settings -> Android SDK -> Android SDK Location -> Edit -> Android SDK ```
This was driving me nuts in Android Studio 3.0.1 on Windows 10. Fixed by: 1. Updating the USB driver for the device (Windows: Device Manager -> SAMSUNG Android Phone -> SAMSUNG Android ADB Interface, then right-click to "Update Driver"), and 2. In Android Studio: Tools -> Android -> ticking "Enable ADB Integration". I am very surprised, given the dominance of Windows on the desktop and Android on mobile devices, that this is such a struggle - I hope it saves someone the same hassle.
45,942,074
I am a new user to Android Studio. I have a zip package from Udacity that I am trying to open/run in Android Studio. The package is here - [Starter Code Zip File](https://d17h27t6h515a5.cloudfront.net/topher/2017/April/58e82999_firebase-analytics-green-thumb-android/firebase-analytics-green-thumb-android.zip "Starter Code Zip File"). I have no instructions as to how to get this into Android Studio. Left to my own devices, I am currently stuck at a place where, if I try to run the app, I get an error saying **"Please select Android SDK"**. I have checked everywhere for a solution to this, and the only two things I've found were: 1. Making sure the path to my Android SDK folder is correct within the Android Studio settings (it is) 2. Making sure the same path is matched in the project's **local.properties** file (it is) The fact that these are both set properly (I think) leads me to believe that I just did the whole import wrong and should start from scratch. But I don't really know what to do with that zip file to get it (properly) into Android Studio. Initially I just went to **File > Open** and then selected the project's **build.gradle** file from the zip package. Was that even the right thing to do to get this started?
2017/08/29
[ "https://Stackoverflow.com/questions/45942074", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8533181/" ]
“Please select Android SDK” is a problem with the configuration, not with the project. Some Android Studio updates can cause this problem. Solution: ``` File -> Settings -> Android SDK -> Android SDK Location -> Edit -> Android SDK ```
Please change your versionCode or versionName in build.gradle and sync. After syncing, you will be able to run, and you can change it back to the previous version again.
45,942,074
I am a new user to Android Studio. I have a zip package from Udacity that I am trying to open/run in Android Studio. The package is here - [Starter Code Zip File](https://d17h27t6h515a5.cloudfront.net/topher/2017/April/58e82999_firebase-analytics-green-thumb-android/firebase-analytics-green-thumb-android.zip "Starter Code Zip File"). I have no instructions as to how to get this into Android Studio. Left to my own devices, I am currently stuck at a place where, if I try to run the app, I get an error saying **"Please select Android SDK"**. I have checked everywhere for a solution to this, and the only two things I've found were: 1. Making sure the path to my Android SDK folder is correct within the Android Studio settings (it is) 2. Making sure the same path is matched in the project's **local.properties** file (it is) The fact that these are both set properly (I think) leads me to believe that I just did the whole import wrong and should start from scratch. But I don't really know what to do with that zip file to get it (properly) into Android Studio. Initially I just went to **File > Open** and then selected the project's **build.gradle** file from the zip package. Was that even the right thing to do to get this started?
2017/08/29
[ "https://Stackoverflow.com/questions/45942074", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8533181/" ]
“Please select Android SDK” is a problem with the configuration, not with the project. Some Android Studio updates can cause this problem. Solution: ``` File -> Settings -> Android SDK -> Android SDK Location -> Edit -> Android SDK ```
Try to sync `build.gradle`, **or** use ``` Tools -> Android -> Sync Project with Gradle ``` If it still doesn't work, restart Android Studio.
63,764,426
I came across something this morning which made me think... If you store a variable in Python (and, I assume, most languages) as x = 0.1, and then display this value to 30 decimal places, you get: '0.100000000000000005551115123126' I read an article online which explained that the number is stored in binary on the computer and the discrepancy is due to base conversion. My question is: how do physicists and nano-scientists get around this problem when they do computation? I always thought that if I input data into my calculator using scientific notation it would give me a reliably accurate result, but now I am wondering if this is really the case. There must be a simple solution? Thanks.
2020/09/06
[ "https://Stackoverflow.com/questions/63764426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10443133/" ]
Well, as a physicist I say these decimals are redundant in most cases - it really doesn't matter what's in the 16th decimal place. The precision of measurements doesn't reach that level (not even in QED). [Take a look here](https://physics.stackexchange.com/questions/497087/what-is-the-most-precise-physical-measurement-ever-performed): the highest-precision measurements are around 1 part in `10^13 - 10^14`. Applying this to your example: with 14 decimal places, `0.100000000000000005551115123126` becomes `0.100000000000000`, which doesn't introduce any error at all.
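A quick Python sketch (not part of the original answer) illustrates this point: formatting `0.1` to 14 decimal places rounds the binary representation error away entirely, while 30 places exposes it.

```python
# 0.1 cannot be represented exactly in binary floating point;
# the stored double is 0.1000000000000000055511151231257827...
x = 0.1

# At 30 decimal places the representation error is visible.
print(f"{x:.30f}")  # 0.100000000000000005551115123126

# At 14 decimal places (roughly the precision ceiling of the best
# physical measurements) the error rounds away completely.
print(f"{x:.14f}")  # 0.10000000000000
```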
There is the [decimal](https://docs.python.org/2/library/decimal.html) class in Python that can help you deal with this problem. But personally, when I work with money transactions, I don't want to have extra cents like 1€99000012. I convert amounts to cents, so I just have to manipulate and store integers.
63,764,426
I came across something this morning which made me think... If you store a variable in Python (and, I assume, most languages) as x = 0.1, and then display this value to 30 decimal places, you get: '0.100000000000000005551115123126' I read an article online which explained that the number is stored in binary on the computer and the discrepancy is due to base conversion. My question is: how do physicists and nano-scientists get around this problem when they do computation? I always thought that if I input data into my calculator using scientific notation it would give me a reliably accurate result, but now I am wondering if this is really the case. There must be a simple solution? Thanks.
2020/09/06
[ "https://Stackoverflow.com/questions/63764426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10443133/" ]
As always, it depends on the context (sorry for all the "normally" and "usually" words below). It also depends on the definition of "scientific". Below is an example of "physical", not "purely mathematical", modeling. Normally, when using a computer for scientific / engineering calculations: 1. you have reality 2. you have an analytical mathematical model of the reality 3. to solve the analytical model, you usually have to use some numerical approximation (e.g. the finite element method, some numerical scheme for time integration, ...) 4. you solve 3. using floating-point arithmetic Now, in the "model chain": * you lose accuracy from 1) reality to 2) the analytical mathematical model + most theories make some assumptions (neglecting relativity theory and using classical Newtonian mechanics, neglecting the effect of gravity, neglecting ...) + you don't know exactly all the boundary and initial conditions + you don't know exactly all the material properties + you don't know ... and have to make some assumptions * you lose accuracy from 2) the analytical to 3) the numerical model + by definition: the analytical solution is accurate, but usually practically unachievable + in the limit case of infinite computational resources, the numerical methods usually converge to the analytical solution, which is somewhat limited by the finite floating-point accuracy, but usually the resources are the limit * you lose some accuracy using floating-point arithmetic + in some cases it influences the numerical solution + there are approaches using exact numbers, but they are (usually much) more computationally expensive You have a lot of trade-offs in the "model chain" (between accuracy, computational costs, amount and quality of input data, ...). From a "practical" point of view, floating-point arithmetic is *not* fully negligible, but it is *usually* one of the least problems in the "model chain".
There is the [decimal](https://docs.python.org/2/library/decimal.html) class in Python that can help you deal with this problem. But personally, when I work with money transactions, I don't want to have extra cents like 1€99000012. I convert amounts to cents, so I just have to manipulate and store integers.
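Both approaches from this answer can be sketched briefly in Python; the price values below are made up for illustration.

```python
from decimal import Decimal

# decimal.Decimal does exact base-10 arithmetic, so the classic
# binary rounding surprise disappears.
assert 0.1 + 0.2 != 0.3                                   # binary floats
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")  # exact decimals

# Alternative: represent money as integer cents and only format
# to euros at display time.
price_cents = 199          # 1.99 €
total_cents = 3 * price_cents
print(f"{total_cents // 100}.{total_cents % 100:02d} €")  # 5.97 €
```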
55,242,248
In my view, I have an ajax behavior with a listener that updates a bean property, and then an "oncomplete" action that executes a JavaScript method. This is the ajax event: ```html <p:ajax event="rowDblselect" listener="#{backController.onRowDoubleClick}" oncomplete="openNewTab()" /> <h:inputHidden id="hutchy" value="#{backController.productViewerUrl}" /> ``` This is the bean method that should update the property: ``` public void onRowDoubleClick(final SelectEvent event) { RecordDTO currentRecordDTO = (RecordDTO) event.getObject(); setProductViewerUrl("https://www.google.com/search?q=" + currentRecordDTO.getName()); } public String getProductViewerUrl() { return productViewerUrl; } public void setProductViewerUrl(String productViewerUrl) { this.productViewerUrl = productViewerUrl; } ``` And thereafter, the JavaScript method that uses the updated property: ```js function openNewTab(){ var url = $('#pbm\\:hutchy').val(); var hiddenCode = "#{backController.productViewerUrl}"; alert(url + hiddenCode); window.open(url, '_newtab'); } ``` The problem is that the JavaScript code doesn't get the updated value of the property (even with the hidden field). I have done some debugging after the double-click event, and I found that execution doesn't pass through the getter method of the property when executing the JS (before the alert). Does anyone have an idea? Thanks in advance!
2019/03/19
[ "https://Stackoverflow.com/questions/55242248", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1387275/" ]
The load balancer and the virtual machine scale set have an associated relationship. What you can do is add the virtual machine scale set into the backend pool of the load balancer, and then you can change the existing NAT rules or create new rules to associate with the instances of the existing scale set. In addition, when you create the virtual machine scale set, there is also a configuration option to select the load balancer or application gateway for it; if you select a load balancer, Azure will add the NAT rules for you. It looks like this: [![enter image description here](https://i.stack.imgur.com/3A3jl.png)](https://i.stack.imgur.com/3A3jl.png)
Check out the example for "update the load balancer of your scale set" here: <https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set#examples>. This walks through how to remove an existing load balancer from a scale set and add a new one in its place. When you update the IpConfiguration for the scale set, it should create the necessary rules for you. If you're still having trouble connecting to VMs in the scale set, check the following common issues: * If the scale set is in "manual" update mode, then you'll need to bring all of the VMs up to date with the model (also discussed in the above doc) for the new networking changes to take effect. * If you're using SLB Standard, you need to whitelist traffic with a network security group (with SLB Standard, by default all traffic is disallowed). Best of luck! :)
49,249,179
The command does not work when I want to show nova's endpoints: ``` openstack endpoint show nova ``` It reports this error: > More than one endpoint exists with the name 'nova'.
2018/03/13
[ "https://Stackoverflow.com/questions/49249179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7693832/" ]
The following code should be helpful. Note that `fnDeleteRow` and `fnGetPosition` belong to the legacy DataTables API, so they are available on the object returned by the lowercase `dataTable()` constructor: ``` var table = $('#stockistTable').dataTable(); var row = $(this).closest("tr")[0]; table.fnDeleteRow(table.fnGetPosition(row)); ``` [Fiddle Demo Here](https://jsfiddle.net/0afkg0gy/3/)
This example [here](https://datatables.net/examples/api/select_single_row.html) demonstrates how to delete rows from a click event - it should do the trick for you.
49,249,179
The command does not work when I want to show nova's endpoints: ``` openstack endpoint show nova ``` It reports this error: > More than one endpoint exists with the name 'nova'.
2018/03/13
[ "https://Stackoverflow.com/questions/49249179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7693832/" ]
The following code should be helpful. Note that `fnDeleteRow` and `fnGetPosition` belong to the legacy DataTables API, so they are available on the object returned by the lowercase `dataTable()` constructor: ``` var table = $('#stockistTable').dataTable(); var row = $(this).closest("tr")[0]; table.fnDeleteRow(table.fnGetPosition(row)); ``` [Fiddle Demo Here](https://jsfiddle.net/0afkg0gy/3/)
``` var dtRow = 0; // declare this globally dtRow = $(this).closest('tr'); // assign the row when delete is clicked var stockistTable = $('#stockistTable').DataTable(); stockistTable.row(dtRow).remove().draw(false); ``` This code worked for me!
49,249,179
The command does not work when I want to show nova's endpoints: ``` openstack endpoint show nova ``` It reports this error: > More than one endpoint exists with the name 'nova'.
2018/03/13
[ "https://Stackoverflow.com/questions/49249179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7693832/" ]
You can make a jQuery object of the entire row element and pass it to the `row()` function of DataTables. ``` var table = $('#stockistTable').DataTable(); var removingRow = $(this).closest('tr'); table.row(removingRow).remove().draw(); ```
This example [here](https://datatables.net/examples/api/select_single_row.html) demonstrates how to delete rows from a click event - it should do the trick for you.
49,249,179
The command does not work when I want to show nova's endpoints: ``` openstack endpoint show nova ``` It reports this error: > More than one endpoint exists with the name 'nova'.
2018/03/13
[ "https://Stackoverflow.com/questions/49249179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7693832/" ]
You can make a jQuery object of the entire row element and pass it to the `row()` function of DataTables. ``` var table = $('#stockistTable').DataTable(); var removingRow = $(this).closest('tr'); table.row(removingRow).remove().draw(); ```
``` var dtRow = 0; // declare this globally dtRow = $(this).closest('tr'); // assign the row when delete is clicked var stockistTable = $('#stockistTable').DataTable(); stockistTable.row(dtRow).remove().draw(false); ``` This code worked for me!
49,249,179
The command does not work when I want to show nova's endpoints: ``` openstack endpoint show nova ``` It reports this error: > More than one endpoint exists with the name 'nova'.
2018/03/13
[ "https://Stackoverflow.com/questions/49249179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7693832/" ]
You can try this: ``` $(document).ready(function() { var table = $('#stockistTable').DataTable(); $('tr').on("click", function(e) { table.row($(this)).remove().draw(); }); }); ```
This example [here](https://datatables.net/examples/api/select_single_row.html) demonstrates how to delete rows from a click event - it should do the trick for you.
49,249,179
The command does not work when I want to show nova's endpoints: ``` openstack endpoint show nova ``` It reports this error: > More than one endpoint exists with the name 'nova'.
2018/03/13
[ "https://Stackoverflow.com/questions/49249179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7693832/" ]
You can try this: ``` $(document).ready(function() { var table = $('#stockistTable').DataTable(); $('tr').on("click", function(e) { table.row($(this)).remove().draw(); }); }); ```
``` var dtRow = 0; // declare this globally dtRow = $(this).closest('tr'); // assign the row when delete is clicked var stockistTable = $('#stockistTable').DataTable(); stockistTable.row(dtRow).remove().draw(false); ``` This code worked for me!
49,249,179
The command does not work when I want to show nova's endpoints: ``` openstack endpoint show nova ``` It reports this error: > More than one endpoint exists with the name 'nova'.
2018/03/13
[ "https://Stackoverflow.com/questions/49249179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7693832/" ]
``` var dtRow = 0; // declare this globally dtRow = $(this).closest('tr'); // assign the row when delete is clicked var stockistTable = $('#stockistTable').DataTable(); stockistTable.row(dtRow).remove().draw(false); ``` This code worked for me!
This example [here](https://datatables.net/examples/api/select_single_row.html) demonstrates how to delete rows from a click event - it should do the trick for you.
10,551,464
I have a problem getting cookies from an HTTP response. I'm sure that the response should have cookies, but I can't see them in my app. Here is my code: ```cs private static CookieContainer cookies = new CookieContainer(); private static CookieContainer Cookies { get { return cookies; } } public static async Task<HttpStatusCode> SendPostRequest(string url, string postData) { if (url == null) throw new ArgumentNullException("url"); if (postData == null) throw new ArgumentNullException("postData"); HttpStatusCode statusCodeToReturn = HttpStatusCode.Forbidden; HttpWebRequest webRequest = HttpWebRequest.CreateHttp(url); webRequest.Method = "POST"; var cookies = Cookies; webRequest.CookieContainer = cookies; //webRequest.SupportsCookieContainer = true; using (var requestStream = await webRequest.GetRequestStreamAsync()) { var bytes = Encoding.UTF8.GetBytes(postData); requestStream.Write(bytes, 0, bytes.Length); } using (WebResponse response = await webRequest.GetResponseAsync()) { statusCodeToReturn = WebResponseToHTTPStatusCode(response); } return statusCodeToReturn; } ``` Cookies (using Wireshark): ``` rack.session=BAh7BkkiD3Nlc3Npb25faWQGOgZFRiJFMzg1ZjYxNzIzNzQ4MmY5NmI3NTMw%0AYWMwZmRjNmVmZjMwMDk4OTgzZGUwNjRlNzIzODlmODNjYzE2YmVmMjNlOQ%3D%3D%0A--30d79cd2276c3236de11104852bba4b84bf80f26; path=/; HttpOnly ```
2012/05/11
[ "https://Stackoverflow.com/questions/10551464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/915328/" ]
The problem is in the returned cookies. Cookies without the DOMAIN attribute set are NOT supported on WP7.
I think you can just create a global variable to save the cookie. For example, in your app.xaml.cs file you can create a variable like this: ``` public CookieContainer GlobalCookie { get; set; } ``` and make GlobalCookie equal to the CookieContainer of your successful HttpWebRequest. Then you can use this variable when you call another API. Hope this helps you :)
4,431,913
I get confused about the difference between linear and nonlinear systems. Suppose that we have a linear system \begin{equation}\label{1} Ax=b \tag{1} \end{equation} with $A \in \mathbb{R}^{n\times n}$ and $b\in \mathbb{R}^{n}$, and we define $x$ as a nonlinear function \begin{equation} x(t) = \sin(2\pi t) + \sin(3\pi t) \quad \text{where} \quad t=1,2,\dots,n \tag{2} \end{equation} My question is: does the linearity of the system (1) depend on the nature of $x$ (which can be either a linear or a nonlinear function)? In case the system (1) always stays linear as long as it takes the form $Ax$, no matter how $x$ is defined, is it possible to transform it into a nonlinear system by adding a nonlinear term? A term like $e^{x(t)}$ or $x(t)^{2}$? Is the system $Ax(t) + e^{x(t)} = b$ a nonlinear one, for example? Thank you for your explanations and help.
2022/04/20
[ "https://math.stackexchange.com/questions/4431913", "https://math.stackexchange.com", "https://math.stackexchange.com/users/856284/" ]
In a linear system of equations, $x$ is the **vector of variables**. It is not a vector of functions, so speaking about $x(t)$ is not really sensible. In your writing, you first say that $x\in\mathbb R^n$, but then you use the term $x(t)$. This is nonsensical. $x$ can either be a vector, or it can be a function, but it cannot be both at the same time. --- That said, "a linear system" is an ill-defined term. There are multiple mathematical concepts that are called "a linear system of something". For example, there are linear systems of differential equations, but you probably aren't talking about those. **A linear system of equations** is any set of linear equations. Any such set of equations can be written compactly as $Ax=b$ for some fixed matrix $A$ and some fixed vector $b$.
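To make the compact form concrete, take a hypothetical pair of linear equations $2x\_1+3x\_2=5$ and $x\_1-x\_2=1$ (the coefficients are made up for illustration). In the notation above it reads $Ax=b$ with

```latex
% The two equations 2x_1 + 3x_2 = 5 and x_1 - x_2 = 1,
% collected into the single matrix equation Ax = b:
A = \begin{pmatrix} 2 & 3 \\ 1 & -1 \end{pmatrix}, \quad
x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad
b = \begin{pmatrix} 5 \\ 1 \end{pmatrix}
```

so every linear system of equations, however many equations and variables, is just one choice of $A$ and $b$.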
Let us first consider a system of equations, where we have only *one* equation. Then it is called *linear* if it is of the form $$ a\_1x\_1+\cdots +a\_nx\_n=b, $$ for constants $a\_i,b\in K$ and variables $x\_i$. It is called *polynomial*, if it is of the form $$ f(x\_1,\ldots ,x\_n)=0, $$ for a polynomial $f\in K[x\_1,\ldots ,x\_n]$. For example, $$ 3z^2y^7-28xyz+17z-28xy+515=0 $$ in the variables $x,y,z$ and rational coefficients. It is *non-linear* if it is not linear. Now a *system* of equations just means that we have several such equations, which we want to solve simultaneously. For example, consider the following system of polynomial, non-linear equations $$x+y+z=1$$ $$x^2+y^2+z^2=35$$ $$x^3+y^3+z^3=97$$
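As a quick numerical sanity check of the polynomial system above (not part of the original answer; one can verify by hand that $(5,-1,-3)$ solves it), a few lines of Python confirm the solution, and by symmetry of the equations every permutation of it:

```python
from itertools import permutations

# The non-linear polynomial system from the answer:
#   x + y + z = 1,  x^2 + y^2 + z^2 = 35,  x^3 + y^3 + z^3 = 97
def satisfies(x, y, z):
    return (x + y + z == 1
            and x**2 + y**2 + z**2 == 35
            and x**3 + y**3 + z**3 == 97)

# The equations are symmetric in x, y, z, so every permutation of
# one solution is also a solution.
solution = (5, -1, -3)
assert all(satisfies(*p) for p in permutations(solution))
```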
31,006,709
I am trying to implement a map using fold. I could do so in Haskell: ``` data Tree a = EmptyTree | Node a (Tree a) (Tree a) deriving (Show) foldTree :: Tree a -> b -> (b -> a -> b -> b) -> b foldTree EmptyTree d _ = d foldTree (Node a l r) d f = f (foldTree l d f) a (foldTree r d f) mapTree :: Tree a -> ( a -> b) -> Tree b mapTree tree f = foldTree tree EmptyTree (\l n r -> Node (f n) l r) ``` However, when I tried to port it to Scala, I got a bit stuck: ``` sealed trait Tree[+A] case object EmptyTree extends Tree[Nothing] case class Node[A](value: A , left: Tree[A], right: Tree[A]) extends Tree[A] def fold[A, B](t:Tree[A] , z:B)(f:(B,A,B) => B) : B = t match { case EmptyTree => z case Node(x,l,r) => f ( fold( l , z )(f) , x , fold( r , z )(f) ) } def map(tree:Tree[Int])(f:Int=>Int) : Tree[Int] = fold(tree , EmptyTree)((l,x,r) => Node(f(x),l,r)) ``` The compiler complains that it is expecting an EmptyTree in the function I pass to fold: ``` fold(tree , EmptyTree)((l,x,r) => Node(f(x),l,r)) ``` The return type for the map is Tree, so I would have expected this to work. Any suggestions?
2015/06/23
[ "https://Stackoverflow.com/questions/31006709", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3336961/" ]
Turned out PFUser class has private property `authData` with all necessary information. We can retrieve it like this: ``` PFUser* user = PFUser.currentUser; NSDictionary* authData = [user valueForKey:@"authData"]; ``` While it works in `Parse SDK v.1.9.1`, it may fail to work in upcoming or previous versions of SDK.
This is not possible, as the authData from the Parse \_User table is not returned to the client. You should do a regular FBSDKGraphRequest to "/me".
32,076,894
I've got data from a weather station (timestamp %Y-%m-%d %H:%M) which I'd like to compare with a reference file (timestamp: %H:%M). The data file from the weather station has one entry every minute. The reference file has one entry every 15 minutes. I tried to plot both files on the same axis, but for some reason it doesn't work. I also tried the `timecolumn(1, "%H:%M")` command when plotting the reference file. ``` set xdata time set timefmt "%Y-%m-%d %H:%M" set format x "%H:%M" set title '24.12.2014' set origin 0,0.66 plot "data_20141224.csv" using 1:25 axes x1y2 linewidth 1 lc rgb 'orange' w l title 'SolarRadiation',\ "refdat.txt" using (timecolumn(1, "%H:%M")):4 axes x1y2 linewidth 1 lc rgb 'yellow' w l title 'CI Reference' ``` The second plot (CI Reference) never shows up on the plot. Weather station input file: ``` 2014-12-24 06:00 1.00 0.93 0.93 0.00 9 4.8 4.8 4.8 63 2014-12-24 06:01 1.00 0.93 0.93 0.00 9 4.8 4.8 4.7 63 2014-12-24 06:02 1.00 0.93 0.93 0.00 9 4.7 4.7 4.7 63 ``` Reference input file: ``` 08:22 56 32 161 54 282 85 29 349 08:37 75 42 228 68 358 112 40 460 08:52 94 51 295 81 425 131 46 539 ``` Thanks for the help so far. This is what I typed: ``` clear reset set title "24.12.2014" font "verdana,08" set xdata time set timefmt "%Y-%m-%d %H:%M" set format x "%H:%M" time set xlabel "Time" set ylabel "CloudinessIndex" set y2label "Irradiance" set ytics (0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0) set yrange [0:1] set ytics nomirror set y2range [0:1200] set y2tics (0,100,200,300,400,500,600,700,800,900,1000,1100,1200) set xrange ['06:00':'22:00'] set autoscale x set nokey set grid #Plot: plot s=0, "data_20141224.csv" using \ (s==0 ? (s=timecolumn(1,"%Y-%m-%d %H:%M"),0) : (timecolumn(1,"%Y-%m-%d %H:%M")-s)):25 axes x1y2 linewidth 1 lc rgb 'orange' w l title 'SolarRadiation',\ "refdata.txt" using (timecolumn(1,'%H:%M')):4 axes x1y2 linewidth 1 lc rgb 'yellow' w l title 'CI Reference', ``` Still it plots only the data from the log file :-(
2015/08/18
[ "https://Stackoverflow.com/questions/32076894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5239927/" ]
You're looking for `Indirect()`. Say your cell A1 has the path that you will change (`C:\Users\Arun Victor\Documents\New folder\[MASTER.xlsm]`). Then in your sheet1, those cells which are currently `=C:\Users\Arun Victor\Documents\New folder\[MASTER.xlsm]Employee WOs'!D4` can be replaced with `=Indirect("'"&A1&"Employee WOs'!D4")`. Here's an example: I have a workbook called "TextBook.xlsx" on my desktop. I want my formula to return the value of whatever cell I want, which I will put in cell C9. [![enter image description here](https://i.stack.imgur.com/KlkpO.jpg)](https://i.stack.imgur.com/KlkpO.jpg) I can change C9 from `A1` to `B1029`, for instance, and D9 will return whatever value that cell has (in my 'TextBook.xlsx', `A1` literally just says "Cell A1"). **Note** about using `Indirect()`: you must have the file you're referencing open, or else you'll get an error. In other words, I have my TextBook.xlsx open, which is why you can see the result ("Cell A1"). If I close the workbook, that will change to a `#REF` error.
In your VBA code, you can use something like: ``` = Sheets("Settings").Cells(1,1) & "Employee WOs'!D4" ``` This way your code will automatically pick up your changes to cell A1 on your Settings sheet.
37,630,855
**My package:** `Python 2.7.11+` `Django 1.9.6` In my `urls.py` I have imported: `from django.conf.urls import patterns, url, include` but it displays an error when I type `python manage.py runserver`: > ImportError: cannot import name patterns I have tried changing the import to `from django.conf.urls.defaults import` but that causes the following error: > from django.conf.urls.defaults import
2016/06/04
[ "https://Stackoverflow.com/questions/37630855", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6423072/" ]
Since Django 1.7, the urlconf is a simple list and no longer requires the patterns import. So remove patterns from your import, and see the example here on the syntax to use: <https://docs.djangoproject.com/en/1.9/topics/http/urls/#example>
`patterns` is deprecated after Django 1.7. You can define urls simply like this: ```python urlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'', include('web.urls')), ] ``` or you can import urls from your app and define urls like this: ```python from app import urls urlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'', include(urls)), ] ```
609,171
I would like to add a fully qualified domain name to my **CentOS 6 VM** that is running on **VirtualBox**. The VirtualBox network adapter is set to **bridged** mode. I'm a Windows programmer, and I am as far as they get from networking / Linux, so I would be most grateful if you could point me in the right direction on how to do this. I understand that the best way to do this is through a DNS server, but for the purposes of learning I want to go the quickest/easiest route by setting it up in **/etc/hosts**. I don't know if that's the right area, but I am open to your feedback. Update: I have been able to ping this VM from my host machine using its IP (I'm not sure whether it's internal or external); however, I haven't been able to ping it using its hostname!
2013/06/18
[ "https://superuser.com/questions/609171", "https://superuser.com", "https://superuser.com/users/10616/" ]
You can also do the following: check the IP address on the CentOS VM by running the command `ifconfig`. It should be something like `10.0.2.15`. Now modify the file `/etc/hosts` **on CentOS** and add the following line ``` 10.0.2.15 virtualCentOS.local virtualCentOS ```
Typically setting up a dns record in your dns server is the best way to do this. You will need to edit the following file c:\windows\system32\drivers\etc\hosts Do the following Start -> Run notepad c:\windows\system32\drivers\etc\hosts In notepad, add the following line to the end of the file (change 192.168.10.10 to the ip address of your vm, change 'foo' to the domain name you would like to refer to your vm as) 192.168.10.10 foo Here is a link that has screenshots of how to do this. <http://www.howtogeek.com/howto/27350/beginner-geek-how-to-edit-your-hosts-file/> **Important** This will only work as long as your vm has the same ip address. If your vm gets a new ip address, you will have to repeat these steps.
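Both answers come down to the same hosts-file format: an IP address followed by one or more names, with `#` starting a comment. A small Python sketch of that lookup, using the example entries from the two answers (the `parse_hosts` helper is hypothetical, just to illustrate the file format, not the OS resolver):

```python
def parse_hosts(text):
    """Parse hosts-file-style text into a hostname -> IP mapping."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]       # first field is the IP, the rest are names
        for name in names:
            mapping[name] = ip
    return mapping

hosts = """
# static entries for the CentOS VM
192.168.10.10  foo
10.0.2.15      virtualCentOS.local virtualCentOS
"""
```

Whichever machine the entry lives on (the VM's `/etc/hosts` or the Windows host's `drivers\etc\hosts`), the lookup semantics are the same.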
69,554,406
I'm working on react app and using material-ui v5 and I'm stuck on this error theme.spacing is not working. ``` import { makeStyles } from "@material-ui/styles"; import React from "react"; import Drawer from "@mui/material/Drawer"; import Typography from "@mui/material/Typography"; import List from "@mui/material/List"; import ListItem from "@mui/material/ListItem"; import ListItemIcon from "@mui/material/ListItemIcon"; import ListItemText from "@mui/material/ListItemText"; import { AddCircleOutlineOutlined, SubjectOutlined } from "@mui/icons-material"; import { useHistory, useLocation } from "react-router"; const drawerWidth = 240; const useStyles = makeStyles((theme) => { return { page: { background: "#f9f9f9", width: "100%", padding: theme.spacing(3), }, drawer: { width: drawerWidth, }, drawerPaper: { width: drawerWidth, }, root: { display: "flex", }, active: { background: "#f4f4f4 !important", }, }; }); const Layout = ({ children }) => { const classes = useStyles(); const history = useHistory(); const location = useLocation(); const menuItems = [ { text: "My Notes", icon: <SubjectOutlined color="secondary" />, path: "/", }, { text: "Create Note", icon: <AddCircleOutlineOutlined color="secondary" />, path: "/create", }, ]; return ( <div className={classes.root}> <div className={classes.page}>{children}</div> </div> ); }; export default Layout; ``` on line no. 19 that is "padding: theme.spacing(3)" it is showing error on my browser > > TypeError: theme.spacing is not a function > > >
2021/10/13
[ "https://Stackoverflow.com/questions/69554406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17133643/" ]
How do you create the theme? Did you make it with createTheme? You need to pass the provider in the root of your application. Try an import like this. ``` import { createTheme } from '@mui/material/styles' ``` Then define all your global styles for your application and wrap your app, for example, like this. ``` import { ThemeProvider } from "@mui/material/styles"; <ThemeProvider theme={yourtheme}><App/></ThemeProvider> ```
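For context on the error itself: once a proper theme from `createTheme` reaches `makeStyles`, MUI v5's default `theme.spacing` multiplies its argument by an 8px base unit and returns a CSS string, so `theme.spacing(3)` resolves to `24px`. A Python model of that default transform (illustration only; the real implementation lives inside MUI, and `make_spacing` is a hypothetical name):

```python
def make_spacing(base=8):
    """Model of MUI v5's default spacing transform: factor -> '<factor*base>px'."""
    def spacing(factor):
        return f"{base * factor}px"
    return spacing

# A theme built by createTheme() carries such a function; a bare object does not,
# which is why calling theme.spacing(3) on the wrong theme raises "not a function".
spacing = make_spacing()
```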
Maybe your import is wrong? Try this: ``` import {makeStyles} from "@material-ui/core/styles"; ```
69,554,406
I'm working on react app and using material-ui v5 and I'm stuck on this error theme.spacing is not working. ``` import { makeStyles } from "@material-ui/styles"; import React from "react"; import Drawer from "@mui/material/Drawer"; import Typography from "@mui/material/Typography"; import List from "@mui/material/List"; import ListItem from "@mui/material/ListItem"; import ListItemIcon from "@mui/material/ListItemIcon"; import ListItemText from "@mui/material/ListItemText"; import { AddCircleOutlineOutlined, SubjectOutlined } from "@mui/icons-material"; import { useHistory, useLocation } from "react-router"; const drawerWidth = 240; const useStyles = makeStyles((theme) => { return { page: { background: "#f9f9f9", width: "100%", padding: theme.spacing(3), }, drawer: { width: drawerWidth, }, drawerPaper: { width: drawerWidth, }, root: { display: "flex", }, active: { background: "#f4f4f4 !important", }, }; }); const Layout = ({ children }) => { const classes = useStyles(); const history = useHistory(); const location = useLocation(); const menuItems = [ { text: "My Notes", icon: <SubjectOutlined color="secondary" />, path: "/", }, { text: "Create Note", icon: <AddCircleOutlineOutlined color="secondary" />, path: "/create", }, ]; return ( <div className={classes.root}> <div className={classes.page}>{children}</div> </div> ); }; export default Layout; ``` on line no. 19 that is "padding: theme.spacing(3)" it is showing error on my browser > > TypeError: theme.spacing is not a function > > >
2021/10/13
[ "https://Stackoverflow.com/questions/69554406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17133643/" ]
**This is not working because you have to type your Theme** 1. import the Theme ``` import { Theme } from '@mui/material' ``` 2. Now type it: ``` const useStyles=makeStyles((theme:Theme)=>({ root:{ background: 'black', padding:theme.spacing(3), } })) ``` I was facing the exact same problem and this worked for me.
Maybe your import is wrong? Try this: ``` import {makeStyles} from "@material-ui/core/styles"; ```
69,554,406
I'm working on react app and using material-ui v5 and I'm stuck on this error theme.spacing is not working. ``` import { makeStyles } from "@material-ui/styles"; import React from "react"; import Drawer from "@mui/material/Drawer"; import Typography from "@mui/material/Typography"; import List from "@mui/material/List"; import ListItem from "@mui/material/ListItem"; import ListItemIcon from "@mui/material/ListItemIcon"; import ListItemText from "@mui/material/ListItemText"; import { AddCircleOutlineOutlined, SubjectOutlined } from "@mui/icons-material"; import { useHistory, useLocation } from "react-router"; const drawerWidth = 240; const useStyles = makeStyles((theme) => { return { page: { background: "#f9f9f9", width: "100%", padding: theme.spacing(3), }, drawer: { width: drawerWidth, }, drawerPaper: { width: drawerWidth, }, root: { display: "flex", }, active: { background: "#f4f4f4 !important", }, }; }); const Layout = ({ children }) => { const classes = useStyles(); const history = useHistory(); const location = useLocation(); const menuItems = [ { text: "My Notes", icon: <SubjectOutlined color="secondary" />, path: "/", }, { text: "Create Note", icon: <AddCircleOutlineOutlined color="secondary" />, path: "/create", }, ]; return ( <div className={classes.root}> <div className={classes.page}>{children}</div> </div> ); }; export default Layout; ``` on line no. 19 that is "padding: theme.spacing(3)" it is showing error on my browser > > TypeError: theme.spacing is not a function > > >
2021/10/13
[ "https://Stackoverflow.com/questions/69554406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17133643/" ]
**This is not working because you have to type your Theme** 1. import the Theme ``` import { Theme } from '@mui/material' ``` 2. Now type it: ``` const useStyles=makeStyles((theme:Theme)=>({ root:{ background: 'black', padding:theme.spacing(3), } })) ``` I was facing the exact same problem and this worked for me.
How do you create the theme? Did you make it with createTheme? You need to pass the provider in the root of your application. Try an import like this. ``` import { createTheme } from '@mui/material/styles' ``` Then define all your global styles for your application and wrap your app, for example, like this. ``` import { ThemeProvider } from "@mui/material/styles"; <ThemeProvider theme={yourtheme}><App/></ThemeProvider> ```
33,579
With today's technology could mankind create a system of rings similar to Saturn composed of different masses of water particles and silicate minerals that would freeze? How could mankind accomplish this feat of engineering? [![enter image description here](https://i.stack.imgur.com/wPBig.jpg)](https://i.stack.imgur.com/wPBig.jpg)
2016/01/18
[ "https://worldbuilding.stackexchange.com/questions/33579", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/13424/" ]
UPDATE: It looks like [This question has been asked before](https://worldbuilding.stackexchange.com/questions/14850/earth-with-planetary-ring?rq=1), with pretty much the same sort of answer. Well, ignoring the big question of **"WHY?"**: Using the same method that formed Saturn's Rings ------------------------------------------------ [Wikipedia](https://en.wikipedia.org/wiki/Rings_of_Saturn) is fairly detailed regarding the composition and formation theories for Saturn's rings. They all involve the disintegration of a Moon, either through being ripped apart due to getting too close to the planet ([Roche Limit](https://en.wikipedia.org/wiki/Roche_limit)), or due to collision with another object. This would be preeetty difficult to replicate as you'd have to get an object of sufficient mass to create the rings into a close enough orbit, with an orbit that deteriorates slowly enough that it gets ripped apart rather than crashing into the planet and causing the apocalypse. That wouldn't be so good. The alternative would be to smash something into the Moon and see what happens, but the dispersal would involve a bit of pot luck to give you the rings and again, may cause massive damage to the planet. Either of the above seems pretty unlikely. Both of the methods above are beyond our current technology (short of nuking the moon and getting lucky), and there's no guarantee that either would work. I also seem to remember reading that Saturn's rings require "Shepherd Moons" in order to maintain their exact form - without the existence of some of Saturn's moons within the rings themselves, they would lose formation (If I'm wrong on that, someone correct me!). 
There is however a third option, which wouldn't be exactly like Saturn's rings, but it might be close enough: Space Debris ------------ Have a look at This: [![enter image description here](https://i.stack.imgur.com/RJjaf.jpg)](https://i.stack.imgur.com/RJjaf.jpg) That's a picture of the space debris **ALREADY** in orbit around the Earth — not only that, but that image is **over 10 years old**. With a bit of shunting and maintenance (ok, a lot of shunting and maintenance), all of that debris could theoretically be aligned into a single plane, forming what would look an awful lot like Saturn's rings as the density of material on that plane was increased. Interestingly, this also provides a reason for the "Why?": There are already serious issues trying to get stuff into orbit because of collision threats with space debris. If you could restrict debris to a single plane it would provide vastly greater launch windows, you wouldn't have to worry so much about debris shielding for launch, and it would theoretically prevent the [Kessler Syndrome](https://en.wikipedia.org/wiki/Kessler_syndrome) ever becoming reality. Plus, it would look pretty. There would obviously be issues to contend with — safe paths for satellites orbiting on intersecting planes, for instance — but it's theoretically far more feasible than actually creating Saturn's rings. As for the technology involved: * The debris is already all tracked - that image is a computer model based on tracked debris, so they know where it is at the moment. * Advanced knowledge of mechanics and physics would give you the ability to calculate the required force to redirect an object onto a different orbit - "Mankind" can certainly already do this * Actually getting everything onto the new orbit would be the challenge. It would take an awful lot of time and an awful lot of manoeuvrable spacecraft, plus some method of "Sweeping up" the small stuff and transferring it to the new orbit wholesale. 
* We'd probably need to have some way of keeping it there. Saturn's rings have been around for billions of years, we can't tell how much originally drifted away or into Saturn, neither of which is something we can afford. The first two bits (planning it) are well within current technology. The latter two, I don't know. Possibly, but it would require a massive amount of planning and resources, not to mention coordination between global powers! There is certainly tech to be created (like that sweeping brush) for which the physics would be tough, but not beyond human ingenuity, I would think. It'd just need a plausible enough reason to get the money and people behind it.
Launching stuff from the planet to orbit is expensive. And if we want to do some major planet colonizing some day we're going to need to build a lot of ships and that means a lot of launches with a lot of equipment from Earth, unless we start manufacturing in space. So pull in some raw materials in the form of captured asteroids and comets, and launch a couple small factory robots to start building the big factories. Some of the existing junk would be pulled in and recycled instead of just letting it burn up. All this manufacturing would produce dust and ice crystals, and over time these would build up until rings started to form in the orbits of the factories. With all of the junk confined to these rings, maneuvering around the planet gets a lot easier.
33,579
With today's technology could mankind create a system of rings similar to Saturn composed of different masses of water particles and silicate minerals that would freeze? How could mankind accomplish this feat of engineering? [![enter image description here](https://i.stack.imgur.com/wPBig.jpg)](https://i.stack.imgur.com/wPBig.jpg)
2016/01/18
[ "https://worldbuilding.stackexchange.com/questions/33579", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/13424/" ]
UPDATE: It looks like [This question has been asked before](https://worldbuilding.stackexchange.com/questions/14850/earth-with-planetary-ring?rq=1), with pretty much the same sort of answer. Well, ignoring the big question of **"WHY?"**: Using the same method that formed Saturn's Rings ------------------------------------------------ [Wikipedia](https://en.wikipedia.org/wiki/Rings_of_Saturn) is fairly detailed regarding the composition and formation theories for Saturn's rings. They all involve the disintegration of a Moon, either through being ripped apart due to getting too close to the planet ([Roche Limit](https://en.wikipedia.org/wiki/Roche_limit)), or due to collision with another object. This would be preeetty difficult to replicate as you'd have to get an object of sufficient mass to create the rings into a close enough orbit, with an orbit that deteriorates slowly enough that it gets ripped apart rather than crashing into the planet and causing the apocalypse. That wouldn't be so good. The alternative would be to smash something into the Moon and see what happens, but the dispersal would involve a bit of pot luck to give you the rings and again, may cause massive damage to the planet. Either of the above seems pretty unlikely. Both of the methods above are beyond our current technology (short of nuking the moon and getting lucky), and there's no guarantee that either would work. I also seem to remember reading that Saturn's rings require "Shepherd Moons" in order to maintain their exact form - without the existence of some of Saturn's moons within the rings themselves, they would lose formation (If I'm wrong on that, someone correct me!). 
There is however a third option, which wouldn't be exactly like Saturn's rings, but it might be close enough: Space Debris ------------ Have a look at This: [![enter image description here](https://i.stack.imgur.com/RJjaf.jpg)](https://i.stack.imgur.com/RJjaf.jpg) That's a picture of the space debris **ALREADY** in orbit around the Earth — not only that, but that image is **over 10 years old**. With a bit of shunting and maintenance (ok, a lot of shunting and maintenance), all of that debris could theoretically be aligned into a single plane, forming what would look an awful lot like Saturn's rings as the density of material on that plane was increased. Interestingly, this also provides a reason for the "Why?": There are already serious issues trying to get stuff into orbit because of collision threats with space debris. If you could restrict debris to a single plane it would provide vastly greater launch windows, you wouldn't have to worry so much about debris shielding for launch, and it would theoretically prevent the [Kessler Syndrome](https://en.wikipedia.org/wiki/Kessler_syndrome) ever becoming reality. Plus, it would look pretty. There would obviously be issues to contend with — safe paths for satellites orbiting on intersecting planes, for instance — but it's theoretically far more feasible than actually creating Saturn's rings. As for the technology involved: * The debris is already all tracked - that image is a computer model based on tracked debris, so they know where it is at the moment. * Advanced knowledge of mechanics and physics would give you the ability to calculate the required force to redirect an object onto a different orbit - "Mankind" can certainly already do this * Actually getting everything onto the new orbit would be the challenge. It would take an awful lot of time and an awful lot of manoeuvrable spacecraft, plus some method of "Sweeping up" the small stuff and transferring it to the new orbit wholesale. 
* We'd probably need to have some way of keeping it there. Saturn's rings have been around for billions of years, we can't tell how much originally drifted away or into Saturn, neither of which is something we can afford. The first two bits (planning it) are well within current technology. The latter two, I don't know. Possibly, but it would require a massive amount of planning and resources, not to mention coordination between global powers! There is certainly tech to be created (like that sweeping brush) for which the physics would be tough, but not beyond human ingenuity, I would think. It'd just need a plausible enough reason to get the money and people behind it.
Stealing from the Asteroid Belt =============================== One option would be to 'borrow' the required mass from the asteroid belt between Mars and Jupiter - no easy feat for sure, but something current space agencies [have plans](https://www.nasa.gov/content/what-is-nasa-s-asteroid-redirect-mission) to do, albeit on a much smaller scale. Saturn's ring system has an estimated mass of 3 x 10^19 kg, with Saturn's mass being ~ 6 x 10^26 kg. Using Earth's mass of ~ 6 x 10^24 kg, this gives us a ballpark Earth-Ring mass of 3 x 10^17 kg. This gives us a ring system of a similar density and relative size to Saturn's. Luckily for us, the asteroid belt is estimated to have ~ 3 x 10^21 kg of material - enough for 10,000 rings! Now we know there's enough material, let's get started! How to do it ------------ Build an unmanned drone ship capable of sustained spaceflight for many hundreds of years with enough fuel for frequent burns. The burns don't necessarily need to be powerful, so it's *possible* that this could be achieved with an [ion engine](https://en.wikipedia.org/wiki/Ion_thruster). The size of the ship shouldn't matter in theory, but the bigger (more massive), the better! Pick an asteroid, fly close by and use the gravity of your ship to nudge the asteroid into a transfer orbit to coincide with Earth. This will need to be done very precisely as we want the asteroid to fall in an orbit *around* Earth, not fall into Earth itself! By dropping the asteroids below Earth's [Roche limit](https://en.wikipedia.org/wiki/Roche_limit), they should break apart and form perfect rings (*should* - no promises). Rinse and repeat! **Bonus**: As other answers have mentioned, Saturn's ring system is kept in check by small shepherd moons. If you pick a large enough asteroid and place it in orbit *above* the Roche limit, you have a ready-made shepherd! 
One Caveat ---------- Saturn's Rings are believed to consist almost entirely of water ice, whereas asteroid belt objects are typically carbon or metal-heavy. This means your rings will likely be less reflective than Saturn's.
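The ballpark arithmetic in this answer (an Earth-ring mass of about 3 x 10^17 kg, and roughly 10,000 rings' worth of material in the belt) can be replayed directly, scaling Saturn's ring-to-planet mass ratio down to Earth with the rounded figures the answer uses:

```python
saturn_rings = 3e19  # kg, estimated mass of Saturn's ring system (figure from the text)
saturn = 6e26        # kg, Saturn's mass, rounded as in the text
earth = 6e24         # kg, Earth's mass

ratio = saturn_rings / saturn        # ring-to-planet mass ratio, ~5e-8
earth_rings = ratio * earth          # ~3e17 kg, matching the ballpark above

belt = 3e21                          # kg, rough asteroid-belt mass from the text
rings_available = belt / earth_rings # ~10,000 Earth-ring systems' worth of material
```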
33,579
With today's technology could mankind create a system of rings similar to Saturn composed of different masses of water particles and silicate minerals that would freeze? How could mankind accomplish this feat of engineering? [![enter image description here](https://i.stack.imgur.com/wPBig.jpg)](https://i.stack.imgur.com/wPBig.jpg)
2016/01/18
[ "https://worldbuilding.stackexchange.com/questions/33579", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/13424/" ]
Creating a temporary ring is relatively simple, and most of the previous posters have provided useful answers. The problem isn't so much that you can make a ring, but how to make it a stable structure in orbit around a planet. The answer is shepherd moons. [![enter image description here](https://i.stack.imgur.com/VsYyK.jpg)](https://i.stack.imgur.com/VsYyK.jpg) Many of the rings of Saturn have been observed to have small moons whose orbits define the edges of various rings or gaps. The gravitational interplay between the various moons, particles in orbit and Saturn have pushed the particles into the rings we see, and have kept them stable over the eons since they were formed. Doing the same for Earth will be a bit tricky, since unlike Saturn, Earth has a single huge moon orbiting it, and it is quite possible that the perturbations caused by our moon will upset the intricate orbital dances between the tiny shepherd moons, the rings and the Earth. This can be overcome if the builders take the time and expend the energy to actively "fly" the moons in accurate orbits using some sort of impulse to overcome perturbations caused by the Earth's natural satellite. Since this is a long term project, the shepherd moons will need to be propelled by some sort of passive system like Solar Sails, otherwise the energy and reaction mass requirements become prohibitive. The moment the builders are no longer able or willing to take care of their creation, the shepherd moons will start to drift out of position and the ring system will begin to dissipate. Depending on initial conditions, this process could take centuries.
Launching stuff from the planet to orbit is expensive. And if we want to do some major planet colonizing some day we're going to need to build a lot of ships and that means a lot of launches with a lot of equipment from Earth, unless we start manufacturing in space. So pull in some raw materials in the form of captured asteroids and comets, and launch a couple small factory robots to start building the big factories. Some of the existing junk would be pulled in and recycled instead of just letting it burn up. All this manufacturing would produce dust and ice crystals, and over time these would build up until rings started to form in the orbits of the factories. With all of the junk confined to these rings, maneuvering around the planet gets a lot easier.
33,579
With today's technology could mankind create a system of rings similar to Saturn composed of different masses of water particles and silicate minerals that would freeze? How could mankind accomplish this feat of engineering? [![enter image description here](https://i.stack.imgur.com/wPBig.jpg)](https://i.stack.imgur.com/wPBig.jpg)
2016/01/18
[ "https://worldbuilding.stackexchange.com/questions/33579", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/13424/" ]
Creating a temporary ring is relatively simple, and most of the previous posters have provided useful answers. The problem isn't so much that you can make a ring, but how to make it a stable structure in orbit around a planet. The answer is shepherd moons. [![enter image description here](https://i.stack.imgur.com/VsYyK.jpg)](https://i.stack.imgur.com/VsYyK.jpg) Many of the rings of Saturn have been observed to have small moons whose orbits define the edges of various rings or gaps. The gravitational interplay between the various moons, particles in orbit and Saturn have pushed the particles into the rings we see, and have kept them stable over the eons since they were formed. Doing the same for Earth will be a bit tricky, since unlike Saturn, Earth has a single huge moon orbiting it, and it is quite possible that the perturbations caused by our moon will upset the intricate orbital dances between the tiny shepherd moons, the rings and the Earth. This can be overcome if the builders take the time and expend the energy to actively "fly" the moons in accurate orbits using some sort of impulse to overcome perturbations caused by the Earth's natural satellite. Since this is a long term project, the shepherd moons will need to be propelled by some sort of passive system like Solar Sails, otherwise the energy and reaction mass requirements become prohibitive. The moment the builders are no longer able or willing to take care of their creation, the shepherd moons will start to drift out of position and the ring system will begin to dissipate. Depending on initial conditions, this process could take centuries.
Stealing from the Asteroid Belt =============================== One option would be to 'borrow' the required mass from the asteroid belt between Mars and Jupiter - no easy feat for sure, but something current space agencies [have plans](https://www.nasa.gov/content/what-is-nasa-s-asteroid-redirect-mission) to do, albeit on a much smaller scale. Saturn's ring system has an estimated mass of 3 x 10^19 kg, with Saturn's mass being ~ 6 x 10^26 kg. Using Earth's mass of ~ 6 x 10^24 kg, this gives us a ballpark Earth-Ring mass of 3 x 10^17 kg. This gives us a ring system of a similar density and relative size to Saturn's. Luckily for us, the asteroid belt is estimated to have ~ 3 x 10^21 kg of material - enough for 10,000 rings! Now we know there's enough material, let's get started! How to do it ------------ Build an unmanned drone ship capable of sustained spaceflight for many hundreds of years with enough fuel for frequent burns. The burns don't necessarily need to be powerful, so it's *possible* that this could be achieved with an [ion engine](https://en.wikipedia.org/wiki/Ion_thruster). The size of the ship shouldn't matter in theory, but the bigger (more massive), the better! Pick an asteroid, fly close by and use the gravity of your ship to nudge the asteroid into a transfer orbit to coincide with Earth. This will need to be done very precisely as we want the asteroid to fall in an orbit *around* Earth, not fall into Earth itself! By dropping the asteroids below Earth's [Roche limit](https://en.wikipedia.org/wiki/Roche_limit), they should break apart and form perfect rings (*should* - no promises). Rinse and repeat! **Bonus**: As other answers have mentioned, Saturn's ring system is kept in check by small shepherd moons. If you pick a large enough asteroid and place it in orbit *above* the Roche limit, you have a ready-made shepherd! 
One Caveat ---------- Saturn's Rings are believed to consist almost entirely of water ice, whereas asteroid belt objects are typically carbon or metal-heavy. This means your rings will likely be less reflective than Saturn's.
42,407,702
Hello all, I am trying to search my Gmail using AE.Net.Mail, but I am facing a problem: when I try to search email using the SentOn method it always returns **`xm006 BAD Could not parse command`**. I am sending this DateTime format: yyyy-MM-dd. Can you guys please help me figure out what the problem is here? Thank you!
2017/02/23
[ "https://Stackoverflow.com/questions/42407702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7608846/" ]
There's a bug filed about this exact problem on AE.Net.Mail's GitHub issues page here: <https://github.com/andyedinborough/aenetmail/issues/197> AE.Net.Mail is unlikely to get a fix anytime soon as the project is unmaintained. Your best option is probably to switch to my [MailKit](https://github.com/jstedfast/MailKit) library which does not have this problem and is actively maintained as well.
A little late for this, but I guess it could help someone. Gmail no longer accepts the raw DateTime value, so the date has to be formatted explicitly as a workaround: ``` imapClient.SelectMailbox("INBOX"); SearchCondition condition = new SearchCondition(); condition.Field = SearchCondition.Fields.Since; condition.Value = new DateTime(2019, 2, 12).ToString("dd-MMM-yyyy"); var messages = imapClient.SearchMessages(condition); ```
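The key detail in the workaround is the date format: RFC 3501 IMAP search criteria expect dd-Mon-yyyy dates (e.g. 12-Feb-2019), which is what the "dd-MMM-yyyy" .NET format string produces. The same formatting in Python, for comparison (the `imap_date` helper name is illustrative):

```python
from datetime import datetime

def imap_date(d):
    """Format a date the way RFC 3501 SEARCH criteria expect, e.g. 12-Feb-2019."""
    return d.strftime("%d-%b-%Y")
```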
8,337,593
I'm trying to use Hive to analyze our logs, and I have a question. Assume we have some data like this: A 1 A 1 A 1 B 1 C 1 B 1 How can I make it like this in a Hive table (order is not important, I just want to merge them)? A 1 B 1 C 1 without pre-processing it with awk/sed or something like that? Thanks!
2011/12/01
[ "https://Stackoverflow.com/questions/8337593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/853688/" ]
One idea: you could create a table around the first file (called 'oldtable'), then run something like this: `create table newtable as select field1, max(field) from oldtable group by field1;` Not sure I have the syntax right, but the idea is to get unique values of the first field, and only one of the second. Make sense?
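The effect of the GROUP BY here is plain row de-duplication; the semantics can be sketched with the sample rows from the question (Python is used only to illustrate what the query computes, Hive of course executes it distributed):

```python
# Sample rows from the question, as (field1, field2) pairs.
rows = [("A", 1), ("A", 1), ("A", 1), ("B", 1), ("C", 1), ("B", 1)]

# GROUP BY over both columns keeps exactly one copy of each distinct pair.
deduped = sorted(set(rows))
```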
For merging the data, we can also use `UNION ALL`; it merges the rows of two tables with compatible schemas: insert overwrite table test1 select u.* from (select x.* from t1 x union all select y.* from t2 y) u; Here we are merging the data of two tables (t1 and t2) into one single table, test1.
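One caveat worth noting: UNION ALL concatenates rows and keeps duplicates, unlike UNION (DISTINCT), so it merges tables but does not by itself produce the de-duplicated output the question asks for. The difference, sketched on two small row sets:

```python
t1 = [("A", 1), ("B", 1)]
t2 = [("B", 1), ("C", 1)]

union_all = t1 + t2               # like UNION ALL: duplicates survive
union = sorted(set(t1 + t2))      # like UNION (DISTINCT): duplicates collapse
```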
8,337,593
I'm trying to use Hive to analyze our logs, and I have a question. Assume we have some data like this: A 1 A 1 A 1 B 1 C 1 B 1 How can I make it like this in a Hive table (order is not important, I just want to merge them)? A 1 B 1 C 1 without pre-processing it with awk/sed or something like that? Thanks!
2011/12/01
[ "https://Stackoverflow.com/questions/8337593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/853688/" ]
One idea: you could create a table around the first file (called 'oldtable'), then run something like this: `create table newtable as select field1, max(field) from oldtable group by field1;` Not sure I have the syntax right, but the idea is to get unique values of the first field, and only one of the second. Make sense?
There's no way to pre-process the data while it's being loaded without using an external program. You could use a view if you'd like to keep the original data intact. ``` hive> SELECT * FROM table1; OK A 1 A 1 A 1 B 1 C 1 B 1 B 2 # Added to show it will group correctly with different values hive> CREATE VIEW table2 (fld1, fld2) AS SELECT fld1, fld2 FROM table1 GROUP BY fld1, fld2; hive> SELECT * FROM table2; OK A 1 B 1 B 2 C 1 ```
8,337,593
I'm trying to use Hive to analyze our logs, and I have a question. Assume we have some data like this: A 1 A 1 A 1 B 1 C 1 B 1 How can I make it like this in a Hive table (order is not important, I just want to merge them)? A 1 B 1 C 1 without pre-processing it with awk/sed or something like that? Thanks!
2011/12/01
[ "https://Stackoverflow.com/questions/8337593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/853688/" ]
**Step 1:** Create a Hive table for the input data set: create table if not exists table1 (fld1 string, fld2 string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'; (I assumed the field separator is \t; you can replace it with the actual separator.) **Step 2:** Run the query below to get the merged data you are looking for: create table table2 as select fld1,fld2 from table1 group by fld1,fld2 ; I tried this for the input set below: hive (default)> select * from table1; OK A 1 A 1 A 1 B 1 C 1 B 1 create table table2 as select fld1,fld2 from table1 group by fld1,fld2 ; hive (default)> select * from table2; OK A 1 B 1 C 1 You can use an external table as well, but for simplicity I have used a managed table here.
For merging the data, we can also use `UNION ALL`; it merges the rows of two tables with compatible schemas: insert overwrite table test1 select u.* from (select x.* from t1 x union all select y.* from t2 y) u; Here we are merging the data of two tables (t1 and t2) into one single table, test1.
8,337,593
I'm trying to use Hive to analyze our logs, and I have a question. Assume we have some data like this:

```
A 1
A 1
A 1
B 1
C 1
B 1
```

How can I make it look like this in a Hive table (order is not important, I just want to merge them)?

```
A 1
B 1
C 1
```

Can I do this without pre-processing the data with awk/sed or something like that? Thanks!
2011/12/01
[ "https://Stackoverflow.com/questions/8337593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/853688/" ]
**Step 1:** Create a Hive table for the input data set:

```
CREATE TABLE IF NOT EXISTS table1 (fld1 STRING, fld2 STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
```

(I assumed the field separator is \t; you can replace it with your actual separator.)

**Step 2:** Run the following to get the merged data you are looking for:

```
CREATE TABLE table2 AS
SELECT fld1, fld2 FROM table1 GROUP BY fld1, fld2;
```

I tried this for the input set below:

```
hive (default)> select * from table1;
OK
A 1
A 1
A 1
B 1
C 1
B 1

hive (default)> create table table4 as select fld1, fld2 from table1 group by fld1, fld2;

hive (default)> select * from table4;
OK
A 1
B 1
C 1
```

You can use an external table as well, but for simplicity I have used a managed table here.
There's no way to pre-process the data while it's being loaded without using an external program. You could use a view if you'd like to keep the original data intact.

```
hive> SELECT * FROM table1;
OK
A 1
A 1
A 1
B 1
C 1
B 1
B 2   # Added to show it will group correctly with different values

hive> CREATE VIEW table2 (fld1, fld2) AS SELECT fld1, fld2 FROM table1 GROUP BY fld1, fld2;

hive> SELECT * FROM table2;
OK
A 1
B 1
B 2
C 1
```
46,509,745
In my stored procedure, I have added a command to create a hash temp table #DIR_Cat. But every time I execute the procedure I get this error: "There is already an object named '#DIR_Cat' in the database." This happens even though I have an EXISTS clause at the start of the SP to check for and drop the table if it is present. Any help is much appreciated. The code goes like this:

```
if exists (select * from dbo.sysobjects where id = object_id(N'#DIR_Cat'))
    drop table #DIR_Cat

/* some lines of code */

CREATE TABLE #DIR_Cat (XMLDta xml)

/* some lines of code */

INSERT #DIR_Cat exec (@stmt)

/* some lines of code */

drop table #DIR_Cat
```
2017/10/01
[ "https://Stackoverflow.com/questions/46509745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8702769/" ]
If you read the desired output matrix top-down, then left-to-right, you see the pattern 1,2,3, 0,0,0, 1,2,3, 0,0,0, 1,2,3. You can use that pattern to easily create a linear array, and then reshape it into the two-dimensional form:

```
import numpy as np

X = np.array([1, 2, 3])
N = len(X)
zeros = np.zeros_like(X)
m = np.hstack((np.tile(np.hstack((X, zeros)), N - 1), X)).reshape(N, -1).T
print(m)
```

gives

```
[[1 0 0]
 [2 1 0]
 [3 2 1]
 [0 3 2]
 [0 0 3]]
```
Another writable solution:

```
import numpy as np

def block(X):
    n = X.size
    zeros = np.zeros((2 * n - 1, n), X.dtype)
    zeros[::2] = X
    return zeros.reshape(n, -1).T
```

Timing:

```
In [2]: %timeit block(X)
600 µs ± 33 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
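The pattern these answers build — column j of the output is X shifted down by j rows — can also be spelled out in plain Python (no NumPy), which is a handy way to check the expected shape:

```python
def shifted_columns(x):
    """Return the (2n-1) x n matrix whose column j is x shifted down by j rows."""
    n = len(x)
    out = [[0] * n for _ in range(2 * n - 1)]
    for j in range(n):
        for i, value in enumerate(x):
            out[i + j][j] = value
    return out

print(shifted_columns([1, 2, 3]))
# [[1, 0, 0], [2, 1, 0], [3, 2, 1], [0, 3, 2], [0, 0, 3]]
```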
46,509,745
In my stored procedure, I have added a command to create a hash temp table #DIR_Cat. But every time I execute the procedure I get this error: "There is already an object named '#DIR_Cat' in the database." This happens even though I have an EXISTS clause at the start of the SP to check for and drop the table if it is present. Any help is much appreciated. The code goes like this:

```
if exists (select * from dbo.sysobjects where id = object_id(N'#DIR_Cat'))
    drop table #DIR_Cat

/* some lines of code */

CREATE TABLE #DIR_Cat (XMLDta xml)

/* some lines of code */

INSERT #DIR_Cat exec (@stmt)

/* some lines of code */

drop table #DIR_Cat
```
2017/10/01
[ "https://Stackoverflow.com/questions/46509745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8702769/" ]
**Approach #1:** Using [`np.lib.stride_tricks.as_strided`](http://www.scipy-lectures.org/advanced/advanced_numpy/#indexing-scheme-strides) -

```
from numpy.lib.stride_tricks import as_strided as strided

def zeropad_arr_v1(X):
    n = len(X)
    z = np.zeros(len(X) - 1, dtype=X.dtype)
    X_ext = np.concatenate((z, X, z))
    s = X_ext.strides[0]
    return strided(X_ext[n-1:], (2*n-1, n), (s, -s), writeable=False)
```

Note that this would create a `read-only` output. If you need to write into it later on, simply make a copy by appending `.copy()` at the end.

**Approach #2:** Using concatenation with zeros and then clipping/slicing -

```
def zeropad_arr_v2(X):
    n = len(X)
    X_ext = np.concatenate((X, np.zeros(n, dtype=X.dtype)))
    return np.tile(X_ext, n)[:-n].reshape(-1, n, order='F')
```

Approach #1, being a strides-based method, should be very efficient. Sample runs -

```
In [559]: X = np.array([1,2,3])

In [560]: zeropad_arr_v1(X)
Out[560]: 
array([[1, 0, 0],
       [2, 1, 0],
       [3, 2, 1],
       [0, 3, 2],
       [0, 0, 3]])

In [561]: zeropad_arr_v2(X)
Out[561]: 
array([[1, 0, 0],
       [2, 1, 0],
       [3, 2, 1],
       [0, 3, 2],
       [0, 0, 3]])
```

**Runtime test**

```
In [611]: X = np.random.randint(0,9,(1000))

# Approach #1 (read-only)
In [612]: %timeit zeropad_arr_v1(X)
100000 loops, best of 3: 8.74 µs per loop

# Approach #1 (writable)
In [613]: %timeit zeropad_arr_v1(X).copy()
1000 loops, best of 3: 1.05 ms per loop

# Approach #2
In [614]: %timeit zeropad_arr_v2(X)
1000 loops, best of 3: 705 µs per loop

# @user8153's solution
In [615]: %timeit hstack_app(X)
100 loops, best of 3: 2.26 ms per loop
```
Another writable solution:

```
import numpy as np

def block(X):
    n = X.size
    zeros = np.zeros((2 * n - 1, n), X.dtype)
    zeros[::2] = X
    return zeros.reshape(n, -1).T
```

Timing:

```
In [2]: %timeit block(X)
600 µs ± 33 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
2,862,222
Find the curve length of the intersection between the unit sphere $x^2+y^2+z^2=1$ and the plane $x+y=1$. I have read [this](https://math.stackexchange.com/questions/2004224/parametrization-of-the-intersection-between-a-sphere-and-a-plane) and [this](https://math.stackexchange.com/questions/2339243/when-find-the-equation-of-intersection-of-plane-and-sphere?noredirect=1&lq=1) but I still do not manage; I will go [over this](http://tutorial.math.lamar.edu/Classes/CalcIII/CalcIII.aspx) and I hope it will address the subject. So we have the unit sphere $x^2+y^2+z^2=1$, and I know that a parametrization of a sphere is $(r \sin\rho \cos \theta, r\sin \rho \sin \theta, r\cos \rho)$. Now the plane is $x+y=1$, which can be written as $x=1-y$; for a point to lie on both the sphere and the plane we need $(1-y)^2+y^2+z^2=1$, which is $1-2y+y^2+y^2+z^2=1$, or $2y(y-1)+z^2=0$. I need to move to a parametrization to find the curve length; how do I do it?
2018/07/25
[ "https://math.stackexchange.com/questions/2862222", "https://math.stackexchange.com", "https://math.stackexchange.com/users/300848/" ]
One (extreme) perspective on Curry-Howard (as a general principle) is that it states that proof theorists and programming language theorists/type theorists are simply using different words for the *exact* same things. From this (extreme) perspective, the answer to your first question is trivially "yes". You simply call the types of your imperative language "formulas", you call the programs "proofs", you call the reduction rules of an operational semantics "proof transformations", and you call (the type mapping part of a) denotational semantics an "interpretation" (in the model theoretic sense), and et voilà, we have a proof system. Of course, for most programming languages, this proof system will have horrible meta-theoretic properties, be inconsistent1, and probably won't have an *obvious* connection to (proof systems for) usual logics. Interesting Curry-Howard-style results are usually connecting a pre-existing proof system to a pre-existing programming language/type theory. For example, the archetypal Curry-Howard correspondence connects the *natural deduction presentation* of intuitionistic propositional logic to (a specific presentation of) the simply typed lambda calculus. A *sequent calculus presentation* of intuitionistic propositional logic would (most directly) connect to a different presentation of the simply typed lambda calculus. For example, in the presentation corresponding to natural deduction (and assuming we have products), you'd have terms $\pi\_i:A\_1\times A\_2\to A\_i$ and $\langle M,N\rangle : A\_1\times A\_2$ where $M:A\_1$ and $N:A\_2$. In a sequent calculus presentation, you wouldn't have the projections but instead have a pattern matching $\mathsf{let}$, e.g. $\mathsf{let}\ \langle x,y\rangle = T\ \mathsf{in}\ U$ where $T:A\_1\times A\_2$ and $U:B$ and $x$ and $y$ can occur free in $U$ with types $A\_1$ and $A\_2$ respectively.
As a third example, a Hilbert-style presentation of minimal logic (the implication only fragment of intuitionistic logic) gives you the SKI combinatory logic. (My understanding is that this third example is closer to what Curry did.) Another way of using Curry-Howard is to leverage tools and intuitions from proof theory for computational purposes. Atsushi Ohori's [Register Allocation by Proof Transformation](https://doi.org/10.1016/j.scico.2004.01.005) is an example of a proof system built around a simple imperative language (meant to be low-level). This system is not intended to correspond to any pre-existing logic. Going the other way, computational interpretations can shed light on existing proof systems or suggest entirely new logics and proof systems. Things like the [Concurrent Logical Framework](http://reports-archive.adm.cs.cmu.edu/anon/2002/abstracts/02-101.html) clearly leverage the interplay of proof theory and programming. Let's say we start with a typed lambda calculus, for familiarity. What happens when we add various effects? We actually know in many cases (though usually the effects need to be carefully controlled, more so than they usually are in typical programming languages). The most well-known example is `callcc` which gives us a classical logic (though, again, a *lot* of care is needed to produce a well-behaved proof system). Slide 5 (page 11) of [this slide set](https://www.p%C3%A9drot.fr/slides/sstt-01-17.pdf#page=11) mentions several others and a scheme for encoding them into Coq. (You may find the entire slide set interesting and other work by [Pierre-Marie Pédrot](https://www.p%C3%A9drot.fr/publications.html).) Adding effects tends to produce powerful new proof rules and non-trivial extensions of what's provable (often to the point of inconsistency when done cavalierly). For example, adding exceptions (in the context of intuitionistic dependent type theory) allows us to validate [Markov's rule](https://en.wikipedia.org/wiki/Markov_rule). 
One way of understanding exceptions is by a monad-style translation and the logical view of this is [Friedman's translation](https://en.wikipedia.org/wiki/Friedman_translation). If you take a constructive reading of Markov's rule, it's actually pretty obvious how you could implement it given exceptions. (What's less obvious is if adding exceptions causes any other problems, which, naively, it clearly does, since we can "prove" anything by simply throwing an exception, but if we require all exceptions to be caught...) Daniel Schepler mentioned Hoare logic (a more modern version would be [Hoare Type Theory](https://software.imdea.org/~aleks/htt/)), and it is worth mentioning how this connects. A Hoare Logic is itself a deductive system (that usually is defined in terms of a traditional logic). The "formulas" are the Hoare triples, i.e. the programs annotated with pre- and post-conditions, so this is a different way of connecting logic and computation than Curry-Howard2. A less extreme perspective on Curry-Howard would add some conditions on what constitutes an acceptable proof system and, going the other way, what constitutes an acceptable programming language. (It's quite possible to specify proof systems that don't give rise to an obvious notion of computation.) Also, the traditional way of using Curry-Howard to produce a computational system identifies computation with proof normalization, but there are other ways of thinking about computation. Regardless of perspective, the well-behaved proof system/rules are often the most interesting. Certainly from a logical side, but even from the programming side, good meta-theoretical properties of the proof system usually correspond to properties that are good for program comprehension. 1 Traditional discussions of logic tend to dismiss inconsistent logics as completely uninteresting. This is an artifact of proof-irrelevance. Inconsistent logics can still have interesting proof theories! 
You just can't accept just any proof. Of course, it is still reasonable to focus on consistent theories. 2 I don't mean to suggest that Daniel Schepler meant otherwise.
As the previous answer already suggested, imperative languages can be seen as less restrictive languages than functional languages. Any language with a strong enough type system allows you to express types corresponding to mathematical propositions. If you restrict yourself to only a subset of the language, you can also end up with a consistent logic. Let's look at some examples in C++. An implication $A \to B$ corresponds under the BHK interpretation to a function, or algorithm, from type $A$ to $B$. This can be straightforwardly expressed with a function

```
b f(a input)
{
    // here you would need to build something of type b only using the input
}
```

You can also express sum types in C++ with enumerations:

```
enum Either { a, b };
```

A proof of $A \to A \vee B$ is therefore given by the following function

```
Either f(a input)
{
    return (Either) input;
}
```

Of course, you have to limit yourself to a functional style, otherwise you can just prove everything by constructing arbitrary elements. Note that the suitability of a programming language to be a theorem prover is merely a continuum. Haskell can be considered a theorem prover due to its expressive type system, even though it allows non-termination, which introduces a nasty $\bot$ in the type system.
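To make the injection reading of $A \to A \vee B$ concrete, here is a loose Python sketch using a tagged pair to stand in for the sum type (purely illustrative; the names `inl`/`inr` are the usual injection names, and Python checks none of this statically):

```python
def inl(a):
    """Left injection: evidence for A yields evidence for A ∨ B."""
    return ("left", a)

def inr(b):
    """Right injection: evidence for B yields evidence for A ∨ B."""
    return ("right", b)

# A "proof" of A -> A ∨ B is just the left injection itself.
proof = inl
print(proof(42))  # ('left', 42)
```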
244,788
I have downloaded a `node.js` tarball from [its website](http://nodejs.org/), and now I would like to install the manpage that comes with it so that I can view it by typing:

```
man nodejs
```

How can I do this?
2013/01/18
[ "https://askubuntu.com/questions/244788", "https://askubuntu.com", "https://askubuntu.com/users/23006/" ]
It's not `man nodejs`, but `man 1 node`. **And it will be there by default.** It will be installed for you with the regular installation method (e.g. `sudo make install`) as the `tools/install.py` called from the `Makefile` will take care of it:

```
if 'freebsd' in sys.platform or 'openbsd' in sys.platform:
    action(['doc/node.1'], 'man/man1/')
else:
    action(['doc/node.1'], 'share/man/man1/')
```

In other words, it installs `node.1` for you in `/usr/share/man/man1/`.

---

To read the manpage directly from the source, you can do:

```
man /path/to/nodejssource/doc/node.1
```
In addition to the man page, Node sets up its own help server.

```
npm help <term>
```

or to get started:

```
npm help npm
```

The documentation is also online at: [Node.js API docs](http://nodejs.org/api/)
6,872
Most stores have the Candy 1 for $49 and the Candy 1 SE for $39, but I can't find the difference. The SE has a golden spring, just like the Candy 2; the plain Candy 1 has a silver one.

Candy 1: $49

![Candy 1](https://i.stack.imgur.com/UuM6e.jpg)

Candy 1 SE: $39

![Candy 1 SE](https://i.stack.imgur.com/sjNuj.jpg)
2011/11/12
[ "https://bicycles.stackexchange.com/questions/6872", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/1635/" ]
People recommend the late [Sheldon Brown's site](http://www.sheldonbrown.com/).
There isn't. You can read reviews and guides all day, but bikes are really simple (and somewhat dumb) machines; there's really not much to improve. There are, however, several things that do the same job in completely different ways. So the only answer to this question is experimentation. What may work like magic for someone else may impair you beyond imagination. So take your experience with bikes, think about the problems you had and what would improve them, do that over and over, and visit your LBS.
42,838,162
I want the pyramid to look like this if the input was 6:

```
     0
    12
   345
  6789
 01234
567890
```

Here's my code:

```
void HalfPyramid(int num)
{
    for (int a = 0; a < num; a++)
    {
        for (int b = 0; b < num - a; b++)
        {
            cout << " ";
        }
        for (int c = 0; c < a; c++)
        {
            cout << a;
        }
        cout << endl;
    }
}
```

I'm getting:

```
     1
    22
   333
  4444
 55555
```

Not sure how to show the numbers as increasing throughout; I tried outputting `a` and `a+1`.
2017/03/16
[ "https://Stackoverflow.com/questions/42838162", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7699290/" ]
You need another variable. That variable needs to start at 0 and increment every time you print it. Then, since you need it to wrap back to 0 once you print 9, we will use the modulus operator to constrain the output to the range [0, 9]. With all that you get:

```
void HalfPyramid(int num)
{
    int output = 0;
    for (int a = 1; a < num + 1; a++)
    {
        for (int b = 0; b < num - a; b++)
        {
            cout << " ";
        }
        for (int c = 0; c < a; c++)
        {
            cout << output++ % 10;
        }
        cout << endl;
    }
}
```
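If it helps to double-check the expected output shape, the same counter-with-modulo idea can be sketched quickly in Python (a rough port for verification, not the original C++):

```python
def half_pyramid(num):
    """Build the right-aligned pyramid rows with digits cycling 0-9."""
    rows, digit = [], 0
    for a in range(num):
        row = " " * (num - 1 - a)
        for _ in range(a + 1):
            row += str(digit)
            digit = (digit + 1) % 10  # wrap back to 0 after 9
        rows.append(row)
    return rows

print("\n".join(half_pyramid(6)))
```

For an input of 6 this reproduces the pyramid from the question, ending in the row `567890`.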
```
void HalfPyramid(int num)
{
    int cur = 0;
    for (int a = 0; a < num; a++)
    {
        for (int b = 1; b < num - a; b++)
        {
            cout << " ";
        }
        for (int c = 0; c < a + 1; c++)
        {
            cout << cur;
            cur = (cur + 1) % 10;
        }
        cout << endl;
    }
}
```
42,838,162
I want the pyramid to look like this if the input was 6:

```
     0
    12
   345
  6789
 01234
567890
```

Here's my code:

```
void HalfPyramid(int num)
{
    for (int a = 0; a < num; a++)
    {
        for (int b = 0; b < num - a; b++)
        {
            cout << " ";
        }
        for (int c = 0; c < a; c++)
        {
            cout << a;
        }
        cout << endl;
    }
}
```

I'm getting:

```
     1
    22
   333
  4444
 55555
```

Not sure how to show the numbers as increasing throughout; I tried outputting `a` and `a+1`.
2017/03/16
[ "https://Stackoverflow.com/questions/42838162", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7699290/" ]
```
void HalfPyramid(int num)
{
    int cur = 0;
    for (int a = 0; a < num; a++)
    {
        for (int b = 1; b < num - a; b++)
        {
            cout << " ";
        }
        for (int c = 0; c < a + 1; c++)
        {
            cout << cur;
            cur = (cur + 1) % 10;
        }
        cout << endl;
    }
}
```
The other answers already provide working versions of `HalfPyramid`. This answer, hopefully, makes you think of the logic and functionality a bit differently. I like to create small functions that capture the essence of what I am trying to do rather than using the language to just do it.

```
bool isSpace(int num, int a, int b)
{
    return ((a + b) < (num - 1));
}

int getNextDigit(int in)
{
    return (in + 1) % 10;
}

void HalfPyramid(int num)
{
    int digit = 0;
    for (int a = 0; a < num; ++a)
    {
        for (int b = 0; b < num; ++b)
        {
            if (isSpace(num, a, b))
            {
                cout << " ";
            }
            else
            {
                cout << digit;
                digit = getNextDigit(digit);
            }
        }
        cout << endl;
    }
}
```
42,838,162
I want the pyramid to look like this if the input was 6:

```
     0
    12
   345
  6789
 01234
567890
```

Here's my code:

```
void HalfPyramid(int num)
{
    for (int a = 0; a < num; a++)
    {
        for (int b = 0; b < num - a; b++)
        {
            cout << " ";
        }
        for (int c = 0; c < a; c++)
        {
            cout << a;
        }
        cout << endl;
    }
}
```

I'm getting:

```
     1
    22
   333
  4444
 55555
```

Not sure how to show the numbers as increasing throughout; I tried outputting `a` and `a+1`.
2017/03/16
[ "https://Stackoverflow.com/questions/42838162", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7699290/" ]
You need another variable. That variable needs to start at 0 and increment every time you print it. Then, since you need it to wrap back to 0 once you print 9, we will use the modulus operator to constrain the output to the range [0, 9]. With all that you get:

```
void HalfPyramid(int num)
{
    int output = 0;
    for (int a = 1; a < num + 1; a++)
    {
        for (int b = 0; b < num - a; b++)
        {
            cout << " ";
        }
        for (int c = 0; c < a; c++)
        {
            cout << output++ % 10;
        }
        cout << endl;
    }
}
```
The other answers already provide working versions of `HalfPyramid`. This answer, hopefully, makes you think of the logic and functionality a bit differently. I like to create small functions that capture the essence of what I am trying to do rather than using the language to just do it.

```
bool isSpace(int num, int a, int b)
{
    return ((a + b) < (num - 1));
}

int getNextDigit(int in)
{
    return (in + 1) % 10;
}

void HalfPyramid(int num)
{
    int digit = 0;
    for (int a = 0; a < num; ++a)
    {
        for (int b = 0; b < num; ++b)
        {
            if (isSpace(num, a, b))
            {
                cout << " ";
            }
            else
            {
                cout << digit;
                digit = getNextDigit(digit);
            }
        }
        cout << endl;
    }
}
```
27,308,454
In Android, how can I check if my params contain the value "image"?

Here is the params output:

```
["image=fdfdsgdg5dsgd1s211511", "id=dfd4f5d4f5d", "api_id=f4f54f5d454df"]
```

Code:

```
public String getQueryString(List<NameValuePair> params) {
    Log.d(TAG, "getQueryString - params => " + params);
    String queryString = "";
    Iterator<NameValuePair> itr = params.iterator();
    while (itr.hasNext()) {
        NameValuePair nvp = itr.next();
        //TODO if ....
        try {
            if (Arrays.asList(params).contains("image")) {
                Log.w(TAG, "if(Arrays.asList(params).contains('image'))");
                break;
            }
```
2014/12/05
[ "https://Stackoverflow.com/questions/27308454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3792198/" ]
Is your PyCharm version Community or Professional? If your PyCharm is Community, it may need a plugin to support Django. If your PyCharm is Professional, make sure that in `Preferences > Languages & Frameworks > Django > Enable Django Support` the option is chosen. Here is the image:

![Enable Django Support in PyCharm preferences](https://i.stack.imgur.com/Is9q3.png)
There is a better way to resolve this problem. When you enable Django support in PyCharm, it automatically detects that this is a model and that `objects` refers to the model manager. Alternatively, you can specify the manager in your models.py itself, which is the preferred way to code it. Update your code like this:

```
class Foo(models.Model):
    # column definitions
    objects = models.Manager()
```
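For intuition about why the explicit `objects = models.Manager()` line helps, here is a rough pure-Python analogy (no Django involved; the metaclass below only mimics the idea that Django attaches a default manager at class-creation time, so the attribute is invisible in the source unless you declare it):

```python
class Manager:
    def all(self):
        return []

class ModelMeta(type):
    # Mimics a framework attaching a default manager when none is declared.
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if "objects" not in namespace:
            cls.objects = Manager()
        return cls

class ImplicitModel(metaclass=ModelMeta):
    pass  # 'objects' only appears at class-creation time; static tools can miss it

class ExplicitModel(metaclass=ModelMeta):
    objects = Manager()  # declared in source, so editors/linters can resolve it

print(type(ImplicitModel.objects).__name__, type(ExplicitModel.objects).__name__)
```

Both classes behave the same at runtime; the difference is only what a static analyzer can see in the source.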
59,137,067
I am new to Node.js. I have created this application:

```
const express = require('express');
const app = express();

app.use(express.json());

app.post('api/hostels', (req, res) => {
    const hostel = {
        id: hostels.length + 1,
        name: req.body.name
    };
    hostels.push(hostel);
    res.send(hostel);
});
```

I send this body in the Postman raw body (JSON):

```
{
    "id": "4",
    "name": "new Request"
}
```

but I am getting this error:

```
<body>
<pre>Cannot POST /api/requests</pre>
</body>
```
2019/12/02
[ "https://Stackoverflow.com/questions/59137067", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11679569/" ]
Well, you made a small mistake while defining the Express route: you have `app.post('api/hostels', (req, res) => {})`, but you should have `app.post('/api/hostels', (req, res) => {})` (note the leading `/`).
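The root cause generalizes beyond Express: routers compare the request path, which always starts with `/`, against the registered pattern literally. A toy matcher in Python (not Express code, just an illustration of the string comparison involved):

```python
# Route registered without the leading slash, as in the question.
routes = {"api/hostels"}

# What the server actually receives on the wire.
request_path = "/api/hostels"

print(request_path in routes)            # False -> router answers "Cannot POST"
print(request_path in {"/api/hostels"})  # True once the slash is added
```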
You are posting to `/api/requests`, your endpoint shows `/api/hostels`. Change the endpoint on your postman to `/api/hostels`.
59,137,067
I am new to Node.js. I have created this application:

```
const express = require('express');
const app = express();

app.use(express.json());

app.post('api/hostels', (req, res) => {
    const hostel = {
        id: hostels.length + 1,
        name: req.body.name
    };
    hostels.push(hostel);
    res.send(hostel);
});
```

I send this body in the Postman raw body (JSON):

```
{
    "id": "4",
    "name": "new Request"
}
```

but I am getting this error:

```
<body>
<pre>Cannot POST /api/requests</pre>
</body>
```
2019/12/02
[ "https://Stackoverflow.com/questions/59137067", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11679569/" ]
Well, you made a small mistake while defining the Express route: you have `app.post('api/hostels', (req, res) => {})`, but you should have `app.post('/api/hostels', (req, res) => {})` (note the leading `/`).
There is a mistake in your post: there's a missing `/` in your `app.post`. It should be `app.post('/api/hostels', (req, res) => { })`.
21,052,377
C++11 makes it possible to overload member functions based on reference qualifiers:

```
class Foo {
public:
    void f() &;   // for when *this is an lvalue
    void f() &&;  // for when *this is an rvalue
};

Foo obj;
obj.f();             // calls lvalue overload
std::move(obj).f();  // calls rvalue overload
```

I understand how this works, but what is a use case for it? I see that [N2819](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2819.html) proposed limiting most assignment operators in the standard library to lvalue targets (i.e., adding "`&`" reference qualifiers to assignment operators), but [this was rejected](http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#941). So that was a potential use case where the committee decided not to go with it. So, again, what is a reasonable use case?
2014/01/10
[ "https://Stackoverflow.com/questions/21052377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1426649/" ]
One use case is to [prohibit assignment to temporaries](https://stackoverflow.com/q/16834937/819272)

```
// can only be used with lvalues
T& operator*=(T const& other) & { /* ... */ return *this; }

// not possible to do (a * b) = c;
T operator*(T lhs, T const& rhs) { return lhs *= rhs; }
```

whereas not using the reference qualifier would leave you the choice between two bads

```
T operator*(T const& lhs, T const& rhs);        // can be used on rvalues
const T operator*(T const& lhs, T const& rhs);  // inhibits move semantics
```

The first choice allows move semantics, but acts differently on user-defined types than on builtins (doesn't do as the ints do). The second choice would stop the assignment but eliminate move semantics (a possible performance hit for e.g. matrix multiplication). The links by @dyp in the comments also provide an extended discussion on using the other (`&&`) overload, which can be useful if you want to *assign to* (either lvalue or rvalue) references.
If `f()` would otherwise need a temporary `Foo` that is a copy of `*this` to modify, the rvalue overload lets you modify `*this` itself, since it is already a temporary; you can't do that with only an lvalue overload.
21,052,377
C++11 makes it possible to overload member functions based on reference qualifiers:

```
class Foo {
public:
    void f() &;   // for when *this is an lvalue
    void f() &&;  // for when *this is an rvalue
};

Foo obj;
obj.f();             // calls lvalue overload
std::move(obj).f();  // calls rvalue overload
```

I understand how this works, but what is a use case for it? I see that [N2819](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2819.html) proposed limiting most assignment operators in the standard library to lvalue targets (i.e., adding "`&`" reference qualifiers to assignment operators), but [this was rejected](http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#941). So that was a potential use case where the committee decided not to go with it. So, again, what is a reasonable use case?
2014/01/10
[ "https://Stackoverflow.com/questions/21052377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1426649/" ]
In a class that provides reference-getters, ref-qualifier overloading can activate move semantics when extracting from an rvalue. E.g.:

```
class some_class {
    huge_heavy_class hhc;
public:
    huge_heavy_class& get() & {
        return hhc;
    }
    huge_heavy_class const& get() const& {
        return hhc;
    }
    huge_heavy_class&& get() && {
        return std::move(hhc);
    }
};

some_class factory();
auto hhc = factory().get();
```

This does seem like a lot of effort to invest only to have the shorter syntax

```
auto hhc = factory().get();
```

have the same effect as

```
auto hhc = std::move(factory().get());
```

EDIT: I found the [original proposal paper](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1821.htm), it provides three motivating examples:

1. Constraining `operator =` to lvalues (TemplateRex's answer)
2. Enabling move for members (basically this answer)
3. Constraining `operator &` to lvalues. I suppose this is sensible to ensure that the "pointee" is more likely to be alive when the "pointer" is eventually dereferenced:

```
struct S {
    T operator &() &;
};

int main() {
    S foo;
    auto p1 = &foo;  // Ok
    auto p2 = &S();  // Error
}
```

Can't say I've ever personally used an `operator&` overload.
One use case is to [prohibit assignment to temporaries](https://stackoverflow.com/q/16834937/819272)

```
// can only be used with lvalues
T& operator*=(T const& other) & { /* ... */ return *this; }

// not possible to do (a * b) = c;
T operator*(T lhs, T const& rhs) { return lhs *= rhs; }
```

whereas not using the reference qualifier would leave you the choice between two bads

```
T operator*(T const& lhs, T const& rhs);        // can be used on rvalues
const T operator*(T const& lhs, T const& rhs);  // inhibits move semantics
```

The first choice allows move semantics, but acts differently on user-defined types than on builtins (doesn't do as the ints do). The second choice would stop the assignment but eliminate move semantics (a possible performance hit for e.g. matrix multiplication). The links by @dyp in the comments also provide an extended discussion on using the other (`&&`) overload, which can be useful if you want to *assign to* (either lvalue or rvalue) references.
21,052,377
C++11 makes it possible to overload member functions based on reference qualifiers:

```
class Foo {
public:
    void f() &;   // for when *this is an lvalue
    void f() &&;  // for when *this is an rvalue
};

Foo obj;
obj.f();             // calls lvalue overload
std::move(obj).f();  // calls rvalue overload
```

I understand how this works, but what is a use case for it? I see that [N2819](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2819.html) proposed limiting most assignment operators in the standard library to lvalue targets (i.e., adding "`&`" reference qualifiers to assignment operators), but [this was rejected](http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#941). So that was a potential use case where the committee decided not to go with it. So, again, what is a reasonable use case?
2014/01/10
[ "https://Stackoverflow.com/questions/21052377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1426649/" ]
One use case is to [prohibit assignment to temporaries](https://stackoverflow.com/q/16834937/819272)

```
// can only be used with lvalues
T& operator*=(T const& other) & { /* ... */ return *this; }

// not possible to do (a * b) = c;
T operator*(T lhs, T const& rhs) { return lhs *= rhs; }
```

whereas not using the reference qualifier would leave you the choice between two bads

```
T operator*(T const& lhs, T const& rhs);        // can be used on rvalues
const T operator*(T const& lhs, T const& rhs);  // inhibits move semantics
```

The first choice allows move semantics, but acts differently on user-defined types than on builtins (doesn't do as the ints do). The second choice would stop the assignment but eliminate move semantics (a possible performance hit for e.g. matrix multiplication). The links by @dyp in the comments also provide an extended discussion on using the other (`&&`) overload, which can be useful if you want to *assign to* (either lvalue or rvalue) references.
On the one hand you can use them to prevent functions that are semantically nonsensical to call on temporaries from being called, such as `operator=` or functions that mutate internal state and return `void`, by adding `&` as a reference qualifier. On the other hand you can use it for optimizations such as moving a member out of the object as a return value when you have an rvalue reference, for example a function `getName` could return either a `std::string const&` or `std::string&&` depending on the reference qualifier. Another use case might be operators and functions that return a reference to the original object such as `Foo& operator+=(Foo&)` which could be specialized to return an rvalue reference instead, making the result movable, which would again be an optimization. TL;DR: Use it to prevent incorrect usage of a function or for optimization.
21,052,377
C++11 makes it possible to overload member functions based on reference qualifiers:

```
class Foo {
public:
    void f() &;   // for when *this is an lvalue
    void f() &&;  // for when *this is an rvalue
};

Foo obj;
obj.f();             // calls lvalue overload
std::move(obj).f();  // calls rvalue overload
```

I understand how this works, but what is a use case for it? I see that [N2819](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2819.html) proposed limiting most assignment operators in the standard library to lvalue targets (i.e., adding "`&`" reference qualifiers to assignment operators), but [this was rejected](http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#941). So that was a potential use case where the committee decided not to go with it. So, again, what is a reasonable use case?
2014/01/10
[ "https://Stackoverflow.com/questions/21052377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1426649/" ]
In a class that provides reference-getters, ref-qualifier overloading can activate move semantics when extracting from an rvalue. E.g.: ``` class some_class { huge_heavy_class hhc; public: huge_heavy_class& get() & { return hhc; } huge_heavy_class const& get() const& { return hhc; } huge_heavy_class&& get() && { return std::move(hhc); } }; some_class factory(); auto hhc = factory().get(); ``` This does seem like a lot of effort to invest only to have the shorter syntax ``` auto hhc = factory().get(); ``` have the same effect as ``` auto hhc = std::move(factory().get()); ``` EDIT: I found the [original proposal paper](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1821.htm), it provides three motivating examples: 1. Constraining `operator =` to lvalues (TemplateRex's answer) 2. Enabling move for members (basically this answer) 3. Constraining `operator &` to lvalues. I suppose this is sensible to ensure that the "pointee" is more likely to be alive when the "pointer" is eventually dereferenced: ``` struct S { T operator &() &; }; int main() { S foo; auto p1 = &foo; // Ok auto p2 = &S(); // Error } ``` Can't say I've ever personally used an `operator&` overload.
If `f()` needs a temporary `Foo` that is a copy of `*this` and is then modified, the rvalue overload can modify `*this` directly instead of making that copy, which the lvalue overload cannot safely do.
21,052,377
C++11 makes it possible to overload member functions based on reference qualifiers: ``` class Foo { public: void f() &; // for when *this is an lvalue void f() &&; // for when *this is an rvalue }; Foo obj; obj.f(); // calls lvalue overload std::move(obj).f(); // calls rvalue overload ``` I understand how this works, but what is a use case for it? I see that [N2819](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2819.html) proposed limiting most assignment operators in the standard library to lvalue targets (i.e., adding "`&`" reference qualifiers to assignment operators), but [this was rejected](http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#941). So that was a potential use case where the committee decided not to go with it. So, again, what is a reasonable use case?
2014/01/10
[ "https://Stackoverflow.com/questions/21052377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1426649/" ]
If `f()` needs a temporary `Foo` that is a copy of `*this` and is then modified, the rvalue overload can modify `*this` directly instead of making that copy, which the lvalue overload cannot safely do.
On the one hand you can use them to prevent functions that are semantically nonsensical to call on temporaries from being called, such as `operator=` or functions that mutate internal state and return `void`, by adding `&` as a reference qualifier. On the other hand you can use it for optimizations such as moving a member out of the object as a return value when you have an rvalue reference, for example a function `getName` could return either a `std::string const&` or `std::string&&` depending on the reference qualifier. Another use case might be operators and functions that return a reference to the original object such as `Foo& operator+=(Foo&)` which could be specialized to return an rvalue reference instead, making the result movable, which would again be an optimization. TL;DR: Use it to prevent incorrect usage of a function or for optimization.
21,052,377
C++11 makes it possible to overload member functions based on reference qualifiers: ``` class Foo { public: void f() &; // for when *this is an lvalue void f() &&; // for when *this is an rvalue }; Foo obj; obj.f(); // calls lvalue overload std::move(obj).f(); // calls rvalue overload ``` I understand how this works, but what is a use case for it? I see that [N2819](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2819.html) proposed limiting most assignment operators in the standard library to lvalue targets (i.e., adding "`&`" reference qualifiers to assignment operators), but [this was rejected](http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#941). So that was a potential use case where the committee decided not to go with it. So, again, what is a reasonable use case?
2014/01/10
[ "https://Stackoverflow.com/questions/21052377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1426649/" ]
In a class that provides reference-getters, ref-qualifier overloading can activate move semantics when extracting from an rvalue. E.g.: ``` class some_class { huge_heavy_class hhc; public: huge_heavy_class& get() & { return hhc; } huge_heavy_class const& get() const& { return hhc; } huge_heavy_class&& get() && { return std::move(hhc); } }; some_class factory(); auto hhc = factory().get(); ``` This does seem like a lot of effort to invest only to have the shorter syntax ``` auto hhc = factory().get(); ``` have the same effect as ``` auto hhc = std::move(factory().get()); ``` EDIT: I found the [original proposal paper](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1821.htm), it provides three motivating examples: 1. Constraining `operator =` to lvalues (TemplateRex's answer) 2. Enabling move for members (basically this answer) 3. Constraining `operator &` to lvalues. I suppose this is sensible to ensure that the "pointee" is more likely to be alive when the "pointer" is eventually dereferenced: ``` struct S { T operator &() &; }; int main() { S foo; auto p1 = &foo; // Ok auto p2 = &S(); // Error } ``` Can't say I've ever personally used an `operator&` overload.
On the one hand you can use them to prevent functions that are semantically nonsensical to call on temporaries from being called, such as `operator=` or functions that mutate internal state and return `void`, by adding `&` as a reference qualifier. On the other hand you can use it for optimizations such as moving a member out of the object as a return value when you have an rvalue reference, for example a function `getName` could return either a `std::string const&` or `std::string&&` depending on the reference qualifier. Another use case might be operators and functions that return a reference to the original object such as `Foo& operator+=(Foo&)` which could be specialized to return an rvalue reference instead, making the result movable, which would again be an optimization. TL;DR: Use it to prevent incorrect usage of a function or for optimization.
1,493,140
I have a form that utilizes checkboxes. > > `<input type="checkbox" name="check[]" value="notsure"> Not Sure, Please help me determine <br /> > <input type="checkbox" name="check[]" value="keyboard"> Keyboard <br /> > <input type="checkbox" name="check[]" value="touchscreen"> Touch Screen Monitors <br /> > <input type="checkbox" name="check[]" value="scales">Scales <br /> > <input type="checkbox" name="check[]" value="wireless">Wireless Devices <br />` > > > And here is the code that processes this form in an external PHP file. ``` $addequip = implode(', ', $_POST['check']); ``` I keep getting this error below: ``` <b>Warning</b>: implode() [<a href='function.implode'>function.implode</a>]: Invalid arguments passed in <b>.../process.php</b> on line <b>53</b><br /> OK ```
2009/09/29
[ "https://Stackoverflow.com/questions/1493140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Are any of your checkboxes ticked? PHP's `$_POST` array will only contain checkboxes which have been ticked. To silence your warning, use this: ``` $addequip = implode(', ', empty($_POST['check']) ? array() : $_POST['check'] ); ```
the following site seems to be what you need: <http://www.ozzu.com/programming-forum/desperate-gettin-checkbox-values-post-php-t28756.html>
1,493,140
I have a form that utilizes checkboxes. > > `<input type="checkbox" name="check[]" value="notsure"> Not Sure, Please help me determine <br /> > <input type="checkbox" name="check[]" value="keyboard"> Keyboard <br /> > <input type="checkbox" name="check[]" value="touchscreen"> Touch Screen Monitors <br /> > <input type="checkbox" name="check[]" value="scales">Scales <br /> > <input type="checkbox" name="check[]" value="wireless">Wireless Devices <br />` > > > And here is the code that processes this form in an external PHP file. ``` $addequip = implode(', ', $_POST['check']); ``` I keep getting this error below: ``` <b>Warning</b>: implode() [<a href='function.implode'>function.implode</a>]: Invalid arguments passed in <b>.../process.php</b> on line <b>53</b><br /> OK ```
2009/09/29
[ "https://Stackoverflow.com/questions/1493140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Are any of your checkboxes ticked? PHP's `$_POST` array will only contain checkboxes which have been ticked. To silence your warning, use this: ``` $addequip = implode(', ', empty($_POST['check']) ? array() : $_POST['check'] ); ```
Hi, I'm the original user who posted this question; I couldn't log in to my account so I'm posting from another one. After a couple of hours of trying I somehow managed to make it work partially. Below is the modified form HTML and processing code for the checkboxes ``` <input type="checkbox" name="check" value="Touchscreen"> Touchscreen<br> <input type="checkbox" name="check" value="Keyboard"> Keyboard<br> <input type="checkbox" name="check" value="Scales"> Scales<br> ``` I had to remove the [] so it would work. Also below is the entire POST method for those who would like to see it. It works perfectly fine with every other field. ``` <form id="contact_form" action="process.php" method="POST" onSubmit="return processForm()"> ``` And below is the PHP code to process the checkboxes. For some reason I have to tell the script that $\_POST['check'] is an array; without that it would only return "Array". All other methods suggested return an "invalid arguments passed" error. ``` $chckbox = array($_POST['check']); if(is_array($chckbox)) { foreach($chckbox as $addequip) { $chckbox .="$addequip\n"; } } ``` So this code works, but it returns only one ticked checkbox value no matter how many you ticked.
31,140,137
I need to return objects of different classes in a single method using the keyword Object as the return type ``` public class ObjectFactory { public static Object assignObject(String type) { if(type.equalsIgnoreCase("abc")){ return new abcClass(); } else if(type.equalsIgnoreCase("def")) { return new defClass(); } else if(type.equalsIgnoreCase("ghi")) { return new ghiClass(); } return null; } } ``` and in another class I am trying to get the objects as ``` public class xyz{ public void get(){ Object obj=(abcClass)ObjectFactory.assignObject("abc"); } } ``` How can I access the methods in abcClass using the obj object??
2015/06/30
[ "https://Stackoverflow.com/questions/31140137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3665420/" ]
Your current code will throw an exception if `assignObject` returns an instance that is not an `abcClass`, so you can change the type of `obj` to `abcClass`: ``` public void get(){ abcClass obj=(abcClass)ObjectFactory.assignObject("abc"); } ```
You can use this as a reference: ``` public Object varyingReturnType(String testString ){ if(testString == null) return 1; else return testString ; } Object o1 = varyingReturnType("Lets Check String"); if( o1 instanceof String) //return true String now = (String) o1; Object o2 = varyingReturnType(null); if( o2 instanceof Integer) //return true int i = (Integer)o2; ``` So similarly you can use your own conditions along with the `instanceof` operator and cast it to get the actual object type from the `Object` type.
31,140,137
I need to return objects of different classes in a single method using the keyword Object as the return type ``` public class ObjectFactory { public static Object assignObject(String type) { if(type.equalsIgnoreCase("abc")){ return new abcClass(); } else if(type.equalsIgnoreCase("def")) { return new defClass(); } else if(type.equalsIgnoreCase("ghi")) { return new ghiClass(); } return null; } } ``` and in another class I am trying to get the objects as ``` public class xyz{ public void get(){ Object obj=(abcClass)ObjectFactory.assignObject("abc"); } } ``` How can I access the methods in abcClass using the obj object??
2015/06/30
[ "https://Stackoverflow.com/questions/31140137", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3665420/" ]
I would suggest, as one of the commentators on your initial post did, that you refactor this to use an interface. Your classes AbcClass, DefClass, and GhiClass could all implement an interface; let's call it Letters. You can then define a class called LettersFactory, with the method createLetters. At this point, I'd also recommend changing your hard-coded string identifiers into an enumeration. For instance: ``` public enum LetterTypes { ABC, DEF, GHI } ``` Your factory method can then accept this enumeration, and you have no fear of getting invalid values. The factory method can also return the type Letters (the interface), and you have a more specific version of Object (which is good). Finally, if you need to determine these types on the fly, you can have a method defined in Letters (forcing all children to implement it) called getType() which returns the LetterTypes enumeration for the class that is implemented. You could also use the instanceof operator to determine which class you have. Cheers, Frank
You can use this as a reference: ``` public Object varyingReturnType(String testString ){ if(testString == null) return 1; else return testString ; } Object o1 = varyingReturnType("Lets Check String"); if( o1 instanceof String) //return true String now = (String) o1; Object o2 = varyingReturnType(null); if( o2 instanceof Integer) //return true int i = (Integer)o2; ``` So similarly you can use your own conditions along with the `instanceof` operator and cast it to get the actual object type from the `Object` type.
29,334
I have a breadboard connected to an Arduino Uno, 220k resistors connected to an LED; of course it has a common ground and a pin connected to 5V. All pins are digital. Now, I want LED 1 to turn on for one second, the next LED for two seconds, and the next for three seconds. But that means that while the second LED (which has to be on for two seconds) is on, the first LED should come back on after its one second has passed; and while the third LED is on for three seconds, the second LED should also come back on when two seconds pass and the first when one second passes. So basically, all LEDs are on for their given durations. How do I do this? I have been trying to do this for three days. It would be a pleasure if someone helps me out. Thanks. ``` int ledPin1= 2; int ledPin2 = 4; int ledPin3 = 7; void setup(){ pinMode(ledPin1, OUTPUT); pinMode(ledPin2, OUTPUT); pinMode(ledPin3, OUTPUT); } void loop(){ digitalWrite(ledPin1, HIGH); delay(500); digitalWrite(ledPin1, LOW); delay(500); digitalWrite(ledPin2, HIGH); delay(1000); digitalWrite(ledPin2, LOW); delay(1000); digitalWrite(ledPin3, HIGH); delay(1500); digitalWrite(ledPin3, LOW); delay(1500); } ``` So far this is the code. But it delays the whole program. I need the LEDs on at their given times.
2016/09/17
[ "https://arduino.stackexchange.com/questions/29334", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/26676/" ]
You could start by drawing a time graph, like this: ``` LED 1 # # # # # # # # # # # # # # # # # # LED 2 ## ## ## ## ## ## ## ## ## LED 3 ### ### ### ### ### ### <-period 1-><-period 2-><-period 3-> ``` As you see on the graph, the whole sequence repeats itself every 12 seconds. Then, you can just write down all the pin toggles for 12 seconds and let the loop repeat itself: ``` void loop() { digitalWrite(ledPin1, HIGH); digitalWrite(ledPin3, LOW); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, HIGH); delay(1000); digitalWrite(ledPin1, HIGH); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, LOW); digitalWrite(ledPin3, HIGH); delay(1000); digitalWrite(ledPin1, HIGH); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, HIGH); delay(1000); digitalWrite(ledPin1, HIGH); digitalWrite(ledPin3, LOW); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, LOW); delay(1000); digitalWrite(ledPin1, HIGH); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, HIGH); digitalWrite(ledPin3, HIGH); delay(1000); digitalWrite(ledPin1, HIGH); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, LOW); delay(1000); } ``` This was the simple, albeit tedious solution. The smart solution is to go through the [Blink Without Delay](https://www.arduino.cc/en/Tutorial/BlinkWithoutDelay) Arduino tutorial. You will see that, once you start using `millis()` instead of `delay()` for managing your timings, blinking three LEDs instead of one is completely trivial.
EDIT: I just love Finite State Machines and templates for doing simple tasks like this: ``` class FSM { public: using Handler = void(*)(FSM &); FSM(Handler hnd) : m_func{hnd} {} void check(uint32_t ts = millis()) { if (m_next < ts) { m_func(*this); } } void delay(uint32_t _delay) { m_next += _delay; } void nextState(uint8_t state) { m_state = state; } uint8_t state() { return m_state; } protected: Handler m_func; uint8_t m_state = 0; uint32_t m_next = 0; }; template <uint8_t PIN, uint32_t HIGH_DELAY, uint32_t LOW_DELAY> void pin_fsm(FSM & fsm) { switch (fsm.state()) { case 0: pinMode(PIN, OUTPUT); // fall through case 1: digitalWrite(PIN, HIGH); fsm.delay(HIGH_DELAY); fsm.nextState(2); break; case 2: digitalWrite(PIN, LOW); fsm.delay(LOW_DELAY); fsm.nextState(1); break; } } FSM fsms[] = {{&pin_fsm<2,500,500>}, {&pin_fsm<4,1000,1000>}, {&pin_fsm<7,1500,1500>}}; void setup() { } void loop() { auto ts = millis(); for (FSM& fsm : fsms) fsm.check(ts); delay(1); } ``` --- The code should be working, so as mentioned before in that duplicate thread, `220k` is far bigger than it should be. So just use `220R` (`= 220 Ohms`) instead of `220k Ohms` (`= 220000 Ohms`). And if it's not working even with the correct resistor, your LED is connected in the reverse direction. Try the builtin LED on pin 13. That one must be working. Anyway, you should read something about [LEDs](http://www.societyofrobots.com/electronics_led_tutorial.shtml). [![enter image description here](https://i.stack.imgur.com/mpTdj.png)](https://i.stack.imgur.com/mpTdj.png)
29,334
I have a breadboard connected to an Arduino Uno, 220k resistors connected to an LED; of course it has a common ground and a pin connected to 5V. All pins are digital. Now, I want LED 1 to turn on for one second, the next LED for two seconds, and the next for three seconds. But that means that while the second LED (which has to be on for two seconds) is on, the first LED should come back on after its one second has passed; and while the third LED is on for three seconds, the second LED should also come back on when two seconds pass and the first when one second passes. So basically, all LEDs are on for their given durations. How do I do this? I have been trying to do this for three days. It would be a pleasure if someone helps me out. Thanks. ``` int ledPin1= 2; int ledPin2 = 4; int ledPin3 = 7; void setup(){ pinMode(ledPin1, OUTPUT); pinMode(ledPin2, OUTPUT); pinMode(ledPin3, OUTPUT); } void loop(){ digitalWrite(ledPin1, HIGH); delay(500); digitalWrite(ledPin1, LOW); delay(500); digitalWrite(ledPin2, HIGH); delay(1000); digitalWrite(ledPin2, LOW); delay(1000); digitalWrite(ledPin3, HIGH); delay(1500); digitalWrite(ledPin3, LOW); delay(1500); } ``` So far this is the code. But it delays the whole program. I need the LEDs on at their given times.
2016/09/17
[ "https://arduino.stackexchange.com/questions/29334", "https://arduino.stackexchange.com", "https://arduino.stackexchange.com/users/26676/" ]
You could start by drawing a time graph, like this: ``` LED 1 # # # # # # # # # # # # # # # # # # LED 2 ## ## ## ## ## ## ## ## ## LED 3 ### ### ### ### ### ### <-period 1-><-period 2-><-period 3-> ``` As you see on the graph, the whole sequence repeats itself every 12 seconds. Then, you can just write down all the pin toggles for 12 seconds and let the loop repeat itself: ``` void loop() { digitalWrite(ledPin1, HIGH); digitalWrite(ledPin3, LOW); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, HIGH); delay(1000); digitalWrite(ledPin1, HIGH); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, LOW); digitalWrite(ledPin3, HIGH); delay(1000); digitalWrite(ledPin1, HIGH); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, HIGH); delay(1000); digitalWrite(ledPin1, HIGH); digitalWrite(ledPin3, LOW); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, LOW); delay(1000); digitalWrite(ledPin1, HIGH); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, HIGH); digitalWrite(ledPin3, HIGH); delay(1000); digitalWrite(ledPin1, HIGH); delay(1000); digitalWrite(ledPin1, LOW); digitalWrite(ledPin2, LOW); delay(1000); } ``` This was the simple, albeit tedious solution. The smart solution is to go through the [Blink Without Delay](https://www.arduino.cc/en/Tutorial/BlinkWithoutDelay) Arduino tutorial. You will see that, once you start using `millis()` instead of `delay()` for managing your timings, blinking three LEDs instead of one is completely trivial.
How about this? Basically each loop is 1 sec, and each LED has its own counter which increases. If the counter is higher than the LED number/on-time, the LED is off for that loop and the counter is reset; otherwise the LED is on. ``` uint8_t LED[3] = { 2, 4, 7 }; uint8_t LED_COUNTER[3] = { 0, 0, 0 }; void setup() { for (int i = 0; i < 3; i++) { pinMode(LED[i], OUTPUT); } } void loop() { for (int i = 0; i < 3; i++) { if (LED_COUNTER[i] > i) { // counter exceeded this LED's on-time digitalWrite(LED[i], LOW); LED_COUNTER[i] = 0; } else { digitalWrite(LED[i], HIGH); LED_COUNTER[i] += 1; } } delay(1000); } ```
17,396,746
Ex. if I give `2013-7-1` to `2013-7-7`, there are 5 workdays (Mon-Fri), so the output should be 5. PS: in this problem holidays are ignored; only consider the weekends. Does anyone have an idea how to implement this in C++?
2013/07/01
[ "https://Stackoverflow.com/questions/17396746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2456268/" ]
I created a new config property: `connect.test.options.port`, and set that to 9001. Now they appear to be running properly on separate ports. Note also that the `Gruntfile.js` is overriding the `singleRun` property in `karma.conf.js`. Comment/cut that out if you want the config in `karma.conf.js` to work properly. EDIT 11/4/13: [The issue](https://github.com/yeoman/generator-angular/issues/375) was reported by others as well and seems to have been addressed with changes to generator-angular.
After I change the port, there is an XHR error: `The 'Access-Control-Allow-Origin' header has a value 'http://localhost:9000' that is not equal to the supplied origin. Origin 'http://localhost:9090' is therefore not allowed access.` It was 9000 initially and then I changed the Grunt config ``` connect: { main: { options: { port: 9090 } } } ```
17,396,746
Ex. if I give `2013-7-1` to `2013-7-7`, there are 5 workdays (Mon-Fri), so the output should be 5. PS: in this problem holidays are ignored; only consider the weekends. Does anyone have an idea how to implement this in C++?
2013/07/01
[ "https://Stackoverflow.com/questions/17396746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2456268/" ]
Add `port: 9001` to test, like this: ``` test: { options: { port: 9001, ... } } ```
After I change the port, there is an XHR error: `The 'Access-Control-Allow-Origin' header has a value 'http://localhost:9000' that is not equal to the supplied origin. Origin 'http://localhost:9090' is therefore not allowed access.` It was 9000 initially and then I changed the Grunt config ``` connect: { main: { options: { port: 9090 } } } ```
3,879,403
I was wondering why the integral $$ S = \int\_{-\infty}^{\infty} \frac{x}{1+x^2} \, \mathrm{d}x $$ does not converge. Since the function $$f(x) = \frac{x}{1+x^2}$$ is antisymmetric, I could calculate the integral as follows $$S \enspace = \enspace \int\_{-\infty}^{\infty} f(x) \, \mathrm{d}x \enspace = \enspace \int\_0^{\infty} f(x) \, \mathrm{d}x + \int\_{-\infty}^0 f(x) \, \mathrm{d}x$$ Now, I substitute $x \rightarrow (-y)$ in the 2nd integral and then use the antisymmetry of $f(x)$: $$S \enspace = \enspace \int\_0^{\infty} f(x) \, \mathrm{d}x + \int\_{\infty}^0 f(-y) \, \mathrm{d}(-y) \enspace = \enspace \int\_0^{\infty} f(x) \, \mathrm{d}x + \int\_{\infty}^0 f(y) \, \mathrm{d}(y) \enspace = \enspace \int\_0^{\infty} f(x) \, \mathrm{d}x - \int\_{0}^{\infty} f(y) \, \mathrm{d}(y) \enspace = \enspace 0$$ What is the problem with this kind of reasoning?
2020/10/24
[ "https://math.stackexchange.com/questions/3879403", "https://math.stackexchange.com", "https://math.stackexchange.com/users/679645/" ]
The problem is simply that, by definition, this improper integral converges if & only if the improper integrals $\int\_0^\infty\frac x{1+x^2}\,\mathrm dx$ and $\int\_{-\infty}^0\frac x{1+x^2}\,\mathrm dx$ both converge *independently*. Now $$\lim\_{A\to\infty}\int\_0^A\frac x{1+x^2}\,\mathrm dx= \lim\_{A\to\infty}\tfrac12\ln(1+A^2)=\infty,$$ and similarly for the other integral.
By definition $$\int\_{0}^{\infty} f(x) dx=\displaystyle\lim\_{n \to \infty}\int\_{0}^{n} f(x) dx$$ $$\int\_{-\infty}^{0} f(x) dx=\displaystyle\lim\_{m \to \infty}\int\_{-m}^{0} f(x) dx.$$ Thus $$\int\_{-\infty}^{\infty} f(x) dx=\int\_{-\infty}^{0} f(x) dx+\int\_{0}^{\infty} f(x) dx\\=\displaystyle\lim\_{n \to \infty}\int\_{0}^{n} f(x) dx+\displaystyle\lim\_{m \to \infty}\int\_{-m}^{0} f(x) dx$$ Taking $x=-y$ in the second integral and assuming $f$ to be odd, we get $$ \int\_{-\infty}^{\infty} f(x) dx=\displaystyle\lim\_{n \to \infty}\int\_{0}^{n} f(x) dx-\displaystyle\lim\_{m \to \infty}\int\_{0}^{m} f(y) dy.$$ Since we cannot claim that $n=m$, $$ \int\_{-\infty}^{\infty} f(x) dx \neq 0$$ in general (unless, obviously, $f(x)=0 \ \forall x$). But the Cauchy principal value of $ \int\_{-\infty}^{\infty} f(x) dx$ is defined to be $\displaystyle\lim\_{n \to \infty}\int\_{-n}^{n} f(x) dx$. Thus in this case you can say the principal value of the integral is $0$.
3,879,403
I was wondering why the integral $$ S = \int\_{-\infty}^{\infty} \frac{x}{1+x^2} \, \mathrm{d}x $$ does not converge. Since the function $$f(x) = \frac{x}{1+x^2}$$ is antisymmetric, I could calculate the integral as follows $$S \enspace = \enspace \int\_{-\infty}^{\infty} f(x) \, \mathrm{d}x \enspace = \enspace \int\_0^{\infty} f(x) \, \mathrm{d}x + \int\_{-\infty}^0 f(x) \, \mathrm{d}x$$ Now, I substitute $x \rightarrow (-y)$ in the 2nd integral and then use the antisymmetry of $f(x)$: $$S \enspace = \enspace \int\_0^{\infty} f(x) \, \mathrm{d}x + \int\_{\infty}^0 f(-y) \, \mathrm{d}(-y) \enspace = \enspace \int\_0^{\infty} f(x) \, \mathrm{d}x + \int\_{\infty}^0 f(y) \, \mathrm{d}(y) \enspace = \enspace \int\_0^{\infty} f(x) \, \mathrm{d}x - \int\_{0}^{\infty} f(y) \, \mathrm{d}(y) \enspace = \enspace 0$$ What is the problem with this kind of reasoning?
2020/10/24
[ "https://math.stackexchange.com/questions/3879403", "https://math.stackexchange.com", "https://math.stackexchange.com/users/679645/" ]
The problem is simply that, by definition, this improper integral converges if & only if the improper integrals $\int\_0^\infty\frac x{1+x^2}\,\mathrm dx$ and $\int\_{-\infty}^0\frac x{1+x^2}\,\mathrm dx$ both converge *independently*. Now $$\lim\_{A\to\infty}\int\_0^A\frac x{1+x^2}\,\mathrm dx= \lim\_{A\to\infty}\tfrac12\ln(1+A^2)=\infty,$$ and similarly for the other integral.
If we have an integral: $$I(n)=\int\_1^\infty\frac{1}{x^n}dx$$ this will only converge for $n>1$; notice how this does not include $n=1$. If we look at the function you are integrating: $$\frac x{x^2+1}=\frac 1{x+\frac1x}\approx\frac 1x$$ and so your integral will not converge. If we look at the entire domain of your integral, however, as you mentioned it is antisymmetric; but since parts of this domain are divergent we consider integrals like this in terms of their [Cauchy principal value](https://en.wikipedia.org/wiki/Cauchy_principal_value). In other terms it is best to write your integral as: $$I(A)=\int\_{-A}^0\frac x{x^2+1}dx+\int\_0^A\frac x{x^2+1}dx$$ and then take the limit as $A\to\infty$ to show the value this integral converges towards. Basically, for an integral to be considered truly convergent, you should be able to split up the domain of the integral and have each part also be convergent. Hope this helps :)
30,896,185
I have a csv with lines like this: ``` Last,First,A00XXXXXX,1492-01-10,2015-06-17,,Sentence Skills 104,,Elementary Algebra 38, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 101,College Level Math 56 Last,First,A00XXXXXX,1492-01-10,2015-06-17,Reading Comprehension 102,,,, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 118,College Level Math 97 ``` I want to remove the word "Reading Comprehension" but leave the number, but only if it's in column 6; if it's in any other column, leave it alone. Once again the variable is stumping me: I know how to remove it from the specific column if I know the number, but not how to remove the word and leave the number when I don't know the number. ``` awk -v old="Reading Comprehension 102" -v new="" -v col=6 '$col==old{$col=new} 1' FS="," OFS="," mergedfile.csv > testmerg.csv ``` Thank you for the help,
2015/06/17
[ "https://Stackoverflow.com/questions/30896185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3582089/" ]
Re-using your awk variable definitions: ``` $ awk -v old="Reading Comprehension " -v new="" -v col=6 'BEGIN{FS=OFS=","} {sub(old,new,$col)} 1' file Last,First,A00XXXXXX,1492-01-10,2015-06-17,,Sentence Skills 104,,Elementary Algebra 38, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 101,College Level Math 56 Last,First,A00XXXXXX,1492-01-10,2015-06-17,102,,,, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 118,College Level Math 97 ``` Get the book Effective Awk Programming, 4th Edition, by Arnold Robbins.
You can use this awk: ``` awk 'BEGIN{FS=OFS=","} {sub(/Reading Comprehension */, "", $6)} 1' file.csv Last,First,A00XXXXXX,1492-01-10,2015-06-17,,Sentence Skills 104,,Elementary Algebra 38, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 101,College Level Math 56 Last,First,A00XXXXXX,1492-01-10,2015-06-17,102,,,, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 118,College Level Math 97 ```
30,896,185
I have a csv with lines like this: ``` Last,First,A00XXXXXX,1492-01-10,2015-06-17,,Sentence Skills 104,,Elementary Algebra 38, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 101,College Level Math 56 Last,First,A00XXXXXX,1492-01-10,2015-06-17,Reading Comprehension 102,,,, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 118,College Level Math 97 ``` I want to remove the word "Reading Comprehension" but leave the number, but only if it's in column 6; if it's in any other column, leave it alone. Once again the variable is stumping me: I know how to remove it from the specific column if I know the number, but not how to remove the word and leave the number when I don't know the number. ``` awk -v old="Reading Comprehension 102" -v new="" -v col=6 '$col==old{$col=new} 1' FS="," OFS="," mergedfile.csv > testmerg.csv ``` Thank you for the help,
2015/06/17
[ "https://Stackoverflow.com/questions/30896185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3582089/" ]
Re-using your awk variable definitions: ``` $ awk -v old="Reading Comprehension " -v new="" -v col=6 'BEGIN{FS=OFS=","} {sub(old,new,$col)} 1' file Last,First,A00XXXXXX,1492-01-10,2015-06-17,,Sentence Skills 104,,Elementary Algebra 38, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 101,College Level Math 56 Last,First,A00XXXXXX,1492-01-10,2015-06-17,102,,,, Last,First,A00XXXXXX,1492-01-10,2015-06-17,,,,Elementary Algebra 118,College Level Math 97 ``` Get the book Effective Awk Programming, 4th Edition, by Arnold Robbins.
Let's give sed a chance (although it's not its domain): ``` echo "Last,First,A00XXXXXX,1492-01-10,2015-06-17,Reading Comprehension 102,,,," | sed -r 's/(([^,]*,){5})Reading Comprehension /\1/' ``` Last,First,A00XXXXXX,1492-01-10,2015-06-17,102,,,, or, following *Ed Morton's* suggestion, using variables ``` old="Reading Comprehension" new="" col=6 sed -r 's/(([^,]*,){'"$((col-1))"'})'"$old"' /\1'"$new"'/' ```
12,827,985
From my understanding of Ruby on Rails and ActiveRecord, I am able to use the ActiveRecord model itself instead of its ID when a parameter is looking for an ID. For example, if I have a `Foo` model that `belongs_to` a `Bar` model, then I could write `bar = Bar.new(foo_id: foo)` instead of `bar = Bar.new(foo_id: foo.id)`. However, in the model I am making now (for a Go game application), this does not seem to be the case. Here is the relevant code from the model: ``` class User < ActiveRecord::Base . . . has_many :participations, dependent: :destroy has_many :games, through: :participations end class Game < ActiveRecord::Base attr_accessible :height, :width has_many :participations, dependent: :destroy has_many :users, through: :participations def black_player self.users.joins(:participations).where("participations.color = ?", false).first end def white_player self.users.joins(:participations).where("participations.color = ?", true).first end end class Participation < ActiveRecord::Base attr_accessible :game_id, :user_id, :color belongs_to :game belongs_to :user validates_uniqueness_of :color, scope: :game_id validates_uniqueness_of :user_id, scope: :game_id end ``` (color is a boolean where false=black, true=white) If I have created two `Users`, `black_player` (id=1) and `white_player` (id=2), and a `Game` `game`, I can do this: ``` game.participations.create(user_id: black_player, color: false) ``` And `game.participations` and `black_player.participations` both show this new `Participation`: ``` => #<Participation id: 1, game_id: 1, user_id: 1, color: false, created_at: "2012-10-10 20:07:23", updated_at: "2012-10-10 20:07:23"> ``` However, if I then try: ``` game.participations.create(user_id: white_player, color: true) ``` then the new `Participation` has a `user_id` of 1 (black\_player's id).
As I validate against duplicate players in the same game, this is not a valid `Participation` and is not added to the database: ``` => #<Participation id: nil, game_id: 1, user_id: 1, color: true, created_at: nil, updated_at: nil> ``` However, if I do: ``` game.participations.create(user_id: white_player.id, color: true) ``` Then it does work: ``` => #<Participation id: 2, game_id: 1, user_id: 2, color: true, created_at: "2012-10-10 20:34:03", updated_at: "2012-10-10 20:34:03"> ``` What is the cause of this behavior?
2012/10/10
[ "https://Stackoverflow.com/questions/12827985", "https://Stackoverflow.com", "https://Stackoverflow.com/users/787691/" ]
It seems that any parent object passed directly to a :belongs\_to child object gets converted into a boolean and thus results in the id being 1. One solution would be to initialize the object first, and then set the parent objects directly before saving it: ``` @participation = Participation.new @participation.game = @game @participation.user = @user @participation.color = false @participation.save ``` The other would be to pass IDs instead of objects, just the way you are doing it now. > > EDIT: Below is not the case, leaving for info: > > > Try using `user` instead of `user_id`: ``` game.participations.create(user: white_player, color: true) ``` I think what might be happening is that it is trying to come up with an integer from your user object, because you are explicitly specifying the column name instead of a relation. `"#<User:0x107f8a2f2584e0>"` then becomes 1 or whatever first valid digits it finds in the string; you can try it with `User.find(2).to_s.to_i`, which should return 1 in your case.
ActiveRecord will handle the association for you. However, you have 2 problems that I can see from the code above. First, you are trying to assign the object 'user' to the attribute 'user\_id'. Second, that 'user' is not available for mass\_assignment on instances of the Participation class. Assigning attributes to an object by passing a hash into the call to create is known as "mass\_assignment" and can be used by hackers to assign attributes that you might not want them to. For that reason, rails provides the "attr\_accessible" method. You must explicitly declare the attributes that your users are allowed to set during 'mass\_assignment' by passing them as symbols into the 'attr\_accessible' method in your class definition. ``` attr_accessible :game, :user, :color ``` Now you can ``` game.participations.create(:user => my_player_object) ``` note that if you still want to assign the "Id" in some situations, then you have to pass that into attr\_accessible as well ``` attr_accessible :game, :user, :color, :game_id, :user_id ```
15,041,788
I have a custom view that extends ImageView and I use it in an XML layout like this: ``` <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:layout_gravity="center" android:orientation="vertical" > <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:layout_marginTop="10dp"> <com.android.example.MyView android:id="@+id/myview1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> <com.android.example.MyView android:id="@+id/myview2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> </LinearLayout> </LinearLayout> ``` In my activity I do the usual: `setContentView(R.layout.myLayout)`. Now, I need to get a reference to my class/View "MyView" in order to set a custom listener, but I'm not able to get it from the id. ``` myview = (MyView) findViewById(R.id.myview1); ``` returns null. I tried to look at similar issues but haven't found any that helped me. Please note that if I add the View to the layout programmatically from the Activity everything works fine, but I would like to be able to find what the issue is here. Thanks in advance.
2013/02/23
[ "https://Stackoverflow.com/questions/15041788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476846/" ]
Have a look at the `printf` manpage: > > > ``` > n The number of characters written so far is stored into the integer indicated by the int > * (or variant) pointer argument. No argument is converted. > > ``` > > So, you'll have to pass a *pointer to* an int. Also, as Xavier Holt pointed out, you'll have to use a valid buffer to read into. Try this: ``` #include <stdio.h> int main(void) { int n; char x[1000]; fgets(x, 1000, stdin); printf("\n%s%n\n",x,&n); printf("n: %d\n",n); return 0; } ``` This code works for me.
You need to pass a pointer to `n`.
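A minimal sketch illustrating the point (the helper name below is my own, not from the answers): `%n` consumes an `int *` argument and stores the number of characters formatted so far into it, so you must pass an address such as `&n`, never the plain `int`.

```c
#include <stdio.h>

/* %n writes the running character count into the int pointed to by
 * its argument -- the argument must therefore be an int * (e.g. &n). */
int chars_before_percent_n(const char *s)
{
    char buf[128];
    int n = 0;
    /* after formatting s, %n stores into n how many chars were produced */
    snprintf(buf, sizeof buf, "%s%n", s, &n);
    return n;
}
```

For example, `chars_before_percent_n("hello")` yields 5, the length of the text formatted before the `%n`.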
15,041,788
I have a custom view that extends ImageView and I use it in an XML layout like this: ``` <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:layout_gravity="center" android:orientation="vertical" > <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:layout_marginTop="10dp"> <com.android.example.MyView android:id="@+id/myview1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> <com.android.example.MyView android:id="@+id/myview2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> </LinearLayout> </LinearLayout> ``` In my activity I do the usual: `setContentView(R.layout.myLayout)`. Now, I need to get a reference to my class/View "MyView" in order to set a custom listener, but I'm not able to get it from the id. ``` myview = (MyView) findViewById(R.id.myview1); ``` returns null. I tried to look at similar issues but haven't found any that helped me. Please note that if I add the View to the layout programmatically from the Activity everything works fine, but I would like to be able to find what the issue is here. Thanks in advance.
2013/02/23
[ "https://Stackoverflow.com/questions/15041788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476846/" ]
You need to pass a pointer to `n`.
The argument to `%n` needs to be a pointer to a signed int, not a signed int.
15,041,788
I have a custom view that extends ImageView and I use it in an XML layout like this: ``` <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:layout_gravity="center" android:orientation="vertical" > <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:layout_marginTop="10dp"> <com.android.example.MyView android:id="@+id/myview1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> <com.android.example.MyView android:id="@+id/myview2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> </LinearLayout> </LinearLayout> ``` In my activity I do the usual: `setContentView(R.layout.myLayout)`. Now, I need to get a reference to my class/View "MyView" in order to set a custom listener, but I'm not able to get it from the id. ``` myview = (MyView) findViewById(R.id.myview1); ``` returns null. I tried to look at similar issues but haven't found any that helped me. Please note that if I add the View to the layout programmatically from the Activity everything works fine, but I would like to be able to find what the issue is here. Thanks in advance.
2013/02/23
[ "https://Stackoverflow.com/questions/15041788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476846/" ]
You need to pass a pointer to `n`.
That's not how to use the "%n" specifier. See [the C99 Standard](http://port70.net/~nsz/c/c99/n1256.html#7.19.6.1). > > n > > The argument shall be a pointer to signed integer into which is written the number of characters written to the output stream so far by this call to fprintf. No argument is converted, but one is consumed. If the conversion specification includes any flags, a field width, or a precision, the behavior is undefined. > > > Also you need some place to store the input (hint: initialize `x`)
15,041,788
I have a custom view that extends ImageView and I use it in an XML layout like this: ``` <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:layout_gravity="center" android:orientation="vertical" > <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:layout_marginTop="10dp"> <com.android.example.MyView android:id="@+id/myview1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> <com.android.example.MyView android:id="@+id/myview2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> </LinearLayout> </LinearLayout> ``` In my activity I do the usual: `setContentView(R.layout.myLayout)`. Now, I need to get a reference to my class/View "MyView" in order to set a custom listener, but I'm not able to get it from the id. ``` myview = (MyView) findViewById(R.id.myview1); ``` returns null. I tried to look at similar issues but haven't found any that helped me. Please note that if I add the View to the layout programmatically from the Activity everything works fine, but I would like to be able to find what the issue is here. Thanks in advance.
2013/02/23
[ "https://Stackoverflow.com/questions/15041788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476846/" ]
Have a look at the `printf` manpage: > > > ``` > n The number of characters written so far is stored into the integer indicated by the int > * (or variant) pointer argument. No argument is converted. > > ``` > > So, you'll have to pass a *pointer to* an int. Also, as Xavier Holt pointed out, you'll have to use a valid buffer to read into. Try this: ``` #include <stdio.h> int main(void) { int n; char x[1000]; fgets(x, 1000, stdin); printf("\n%s%n\n",x,&n); printf("n: %d\n",n); return 0; } ``` This code works for me.
The argument to `%n` needs to be a pointer to a signed int, not a signed int.
15,041,788
I have a custom view that extends ImageView and I use it in an XML layout like this: ``` <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:layout_gravity="center" android:orientation="vertical" > <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center_horizontal" android:layout_marginTop="10dp"> <com.android.example.MyView android:id="@+id/myview1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> <com.android.example.MyView android:id="@+id/myview2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_margin="5dp" /> </LinearLayout> </LinearLayout> ``` In my activity I do the usual: `setContentView(R.layout.myLayout)`. Now, I need to get a reference to my class/View "MyView" in order to set a custom listener, but I'm not able to get it from the id. ``` myview = (MyView) findViewById(R.id.myview1); ``` returns null. I tried to look at similar issues but haven't found any that helped me. Please note that if I add the View to the layout programmatically from the Activity everything works fine, but I would like to be able to find what the issue is here. Thanks in advance.
2013/02/23
[ "https://Stackoverflow.com/questions/15041788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476846/" ]
Have a look at the `printf` manpage: > > > ``` > n The number of characters written so far is stored into the integer indicated by the int > * (or variant) pointer argument. No argument is converted. > > ``` > > So, you'll have to pass a *pointer to* an int. Also, as Xavier Holt pointed out, you'll have to use a valid buffer to read into. Try this: ``` #include <stdio.h> int main(void) { int n; char x[1000]; fgets(x, 1000, stdin); printf("\n%s%n\n",x,&n); printf("n: %d\n",n); return 0; } ``` This code works for me.
That's not how to use the "%n" specifier. See [the C99 Standard](http://port70.net/~nsz/c/c99/n1256.html#7.19.6.1). > > n > > The argument shall be a pointer to signed integer into which is written the number of characters written to the output stream so far by this call to fprintf. No argument is converted, but one is consumed. If the conversion specification includes any flags, a field width, or a precision, the behavior is undefined. > > > Also you need some place to store the input (hint: initialize `x`)
2,322,126
$f$ is a twice-differentiable function on the domain $\Bbb R$. \begin{align} \lim\_{h \to 0}{f(x+h)-f(x)-f'(x)h \over h^2} &= \lim\_{h \to 0}\left[{f(x+h)-f(x) \over h^2} -{f'(x)h \over h^2}\right] \\ &= \lim\_{h \to 0}{f(x+h)-f(x) \over h}{1\over h} -\lim\_{h \to 0}{f'(x) \over h} \\ &= \lim\_{h \to 0}{f'(x)\over h} -\lim\_{h \to 0}{f'(x) \over h} \\ &= \lim\_{h \to 0}\left[{f'(x)\over h}-{f'(x)\over h}\right] \\ & = 0 \end{align} Is the above reasoning correct? I am especially concerned about my repeated distribution of limits back and forth.
2017/06/14
[ "https://math.stackexchange.com/questions/2322126", "https://math.stackexchange.com", "https://math.stackexchange.com/users/442594/" ]
You made an error at this step: $\lim\limits\_{h \to 0}{f(x+h)-f(x) \over h}{1\over h}=\lim\limits\_{h \to 0}{f'(x) \over h} $, because by doing so you assume that $\lim\limits\_{h \to 0}{f(x+h)-f(x) \over h}$ and $\lim\limits\_{h \to 0}{1 \over h}$ both exist, which is clearly false for the latter. Now since $f(x)$ is twice differentiable, $f$ and $f'$ are continuous. Substituting $h=0$ into $\lim\limits\_{h \to 0}{f(x+h)-f(x)-f'(x)h \over h^2}$ gives us ${0 \over 0}$. (Note that we can directly let $h=0$ due to continuity!) Now applying L'Hopital's rule by taking derivatives of the numerator and denominator **with respect to $h$** yields: $\lim\limits\_{h \to 0}{f'(x+h)-f'(x) \over 2h}$. Then it is clear that this expression is equivalent to ${1 \over 2} f''(x)$.
By the Taylor formula $$f(x+h) = f(x) + f'(x)h + \frac{1}{2}f''(x) h^2 + o(h^2)$$ Now substitute and get $$\lim\_{h \to 0} \frac{1}{h^2} (f(x+h) -f(x) - f'(x)h) = \lim\_{h \to 0} \frac{1}{h^2} (\frac{1}{2}f''(x) h^2 + o(h^2)) = \frac{1}{2}f''(x)$$
2,322,126
$f$ is a twice-differentiable function on the domain $\Bbb R$. \begin{align} \lim\_{h \to 0}{f(x+h)-f(x)-f'(x)h \over h^2} &= \lim\_{h \to 0}\left[{f(x+h)-f(x) \over h^2} -{f'(x)h \over h^2}\right] \\ &= \lim\_{h \to 0}{f(x+h)-f(x) \over h}{1\over h} -\lim\_{h \to 0}{f'(x) \over h} \\ &= \lim\_{h \to 0}{f'(x)\over h} -\lim\_{h \to 0}{f'(x) \over h} \\ &= \lim\_{h \to 0}\left[{f'(x)\over h}-{f'(x)\over h}\right] \\ & = 0 \end{align} Is the above reasoning correct? I am especially concerned about my repeated distribution of limits back and forth.
2017/06/14
[ "https://math.stackexchange.com/questions/2322126", "https://math.stackexchange.com", "https://math.stackexchange.com/users/442594/" ]
You made an error at this step: $\lim\limits\_{h \to 0}{f(x+h)-f(x) \over h}{1\over h}=\lim\limits\_{h \to 0}{f'(x) \over h} $, because by doing so you assume that $\lim\limits\_{h \to 0}{f(x+h)-f(x) \over h}$ and $\lim\limits\_{h \to 0}{1 \over h}$ both exist, which is clearly false for the latter. Now since $f(x)$ is twice differentiable, $f$ and $f'$ are continuous. Substituting $h=0$ into $\lim\limits\_{h \to 0}{f(x+h)-f(x)-f'(x)h \over h^2}$ gives us ${0 \over 0}$. (Note that we can directly let $h=0$ due to continuity!) Now applying L'Hopital's rule by taking derivatives of the numerator and denominator **with respect to $h$** yields: $\lim\limits\_{h \to 0}{f'(x+h)-f'(x) \over 2h}$. Then it is clear that this expression is equivalent to ${1 \over 2} f''(x)$.
The first and main error is in $$ \lim\_{h \to 0}\left[{f(x+h)-f(x) \over h^2} -{f'(x)h \over h^2}\right] = \lim\_{h \to 0}{f(x+h)-f(x) \over h}{1\over h} -\lim\_{h \to 0}{f'(x) \over h} $$ because neither limit in the right-hand side exists (unless you're in a very very special situation, which is not assumed in the hypotheses). The second mistake is in $$ \lim\_{h \to 0}{f(x+h)-f(x) \over h}{1\over h}= \lim\_{h \to 0}{f'(x)\over h} $$ When computing a limit, you cannot just take one part and substitute it with its limit; this is essentially the same as doing $$ \lim\_{h\to0}\frac{h}{h}=\lim\_{h\to0}\frac{0}{h} $$
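For completeness (an addition, not part of the original answer): the correct value of the limit can be sketched with one application of L'Hopital's rule in $h$, followed by the definition of $f''(x)$:

```latex
\lim_{h \to 0}\frac{f(x+h)-f(x)-f'(x)h}{h^{2}}
\;=\; \lim_{h \to 0}\frac{f'(x+h)-f'(x)}{2h}
\;=\; \tfrac{1}{2}\,f''(x)
```

The first equality is L'Hopital applied to the $0/0$ form; the second is exactly the difference quotient defining $f''(x)$, so no continuity of $f''$ is needed.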
10,394
As I'm starting my career in analytics, I have to choose between SAS in-memory analytics and the open-source, widely adopted R. Which one should I choose now so that I will have good market value in the short term as well as the long term?
2016/02/25
[ "https://datascience.stackexchange.com/questions/10394", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/6558/" ]
Go for R. SAS is a monster and is not fun. Having said that, your market value will be determined more by what you can and want to do and by your general level of education than by this or that technology.
This all depends. SAS is great and is backed by the SAS Institute, meaning if you're working for an organization that has invested in SAS, you can contact the support team for anything funky happening with the software. R is free and open source, and there are also organizations being created that are building on to and supporting R much like SAS Institute has done. The difference being that SAS has been around much longer and is much more structured than R. They both have their ups and downs, and it depends on what type of analytics you plan on doing in your career. So, to answer your questions, I would decide what you envision yourself doing. To my understanding R is good for Big Data applications and Machine Learning, SAS is great for statistical analysis (ARIMA, etc). SAS and R comparisons: <http://support.sas.com/resources/papers/proceedings13/348-2013.pdf> <http://www.learnanalytics.in/blog/?p=9>
10,394
As I'm starting my career in analytics, I have to choose between SAS in-memory analytics and the open-source, widely adopted R. Which one should I choose now so that I will have good market value in the short term as well as the long term?
2016/02/25
[ "https://datascience.stackexchange.com/questions/10394", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/6558/" ]
You'll need both or even more for your "career in analytics". Start with SAS: SAS Studio is intuitive and easy. Once you are comfortable with the "stats" and looking for the next level, R comes into place. SAS also has free courses for an intro to stats and their platform. [SAS free courses](http://support.sas.com/training/tutorial/index.html) For the long term, believe me, it is not about the tools/platform. You can check the free book listed below and the most important modeling and prediction techniques covered in it. Theoretical and practical understanding of many important methods is essential, and after that you will be able to compute in any language. [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/)
This all depends. SAS is great and is backed by the SAS Institute, meaning if you're working for an organization that has invested in SAS, you can contact the support team for anything funky happening with the software. R is free and open source, and there are also organizations being created that are building on to and supporting R much like SAS Institute has done. The difference being that SAS has been around much longer and is much more structured than R. They both have their ups and downs, and it depends on what type of analytics you plan on doing in your career. So, to answer your questions, I would decide what you envision yourself doing. To my understanding R is good for Big Data applications and Machine Learning, SAS is great for statistical analysis (ARIMA, etc). SAS and R comparisons: <http://support.sas.com/resources/papers/proceedings13/348-2013.pdf> <http://www.learnanalytics.in/blog/?p=9>
10,394
As I'm starting my career in analytics, I have to choose between SAS in-memory analytics and the open-source, widely adopted R. Which one should I choose now so that I will have good market value in the short term as well as the long term?
2016/02/25
[ "https://datascience.stackexchange.com/questions/10394", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/6558/" ]
Go for R. SAS is a monster and is not fun. Having said that, your market value will be determined more by what you can and want to do and by your general level of education than by this or that technology.
In my experience, there are a few things to consider here: * What field you're going into * What kinds of technology you believe you'll be working with * What kinds of teams you believe you'll be working with **Field** This is a huge determiner. To my understanding, SAS is standard in finance, banking, some biostats, and other industries. R, on the other hand, is open source, free, and is receiving a lot of attention currently from Microsoft after their acquisition of Revolution Analytics. **Technology** Are you thinking you'll be at a large corporation or a small shop? Do you think you'll need to work with creating production ready algorithms or simply producing insights and analysis? The reason this matters is that open source technology can often times be a bit easier to convert to production ready use. The cost of software can also be a barrier at smaller companies and may dictate what you are capable of using. **Collaboration with Other Teams** If you sit firmly on the business side, it's possible that everyone you work with may use SAS or may be comfortable with the outputs. In this case, you may not need to collaborate across any additional technology. If you work with technology, however, it may be difficult to integrate SAS or proprietary solutions into other workflows. An example would be creating a real-time user scoring system. If you were able to program this into R or Python, you may be able to pass the code directly to a developer for implementation. This would be more difficult if a proprietary solution was involved. **Other Considerations** The analytics space is evolving rapidly. On top of the above, Python is coming out as a very popular technology for use in machine learning and data mining and pairs well with Spark as well as some other large data technology. A specific example of a library here would be scikit-learn.
**Closing Thoughts/My Experience** I'm an R/Python user by experience, though I have some experience with SAS in school as well as in work. Generally, I've found the level of support with R and Python to be great - especially since they've started to rise in popularity. SAS has its uses and has definitely carved out piece of the industry for itself. All in all, however - getting a solid base in statistical theory, programming (scripting), and understanding the application and value that analysis can provide will go a long way. The syntax between these tools is, generally, not worlds apart. SAS can be a bit strange compared to R and Python, but all of these tools generally have syntax which is fairly readable and is not difficult to adapt to another tool.
10,394
As I'm starting my career in analytics, I have to choose between SAS in-memory analytics and the open-source, widely adopted R. Which one should I choose now so that I will have good market value in the short term as well as the long term?
2016/02/25
[ "https://datascience.stackexchange.com/questions/10394", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/6558/" ]
You'll need both or even more for your "career in analytics". Start with SAS: SAS Studio is intuitive and easy. Once you are comfortable with the "stats" and looking for the next level, R comes into place. SAS also has free courses for an intro to stats and their platform. [SAS free courses](http://support.sas.com/training/tutorial/index.html) For the long term, believe me, it is not about the tools/platform. You can check the free book listed below and the most important modeling and prediction techniques covered in it. Theoretical and practical understanding of many important methods is essential, and after that you will be able to compute in any language. [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/)
In my experience, there are a few things to consider here: * What field you're going into * What kinds of technology you believe you'll be working with * What kinds of teams you believe you'll be working with **Field** This is a huge determiner. To my understanding, SAS is standard in finance, banking, some biostats, and other industries. R, on the other hand, is open source, free, and is receiving a lot of attention currently from Microsoft after their acquisition of Revolution Analytics. **Technology** Are you thinking you'll be at a large corporation or a small shop? Do you think you'll need to work with creating production ready algorithms or simply producing insights and analysis? The reason this matters is that open source technology can often times be a bit easier to convert to production ready use. The cost of software can also be a barrier at smaller companies and may dictate what you are capable of using. **Collaboration with Other Teams** If you sit firmly on the business side, it's possible that everyone you work with may use SAS or may be comfortable with the outputs. In this case, you may not need to collaborate across any additional technology. If you work with technology, however, it may be difficult to integrate SAS or proprietary solutions into other workflows. An example would be creating a real-time user scoring system. If you were able to program this into R or Python, you may be able to pass the code directly to a developer for implementation. This would be more difficult if a proprietary solution was involved. **Other Considerations** The analytics space is evolving rapidly. On top of the above, Python is coming out as a very popular technology for use in machine learning and data mining and pairs well with Spark as well as some other large data technology. A specific example of a library here would be scikit-learn.
**Closing Thoughts/My Experience** I'm an R/Python user by experience, though I have some experience with SAS in school as well as in work. Generally, I've found the level of support with R and Python to be great - especially since they've started to rise in popularity. SAS has its uses and has definitely carved out piece of the industry for itself. All in all, however - getting a solid base in statistical theory, programming (scripting), and understanding the application and value that analysis can provide will go a long way. The syntax between these tools is, generally, not worlds apart. SAS can be a bit strange compared to R and Python, but all of these tools generally have syntax which is fairly readable and is not difficult to adapt to another tool.
10,394
As I'm starting my career in analytics, I have to choose between SAS in-memory analytics and the open-source, widely adopted R. Which one should I choose now so that I will have good market value in the short term as well as the long term?
2016/02/25
[ "https://datascience.stackexchange.com/questions/10394", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/6558/" ]
Go for R. SAS is a monster and is not fun. Having said that, your market value will be determined more by what you can and want to do and by your general level of education than by this or that technology.
@ImperativelyAblative has many good points. Adding to that, many companies, especially banks, that are rooted in SAS have started to experiment with R. This is because of the cost savings and maturing enterprise support (think of RStudio Server and Cloudera). SAS is great, but it comes with a premium price tag (some may argue it's even overpriced). I am a management consultant working at a top tier firm with a specialty in data science. We serve most of the global companies and work with their executives and operating teams. Moving to open source tools, such as R, has gained the blessing of many top executives and is building traction among core analytics teams (Marketing, Modelling, etc.). The point being: the market is moving towards open source tools. The other consideration, in my opinion, is the role you aspire to play in an organization. Here are some archetypes based on my experience in working with analytics teams: * **General Business Analyst**: R, interactive visualization tools (Tableau / QlikView), some SQL, and Excel / Powerpoint (of course) * **Modeller**: SAS is a must, R (my clients are all trying to pick this up), deep SQL / HQL (querying on the Hadoop stack), and strong domain knowledge (risk models, pricing, operation optimization, etc.) * **Application Developer** (people who put things into production): SAS is a must, Python, and automation languages Of course, this is not an exhaustive list, but I hope it provides some industry trends for your reference. There is always a superstar in any organization who knows all this stuff; it could be you :)
10,394
As I'm starting my career in analytics, I have to choose between SAS in-memory analytics and the open-source, widely adopted R. Which one should I choose now so that I will have good market value in the short term as well as the long term?
2016/02/25
[ "https://datascience.stackexchange.com/questions/10394", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/6558/" ]
You'll need both, or even more, for your "career in analytics". Start with SAS; SAS Studio is intuitive and easy. Once you are comfortable with the "stats" and looking for the next level, then R comes into play. SAS also has free courses for an intro to statistics and their platform. [SAS free courses](http://support.sas.com/training/tutorial/index.html) For the long term, believe me, it is not about the tools/platform. You can check the free book listed below and, most importantly, the modeling and prediction techniques covered in the book. A theoretical and practical understanding of many important methods is essential, and after that you will be able to implement them in any language. [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/)
@ImperativelyAblative has many good points. Adding to that, many companies, especially banks, that are rooted in SAS have started to experiment with R. This is because of the cost savings and maturing enterprise support; think of RStudio Server and Cloudera. SAS is great, but it comes with a premium price tag (some may argue it's even overpriced). I am a management consultant working at a top-tier firm with a specialty in data science. We serve most of the global companies and work with their executives and operating teams. Moving to open-source tools, such as R, has gained the blessing of many top executives and is building traction among core analytics teams (Marketing, Modelling, etc.). The point being: the market is moving towards open-source tools. The other consideration, in my opinion, is the role you aspire to play in an organization. Here are some archetypes based on my experience working with analytics teams: * **General Business Analyst**: R, interactive visualization tools (Tableau / QlikView), some SQL, and Excel / PowerPoint (of course) * **Modeller**: SAS is a must, R (my clients are all trying to pick this up), deep SQL / HQL (querying on the Hadoop stack), and strong domain knowledge (risk models, pricing, operations optimization, etc.) * **Application Developer** (people who put things into production): SAS is a must, Python, and an automation language Of course, this is not an exhaustive list, but I hope it provides some industry trends for your reference. There is always a superstar in any organization who knows all of this stuff; it could be you :)
43,966
I've been testing some software in a server virtual environment and I've noticed I get a huge amount of CPU usage on the Interrupts process. My question is, how does this relate to the virtual hardware platform, as the rate is a lot lower on a real system? Somehow the hypervisor scheduler works hard to overcome this problem, but not as well as real hardware does. Obvious causes are high I/O and disk access, but this application mostly just sits and works in memory a lot. If anyone has experienced the same, please let me know. Thanks in advance. Screenshot: [Process Explorer](http://twitpic.com/b7d3y) [alt text http://web5.twitpic.com/img/18819358-a20408f0a63a49a012562a18de70f829.4a66ebd3-full.jpg](http://web5.twitpic.com/img/18819358-a20408f0a63a49a012562a18de70f829.4a66ebd3-full.jpg)
2009/07/22
[ "https://serverfault.com/questions/43966", "https://serverfault.com", "https://serverfault.com/users/3190/" ]
Have you read the VM's CPU usage from the guest or the host? Guest CPU usage figures are inherently wrong. Other than that make sure you've removed any unnecessary virtual hardware like floppies/serial ports etc. and make sure you're on the latest vmtools.
This VM wasn't created through a P2V migration by any chance? If so make sure that the latest VMware tools are installed and that there are no drivers left from the physical hardware or software related to the hardware such as management agents.
43,966
I've been testing some software in a server virtual environment and I've noticed I get a huge amount of CPU usage on the Interrupts process. My question is, how does this relate to the virtual hardware platform, as the rate is a lot lower on a real system? Somehow the hypervisor scheduler works hard to overcome this problem, but not as well as real hardware does. Obvious causes are high I/O and disk access, but this application mostly just sits and works in memory a lot. If anyone has experienced the same, please let me know. Thanks in advance. Screenshot: [Process Explorer](http://twitpic.com/b7d3y) [alt text http://web5.twitpic.com/img/18819358-a20408f0a63a49a012562a18de70f829.4a66ebd3-full.jpg](http://web5.twitpic.com/img/18819358-a20408f0a63a49a012562a18de70f829.4a66ebd3-full.jpg)
2009/07/22
[ "https://serverfault.com/questions/43966", "https://serverfault.com", "https://serverfault.com/users/3190/" ]
This VM wasn't created through a P2V migration by any chance? If so make sure that the latest VMware tools are installed and that there are no drivers left from the physical hardware or software related to the hardware such as management agents.
In my years of working at home with Windows 2003 Server, Enterprise mostly, I found out that the hard part is to find answers; problems are where you don't expect them. In this case I had msmpeng.exe shooting to 100% during file transfers to the server, and now it is interrupts when I want to read from the server. This hardware is an old Compaq EN series PC with a P4 1.00 GHz and 4 GB of memory. As a SATA hard disk solution I use a PCI, 3-port RAID-3 XFX card. In this machine, with this same config, I had a single disk as the C: disk, until there was a bad spot on this 10-year-old data carrier. I had to reformat it (IDE) and bought a second HD and mirrored it to be safe. I started to install Windows 2003 Server on it, downloaded SP1 and SP2 and installed them on the server. Then I updated everything, and in the end I gained access to the RAID-3 disk and made sure the server admin had all the rights. Well, since this problem I read about here on this board (I had it before), it is so unpredictable whether it is the disk where the system resides, the data (RAID), the old hardware, or the network? In this way I can't really tell. Until I have the money to buy good data storage for my media library, I'll keep trying to find the cause. I have a similar PC, and I think I'm going to test-run it and elevate it to a server, to see if the problem recurs. I'm sorry that I can't give you a straight answer; even Microsoft can't give one, as there are so many factors. I say, try to reinstall a fresh server on a replacement hard disk in the same configuration; maybe that'll help. P.S. Currently copying a 6.5 GB file from server to PC; well, an 11 Mbit wireless connection is faster!
43,966
I've been testing some software in a server virtual environment and I've noticed I get a huge amount of CPU usage on the Interrupts process. My question is, how does this relate to the virtual hardware platform as the rate is allot lower in a real system. Some how the hypervizor scheduler works hard to over come this problem but not as well as on real hardware does. Obvious things are high I/O and disk access but this application mostly just sits and works in memory allot. If anyone has experienced the same, please let me know. thanks in advance Screenshot: [Process Explorer](http://twitpic.com/b7d3y) [alt text http://web5.twitpic.com/img/18819358-a20408f0a63a49a012562a18de70f829.4a66ebd3-full.jpg](http://web5.twitpic.com/img/18819358-a20408f0a63a49a012562a18de70f829.4a66ebd3-full.jpg)
2009/07/22
[ "https://serverfault.com/questions/43966", "https://serverfault.com", "https://serverfault.com/users/3190/" ]
Have you read the VM's CPU usage from the guest or the host? Guest CPU usage figures are inherently wrong. Other than that make sure you've removed any unnecessary virtual hardware like floppies/serial ports etc. and make sure you're on the latest vmtools.
In my years of working at home with Windows 2003 Server, Enterprise mostly, I found out that the hard part is to find answers; problems are where you don't expect them. In this case I had msmpeng.exe shooting to 100% during file transfers to the server, and now it is interrupts when I want to read from the server. This hardware is an old Compaq EN series PC with a P4 1.00 GHz and 4 GB of memory. As a SATA hard disk solution I use a PCI, 3-port RAID-3 XFX card. In this machine, with this same config, I had a single disk as the C: disk, until there was a bad spot on this 10-year-old data carrier. I had to reformat it (IDE) and bought a second HD and mirrored it to be safe. I started to install Windows 2003 Server on it, downloaded SP1 and SP2 and installed them on the server. Then I updated everything, and in the end I gained access to the RAID-3 disk and made sure the server admin had all the rights. Well, since this problem I read about here on this board (I had it before), it is so unpredictable whether it is the disk where the system resides, the data (RAID), the old hardware, or the network? In this way I can't really tell. Until I have the money to buy good data storage for my media library, I'll keep trying to find the cause. I have a similar PC, and I think I'm going to test-run it and elevate it to a server, to see if the problem recurs. I'm sorry that I can't give you a straight answer; even Microsoft can't give one, as there are so many factors. I say, try to reinstall a fresh server on a replacement hard disk in the same configuration; maybe that'll help. P.S. Currently copying a 6.5 GB file from server to PC; well, an 11 Mbit wireless connection is faster!
24,655,092
I want to leverage chef-metal and chef-zero with my existing cookbooks and chef-repo (already leveraging Berkshelf and Vagrant for dev). I started with the example provided at <https://github.com/opscode/chef-metal#vagrant> I've got a vagrant\_linux.rb ``` require 'chef_metal_vagrant' vagrant_box 'CentOS-6.4-x86_64' do url 'http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130427.box' end with_machine_options :vagrant_options => { 'vm.box' => 'CentOS-6.4-x86_64' } ``` I also have dev\_server.rb ``` require 'chef_metal' with_chef_local_server :chef_repo_path => '~/workspace/git/my-chef-repo' machine 'dev_server' do tag 'dev_server' recipe 'myapp' converge true end ``` If I put my myapp cookbook under ~/workspace/git/my-chef-repo/cookbooks, the above works fine using the following command; I've got a Vagrant-managed VM named dev\_server converging (applying the myapp recipe) ``` chef-client -z vagrant_linux.rb dev_server.rb ``` But now I'd like to keep my cookbooks folder empty and use Berkshelf. It does not look supported by chef-zero at the moment, is it? How could I do that?
2014/07/09
[ "https://Stackoverflow.com/questions/24655092", "https://Stackoverflow.com", "https://Stackoverflow.com/users/409784/" ]
You can pass :cookbook\_path that contains multiple paths as an Array like so: <https://github.com/opscode/ec-metal/blob/master/cookbooks/ec-harness/recipes/vagrant.rb#L12-L13> ``` with_chef_local_server :chef_repo_path => repo_path, :cookbook_path => [ File.join(repo_path, 'cookbooks'), File.join(repo_path, 'vendor', 'cookbooks') ] ``` Then you can use berks to vendor upstream cookbooks into a different path (vendor/cookbooks/), while putting your own cookbooks into cookbooks/ like so: <https://github.com/opscode/ec-metal/blob/master/Rakefile#L114> ``` berks vendor vendor/cookbooks/ ```
The "berks vendor" command is how I generally do that--use "berks vendor" and add the vendored path to your cookbook path.
60,185,417
Let's say we have a 5x5 matrix, filled with 0s. ``` myMatrix <- matrix(rep(0, 25), ncol = 5) ``` Now, let's pick a triplet of integers between 1 and 5. ``` triplet <- c(1,2,3) ``` For all combinations of this triplet we now add 1 in the matrix, with this function: ``` addCombinationsToMatrix <- function(.matrix, .triplet){ indexesToChange <- as.matrix(expand.grid(.triplet, .triplet)) .matrix[indexesToChange] <- .matrix[indexesToChange] + 1 .matrix } ``` Using the function, we go from ``` myMatrix [,1] [,2] [,3] [,4] [,5] [1,] 0 0 0 0 0 [2,] 0 0 0 0 0 [3,] 0 0 0 0 0 [4,] 0 0 0 0 0 [5,] 0 0 0 0 0 ``` to ``` myMatrix <- addCombinationsToMatrix(myMatrix, triplet) myMatrix [,1] [,2] [,3] [,4] [,5] [1,] 1 1 1 0 0 [2,] 1 1 1 0 0 [3,] 1 1 1 0 0 [4,] 0 0 0 0 0 [5,] 0 0 0 0 0 ``` If we pick another triplet we move on to ``` nextTriplet <- 2:4 myMatrix <- addCombinationsToMatrix(myMatrix, nextTriplet) myMatrix [,1] [,2] [,3] [,4] [,5] [1,] 1 1 1 0 0 [2,] 1 2 2 1 0 [3,] 1 2 2 1 0 [4,] 0 1 1 1 0 [5,] 0 0 0 0 0 ``` So, row-column combinations represent how often two integers have been shown together in a triplet: 3 and 4 have been shown together once, 2 and 3 have been shown together twice. **Question**: How can one pick triplets, such that every combination (1-2, 1-3, 1-4...) was picked at least once and the number of triplets is minimized. I'm looking for an algorithm here that picks the next triplet. Ideally it can be extended to * arbitrarily big matrices (10x10, 100x100 ...) * arbitrarily big vectors (quadruplets, quintuplets, n-tuplets) * an arbitrary number of times a combination must have been picked at least Example: ``` myMatrix myMatrix <- addCombinationsToMatrix(myMatrix, 1:3) myMatrix myMatrix <- addCombinationsToMatrix(myMatrix, 3:5) myMatrix myMatrix <- addCombinationsToMatrix(myMatrix, c(1,4,5)) myMatrix myMatrix <- addCombinationsToMatrix(myMatrix, c(2,4,5)) myMatrix ``` --- **EDIT**: Just to be sure: the answer doesn't have to be `R` code. 
It can be some other language as well, or even pseudocode. **EDIT 2**: It occurred to me now that there are different ways of measuring efficiency. I actually meant that the algorithm should take as few iterations as possible. The algorithm being fast is also very cool, but that is not the main goal here.
2020/02/12
[ "https://Stackoverflow.com/questions/60185417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7210250/" ]
Great question! This comes up in survey design, where you want a few different versions of the survey that each only contain a subset of the questions, but you want every pair (or t-tuple) of questions to have been asked at least once. This is called **covering design**, and is a variant of the classic *set cover problem*. As you can read in an excellent [Mathematics Stack Exchange post](https://math.stackexchange.com/q/1734855/127715) on the topic, folks use notation C(v, k, t) indicating the minimum number of k-element subsets you need to draw (k=3 in your case) from a v-element set (v=5 in your case) such that every t-element subset in the entire set (t=2 in your case) is contained within one of your selected subsets. Folks have evaluated this function for many different (v, k, t) tuples; see, for instance, <https://ljcr.dmgordon.org/cover/table.html> . We can read from that table that C(5, 3, 2) = 4, with the following as one possible design: ``` 1 2 3 1 4 5 2 3 4 2 3 5 ``` First and foremost, this problem is NP-hard, so all known exact algorithms will scale exponentially in inputs v, k, and t. So while you may be able to solve small instances exactly by enumeration or some more clever exact method (e.g. integer programming), you will likely need to resort to heuristic methods as the problem size gets very large. One possibility in this direction is lexicographic covering, as proposed in <https://arxiv.org/pdf/math/9502238.pdf> (you will note that many of the solutions on the site linked above list "lex covering" as the method of construction). Basically you list out all possible k-tuples in lexicographic order: ``` 123 124 125 134 135 145 234 235 245 345 ``` Then you greedily add the k-tuple that covers the most previously uncovered t-tuples, breaking ties using the lexicographic ordering. Here's how the algorithm works in our case: 1. At the beginning every 3-tuple covers 3 different 2-tuples, so we add `123` since it is lexicographically earliest. 2. 
After doing this, the 2-tuples of `12`, `13`, and `23` have been covered, while all remaining 2-tuples are not covered. A number of 3-tuples cover 3 more 2-tuples, e.g. `145` and `245`. We pick `145`, since it is lexicographically first, covering `14`, `45`, and `15`. 3. Now we have 4 remaining uncovered 2-tuples -- `24`, `25`, `34`, and `35`. No 3-tuple covers 3 of these, but several cover 2, e.g. `234` and `345`. We select `234` as the lexicographically earliest. 4. We have two remaining uncovered 2-tuples -- `25` and `35`. We select `235` as the only 3-tuple that covers both. We end up with the exact solution shown above. Importantly, this is just a heuristic method -- it doesn't give any guarantee that 4 is the smallest number of 3-tuples needed to cover all pairs in a set with 5 elements. In this case, a lower bound by Schönheim (a reference is provided in the linked article above) convinces us that, in fact, C(5, 3, 2) cannot be smaller than 4. We conclude that the solution from lexicographic covering is in fact optimal. You would need a tweak to cover each t-tuple a certain number of times r. One obvious one would just be to repeat each tuple to be covered "r" times, and then run lex covering as usual (so for instance in the first step above each 3-tuple would cover 9 2-tuples with r=3). Of course this remains a heuristic for your overall problem due to the use of lex covering.
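The greedy walkthrough above can be sketched in a few lines of Python (a quick illustrative sketch; the function name `lex_cover` is mine, not from the original answer). It reproduces the C(5, 3, 2) = 4 design listed earlier:

```python
from itertools import combinations

def lex_cover(v, k, t):
    """Greedy lexicographic covering: repeatedly pick the k-subset of
    {1..v} that covers the most still-uncovered t-subsets, breaking
    ties lexicographically (Python's max keeps the first maximiser,
    and the candidates are generated in lexicographic order)."""
    candidates = list(combinations(range(1, v + 1), k))
    uncovered = set(combinations(range(1, v + 1), t))
    design = []
    while uncovered:
        best = max(candidates,
                   key=lambda c: len(set(combinations(c, t)) & uncovered))
        design.append(best)
        uncovered -= set(combinations(best, t))
    return design

print(lex_cover(5, 3, 2))
# → [(1, 2, 3), (1, 4, 5), (2, 3, 4), (2, 3, 5)]
```

As noted, this is only a heuristic: the returned design always covers every t-subset, but its size is not guaranteed to be minimal in general.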
Here is an option using `data.table` to keep track of the matrix count and `RcppAlgos` to generate the combinations: ``` library(RcppAlgos) library(data.table) M <- 100 #5 #10 #100 sz <- 5 #3 #4 5 minpick <- 3 #1 #2 d <- integer(M) system.time({ universe <- as.data.table(comboGeneral(M, 2L, nThreads=4L))[, count := 0L] ntuples <- 0 while (universe[, any(count < minpick)]) { v <- universe[order(count), head(unique(c(V1[1L:2L], V2[1L:2L])), sz)] universe[as.data.table(comboGeneral(v, 2L, nThreads=4L)), on=.NATURAL, count := count + 1L] ntuples = ntuples + 1L } ntuples }) # user system elapsed # 26.82 9.81 28.75 m <- matrix(0L, nrow=M, ncol=M) m[as.matrix(universe[, V1:V2])] <- universe$count m + t(m) + diag(d) ``` It is a greedy algorithm, hence, I am not sure if this will result in a minimum number of tuples.
60,185,417
Let's say we have a 5x5 matrix, filled with 0s. ``` myMatrix <- matrix(rep(0, 25), ncol = 5) ``` Now, let's pick a triplet of integers between 1 and 5. ``` triplet <- c(1,2,3) ``` For all combinations of this triplet we now add 1 in the matrix, with this function: ``` addCombinationsToMatrix <- function(.matrix, .triplet){ indexesToChange <- as.matrix(expand.grid(.triplet, .triplet)) .matrix[indexesToChange] <- .matrix[indexesToChange] + 1 .matrix } ``` Using the function, we go from ``` myMatrix [,1] [,2] [,3] [,4] [,5] [1,] 0 0 0 0 0 [2,] 0 0 0 0 0 [3,] 0 0 0 0 0 [4,] 0 0 0 0 0 [5,] 0 0 0 0 0 ``` to ``` myMatrix <- addCombinationsToMatrix(myMatrix, triplet) myMatrix [,1] [,2] [,3] [,4] [,5] [1,] 1 1 1 0 0 [2,] 1 1 1 0 0 [3,] 1 1 1 0 0 [4,] 0 0 0 0 0 [5,] 0 0 0 0 0 ``` If we pick another triplet we move on to ``` nextTriplet <- 2:4 myMatrix <- addCombinationsToMatrix(myMatrix, nextTriplet) myMatrix [,1] [,2] [,3] [,4] [,5] [1,] 1 1 1 0 0 [2,] 1 2 2 1 0 [3,] 1 2 2 1 0 [4,] 0 1 1 1 0 [5,] 0 0 0 0 0 ``` So, row-column combinations represent how often two integers have been shown together in a triplet: 3 and 4 have been shown together once, 2 and 3 have been shown together twice. **Question**: How can one pick triplets, such that every combination (1-2, 1-3, 1-4...) was picked at least once and the number of triplets is minimized. I'm looking for an algorithm here that picks the next triplet. Ideally it can be extended to * arbitrarily big matrices (10x10, 100x100 ...) * arbitrarily big vectors (quadruplets, quintuplets, n-tuplets) * an arbitrary number of times a combination must have been picked at least Example: ``` myMatrix myMatrix <- addCombinationsToMatrix(myMatrix, 1:3) myMatrix myMatrix <- addCombinationsToMatrix(myMatrix, 3:5) myMatrix myMatrix <- addCombinationsToMatrix(myMatrix, c(1,4,5)) myMatrix myMatrix <- addCombinationsToMatrix(myMatrix, c(2,4,5)) myMatrix ``` --- **EDIT**: Just to be sure: the answer doesn't have to be `R` code. 
It can be some other language as well, or even pseudocode. **EDIT 2**: It occurred to me now that there are different ways of measuring efficiency. I actually meant that the algorithm should take as few iterations as possible. The algorithm being fast is also very cool, but that is not the main goal here.
2020/02/12
[ "https://Stackoverflow.com/questions/60185417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7210250/" ]
Great question! This comes up in survey design, where you want a few different versions of the survey that each only contain a subset of the questions, but you want every pair (or t-tuple) of questions to have been asked at least once. This is called **covering design**, and is a variant of the classic *set cover problem*. As you can read in an excellent [Mathematics Stack Exchange post](https://math.stackexchange.com/q/1734855/127715) on the topic, folks use notation C(v, k, t) indicating the minimum number of k-element subsets you need to draw (k=3 in your case) from a v-element set (v=5 in your case) such that every t-element subset in the entire set (t=2 in your case) is contained within one of your selected subsets. Folks have evaluated this function for many different (v, k, t) tuples; see, for instance, <https://ljcr.dmgordon.org/cover/table.html> . We can read from that table that C(5, 3, 2) = 4, with the following as one possible design: ``` 1 2 3 1 4 5 2 3 4 2 3 5 ``` First and foremost, this problem is NP-hard, so all known exact algorithms will scale exponentially in inputs v, k, and t. So while you may be able to solve small instances exactly by enumeration or some more clever exact method (e.g. integer programming), you will likely need to resort to heuristic methods as the problem size gets very large. One possibility in this direction is lexicographic covering, as proposed in <https://arxiv.org/pdf/math/9502238.pdf> (you will note that many of the solutions on the site linked above list "lex covering" as the method of construction). Basically you list out all possible k-tuples in lexicographic order: ``` 123 124 125 134 135 145 234 235 245 345 ``` Then you greedily add the k-tuple that covers the most previously uncovered t-tuples, breaking ties using the lexicographic ordering. Here's how the algorithm works in our case: 1. At the beginning every 3-tuple covers 3 different 2-tuples, so we add `123` since it is lexicographically earliest. 2. 
After doing this, the 2-tuples of `12`, `13`, and `23` have been covered, while all remaining 2-tuples are not covered. A number of 3-tuples cover 3 more 2-tuples, e.g. `145` and `245`. We pick `145`, since it is lexicographically first, covering `14`, `45`, and `15`. 3. Now we have 4 remaining uncovered 2-tuples -- `24`, `25`, `34`, and `35`. No 3-tuple covers 3 of these, but several cover 2, e.g. `234` and `345`. We select `234` as the lexicographically earliest. 4. We have two remaining uncovered 2-tuples -- `25` and `35`. We select `235` as the only 3-tuple that covers both. We end up with the exact solution shown above. Importantly, this is just a heuristic method -- it doesn't give any guarantee that 4 is the smallest number of 3-tuples needed to cover all pairs in a set with 5 elements. In this case, a lower bound by Schönheim (a reference is provided in the linked article above) convinces us that, in fact, C(5, 3, 2) cannot be smaller than 4. We conclude that the solution from lexicographic covering is in fact optimal. You would need a tweak to cover each t-tuple a certain number of times r. One obvious one would just be to repeat each tuple to be covered "r" times, and then run lex covering as usual (so for instance in the first step above each 3-tuple would cover 9 2-tuples with r=3). Of course this remains a heuristic for your overall problem due to the use of lex covering.
Since this question asks for algorithmic approaches to covering designs, I'll provide one that gives exact answers (aka the best possible design) using integer programming in R. For every single k-tuple that you are considering (k=3 for you, since you are selecting triplets), define a decision variable that takes value 1 if you include it in your design and 0 if not. So in your case, you would define x\_123 to indicate if tuple (1,2,3) is selected, x\_345 for (3,4,5), and so on. The objective of the optimization model is to minimize the number of tuples selected, aka the sum of all your decision variables. However, for every t-tuple (t=2 in your case), you need to include a decision variable that contains that t-tuple. This yields a constraint for every t-tuple. As an example, we would have `x_123+x_124+x_125 >= 1` would be the constraint that requires the pair `12` to be in some selected tuple. This yields the following optimization model: ``` min x_123+x_124+...+x_345 s.t. x_123+x_124+x_125 >= 1 # constraint for 12 x_123+x_134+x_135 >= 1 # constraint for 13 ... x_145+x_245+x_345 >= 1 # constraint for 45 x_ijk binary for all i, j, k ``` You could extend this to requiring r repeats of every t-tuple by changing the right-hand side of every inequality to "r" and requiring all the variables to be integer instead of binary. 
This is easy to solve with a package like `lpSolve` in R: ``` library(lpSolve) C <- function(v, k, tt, r) { k.tuples <- combn(v, k) t.tuples <- combn(v, tt) mod <- lp(direction="min", objective.in=rep(1, ncol(k.tuples)), const.mat=t(apply(t.tuples, 2, function(x) { apply(k.tuples, 2, function(y) as.numeric(sum(x %in% y) == tt)) })), const.dir=rep(">=", ncol(t.tuples)), const.rhs=rep(r, ncol(t.tuples)), all.int=TRUE) k.tuples[,rep(seq_len(ncol(k.tuples)), round(mod$solution))] } C(5, 3, 2, 1) # [,1] [,2] [,3] [,4] # [1,] 1 1 1 3 # [2,] 2 2 2 4 # [3,] 3 4 5 5 C(5, 3, 2, 3) # [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] # [1,] 1 1 1 1 1 1 2 2 2 3 # [2,] 2 2 2 3 3 4 3 3 4 4 # [3,] 3 4 5 4 5 5 4 5 5 5 ``` While this solves your problem exactly, it will not scale well to large problem sizes. This is because the problem is NP-hard -- no known exact algorithm will scale well. If you need to solve large problem instances, then the heuristics recommended in other answers here are your best bet. Or you could solve with integer programming (as we do here) and set a timeout; then you will be working with the best solution found by your timeout, which is a heuristic solution to the problem overall.
67,163,630
I'm using CSS transform: scale(0.6) to scale down a div. When the element is scaled down, it maintains its aspect ratio. However, in my case, this element needs to always have a height that will reach the bottom of the viewport. This means I need to adjust the height of the element while keeping its width and top position the same. How do I calculate the height I need to apply so that the element reaches the bottom of the viewport exactly when transform: scale(x) is applied? below is a codesnippet. Clicking anywhere scales the div down and it's when I should apply the new height to allow the div height to reach the bottom of the viewport while keeping the same width, and position. ```js document.body.addEventListener('click', () => { document.querySelectorAll('div')[0].style.transform = 'scale(0.44)'; }); ``` ```css body { margin: 0; padding: 0; height: 100vh; } div { width: 350px; height: calc(100% - 50px); position: absolute; top: 50px; background-color: red; position: absolute; transform-origin: top; } ``` ```html <div><h1>TEST</h1></div> ```
2021/04/19
[ "https://Stackoverflow.com/questions/67163630", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9983270/" ]
Since you want the div's height to stretch till bottom, you can make use of `window.innerHeight` here. The new height can be calculated using the following formula :- `newHeight = (viewportHeight - offsetTop)*(1/scaleValue)` Putting in values it will come down to the following calculation :- `newHeight = (window.innerHeight - div.offsetTop)*(1/0.44)` ```js document.body.addEventListener('click', () => { const div = document.querySelectorAll('div')[0]; div.style.transform = 'scale(0.44)'; div.style.height = `${(window.innerHeight-div.offsetTop)*(1/0.44)}px` }); ``` ```css body { margin: 0; padding: 0; height: 100vh; } div { width: 350px; height: calc(100% - 50px); position: absolute; top: 50px; background-color: red; position: absolute; transform-origin: top; } ``` ```html <div> <h1>TEST</h1> </div> ```
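As a quick numeric sanity check of that formula (a throwaway sketch; the `scaled_height` helper and the 800px viewport height are illustrative assumptions, not part of the answer), the pre-scale height multiplied by the scale factor should land exactly on the space below the element's top offset:

```python
def scaled_height(viewport_h, offset_top, scale):
    """Height to assign *before* scaling so that height * scale exactly
    fills the viewport below offset_top (the answer's formula)."""
    return (viewport_h - offset_top) * (1 / scale)

# e.g. an 800px-tall viewport, a 50px top offset, and scale(0.44):
h = scaled_height(800, 50, 0.44)
# the rendered (scaled) height then exactly reaches the viewport bottom:
assert abs(h * 0.44 - (800 - 50)) < 1e-9
print(round(h, 2))
# → 1704.55
```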
I see you position it from the top. I think that if you position it from the bottom you will get what you want: change `top` to `bottom: 0px`, change `transform-origin` to `transform-origin: bottom`, and for calculating the height you could use `100vh` instead of `100%`.
19,457,031
Hey, does anyone know what is wrong with the `chr` in this code, the first `chr` (`chr(event.Ascii):`)? It just returns a syntax error. I am writing a keylogger using pyHook. Thanks in advance. ``` import pyHook, pythoncom, sys, logging file_log = 'C:\\Python\\log.txt' def OnKeyboardEvent (event): logging.basicConfig(filename=file_log, level=logging.DEBUG, format ='%(message)' chr(event.Ascii): logging.log(10, chr(event.Ascii)) return True hooks_manager = pyHook.HookManager() hooks_manager.KeyDown = OnKeyboardEvent hooks_manager.HookKeyboard() pythoncom.PumpMessages() ```
2013/10/18
[ "https://Stackoverflow.com/questions/19457031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2891941/" ]
There are two errors in that section of code. First, you are missing a closing parenthesis: ``` logging.basicConfig(filename=file_log, level=logging.DEBUG, format ='%(message)' # ----^ -------^ ``` Without that closing parenthesis, Python does not know when that expression is supposed to end. The next line then doesn't make sense and raises a `SyntaxError` exception. Your next line has a stray colon: ``` chr(event.Ascii): ``` which you need to remove. You also do not *store* the result of that call, you could just omit that line. The following is correct Python. ``` def OnKeyboardEvent (event): logging.basicConfig(filename=file_log, level=logging.DEBUG, format ='%(message)') logging.log(10, chr(event.Ascii)) return True ```
This line has a colon at the end of it. ``` chr(event.Ascii): ``` You should remove it.
19,457,031
Hey, does anyone know what is wrong with the `chr` in this code, the first `chr` (`chr(event.Ascii):`)? It just returns a syntax error. I am writing a keylogger using pyHook. Thanks in advance. ``` import pyHook, pythoncom, sys, logging file_log = 'C:\\Python\\log.txt' def OnKeyboardEvent (event): logging.basicConfig(filename=file_log, level=logging.DEBUG, format ='%(message)' chr(event.Ascii): logging.log(10, chr(event.Ascii)) return True hooks_manager = pyHook.HookManager() hooks_manager.KeyDown = OnKeyboardEvent hooks_manager.HookKeyboard() pythoncom.PumpMessages() ```
2013/10/18
[ "https://Stackoverflow.com/questions/19457031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2891941/" ]
This line has a colon at the end of it. ``` chr(event.Ascii): ``` You should remove it.
Use this code and enjoy..!! ``` import pyHook, pythoncom, sys, logging file_log = 'D:\zzzz1.txt' def OnKeyboardEvent(event): logging.basicConfig(filename=file_log, level=logging.DEBUG, format='%(message)s') chr(event.Ascii) logging.log(10,chr(event.Ascii)) return True hooks_manager = pyHook.HookManager() hooks_manager.KeyDown = OnKeyboardEvent hooks_manager.HookKeyboard() pythoncom.PumpMessages() ```
19,457,031
Hey, does anyone know what is wrong with the `chr` in this code, the first `chr` (`chr(event.Ascii):`)? It just returns a syntax error. I am writing a keylogger using pyHook. Thanks in advance. ``` import pyHook, pythoncom, sys, logging file_log = 'C:\\Python\\log.txt' def OnKeyboardEvent (event): logging.basicConfig(filename=file_log, level=logging.DEBUG, format ='%(message)' chr(event.Ascii): logging.log(10, chr(event.Ascii)) return True hooks_manager = pyHook.HookManager() hooks_manager.KeyDown = OnKeyboardEvent hooks_manager.HookKeyboard() pythoncom.PumpMessages() ```
2013/10/18
[ "https://Stackoverflow.com/questions/19457031", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2891941/" ]
There are two errors in that section of code. First, you are missing a closing parenthesis: ``` logging.basicConfig(filename=file_log, level=logging.DEBUG, format ='%(message)' # ----^ -------^ ``` Without that closing parenthesis, Python does not know when that expression is supposed to end. The next line then doesn't make sense and raises a `SyntaxError` exception. Your next line has a stray colon: ``` chr(event.Ascii): ``` which you need to remove. Since you do not *store* the result of that call, you could just omit that line. Note also that the logging format string should end in `s` (`'%(message)s'`) for the message to actually be interpolated. The following is correct Python: ``` def OnKeyboardEvent(event): logging.basicConfig(filename=file_log, level=logging.DEBUG, format='%(message)s') logging.log(10, chr(event.Ascii)) return True ```
Use this code and enjoy..!! ``` import pyHook, pythoncom, sys, logging file_log = 'D:\zzzz1.txt' def OnKeyboardEvent(event): logging.basicConfig(filename=file_log, level=logging.DEBUG, format='%(message)s') chr(event.Ascii) logging.log(10,chr(event.Ascii)) return True hooks_manager = pyHook.HookManager() hooks_manager.KeyDown = OnKeyboardEvent hooks_manager.HookKeyboard() pythoncom.PumpMessages() ```
46,478,697
I would like to run `BasicLSTMCell` once, get the result and see if I can reproduce the result manually. However, I am stuck at executing `BasicLSTMCell` once. Here is my code (note: `import numpy as np` added, since the snippet uses `np.zeros`): ```py import tensorflow as tf import numpy as np BATCH_SIZE = 7 SEQUENCE_LENGTH = 5 VECTOR_SIZE = 3 STATE_SIZE = 4 x = tf.placeholder(tf.float32, [BATCH_SIZE, SEQUENCE_LENGTH, VECTOR_SIZE], name='input_placeholder') y = tf.placeholder(tf.float32, [BATCH_SIZE, SEQUENCE_LENGTH], name='labels_placeholder') rnn_inputs = tf.unstack(x, axis = 1) init_state = tf.zeros([BATCH_SIZE, STATE_SIZE], tf.float32) cell = tf.contrib.rnn.BasicLSTMCell(STATE_SIZE, state_is_tuple = True) X = np.zeros([BATCH_SIZE, SEQUENCE_LENGTH, VECTOR_SIZE]) Y = np.zeros([BATCH_SIZE, SEQUENCE_LENGTH]) sess = tf.Session() output_state = sess.run([cell(rnn_inputs[0], (init_state, init_state))], feed_dict = {x:X,y:Y}) ``` It produces a very long error message, which I post below, but the summary of it is: ```py FailedPreconditionError: Attempting to use uninitialized value basic_lstm_cell/kernel [[Node: basic_lstm_cell/kernel/read = Identity[T=DT_FLOAT, _class=["loc:@basic_lstm_cell/kernel"], _device="/job:localhost/replica:0/task:0/cpu:0"](basic_lstm_cell/kernel)]] ``` Do you see what value I failed to initialize?
Here is full error trace: ```py --------------------------------------------------------------------------- FailedPreconditionError Traceback (most recent call last) ~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args) 1326 try: -> 1327 return fn(*args) 1328 except errors.OpError as e: ~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata) 1305 feed_dict, fetch_list, target_list, -> 1306 status, run_metadata) 1307 ~\AppData\Local\conda\conda\envs\tensorflow\lib\contextlib.py in __exit__(self, type, value, traceback) 65 try: ---> 66 next(self.gen) 67 except StopIteration: ~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status() 465 compat.as_text(pywrap_tensorflow.TF_Message(status)), --> 466 pywrap_tensorflow.TF_GetCode(status)) 467 finally: FailedPreconditionError: Attempting to use uninitialized value basic_lstm_cell/bias [[Node: basic_lstm_cell/bias/read = Identity[T=DT_FLOAT, _class=["loc:@basic_lstm_cell/bias"], _device="/job:localhost/replica:0/task:0/cpu:0"](basic_lstm_cell/bias)]] During handling of the above exception, another exception occurred: FailedPreconditionError Traceback (most recent call last) <ipython-input-2-730be9dd8e3e> in <module>() 18 19 sess = tf.Session() ---> 20 output_state = sess.run([cell(rnn_inputs[0], (init_state, init_state))], feed_dict = {x:X,y:Y}) 21 22 ~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata) 893 try: 894 result = self._run(None, fetches, feed_dict, options_ptr, --> 895 run_metadata_ptr) 896 if run_metadata: 897 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) 
~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata) 1122 if final_fetches or final_targets or (handle and feed_dict_tensor): 1123 results = self._do_run(handle, final_targets, final_fetches, -> 1124 feed_dict_tensor, options, run_metadata) 1125 else: 1126 results = [] ~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata) 1319 if handle is None: 1320 return self._do_call(_run_fn, self._session, feeds, fetches, targets, -> 1321 options, run_metadata) 1322 else: 1323 return self._do_call(_prun_fn, self._session, handle, feeds, fetches) ~\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args) 1338 except KeyError: 1339 pass -> 1340 raise type(e)(node_def, op, message) 1341 1342 def _extend_graph(self): FailedPreconditionError: Attempting to use uninitialized value basic_lstm_cell/bias [[Node: basic_lstm_cell/bias/read = Identity[T=DT_FLOAT, _class=["loc:@basic_lstm_cell/bias"], _device="/job:localhost/replica:0/task:0/cpu:0"](basic_lstm_cell/bias)]] Caused by op 'basic_lstm_cell/bias/read', defined at: File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance app.start() File 
"C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 477, in start ioloop.IOLoop.instance().start() File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 888, in start handler_func(fd_obj, events) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events self._handle_recv() File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell handler(stream, idents, msg) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute res = shell.run_cell(code, 
store_history=store_history, silent=silent) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes if self.run_code(code, result): File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-730be9dd8e3e>", line 20, in <module> output_state = sess.run([cell(rnn_inputs[0], (init_state, init_state))], feed_dict = {x:X,y:Y}) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 180, in __call__ return super(RNNCell, self).__call__(inputs, state) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\layers\base.py", line 450, in __call__ outputs = self.call(inputs, *args, **kwargs) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 401, in call concat = _linear([inputs, h], 4 * self._num_units, True) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 1053, in _linear initializer=bias_initializer) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 1065, in get_variable use_resource=use_resource, custom_getter=custom_getter) File 
"C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 962, in get_variable use_resource=use_resource, custom_getter=custom_getter) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 360, in get_variable validate_shape=validate_shape, use_resource=use_resource) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 183, in _rnn_get_variable variable = getter(*args, **kwargs) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 352, in _true_getter use_resource=use_resource) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 725, in _get_single_variable validate_shape=validate_shape) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\variables.py", line 199, in __init__ expected_shape=expected_shape) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\variables.py", line 330, in _init_from_args self._snapshot = array_ops.identity(self._variable, name="read") File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1400, in identity result = _op_def_lib.apply_op("Identity", input=input, name=name) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op op_def=op_def) File "C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op original_op=self._default_original_op, op_def=op_def) File 
"C:\Users\some_user\AppData\Local\conda\conda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in __init__ self._traceback = self._graph._extract_stack() # pylint: disable=protected-access FailedPreconditionError (see above for traceback): Attempting to use uninitialized value basic_lstm_cell/bias [[Node: basic_lstm_cell/bias/read = Identity[T=DT_FLOAT, _class=["loc:@basic_lstm_cell/bias"], _device="/job:localhost/replica:0/task:0/cpu:0"](basic_lstm_cell/bias)]] ```
2017/09/28
[ "https://Stackoverflow.com/questions/46478697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1700890/" ]
*cell* is an instance of `BasicLSTMCell`; calling it adds the part of the graph that produces the output and a new state. Thanks to Python's syntax, ``` cell(rnn_inputs[0], (init_state, init_state)) ``` is actually: ``` cell.__call__(rnn_inputs[0], (init_state, init_state)) ``` The cell call has to be moved in front of `tf.Session()`, since it builds graph nodes; you cannot pass the call expression itself to `sess.run`. The following code is tested using Python 3.5 and TensorFlow 1.3: ``` import tensorflow as tf import numpy as np BATCH_SIZE = 7 SEQUENCE_LENGTH = 5 VECTOR_SIZE = 3 STATE_SIZE = 4 x = tf.placeholder(tf.float32, [BATCH_SIZE, SEQUENCE_LENGTH, VECTOR_SIZE], name='input_placeholder') y = tf.placeholder(tf.float32, [BATCH_SIZE, SEQUENCE_LENGTH], name='labels_placeholder') rnn_inputs = tf.unstack(x, axis = 1) init_state = tf.zeros([BATCH_SIZE, STATE_SIZE], tf.float32) cell = tf.contrib.rnn.BasicLSTMCell(STATE_SIZE, state_is_tuple = True) X = np.zeros([BATCH_SIZE, SEQUENCE_LENGTH, VECTOR_SIZE]) Y = np.zeros([BATCH_SIZE, SEQUENCE_LENGTH]) output, newstate = cell.__call__(rnn_inputs[0], (init_state, init_state)) sess = tf.Session() sess.run(tf.global_variables_initializer()) result = sess.run([output, newstate], feed_dict = {x:X,y:Y}) print (result) sess.close() ```
[You need to initialize all variables](https://www.tensorflow.org/versions/r1.0/programmers_guide/variables#initialization) with, for example, [`tf.global_variables_initializer()`](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer) (or restore them from a previously-trained graph) before you can actually use them. When you create an LSTM cell, you are adding a number of variables (the weights and biases of your cell) to the graph. To initialize these, add the line: ``` sess.run(tf.global_variables_initializer()) ``` before your `sess.run` line. Note that this will initialize all of the variables to useless values---I believe normally-distributed random values for the weights and zero for the biases. Running this network once doesn't really do anything for you. You could [restore the network from a previously-trained one](https://www.tensorflow.org/api_guides/python/meta_graph). That link and [this one](https://www.tensorflow.org/programmers_guide/saved_model) describe how to use the [`tf.train.Saver`](https://www.tensorflow.org/api_docs/python/tf/train/Saver) object to save and restore variables in a graph, as well as save and restore an entire graph (network structure, operations, etc.).
55,530,907
I have a custom view: ``` public class GalleryView extends View implements View.OnClickListener { private CallBackHandler callBackHandler; Paint myPaint = new Paint(); public GalleryView(Context context, CallBackHandler callBackHandler) { super(context); this.callBackHandler = callBackHandler; } public GalleryView(Context context, AttributeSet attributeSet) { super(context, attributeSet); } @Override protected void onDraw(Canvas canvas) { super.onDraw(canvas); myPaint.setColor(Color.RED); canvas.drawPaint(myPaint); } @Override public void onClick(View view) { System.out.println("clicked !"); callBackHandler.do(); } } ``` I am adding this to the linearLayout of my main activity: ``` linearLayout.addView(galleryView); ``` And I set that layout as my content view: ``` setContentView(linearLayout); ``` I can see the view in red, but clicking is not triggered. What is wrong here?
2019/04/05
[ "https://Stackoverflow.com/questions/55530907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1344545/" ]
You need to add this line inside `onCreate`: ``` yourView.setOnClickListener(this); ``` Basically, when you add this line you assign an `OnClickListener` to your view using `setOnClickListener(this)`, and that is how the `onClick` of the assigned `OnClickListener` gets called.
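The mechanism behind this can be sketched in plain Java. The `View` and `OnClickListener` types below are simplified stand-ins of mine for the Android ones, just to show why the registration step is required:

```java
// Simplified stand-in for Android's listener interface.
interface OnClickListener {
    void onClick(View view);
}

// Simplified stand-in for android.view.View.
class View {
    private OnClickListener listener;

    // Without this registration step, a click has nobody to notify.
    void setOnClickListener(OnClickListener l) {
        this.listener = l;
    }

    // Android's input pipeline effectively does this on a tap.
    void performClick() {
        if (listener != null) {
            listener.onClick(this);
        }
    }
}

class GalleryView extends View implements OnClickListener {
    GalleryView() {
        setOnClickListener(this); // the missing line from the question
    }

    @Override
    public void onClick(View view) {
        System.out.println("clicked !");
    }
}

public class Main {
    public static void main(String[] args) {
        new GalleryView().performClick(); // prints "clicked !"
    }
}
```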
You can do it by simply setting OnClickListener as below: ``` public class GalleryView extends View implements View.OnClickListener { Paint myPaint = new Paint(); private CallBackHandler callBackHandler; public GalleryView(Context context, CallBackHandler callBackHandler) { this(context, null, callBackHandler); } public GalleryView(Context context, AttributeSet attributeSet, CallBackHandler callbackHandler) { super(context, attributeSet); this.callBackHandler = callbackHandler; initialize(); } private void initialize() { setOnClickListener(this); } @Override protected void onDraw(Canvas canvas) { super.onDraw(canvas); myPaint.setColor(Color.RED); canvas.drawPaint(myPaint); } @Override public void onClick(View view) { System.out.println("clicked !"); callBackHandler.do(); } } ```
57,012,564
Is it possible to define opacity in CSS by percentage (e.g. 30%)? It does not seem to be working; right now I can only specify it as a decimal. <https://css-tricks.com/almanac/properties/o/opacity/> ``` .test{ opacity: 0.3; } ``` **Intended Goal:** ``` .test{ opacity: 30%; } ```
2019/07/12
[ "https://Stackoverflow.com/questions/57012564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11736013/" ]
If you really want to use a 0 to 100 range, you can calculate the decimal automatically: ```css element { opacity: calc(40 / 100); } ``` or you can use a variable to make it clearer: ```css element { --opacity-percent: 40; opacity: calc(var(--opacity-percent) / 100); } ``` But both of these are less clear than just using a decimal like the standard says, so I wouldn't recommend them unless there's a really valid reason.
According to the docs, it has to be a number between 0 and 1. <https://developer.mozilla.org/en-US/docs/Web/CSS/opacity> <https://www.w3schools.com/cssref/css3_pr_opacity.asp> I'm not sure why you would want a percent instead of this number considering they are the same thing (just divide by 100).
57,012,564
Is it possible to define opacity in CSS by percentage (e.g. 30%)? It does not seem to be working; right now I can only specify it as a decimal. <https://css-tricks.com/almanac/properties/o/opacity/> ``` .test{ opacity: 0.3; } ``` **Intended Goal:** ``` .test{ opacity: 30%; } ```
2019/07/12
[ "https://Stackoverflow.com/questions/57012564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11736013/" ]
Yes it's possible if you consider `filter` ```css .box { filter:opacity(30%); background:red; height:20px; } ``` ```html <div class="box"> </div> ``` You will even have better performance because: > > This function is similar to the more established opacity property; the difference is that with filters, some browsers provide hardware acceleration for better performance.[ref](https://developer.mozilla.org/en-US/docs/Web/CSS/filter-function/opacity) > > > --- Simply pay attention to some special behavior related to stacking context: [CSS-Filter on parent breaks child positioning](https://stackoverflow.com/q/52937708/8620333)
According to the docs, it has to be a number between 0 and 1. <https://developer.mozilla.org/en-US/docs/Web/CSS/opacity> <https://www.w3schools.com/cssref/css3_pr_opacity.asp> I'm not sure why you would want a percent instead of this number considering they are the same thing (just divide by 100).
57,012,564
Is it possible to define opacity in CSS by percentage (e.g. 30%)? It does not seem to be working; right now I can only specify it as a decimal. <https://css-tricks.com/almanac/properties/o/opacity/> ``` .test{ opacity: 0.3; } ``` **Intended Goal:** ``` .test{ opacity: 30%; } ```
2019/07/12
[ "https://Stackoverflow.com/questions/57012564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11736013/" ]
Using a percentage is fully valid css, according to the spec: <https://developer.mozilla.org/en-US/docs/Web/CSS/opacity#Values> > > `alpha-value` > A `number` in the range 0.0 to 1.0, inclusive, or a `percentage` in the range 0% to 100%, inclusive, representing the opacity of the channel (that is, the value of its alpha channel). Any value outside the interval, though valid, is clamped to the nearest limit in the range. > > > So, either of these should be okay, according to spec: ```css .foo { opacity: .3; } .foo { opacity: 30%; } ``` Keep in mind, though, that if you are using Sass, it will not handle the `percentage` value, and will compile it to `1%`.
According to the docs, it has to be a number between 0 and 1. <https://developer.mozilla.org/en-US/docs/Web/CSS/opacity> <https://www.w3schools.com/cssref/css3_pr_opacity.asp> I'm not sure why you would want a percent instead of this number considering they are the same thing (just divide by 100).
57,012,564
Is it possible to define opacity in CSS by percentage (e.g. 30%)? It does not seem to be working; right now I can only specify it as a decimal. <https://css-tricks.com/almanac/properties/o/opacity/> ``` .test{ opacity: 0.3; } ``` **Intended Goal:** ``` .test{ opacity: 30%; } ```
2019/07/12
[ "https://Stackoverflow.com/questions/57012564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11736013/" ]
If you really want to use a 0 to 100 range, you can calculate the decimal automatically: ```css element { opacity: calc(40 / 100); } ``` or you can use a variable to make it clearer: ```css element { --opacity-percent: 40; opacity: calc(var(--opacity-percent) / 100); } ``` But both of these are less clear than just using a decimal like the standard says, so I wouldn't recommend them unless there's a really valid reason.
No, [decimals only](https://www.w3.org/TR/2018/REC-css-color-3-20180619/#transparency). > > Any values outside the range 0.0 (fully transparent) to 1.0 (fully opaque) will be clamped to this range. > > >
57,012,564
Is it possible to define opacity in CSS by percentage (e.g. 30%)? It does not seem to be working; right now I can only specify it as a decimal. <https://css-tricks.com/almanac/properties/o/opacity/> ``` .test{ opacity: 0.3; } ``` **Intended Goal:** ``` .test{ opacity: 30%; } ```
2019/07/12
[ "https://Stackoverflow.com/questions/57012564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11736013/" ]
Yes it's possible if you consider `filter` ```css .box { filter:opacity(30%); background:red; height:20px; } ``` ```html <div class="box"> </div> ``` You will even have better performance because: > > This function is similar to the more established opacity property; the difference is that with filters, some browsers provide hardware acceleration for better performance.[ref](https://developer.mozilla.org/en-US/docs/Web/CSS/filter-function/opacity) > > > --- Simply pay attention to some special behavior related to stacking context: [CSS-Filter on parent breaks child positioning](https://stackoverflow.com/q/52937708/8620333)
No, [decimals only](https://www.w3.org/TR/2018/REC-css-color-3-20180619/#transparency). > > Any values outside the range 0.0 (fully transparent) to 1.0 (fully opaque) will be clamped to this range. > > >
57,012,564
Is it possible to define opacity in CSS by percentage (e.g. 30%)? It does not seem to be working; right now I can only specify it as a decimal. <https://css-tricks.com/almanac/properties/o/opacity/> ``` .test{ opacity: 0.3; } ``` **Intended Goal:** ``` .test{ opacity: 30%; } ```
2019/07/12
[ "https://Stackoverflow.com/questions/57012564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11736013/" ]
Using a percentage is fully valid css, according to the spec: <https://developer.mozilla.org/en-US/docs/Web/CSS/opacity#Values> > > `alpha-value` > A `number` in the range 0.0 to 1.0, inclusive, or a `percentage` in the range 0% to 100%, inclusive, representing the opacity of the channel (that is, the value of its alpha channel). Any value outside the interval, though valid, is clamped to the nearest limit in the range. > > > So, either of these should be okay, according to spec: ```css .foo { opacity: .3; } .foo { opacity: 30%; } ``` Keep in mind, though, that if you are using Sass, it will not handle the `percentage` value, and will compile it to `1%`.
No, [decimals only](https://www.w3.org/TR/2018/REC-css-color-3-20180619/#transparency). > > Any values outside the range 0.0 (fully transparent) to 1.0 (fully opaque) will be clamped to this range. > > >
57,012,564
Is it possible to define opacity in CSS by percentage (e.g. 30%)? It does not seem to be working; right now I can only specify it as a decimal. <https://css-tricks.com/almanac/properties/o/opacity/> ``` .test{ opacity: 0.3; } ``` **Intended Goal:** ``` .test{ opacity: 30%; } ```
2019/07/12
[ "https://Stackoverflow.com/questions/57012564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11736013/" ]
If you really want to use a 0 to 100 range, you can calculate the decimal automatically: ```css element { opacity: calc(40 / 100); } ``` or you can use a variable to make it clearer: ```css element { --opacity-percent: 40; opacity: calc(var(--opacity-percent) / 100); } ``` But both of these are less clear than just using a decimal like the standard says, so I wouldn't recommend them unless there's a really valid reason.
Yes it's possible if you consider `filter` ```css .box { filter:opacity(30%); background:red; height:20px; } ``` ```html <div class="box"> </div> ``` You will even have better performance because: > > This function is similar to the more established opacity property; the difference is that with filters, some browsers provide hardware acceleration for better performance.[ref](https://developer.mozilla.org/en-US/docs/Web/CSS/filter-function/opacity) > > > --- Simply pay attention to some special behavior related to stacking context: [CSS-Filter on parent breaks child positioning](https://stackoverflow.com/q/52937708/8620333)
57,012,564
Is it possible to define opacity in CSS by percentage (e.g. 30%)? It does not seem to be working; right now I can only specify it as a decimal. <https://css-tricks.com/almanac/properties/o/opacity/> ``` .test{ opacity: 0.3; } ``` **Intended Goal:** ``` .test{ opacity: 30%; } ```
2019/07/12
[ "https://Stackoverflow.com/questions/57012564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11736013/" ]
Using a percentage is fully valid css, according to the spec: <https://developer.mozilla.org/en-US/docs/Web/CSS/opacity#Values> > > `alpha-value` > A `number` in the range 0.0 to 1.0, inclusive, or a `percentage` in the range 0% to 100%, inclusive, representing the opacity of the channel (that is, the value of its alpha channel). Any value outside the interval, though valid, is clamped to the nearest limit in the range. > > > So, either of these should be okay, according to spec: ```css .foo { opacity: .3; } .foo { opacity: 30%; } ``` Keep in mind, though, that if you are using Sass, it will not handle the `percentage` value, and will compile it to `1%`.
If you really want to use a 0 to 100 range, you can calculate the decimal automatically: ```css element { opacity: calc(40 / 100); } ``` or you can use a variable to make it clearer: ```css element { --opacity-percent: 40; opacity: calc(var(--opacity-percent) / 100); } ``` But both of these are less clear than just using a decimal like the standard says, so I wouldn't recommend them unless there's a really valid reason.
57,012,564
Is it possible to define opacity in CSS by percentage (e.g. 30%)? It does not seem to be working; right now I can only specify it as a decimal. <https://css-tricks.com/almanac/properties/o/opacity/> ``` .test{ opacity: 0.3; } ``` **Intended Goal:** ``` .test{ opacity: 30%; } ```
2019/07/12
[ "https://Stackoverflow.com/questions/57012564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11736013/" ]
Using a percentage is fully valid css, according to the spec: <https://developer.mozilla.org/en-US/docs/Web/CSS/opacity#Values> > > `alpha-value` > A `number` in the range 0.0 to 1.0, inclusive, or a `percentage` in the range 0% to 100%, inclusive, representing the opacity of the channel (that is, the value of its alpha channel). Any value outside the interval, though valid, is clamped to the nearest limit in the range. > > > So, either of these should be okay, according to spec: ```css .foo { opacity: .3; } .foo { opacity: 30%; } ``` Keep in mind, though, that if you are using Sass, it will not handle the `percentage` value, and will compile it to `1%`.
Yes it's possible if you consider `filter` ```css .box { filter:opacity(30%); background:red; height:20px; } ``` ```html <div class="box"> </div> ``` You will even have better performance because: > > This function is similar to the more established opacity property; the difference is that with filters, some browsers provide hardware acceleration for better performance.[ref](https://developer.mozilla.org/en-US/docs/Web/CSS/filter-function/opacity) > > > --- Simply pay attention to some special behavior related to stacking context: [CSS-Filter on parent breaks child positioning](https://stackoverflow.com/q/52937708/8620333)
3,791,677
I am using [MVVM Foundation](http://mvvmfoundation.codeplex.com/) but I think it's quite straightforward and not really framework specific. My setup is as follows: * StartViewModel - has an ExitCommand that returns a RelayCommand/ICommand ``` public ICommand ExitCommand { get { return _exitCommand ?? (_exitCommand = new RelayCommand(() => MessageBox.Show("Hello World"))); } } public RelayCommand _exitCommand; ``` * StartView (User Control) has a button bound to the ExitCommand ``` <Button Content="Exit" Command="{Binding ExitCommand}" /> ```
2010/09/24
[ "https://Stackoverflow.com/questions/3791677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/292291/" ]
First, read as much as you can stomach on MVVM, e.g. [WPF Apps With The Model-View-ViewModel Design Pattern](http://msdn.microsoft.com/en-us/magazine/dd419663.aspx) on MSDN. Once you understand the basic principles driving it the answer will seem more reasonable. Basically you want to keep your View (UI) and ViewModel (essentially abstract UI, but also abstract Model) layers [separate](http://en.wikipedia.org/wiki/Separation_of_concerns) and decoupled. Showing a message box or closing a window should be considered a UI specific detail and therefore implemented in the View, or in the case of a message box, more generally available via a 'Service'. With respect to the ViewModel, this is achieved using [Inversion of Control](http://en.wikipedia.org/wiki/Inversion_of_control) (IoC). Take the message box example above. Rather than showing the message box itself, it takes a dependency on an IMessageBoxService which has a Show method and the ViewModel calls that instead - delegating responsibility. This could be taken further by leveraging [Dependency Injection](http://en.wikipedia.org/wiki/Dependency_injection) (DI) containers. Another approach used for closing a View window might be for the ViewModel to expose an event, called for example RequestClose (as in the MSDN article), that the View subscribes to. Then the ViewModel would raise the event when it wants the corresponding View / window to close; it assumes something else is listening and will take responsibility and actually do it.
You can implement a CloseEvent in your StartViewModel. In your StartView you have to register for this CloseEvent. When you raise the close event from your VM, your View recognizes that it has to close the app/window.
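A minimal sketch of that wiring follows. This is a WPF fragment, not a complete program, and the names (`RequestClose`, `StartWindow`) are illustrative choices of mine, not types from MVVM Foundation:

```csharp
// ViewModel side: expose an event instead of touching the window directly.
public class StartViewModel
{
    public event EventHandler RequestClose;

    private void OnExit()
    {
        // Raised from the ExitCommand; the ViewModel never references the View.
        RequestClose?.Invoke(this, EventArgs.Empty);
    }
}

// View side (code-behind): subscribe and do the UI-specific work.
public partial class StartWindow : Window
{
    public StartWindow(StartViewModel vm)
    {
        InitializeComponent();
        DataContext = vm;
        vm.RequestClose += (s, e) => Close();
    }
}
```

This keeps the ViewModel unit-testable: a test can subscribe to `RequestClose` and assert it fires, without any window existing.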
30,547,638
I'm trying to submit my first iPhone app to the App Store, but when I click Validate or Submit to App Store I get this error: "iTunes Store operation failed. The network connection was lost." Can anyone help me, please?
2015/05/30
[ "https://Stackoverflow.com/questions/30547638", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3972984/" ]
I assume that `m_parts` is a vector of pointers to the common base class? Then you need to [*downcast*](http://www.bogotobogo.com/cplusplus/upcasting_downcasting.php#Downcasting) to the correct class, like ``` static_cast<ActualClassThisIs*>(m_parts[i])->getThrust(); ``` Note that this only works if the class inherits from the base class. Also note that it will not work if the actual class *isn't* what you cast it to. Then you might have to use `dynamic_cast`, which returns `nullptr` on failure, so you can test and call with a single cast: ``` if (auto* part = dynamic_cast<ActualClassThisIs*>(m_parts[i])) part->getThrust(); ```
Your last part, > > How do I declare a vector within Rocket that is able to contain all subclasses of RocketPart and in the main how do I call calcTWR(), so that it actually gives back the total thrust, regardless of the contents of the vector? > > > Could be done with something along these lines: ``` class IPart { public: virtual ~IPart() = default; // virtual dtor for safe polymorphic deletion virtual double getThrust() = 0; }; class MetalPart : public IPart { public: double getThrust() override { return 5.5; } }; class PlasticPart : public IPart { public: double getThrust() override { return 3.3; } }; class CrappyPart : public IPart { public: double getThrust() override { return 1.1; } }; ``` and then you keep them in a ``` std::vector<IPart*> parts; ``` And can then calculate the total thrust with something like this: ``` double total = 0; for(auto part : parts) { total += part->getThrust(); } ``` Sorry, your diagram wasn't readable on my screen so I made some names up.
39,611,708
So I have 3 tables in MySQL. events ``` | ID | event_name | ------------------- | 1 | test | | 2 | test2 | | 3 | test3 | ``` sp\_events ``` | ID | event_ID | show_in_grid | -------------------------------- | 5 | 1 | 1 | | 6 | 2 | 1 | ``` sp\_event\_dates ``` | ID | sp_event_ID | start_date | ------------------------------------------ | 1 | 5 | 2016-10-31 14:00:00 | | 2 | 5 | 2016-11-01 14:00:00 | | 3 | 5 | 2016-11-02 14:00:00 | | 4 | 6 | 2016-12-01 14:00:00 | | 5 | 6 | 2016-12-02 14:00:00 | ``` So I'm trying to join the sp\_event\_dates table, but I only want the first result and don't want a duplicate row for every date. The SQL I've tried is ``` SELECT events.*, sp_event_dates.start_date FROM events JOIN sp_events ON sp_events.event_ID=events.ID JOIN sp_event_dates ON sp_event_dates.sp_event_ID = (SELECT dd.ID FROM sp_event_dates dd WHERE sp_events.ID = dd.sp_event_ID ORDER BY dd.start_date ASC LIMIT 1) WHERE sp_events.show_in_grid=1; ``` This doesn't work as intended. I would expect the results to be as below: ``` | ID | event_name | start_date | ----------------------------------------- | 1 | test | 2016-10-31 14:00:00 | | 2 | test2 | 2016-12-01 14:00:00 | ``` I do eventually plan to add a WHERE clause on the start\_date, but I'm just trying to get this to work first. Can anyone see what I'm doing wrong? My query currently returns no results.
2016/09/21
[ "https://Stackoverflow.com/questions/39611708", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1680768/" ]
Just use a GROUP BY + MIN: ``` SELECT events.id ,events.event_name ,min(sp_event_dates.start_date) As start_date FROM events JOIN sp_events ON sp_events.event_ID=events.ID JOIN sp_event_dates ON sp_event_dates.sp_event_ID = sp_events.ID WHERE sp_events.show_in_grid=1 GROUP BY events.id ,events.event_name ```
You need to `GROUP BY` the event ID, and aggregate the date with `MIN()` so you reliably get the earliest one (without the aggregate, MySQL would pick an arbitrary date per group, or reject the query under `ONLY_FULL_GROUP_BY`). **SQL:** ``` SELECT events.ID, events.event_name, MIN(sp_event_dates.start_date) AS start_date FROM `events` INNER JOIN sp_events ON sp_events.event_ID = events.ID INNER JOIN sp_event_dates ON sp_event_dates.`sp_event_ID` = sp_events.`ID` WHERE sp_events.show_in_grid = 1 GROUP BY events.ID, events.event_name ``` **Output:** ``` ID event_name start_date 1 test 2016-10-31 14:00:00 2 test2 2016-12-01 14:00:00 ```
7,858,228
This is just an academic question (I would never do this in real code): **If I were to use shared\_ptr<> universally in my code, would the behavior be equivalent to a garbage-collected language like Java?** If not, how would the behavior be different from a GC-embedded language? Which C++ construct would yield equivalent behavior compared to a GC-embedded language? Note: In real coding, I strongly prefer the use of RAII and strict ownership over the use of any smart pointers. I also know that other, more specialized smart pointers such as unique\_ptr<> would be more efficient. This question is just a query into smart-pointer equivalence.
2011/10/22
[ "https://Stackoverflow.com/questions/7858228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/975129/" ]
No, there'd be a couple of important differences: * You would get a memory leak any time you have a cyclic reference. A garbage collector can handle cycles, ref-counting can't. * You would avoid any stalls or pauses because no garbage collection ever occurs. On the other hand, you'd likely spend more total CPU time cleaning up resources, because the amortized cost of an occasional garbage collection is pretty low, and ref-counting can be relatively expensive if you do it on everything. Obviously the first point is the killer. If you did this, many of your resources wouldn't get freed, and you'd leak memory and your app just wouldn't behave very well. > > Which C++ construct would yield equivalent behavior compared to a gc-embedded language? > > > None. C++ doesn't have a garbage collector because there's no way to implement a correct, reliable one. (Yes, I'm aware of Boehm's GC, and it's a good approximation, but it's conservative, and doesn't detect all references, only the ones it can be 100% sure of. There is no way, in a general C++ program, to implement a garbage collector that Just Works(tm))
Garbage collection happens whenever the GC decides that it should. `shared_ptr`s are not collected. An object managed by a `shared_ptr` will only *ever* be destroyed in the destructor of a `shared_ptr`. And therefore, you know exactly when memory can and can *not* be freed. You still have control over when memory goes away with `shared_ptr`. You don't have that with a garbage collector (outside of coarse-grained commands like turning it on/off or modifying its behavior a bit).