Knockout bind optional fields
I'm trying to bind a JSON object containing optional fields using Knockout.js.
The problem is I keep getting this error:
Unable to process binding "value: function..."
And I can't add the missing fields as I need them to remain missing (the missing fields are taken from a "parent" JSON)
Is there any option to tell knockout js to ignore those fields and only add them if a user types anything in the field?
Any chance we could get a jsfiddle or plunkr of your problem?
It's quite complicated.
I made a simple example: link
The idea is sometimes I have a full JSON and sometimes I have partial.
I mustn't add empty fields unless the user intends to add the field (it's a child/parent inheritance mechanism).
If you click on "apply Full" it will work. I just need "apply Partial" to ignore the missing field and create it if the user types something in the input field.
You can bind to a non-existent view-model property if you use the property syntax, like $data.property.
<input type="text" data-bind="value: $data.key">
https://jsfiddle.net/hrfq3wdh/2/
can you just use hasOwnProperty?
var data = {
foo1: 'bar1',
foo3: 'bar3'
}
function viewModel(mydata) {
var self = this;
this.foo1 = ko.observable(mydata.foo1);
this.foo2 = ko.observable(mydata.hasOwnProperty('foo2') ? mydata.foo2 : '');
this.foo3 = ko.observable(mydata.foo3);
}
var vm = new viewModel(data);
(function($) {
ko.applyBindings(vm); //bind the knockout model
})(jQuery);
https://jsfiddle.net/0o89pmju/8/
Bryan, thanks for your answer. It will work, but I need to translate the JSON into ko observables, and I have more than 40 fields in a complex hierarchical structure.
Is there any simpler solution?
Maybe another possibility would be to use the ko mapping plugin and the with data binding; run the snippet below.
var partial = {value : 456};
var full = { key: "abc",
value : 123};
function viewModel() {
var self = this;
this.full=ko.observable('');
this.partial=ko.observable('');
this.applyFull = function(){
self.full(ko.mapping.fromJS(full));
self.partial('');
}
this.applyPartial = function(){
self.full('');
self.partial(ko.mapping.fromJS(partial));
}
}
var vm = new viewModel();
(function($) {
ko.applyBindings(vm); //bind the knockout model
})(jQuery);
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.4.2/knockout-min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/knockout.mapping/2.4.1/knockout.mapping.min.js"></script>
<div data-bind="with: full">
Key <input type="text" data-bind="value: key"> Value <input type="text" data-bind="value: value">
</div>
<div data-bind="with: partial">
Value <input type="text" data-bind="value: value">
</div>
<button data-bind="click: applyFull"> apply Full </button>
<button data-bind="click: applyPartial"> apply Partial </button>
All the answers are good but too complex to implement on my small system.
It appears the problem was trying to access a field inside an inner object which doesn't exist (for example, main.inner.price when inner is undefined).
I ended up creating an empty object whenever it's missing in the main object using main.inner = main.inner || {}
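As a minimal sketch of that fix (the object shape and field names here are hypothetical, not from the original fiddle):

```javascript
// Hypothetical partial payload: "inner" is missing entirely.
var main = { name: "item-1" };

// Default the missing nested object so bindings like main.inner.price
// don't throw when "inner" is undefined.
main.inner = main.inner || {};

// Now reading an optional field is safe: it is simply undefined
// instead of raising a TypeError.
console.log(main.inner.price);
```

A field the user later fills in can then be assigned onto the (now existing) empty object.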
Writing an animated gif from BufferedImages
I have a program that loads a spritesheet, creates BufferedImages of each frame on the sheet, and writes it to an animated gif. Using the class provided by Elliot Kroo on Creating animated GIF with ImageIO?, I was able to successfully output a file. However, the gif doesn't animate quite right. The sheet I provided was a .png with a transparent background, so every subsequent frame is just put on top of the last frames, with differences showing through the transparent background (Example). Here's the code that uses the gif writer:
ImageOutputStream output = new FileImageOutputStream(new File(savePath));
GifSequenceWriter writer = new GifSequenceWriter(output, data[0].getType(), delay, true);
for(BufferedImage bi:data)
{
writer.writeToSequence(bi);
}
writer.close();
output.close();
Where data is an array of each frame as a BufferedImage (which I have also checked, and don't seem to be the problem). Is this a limitation of .gifs or the Java ImageWriters? Or can I edit a setting somewhere to prevent this? I'd rather not put a background if I don't have to.
Assuming data is an array of BufferedImage, the imageType parameter to the GifSequenceWriter constructor would likely be TYPE_INT_ARGB for data read from a .png file. I would expect transparentColorFlag to be true, but you'd have to determine the transparentColorIndex empirically.
graphicsControlExtensionNode.setAttribute("transparentColorFlag", "TRUE");
graphicsControlExtensionNode.setAttribute("transparentColorIndex", ???);
See also this answer.
Ah. Is there no way to edit the behavior of the gif writer to not use the optimized frames? Also, the color for the transparent background seemed to be black (0,0,0), but passing 0 for the transparentColorIndex doesn't seem to affect anything (not even the normal black goes away).
Not as far as I know. Kroo's code seems to assume opaque images. A .gif lacks an alpha channel; it only has a color index meant to be interpreted as transparent. If you need to preserve the .png transparency, you might try this AnimatedGifEncoder.
Do you know what I should assign the attribute transparentColorIndex? It doesn't seem to be an int with ARGB color.
Right, transparentColorIndex isn't a color; it's an index into the .gif color table; it would be whatever color your encoder chooses, e.g. findClosest(). Note that GifSequenceWriter uses the ImageIO encoder, while AnimatedGifEncoder does its own encoding.
findClosest() is a method of AnimatedGifEncoder, which does its own encoding. It doesn't use the ImageIO encoder.
How to Remove Space Additionally to the Special Chars?
In my functions.php I have this code:
echo '<a href="'.preg_replace('/\s/','-',$search).'-keyword1.html">'.urldecode($search).'</a>';
This removes the special chars.
But how can I additionally remove spaces and replace them with -, and also remove " characters?
So, if someone types in "yo! here" I want yo-here
Try:
<?php
$str = '"yo! here"';
$str = preg_replace( array('/[^\s\w]/','/\s/'),array('','-'),$str);
var_dump($str); // prints yo-here
?>
If you want to replace a run of unwanted characters with a single dash, then you can use something like this:
preg_replace('/\W+/', '-', $search);
To remove surrounding quotes, and then replace any other junk with dashes, try this:
$no_quotes = preg_replace('/^"|"$/', '', $search);
$no_junk = preg_replace('/\W+/', '-', $no_quotes);
For the OP's input: '"yo! here"' this will produce '-yo-here-'
This will replace multiple spaces / "special" chars with a single hyphen. If you don't want that, remove the "+".
You might want to trim off any trailing hyphens, should something end with an exclamation point / other.
<?php
preg_replace("/\W+/", "-", "yo! here check this out");
?>
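The same replace-then-trim idea can be sketched in JavaScript, whose regex semantics are close enough to PHP's PCRE for this illustration (a hedged sketch, not the PHP answer itself):

```javascript
var search = '"yo! here"';
// Replace each run of non-word characters with a single hyphen,
// then strip any leading/trailing hyphens left by the surrounding quotes.
var slug = search.replace(/\W+/g, "-").replace(/^-+|-+$/g, "");
console.log(slug); // "yo-here"
```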
You can remove any non-words:
preg_replace('/\W/', '-', $search);
Does Convert.ToInt32(object) skip unboxing?
The IL produced from the following:
object[] items = new object[] { 341, "qwerty" };
int item1FromConvert = Convert.ToInt32(items[0]);
int item1FromCast = (int)items[0];
Is (according to LINQPad 4):
IL_0001: ldc.i4.2
IL_0002: newarr System.Object
IL_0007: stloc.3 // CS$0$0000
IL_0008: ldloc.3 // CS$0$0000
IL_0009: ldc.i4.0
IL_000A: ldc.i4 55 01 00 00
IL_000F: box System.Int32
IL_0014: stelem.ref
IL_0015: ldloc.3 // CS$0$0000
IL_0016: ldc.i4.1
IL_0017: ldstr "qwerty"
IL_001C: stelem.ref
IL_001D: ldloc.3 // CS$0$0000
IL_001E: stloc.0 // items
IL_001F: ldloc.0 // items
IL_0020: ldc.i4.0
IL_0021: ldelem.ref
IL_0022: call System.Convert.ToInt32
IL_0027: stloc.1 // item1FromConvert
IL_0028: ldloc.0 // items
IL_0029: ldc.i4.0
IL_002A: ldelem.ref
IL_002B: unbox.any System.Int32
IL_0030: stloc.2 // item1FromCast
item1FromConvert appears to skip the unboxing stage that is typical when casting an object to an int, or any value type (from my extremely limited understanding of IL, based on the fact that there is no unbox.any on the line above the one with the comment // item1FromConvert).
Is this indeed the case and will Convert.ToValueType(object) always save me the unboxing if the object is the said value type?
Do you have the IL for System.Convert.ToInt32? Does the unboxing happen in there?
There's no unboxing at the call site; you are instead just getting a method call, which is just a different way of doing it. It's not really going to save you time from a performance perspective. All you're doing is calling Convert.ToInt32(object), the implementation of which is:
public static int ToInt32(object value) {
return value == null? 0: ((IConvertible)value).ToInt32(null);
}
(see here: http://referencesource.microsoft.com/#mscorlib/system/convert.cs#3a271a647f117003)
So if anything, you're actually decreasing your performance because of the various operations happening in the implementation of Convert.ToInt32(object), which you wouldn't have if you just cast the value in the first place.
You're trying to micro-optimise, and actually making the number of total IL instructions executed increase, because LINQPad won't show you the IL of the methods you're calling.
Thanks, but please don't assume I am trying to micro-optimise, the question is academic.
OK fair enough, you're right I shouldn't have assumed that, but I inferred it from your comment "save me the unboxing", where "save" suggests that you're trying to reduce something (e.g. overhead).
Webservice method to call a url
I have a webservice, it has its wsdl and everything works fine when I make a call to my web service.
What I want to do now is call a url from somewhere within my web service method. In c# code behind I can do it something like this:
Response.Redirect("Insurance.aspx?fileno1=" + txtFileNo1.Text + "&fileno2=" + txtFileNo2.Text + "&docid=" + Convert.ToString(GridView1.SelectedDataKey[2]));
but the Response.Redirect option is not available on the asmx page.
Is something like this possible? If so, I would be grateful if anybody can show me how. I've tried searching everywhere but can only find topics about calling a web service, or calling a web service inside another web service, but no topics on calling a url from within your web service. Any help would be greatly appreciated.
What exactly do you mean by "call a url"? Do you mean redirect the user? If so, you can access the current Response in your web service by calling HttpContext.Current.Response.Redirect(...)
Call a URL like "www.insuranceini.com/insurance.aspx?fileno1="+txtfileno1. My client calls my web service, which then makes a call to another one of my APIs, like the link above, which processes the data the client is sending me.
@Dave Zych do you suppose the HttpContext you've mentioned will work for my scenario just clarified above?
The Response.Redirect method sends a Status Code 300 to the browser which directs the user to a new page. What you want to do is create a WebRequest and parse the response:
string url = string.Format("http://www.insuranceini.com/insurance.aspx?fileno1={0}", txtfileno1);
WebRequest request = HttpWebRequest.Create(url);
using(WebResponse response = request.GetResponse())
{
using(StreamReader reader = new StreamReader(response.GetResponseStream()))
{
string urlText = reader.ReadToEnd();
//Do whatever you need to do
}
}
EDIT: I wrapped the WebResponse and StreamReader objects in using statements so they are disposed of properly once you're finished with them.
Ok let me try that... thanks! So, for example, if my insurance.aspx has about 5 parameters (txtfileno1, txtfileno2, username, userid, dteinsured), how will they appear in the url that you have mentioned above?
I'm using the string.Format method. So with multiple parameters, your could do: string.Format("www.insuranceini.com/insurance.aspx?txtfileno1={0}&txtfileno2={1}&username={2}&userid={3}&dteinsured={4}", txtfileno1, txtfileno2, username, userid, dteinsured), where everything outside of the string is a variable.
Hide elements using jquery on a condition
I would like to hide the content of my page on a certain condition. How can I do this using jQuery?
Example
if(type=="member")
{
// hide some elements..
}
Thanks for the help in advance.
The value of type is being fetched by session.getAttribute() from a servlet.
http://api.jquery.com/hide/
I had a similar problem in my project, where I had to check if the user is an admin or just a simple user. What I did is something like this:
In my JSP page I was retrieving the session (just as you are doing): User user = (User) session.getAttribute("loginDone"); In my case I had named my session attribute loginDone. Then in the spot where I wanted to check what type the user is, I wrote a piece of scriptlet:
<% if (user!= null && user.isAdmin()) { %>
// the code for the Admin User.
<% } %>
<% if (user != null && !user.isAdmin()) { %>
// the code for the simple User.
<% } %>
In my case I just want to remove a few tabs.
You can use this method as well! If the first condition is true, show what you want; if not, show something else. All your elements will be there, but they are shown depending on the condition check.
If my answer helped you, you can give it an upvote and accept it!
Try this:
if(type=="member"){
$(yourDom).hide();
}
Can this be used inside a jsp tag??
Sorry, I am not familiar with JSP. Just know that these are JavaScript codes, which can be put in a script tag in an HTML file.
if (type === "member") {
$('#target1').hide(); //or
$('#target1').css('display', 'none');
}
.hide() is similar to CSS property display:none
{Also, I just saw xiaocui included that in his example.}
in the second example you can use 'hidden' as the value if you don't want your layout to be affected.
If you're confused about how to add jquery to a jsp page, theres a discussion on that here jQuery adding to JSP page
How to keep a temporary mysqli table available in PHP during statement execution?
I am busy trying to execute a set of statements that involve the use of a temporary table.
My goal is to create the temporary table, insert values to it and then do a like comparison of the temporary tables contents to another table.
These statements work perfectly in phpMyAdmin when executed as raw SQL, but from PHP I'm assuming that the table is not available when I try to insert the data.
Below is the code for my php function + mysqli execution:
function SearchArticles($Tags){
global $DBConn, $StatusCode;
$count = 0;
$tagCount = count($Tags);
$selectText = "";
$result_array = array();
$article_array = array();
foreach($Tags as $tag){
if($count == 0){
$selectText .= "('%".$tag."%')";
}else {
$selectText .= ", ('%".$tag."%')";
}
$count++;
}
$query = "CREATE TEMPORARY TABLE tags (tag VARCHAR(20));";
$stmt = $DBConn->prepare($query);
if($stmt->execute()){
$query2 = "INSERT INTO tags VALUES ?;";
$stmt = $DBConn->prepare($query2);
$stmt->bind_param("s", $selectText);
if($stmt->execute()){
$query3 = "SELECT DISTINCT art.ArticleID FROM article as art JOIN tags as t ON (art.Tags LIKE t.tag);";
$stmt = $DBConn->prepare($query3);
if($stmt->execute()){
$stmt->store_result();
$stmt->bind_result($ArticleID);
if($stmt->num_rows() > 0){
while($stmt->fetch()){
array_push($article_array, array("ArticleID"=>$ArticleID));
}
array_push($result_array, array("Response"=>$article_array));
}else{
array_push($result_array, array("Response"=>$StatusCode->Empty));
}
}else{
array_push($result_array, array("Response"=>$StatusCode->SQLError));
}
}else{
array_push($result_array, array("Response"=>$StatusCode->SQLError));
}
}else{
array_push($result_array, array("Response"=>$StatusCode->SQLError));
}
$stmt->close();
return json_encode($result_array);
}
The first statement executes perfectly, however the second statement gives me the error of:
PHP Fatal error: Call to a member function bind_param() on a non-object
If this is an error to do with the temp table not existing, how do I preserve this table long enough to run the rest of the statements?
I have tried to use:
$stmt = $DBConn->multi_query($query);
with all the queries in one, but i need to insert data to one query and get data from the SELECT query.
Any help will be appreciated, thank you!
This is not an issue with the temporary table. It should remain throughout the same connection (unless it resets with timeout, not sure about this part).
The error is that $stmt is a non-object. This means that your query was invalid (syntax error), so mysqli refused to create an instance of mysqli_stmt and returned a boolean instead.
Use var_dump($DBConn->error) to see if there are any errors.
Edit: I just noticed that your query $query2 is INSERT INTO tags VALUES ? (the ; is redundant anyway). If this becomes a string "text", this would become INSERT INTO tags VALUES "text". This is a SQL syntax error. You should wrap the ? with (), so it becomes INSERT INTO tags VALUES (?).
In conclusion, change this line:
$query2 = "INSERT INTO tags VALUES ?;";
to:
$query2 = "INSERT INTO tags VALUES (?);";
also note that you don't need the ; to terminate SQL statements passed into mysqli::prepare.
Brilliant, going through this I noticed it. Thank you very much for your response. It is much appreciated.
You have a simple syntax error use the brackets around the parameters like this
INSERT INTO tags VALUES (?)
How can chat be cached when the Pusher channel disconnects or the internet turns off in a Flutter app?
Actually I am working on a chat app with a Laravel backend. I receive messages using a Pusher channel. The problem is caching chat for offline mode, like other chat apps do.
Is SQFlite a solution, or is there any other way to make it easier?
Can you please add some more details on what you're asking? Are you wanting to cache the chat when the app goes offline?
use Hive or ObjectBox for saving messages to local storage
Greedy Algorithm implementation
So I have some questions concerning the solution to the problem of scheduling n activities that may overlap using the least amount of classrooms possible. The solution is below:
Find the smallest number of classrooms to schedule a set of activities S in. To do this efficiently,
move through the activities according to starting and finishing times. Maintain two lists of classrooms: Rooms that are busy at time t and rooms that are free at time t. When t is the starting time
for some activity schedule this activity to a free room and move the room to the busy list.
Similarly, move the room to the free list when the activity stops. Initially start with zero rooms. If
there are no rooms in the free list create a new room.
The algorithm can be implemented by sorting the activities. At each start or finish time we can
schedule the activities and move the rooms between the lists in constant time. The total time is thus
dominated by sorting and is therefore O(n lg n).
My questions are
1) First, how do you move through the activities by both starting and finishing time at the same time?
2) I don't quite understand how it's possible to move the rooms between lists in constant time. If you want to move rooms from the busy list to the free list, don't you have to iterate over all the rooms in the busy list and see which ones have end times that have already passed?
3) Are there any 'state' variables that we need to keep track of while doing this to make it work?
The way the algorithm works, you need to create a list containing an element for each start time and an element for each end time (so 2n elements in total if there are n activities). Sort this list. When an end time and a start time are equal, sort the end time first -- this will cause back-to-back bookings for halls to work.
If you use linked lists for holding the free and booked halls, you can have the elements you created in step 1 hold pointers back to an activity structure, and this structure can hold a pointer to the list element containing the hall that this activity is assigned to. This will be NULL initially, and will take on a value when that hall is used for that activity. Then when that activity ends, its hall can be looked up in constant time by following two pointers from the activity-end element (first to the activity object, and from there to the hall element).
That should be clear from the above description, hopefully.
So the halls themselves don't have to be in a linked list with each other; they just have to be linked to events, which are in turn linked to start and end times?
@user1855952: The halls should still be in 1 of 2 lists, indicating whether they are used or free, so that when a "start-activity" item is seen in the time-sorted event list it is always possible to grab a free hall (just take the 1st one in the free list). When you say "linked to" I don't know which direction you mean: what I mean is that the start and end time objects point to event objects, which point to hall objects.
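The sweep described above can be sketched in Python. This is a hedged sketch, not the answer's exact data structures: it uses a simple stack of free room numbers in place of the linked lists, which still makes each event O(1) after the O(n lg n) sort.

```python
def assign_rooms(activities):
    """activities: list of (start, finish) pairs.
    Returns (number_of_rooms_used, {activity_index: room_number})."""
    # Build one event per start time and one per finish time.
    # End events (kind 0) sort before start events (kind 1) at equal
    # times, so back-to-back bookings can reuse the same room.
    events = []
    for i, (start, finish) in enumerate(activities):
        events.append((start, 1, i))
        events.append((finish, 0, i))
    events.sort()

    free, assignment, rooms = [], {}, 0
    for _, kind, i in events:
        if kind == 1:
            # Activity i starts: grab a free room, or open a new one
            # if the free list is empty.
            if not free:
                free.append(rooms)
                rooms += 1
            assignment[i] = free.pop()
        else:
            # Activity i ends: its room becomes free again.
            free.append(assignment[i])
    return rooms, assignment

print(assign_rooms([(0, 2), (1, 3), (2, 4)]))
```

Here the per-activity room lookup on an end event is a dict access rather than the answer's pointer chain, but the effect is the same constant-time move between the busy and free lists.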
scrollbar for statictext in wxpython?
Is it possible to add a scrollbar to a StaticText in wxPython?
The thing is that I'm creating this StaticText:
self.staticText1 = wx.StaticText(id=wxID_FRAME1STATICTEXT1,label=u'some text here',name='staticText1', parent=self.panel1, pos=wx.Point(16, 96),
size=wx.Size(408, 216),style=wx.ST_NO_AUTORESIZE | wx.THICK_FRAME | wx.ALIGN_CENTRE | wx.SUNKEN_BORDER)
self.staticText1.SetBackgroundColour(wx.Colour(255, 255, 255))
self.staticText1.SetBackgroundStyle(wx.BG_STYLE_SYSTEM)
self.staticText1.SetFont(wx.Font(9, wx.SWISS, wx.NORMAL, wx.BOLD, False,u'MS Shell Dlg 2'))
self.staticText1.SetAutoLayout(True)
self.staticText1.SetConstraints(LayoutAnchors(self.staticText1, False,True, True, False))
self.staticText1.SetHelpText(u'')
But later I use StaticText.SetLabel to change the label, and the new text is too big to fit the window, so I need to add a scrollbar to the StaticText.
I tried adding wx.VSCROLL to the style, and the scrollbar shows up, but I can't scroll down to see the rest of the text.
wx.StaticText is designed to never respond to mouse events and never take user focus. Given that this is its role in life, it seems that a scrollbar would be inconsistent with its purpose.
There are two ways to get what you want: 1) You could use a regular TextCtrl with the style TE_READONLY (see here); or 2) you could make a scrolled window that contains your StaticText control.
I'm creating a file named "helloworld.txt" and saving some inputs, but after I re-run the code the contents that I saved disappear.
1 - This is the input image
2 - This is the image with input saved through the program
3 - This is the image after I re-run the program
In the third picture I just re-ran the main file without inputting anything, but when I go and check the file it is empty.
If anything is missing or in any way you can help me I will be thankful to you.
This is my OOP project. As I don't want to remake those management systems that are all over the Internet, I chose this project, and I want to create it as easily as I can. I don't copy my teacher's work or any other student's; I get help from them but write easy code for my own understanding.
import java.io.*;
import java.util.*;
import java.util.Scanner;
public class PortalTest {
public static void main(String[] args) {
BufferedWriter in;
BufferedReader in1;
try {
in = new BufferedWriter(new FileWriter("helloworld.txt"));
in1 = new BufferedReader(new FileReader("helloworld.txt"));
int moviesMenuInput;
Portal portal = new Portal();
Movies movies = new Movies();
Games games = new Games();
TvShows tvShows = new TvShows();
Music music = new Music();
portal.displayData();
Scanner input1 = new Scanner(System.in);
int menuInput = input1.nextInt();
moviesMenu:
while (true)
{
switch (menuInput) {
case 1:
System.out.println("1 - ADD MOVIES");
System.out.println("2 - REMOVE MOVIES");
System.out.println("3 - SEARCH MOVIES");
System.out.println("4 - RETURN TO MENU");
Scanner input2 = new Scanner(System.in);
moviesMenuInput = input2.nextInt();
switch (moviesMenuInput) {
case 1:
System.out.println("Enter Movie Name : ");
Scanner input6 = new Scanner(System.in);
String addMoviesNameInput = input6.nextLine();
movies.setMovieName(addMoviesNameInput);
in.write(addMoviesNameInput);
in.newLine();
System.out.println("Enter Movie Release Date : ");
Scanner input7 = new Scanner(System.in);
String addMoviesReleaseDateInput = input7.nextLine();
movies.setMovieReleaseDate(addMoviesReleaseDateInput);
in.write(addMoviesReleaseDateInput);
in.newLine();
System.out.println("Enter Movie Genre : ");
Scanner input8 = new Scanner(System.in);
String addMoviesGenreInput = input8.nextLine();
movies.setMovieGenre(addMoviesGenreInput);
in.write(addMoviesGenreInput);
in.newLine();
System.out.println("Enter Movie Download Link : ");
Scanner input9 = new Scanner(System.in);
String addMoviesDownloadLinkInput = input9.nextLine();
movies.setDownloadLink(addMoviesDownloadLinkInput);
in.write(addMoviesDownloadLinkInput);
in.newLine();
System.out.println("MOVIE ADDED");
in.close();
break;
case 2:
System.out.println("Enter Name of Movie to Delete : ");
Scanner input10 = new Scanner(System.in);
String deleteMoviesInput = input10.nextLine();
if (deleteMoviesInput.equals(in1.readLine()))
{
System.out.println("MOVIE DELETED ! ");
}
else
{
System.out.println("NO MATCH FOUND");
}
in1.close();
break;
case 3:
System.out.println("Enter Name of Movie to Search : ");
Scanner input11 = new Scanner(System.in);
String searchMoviesInput = input11.nextLine();
if(in1.readLine().equals(searchMoviesInput))
{
System.out.println("Movie Name : "+movies.getMovieName());
System.out.println("Movie Release Date : "+movies.getMovieReleaseDate());
System.out.println("Movie Genre : "+movies.getMovieGenre());
System.out.println("Movie Download Link : "+movies.getDownloadLink());
}
break;
}
continue moviesMenu;
}
}
}catch(IOException e){
System.out.println("There was a problem:" + e);
}
}
}
new FileWriter("helloworld.txt")
This constructor call creates a file if it does not exist, and opens it for writing. If the file already exists, this truncates the file and opens it for writing.
Hence your behavior: in the very beginning of your program you create your file or truncate it, and then you don't write anything to it (on the second run), so it remains empty.
I'd recommend to only invoke this constructor when you are actually going to write anything (so in your case, under case 1). You could do it like this:
try (BufferedWriter writer = new BufferedWriter(new FileWriter("helloworld.txt"))) {
// actually write to the writer
}
This way, the writer will be closed automatically after execution leaves the try block.
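Alternatively, if the goal is for contents to survive across runs, FileWriter also has a two-argument constructor whose boolean flag opens the file in append mode. A minimal sketch (the line written here is just an example string):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class AppendDemo {
    public static void main(String[] args) throws IOException {
        // The second constructor argument (true) opens the file in
        // append mode, so existing contents are kept rather than
        // truncated on each program run.
        try (BufferedWriter writer = new BufferedWriter(
                new FileWriter("helloworld.txt", true))) {
            writer.write("a new movie entry");
            writer.newLine();
        }
    }
}
```

Whether to truncate or append depends on the design; the answer's approach of only opening the writer when actually writing fixes the bug either way.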
My integrated terminal window is often blank when I compile with Visual Studio Code
I've started using VS Code to compile code, but I have a lot of problems since the terminal with the compilation logs is often blank for no reason. Sometimes I get the logs and so I can see whether the compilation succeeded or failed, but quite often the terminal window stays blank so I have to re-launch it...
It's really tedious and I don't know how to fix this...
I compile C++ with Cmake and I'm on Ubuntu 20.04 LTS.
Any ideas ?
Thx!
At a guess you're using the cmake plugin and attempting to do more than one operation at once, the second operation erases the terminal output of the first. The solution is to only do one thing at a time
Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking.
Understanding image under set
With three sets:
$A$ = {1, 2, 3}, $B$ = {4, 5, 6}, and $h$ = {(1, 4), (2, 4), (3, 5)}.
Could someone walk me through how to find the image under $h$ of each of the 8 subsets of $A$ and the inverse image of each of the 8 subsets of $B$.
Simple question trying to visualize the definition of image and inverse image. Thanks!
Can you write down the subsets of A and B?
@JohnDouma When it asks for the 8 subsets, I assume one would be the empty set. Is that true?
Yes. One of the sets is the empty set.
If $A' \subseteq A$, then $h[A'] = \{h(x) : x \in A' \}$, by the definition. From the definition of $h$ (the set of pairs) we see that $h(1) = 4, h(2) = 4, h(3) = 5$. Now enumerate all 8 subsets of $A$ and compute the images. E.g. $h[A] = \{h(1), h(2), h(3)\} = \{4,4,5\} = \{4,5\}$ and so on.
If $B' \subseteq B$, then $h^{-1}[B'] = \{x \in A : h(x) \in B' \}$. So the inverse image of $B'$ is all those points of $A$ whose image lies in $B'$.
So without looking at $h$ we already know (as $h$ maps $A$ into $B$) that $h^{-1}[B] = A$ and also true for any function is $h^{-1}[\emptyset] = \emptyset$
But also $h^{-1}[\{4\}] = \{1,2\}$, as those are the points with image equal to $4$. You look at all subsets of $B$ in the same way.
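For a mechanical check, the images and inverse images of all 8 subsets can be enumerated in a few lines of Python (a sketch that represents $h$ as a dict):

```python
from itertools import combinations

h = {1: 4, 2: 4, 3: 5}   # h(1)=4, h(2)=4, h(3)=5
A, B = [1, 2, 3], [4, 5, 6]

def subsets(xs):
    """Yield all subsets of xs, from the empty set up to xs itself."""
    for r in range(len(xs) + 1):
        for c in combinations(xs, r):
            yield set(c)

# Image of each subset A' of A:  {h(x) : x in A'}
for Ap in subsets(A):
    print(sorted(Ap), "->", sorted({h[x] for x in Ap}))

# Inverse image of each subset B' of B:  {x in A : h(x) in B'}
for Bp in subsets(B):
    print(sorted(Bp), "->", sorted({x for x in A if h[x] in Bp}))
```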
If $S \subseteq A$ then $f[S] = \{f(x) \mid x \in S\}$.
So if $S = \{1\}$ then $f[S] = \{f(1)\} = \{4\}$. If $S = \{1,2\}$ then $f[S] = \{f(1), f(2)\} = \{4\}$. Etc. etc. etc.
Rails 3 datatypes?
Where can I find a list of data types that can be used in rails 3? (such as text, string, integer, float, date, etc.?) I keep randomly learning about new ones, but I'd love to have a list I could easily refer to.
Here are all the Rails3 (ActiveRecord migration) datatypes:
:binary
:boolean
:date
:datetime
:decimal
:float
:integer
:primary_key
:references
:string
:text
:time
:timestamp
Source
and :references for polymorphic associations. See: http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/TableDefinition.html
the guide has changed. Maybe a link to the relevant documentation should replace it.
@HarryMoreno: Thanks for the tip! I updated the reference, please let me know if you find any better one.
References is not limited to polymorphic associations, and I would not count it as a datatype.
It is important to know not only the types but the mapping of these types to the database types, too:
For example, note that in MS SQL Server we are using:
the old "datetime" instead of "datetime2"
decimal with its default precision
text and varchar instead of nvarchar
int (not possible to use tinyint/smallint/bigint)
image instead of BLOB
As found in this blog.
The tinyint/smallint/bigint types can be set by using the :limit option with :integer.
I have tested this on Rails 3 and MySQL; it still works, and just as said in the blog, they are signed integers.
Do you mean for defining active record migrations? or do you mean Ruby data types?
Here's a link that may help for creating migrations:
Orthogonal Thought - MySQL and Ruby on Rails datatypes
It might be helpful to know generally what these data types are used for:
binary - is for storing data such as images, audio, or movies.
boolean - is for storing true or false values.
date - store only the date
datetime - store the date and time into a column.
decimal - is for decimals.
float - is for decimals. (What's the difference between decimal and float?)
integer - is for whole numbers.
primary_key - unique key that can uniquely identify each row in a table
string - is for small data types such as a title. (Should you choose string or text?)
text - is for longer pieces of textual data, such as a paragraph of information.
time - is for time only
timestamp - for storing date and time into a column.
I hope that helps someone! Also, here's the official list: http://guides.rubyonrails.org/migrations.html#supported-types
Transitioning to Version Control
Until now, we have not had an application versioning strategy or versioned our application. We are considering doing so with our next release. What are the minimum tasks necessary to transition to a versioned application? Answers in the form of bullet points, not necessarily in chronological order, would be sufficient. It's okay to keep it high level. A few examples might be:
Determine a version naming convention: Major.Minor.Build.YY...
Revise on screen imaging or text to display current version
Prepare release notes template for communicating new releases to users
We're a .Net shop using TFS if that helps.
If it's not obvious already, I'm not a developer. I'm the solution expert/BA acting often as the project manager.
http://semver.org/
I'm voting to close this question as off-topic because this isn't specifically related to open source, but goes for all kinds of software development. It's a good question though. It could maybe fit on stackoverflow or programmers.stackexchange
@Martijn I don't think a question has to be specific to open source to be on-topic and it arguably would just get sent back to us if migrated (it's broad, and it fits better here than anywhere else).
@Drano, seeing that you're relatively new to Stack Exchange, we've placed your question on hold mostly because there are many steps to the version control process. As the close reason says, you can edit to try to isolate a specific issue that you are facing, so that we can answer the underlying question with better quality. Thanks :D
You've more or less got it. If you've automated your builds, you could also leverage your build system to monkey patch the version numbers.
Check out the major contenders for distributed version control systems (centralized systems don't cut it, in my humble opinion, both in development flexibility and on technical grounds; "distributed" can be used centralized, but not the other way around). Look for git and mercurial, there are tons of documentation on each, and plenty of sites offering hosting.
Passing data between two flutter classes
I am calling the below class:
`HomePage(encryptedCipher: args as Uint8List)`
I am passing encryptedCipher (important secret)
In the HomePage:
class HomePage extends StatefulWidget {
  final Uint8List encryptedCipher;
  const HomePage({Key? key, required this.encryptedCipher}) : super(key: key);

  @override
  State<HomePage> createState() => _HomePageState();
}
In the _HomePageState class, I am accessing it via the method below:
@override
void initState() {
  super.initState();
  encryptedCipher = widget.encryptedCipher;
}
Question: Is this safe and can a hacker have access to encryptedCipher, while it is passed between classes?
I tried doing research on if this method is safe, but have not been able to find helpful articles.
It is safe to pass an encrypted value like encryptedCipher between two classes as long as the encryption method used is secure and the key used to encrypt the data is kept secret.
Accessing the encryptedCipher via the widget property in the _HomePageState class is also safe since the widget property is only accessible within the context of the State object and is not accessible outside of it.
And since encryptedCipher is not stored in the device's memory in a way that would allow an attacker to access it, you should not be concerned about this.
How about decryptedCipher? Can we decrypt it at the login screen and pass the secret to the home page?
Replace a tag with href link
I have some HTML source code and want to replace every <a> tag with the href link it contains.
So the tag looks like:
<a href="http://google.com" target="_blank">click here</a>
I expect as output:
http://google.com
I already tried some regexes in combination with preg_replace,
but none of them gives me the href content.
So what would be the best way to do this?
<?php
$text = '
Random text <a href="foobar.html">Foobar</a> More Text
Other text <a href="http://www.example.com">An example</a>
Still more text <a href="http://www.example.com/foo/bar.html">A deep link</a>. The end.
';
preg_match_all('/<a href="(.*?)"/i',$text,$matches);
foreach ($matches[1] as $match) {
print "A link: $match\n";
}
The result:
A link: foobar.html
A link: http://www.example.com
A link: http://www.example.com/foo/bar.html
This won't completely match till end of <a> tag.
It doesn't need to. The non-greedy search (.*?)" will stop the match when it hits the closing quotation mark.
But it has to replace the whole <a> tag.
P.S: No down votes from me. Someone is trying to earn his suffrage badge. :D
Ah, he's replacing, not just matching. In that case my regex would probably be similar to yours. $text = preg_replace('#<a href="(.*?)".*?</a>#i', "$1", $text);
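For anyone who wants to sanity-check that non-greedy replace quickly, here is a direct translation to Python; the translation is mine, not from the thread:

```python
import re

# Sample HTML adapted from the thread's test data.
text = ('Random text <a href="foobar.html">Foobar</a> More Text '
        'Other text <a href="http://www.example.com">An example</a>')

# Non-greedy: capture the href value, consume up to the nearest </a>,
# and replace the whole <a> tag with just the captured URL.
result = re.sub(r'<a href="(.*?)".*?</a>', r'\1', text, flags=re.IGNORECASE)
print(result)
# Random text foobar.html More Text Other text http://www.example.com
```

The non-greedy quantifiers keep each match from spilling past the first closing quote or the first closing tag, which is exactly what makes the one-line preg_replace above work.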
Use Group Email (hosted by Google) to create an account on Google Play Store
I am trying to create a Google Play Store account but with a group email (hosted by Google). Is this possible?
For example, I have a group email on my Google Apps Business Console, <EMAIL_ADDRESS>, and I want to use this group email to set up an account on the Google Play store. Is this possible?
Do you have access to this email account (password)?
From what I know, group emails (hosted by Google) don't have passwords, so that is one of the reasons why I can't create an account.
From my own experience, Google group emails are not personal emails, so you will not be able to create a Play Store account with one. Also, as an individual you don't have password access to such an email account.
What you could do is create the Play Store account using the email account that manages the group (the owner account).
Thank you! That's what I thought, but since I am new to Google Apps I was not sure.
python TKinter 'int'/'str' object has no attribute 'append'
I know this question is asked a lot, but I can't seem to get my code to work.
As a project I'm trying to build a simple calculator, but I'm kind of stuck. Here is my code:
import Tkinter as tk
import tkMessageBox
top = tk.Tk()
def helloCallBack(x):
counter = 0
counter.append(x)
tkMessageBox.showinfo("result", counter)
one = tk.Button (top, text = "1", command = lambda: helloCallBack(1))
two = tk.Button (top, text = "2", command = lambda: helloCallBack(2))
three = tk.Button (top, text = "3", command = lambda: helloCallBack(3))
four = tk.Button (top, text = "4", command = lambda: helloCallBack(4))
five = tk.Button (top, text = "5", command = lambda: helloCallBack(5))
six = tk.Button (top, text = "6", command = lambda: helloCallBack(6))
seven = tk.Button (top, text = "7", command = lambda: helloCallBack(7))
eight = tk.Button (top, text = "8", command = lambda: helloCallBack(8))
nine = tk.Button (top, text = "9", command = lambda: helloCallBack(9))
zero = tk.Button (top, text = "0", command = lambda: helloCallBack(0))
one.pack()
two.pack()
three.pack()
four.pack()
five.pack()
six.pack()
seven.pack()
eight.pack()
nine.pack()
zero.pack()
top.mainloop()
I'm currently getting the 'int' object has no attribute 'append' error.
Does this mean that you can't use the append command with numbers?
If so, how would it be possible to make it so that if I press one of the buttons it adds that number to the counter, so if you press buttons one, two, five you would get 0125? I've also tried doing this with
counter = ""
but that just gives the same error with 'str' object has no attribute 'append'.
I'm new to Python and any help would be greatly appreciated.
The append function is for lists. Try counter += x.
Maybe try counter = "" and counter += str(x) instead of .append. It should work, because we predefine its type and concatenate strings, not integers.
does this mean that you can't use the append command with numbers?
Yes, that is exactly what it means.
if so how would it be possible to make it so if i press one of the buttons it adds that number to the counter so if you press button one, two, five you would get 0125
You solve this by making counter a string. Leave it as a string until the moment you need it to be an integer, at which point you can do the conversion.
Though, strings don't have an append method either. To append to a string you can use +=, as in:
counter += x
Though, that requires that x be a string, too. The simple solution to that is to pass in a string rather than a number:
one = tk.Button (..., command = lambda: helloCallBack("1"))
two = tk.Button (..., command = lambda: helloCallBack("2"))
...
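Putting the pieces together, here is a minimal runnable sketch of this approach with the Tkinter parts stripped out, so the counter logic can be checked on its own (note that counter must live outside the callback, or it resets on every press):

```python
# counter is a module-level string; each button press appends one digit.
counter = ""

def hello_callback(x):
    global counter
    counter += x          # x is passed in as a string, e.g. "1"
    return counter        # in the real app this would go to showinfo

hello_callback("1")
hello_callback("2")
print(hello_callback("5"))   # 125
```

When the final integer value is needed, a single int(counter) conversion at that point is enough.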
Thanks, this works great. I also needed to make the counter variable a global one and move it out of the function; otherwise it kept resetting to just the number pressed. Thanks.
Keep UILocalNotification Banner visible while app is in background
I'm working on a VoIP phone for iOS, and if a call comes in while running in the background, I do a presentLocalNotificationNow with a UILocalNotification message to inform the user. This works fine, however the banner expires and rolls off the screen before the user has enough of a chance to swipe and answer. I have tried scheduling further notifications at regular intervals, but that fills up the notification centre and causes the banner to appear to be tumbling.
Is there some way to prevent the banner from disappearing until I cancel it in some manner? Both the Skype app and Bria VoIP app have managed to come up with a solution where the banner stays until the call is answered, or the callee hangs up.
The best way to do this is to add a sound to the local notification.
The default notification without a sound lasts 5 seconds as a banner; however, you can include
a sound that is longer, up to 30 seconds, to play when the local notification is posted. The banner notification will stay on screen as long as the sound is playing.
Ah yes, that does seem to be what Skype and Bria are doing. Their banners expire after 30s.
No, I think it is not possible to control how long the banner is displayed before hiding; that is its default behavior.
But instead of the Banner style you can set the notification style to Alert, which will not disappear until the user taps one of the alert's buttons.
Hope this will help.
Can you please post code showing how to set the notification style to Alert?
When a striped candy destroys a row, is the row direction random?
When you combine 4 candies in a row, you are left with a striped version of that candy. If you subsequently destroy that striped candy, it will destroy a whole row of candies.
I have noticed that the row direction is sometimes horizontal, and sometimes vertical.
Is this chosen at random, or is there specific logic that determines the direction? If so, what is that logic?
Short Answer:
When you use a striped candy, you can tell which direction it will clear based on the direction of the stripes:
The direction of the stripes is determined by the direction you moved the final candy to create the striped candy, as described below.
According to my experience and online sources:
The direction of row/column destroying candy (indicated by the direction of the stripes on the candy) is determined by the direction you moved the piece to create the special candy.
--X- --X-
--X --X-
-X-- Move horizontally --X- creates a row destroyer that destroys horizontally
--X- --X-
---- ----
-X-- ----
X-XX Move vertically XXXX creates a column destroyer that destroys vertically
---- ----
Moreover, the location of the special piece is where the piece that was moved to create the bonus ended up. So:
In the first example, it doesn't really matter because the special candy will fall to the bottom.
In the second example, the bonus would appear at the location second from the left like so:
----
----
-O--
----
In the event that you move a striped candy to create another striped candy, the location of the new striped candy will be randomly selected from one of the other candies in the set and the striped candy you just moved will activate.
Although not as concise as the extract I found in one of the links in the other answer, it does explain it well, and for the extra effort I will reward you by accepting this answer. It would be nice if you could edit in something about the direction of the stripes indicating which direction it will destroy.
@mousefan I appreciate that. I put the bit about the stripe directions in. Technically, both answers are fairly close in length after the inclusion of your extract - mine takes much more space because I ASCII'd up a visual representation to make a clear demonstration of what was happening in case the wording wasn't getting the point across. I like David Toh's links better than mine, but the information is essentially the same.
In my experience, one caveat is that, if the piece you move into position to make four-in-a-row is already a striped candy, then a different piece in the group will become a new striped candy.
Good point @AdamV . I've added that to the answer. It's been several months since I last played, but that reminds me that I recall seeing striped candy match made entirely out of striped candy due to cascades, but I can't recall where the new striped candy, if any, ended up. I can't find an answer in my recent searches. I'll look into it more when I have the opportunity and update the answer further.
Check out these websites for more information on how the candies work
http://candy-crush-saga.wikia.com/wiki/Candies
Or, check out this one which has a better explanation.
http://www.imore.com/candy-crush-top-10-tips-tricks-and-cheats and got to the 4th tip.
Taken from the second link:
Candy is stripped in the same direction as the final candy moved to
complete the previous formation. If you move a candy horizontally, it
will make a horizontal stripe, which will then explode horizontally as
well.
@musefan Too bad I can only approve but not upvote your edit - turning a rather tiny link-only answer into something probably helpful - #DavidToh, please try to be more verbose here, it's always annoying to see an answer consisting only of links that may become dead at some point in the future, so it's strongly recommended to provide at least a relevant excerpt of that information
David, I chose not to accept this answer based on the fact it was just a couple of links originally. If you had included the extract that I edited in, I would have accepted it. Just something to keep in mind for your next answer ;-)
It totally depends on how you combine candies in the game. If you combine four candies in a column (vertically), you create a vertical striped candy, which will clear the entire column of candies. If you combine four candies in a row (horizontally), you create a horizontal striped candy, which will clear the entire row of candies.
For more information Visit: How to create Striped Candies
Actually, are you sure this is correct? It all depends on which direction you move the last piece; it doesn't matter which direction the 'row of four' is facing.
Mock a function called by child process in Jest
I have been writing unit tests for a grunt task and want to mock the "request" module that the grunt task uses during its execution. I am using "exec" to call the grunt task in a child process of a Node.js Jest test.
task.js
const request = require('request');
//grunt task registration and other stuff
//...
request(url,function(error,response,body){
//logic
});
task.spec.js
const exec = require('child_process').exec
const request = require('request')
jest.mock('request')
describe('UT for something',()=>{
it('should pass',async ()=>{
request.mockImplementation((url,cb)=>{
cb(undefined,undefined,'mocked response');
});
const execChild = () => {
return new Promise((resolve,reject)=>{
exec('grunt task',(error,stdout,stderr)=>{
if(error) reject(error);
resolve(stdout ? stdout :stderr);
})
})
}
await execChild();
//rest of the code
})
})
The problem is that when I run the test case, the mocked implementation of the request module is not used and the original call is being made. I have a hunch that the request module is being mocked only for the main process, and that when the child process is spawned it uses the original request module rather than the mocked one. Any idea how I can mock the request module for the child process as well?
I ran this test in debug mode, along with the other tests where I have successfully mocked the request module. The difference was that in the other tests, when I inspected the request function I could see it was mocked, as it had a property like "isMocked: true", which is missing when I do the same in the problematic test.
Could you not extract the relevant code from the grunt task so you can call it directly and therefore test it more easily?
@jonrsharpe You mean to say that I export the function containing the grunt task and then test that in jest so I don't have to use the child process module to call the grunt cli?
I mean separate out "grunt task registration and other stuff" from the actual functionality you're talking about, so you can isolate it for testing more easily (yes, without needing to run a child process). Then your grunt tasks just import and run tested functionality.
I see your point. But the functionality that I want to test requires some of the grunt configuration present in the GruntFile.js. Is there any way to execute that GruntFile's content so that grunt is initialized before my test runs?
Could that be passed as arguments instead? Think about how to decouple testable functionality from the implementation details of the task runner.
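The hunch in the question is right: a mock (or any monkey-patch) lives only in the process that installed it, and a spawned child loads its modules fresh from disk. The effect is sketched below in Python purely for brevity; Node's module cache behaves the same way, and json.dumps is just a stand-in for the mocked request:

```python
import subprocess
import sys
import json  # any stdlib module works as a stand-in for "request"

# Patch ("mock") a function in this parent process.
json.dumps = lambda *args, **kwargs: "MOCKED"
parent_result = json.dumps({})
print(parent_result)  # MOCKED

# A child process imports json fresh, so the patch never reaches it.
child = subprocess.run(
    [sys.executable, "-c", "import json; print(json.dumps({}))"],
    capture_output=True, text=True,
)
child_result = child.stdout.strip()
print(child_result)  # {}
```

This is why the comments steer toward extracting the task's logic and testing it in-process instead of shelling out to the grunt CLI.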
Set session cookies in play framework testing using selenium hd
I am trying to do some unit testing using selenium hd. I currently have the following code:
browser.goTo("http://localhost:3333/")
<EMAIL_ADDRESS> browser.$("#password").text("secret")
browser.$("#loginbutton").click()
browser.goTo("http://localhost:3333/michele")
But I would like to be to be able to set a session cookie instead of having to do this for every test.
I tried to do something like this
browser.webDriver.manage().addCookie( new Cookie("PLAY_SESSION",
"1dd6811c9df64e03a892f55f57dd0f1190656d88-email%3Amichele%40sample.com") )
but doesnt work out, as I get a null pointer exception when I try to retrieve the cookie using
browser.getCookie("PLAY_SESSION").getValue() must <EMAIL_ADDRESS>
Any help is greatly appreciated!
What is the type of browser?
@senia: It's TestBrowser which extends FluentAdapter
Did you figure it out? Struggling with the same here
Since <EMAIL_ADDRESS> is converted into michele%40sample.com, try using michele%40sample.com instead of <EMAIL_ADDRESS>:
browser.getCookie("PLAY_SESSION").getValue() must contain("michele%40sample.com")
Plot of Implicit Equations
I've read Cool Graphs of Implicit Equations recently. In the article, it mentioned a software GrafEq which can draw graphs of arbitrary implicit equations. For example,
And I've tried several equations in Mathematica; it failed to give the same graph.
ContourPlot[Exp[Sin[x] + Cos[y]] == Sin[Exp[x + y]], {x, -10, 10}, {y, -10, 10}]
And in GrafEq's page, it says:
“ All software packages, except [GrafEq 2.09], produced erroneous
graphical results. ... [GrafEq 2.09] demonstrated its graphical
sophistication over all the other packages investigated.” — Michael
J. Bossé and N. R. Nandakumar, The College Mathematics Journal
(The other software packages were MathCad 8, Mathematica 4, Maple V,
MATLAB 5.3, and Derive 4.11.)
I want to know whether I missed some technique in Mathematica that can handle drawing these equations, or whether it's really a 'weakness' of Mathematica in this area.
Edit:
In the link: http://www.peda.com/grafeq/description.html:
The program also features successive refinement plotting, which
deletes regions of the plane that do not contain solutions, revealing
the regions that do contain solutions. Plotting is completed by
proving which pixels contain solutions. This technique enables the
graphing of implicit relations, in which no single variable can be
readily isolated. Such relations cannot be graphed at all by the
typical computer graphing utility or graphics calculator. Successive
refinement plotting also permits the plotting of singularities.
It seems it works pixel by pixel: if a pixel contains an (x, y) that is a solution of the equation, the pixel is colored. So what is Mathematica's method for drawing such equations? Does it have a similar mode we can choose when drawing such a graph?
Now, I think I should make my questions clear:
How does Mathematica handle such drawings?
How does GrafEq handle it? (I think I've got some clues, but I'm not sure.)
How can I get the same result with Mathematica?
I hope you understand you changed the meaning of your question completely with your last edit. You should at least say so, for the people that have been working for an hour trying to help you deserve that.
@belisarius: Thanks for your answer and your suggestion. I'm also trying hard to figure out the answer. So when I find the new clues, I think I should share it to others. I think this will be helpful to get the answer quick.
@belisarius: I just want to know: 1>. How Mathematica handle such drawings. 2>. How GrafEq handle that (I think I've got some clues, see the editing. 3>. How to get the same result with Mathematcia?
Releated link, http://mathematica.stackexchange.com/questions/19590/what-is-a-good-way-to-plot-some-difficult-implicit-equations
Anybody curious about the machinery of GrafEq should look at Tupper's paper and thesis.
In principle the same method could be used in Mathematica. The problem boils down to determining if a solution to $a=b$ exists somewhere inside each pixel. Here's an oversimplified approach where I calculate $a-b$ for 100 random points within the pixel and see if there are both positive and negative values. If there are, there must be a zero crossing somewhere inside the pixel.
pixelContainsSolution[x0_, y0_] :=
(Max[#] > 0 && Min[#] < 0) &[
Exp[Sin[#1] + Cos[#2]] - Sin[Exp[#1 + #2]] & @@@
Transpose[{x0, y0} + RandomReal[{-0.05, 0.05}, {2, 100}]]]
Image[1 - Table[Boole@pixelContainsSolution[x, y],
{x, -10, 10, 0.1}, {y, -10, 10, 0.1}]]
It's pretty slow - to obtain a nice high resolution plot in a reasonable amount of time would need some optimisations.
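For readers without Mathematica, Simon's pixel test translates almost line for line to NumPy; this translation, the coarser 0.5-unit grid, and the fixed random seed are my choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, y):
    # f(x, y) = 0 exactly where exp(sin x + cos y) = sin(exp(x + y)).
    return np.exp(np.sin(x) + np.cos(y)) - np.sin(np.exp(x + y))

def pixel_contains_solution(x0, y0, half=0.25, n=100):
    # Sample n random points inside the pixel; a sign change among the
    # sampled values means a zero crossing lies somewhere in the pixel.
    xs = x0 + rng.uniform(-half, half, n)
    ys = y0 + rng.uniform(-half, half, n)
    v = f(xs, ys)
    return v.max() > 0 and v.min() < 0

grid = [[pixel_contains_solution(x, y) for x in np.arange(-10, 10, 0.5)]
        for y in np.arange(-10, 10, 0.5)]
hits = sum(map(sum, grid))
print(hits, "of", 40 * 40, "pixels contain a sign change")
```

Plotting the boolean grid (e.g. with matplotlib's imshow) reproduces the same qualitative picture as the Image call above: a blank region where exp(sin x + cos y) stays positive and sin(exp(x + y)) is too small to cross it, and dense structure where the exponential oscillates rapidly.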
Edit
Example of faster code:
data = Compile[{}, Block[{x = Range[-10, 10, 0.0025]},
Exp[Outer[Plus, Sin[x], Cos[x]]] - Sin[Exp[Outer[Plus, x, x]]]]][];
Developer`PartitionMap[Sign[Max[#] Min[#]] &, data, {20, 20}] // Image
The second block of code produces a pretty respectable version of this image, more or less the same as the red one in the question. I would say Simon has a proof-of-concept that Mathematica can do whatever this GrafEq can do -- except it can also do plots that are not contour plots... and lots of other math.
@Simon: Can you use your method to plot $\sin(\sin(x)+\cos(y))=\cos(\sin(xy)+\cos(x))$? I tried, but failed. I'm not very familiar with Mathematica.
@diverger See my update.
That's the best I could go with Mathematica 10.0.2, with PlotPoints->50 and MaxRecursion->4.
ContourPlot[
E^(Sin[x] + Cos[y]) == Sin[E^(x + y)], {x, -10, 10}, {y, -10, 10},
Axes -> True, ImageSize -> Large, PlotPoints -> 50,
MaxRecursion -> 4]
The rendering took about 1 hour, with Mathematica eating all 16 GB of my RAM.
(I'll never try something like this again!)
EDIT
Following Mr.Wizard's comment, here's a better plot and a better solution. It took just 1m 12s, but RAM utilization peaked at 13 GB (beware!):
ContourPlot[
E^(Sin[x] + Cos[y]) == Sin[E^(x + y)], {x, -10, 10}, {y, -10, 10},
Axes -> True, ImageSize -> Large, PlotPoints -> 2000,
MaxRecursion -> 0]
+1 for the computational expense to demonstrate a result
Incidentally I get a fairly similar result much faster with: ContourPlot[E^(Sin[x] + Cos[y]) == Sin[E^(x + y)], {x, -10, 10}, {y, -10, 10}, Axes -> True, ImageSize -> Large, MaxRecursion -> 0, PlotPoints -> 1000]
@Mr.Wizard I can confirm your result. So I've learned that MaxRecursion is useless in this case; it just slows down the calculation and eats memory. Even MaxRecursion -> 0, PlotPoints -> 2000 works in a reasonable time (1m 12s), but with a peak of 13 GB of RAM used (!).
The following is a answer to the original question, without accounting for the last edit.
The red plot is nice, but I'm not sure what they are trying to show. Inside the solid colored regions the frequency is very high, but the function isn't constant, as an easy check shows:
f[x_, y_] := Exp[Sin[x] + Cos[y]] - Sin[Exp[x + y]]
GraphicsRow[{Plot[f[x, 5], {x, -10, 10}], Plot[f[x, 5], {x, 4.999, 5}]}]
So the red plot is just a simplification of the real one which is better shown in the Mathematica output.
@Rahul A solid filling represents in my book a constant valued function. Increasing PlotPoints and MaxRecursion makes Mma performs better, as always
With[{eqn = Sin[Sin[x] + Cos[y]] - Cos[Sin[x y] + Cos[x]]},
With[{data = Compile[{}, Block[{x = Range[-10, 10, .005]},
Reverse@Table[UnitStep@eqn, {y, x}]]][]},
Erosion[ColorNegate@EdgeDetect[Image[data]], 2]
]
]
To change the color, you can use ColorReplace[image, Black -> color]
(* A little faster than the built-in EdgeDetect *)
ClearAll[edgeDetect];
edgeDetect[img_] := Image[Sqrt[ImageData[ImageConvolve[img, {{1, 0}, {0, -1}}]]^2 +
ImageData[ImageConvolve[img, {{0, 1}, {-1, 0}}]]^2]];
Previous answer
data=Compile[{},With[{y=Range[-10,10,0.006]},
Table[UnitStep[(E^(Sin[x]+Cos[y])-Sin[E^(x+y)])],{x,y}]]][];//AbsoluteTiming
ArrayPlot[data,DataReversed->True]
Is it more practical to define an IDT at assemble time, or build it in memory programmatically from a bootloader/kernel?
It seems to me there is a lot of work to be done when writing yourself a complete IDT: writing all the handlers, etc., even with things like macros and "times" directives to help you. If an IDT
consists of 256 qwords (more or less) that hold information about the interrupt handlers to call, flags, segment selector, etc., would it not be easier just to get yourself into pmode, pick a memory location, and programmatically create everything you need there? Say you start at address 0x7bfff and build all 256 entries up to 0x7ffff, making sure to give them the address of a common handler (to call more specific handlers from), flags, selector, etc. Then you also know your base and limit. Then just fill in the IDT pointer, load it, and pray that it works.
Practicality is always a subjective perception. As such, your question cannot be completely answered. Nonetheless we can look at some arguments for and against building the interrupt descriptor table (IDT) at run time:
Basically, your question boils down to whether to generate constant data (the IDT) at run or compile time. Assuming that the same amount of data will be generated, no matter when, we have some usual arguments:
program size: If you include the fully generated data, then the size of your program (in this case the kernel) will increase, leading to potentially longer initial load times. Of course this is only an issue if the data could be generated by code whose size is negligible compared to the size of the data.
execution time: Generating data at run time involves "doing stuff", thus increasing the execution time. Depending on how long it takes to generate the data, and how often it needs to be generated, this is either a serious issue or negligible.
In the context of a kernel, I can think of the following circumstances that would require pondering between the two options:
Limited read only memory (ROM) for the kernel (along with other code) to be placed in. In this case, generating the IDT at run time is favorable to reduce the static memory footprint. This won't happen to you unless in an embedded setting with (very) limited memory available.
Critical boot times. If in some hard real time setting, and your system shuts down (due to an update, failure, ...), then you want it to be up again as quickly as possible. Generating the IDT at run time could increase the boot time just a bit too much. On the other hand, loading static data also takes time. IMHO it's very unlikely that you need to consider this.
For all other situations, there won't be much of a difference in terms of such "measurable" metrics. Consider though that there are some trade offs concerning debugging and run time generated data; when using a static IDT you can validate it before even running the kernel.
That's where another aspect of practicability comes in:
[..] writing all the handlers, etc., even with things like macros and "times" directives to help you. [..]
This is, of course, extremely depending on how you would actually write a static IDT. Basically, nobody stops you from writing a (user) program to emit a correct IDT to be linked into your kernel, utilizing the same code that you would put into the kernel (or even simpler code than that). So having to "write all the handlers" is a deficit of your tools, and as such not directly related to whether generating them at run time or not.
In a hobbyist kernel I wrote long ago I created handlers (code) "by hand" (I would do this differently now ^^) and the actual IDT (data) during run time:
// src/idt/idt.c
void IDT_SetGate (UINT8 num, UINT base, UINT16 sel, UINT8 flags)
{
UINT *tmp;
tmp = IDT;
tmp += num * 2;
*tmp = (base & 0xFFFF);
*tmp |= (sel & 0xFFFF) << 16;
tmp++;
*tmp = (flags & 0xFF) << 8;
*tmp |= (base & 0xFFFF0000);
}
// src/int/isr.c
void ISR_Setup (void)
{
IDT_SetGate(0, (unsigned)_isr0, 0x08, 0x8E);
IDT_SetGate(1, (unsigned)_isr1, 0x08, 0x8E);
// ... a lot more, an array would've been the better choice :D
}
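As a quick cross-check of IDT_SetGate's bit layout, the packing can be replayed in a few lines of Python; the base, selector, and flags values below are invented for the demo:

```python
def make_gate(base, sel, flags):
    # Mirrors IDT_SetGate: low dword = offset bits 15..0 | selector << 16,
    # high dword = flags byte << 8 | offset bits 31..16.
    low = (base & 0xFFFF) | ((sel & 0xFFFF) << 16)
    high = ((flags & 0xFF) << 8) | (base & 0xFFFF0000)
    return low, high

low, high = make_gate(0x00105678, 0x08, 0x8E)
print(hex(low), hex(high))  # 0x85678 0x108e00
```

The handler offset ends up split across the two dwords, which is why the gate cannot simply store a 32-bit pointer verbatim.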
I then had minimal handlers _isr0, _isr1, ... which called a common handler after saving the interrupt number and dealing with error codes. That is the main argument against the following:
[..] making sure to give them the address of a common handler to call more specific handlers from [..]
At least on x86 you need distinct handlers for the different interrupts, because otherwise you cannot tell which interrupt fired. To truly generate even the handlers at run time you would need some sort of specialized assembler, which aids you in code generation at run time. I wouldn't call that "practical" anymore.
SQL BETWEEN Time doesn't show results
Could use fresh POV help :)
Need to show trips between midnight and 4AM. StartTime column is of type DATETIME. I already tried
CAST(StartTime AS TIME) BETWEEN CAST('00:00:00' AS TIME) AND....
but it didn't show the correct results, so I've done this IIF; but even though the StartTime is NOT between midnight and 4 AM, the result shows 1. Am I missing something? Thanks for your help!
SELECT
id
, StartTime
, DATEADD(d, 0, DATEDIFF(d, 0, StartTime)) AS Midnight
, DATEADD(hh, +4 , DATEDIFF(d, 0, StartTime)) AS FourAM
, IIF((StartTime BETWEEN DATEADD(d, 0, DATEDIFF(d, 0, StartTime))
AND DATEADD(hh, +4 , DATEDIFF(d, 0, StartTime))), 1, 0) _Between_Check
FROM
FUN.Trip
Results:
ID | StartDate | Midnight | FourAM | Between_Check
------------+-----------------------+-------------------------+-------------------------+--------------
-2135024021 | 19-10-02 00:04:01.000 | 2019-10-02 00:00:00.000 | 2019-10-02 00:04:00.000 | 1
-2135024228 | 19-10-05 00:04:30.000 | 2019-10-05 00:00:00.000 | 2019-10-05 00:04:00.000 | 1
Above is the result I'm getting.
The Between_Check column should be showing 0 as the StartDate is a minute past 4AM
This is what I should be getting
Results:
ID | StartDate | Midnight | FourAM | Between_Check
------------+-----------------------+-------------------------+-------------------------+--------------
-2135024021 | 19-10-02 00:04:01.000 | 2019-10-02 00:00:00.000 | 2019-10-02 00:04:00.000 | 0
-2135024228 | 19-10-05 00:02:30.000 | 2019-10-05 00:00:00.000 | 2019-10-05 00:04:00.000 | 0
please show some sample data and expected result in text and not image
Thanks and sorry for formatting. I'm new to stackoverflow so need to gain experience in it.
id | StartTime | Midnight | FiveAM | BETWEEN_Check
-2135024021 | 2019-10-02 00:04:01.000 | 2019-10-02 00:00:00.000 | 2019-10-02 05:04:01.000 | 1
-2135024020 | 2019-10-01 05:27:25.000 | 2019-10-01 00:00:00.000 | 2019-10-01 10:27:25.000 | 1
-2135024020 | 2019-10-01 05:58:43.000 | 2019-10-01 00:00:00.000 | 2019-10-01 10:58:43.000 | 1
The last two rows should be showing 0, as the time is beyond 5AM. Each row starts with -213502
please update your question with this information, not in a comment
Curious thing that DATEADD(hh, +4 , DATEDIFF(d, 0, StartTime)) AS FourAM results in 2019-10-02 00:04:00.000. Your system must be adding metric hours that are only one Imperial minute each.
Thanks HABO. Did you mean it should be 04:00:00 rather than 00:04:00
@TommyD It depends. Most of us use hh:mm:ss for times. Do you use ss:hh:mm?
if you want to check if the time is between midnight and 4 AM, why not simply use datepart(hour, {datetime})
IIF (DATEPART(HOUR, StartTime) BETWEEN 0 AND 4, 1, 0)
or alternatively, CONVERT() the StartTime to time data type and then compare with time string
IIF (CONVERT(TIME, StartTime) BETWEEN '00:00' AND '04:00', 1, 0)
or
IIF (CONVERT(TIME, StartTime) >= '00:00' AND CONVERT(TIME, StartTime) < '04:00', 1, 0)
Not sure whether you want the time 04:00 to be inclusive or not. Anyway, the above are various options you can use.
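The inclusive-vs-exclusive boundary mentioned above is easy to demonstrate outside SQL. Here is a minimal Python sketch (an illustration of the boundary behaviour, not T-SQL):

```python
from datetime import datetime, time

def between_check(dt):
    # Like BETWEEN '00:00' AND '04:00' -- inclusive on both ends,
    # so exactly 04:00:00 passes.
    t = dt.time()
    return time(0, 0) <= t <= time(4, 0)

def half_open_check(dt):
    # The >= / < form excludes exactly 04:00:00.
    t = dt.time()
    return time(0, 0) <= t < time(4, 0)
```

The only rows that differ between the two checks are those falling exactly on the 04:00:00 boundary.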
what you are doing here will always be true
IIF(
(StartTime BETWEEN
DATEADD(d, 0, StartTime)
AND DATEADD(hh, +4, StartTime))
, 1, 0)
because DATEADD(d, 0, StartTime) has basically no effect and is equal to StartTime
Thanks Squirrel!
This still results in 1 in the case shown above. The StartTime is 1 minute past 4. Your idea does work with GETDATE() as StartTime, thanks, but it doesn't work with the actual StartTime; see the example data I've pasted in the question.
Thanks for your time Squirrel.
I get you, I've changed it to
DATEADD(hh, +4 , DATEDIFF(d, 0, StartTime))
but still I'm getting the same result. Bugs me completely.
please update the sample data in the question. It is unreadable in the comments
if the logic required does not include 04:00 then change to BETWEEN 0 AND 3
Hi Squirrel, thanks mate for your time with this. I've edited the question. See the results :) Hope it makes sense now
The answer is in HABO's comment, which made me look at the result set.
How to fill high-end bits in a Java byte with '1' without knowing the last 1 in advance? (FAST FIX Negative Integer decoder)
I am writing a FIX/FAST decoder for negative numbers as described below:
My question is:
How to fill the high-end bits of a Java byte with 1s as described above? I am probably unaware of some bit manipulation magic I need to use in this conversion.
So I need to go from 01000110 00111010 01011101 to 11110001 10011101 01011101.
I know how to shift by 7 to drop the 8th bit. What I don't know is how to fill the high-end bits with 1s.
It seems like the question you're asking doesn't really match up with the problem you're trying to solve. You're not trying to fill in the high bits with 1; you're trying to decode a stop-bit-encoded integer from a buffer, which involves discarding the sign bits while combining the payload bits. And, of course, you want to stop after you find a byte with a 1 in the stop bit position. The method below should decode the value correctly:
private static final byte SIGN_BIT = (byte)0x40;
private static final byte STOP_BIT = (byte)0x80;
private static final byte PAYLOAD_MASK = 0x7F;
public static int decodeInt(final ByteBuffer buffer) {
int value = 0;
int currentByte = buffer.get();
if ((currentByte & SIGN_BIT) > 0)
value = -1;
value = (value << 7) | (currentByte & PAYLOAD_MASK);
if ((currentByte & STOP_BIT) != 0)
return value;
currentByte = buffer.get();
value = (value << 7) | (currentByte & PAYLOAD_MASK);
if ((currentByte & STOP_BIT) != 0)
return value;
currentByte = buffer.get();
value = (value << 7) | (currentByte & PAYLOAD_MASK);
if ((currentByte & STOP_BIT) != 0)
return value;
currentByte = buffer.get();
value = (value << 7) | (currentByte & PAYLOAD_MASK);
if ((currentByte & STOP_BIT) != 0)
return value;
currentByte = buffer.get();
value = (value << 7) | (currentByte & PAYLOAD_MASK);
return value;
}
A loop would be cleaner, but I unrolled it manually since messaging protocols tend to be hot code paths, and there's a fixed maximum byte length (5 bytes). For simplicity's sake, I read the bytes from a ByteBuffer, so you may need to adjust the logic based on how you're reading the encoded data.
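For readers who follow the logic more easily in another form, here is a hedged Python sketch of the same stop-bit decoding (a loop instead of the unrolled version; the constants mirror the Java ones above):

```python
SIGN_BIT = 0x40
STOP_BIT = 0x80
PAYLOAD_MASK = 0x7F

def decode_int(data):
    # The sign bit of the first byte seeds the accumulator with -1 so the
    # high bits end up filled with ones; then 7 payload bits are shifted in
    # per byte until the stop bit (0x80) is seen.
    it = iter(data)
    b = next(it)
    value = -1 if b & SIGN_BIT else 0
    while True:
        value = (value << 7) | (b & PAYLOAD_MASK)
        if b & STOP_BIT:
            return value
        b = next(it)
```

For the question's example, the bytes `0x46 0x3A 0xDD` decode to -942755, whose low 24 bits are `0xF19D5D` — exactly the `11110001 10011101 01011101` pattern the question asks for.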
That's great, but just curious: why do you think a loop would be worse? You are repeating the currentByte & PAYLOAD_MASK every time when it only needs to be done at the very last byte. A loop would allow you to detect that...
This code comes from a production system and underwent extensive performance testing. Several variations were tested, and this one consistently yielded the best performance. Branching instructions tend to be expensive, while bit manipulation is cheap, so better to avoid conditionals.
I do believe you! But that does not make much sense because you are checking the condition (currentyByte & STOP_BIT) != 0 multiple times like a loop would do. So you are saying that the loop itself is more expensive? (again just trying to make sense of what you claim)
I'm saying that the looping versions consistently performed worse for us. That doesn't mean much. Actual performance depends on many factors, including hardware architecture and inlining (which can depend on your most common call stacks). Feel free to use a looping version if that's what you prefer. It was easier to post this version since I had the source handy.
Filling the high bits might go as:
int fillHighBits(int b) { // 0001abcd
int n = Integer.highestOneBit(b); // 00010000
n = ~n; // 11101111
++n; // 11110000
    return (n | b) & 0xFF; // 1111abcd
}
As expression
(~Integer.highestOneBit(b) + 1) | b
Though the examples you gave make me doubt this is what you want.
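A rough Python equivalent of the same trick, using `bit_length()` in place of `Integer.highestOneBit` and the two's-complement identity `~n + 1 == -n`:

```python
def fill_high_bits(b):
    # 0001abcd -> 1111abcd: negate the highest set bit to get a mask of
    # ones from that bit upward, OR in the original bits, keep one byte.
    # Assumes b > 0 (bit_length() is 0 for zero).
    n = 1 << (b.bit_length() - 1)   # highest one bit, e.g. 00011010 -> 00010000
    return (-n | b) & 0xFF          # -n == ~n + 1
```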
This can be done very simply using a simple accumulator where you shift in 7 bits at a time. You need to keep track of how many bits you have in the accumulator.
Sign extension can be performed by a simple logical shift left followed by an arithmetic shift right (by the same distance) to copy the topmost bit to all unused positions.
byte[] input = new byte[] { 0x46, 0x3A, (byte) 0xDD };
int accumulator = 0;
int bitCount = 0;
for (byte b : input) {
accumulator = (accumulator << 7) | (b & 0x7F);
bitCount += 7;
}
// now sign extend the bits in accumulator
accumulator <<= (32 - bitCount);
accumulator >>= (32 - bitCount);
System.out.println(Integer.toHexString(accumulator));
The whole trick is that >>N operator replicates the top bit N times.
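Since Python ints have no fixed width, the same shift-left-then-shift-right trick needs an explicit bit count. A sketch of the equivalent accumulate-then-sign-extend approach:

```python
def sign_extend(value, bits):
    # Equivalent of Java's `x << (32 - bits) >> (32 - bits)` for a value
    # holding `bits` significant bits.
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)

def decode(data):
    # Shift 7 payload bits per byte into the accumulator, then sign extend.
    acc = 0
    bits = 0
    for b in data:
        acc = (acc << 7) | (b & 0x7F)
        bits += 7
    return sign_extend(acc, bits)
```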
@chrisapotek Uhm, 32 happens to be the number of bits an int contains. You could write Integer.SIZE to make it clear beyond doubt that the number of bits in the int type is meant. Writing just 32 is a (bad, but overwhelmingly widespread) habit from the past.
I am sorry. (Shamed!) An integer bigger than 32 bits will be hard to find :)
something like this:
int x = ...;
x = x | 0xF000;
do a logical OR (|) with a number which has the high-end bits set to 1 and the rest 0
for example:
   10101010
OR 11110000
-----------
   11111010
I don't know what the last bit is, that's my problem, no? How do I find out what to mask? The last 1 can be anywhere!
well then, iterate through the bit sequence Integer.toBinaryString(num) and generate the mask
Execute SQL queries stored in a object variable
I am struggling with the following scenario, please help.
Got a request to create 20 different data extracts (using different SQL queries) from a SQL database to text files and SFTP them using the CozyRoc extension that we use inside SSIS.
Instead of creating 20 different connections and multiple variables to hold outfile names etc., I thought of using a Foreach Loop container in SSIS.
My approach:
I created a table and loaded all 20 queries, each row is a query with following columns id, sqlstatement, metric.
I created an "Execute SQL Task" - for SQL statement I used (select distinct metric from table) and result set for this would be full result set. Assigned this to variable MetricObject object variable, hence stored all 20 different metrics in the object variable.
Now I created the foreach loop container - inside the collection tab I called MetricObject variable in the ADO object source variable and selected rows in the first table hence can be iterated for each metric, hence iterates 20 times. And assigned this to a Metric variable.
Now inside the foreach loop container I created an Execute SQL Task that holds the stored procedure that calls the table created in the first step and pass the metric variable in step c and generate the queries that are stored in step a and in turn hold the SQL statement om the new created variable called sqlstatement. My expectation is sqlstatement variable will hold the SQL from the first step.
Finally I called the variable in the data flow task and map it to text file and generate the text file.
This can be looped until it reaches the end of the metric values, loading all text files in one go.
For some reason I am not successful. Can you tell me if this is the right approach, or am I missing something?
I have done something similar, where I created my own job queue and all the SQL queries are stored in a DB table. The way I handle it is somewhat similar: I loop through all the queries and put them into a variable. The difference is that (since I don't know what the headers are or how many columns there will be) I execute the SQL into an object, then use a C# script task to take the object, convert it to a dataset, and dynamically save the dataset (with its headers/columns) to a .csv file using the C# code. I get the file name/path to export to from the same database table where the query is stored.
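The approach described above — queries stored in a table, looped over, each result set written to its own CSV with dynamically discovered headers — can be sketched as follows. This is Python with SQLite standing in for SQL Server, and the table/column names (`extract_queries`, `metric`, `sqlstatement`) are assumptions, not the poster's actual schema:

```python
import csv
import sqlite3

def export_metrics(conn, out_dir="."):
    """Run every stored extract query and write one CSV per metric."""
    queries = conn.execute(
        "SELECT metric, sqlstatement FROM extract_queries").fetchall()
    written = []
    for metric, sql in queries:
        cur = conn.execute(sql)
        # Headers are discovered dynamically from the cursor, since each
        # extract can have a different column list.
        headers = [col[0] for col in cur.description]
        path = f"{out_dir}/{metric}.csv"
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(headers)
            writer.writerows(cur.fetchall())
        written.append(path)
    return written
```

In the real package the per-file SFTP upload would follow the write, but the loop structure is the same.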
Why won't these faces UV unwrap?
I'm trying to unwrap some new faces after fixing some topology on a model that is already textured (in particular at the edge of the mouth).
For some reason a number of faces just won't seem to unwrap or show up in the UV editor after I select and unwrap them. I've tried pinning vertices around them, messing with seams, and checked the normals, but no luck.
In the viewport, they seem to take the texture. But the faces/verts just aren't showing in the UV editor, so I can't get them in the right place.
The picture shows the gap in the UV where the new faces need to be and on the right are the faces I'm trying to unwrap.
blend file
Those red dots on the UV island vertices mean that they are pinned. This will prevent many operations from succeeding. Select everything and use Alt-P to clear the pinning.
Looks like the problem was that I had 'UV Local View' selected in the UV view options. Not sure what that means yet, but it fixed the issue for me.
Not exactly a duplicate. I later found the same problem and it was solved by texture-assigning the faces/vertices in Blender Internal mode then switching back to Cycles.
Assertions vs Exceptions - is my understanding of the differences between the two correct?
Design By Contract uses preconditions and postconditions of the public
methods in a class together to form a contract between the class and
its clients.
a) In code we implement preconditions and postconditions either as assertions or as exceptions?
b) We implement preconditions and postconditions in code as exceptions if not fulfilling preconditions or postconditions doesn't indicate logically impossible situations or programming errors?
c) We implement them in code as assertions when not fulfilling preconditions or postconditions does indicate logically impossible situations or programming errors?
d) Should preconditions and postconditions only be defined on public methods?
EDIT:
Aren't the following checks considered to be part of a normal operation ( and as I already mentioned, I've seen plenty of articles on DbC using similar examples for preconditions, where checks were made against arguments supplied by the user ), since if the user enters bad data, then without checks operation won't be rejected and as such system will stop working according to specs:
Link:
public User GetUserWithIdOf(int id,
UserRepository userRepository) {
// Pre-conditions
if (userRepository == null)
throw new ArgumentNullException(
"userRepository");
if (id <= 0)
throw new ArgumentOutOfRangeException(
"id must be > 0");
User foundUser = userRepository.GetById(id);
// Post-conditions
if (foundUser == null)
throw new KeyNotFoundException("No user with " +
"an ID of " + id.ToString() +
" could be located.");
return foundUser;
}
@gnat: I don't agree that it is a duplicate, since I'm asking more to the effect what the definition of assertions is and whether pre and post conditions can be implemented as both assertions and exceptions. My question is a response to http://people.cs.aau.dk/~normark/oop-csharp/pdf/contracts.pdf claiming that class contract is the sum of the assertions in the class and that DbC represents the idea of designing and specifying programs by means of assertions, which would suggest that pre and post-conditions can only be implemented as assertions and not also as exceptions
@gnat: but I already explained how this question is different and anybody reading this topic ( including Stephen C ) can easily see why other answers do not address my questions. And if you can't see the difference in the original answer, then you surely must notice the difference in questions that follow the original
a) In code we implement preconditions and postconditions either as assertions or as exceptions?
Strictly, no. The pre/postcondition is actually the test that raises the exception. The exception is merely the way of saying that pre/postcontion has been violated.
b) We implement preconditions and postconditions in code as exceptions if not fulfilling preconditions or postconditions doesn't indicate logically impossible situations or programming errors?
No. People often use explicit test / raise exception code to indicate a bug, etc.
c) we implement them in code as assertions when not fulfilling preconditions or postconditions does indicate logically impossible situations or programming errors?
You can do. But see above.
d) Should preconditions and postconditions only be defined on public methods?
No. If you are using them (formally) there is no reason to restrict them to public methods.
Actually, the key difference between the two approaches is that you can (typically) turn off checking of the "assertion" style of pre/postconditions (e.g. Java's assert statements) when you move your code into production. In the case of Java, this is typically done using a JVM command line option.
This means that you shouldn't use assertion style pre/postconditions for checks that are part of "normal operation" ... like validating user input. In fact, you probably shouldn't treat them as pre/postconditions at all. They are part of the program's active functionality (for the lack of a better way of describing it ...)
This is confusing. I've seen plenty articles using examples where pre post-conditions were used for checks that are part of a normal operation. Here's one such link: http://devlicio.us/blogs/billy_mccafferty/archive/2006/09/22/Design_2D00_by_2D00_Contract_3A00_-A-Practical-Introduction.aspx. I've also read that DbC uses preconditions in public contract of method, and argument checking in public contracts IS part of a normal operation
@EdvRusj - perhaps you are misunderstanding what I mean by part of normal operation. I mean checks that should never, ever under any circumstances be disabled. Like if you did disable them, the system stops working according to its spec. Like when the user enters bad data it is not rejected. By contrast, if you disabled pre/postcondition checking then the system works ... modulo that it behaves differently if a Bug happens.
Could you see my edit? And I'm sorry for dragging this, but I really would like to figure this out
You should define them as assumptions (in debugging, as assertions), so that if the pre- or postcondition isn't true you have a logical error in your program.
The key point here is avoiding the test in production code, to speed it up.
For example, a binary search can run in O(log n) time only if the array is already sorted (precondition), but testing this (for the exception) takes O(n), so if you test for that you might as well do a linear search and get an early return out of it.
They can be defined on any piece of code (i.e. anything that can be extracted into a method).
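The binary-search point above can be illustrated in Python, where `assert` plays the role of Java's assert statement (it is stripped under `python -O`, much as Java assertions are disabled without `-ea`):

```python
def binary_search(a, x):
    # The sortedness precondition is checked with an assertion: verifying
    # it costs O(n), so it should only run in debug builds; production
    # runs (python -O) skip it and keep the O(log n) bound.
    assert all(a[i] <= a[i + 1] for i in range(len(a) - 1)), "a must be sorted"
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < x:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(a) and a[lo] == x else -1
```

If the same check were raised as an exception that must always run, the O(n) scan would defeat the purpose of using binary search at all — exactly the trade-off the answer describes.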
Show UIActivityIndicatorView when loading NSString from Web
Hi, in my app I load an NSString from the Internet using NSURL, and then a label shows the text. If I press the button to load the string, it becomes highlighted and stays highlighted for a couple of seconds. During that time I want a UIActivityIndicatorView to show up, to inform the user that the app is actually doing something. I've tried just adding [activity startAnimating]; to the IBAction, but it only starts animating when the button is back to the default state, not while it's highlighted. I've also tried
if ([button state] == UIControlStateHighlighted) {
[activity startAnimating];
}
but it doesn't work.
Awesome, now it works! Thanks a lot! You forgot to put [spinner startAnimating] into the code :D. There was a bug where the app would crash if you pressed the button many times in a row, so this got rid of it:
- (IBAction)load:(id)sender {
if ([act isAnimating]) {
}
else {
ASINetworkQueue *queue = [ASINetworkQueue queue];
ASIHTTPRequest *usdRequest = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADUSD=X&f=l1"]];
ASIHTTPRequest *eurRequest = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=l1"]];
ASIHTTPRequest *dateRequest = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=d1"]];
ASIHTTPRequest *timeRequest = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=t1"]];
[queue addOperation:usdRequest];
[queue addOperation:eurRequest];
[queue addOperation:dateRequest];
[queue addOperation:timeRequest];
[queue setQueueDidFinishSelector:@selector(parseLoadedData:)];
[queue setRequestDidFinishSelector:@selector(requestLoaded:)];
[queue setDelegate:self];
[queue go];
[act startAnimating];
}
}
- (void)requestLoaded:(ASIHTTPRequest *)request {
if([[request url] isEqual:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADUSD=X&f=l1"]]) {
usdString = [[request responseString] retain];
} else if([[request url] isEqual:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=l1"]]) {
eurString = [[request responseString] retain];
} else if([[request url] isEqual:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=d1"]]) {
dateString = [[request responseString] retain];
} else if([[request url] isEqual:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=t1"]]) {
timeString = [[request responseString] retain];
}
}
- (void)parseLoadedData:(ASIHTTPRequest *)request {
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
NSString *Date = [dateString stringByReplacingOccurrencesOfString:@"\"" withString:@""];
NSString *Time = [timeString stringByReplacingOccurrencesOfString:@"\"" withString:@""];
NSString *total = [NSString stringWithFormat:@"%@ %@",Date,Time];
if ([eurString length] == 0) {
test.text = [defaults objectForKey:@"date"];
eur.text = [defaults objectForKey:@"eur"];
usd.text = [defaults objectForKey:@"usd"];
} else {
test.text = total;
eur.text = eurString;
usd.text = usdString;
[defaults setObject:test.text forKey:@"date"];
[defaults setObject:usd.text forKey:@"usd"];
[defaults setObject:eur.text forKey:@"eur"];
}
[defaults synchronize];
[eurString release];
[usdString release];
[dateString release];
[timeString release];
[act stopAnimating];
}
Need more code... :) Please post IBAction invoked by your button.
the ibaction is here: http://www.iphonedevsdk.com/forum/351611-post12.html
And where do you put your [activity startAnimating];?
oh, just at the very top, but as said before it does not work :(
I've merged your two accounts together. Please read this Faq entry about cookie-based accounts. Also, StackOverflow isn't a forum; if you have a new question, please ask a new question. If you want to include more information in your question, please edit it. If you want to interact with one of the people who has answered, you can leave them a comment.
I think that you should rewrite your code. Maybe I'll do it for you. :)
First of all, download the ASIHTTPRequest library. It's a great library for working with network files. I think that you should use a queue for this.
Then put this code in your view controller:
- (IBAction)buttonClicked:(id)sender {
ASINetworkQueue *queue = [ASINetworkQueue queue];
ASIHTTPRequest *usdRequest = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADUSD=X&f=l1"]];
ASIHTTPRequest *eurRequest = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=l1"]];
ASIHTTPRequest *dateRequest = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=d1"]];
ASIHTTPRequest *timeRequest = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=t1"]];
[queue addOperation:usdRequest];
[queue addOperation:eurRequest];
[queue addOperation:dateRequest];
[queue addOperation:timeRequest];
[queue setQueueDidFinishSelector:@selector(parseLoadedData:)];
[queue setRequestDidFinishSelector:@selector(requestLoaded:)];
[queue setDelegate:self];
[queue go];
}
- (void)requestLoaded:(ASIHTTPRequest *)request {
if([[request url] isEqual:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADUSD=X&f=l1"]]) {
usdString = [[request responseString] retain];
} else if([[request url] isEqual:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=l1"]]) {
eurString = [[request responseString] retain];
} else if([[request url] isEqual:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=d1"]]) {
dateString = [[request responseString] retain];
} else if([[request url] isEqual:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADEUR=X&f=t1"]]) {
timeString = [[request responseString] retain];
}
}
- (void)parseLoadedData:(ASIHTTPRequest *)request {
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
NSString *Date = [dateString stringByReplacingOccurrencesOfString:@"\"" withString:@""];
NSString *Time = [timeString stringByReplacingOccurrencesOfString:@"\"" withString:@""];
NSString *total = [NSString stringWithFormat:@"%@ %@",Date,Time];
if ([eurString length] == 0) {
test.text = [defaults objectForKey:@"date"];
eur.text = [defaults objectForKey:@"eur"];
usd.text = [defaults objectForKey:@"usd"];
} else {
test.text = total;
eur.text = eurString;
usd.text = usdString;
[defaults setObject:test.text forKey:@"date"];
[defaults setObject:usd.text forKey:@"usd"];
[defaults setObject:eur.text forKey:@"eur"];
}
[defaults synchronize];
[eurString release];
[usdString release];
[dateString release];
[timeString release];
[yourSpinner stopAnimating];
}
In your header file declare theese objects:
NSString *usdString;
NSString *eurString;
NSString *dateString;
NSString *timeString;
I think that it will work. ;)
EDIT: I updated the code so that it works. I checked it myself. My method of loading your data is faster, safer and more efficient.
hey, I can't get the code to work. I've downloaded the library and followed the install instructions (http://allseeing-i.com/ASIHTTPRequest/Setup-instructions), but if I add the code to my project I get a lot of errors :( Please help me out :D
Hey, I have got rid of the problems, but the app doesn't seem to run the - (void) part. I've added all of the files in the download folder and the frameworks, and in the header I've added #import "ASINetworkQueue.h" and #import "ASIHTTPRequest.h". The only problem is that the -(void)parseLoadedData part isn't run.
no, none; the spinner starts to spin but doesn't stop, and I've added an alert to -(void)parseLoadedData and it's not showing up, so the code isn't being run. Do I have to import any other files, or what?...
Hmm. Can you put NSLogs inside requests' completion blocks? Does console display all of them (4)?
sorry, where exactly? I'm not very familiar with NSLogs. Do I just have to put one after every __block ASIHTTPRequest *usdRequest = [ASIHTTPRequest requestWithURL:[NSURL URLWithString:@"http://download.finance.yahoo.com/d/quotes.csv?s=CADUSD=X&f=l1"]]; line? If yes, then it works; all 4 are shown.
@Maxner: I updated my code. I double-checked it and it works. Try it and tell me if your spinner stops ;)
Angular- How to add different style to a reusable component?
I have made a table where the user can add data. The input field is a reusable component.
I want to increase the width and height of the highlighted input field while decreasing the width of the ones inside the table. How do I do it?
This is the stackblitz link
There are two suggestions.
You can set the width of this component to 100% and then control the width of the component from where it is being used. (Only works for width)
width: 100%;
You can make the height and width as @Input fields of the component and also set up some default values so that you don't need to give the height and width every time you use the component.
reusable-component.ts
@Input()
height = 100;
@Input()
width= 100;
reusable-component.html
<div
  [style.width.px]="width"
  [style.height.px]="height">
</div>
parent.component.html
<!-- When you want a custom height and width -->
<reusable [height]="200" [width]="300"></reusable>
<!-- When you want the default height and width -->
<reusable></reusable>
Should I set this height in the reusable component?
Yes, In the second point I'm referring to the reusable component .ts and .html file
I have done that. Check the stackblitz link. You will find @Input() textSize: string = '1rem'; in the shared component. If I increase that textSize, all the text sizes increase. I don't want that. I only want to increase it in a specific case.
Seems you might not understand the purpose of the @Input() functionality. I have edited my answer to show how to use it; that way you get the default height and width when needed, and if you want to change the default values you can provide those values as inputs when using the reusable component.
Is it possible to make the width dynamic? Check the link The first app-shared has a fixed width. I want to make it dynamic. Is it possible?
Yes. You can use it without providing the width input when you want the default value used inside the app-shared component. In case you want to use a different value, you can provide the width value: <app-shared [width]="2rem">. I would also suggest you use numbers instead of string values, as I have done in my answer.
Thanks @Jithendra Would you mind giving one upvote to my question. Will be of great help
Tutorial on how to setup multiple laravel 5.1 apps in one project
I am currently working on a Laravel 5.1 project which involves a public section and a admin section. When googling around the issue I was having, I came across this stack post managing user role log in. Where the first post recommends.
Admin and Users in the same laravel app is a bad idea simply because the app will share the same session and storage information. There will be a myriad of edge cases that will cause information to bleed through any logic "walls" you set up and you'll end up spending way too much time patching those bleeds. What you really want to do is set up separate laravel applications for each: admin.project.com & project.com. That way you get two separate sessions and storage. All you need to do is ensure that the databases you need are set up in both database.php config files. You can even host BOTH projects on the same server with separate deployments, listening to different ports. TRUST ME this is the best way to go.
Could someone explain in detail how it could be done — I mean, how should I set up my project? I know it is easy for the two apps to share the same DB, and setting that up would be easy. First question: how can I have admin.mysite.com as my admin section and www.mysite.com as my public section for my two apps?
Also, how do I set this up in Azure as a web app? I got the one app I currently have deploying on Azure somehow (there are no 5.1 guides on the internet).
So could someone explain in detail what the project setup should look like and how it should be done? I could not find guides for Laravel 5.1, and since the 5.1 setup is different from 5 and 4.*, I'm not sure how to continue.
Personally I disagree with this. Properly managed authorisation and access privileges will (have to) work. It's principally the same as having a role/access hierarchy for your users. If you can't rely on that to work then the whole app is broken. I have had a lot of success using the awesome Sleeping-Owl Admin package, which has its own authorisation that deals with most of the issues described: http://sleeping-owl.github.io/
@MikeMiller So how would you handle different user types using the same login? rewriting the redirect path according to the user type like the link I posted above suggests or what would you recommend?
So if I understand, you want to have users that hold both admin and user roles? This wouldn't be possible using Sleeping Owl's package, as admin users are completely separated (different auth, session namespace and DB table). You just want a role/access hierarchy. Protect your routes and only allow users with the right roles to access them. Maybe try this package: https://github.com/romanbican/roles. I used it a while ago for something simple and it worked pretty well.
Right now I am using middleware to protect my routes @MikeMiller
Out of the box? Check the middleware provided in the package above it will give you some idea
Per your description, it seems you need to deploy 2 apps with custom domains. If so, I think you should deploy your admin and user apps separately on 2 Azure Web Apps services, which is beneficial for management, deployment and scaling of each side of your application. To configure subdomains for your site, you can refer to Azure Websites and wildcard domains and Mapping a custom subdomain to an Azure Web App (Website).
If you insist on deploy 2 apps on a single Azure Web Apps Service, you can try it with URL rewriting in IIS Web.config, E.g.
<rules>
  <rule name="RewriteRequestsAdmin" stopProcessing="true">
    <match url="^(.*)$" />
    <conditions>
      <add input="{HTTP_HOST}" pattern="^admin\.XXXX\.com$"/>
    </conditions>
    <action type="Rewrite" url="AdminApp/public/index.php/{R:0}" />
  </rule>
  <rule name="RewriteRequestsUser" stopProcessing="true">
    <match url="^(.*)$" />
    <conditions>
    </conditions>
    <action type="Rewrite" url="UserApp/public/index.php/{R:0}" />
  </rule>
</rules>
To deploy your local laravel project to Azure Web Apps, you can use Git or FTP tools; please refer to Create a PHP-MySQL web app in Azure App Service and deploy using Git. But by default, the dependency folder vendor and the composer files will not be deployed to Azure with the project, so we have to log in to the KUDU console site of your Azure Web App to install the dependencies. You can install Composer in the Site Extensions tab of your KUDU console site, whose URL should be https://<your_site_name>.scm.azurewebsites.net/SiteExtensions/#gallery, then run the command composer install in your app root directory.
Additionally, you can simply leverage cmdlets on your KUDU console site, whose URL should be https://<your-website-name>.scm.azurewebsites.net/DebugConsole, and run the following commands:
cd site\wwwroot
curl -sS https://getcomposer.org/installer | php
php composer.phar install
For deployment laravel on Azure, you can refer to the answer of laravel 5.1 on windows azure web application for more information.
| common-pile/stackexchange_filtered |
Should I wait for Django to start supporting Python 3?
I have a website idea that I'm very excited about, and I love Python. So, I'm interested in using Django. However, I started learning Python in version 3.1, and Django currently only supports various 2.x versions. I've searched for information about when Django will start supporting Python 3.x, and gotten mostly articles from a year or two ago that say it will take a year or two. Meanwhile, the Django FAQ says that there are no immediate plans.
I'm reluctant to build in an old version of Python and then either be stuck with it or go through a huge ordeal trying to migrate later. My original solution to this was to start learning PHP instead and then choose a framework, but as it turns out I don't really like PHP. Django it is, then.
If I decide to wait for a 3.x-compatible version, I can teach myself more front-end web development (and more Python) in the meantime. I've only been at this for a few months so I have a lot to learn. But I don't want to wait forever, and I realize that even after a 3.x-compatible version comes out it will take a while for third-party APIs to catch up. What do you all recommend?
No. Don't wait.
Why? Pretty much all Django libraries are written for Python 2.x, and if you plan on using any of them with Python 3 under the next major release of Django, then you'll be waiting not 1 but 3-4 years while everyone converts their code.
In that time, you could have already mastered Django, built and launched many sites, landed a Django gig, etc.
Start immediately and don't postpone!
Pretty accurate, given that with Django 1.5 there is experimental support, and 1.6 is planned for full support. And it's 2013! Nice prediction.
Python 2 will still live for a very long time. Actually, there's no really good reason to use Python 3 right now unless you need Python3 features which are not available as future imports and know that you won't ever need to use 3rd party modules which might not be Python3-compatible.
So the best solution is doing your app now using Python 2 instead of waiting; especially if the app will be useful for you now.
I recommend you learn the frameworks on the old version now, and let 2to3 figure it out when the time comes.
Powershell command to see what Exchange calendars a user has access to
I'm trying to find a PowerShell command that would show me what calendars a certain user has permissions to.
I can use Get-MailboxFolderPermission -identity “User:\Calendar” to find what permissions are set on that specific mailbox but what I need is sort of the reverse.
I have Exchange 2010.
Try something such as Get-MailboxFolderPermission domain\username:\calendar | Select FolderName, User, AccessRights. I don't have an Exchange 2010 server to run this against, but look over https://practical365.com/exchange-server/list-users-access-exchange-mailboxes/ when you get a chance; you may find something helpful there.
The below command works for me in my environment:
Get-Mailbox | % { Get-MailboxFolderPermission (($_.PrimarySmtpAddress.ToString())+":\Calendar") -User *user1* -ErrorAction SilentlyContinue} | select Identity,User,AccessRights
It will list all mailboxes on whose Calendar user1 has additional permissions.
Refer to: http://www.michev.info/Blog/Post/1516/quickly-list-all-mailboxes-to-which-a-particular-user-has-access
This did not work for me (Exchange 2010). The command executed without errors but nothing was returned, despite the fact I have users that should be found.
This only works for primary calendars, not secondary calendars.
You can try this script.
https://o365reports.com/2021/11/02/get-calendar-permissions-report-for-office365-mailboxes-powershell/
Sample Output:
Additionally, the script can generate 6 different calendar permission reports based on the parameters you pass while executing it.
Sorry for the downvote. Please post the script here. You can still leave the link to it. Links can disappear making your answer useless in the future. I will upvote it then.
Integrating: $\int_0^\infty \frac{\sin (ax)}{e^x + 1}dx$
I am trying to evaluate the following integral using the method of contours, but I am not able to. Can anyone point out what mistake I am making?
$$\int_0^\infty \frac{\sin ax}{e^x + 1}dx$$
I am considering the following contour and the function $\displaystyle f(z):= \frac{e^{iaz}}{e^z + 1}$.
The poles, each of order $1$, occur at odd multiples of $i\pi$. With the above contour (which detours around $i\pi$) there is no singularity inside. The integral can be broken down into six parts.
$$\int_0^R \frac{e^{iax}}{e^x + 1} dx + i \int_0^{2\pi} \frac{e^{ia(R + iy)}}{e^{R + iy} + 1} dy + \int_{R}^{0}\frac{e^{ia(x+2\pi i)}}{e^{x + 2 \pi i } + 1} dx + \\ i \int_{2 \pi }^{\pi + \epsilon} \frac{e^{ai( iy)}}{e^{ iy } + 1}dy + \int_\gamma \frac{e^{iaz}}{e^z + 1} dz + i \int_{ \pi -\epsilon}^{0} \frac{e^{ia( iy)}}{e^{ iy } + 1}dy$$
The first and third give $\displaystyle (1 - e^{-2 a\pi})\int_0^R\frac{e^{iax}}{e^x + 1} dx$. The second goes to $0$ as $R \to \infty$.
For the fifth integral, $$\int_\gamma \frac{e^{iaz}}{e^z + 1} dz = \int_{-\pi/2}^{\pi/2} \frac{e^{ia\pi + a\epsilon e^{i\theta}}}{e^{i\pi + \epsilon e^{i\theta}+1}}i \epsilon i e^{i\theta }d\theta \to 0 \text{ as } \epsilon \to 0$$
The real parts of the fourth and sixth integrals do not converge. But since my original integral is the imaginary part, it suffices to take imaginary parts. As $\epsilon \to 0$, I get
$$i\int_{2\pi }^0 \Re \left [\frac{e^{-ay}}{e^{iy} + 1} \right] dy = i \int_{2\pi}^0 \frac{e^{-ay}}{2}dy = i \frac{e^{-2\pi a} - 1}{2a}$$
Finally, using the residue theorem, I am getting the following, which is incorrect.
$$(1 - e^{-2 a\pi})\int_0^\infty \Im \left [\frac{e^{iax}}{e^x + 1} \right ] dx +\frac{e^{-2\pi a} - 1}{2a} = 0$$
Can anyone point out my mistake or give worked out solution?? Thanks in advance!!
ADDED::
I evaluated the fifth integral incorrectly:
$$\int_\gamma \frac{e^{iaz}}{e^z + 1} dz = \int_{-\pi/2}^{\pi/2} \frac{e^{ia(i\pi + \epsilon e^{i\theta})}}{e^{i\pi + \epsilon e^{i\theta}}+1}i \epsilon e^{i\theta }d\theta = ie^{-a\pi}\int_{\pi/2}^{-\pi/2}\frac{e^{ia\epsilon e^{i\theta}}}{-e^{\epsilon e^{i\theta}} + 1} \epsilon e^{i\theta}d\theta = i \pi e^{-a\pi}$$
So the total sum should be
$$(1 - e^{-2 a\pi})\int_0^\infty \Im \left [\frac{e^{iax}}{e^x + 1} \right ] dx +\frac{e^{-2\pi a} - 1}{2a} +\pi e^{-a\pi}= 0 $$
After slight manipulation we find that
$$\int_0^\infty \frac{\sin ax}{e^x + 1}dx = -\frac{\pi}{2\sinh (\pi a)} +\frac{1}{2a}$$
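As a quick numeric sanity check on the closed form above (a sketch using only the standard library; Simpson's rule on a truncated interval, which is safe because the integrand decays like $e^{-x}$):

```python
import math

def lhs(a, upper=40.0, n=40000):
    """Composite Simpson's rule for the integral of sin(a*x)/(e^x + 1)
    over [0, upper]; truncating at upper = 40 loses less than 1e-17."""
    h = upper / n
    f = lambda x: math.sin(a * x) / (math.exp(x) + 1.0)
    s = f(0.0) + f(upper)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

def rhs(a):
    # Closed form derived above: 1/(2a) - pi / (2 sinh(pi a))
    return 1.0 / (2.0 * a) - math.pi / (2.0 * math.sinh(math.pi * a))
```

For $a = 1$ both sides agree to many digits, which supports the corrected residue computation.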
Woops!! I found one ... I think this is wrong: $$\int_\gamma \frac{e^{iaz}}{e^z + 1} dz = \int_{-\pi/2}^{\pi/2} \frac{e^{ia\pi + a\epsilon e^{i\theta}}}{e^{i\pi + \epsilon e^{i\theta}+1}}i \epsilon i e^{i\theta }d\theta \to 0 \text{ as } \epsilon \to 0$$
Since it is a simple pole, it goes to $- i \pi \text{Res}[f,i \pi]$.
And you have the integral limits reversed.
@RandomVariable I took a little detour at $i \pi$, so the residue must be zero.
The residue at $z= i \pi$ is nonzero. Taking a detour doesn't somehow cancel the residue there . That limit (with the integral limits reversed) goes to $- i \pi \text{Res}[f, i \pi]$. It's called the fractional residue theorem.
@RandomVariable Sorry, I didn't know ... could you check the edited solution above? Perhaps it is equal to $-\int_\gamma$ in my answer above. Thank you very much for the info. This way I can avoid calculating the above integral.
It looks OK. The theorem is just a shortcut. It's theorem number 9 at the following link: http://www.math.umn.edu/~edman/tex/CA_prelim.pdf
@RandomVariable thank you very much!!
Combine the 4th and 6th integrals to get the Cauchy principal value:
$$PV \int_{\pi}^0 dy \frac{e^{-a y}}{1+e^{i y}} = PV \int_{\pi}^0dy \frac{e^{-a y}}{2 \cos{(y/2)}} e^{-iy/2} $$
You still need to evaluate the imaginary part of the integral.
How to handle white screen and clear the previous web cache when using WebView to display a page in android?
I use a WebView to load a link when the user clicks an item in a ListView. At first it displays a blank screen while loading the page. When the user hits the Back button and clicks a new item in the ListView, Android shows the previously loaded page and takes a while to display the new one.
This is confusing to the user. How do I handle the blank white screen on first load, and the caching problem, when using a WebView?
<WebView xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/webview"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
/>
Java code :
NewsListView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> adapterView, View view, int position, long l) {
// Find the current earthquake that was clicked on
News currentNews= mAdapter.getItem(position);
String newsUrl = currentNews.getUrl();
WebView mWebView = (WebView) rootView.findViewById(R.id.webview);
mWebView.setVisibility(View.VISIBLE);
mWebView.loadUrl(newsUrl);
}
});
Use a WebViewClient so the page loads in the current view.
I want to use the WebView for a fast experience.
Before loading the URL, show a progress dialog; then dismiss it when the WebViewClient's page-finished callback fires.
Load faster? That depends on the connection speed. What you can try is adding a progress bar to keep track of it.
// Assumed wrapper (the original snippet was missing the class declaration
// and the shouldOverrideUrlLoading signature):
private class MyWebViewClient extends WebViewClient {
    @Override
    public boolean shouldOverrideUrlLoading(WebView view, String url) {
        view.loadUrl(url);
        return true;
    }

    @Override
    public void onReceivedError(WebView view, int errorCode,
            String description, String failingUrl) {
        Toast.makeText(MainActivity.this, "Oh no! " + description, Toast.LENGTH_SHORT).show();
    }

    @Override
    public void onPageFinished(WebView view, String url) {
        super.onPageFinished(view, url);
        progressBar.setVisibility(View.GONE);
    }
}
// To handle "Back" key press event for WebView to go back to previous screen.
@Override
public boolean onKeyDown(int keyCode, KeyEvent event)
{
if ((keyCode == KeyEvent.KEYCODE_BACK) && web.canGoBack()) {
web.goBack();
return true;
}
else
{
finish();
return true;
}
}
}
Parametrize Vega or Vega-lite data url?
I am trying to create a Vega/Vega-lite data fetch based on a parameter/transform/signal defined later down in the chart specification. Is this possible? It would be of tremendous use with fetching data from parametric APIs.
E.g. instead of:
"data" : {"url" : "https://api.carbonintensity.org.uk/intensity/2021-12-04/fw48h"}
I would like:
"data" : {"url": "'https://api.carbonintensity.org.uk/intensity/'+myDate+'/fw48h'"}
"transform": [ {"calculate": "'2021-12-04'", "as":"myDate"} ]
The URL can be specified as Vega signal expression. Assuming myDate is a named signal with valid string value, try:
"data" : {"url": {"signal": "'https://api.carbonintensity.org.uk/intensity/' + myDate + '/fw48h'"},
"transform": ...
}
Here is an example of Vega tutorial
https://vega.github.io/vega/tutorials/airports/
with the part of url path for traffic data from a signal value.
View in Vega on-line editor
Thanks! This works flawlessly in Vega - so it's probably just a matter of a pull request to lift this up into the VL specification as well.
Toggling draft mode using ArcPy?
I'm curious if it is possible to toggle on and off 'draft mode' in layout view in ArcGIS using python scripting.
Does anyone have any ideas?
Python is preferred, but if there is another solution in a different language it would also be beneficial!
I'm not familiar with this feature but there seems to be some .net methods and properties:
IFrameElement.DraftMode Property
AND
IFrameDraw.DrawDraftMode Method
If it were possible to do this then there should be a property to do it in the DataFrame (arcpy.mapping) help.
There is not, and to me that makes sense because draft mode seems to be a graphic effect of the ArcMap application.
To do something which is effectively the same, I think you will need to write some ArcPy code to save your layers (perhaps as layer files) and remove them when you want to toggle them off, and to restore your layers by adding them back when you toggle them on.
Usage of boolean variable without comparison in while loop
boolean flag=true;
while(flag)
{
//code(flag=false;)
}
In the above code, the condition of the while loop is simply flag. How is the while condition satisfied here?
Here flag is assigned true; inside the while loop, simply flag is given. My question is: how is the while condition satisfied here?
while wants a boolean expression. flag is a simple expression that evaluates to whatever value is currently in the flag variable.
while(flag) is the shortened notation for while(flag == true).
A conditional expression needs to be a boolean. This could include using a constant (true), equality (==), inequality (!=, >, <), or method call (.equals()).
You already have a boolean variable, and this is a constant (not in the term that its value/reference cannot change), and therefore a valid conditional expression.
The while loop will run as long as the expression evaluates to true.
Below is the syntax of the while loop. It has a condition and a body. It repeats as long as the condition is true. The body of the loop performs the work and updates the condition when it needs to terminate the loop.
while(<condition>){
<body>
}
Here's an example:
Repeat until i reaches the value 10.
boolean reachTen = false;
int i=0;
while(! reachTen ){
System.out.println(i++);
if (i == 10) reachTen = true;
}
I often do not use a flag. Instead, I use break to terminate the loop.
int i = 0;
while(true){
System.out.println(i++);
if (i == 10) break;
}
Can we use simply flag inside the while condition? What does that mean?
It means that the while loop repeats until flag is false.
How would I solve an equation of the following type: $ ax+b = ce^{dx^2}$
I wish to solve an equation of the following type: $ ax+b = ce^{dx^2}$.
I know about Lambert W function, however I do not see any way of solving the equation with $x^2$ instead of $x$. Is this possible?
I think it is unlikely, since Mathematica cannot solve even the case $x+1=e^{x^{2}}$. Of course, Mathematica is not perfect, and I am no expert. If $b=0$, however, then you can solve it by squaring both sides, solving the resulting equation for $x^2$, and discarding extraneous solutions.
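To flesh out the $b = 0$ case mentioned above (a sketch only; extraneous roots still need to be discarded by checking the original unsquared equation): squaring $ax = ce^{dx^2}$ gives $a^2x^2 = c^2e^{2dx^2}$. Substituting $u = x^2$,
$$a^2 u = c^2 e^{2du} \implies u\, e^{-2du} = \frac{c^2}{a^2} \implies (-2du)\, e^{-2du} = -\frac{2dc^2}{a^2},$$
so $-2du = W\!\left(-\frac{2dc^2}{a^2}\right)$ and
$$x = \pm\sqrt{-\frac{1}{2d}\, W\!\left(-\frac{2dc^2}{a^2}\right)},$$
with real solutions only when the argument of $W$ is at least $-1/e$ (each real branch of $W$ gives a candidate root).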
Maybe useful https://math.stackexchange.com/questions/2309691/equations-solvable-by-lambert-function
Jaeger agent unable to connect to collector
I have started Jaeger standalone binary on a Linux box and am trying to run Jaeger agent binary on Mac that tries to connect to the Jaeger collector of the standalone process. However it keeps failing with "error":"tchannel error ErrCodeTimeout: timeout".
The problem is not with different OS versions as I get the same error when trying from another Linux box. I used telnet to confirm that the collector port was open for connection.
The stack trace is below-
./cmd/agent/agent- --collector.host-port=172.xx.2.4:14267
{"level":"info","ts":1542954225.5485492,"caller":"tchannel/builder.go:94","msg":"Enabling service discovery","service":"jaeger-collector"}
{"level":"info","ts":1542954225.5489438,"caller":"peerlistmgr/peer_list_mgr.go:111","msg":"Registering active peer","peer":"172.xx.2.4:14267"}
{"level":"info","ts":1542954225.5502574,"caller":"agent/main.go:62","msg":"Starting agent"}
{"level":"info","ts":1542954226.5518098,"caller":"peerlistmgr/peer_list_mgr.go:157","msg":"Not enough connected peers","connected":0,"required":1}
{"level":"info","ts":1542954226.552439,"caller":"peerlistmgr/peer_list_mgr.go:166","msg":"Trying to connect to peer","host:port":"172.xx.2.4:14267"}
{"level":"error","ts":1542954226.8054206,"caller":"peerlistmgr/peer_list_mgr.go:171","msg":"Unable to connect","host:port":"172.xx.2.4:14267","connCheckTimeout":0.25,"error":"tchannel error ErrCodeTimeout: timeout","stacktrace":"github.com/jaegertracing/jaeger/pkg/discovery/peerlistmgr.(*PeerListManager).ensureConnections\n\t/Users/swarnim/go/src/github.com/jaegertracing/jaeger/pkg/discovery/peerlistmgr/peer_list_mgr.go:171\ngithub.com/jaegertracing/jaeger/pkg/discovery/peerlistmgr.(*PeerListManager).maintainConnections\n\t/Users/swarnim/go/src/github.com/jaegertracing/jaeger/pkg/discovery/peerlistmgr/peer_list_mgr.go:101"}
Possible Unhandled Promise Rejection (id: 0) React Native
I am trying to upload files to AWS S3 bucket from my react native app but I get this error ->
Possible Unhandled Promise Rejection (id: 0)
Here is the code:
import React from 'react';
import { View, Text, StyleSheet, TouchableOpacity } from 'react-native';
import DocumentPicker from 'react-native-document-picker';
import { RNS3 } from 'react-native-aws3';
import Colors from '../constants/Colors';
const Upload = (props) => {
async function openDocumentFile() {
try {
const res = await DocumentPicker.pickSingle({
type: [DocumentPicker.types.allFiles],
});
console.log(
res.uri,
res.name,
res.type,
res.size
);
const file = {
uri: res.uri,
name: res.name,
type: res.type,
}
const options = {
keyPrefix: "uploads/",
bucket: '',
region: 'ap-south-1',
accessKey: '',
secretKey: '',
successActionStatus: 201
}
RNS3.put(file, options)
.then(response => {
if (response.status !== 201){
console.log(response.status);
throw new Error("Failed to upload image to S3");
}
console.log(response.body);
});
}
catch (err) {
if(DocumentPicker.isCancel(err)) {
console.log("user cancelled");
}
throw err;
}
}
return (
<View style={styles.view}>
<TouchableOpacity style={styles.button} onPress={openDocumentFile}>
<Text style={styles.text}>Upload Documents</Text>
</TouchableOpacity>
</View>
)
}
export default Upload;
const styles = StyleSheet.create({
view: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
},
button: {
justifyContent: 'center',
alignItems: 'center',
color: Colors.white,
padding: 10,
borderColor: Colors.primary,
borderWidth: 1,
},
text: {
color: Colors.primary,
}
})
Here is the output I get which shows that the file is correctly selected but there is error in the code to upload the file to AWS-S3:
LOG content://com.android.providers.downloads.documents/document/msf%3A31 helloWorld.jpeg image/jpeg 26150
LOG 400
WARN Possible Unhandled Promise Rejection (id: 0):
Error: Failed to upload image to S3
I checked other questions and answers, but I didn't find a solution. How can I solve this error?
You're surrounding the problematic code with try/catch, which is good - but then you're re-throwing the error:
catch (err) {
if(DocumentPicker.isCancel(err)) {
console.log("user cancelled");
}
throw err;
}
which will result in an unhandled rejection unless there's another try/catch or .catch around that.
Only throw if there's something higher up on the call stack that can handle it. In this case, there isn't - openDocumentFile should handle everything itself - so don't re-throw in case of there's a problem.
catch (err) {
if(DocumentPicker.isCancel(err)) {
console.log("user cancelled");
}
}
You also need to properly await the call to RNS3.put - right now, it's not being waited for, so it's not connected with the try/catch. This:
RNS3.put(file, options)
.then(response => {
if (response.status !== 201){
console.log(response.status);
throw new Error("Failed to upload image to S3");
}
console.log(response.body);
});
should be
const response = await RNS3.put(file, options)
if (response.status !== 201){
console.log(response.status);
throw new Error("Failed to upload image to S3");
}
console.log(response.body);
Generally, don't mix await and .then unless you understand Promises completely and know what you're doing - otherwise, to avoid confusing yourself, I'd recommend using only one or the other in a particular segment of code. Either await (and try/catch), or use .then, but not both in the same section of asynchronous logic.
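To make the failure mode concrete, here is a small, library-free sketch (fakePut is a hypothetical stand-in for RNS3.put, not the real API) showing how awaiting inside try/catch keeps the rejection handled:

```javascript
// fakePut is a hypothetical stand-in for RNS3.put: it resolves
// with an object carrying a status code.
function fakePut(status) {
  return Promise.resolve({ status });
}

async function openDocumentFile(status) {
  try {
    const response = await fakePut(status); // awaited, so a throw lands in catch
    if (response.status !== 201) {
      throw new Error("Failed to upload image to S3");
    }
    return response.status;
  } catch (err) {
    // Handle locally instead of re-throwing: nothing above us will catch it.
    return "handled: " + err.message;
  }
}
```

With this shape, openDocumentFile(400) resolves to the handled message rather than producing an unhandled rejection warning.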
Calculating AIC number manually Given a distribution of data and some distribution string
Suppose I have the following data:
array([[0.88574245, 0.3749999 , 0.39727183, 0.50534724],
[0.22034441, 0.81442653, 0.19313024, 0.47479565],
[0.46585887, 0.68170517, 0.85030437, 0.34167736],
[0.18960739, 0.25711086, 0.71884116, 0.38754042]])
and knowing that this data follows a normal distribution, how do I calculate the AIC number?
The formula is
2K - 2log(L)
K is the total number of parameters; for the normal distribution I count 3 (mean, variance and residual). I'm stuck on L: L is supposed to be the maximized likelihood function, and I'm not sure what to pass in there for data that follows a normal distribution, or for Cauchy or exponential. Thank you.
Update: this question appeared in one of my coding interview.
Why am I getting a downvote? Is there anything I can improve?
Hi @szd116, I did not downvote. So if you have some data and you postulate a normal distribution with a certain mean and variance, you can calculate the AIC, but you usually use it to compare two models.
My point is, with the AIC of one model alone, you cannot say anything about the fit.
@StupidWolf I agree, a single AIC number without comparing different models may not make much sense, but this was a coding test in one of my recent interviews. It asked me to calculate the AIC value for the normal, exponential and Cauchy distributions. Thanks.
For a given normal distribution, the probability density of y given mean and sd is:
import scipy.stats
def prob( y = 0, mean = 0, sd = 1 ):
return scipy.stats.norm( mean, sd ).pdf( y )
For example, given mean = 0 and sd = 1, the probability density at the value 0 is prob( 0, 0, 1 )
If we have the set of values 0 through 8, the log likelihood is the sum of the logs of these probabilities; in this case the best parameters are the mean of x and the standard deviation of x, as in:
import numpy as np
x = range( 9 )
logLik = sum( np.log( prob( x, np.mean( x ), np.std( x ) ) ) )
Then AIC is simply:
K = 2
2*K - 2*( logLik )
For the data you provide, I am not so sure what the rows and columns reflect. Do you have to calculate a separate mean and StDev per column? It's not very clear.
Hopefully this above can get you started
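Putting the pieces above together, a minimal standard-library-only version for the normal case might look like this (here K = 2, counting mean and variance; adjust K if you count a third parameter as in the question):

```python
import math

def normal_aic(data):
    """AIC = 2K - 2*logLik for a normal model fit by maximum likelihood."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n  # MLE variance uses 1/n
    log_lik = sum(
        -0.5 * math.log(2.0 * math.pi * var) - (x - mu) ** 2 / (2.0 * var)
        for x in data
    )
    k = 2  # mean and variance
    return 2 * k - 2 * log_lik
```

A handy cross-check: at the MLE the log likelihood collapses to -n/2 * (log(2*pi*var) + 1), so AIC equals 2*k + n*(log(2*pi*var) + 1).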
This seems to make sense. I was a little confused during the interview about just summing all the logs; I've never done AIC this way. I thought the log likelihood function was used to estimate parameters?
Hi @szd116, yes, you are right in a way. You can, for example, maximize the log likelihood to obtain a so-called optimal estimate for the parameters. You can check out RobertDodier's answer below for a detailed explanation of each of the steps.
I think the interview question leaves out some stuff, but maybe part of the point is to see how you handle that.
Anyway, AIC is essentially a penalized log likelihood calculation. Log likelihood is great -- the greater the log likelihood, the better the model fits the data. However, if you have enough free parameters, you can always make the log likelihood greater. Hmm. So various penalty terms, which counter the effect of more free parameters, have been proposed. AIC (Akaike Information Criterion) is one of them.
So the problem, as it is stated, is (1) find the log likelihood for each of the three models given (normal, exponential, and Cauchy), (2) count up the free parameters for each, and (3) calculate AIC from (1) and (2).
Now for (1) you need (1a) to look up or derive the maximum likelihood estimator for each model. For normal, it's just the sample mean and sample variance. I don't remember the others, but you can look them up, or work them out. Then (1b) you need to apply the estimators to the given data, and then (1c) calculate the likelihood, or equivalently, the log likelihood of the estimated parameters for the given data. The log likelihood of any parameter value is just sum(log(p(x|params))) where params = parameters as estimated by maximum likelihood.
As for (2), there are 2 parameters for a normal distribution, mu and sigma^2. For an exponential, there's 1 (it might be called lambda or theta or something). For a Cauchy, there might be a scale parameter and a location parameter. Or, maybe there are no free parameters (centered at zero and scale = 1). So in each case, K = 1 or 2 or maybe K = 0, 1, or 2.
Going back to (1b), the data look a little funny to me. I would expect a one dimensional list, but it seems like the array is two dimensional (with 4 rows and 4 columns if I counted right). One might need to go back and ask about that. If they really mean to have 4 dimensional data, then the conceptual basis remains the same, but the calculations are going to be a little more complex than in the 1-d case.
Good luck and have fun, it's a good problem.
UrbanAirship using same RichPushUser after clear data/reinstall
I'm facing the problem that when I clear my app's data, or simply reinstall the app, on the next run I get a new RichPushUser credential, and thus my inbox is completely empty because it looks like a new user.
What I am trying to accomplish is to let the user use the same RichPushUser on multiple devices, so they can access the same inbox.
I tried setting a fixed alias, but that doesn't help. And the RichPushUser class only lets you get the id/password, not set them.
Any ideas?
Thanks
Alias is the way forward, what didn't work about it?
@weston I set the same alias every time (it is hardcoded, in fact), but every time I reinstall I get a new RichPushUser, new APID, new everything, and thus the inbox is empty. Is it working for you just by setting the alias?
And when you send to that alias? What happens?
And yes, works for us. We set the alias and send to that and it arrives.
@weston I have tried sending it directly to the APID as well as the "all devices" option, and messages are delivered, but after a reinstall they are not in the inbox. Also, I don't see an option in the Message composer (on the Urban Airship website) to send a push to an alias.
However, in the end we are going to send N messages to all devices, not one by one to each alias. If one of those devices reinstalls or clears data, or the user simply installs the app (with the same alias) on his tablet, he should be able to access his inbox.
Maybe I am missing something. Thanks for your help!
Php cake post request
I am developing RESTful services with CakePHP. I have developed all the GET services, but I am stuck on the POST services.
The problem is that I am unable to get the POST data. I am getting nothing in $this->data.
Here is code of one controller function.
function signin() {
$this->view = 'Webservice.Webservice';
$message = "Request Received";
if (!empty($this->data)) {
$message = "Request has data";
}
$this->set(compact('message'));
}
Here is my request data
POST http://localhost/blog/posts/signin HTTP/1.1
User-Agent: Fiddler
Host: localhost
Content-Length: 25
username=abc&password=abc
Please help me out here.
Thanks,
Cake only populates the request data array from data sent to PHP that is prefixed with 'data'.
POST your username and password like this, instead:
data[username]=abc&data[password]=abc
This will tell Cake to place it in $this->request->data.
Thank you very much for your reply, but this is not working either.
Are you using the Security component? Is it being blackholed? Have you inspected the request/response via firebug? If you're on 2.x, have you made sure to use $this->request->data?
I am not using Security component. I am using 1.3 version. Yes, I inspected request/response with firebug
Have you tried turning off checkAgent in config/core.php? Find Configure::write('Session.checkAgent', true); and set it to false.
No luck with that also. I am getting data with $message = file_get_contents('php://input'); but not with this->data
How are you POSTing it (can you add that to the question)? Is $_POST populated? $HTTP_RAW_POST_DATA? Sounds like it might be a PHP config issue.
Thank you very much. My problem has been resolved: I set Content-Type: application/x-www-form-urlencoded, and that change solved it. Now I am getting the data properly.
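For reference, combining both fixes (the data[...] key prefix from the answer and the explicit Content-Type header), the original Fiddler request would become something like this (Content-Length recomputed for the new body):

```http
POST http://localhost/blog/posts/signin HTTP/1.1
User-Agent: Fiddler
Host: localhost
Content-Type: application/x-www-form-urlencoded
Content-Length: 37

data[username]=abc&data[password]=abc
```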
let us continue this discussion in chat
vkCreateSwapchainKHR Error
Even though validation layers and debug callback extensions are enabled and working (they respond to wrong structs etc.), I'm still getting a "VK_ERROR_VALIDATION_FAILED_EXT" result from vkCreateSwapchainKHR(), and there's no validation layer error to pinpoint the mistake...
Swap chain creation (using GTX 970 ) :
VkBool32 isSupported = false;
vkGetPhysicalDeviceSurfaceSupportKHR( physicalDevices[0], 0, surface, &isSupported);
if (!isSupported) {
std::cout << "*ERROR* This device doesn't support surfaces" << std::endl;
}
VkSurfaceCapabilitiesKHR surfCaps;
vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevices[0], surface, &surfCaps);
std::vector<VkSurfaceFormatKHR> deviceFormats;
uint32_t formatCount;
vkGetPhysicalDeviceSurfaceFormatsKHR(physicalDevices[0], surface, &formatCount, nullptr);
deviceFormats.resize(formatCount);
vkGetPhysicalDeviceSurfaceFormatsKHR(physicalDevices[0], surface, &formatCount, deviceFormats.data());
swapChainInfo.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
swapChainInfo.pNext = nullptr;
swapChainInfo.flags = 0;
swapChainInfo.surface = surface;
swapChainInfo.minImageCount = surfCaps.minImageCount;
swapChainInfo.imageFormat = VK_FORMAT_B8G8R8A8_UNORM;
swapChainInfo.imageColorSpace = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR;
swapChainInfo.imageExtent = surfCaps.currentExtent;
swapChainInfo.imageArrayLayers = 1;
swapChainInfo.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
swapChainInfo.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE;
swapChainInfo.queueFamilyIndexCount = 0;
swapChainInfo.pQueueFamilyIndices = VK_NULL_HANDLE;
swapChainInfo.preTransform = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR;
swapChainInfo.compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
swapChainInfo.presentMode = VK_PRESENT_MODE_FIFO_KHR;
swapChainInfo.clipped = VK_TRUE; // TODO : TEST clipping against another window
swapChainInfo.oldSwapchain = VK_NULL_HANDLE;
result = vkCreateSwapchainKHR( device, &swapChainInfo, nullptr, &swapChain );
if (result) {
std::cout << "*ERROR* Swapchain Creation Failed :" << result << std::endl;
}
Surface Creation using GLFW (Which doesn't return any error):
if (result = glfwCreateWindowSurface(instance, window, nullptr, &surface))
{
std::cout << "*ERROR* Surface Creation Failed : " << result << std::endl;
}
Does it return the same error with layers disabled?
There are a huge number of reasons validation of vkCreateSwapchainKHR will fail (check out PreCallValidateCreateSwapchainKHR in core_validation.cpp).
There may not be enough code here to tell why it is failing, for example, the failure might be because of an invalid surface. But, to pinpoint the problem, it should give a failure message in the debug log, which will tell you exactly why. You should enable it by calling CreateDebugReportCallbackEXT, before trying to create the swapchain. This will also require you to enable the VK_EXT_debug_report extension. See here for details.
Thanks for the reply. I do get validation errors in the console when I use unavailable/invalid features in structs, but I didn't manage it with CreateDebugReportCallbackEXT; it may reveal more, so I'll check it.
Using CreateDebugReportCallbackEXT I got the validation error "internal drawable creation failed" which led me to this : https://stackoverflow.com/questions/41379529/wat-does-the-vkcreateswapchainkhrinternal-drawable-creation-failed-means
Just a few points about your code (each a potential cause of problems):
You are not checking VkResult of all the vkGet* commands
You are not checking swapChainInfo.imageFormat against supported formats in your deviceFormats
You are not accounting for the situation that surfCaps.currentExtent can be 0xFFFFFFFF
swapChainInfo.pQueueFamilyIndices is a pointer not a handle; use nullptr
You are not checking swapChainInfo.preTransform against surfCaps.supportedTransforms
You are not checking swapChainInfo.compositeAlpha against surfCaps.supportedCompositeAlpha
Thanks for the reply. Capturing the result of previous commands is a good point; I'll check it out once I rewrite the whole thing (I accidentally deleted the project folder...). As for those checks you mentioned: as far as I tested them, they're all available on my graphics card/surface, and using the wrong features causes a validation error.
c̶a̶u̶s̶e̶s̶ should cause a validation error. Do not assume the validation layers are complete and bugless.
Thanks for help, issue was GLFW creating an OpenGL context.
Is the phrase "albeit the fact that..." correct?
I recently came across the phrase albeit the fact in a book I was reading, and, as a professional editor, I was immediately struck by how ungrammatical it sounded to me. So I searched Google Books and found a shocking number of published works containing this phrase — mostly government and scientific reports — which led me to think that it's primarily being used by people who want/need to sound intelligent, but don't really know how to use albeit. I searched all the trusted dictionaries and couldn't find one instance of this phrase, and the more I replay it, the worse it sounds. Am I missing something?
Here's an example from The Ellsworth American:
On the matter of Mr. Thomas’s scathing remarks concerning the managing
editor of The Ellsworth American, first of all, let it be said that
the reputation of the American speaks for itself. Beyond its having
been acclaimed several times over as the best weekly publication in
Maine, it ranks the best source for balanced political reporting of
any publication in the state, albeit the fact that its readership and
contributors to letters to the editor and opinion pieces are generally
of a liberal-progressive mindset. So be it, but it’s nice to see the
conservative point of view come through once in a while.
I think this is a misuse of "albeit". A typical use is:
He was making progress, albeit rather slowly.
This essentially means the same thing as
He was making progress rather slowly.
but emphasizes that the slowness is unexpected.
In your quoted phrase, they seem to be using "albeit" as a synonym for "despite". And they could dispense with the clumsy "the fact that" phrase entirely if they used "even though".
It's not strictly speaking incorrect. However, I tend to agree with your belief that it's "primarily being used by people who want/need to sound intelligent" as there are more efficient and less clumsy ways of stating the same idea; i.e., replace "albeit the fact that" with "although".
how to print special characters which are a string in variable with php?
I'm making this chat server, but it doesn't work quite well. When you send a piece of text, it first gets encoded by the function base64_encode() and then gets sent to a MySQL database.
Then the receiver gets the text from that same MySQL database, which is of course first decoded by the function base64_decode().
The only problem is with the special characters like \n \' and \t: when I get the data from the database and print it between two textarea tags, I see \n as a string, and not as actual line breaks.
In short, I need to fix this problem:
$String = 'Line 1 \n Line 2';
print '<textarea>' . $String . '</textarea>';
//The result I want
//<textarea> Line 1
//Line 2 </textarea>
The function nl2br doesn't work, because tags inside a textarea tag won't work, and also because there other characters like apostrophes.
Could anybody help me?
Thanks!
You need to enclose your string into double quotes, for special characters to be evaluated.
$String = "Line 1 \n Line 2";
print '<textarea>' . $String . '</textarea>';
That's true, but the variable actually comes out of a query. First the string is sent to my database (because it's a chat server), and then taken out of my database with a SELECT query, so the variable is already defined by another user.
I think the problem is that you have to escape '\n' before you write it into the database. Put another backslash in front of '\n' before you write your string to the database, like this: '\\n'. So it would look something like this: Line1 \\n Line2
If you change this:
$String = 'Line 1 \n Line 2';
print '<textarea>' . $String . '</textarea>';
to this:
$String = "Line 1 \n Line 2"; // double quote
print '<textarea>' . $String . '</textarea>';
... you will get the output you want.
That's true, but the variable actually comes out of a query. First the string is sent to my database (because it's a chat server), and then taken out of my database with a SELECT query, so the variable is already defined by another user.
This one also works the same as using " ... ", but maybe it helps in your case:
$string = <<<EOT
Line 1 \n Line 2
EOT;
echo '<textarea>' . $string . '</textarea>';
As the others said, your problem is the single quotes.
Which measure indicates a smooth variation of data?
I am trying to compare text and non-text regions based on the thickness of lines/strokes. Using the distance transform and some fiddling thereafter, I managed to obtain the thickness (actually half the thickness) of each stroke comprising the features in a picture.
Here's a typical result of a program run:
1.Text region
34444433343554335533553555545544455445533444444344455435553335545556665444445654444444444444444444444444455434554554455444456544444445555445555543355556665544665444535444553354434553444444444444455444445544444454444444444444444444444444455442444444554444444544444444444444554444456444554414454444444444444444444554444445543454445443444544434443344443334442133223332221
Non-text
1111112222212222222213333232111112234444411415445125544126143211123445716422457887433442222991443110103332222113111163124134444312122222222224551313122222222222243455553141432222222232111422222351515513211134161412234411743111111454181813111434555191113145520111322223334554452121204233145433467891011121311732525252524202022213137326252419192112222222335831818204233332222344315625171714334444451111788992225161619334538215151811341234258811414113111223144488242413131711332543444872416135247724113223544356152554433333332666652323151444444336675523151344443335566523881333444552222113344445514141433345555202120141114444444345201433355644454191313322333474351818134322266657342171266672415161131145657419111421316665581447891113151513135555555586745556555588551214145145335557888755141314774333455886555141011111211981417776348524111099814144444556414341181114135447434567845534444334881088891011111213141113477734444379888881414144477437254448998834733764226777753781313577776677654466665753466712124666645444551124476735456655444432446663254664411476757773464147322222777455332224237738833223378121242311333378583438869913135923222344338101013139943333115533910111111884112155339910101011111111101111111097777778855544553991010111111111111111110999999101111111110777764111113561091097543434552999989998666544436554888778755554455541444465554317777774555555544455556665555564424443356433222345222124422341111312111214411322222223222231221143334424322342222123536411441664431775446548856766655885555664444644665449876444477544227887772
So is there any statistical measure more sophisticated than standard deviation that will indicate the difference in the two datasets: one varies gradually while the second one has drastic variations?
(I included the scary numbers to illustrate what I'm attempting to quantify!)
Also, please note that the number of data points will not be the same, as I'll be comparing different regions against some experimentally determined threshold of SD (or some other measure), not regions among themselves.
Are your regions separated, with either text or non-text contents, or do you have a single list of thicknesses from which you want to extract text and non-text regions?
separate regions found by connected-component analysis, with the classification as text/non-text based on the result of the statistical measure.
The thought that comes to my mind is that you could do a wavelet transform on a chunk then look at the average energy associated with high frequency wavelets.
If you're not familiar with wavelets, the simplest to describe is the Haar wavelet. Assuming that the number of points you have sampled is 2^n, you can calculate it as follows:
Divide your data into pairs of points.
Take 1/2 of the difference. That is the coefficient of the detail wavelet.
Take the average of each pair. This gives you 2^(n-1) points. Recursively do a wavelet transform on those.
For each level of the Haar wavelet, take the average of the square of the coefficient. If your data really looks like what you've described, this statistic for the first few levels will be very different. Experiment, decide where your threshold is, and you'll probably have a pretty reliable test. (I would recommend having 3 possible answers from your test, "Text", "Not text", "unclear". Look at the "unclear" examples and then improve your test.)
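The recursive procedure above can be sketched in a few lines of Python/NumPy (a sketch under the assumption that the chunk length is a power of two; the function and variable names are mine, not from the answer):

```python
import numpy as np

def haar_levels(x):
    """Return the mean squared Haar detail coefficient at each level.

    Assumes len(x) is a power of two. At each level, the detail
    coefficients are half the pairwise differences (step 2), and the
    signal is replaced by the pairwise averages (step 3).
    """
    x = np.asarray(x, dtype=float)
    energies = []
    while len(x) >= 2:
        pairs = x.reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / 2.0  # step 2: half the difference
        x = pairs.mean(axis=1)                      # step 3: pairwise averages
        energies.append(np.mean(detail ** 2))       # average squared coefficient
    return energies

smooth = [3, 4, 4, 4, 4, 3, 3, 3]   # text-like: gradual variation
rough = [1, 9, 2, 8, 1, 9, 2, 8]    # non-text-like: drastic variation
print(haar_levels(smooth)[0] < haar_levels(rough)[0])  # True
```

As the answer suggests, the first-level energy separates the two kinds of data clearly, and a threshold can then be picked by experiment.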
I'm reading up on Haar wavelets. Will try this method along with others suggested and see which gives most suitable results.
Could you please explain what is meant by the "level" of the Haar wavelet? Is it the values I get after step 3, before the next recursive run on the points? Also, most resources on the web divide the data points by sqrt(2) rather than 2; will that make any difference in this implementation?
@AruniRC: Considering that I'm going off of memories of something I last saw in the mid-90s, feel free to go with "most resources on the web", though it will be useful for you either way. As for level: the results of your first pass through 2^n points are the first detail level. The results of the second pass through the 2^(n-1) averaged points are the second detail level. And so on.
If you are interested in measuring the smoothness, the standard deviation of the differences between adjacent thicknesses should be much smaller for text than non-text.
You can thus simply convert
34444433343554335533553555545544455445533444444344455435553335545556665444445654444444444444444444444444455434554554455444456544444445555445555543355556665544665444535444553354434553444444444444455444445544444454444444444444444444444444455442444444554444444544444444444444554444456444554414454444444444444444444554444445543454445443444544434443344443334442133223332221
into
1000(-1)000…
(1 = 4-3, 0 = 4-4, etc.). The standard deviation of this list of differences is small, for text regions (in your example, this list contains many zeros).
If you need to keep using numbers between 0 and 9 for the thickness difference between thickness t1 and thickness t2, you can perform a rescaling: round((t2-t1+9)/2).
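This answer's suggestion can be tried directly with NumPy's diff (a sketch; the sample strings below are shortened stand-ins for the question's data, and the names are mine):

```python
import numpy as np

def diff_std(thickness_string):
    """Standard deviation of the differences between adjacent thicknesses."""
    t = np.array([int(c) for c in thickness_string])
    # Optional rescaling to keep differences within 0..9, as suggested above:
    # rescaled = np.round((np.diff(t) + 9) / 2).astype(int)
    return np.diff(t).std()

text = "3444443334355433"   # gradual variation (text-like)
non_text = "19283746"       # drastic variation (non-text-like)
print(diff_std(text) < diff_std(non_text))  # True
```

A small std of adjacent differences then indicates a text region, a large one a non-text region.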
Thanks a lot. I will try all suggestions and then decide which one works best, though this looks way simpler.
How to change style of title of Content Dialog in Windows Phone 8.1
How to change style of title property of ContentDialog page in Windows Phone 8.1.
XAML:
<ContentDialog
x:Class="MyApp.View.Login.ContentDialog1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:MyApp.View.Login"
Title="DIALOG TITLE">
<StackPanel VerticalAlignment="Stretch" HorizontalAlignment="Stretch">
<TextBox Name="email" Header="Email address"/>
</StackPanel>
</ContentDialog>
Here we have Title="DIALOG TITLE"; can I change the style of the text shown in the title?
EDIT
How can I reduce the topmost empty space in the ContentDialog if no title is mentioned?
The title will show in the ContentDialog's TitleTemplate. You can change that as needed.
<ContentDialog
x:Class="MyApp.View.Login.ContentDialog1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:MyApp.View.Login"
Title="DIALOG TITLE">
<ContentDialog.TitleTemplate>
<DataTemplate>
<!-- what do you want the title to look like -->
</DataTemplate>
</ContentDialog.TitleTemplate>
<StackPanel VerticalAlignment="Stretch" HorizontalAlignment="Stretch">
<TextBox Name="email" Header="Email address"/>
</StackPanel>
</ContentDialog>
Can I change the default height of the ContentDialog from the top if the window doesn't have a title? It looks like, by default, a margin of 50 from the top is occupied by the title or the Windows default. Check the edited question.
I'm unable to add an X button at the right corner of the ContentDialog. I put a two-column Grid there, but it doesn't work. Can you help me?
I just want to mention that, in order to use the standard Title value in this case, you'd have to add the following inside the template: Text="{Binding}"
React Drag and Drop in Grid System
I was trying to implement drag and drop for an HTML grid system (row- and column-arranged divisions) using react-beautiful-dnd.
The code can be viewed here (this is forked from here).
The change I made is removing the type from Droppable, as I want columns to also be droppable in the parent container.
The problem I'm facing is that a sub-item can't be dragged upward and dropped into the parent container; it can be dragged downward without issues.
The parent container can be dragged up and down.
joining all rows of a csv file that have the same first column with python
I realise similar has been posted before 1, 2, 3
but I have tried these and they have not worked.
I have a single csv file with two columns similar to this:
james,phone1
james,phone2
james,phone3
paul,phone1
jackie,phone1
jackie,phone2
jackie,phone3
etc
I want to merge all the duplicates in column 1 using python to get something like:
james,phone1,phone2,phone3
paul,phone1
jackie,phone1,phone2,phone3
What would be the best way of doing this?
Any help would be greatly appreciated.
import csv

filename = "Filename.csv"
csvList = list(csv.reader(open(filename)))
csvDict = {}
for row in csvList:
    if row[0] in csvDict:
        csvDict[row[0]].append(row[1])
    else:
        csvDict[row[0]] = [row[1]]
print(csvDict)
use the dictionary to create the appropriate output format.
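To complete the answer, here is one way (a sketch, with sample data inlined so it runs standalone) to turn that dictionary into the merged one-line-per-name output; the variable names are mine:

```python
from collections import defaultdict

# Sample rows as they would come out of csv.reader
rows = [
    ["james", "phone1"], ["james", "phone2"], ["james", "phone3"],
    ["paul", "phone1"],
    ["jackie", "phone1"], ["jackie", "phone2"], ["jackie", "phone3"],
]

merged = defaultdict(list)
for name, phone in rows:
    merged[name].append(phone)

# One CSV line per name: name,phone1,phone2,...
for name, phones in merged.items():
    print(",".join([name] + phones))
```

This prints `james,phone1,phone2,phone3`, `paul,phone1`, and `jackie,phone1,phone2,phone3`, matching the desired output (dict insertion order is preserved in Python 3.7+).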
Glad to be helpful. Up-voting the answer and accepting the answer will be helpful for me. :)
Possible to use flash in java application?
I am writing an application in java and have a question. Is it possible for me to make buttons, etc... In flash, and make them call functions in my java application? If so, how can I do this?
Is it an application or an applet? Applets can communicate with javascript: Invoking Applet Methods From Javascript Code. And I'm pretty sure Flash can call javascript functions, so you could write a layer between the two in javascript.
If only @Sardtok posted that as an answer instead of a comment, that's exactly what I needed!
Posted as an answer. Glad to help.
If you are working on an applet, Flash can communicate with the applet via JavaScript, which has an interface for calling methods in Java Applets. It's also possible for an applet to call JavaScript functions, so it can work as a two-way wrapper. Invoking Applet Methods from JavaScript, and Invoking JavaScript from Applets.
First of all, we need to know what kind of Java application you are talking about: a web application, a client-server app (Swing, AWT, SWT, another GUI framework), a mobile app, or some other kind?
Adobe Flex Builder already does it for web/ajax applications. Your view tier is flex/flash specific while your model, controller and database tiers are still java.
If you are talking about a client/server application I would say no - you can't - unless you use some kind of flash container at your view tier.
Well, there's also WebSockets which can be used for a Client/Server system. I think Flash uses WebSockets for its networking, but I haven't really done a lot of Flash. I have tried WebSockets to hook web pages to Arduino hardware, though.
I think that a SOA-based Java solution can be called via ActionScript, just like servlets can be called via a common HTTP POST (Flash or JavaScript).
Generalizing the coordinate space of a manifold
A manifold is, essentially, something that "looks like" $\mathbb R^n$ locally. That is, a topological space $X$ such that for every $x\in X$ there is a homeomorphism $\varphi$ from an open set $U_x$ containing $x$ onto an open subset of $\mathbb R^n$.
Suppose I want to consider a possible generalization of this concept, in particular by extending which coordinate spaces I can use. A first naive generalization would be to admit any vector space $V$ instead of $\mathbb R^n$, however, any finite dimensional $V$ is isomorphic to some $\mathbb R^n$, so this is clearly not adding anything new. A possible next attempt would be to admit an arbitrary module $M$ (perhaps over a commutative ring) as the coordinate space.
My question would be: Is this concept one that's been explored? Could it produce an interesting object of study?
I'm not sure whether this could be a question of enough interest to post on MO as well / instead, or perhaps the answer is trivial.
What topology would you impose on an arbitrary module?
@AlexProvost I don't know commutative algebra well enough, but I'd assume there's no guarantee of a nice topology existing on an arbitrary module as there is on $\mathbb R^n$, is this correct?
Right. The point of using $R^n$ is that its topology is very natural and compatible with its algebraic structure. It's not clear to me how one could come up with a useful generalization the way you suggest. But do note that we can use your vector space idea and extend it to infinite dimensional vector spaces; see for instance Hilbert or Banach manifolds.
You may also be interested in reading about varieties, schemes, or locally ringed spaces.
@AlexProvost thank you, I will be checking these out! Also, if you'd like to make your comment an answer I'll accept it.
@Mr.Chip thank you, algebraic geometry is definitely on my radar! but I still need quite a bit of preliminaries to get there :)
As I said in the comments, it is not clear what topology one would put on an arbitrary module in order to do this. The point of using $\mathbb{R}^n$ is that its topology is very natural and compatible with its algebraic structure. But do note that we can use your vector space idea and extend it to infinite dimensional vector spaces; see for instance Hilbert or Banach manifolds.
Look up:
Banach manifolds
Hilbert manifolds
Fréchet manifolds
Probably there are other generalizations of the concept of manifold out there, modeled on other spaces. You could also be interested in supermanifolds and related concepts, as well as derived algebraic geometry, where spaces are in some sense modeled by chain complexes.
Trying to connect to MacBook: Password etc. doesn't work (Files -> Other Locations -> "MacBook Pro" under Networks.)
On How to share files between Ubuntu and OSX? I found these apparently easy instructions on how to connect to my Mac from Ubuntu:
In mac OS, go to System preferences > sharing and enable Personal File Sharing
In Ubuntu open DashHome and open the Files folder. (or however you know how to get to the files folder.)
In the sidebar, choose Browse Network
As long as the two machines are on the same network, your mac should be in there as a directory that you can mount within Ubuntu.
On Ubuntu 20.04 I open Files (also called Nautilus) > Other Locations, and choose MacBook Pro ... under Networks. Then a window pops up asking for Username, Domain, and Password. But no matter whether I use the name and password for the Mac or for Ubuntu, it doesn't work. And what is "Domain"? I tried "Staff" (and "WORKGROUP"). Nothing works!
PS: The Mac is a MacBook Pro (13-inch, Mid 2012) running MacOS Catalina Ver. 10.15.7. It seems to be using SMB for networking.
To connect to the MacBook from Ubuntu I had to use Terminal commands.
$ cd /mnt
/mnt$ sudo mkdir macbook
/mnt$ sudo mount -v -t cifs //macbook-pro-henrik.local/henrik /mnt/macbook -o user="Henrik"
mount.cifs kernel mount options: ip=<IP_ADDRESS>,unc=\\macbook-pro-henrik.local\henrik,user=Henrik,pass=********
/mnt$
The functionality in Ubuntu Files is VERY primitive. I would call it a waste of time... It doesn't give any error messages. It doesn't even tell you that you need to install this:
sudo apt install cifs-utils
PS: This still really doesn't work. You only get read access to the MacBook... Even after a longer dialogue with 2 helpful people here: https://discussions.apple.com/thread/252656492?page=2
I have an issue while running my controller in AngularJS using Chrome
I am battling with this code: when run in Chrome it throws the following errors, but everything works fine in Firefox.
1)Uncaught SyntaxError: Unexpected token }
2)Uncaught Error: [$injector:modulerr] http://errors.angularjs.org/1.2.28/$injector/modulerr?p0=lens_admin&p1=Erro…gleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.2.28%2Fangular.min.js%3A18%3A170)
inside my Controller:
angular.module('lens_admin.controllers', ['angularFileUpload']).
.controller('adminController', function($scope,$http,$location,$upload) {
$scope.brand_edit_submit = function(bid) {
var brand_type_editObj=new Object();
brand_type_editObj.edit_mode='brand';
brand_type_editObj.bid=bid;
brand_type_editObj.brand_type_edit=$scope.brand_type_edit;
$http.post("ajax/frame_list_update.php",{brand_type_editObj}). // first error reported here. Am I correct in passing an object to the server side?
success(function(data, status, headers, config) {
alert(data);
$scope.brand_type_tables();
$scope.lens_brand_table();
$('.modal').modal('hide');
}).
error(function(data, status, headers, config) {
alert("Please Try Again..!");
});
}
});
I have included the files for the "angularFileUpload" module in my "lens_admin.controllers". What is wrong with my code? This issue occurs only in Chrome. Can anyone give me some ideas?
Thanks in advance.
1) "Uncaught SyntaxError: Unexpected token }" will show a line number; what line is it?
Uncaught SyntaxError: Unexpected token } controller.js:424
$http.post("ajax/frame_list_update.php",{brand_type_editObj}). // this is line 424 in my controller.js
This is the error line:
$http.post("ajax/frame_list_update.php",{brand_type_editObj})
This is because
{brand_type_editObj}
Isn't a proper object.
It needs to be
{ someName: brand_type_editObj }
Where I have introduced a key someName. JavaScript objects are key/value pairs. So there always needs to be a Key and there always needs to be a Value.
2)Uncaught Error: [$injector:modulerr] http://errors.angularjs.org/1.2.28/$injector/modulerr?p0=lens_admin&p1=Erro…gleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.2.28%2Fangular.min.js%3A18%3A170)
This occurred because it probably couldn't find ['angularFileUpload']. But without seeing the main script it's hard to know for sure.
Here is the corrected code. A few errors were found:
First: a double "." between module and controller.
Second: no key is assigned to the object in the $http call.
angular.module('lens_admin.controllers', ['angularFileUpload'])
.controller('adminController', function($scope, $http, $location, $upload) {
$scope.brand_edit_submit = function(bid) {
var brand_type_editObj = new Object();
brand_type_editObj.edit_mode = 'brand';
brand_type_editObj.bid = bid;
brand_type_editObj.brand_type_edit = $scope.brand_type_edit;
$http.post("ajax/frame_list_update.php",{"data":brand_type_editObj}).
success(function(data, status, headers, config) {
alert(data);
$scope.brand_type_tables();
$scope.lens_brand_table();
$('.modal').modal('hide');
}).
error(function(data, status, headers, config) {
alert("Please Try Again..!");
});
}
});
Why did the module error occur? Uncaught Error: [$injector:modulerr] http://errors.angularjs.org/1.2.28/$injector/modulerr?p0=lens_admin&p1=Erro…gleapis.com%2Fajax%2Flibs%2Fangularjs%2F1.2.28%2Fangular.min.js%3A18%3A170)
Have you added the files?
I got it. This issue was caused by the first error; I made this mistake in some places, so this error occurred. Now it's working fine. Thank you so much.
In $http.post("ajax/frame_list_update.php",{data:brand_type_editObj}) I am passing the object, right? In my previous code I didn't build the object properly, like this: $http.post("ajax/frame_list_update.php",{brand_type_editObj}). On the server side I read this object like this: @$edit_mode = $request->brand_type_editObj->edit_mode; Now I didn't change anything there, but it works fine. Now I pass the object as {data:brand_type_editObj}, but I still fetch it on the server side as @$edit_mode = $request->brand_type_editObj->edit_mode; can I call it like this?
I didn't get you can you explain >
No, you should fetch the "data" object there.
Sorry, I had a cookie problem. Yes, you are correct. Thanks again.
Numpy: how to insert an (N,) dimensional vector to an (N, M, D) dimensional array as new elements along the D axis? (N, M, D+1)
a is an array of shape (N, M, D) == (20, 4096, 6).
b is an array of shape (N,) == (20,).
I would like to insert the values of b to a such that each value in b is appended element-wise to the D dim in a (7th element in a).
So c would be such an array, of shape (20, 4096, 7), where c[i,:,-1] == b[i] for all i, and c[...,:-1] == a.
I know you could just make a new array and add the values accordingly eg:
N, M, D = a.shape # (20, 4096, 6)
c = np.zeros((N, M, D+1))
c[...,:-1] = a
for i in range(N):
c[i,:,-1] = b[i]
But was wondering if one of the numpy wizards here had a more slick way of doing this with numpy ops and no intermediate arrays.
Possible duplicate of np.concatenate a ND tensor/array with a 1D array
I don't see any intermediate arrays in your solution. The desired result is larger than the original, so it has to be a new array. Creating c and then filling it with a and b is a perfectly good solution. You just need to streamline the b assignment.
Replicate b along the second axis after extending it to 3D and then concatenate with a along the last axis -
b_rep = np.repeat(b[:,None,None],a.shape[1],axis=1)
out = np.concatenate((a, b_rep), axis=-1)
Alternatively, we can use np.broadcast_to to create the replicated version :
b_rep = np.broadcast_to(b[:,None,None], (len(b), a.shape[1], 1))
Thanks, this worked for me. I guess there is no way to just directly append the vector element-wise along some axis? You always have to broadcast?
@Alnitak Nope, no direct way there.
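A runnable sketch of the repeat-and-concatenate approach on small arrays (the shapes here are chosen just for illustration):

```python
import numpy as np

N, M, D = 2, 3, 4
a = np.arange(N * M * D).reshape(N, M, D)
b = np.array([100, 200])

# Replicate b along the M axis after extending it to 3D...
b_rep = np.repeat(b[:, None, None], a.shape[1], axis=1)  # shape (N, M, 1)
# ...then concatenate along the last axis
c = np.concatenate((a, b_rep), axis=-1)                  # shape (N, M, D + 1)

print(c.shape)                      # (2, 3, 5)
print(np.all(c[..., :-1] == a))     # True
print(np.all(c[0, :, -1] == b[0]))  # True
```

The result satisfies exactly the conditions in the question: c[i,:,-1] == b[i] for all i, and c[...,:-1] == a.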
Here is another one-liner
np.r_['2,3,0', a, np.broadcast_to(b, (a.T.shape[1:])).T]
Also, I'd like to mention that your original method is actually close to the (or at least a) recommended way. Just use empty instead of zeros and broadcasting instead of the loop:
res = np.empty((N,M,D+1), np.promote_types(a.dtype, b.dtype))
res[..., :-1], res[..., -1] = a, b[:, None]
...
And - just for fun - one more, which I expressly do not recommend. Do not use this!
np.where(np.arange(D+1)<D, np.lib.stride_tricks.as_strided(a, (N,M,D+1), a.strides), b[:, None, None])
Putting SQL data into HTML table
I'm trying to get the data from my MySQL database and put it into an HTML table.
After searching a lot on the internet, I couldn't find code that worked for me.
Currently, I have this code:
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title></title>
</head>
<body>
<table>
<thead>
<tr>
<td>Naam</td>
<td>Gemeente</td>
<td>Datum</td>
</tr>
</thead>
<tbody>
<?php
$db_select = mysql_select_db($dbname,$db);
if (!db_select) {
die("Database selection also failed miserably: " . mysql_error());
}
mysql_select_db("databaseiheko");
$results = mysql_query("SELECT NaamFuif, GemeenteFuif, DatumFuif FROM tblfuiven");
while($row = mysql_fetch_array($results)) {
?>
<tr>
<td><?php echo $row['NaamFuif']?></td>
<td><?php echo $row['GemeenteFuif']?></td>
<td><?php echo &row['DatumFuif']?></td>
</tr>
<?php
}
?>
</tbody>
</table>
</body>
</html>
The only thing that I get is the first row of my table (Naam-Gemeente-Datum).
Am I doing something wrong or did I forgot something?
Use mysql_error() to check whether there were any problems with your query. 2. mysql_ functions are deprecated. You should switch to PDO or mysqli.
What you are doing with the db is not really safe; try to include the database connection. 2. You have a typo at DatumFuif: &row. 3. Are you sure the db is filled?
You are using an obsolete database API and should use a modern replacement.
<?php echo &row['DatumFuif']?> You may wish to use a $ sign instead.
@Sirko I don't have any errors after including mysql_error(). I'm looking to switch to PDO.
Replace !db_select with !$db_select. Replace &row with $row.
@JonathanRomer How can I include the database? The typo is fixed, but that didn't solve my problem. And I'm sure my db is filled.
I think it's better to check out some tutorials that give you a step-by-step explanation, because if I give you the answer, I'm sure you will run into another problem in 5 minutes.
Also, to see these syntax errors in the browser, activate error_reporting(E_ALL); in the development version of the code (remove it in the production version, along with your die statement, which does not help the site user).
Where are $dbname and $db defined?
What does mysql_num_rows($results) tell you?
I have added a working pastebin example as a comment to my answer.
First of all, the most important thing to keep in mind is:
You are using deprecated and unsecure code
The mysql_ functions are strongly discouraged, for various reasons:
Are deprecated and will be removed in future versions of PHP,
Are insecure leading to possible SQL injections,
Lack many features present in more current versions of PHP
See the linked question for much more in-depth explanations.
Now, to the code itself:
You are not using mysql_connect to connect to the server
You should use mysql_connect to specify the server, the username and the password that will be used to access the data in the database. From your code, it seems that it was supposed to be present, because there's a $db variable used in the mysql_select_db call, but it is never properly initialized.
You should use mysql_connect in a way similar to this:
$db = mysql_connect('localhost', $user, $password);
if (!$db) {
die('Not connected : ' . mysql_error());
}
(Don't forget to set your username and password!)
You are using mysql_select_db twice in a row:
$db_select = mysql_select_db($dbname,$db);
if (!db_select) {
die("Database selection also failed miserably: " . mysql_error());
}
followed by
mysql_select_db("databaseiheko");
Note the $dbname and $db variables, you don't have them on your code, this function won't work like this.
The second mysql_select_db overwrites the first, but you don't specify a server connection to be used.
You should use the first version, but you should use mysql_connect before it.
You have typos in your code
if (!db_select) { should be if (!$db_select) {
echo &row['DatumFuif'] should be echo $row['DatumFuif']
mysql_ functions are deprecated, but if you want to use them, I suggest these corrections.
You can correct your code this way:
The mysql_connect is needed:
<?php
//connect to your database
mysql_connect("serverIpAddress","userName","password");
//specify database
mysql_select_db("yourDatabaseName") or die;
//Build SQL Query
$query = "select * from tblfuiven";
$queryResult=mysql_query($query);
$numrows=mysql_num_rows($queryResult);
$numrows will contain the number of records found in the db.
add an echo for the number of rows and let us know if the number of rows is still one.
Then use mysql_fetch_assoc to get rows:
while($row = mysql_fetch_assoc($queryResult)) {
?>
<tr>
<td><?php echo $row['NaamFuif']?></td>
<td><?php echo $row['GemeenteFuif']?></td>
<td><?php echo $row['DatumFuif']?></td>
</tr>
<?php
}
?>
EDIT: You can test the code and let us know the number of rows that you obtain, using this code (write your real user name, password and db name):
<?php
mysql_connect("localhost","root","root");
mysql_select_db("databaseName") or die;
$queryResult = mysql_query("SELECT * FROM tblfuiven");
$numrows = mysql_num_rows($queryResult);
while($row = mysql_fetch_assoc($queryResult)) {
?>
<tr>
<td><?php echo $numrows ?></td>
<td><?php echo $row['NaamFuif']?></td>
<td><?php echo $row['GemeenteFuif']?></td>
<td><?php echo $row['DatumFuif']?></td>
</tr>
<?php
}
?>
After adding your code, it still doesn't work.
Current code
--> http://pastebin.com/p7J4rZek
@Matt in my answer there is a question. Could i know the value of $numrows ?
@Matt in mysql_connect you used $dbhost etc. within quotes. Write the string directly within quotes, for example "localhost" instead of "$dbhost"
When I echo something, I don't get any output (example: echo 'Hello World'). Even after directly writing the string, I get the same results.
@Matt Then probably your php.ini settings are different from the usual
@Matt, add the row near the other 3 td and watch the output on the webpage.
@Matt use this http://pastebin.com/JmuecfGh and write the correct user name, password and database name. Why do you use variables that you didn't declare?
Your problem is in the php echo statement. It is missing ";".
SWIFT: Map - Location issue
I'm new to Swift and am writing an app to show my locally saved locations on a map. In one ViewController I save the locations:
func locationManager(manager: CLLocationManager!,
didUpdateLocations locations: [AnyObject]!)
{
var latestLocation: AnyObject = locations[locations.count - 1]
latitudeLabel.text = String(format: "%.4f",
latestLocation.coordinate.latitude)
longitudeLabel.text = String(format: "%.4f",
latestLocation.coordinate.longitude)
In another ViewController I try to show it on the map, but since I had to import MapKit in the second one, an error appeared in the first ViewController.
import MapKit
class MapViewController: UIViewController {
@IBOutlet weak var mapView: MKMapView!
// set initial location in Honolulu
let initialLocation = CLLocation(latitude: 21.282778, longitude: -157.829444)
let regionRadius: CLLocationDistance = 1000
func centerMapOnLocation(location: CLLocation) {
let coordinateRegion = MKCoordinateRegionMakeWithDistance(location.coordinate,
regionRadius * 2.0, regionRadius * 2.0)
mapView.setRegion(coordinateRegion, animated: true)
}
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
}
I know that the coordinates are no longer supported by MapKit this way and that instead of AnyObject I should use something else, but I don't know exactly how I should change the locationManager code. :(
Hope someone can help.
what is the error you are getting ?
Thank you for your question. Meanwhile I solved the problem.
Solr dataimporthandler use storedProcedure in deltaImportQuery
I was wondering if it's possible to call a stored procedure in the deltaImportQuery.
This is what I'm trying to do:
<entity name="entity1" transformer="RegexTransformer" pk="id"
query="SELECT * FROM table1
INNER JOIN tabl2 ON table2.tbl1Id = table1.id"
deltaImportQuery="exec populatetable2 ${dih.delta.id}"
deltaQuery="select id from table1 where dtmodified > '${dih.last_index_time}'">
</entity>
ALTER PROCEDURE populatetable2 (@col1 int)
AS
BEGIN
DELETE FROM table2 WHERE tbl1Id = @col1
INSERT INTO table2 (col1,col2) Values(1,2)
SELECT * FROM table2
END
In my stored procedure I am deleting n rows and inserting them back, and then I finally run a select statement to get some data back from the delta import query.
Can anyone tell me if this is possible in solr or not?
Thanks
Short answer would be yes.
Have you tried / got any errors? If yes, please take a look at : calling stored procedure from solr
You may want to add SET NOCOUNT ON; at the beginning of the stored procedure.
Close two form by one click?
I'm working on a project with a sign-in feature.
When I run the project there is a form (form1) that runs the sign in.
After I click on the login button it builds another form (form2), which is the main form of my program,
and hides the first form (form1).
The problem is that when I press the X button in form2, it closes, but form1 is still running.
I tried to close form1 instead of hiding it, but this closes form2 before launching it.
In form1:
this.Hide();
Form2 x = new Form2();
x.Show();
You could subscribe to the child forms FormClosed event and use that to call Close on the parent form.
x.FormClosed += new FormClosedEventHandler(x_FormClosed);
void x_FormClosed(object sender, FormClosedEventArgs e)
{
this.Close();
}
I think you have your forms around the wrong way.
Form1 should be your app and should show Form2 as a dialog when it first loads; then when it closes, you can process the result and decide whether to continue or close the application.
Something like:
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
Load += new EventHandler(Form1_Load);
}
void Form1_Load(object sender, EventArgs e)
{
Form2 myDialog = new Form2();
if (myDialog.ShowDialog() == System.Windows.Forms.DialogResult.Cancel)
{
// failed login
// exit application
}
// all good, continue
}
}
try this, in the log in button if access is granted
private void logInBtn_Click(object sender, EventArgs e)
{
Form2 frm = new Form2();
frm.ShowDialog();
this.Hide();
}
then in form2 if you want to exit
private void exitBtn_Click(object sender, EventArgs e)
{
Application.Exit();
}
hope this helps.
You go to your form2 then in the event of the form look for FormClosed.
Put this code in your eventhandler:
private void Form2_FormClosed(object sender, FormClosedEventArgs e)
{
Application.Exit();
}
FormClosed is the event raised whenever the user closes the form. So when the form closes, run code that will exit your application, that is, Application.Exit();
Hope this will work.
Shortest path with a twist
I have n vertices and m undirected weighted edges between them (weights representing minutes). Each vertex contains a number of minutes required to drink a coffee at that vertex.
I want to determine the shortest amount of time (in minutes) necessary to get from vertex v to vertex w, but with the additional constraint that I have to drink my coffee at exactly one of the vertices on my way from v to w.
Example:
(the number in a vertex is the amount of minutes required to drink a coffee there; the weights on the edges represent the amount of minutes necessary to travel that edge)
Get from v to w and drink a coffee on your way; output the minimal necessary time (the output should be 30).
My current approach is to find the shortest path with Dijkstra (summing up the weights of all edges on that path) and then add the value of the vertex with the lowest coffee time on that path to my result, in order to get the total amount of time necessary to get from v to w.
My approach doesn't work; here is an example where it fails (the result of my approach is 12 minutes, the actual result should be 6 minutes):
How can I determine the shortest amount of time from vertex v to w with the constraint that I need to drink a coffee on my path?
This seems like a question for https://math.stackexchange.com
It's a question I received during a programming interview.
It's best to state that within the question but I still believe that this would be better on the math exchange.
Is the vertex with "9" even connected to the graph?
The standard way to solve this problem is:
make 2 copies of your graph -- the need_coffee version and the had_coffee version.
Connect each need_coffee node with the corresponding had_coffee node, with an edge cost equal to the cost of drinking coffee at that node.
Use Dijkstra's algorithm to find the shortest path from V_need_coffee to W_had_coffee
Does this problem have a name? Where can I learn more about it and maybe find a more detailed description of this standard way of solving it?
It's not just this problem, but lots of similar problems where you have to find a shortest path along with other criteria. I've heard the technique called 'graph layering', but I don't know if there's a standard name for it. Here's a good talk about it: https://www.youtube.com/watch?v=OQ5jsbhAv_M&feature=youtu.be&t=47m7s
Your algorithms solutions are always so insightful
This solution is elegant, straightforward and works for me, I just don't understand the connection between your approach for solving this problem and the video you referenced?
The video shows using the same technique for "shortest path with <= N hops". It also works for shortest path with alternating red+blue edges, shortest path with at most N special edges, etc., etc.
Thanks, I learned a lot from your solution. Would upvote twice if I could.
I would try to write an A* algorithm to solve this. When you expand a node, you will get two children for every outgoing vertex; one where you drink the coffee and one where you don't. If you precondition your algorithm with a run of Dijkstra's (so you already have precomputed shortest paths), then you could inform the A* search's heuristic with the Dijkstra's shortest path + minimum time to drink a coffee (or + 0 if coffee already drunk).
The A* search terminates (you've reached your goal) when you have not only arrived at the destination node, but have also drunk your coffee.
Example search for second scenario:
Want: A --> C
A(10) -- 1 -- B(10) -- 1 -- C(10)
\ /
\ /
2 -------- D(2) ------- 2
Expand A
A*(cost so far: 10, heuristic: 2) total est cost: 12
B (cost so far: 1, heuristic: 1 + 2) total est cost: 3
B*(cost so far: 11, heuristic: 1) total est cost: 12
D (cost so far: 2, heuristic: 2 + 2) total est cost: 6
D*(cost so far: 14, heuristic: 2) total est cost: 16
Expand B
A*(cost so far: 12, heuristic: 2) total est cost: 14
B*(cost so far: 11, heuristic: 1) total est cost: 12
C(cost so far: 2, heuristic: 2) total est cost: 4
C*(cost so far: 12, heuristic: 0) total est cost: 12
Expand C
B*(cost so far: 13, heuristic: 1) total est cost: 14
C*(cost so far: 12, heuristic: 0) total est cost: 12
Expand D
A* (cost so far: 14, heuristic: 2) total est cost: 16
D* (cost so far: 4, heuristic: 2) total est cost: 6
C (cost so far: 4, heuristic: 0 + 2) total est cost: 6
C* (cost so far: 6, heuristic: 0) total est cost: 6
Expand C*
goal reached. total cost: 6
Key:
* = Coffee from parent was drunk
So you can see that what this algorithm will do is first try to go down Dijkstra's shortest path (never drinking the coffee). And then when it reaches the end it will see a physical goal state, but with a need to still drink a coffee. When it expands this physical goal state to drink coffee, it will see that the cost to arrive is suboptimal, so it continues its search from another branch and proceeds.
Note that in the above, A and A* are different nodes, so in some way you can revisit a parent node (but only if the coffee drinking state is different).
This is to address a graph like this:
Want A-->B
A(20) -- 1 -- B(20)
\
2
\
C(1)
Where it would make sense to go from A->C->C*->A*->B*
I'm not sure yet if we need to distinguish "coffee drank" states by which node we drank the coffee at, but I'm leaning towards no.
Interesting approach, looks fine (I can't come up with a counterexample).
One way to do it is as following:
Compute the shortest path from u to all the other vertices and call it p(u,x)
Compute the shortest path from all the vertices to v and call it p(x,v)
loop over all vertices and find the minimum of the value (p(u,x)+coffee(x)+p(x,v))
Doing so will lead to an algorithm with the same time complexity as Dijkstra's (if you use Dijkstra's algorithm in steps 1 and 2).
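A sketch of these three steps in Python (the function names are mine; since the graph is undirected, p(x,v) can be obtained by running Dijkstra from v):

```python
import heapq

def dijkstra_all(graph, src):
    """Shortest distance from src to every reachable node.
    graph: {node: [(neighbor, weight), ...]} with undirected edges listed both ways."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale entry
        for nb, w in graph.get(node, []):
            if d + w < dist.get(nb, float("inf")):
                dist[nb] = d + w
                heapq.heappush(pq, (d + w, nb))
    return dist

def coffee_path(graph, coffee, u, v):
    """min over all x of p(u,x) + coffee(x) + p(x,v) -- two Dijkstra runs total."""
    from_u = dijkstra_all(graph, u)   # step 1: p(u, x)
    to_v = dijkstra_all(graph, v)     # step 2: p(x, v), valid because edges are undirected
    return min(from_u[x] + coffee[x] + to_v[x]
               for x in coffee if x in from_u and x in to_v)  # step 3
```

On the same A/B/C/D example as above this picks x = D and returns 2 + 2 + 2 = 6.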
You approach runs Dijkstra (O(E * logV)) 2 times for each node, shouldn't the complexity be O(V * E * logV)?
Dijkstra's algorithm is able to compute the shortest path from one node to all the other ones, see the wikipage for example
to be more precise, you can have a look at https://en.wikipedia.org/wiki/Shortest_path_problem#Single-source_shortest_paths
So in step 1. your algorithm runs Dijkstra once (determines the shortest path from u to all other vertices) and in step 2. you run Dijkstra to determine the shortest path from v to all other vertices. In step 3. you iterate through all nodes and calculate p(u,x)+coffee(x)+p(x,v), save the minimum and output it at the end. Overall this runs Dijkstra 2 times and iterates through all the nodes once. Did I get that correctly?
Yes that's exactly it !
Create start and endtime columns based on multiple conditions in R (dplyr, lubridate)
I have a dataset, df
Read Box ID Time
T out 10/1/2019 9:00:01 AM
T out 10/1/2019 9:00:02 AM
T out 10/1/2019 9:00:03 AM
T out 10/1/2019 9:02:59 AM
T out 10/1/2019 9:03:00 AM
F 10/1/2019 9:05:00 AM
T out 10/1/2019 9:06:00 AM
T out 10/1/2019 9:06:02 AM
T in 10/1/2019 9:07:00 AM
T in 10/1/2019 9:07:02 AM
T out 10/1/2019 9:07:04 AM
T out 10/1/2019 9:07:05 AM
T out 10/1/2019 9:07:06 AM
hello 10/1/2019 9:07:08 AM
Based on certain conditions within this dataset, I would like to create a startime column and an endtime column.
I would like to create a 'starttime' when the following occurs: Read == "T", Box == "out" and ID == ""
When the first instance of this condition occurs, a starttime will be generated. For example for this dataset, the starttime will be 10/1/2019 9:00:01 AM since this is where we see the desired conditions occurs first (Read = T, Box = out and ID = "" )
However, the moment any one of these conditions is no longer true, an endtime will be created. So the first endtime would occur right before row 6, where the time is 10/1/2019 9:03:00 AM. My ultimate goal is to then create a duration column from this.
This is my desired output:
starttime endtime duration
10/01/2019 9:00:01 AM 10/01/2019 9:03:00 AM 179 secs
10/1/2019 9:06:00 AM 10/1/2019 9:06:02 AM 2 secs
10/1/2019 9:07:04 AM 10/1/2019 9:07:06 AM 2 secs
dput:
structure(list(Read = structure(c(3L, 3L, 3L, 3L, 3L, 2L, 3L,
3L, 3L, 3L, 4L, 4L, 3L, 1L), .Label = c("", "F", "T", "T "), class = "factor"),
Box = structure(c(3L, 3L, 3L, 3L, 3L, 1L, 3L, 3L, 2L, 2L,
3L, 3L, 3L, 1L), .Label = c("", "in", "out"), class = "factor"),
ID = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 2L), .Label = c("", "hello"), class = "factor"),
Time = structure(1:14, .Label = c("10/1/2019 9:00:01 AM",
"10/1/2019 9:00:02 AM", "10/1/2019 9:00:03 AM", "10/1/2019 9:02:59 AM",
"10/1/2019 9:03:00 AM", "10/1/2019 9:05:00 AM", "10/1/2019 9:06:00 AM",
"10/1/2019 9:06:02 AM", "10/1/2019 9:07:00 AM", "10/1/2019 9:07:02 AM",
"10/1/2019 9:07:04 AM", "10/1/2019 9:07:05 AM", "10/1/2019 9:07:06 AM",
"10/1/2019 9:07:08 AM"), class = "factor")), class = "data.frame", row.names = c(NA,
-14L))
I think overall I would have to create a loop. I believe I have the thought process correct, but I'm unsure how to formulate the code. This is what I am trying:
df2 <- mutate(df,
Date = lubridate::mdy_hms(Date))
for ( i in 2:nrow(df2))
{
if(df2$Read[[i]] == 'T')
}
I think this may be a start (just placing my conditions within the loop, I am not sure how to complete this)
Any suggestion is appreciated.
You can do this without loop. Using dplyr since it is easy to do multiple things using pipes.
We first convert Time column to POSIXct class, create a cond column which gives logical values based on the conditions we want to check, create a column to create groups using cumulative sum of cond column. Keep only the rows which satisfies the condition and get first and last value of Time along with the difference in between them for each group.
library(dplyr)
df %>%
mutate(Time = lubridate::mdy_hms(Time),
cond = Read == "T" & Box == "out" & ID == "",
grp = cumsum(!cond)) %>%
filter(cond) %>%
group_by(grp) %>%
summarise(starttime = first(Time),
endtime = last(Time),
duration = difftime(endtime, starttime, units = "secs")) %>%
select(-grp)
# A tibble: 3 x 3
# starttime endtime duration
# <dttm> <dttm> <drtn>
#1 2019-10-01 09:00:01 2019-10-01 09:03:00 179 secs
#2 2019-10-01 09:06:00 2019-10-01 09:06:02 2 secs
#3 2019-10-01 09:07:04 2019-10-01 09:07:06 2 secs
data
I have cleaned up your data a bit and used this as df.
df <- structure(list(Read = c("T", "T", "T", "T", "T", "F", "T", "T",
"T", "T", "T", "T", "T", ""), Box = c("out", "out", "out", "out",
"out", "", "out", "out", "in", "in", "out", "out", "out", "hello"
), ID = c("", "", "", "", "", "", "", "", "", "", "", "", "",
""), Time = c("10/1/2019 9:00:01 AM", "10/1/2019 9:00:02 AM",
"10/1/2019 9:00:03 AM", "10/1/2019 9:02:59 AM", "10/1/2019 9:03:00 AM",
"10/1/2019 9:05:00 AM", "10/1/2019 9:06:00 AM", "10/1/2019 9:06:02 AM",
"10/1/2019 9:07:00 AM", "10/1/2019 9:07:02 AM", "10/1/2019 9:07:04 AM",
"10/1/2019 9:07:05 AM", "10/1/2019 9:07:06 AM", "10/1/2019 9:07:08 AM"
)), row.names = c(NA, -14L), class = "data.frame")
This works fine! So for my understanding, whenever you wish to put a condition within the code using dplyr, you can utilize the 'cond' function?
cond is just a placeholder column here which tells us if the condition we are checking for is satisfied or not for that row. If you need to add more conditions you can add there appending them with &.
Hello, @Ronak, can I add this to the code to filter a certain value? cond = Subject == "^RE|FWD|FW", ignore.case = TRUE as well as a different scenario of cond = Subject !== "^RE|FWD|FW", ignore.case = TRUE
If you are looking for a pattern, you cannot use == or !=, use grepl.
Bash sorting an array by two keys
I need to sort the following array by two keys: score, then alphabetically.
For example:
arr=(Hawrd 60 James 75 Jacob 60 Leonard 75)
will become :
sorted=(Hawrd 60 Jacob 60 James 75 Leonard 75)
*actually I don't need the sorted array, just need to print it (in a format of name and a score). Thanks!
*I read about the sort command but I don't see how I can sort by two keys using that command.
EDIT: Sorry if it wasn't clear enough, but I meant that each person has their own score: Leonard has 75, Jacob has 60, and at the end of the process each person will still have the same score.
Sorry if I am being dense, but I can't tell from your example whether you want them sorted first by score then by name, or first by name then by score because the output is sorted by both name and score - isn't it? Maybe I am going mad.
First by score and then by name.
Here is one method:
arr=( Hawrd 60 James 75 Jacob 60 Leonard 75 )
#first we sort the array like this: 60 60 75 75 Hawrd James Jacob Leonard
OLDIFS=$IFS
IFS=$'\n' arr_sorted=($(sort <<<"${arr[*]}"))
IFS=$OLDIFS
#second, we split the sorted array in two: numbers and names
cnt="${#arr_sorted[@]}"
let cnt1="$cnt/2"
let cnt2="$cnt - $cnt1"
nr_sorted=( "${arr_sorted[@]:0:$cnt1}" )
names_sorted=( "${arr_sorted[@]:$cnt1:$cnt2}" )
#and third, we combine the new arrays(names ang numbers) element by element
for ((i=0;i<${#names_sorted[@]};i++)); do sorted+=(${names_sorted[i]} ${nr_sorted[i]});done
#now the array 'sorted' contain exactly what you wished; let's print it
echo "${sorted[*]}"
Thanks! I may be wrong but I think your solution doesn't keep the correct order, I mean this array is like a Key,Value: Leonard has the score of 75, Jacob got 60 and etc. When I want to sort it, each person will still have his own score but in the right place.
@Michael That's exactly what's happening. If you run the above lines you will see that the output is Hawrd 60 Jacob 60 James 75 Leonard 75 ...exactly as you asked. Just take your time and test it: copy the above lines and paste them in terminal, nothing more.
I'm not sure if that answers the question, but with this few details provided I'll try
Here is one solution :
[ ~]$ cat test.sh
#!/bin/bash
declare -a array
declare -a ageArray
array=("Hawrd 60" "James 75" "Jacob 60" "Leonard 75")
size=${#array[@]}
for (( i=0 ; i < $size ; i++ )); do
age=$(echo "${array[$i]}"|egrep -o "[0-9]*")
ageArray[$i]="${age}_${array[$i]}"
done
# sorting by age and by name (with ascii comparison)
for (( i=0 ; i < $size ; i++ )); do
for (( j=$i+1 ; j < $size ; j++ )); do
if [[ ${ageArray[$j]} < ${ageArray[$i]} ]]; then
tmp="${array[$i]}"
ageTmp="${ageArray[$i]}"
array[$i]="${array[$j]}"
ageArray[$i]="${ageArray[$j]}"
array[$j]="$tmp"
ageArray[$j]="$ageTmp"
fi
done
done
#printing result
for item in "${array[@]}"; do
echo "$item"
done
[ ~]$ ./test.sh
Hawrd 60
Jacob 60
James 75
Leonard 75
If you don't mind Perl, you can do it like this:
#!/bin/bash
arr=(Zigbee 9 Hawrd 60 Apple 99 James 75 Jacob 60 Leonard 75)
echo -n ${arr[@]} | perl -040 -nE 'if(defined($a)){push(@l,{Name=>$a,Score=>$_});undef $a}else{$a=$_};END{foreach(sort {$$a{Score} <=> $$b{Score} or $$a{Name} cmp $$b{Name}} @l){say "$$_{Name} $$_{Score}"}}'
The elements of the array are echoed and each one is read at a time by making Perl's record separator the space (-040). One name is read into $a, and then on the next read, as $a is defined, the current value ($_) is taken as the score. The name and score are then pushed onto the end of an array of hashes. At the end (END), the array is sorted by score and name, and the names and scores are printed out.
Output:
Zigbee 9
Hawrd 60
Jacob 60
James 75
Leonard 75
Apple 99
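As a footnote to the answers above: if each name/score pair can be printed on its own line, sort alone handles both keys — numeric on field 2, then lexical on field 1. A minimal sketch (assumes names contain no spaces):

```shell
arr=(Hawrd 60 James 75 Jacob 60 Leonard 75)
# print the pairs one per line, then sort by score (numeric), then by name
printf '%s %s\n' "${arr[@]}" | sort -k2,2n -k1,1
```

This prints Hawrd 60, Jacob 60, James 75, Leonard 75, one pair per line.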
Should I use a plural when describing a state (e.g., Multiple Product(s) Mode)?
Let’s say that a product has two modes of operation: single and multiple. In the user manual for the product, the single mode has the header Single Product Mode.
What should be the header for multiple mode?
Multiple Products Mode
or
Multiple Product Mode
That is, should the product(s) be singular or plural? And what is the rule that governs this usage? Is it that when multiple products is used like an adjective, then it should not be plural?
I have several PCs, all configured to operate in single user mode. I don't know what single users mode would mean, if anyone were to use that form (which so far as I know, they don't).
This question appears to be off-topic because it is about choosing programming identifiers, which is specifically off-topic according to our help center.
but can't this be in a user manual? For example, "our game console has a single user mode and a multiple user(s) mode?"
@動靜能量 Yes, we always put the second word in the singular: single-user mode, multi(ple)-user mode, single-mouse mode, multiple-mouse mode, one-foot measurement, six-foot measurement, one-horse town, ten-horse town. We have a question or three about that here somewhere or other.
If it's for a user manual, why are you using 'CamelCase' format as if it's the NameOfAComputerVariable?
I am asking for "multiple product mode" also, but I just put it together in one word just in case there might be a different rule, such as used as "multiple-product-mode"
Actually, this question could be clearly valid, if I am asking about, a user manual, say for a thermostat, which describes the thermostat having a Single Air-conditioner Mode, and a Multiple Air-conditioners Mode. Why do some people think it has to be related to programming and close this question down?
Why has this question been downvoted twice? As per http://english.stackexchange.com/help/on-topic, it's in scope (word choice, grammar), not?
that's because when I first wrote my question, not only did I mention "Single Product Mode", but I also mentioned "SingleProductMode" if used as a variable name in programming. Then people say "programming is not English"
In English, where a noun modifies another noun, it hardly ever has plural inflection, even where it has plural meaning, eg.:
hatstand = a stand for hats
flower garden = a garden for flowers.
There are exceptions, but they are rare. (There is also a different construction where the qualifying noun is put in the possessive, and in that case it may be singular or plural: man's suit, but children's home)
So for your case in the user manual (or users' manual), "multiple user mode" is much more natural. In your code, you can call it anything you like.
Yes, so in a user manual, it could involve a switch that you slide to the left to set "Single User Mode" and slide to the right for "Multiple User Mode". So it is a perfectly acceptable question, and I just don't know why people are so closed-minded that they have to close the question.
I think because Variable names need not follow any kind of English grammar, so questions about variable names are simple not relevant. Text in a user manual is different, but from your question it looked as if you were just talking about variable names.
you said "In your code, you can call it anything you like"... so that's the thing, the manager actually asked me to rename the variable. I think it could make sense to follow the general English usage rules when naming things
How can post images using slack-api in gallery view instead of slack formatting them vertically?
So my question is actually self-explanatory: if you use the Slack client and post a message with 1 or more images, Slack displays them in a gallery view, but when you try to achieve this using the API, it doesn't seem to work; Slack displays them vertically, which sometimes takes up the entire viewport of a user and gets annoying.
Here's an example of what I mean:
this is how slack displays the images when posted using the client itself.
and this is how slack displays it when posted through api
So how can I achieve the former but using the API?
Can you provide details of the API you are using? Are you attaching the image or are you using an image URL from a publicly accessible link? If you are using a link, you can make use of Block Kit to format your message.
@SuyashGaur I am using an external image link and attaching it inside a slack block, it still doesn't work the way i want
Unfortunately, the feature that you request is only available with slack client.
Slack does not allow this feature by block kit nor by using any API.
You can achieve the desired behaviour by putting (hidden) links to the uploaded images into your post message. For example:
lorem ipsum <https://your.image1.url| ><https://your.image2.url| >
Only downside: Slack is going to mark the text as edited, since those links are removed and replaced by the gallery.
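A small helper to build such a message text (a sketch of my own; the function name is hypothetical, and the blank label after the pipe is just a space, as in the example above). The resulting string would then be passed as the message text, e.g. to chat.postMessage:

```python
def with_hidden_gallery(text, image_urls):
    """Append Slack-formatted links with a blank label so the client
    replaces them with an image gallery. Assumes publicly reachable URLs."""
    links = "".join(f"<{url}| >" for url in image_urls)
    return f"{text} {links}"
```

For example, with_hidden_gallery("lorem ipsum", [url1, url2]) yields the "lorem ipsum <url1| ><url2| >" form shown above.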
My Laravel projects open in file structure on MacOS
I was using Windows before and I was developing Laravel projects on Windows. Now I will continue to develop on the Macbook. I'm new to using macOS. I installed Apache server and PHP with Brew. I also installed Composer. All working flawlessly. I also set the virtual host options correctly. But my Laravel projects still open in the file structure.
When I run the composer install command, I get the following error. Could it have something to do with it?
Composer could not find a composer.json file in /Users/emre
To initialize a project, please create a composer.json file. See https://getcomposer.org/basic-usage
Laravel Version: 8.83.25
PHP Version: 8.1.12
My Sample Laravel Project:
My httpd-vhosts.conf file:
httpd-vhosts.conf
My hosts file:
hosts
"Laravel Version: 4.2.17"? That version is over 10 years old, better upgrade
Looks like /Users/emre is not your project folder. You probably didn't "set the virtual host options correctly"
What is your privileged reason to use this version of Laravel?
Laravel Valet is your solution without the headache for development. https://laravel.com/docs/9.x/valet
Laravel 4.2 will not work on PHP 8.1. (and should not be used anymore)
Please provide enough code so others can better understand or reproduce the problem.
My Laravel version is 8.83.25. Sorry, I'm a little sick so I made a mistake.
@OMiShah Laravel Valet solved my problem, thank you!
has_many association migration in Rails
I'm working on a Rails project (Rails version 4.2.3).
I created a User and a Task model but did not include any association between them during creation.
Now I want one user to have many tasks and each task to belong to one user.
Through rails g migration AddUserToTask user:belongs_to from this thread
I was able to insert the user_id foreign key into the tasks table. But how do I add the has_many migration? I updated the User model:
class User < ActiveRecord::Base
has_many :customers
end
but i m not sure how i have to write the migration. So far i wrote this:
class addTasksToUser < ActiveRecords::Migration
def change
update_table :users do |t|
t.has_many :tasks
end
add_index :users, taks_id
end
end
But rake db:migrate is not performing any action. Is this the correct way to set up the has_many relationship?
You are doing it wrong. Add associations in model and corresponding fields in migration.
Just add associations in model.
for me, the better answer for this question, was here: https://stackoverflow.com/a/17928074/4179050
Set up associations in models:
class User < ActiveRecord::Base
has_many :tasks
end
class Task < ActiveRecord::Base
belongs_to :user
end
Delete the migration file you've shown.
Add references to tasks table (assuming you already have tasks table):
rails g migration add_references_to_tasks user:references
Migrate the database:
rake db:migrate
If you don't have tasks table yet, create one:
rails g migration create_tasks name due_date:datetime user:references # add any columns here
Migrate the database:
rake db:migrate
From now on your tasks will have user_id attribute.
Hi Andray thanks for you detailed answer. My tasks already have a user_id. How do i get the other way so, that my user have many tasks. I highly struggle implementing this direction
@theDrifter if your tasks table already has a user_id column, then as I've said, you only need to ensure the associations at the model level (the very first part of my answer).
Hey @Andrey, thanks for sticking with me :). I did set up the models accordingly. How do I get the user associated with a set of tasks and also translate it to my table structure?
OK, I got my mistake. I did not fully understand the many-to-one relationship. Now that I want the tasks which belong to one user, I just query the tasks for a specific user_id. I thought I had to store each task_id in the user table, which makes absolutely no sense :)
Add has_many :tasks to the User model and belongs_to :user to the Task model. In your migration file, delete all the current body of the change method and include a add_index :tasks, :user_id line. After that, run the migration normally.
I know this is an old thread, but this is only an effort to improve on it.
I think what you were going for was to show reference foreign key in the table.
In which case:
class AddTasksToUser < ActiveRecord::Migration
  def change
    change_table :users do |t|
      t.references :task
    end
  end
end
Please make sure your reference to the table with the primary key is singular (:task, not :tasks).
Why did this woman want Arya to have this object?
During Game of Thrones S08E05, while Arya is trying to leave King's Landing, she gets help from a woman and her daughter, loses them in the chaos, only to find them again at a later point.
Shortly before the woman
and her daughter are burned by the dragon,
she wants Arya to have what looks like a pale wooden horse. Though everything turned out differently than expected, Arya is in the end greeted by
a white horse covered in blood.
All this leads me to the question of why the woman wanted to give the wooden horse to Arya. Or did she want to give it to her daughter, whom she wanted to leave with Arya so she would survive?
I am not sure if this info was included in the "Inside the episode" part as I haven't watched it.
The horse might symbolize a knight (Arya) going to kill the dragon (Dany).
@AnkitSharma Arya is not a knight.
I think the woman wanted Arya to take her daughter, who was clutching the horse toy, as she could not run
@JAD yes, she is not a "knight", but she will most probably play that role to slay the dragon
It looks like a screenwriting trick to recognize these characters once they're burned. I don't know what this trope is called, but I've seen it several times in disaster movies (like the teddy bear in Breaking Bad).
She was telling Arya to take her daughter, not the toy. In fact, Arya does start to lead the daughter away, but when the girl turns back to go to her mum, Arya can't stop her as Drogon is closing in, so she leaves her.
Arya: We have to keep moving.
Mother: Take her. Take her! Take her.
Game of Thrones, Season 8 Episode 5, "The Bells"
If that is the case, I am completely mistaken :D Could you provide an excerpt of the script, if that helps understanding it?
@XtremeBaumer To be honest I didn't think there was anything relevant so didn't double check but she does say "Take her".
Alright, thanks! Now that whole scene makes sense. I somehow understood "Take it." and thought it had some special meaning because of the horse Arya encounters shortly after
@XtremeBaumer - I think when the daughter tears away to go back to her mother, Arya is left holding the girl's toy which she was carrying, so the horse has the meaning of the pointless slaughter of innocent children to Arya.
@PoloHoleSet I'm pretty sure the toy is seen burnt in the little girl's hand afterwards when Arya is looking at them.
@TheLethalCarrot - Ah.... the way the question is worded I assumed she wound up holding the toy.
Although TheLethalCarrot's answer is correct that the woman was asking Arya to take her daughter, not the horse toy, that doesn't mean there wasn't any symbolism and/or possible callbacks with the toy horse, the girl and mother, and/or a thematic connection to the real horse that turns up later.
Knighthood/A Savior
As Ankit Sharma points out in the comments, it could be a symbol of "knighthood", as this scene may call back to Margaery Tyrell's scenes where she was passing out wooden horse and knight toys to the poor young children of King's Landing, saying that even if their fathers weren't "anointed" knights, they fought bravely to defend and protect the city, making them out to be just like knights. She also promises to take care of all of them.
Boy: He wasn’t a knight. He was just a soldier.
Margaery: And what do knights do? Protect the weak and uphold the
good. Your father did that. Be proud of him.
So it's not that Arya is a knight in the anointed sense, but she behaved like a knight by putting revenge and even herself aside to try and help others, such as helping the mother and child escape. (This idea is also juxtaposed with Jon killing a Northern soldier who's about to rape a woman.)
Foreshadowing Revelations Allegory - Jesus Resurrection/Forgiveness & Death Rider
The toy horse then also foreshadowed the upcoming scene, in which a pale white horse (actually it's a gray Arabian covered in ashes; I grew up with them!) appears to help Arya escape, as she seems to be the only survivor in that part of the city, proving herself a master of escaping death once more...
"[I]f Christ hath not been raised, our faith is vain; ye are yet in
your sins” (1 Corinthians 15:17).
In Revelation 6:2-8, John sees a vision of the Four Horsemen of the
Apocalypse. The final horseman John the speaker sees is Death, who is
literally “followed” by Hell (or “Hades,” depending on which version
of the Bible you’re reading from), signifying untold destruction that
comes when Death rides in.
From the King James Bible:
“And I looked, and behold a pale horse: and his name that sat on him
was Death, and Hell followed with him.”
What gets a little funny/complicated is that the horse Arya finds in
King’s Landing is a white horse coated in pale gray ash. This has a
double meaning due to the fact that two out of the Four Horsemen rode
white and pale horses that differed in both color and symbolism.
While Death, the black rider, rode a pale horse (often understood to
be ashen gray or sickly green), fellow horseman Pestilence/Conquest
rode a pure white horse, thus earning the name “the white rider.”
As Inverse continued to point out, it's hard to make this all align perfectly, since there are a couple of different translations, and because the horse is covered in ash it may give a mixed metaphor; but that doesn't mean this iconic imagery is meaningless, and it would seem that the "Death Rider" is more likely what the writers are going for, considering Arya's history and what it may metaphysically entail being a Faceless Man...
Other Mythology:
From earliest times, white horses have been mythologised as possessing
exceptional properties, transcending the normal world by having wings
(e.g. Pegasus from Greek mythology), or having horns (the unicorn). As
part of its legendary dimension, the white horse in myth may be
depicted with seven heads (Uchaishravas) or eight feet (Sleipnir),
sometimes in groups or singly. There are also white horses which are
divinatory, who prophesy or warn of danger.
As a rare or distinguished symbol, a white horse typically bears the
hero- or god-figure in ceremonial roles or in triumph over negative
forces. Herodotus reported that white horses were held as sacred
animals in the Achaemenid court of Xerxes the Great (ruled 486–465
BC), while in other traditions the reverse happens when it was
sacrificed to the gods.
In more than one tradition, the white horse carries patron saints or
the world savior in the end times (as in Hinduism, Christianity, and
Islam), is associated with the sun or sun chariot (Ossetia) or bursts
into existence in a fantastic way, emerging from the sea or a
lightning bolt.
What Else Do These Scenes Do?
Callback to Shireen Baratheon & R'hllor/Azor Ahai
Shireen's death was one of the hardest in the series, because she was so innocent, and lately there have been friendly reminders of innocence lost in the past couple of episodes. Arguably, given the recurring phrase "Only Death Pays for Life", her death may have been the cost for Melisandre resurrecting Jon Snow, and it brings back the series' debate over whether the characters are dying for something greater than themselves or not.
Shireen's sacrifice, Stannis, Melisandre, and Jon's resurrection also remind viewers of the Azor Ahai prophecy and a possible reincarnation of Azor Ahai, with a possible link to Arya, since Arya killed the Night King with Melisandre's, Beric's, and the Hound's help, and yet she and/or the events of The Long Night didn't meet the full criteria...
Now with Dany breaking down, there is more of a set-up for the prophecy to be fulfilled, pointing to Jon being in a position where he may have to kill his beloved Queen. But where does that leave Arya?
It's unclear, but there may be some kind of foreshadowing here that Arya contributes to the stabbing or murder of Queen Daenerys Targaryen. After all, in the TV version Dany doesn't have violet eyes like her book counterpart, but "green" ones like actress Emilia Clarke's. And Melisandre's "eye prophecy" does include Arya being involved in the death of someone with "green eyes"...
Callback to Cersei Lannister: Motherhood & The Faith of the Seven/True Identity
Notably, the woman with her child whom Arya tries to help has short-cropped hair. In Season 6, Cersei gets into it with the High Sparrow, leader of the Faith Militant (Faith of the Seven), which results not only in the cutting of Cersei's hair and her walk of atonement, but also in Cersei's retaliation, using wildfire to blow up the Sept, killing Margaery (and Loras) along with many innocents, which led to her last-born child's death (fulfilling Maggy the Frog's TV-version prophecy).
There is just a lot of imagery and musical score that calls back to Season 6's King's Landing scenes throughout this episode, including exploding wildfire. What it all means may go back to arguments given by the Catholic metaphysical poet John Donne ('For Whom the Bell Tolls'), and for Cersei more particularly, it may go back to what the High Sparrow said about the truth of who we really are, however we look, as both a pregnant Cersei dies in the arms of her knight Ser Jaime and the poor woman and her child die following their knight, Arya.
Upvote for your intense contemplation of the storyline :) Quite curious about your thoughts on the flashes relating the Hound's fight and Arya's struggles to get out of the city. It was almost blow by blow... certainly there was a reason for it.
It definitely seems like there is some metaphorical, if not metaphysical, transition there, but I'm not sure if she swung back to humanity (via the Hound's words telling her to leave/not seek revenge) and is staying there, letting go of her anger (i.e. the Hound's death = lifting of burdens), or if she felt completely defeated by her experience and is going straight back to revenge again. Kind of like the difference between working with Jon (coming to an agreement) or working against Jon (going behind his back). I'm hopeful there is a team effort, but we'll see.
Also thanks for the upvote. I know that's not exactly what was being asked, but I felt that despite the dialogue mishap, the scene/exchange was still really important to other parts of the series and maybe the last episode...
(the girl in me is secretly hoping that) Arya feels like her list is complete and will return to Gendry (for her happily ever after) ... sorry, it's hormones. And I agree. Many things will be realized during the last episode.
Deserialize Nested JSON in VB.NET for conversion
I have recently started a project using VB.NET and I'm struggling to figure out how to get the data for several of the items in the Mid class (o, h, l, c) converted to Double and loaded into an array, to eventually perform some math functions on them. Below is a small (3 candles out of 1000) sample of the JSON data I wish to work with.
{
"instrument": "EUR_USD",
"granularity": "M1",
"candles": [
{
"complete": true,
"volume": 18,
"time": "2017-07-21T04:13:00.000000000Z",
"mid": {
"o": "1.16281",
"h": "1.16284",
"l": "1.16274",
"c": "1.16281"
}
},
{
"complete": true,
"volume": 96,
"time": "2017-07-21T20:58:00.000000000Z",
"mid": {
"o": "1.16640",
"h": "1.16642",
"l": "1.16628",
"c": "1.16628"
}
},
{
"complete": true,
"volume": 32,
"time": "2017-07-21T20:59:00.000000000Z",
"mid": {
"o": "1.16628",
"h": "1.16652",
"l": "1.16628",
"c": "1.16641"
}
}
]
}
Here is the relevant code:
Imports Newtonsoft.Json
Public Class Rootobject
Public Property instrument As String
Public Property granularity As String
Public Property candles() As List(Of Candle)
End Class
Public Class Candle
Public Property complete As Boolean
Public Property volume As Integer
Public Property time As String
Public Property mid As Mid
End Class
Public Class Mid
Public Property o As String
Public Property h As String
Public Property l As String
Public Property c As String
End Class
... 'jsonstring loaded with data here
Dim obj = JsonConvert.DeserializeObject(Of Rootobject)(jsonstring)
I have attempted to do something similar to below with a loop only to receive errors.
Dim obj2 = obj.candles(0).mid.o
I have also tried to find ways of using JObject.Parse(jsonstring) without any success. So, specifically what is the best way to get the values in the Mid class each loaded into an array for further processing?
Thanks in advance.
If you want to get all the Mid objects into an array, you can use Linq to project them from the collection.
Dim mids As Mid() = obj.candles.Select(Function(candle) candle.mid).ToArray()
If you want a collection of a specific Mid property just select the one you want
Dim os As Double() = obj.candles.Select(Function(candle) Double.Parse(candle.mid.o)).ToArray()
or grab it from the mids array project before
Dim os As Double() = mids.Select(Function(mid) Double.Parse(mid.o)).ToArray()
The same can be done for any of the other properties of Mid.
Why does my discord bot not add roles properly?
I am trying to make a bot that would add a role that is in the server when a person types !join choiceOutOfVariousRoles. I am currently using discord version 12. My error message is:
fn = fn.bind(thisArg);
Although trying various techniques I could not get the code to work.
const Discord = require('discord.js');
const client= new Discord.Client();
const token = process.env.DISCORD_BOT_SECRET
client.on('ready', () => {
console.log("I'm in");
console.log(client.user.username);
});
client.on('message', msg => {
if (msg.content.toLowerCase().startsWith("!join"))
{
var args = msg.content.toLowerCase().split(" ")
console.log(args)
if (args[1] === 'sullen')
{
msg.channel.send('You have successfully joined Sullen!')
const sullenRole = msg.guild.roles.cache.find('name','Sullen')
msg.member.addRole(role.id)
}
}
});
client.login(token)
EDIT: Fixed what everyone was saying and all I need to do now is update the permissions (my friend has to do that because it's not my bot) and I should be all good. Thanks everyone! :D
The first thing wrong with your code is that you're mixing v11 and v12 code
May you please elaborate?
you can only use cache in v12 and addRole is v11 syntax. What version do you have?
Right now I am using version 12.4.1.
Also you're using the wrong syntax for Collection.prototype.find(). Please check the page I linked.
discord.js introduces breaking changes very frequently, and v12 is no exception. You need to make sure you find up-to-date code otherwise it won't work. GuildMember#addRole was moved to GuildMemberRoleManager#add, which means you must use msg.member.roles.add(sullenRole).
Are some superpositions caused by measurement?
From the Heisenberg uncertainty principle, we know that we can't know both the position and momentum of a particle, does this mean that measuring the position puts the momentum in a superposition? Or does it just mean that we will never know because we've interacted with it and destroyed that information (the momentum before observation)?
..."puts the momentum in a superposition..." A superposition of what? You have to specify the basis first. If you measure the momentum, then the post-measurement state is an eigenstate of momentum (modulo some comments about momentum eigenstates being non-normalizable), which is not a superposition of momentum eigenstates. However, that state is a superposition of position eigenstates. Every quantum state is a superposition, because you can always choose to expand the state in a basis that doesn't contain the state as an element.
I'm not writing an answer, because I'm not sure what your actual question is. "Are some superpositions caused by measurement?" is different than "Does this mean that measuring the position puts the momentum in a superposition?" which is different than "Does it just mean that we will never know because we've interacted with it and destroyed that information (the momentum before observation)?" Unfortunately, the questions seem to stem from a set of false premises about what QM is about, so it's hard to answer this question. Can you clarify and/or focus your question?
This is not complicated and your intuition is correct: yes, a measurement of $X$ leaves the system in a superposition of eigenstates of $Y$ if $[X, Y] \neq 0$.
From what you've written it looks like you're thinking that the uncertainty is caused by the act of measurement, in the sense that performing the measurement somehow disturbs the system, and that this makes things uncertain. But it's more fundamental than that. The uncertainty is embedded in the underlying physics - in a certain sense, the particle (the underlying wavefunction) doesn't simultaneously have precisely defined position and momentum, whether you measure it or not. See: https://www.youtube.com/watch?v=MBnnXbOM5S4
Not "momentum in superposition", but $\psi$ in superposition of (improper) eigenfunctions of the momentum operator, $\psi_{localized} = \int_{-\infty}^{\infty}c(p)e^{ipx/\hbar}dp/\sqrt{2\pi}$.
A measurement of an observable $A$ collapses the state into one of the eigenstates $|n\rangle$ of $A$. If you repeatedly measure this operator in this basis, you will just keep getting $|n\rangle$, $|n\rangle$, etc. (really, the associated eigenvalue).
But there may be another basis and another operator you may want to measure, call it $X$. This $X$ operator has a basis $|\varphi\rangle$ such that every element of $|n\rangle$ can be written as a linear combination of $|\varphi\rangle$ states. This can always be done since all of these live in the same Hilbert space. An example of this would be measuring spin in two different directions. The observables are different, not compatible, yet all in the same Hilbert space. Summarizing, you have
$$|n\rangle = \sum_i c_i |\varphi_i\rangle. $$
So when you measure the observable $X$ you end up getting one of the possible $|\varphi_i\rangle$ states with probability $|c_i|^2$. In this sense, a measurement yields a superposition - but only because you're measuring an incompatible observable.
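As an illustration of the spin example mentioned above, here is a minimal numeric sketch (my own, not from the original answer; the $\hbar=1$ convention and variable names are illustrative). It shows that after a measurement collapses the state to an eigenstate of $S_z$, that state is an equal-weight superposition of the eigenstates of the incompatible observable $S_x$:

```python
import math

# Spin-1/2, hbar = 1: eigenstates of S_z and S_x written as real 2-vectors.
up_z = (1.0, 0.0)
up_x = (1 / math.sqrt(2), 1 / math.sqrt(2))
down_x = (1 / math.sqrt(2), -1 / math.sqrt(2))

def inner(bra, ket):
    # Inner product <bra|ket> (components are real here, so no conjugation needed)
    return sum(b * k for b, k in zip(bra, ket))

# After measuring S_z and collapsing to |up_z>, the Born-rule probabilities
# of the two S_x outcomes are |<up_x|up_z>|^2 and |<down_x|up_z>|^2:
p_up = inner(up_x, up_z) ** 2
p_down = inner(down_x, up_z) ** 2
# Both come out to 1/2: the post-measurement state is an even superposition
# in the incompatible S_x basis.
```

This is exactly the $|n\rangle=\sum_i c_i|\varphi_i\rangle$ expansion above, with $|c_i|^2 = 1/2$ for each outcome.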
A few remarks:
You shouldn't think of "being in a superposition" as a meaningful statement about some property of a quantum state (unless you introduce additional qualifiers, give a particular physical meaning to some basis, etc). Every quantum state can be written as a superposition in infinitely many ways. E.g. the state $|0\rangle$ can be written as $\frac{1}{2}\big[(|0\rangle+|1\rangle)+(|0\rangle-|1\rangle)\big]$, or as $\frac{1}{2}\big[(|0\rangle+i|1\rangle)+(|0\rangle-i|1\rangle)\big]$, or in infinitely many other ways. Exactly like you can decompose vectors in an abstract vector space in infinitely many choices of basis.
Measurements cause collapse of the state. Projective measurements (which you often describe in terms of "measuring an observable") collapse the state in the eigenstate corresponding to the measured eigenvalue. So yes, you have a collapse, which creates a specific superposition. But then again, the state was "a superposition" also before the measurement, so this seems a rather moot point.
You can make the statement a bit more meaningful saying that the measurement creates a state that has to be a superposition of different eigenstates of any observable that doesn't commute with the one you're measuring, as pointed out in the comments.
I think your question is as follows:
We see the phenomenon: after measuring the position of a particle, we can't measure the exact value of its momentum; what we know is just a distribution.
To explain this unusual phenomenon, we propose two possible theories.
(Classical parlance) When we measure the position, we disturb the particle, so the momentum changes (as you said, "destroyed that information"). Thus we can't measure the exact value of its momentum.
(Quantum parlance) Measuring the position puts the momentum into a superposition of eigenstates of momentum. Thus we know just a distribution.
Which theory is right?
My answer is as follows:
The two theories are both right.
The classical parlance treats measurement as disturbance. The peculiarity is that the disturbance cannot be removed by improving the experimental method. In this way we define how a system is absolutely small, and we call this kind of small system a quantum system. In this view, what Quantum Mechanics tells us is just how a quantum system interacts with our measuring instruments, not how nature acts.
The quantum parlance treats measurement as wave function collapse. This special physical process causes the wave function to become an eigenstate of position, and at the same time a superposition of eigenstates of momentum. In this view, Quantum Mechanics describes, to some extent, how nature acts.
Both of these theories describe the phenomenon in different ways. We can't say which is right or wrong. They are both effective.
If $x_1,\dots,x_n \sim N(0,\sigma^2)$ iid and $T=\sum_{i=1}^{n}x_{i}^{2}$, what is the distribution of $T$?
I was thinking using the mgf
$M_{T}(s)=E(e^{Ts})=E(e^{s\sum_{i=1}^{n}x_{i}^2})=E(\prod_{i=1}^{n}e^{sx_{i}^2})$
What would be next?
What would be next?
Next would be that the random variables $X_i$ being i.i.d., $M_T(s)=E[\mathrm e^{sX_1^2}]^n$. Then that if $Y$ is standard gaussian, then for every $u\lt1/2$,
$$
E[\mathrm e^{uY^2}]=\frac1{\sqrt{2\pi}}\int\mathrm e^{uy^2}\mathrm e^{-y^2/2}\mathrm dy\stackrel{(z=y\sqrt{1-2u})}{=}\frac1{\sqrt{2\pi}}\frac1{\sqrt{1-2u}}\int\mathrm e^{-z^2/2}\mathrm dz=\frac1{\sqrt{1-2u}}.
$$
Then that, applying this to $u=s\sigma^2$ yields, for every $s\lt s_\sigma$ with $s_\sigma=1/(2\sigma^2)$,
$$
M_T(s)=(1-s/s_\sigma)^{-n/2}.
$$
Then that, for every $a\gt0$ and $s\lt s_\sigma$,
$$
\int_0^\infty\mathrm e^{st}t^{a-1}\mathrm e^{-s_\sigma t}\mathrm dt\stackrel{(r=(s_\sigma-s)t)}{=}(s_\sigma-s)^{-a}\int_0^\infty r^{a-1}\mathrm e^{-r}\mathrm dr=(s_\sigma-s)^{-a}\Gamma(a).
$$
And finally that
$$
M_T(s)=\Gamma(n/2)^{-1}\int_0^\infty\mathrm e^{st}t^{(n-2)/2}\mathrm e^{-s_\sigma t}\mathrm dt,
$$
hence that $T$ has a density $f_T$ defined by
$$
f_T(t)=\Gamma(n/2)^{-1}s_\sigma^{n/2}t^{(n-2)/2}\mathrm e^{-s_\sigma t}\mathbf 1_{t\gt0}.
$$
Which is the density of some well-known distribution...
Hint: The square of a standard normal random variable has the chi-squared distribution. And being a $\Gamma$ distribution with parameters $(\frac{k}{2},2)$, you can "add" them up. Check your final answer here.
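For completeness (the explicit identification is not spelled out in either answer): the density $f_T$ derived above, with $s_\sigma = 1/(2\sigma^2)$, is a Gamma density, so

```latex
T=\sum_{i=1}^{n}x_{i}^{2}\;\sim\;\Gamma\!\left(\frac{n}{2},\;\mathrm{scale}=2\sigma^{2}\right),
\qquad
\frac{T}{\sigma^{2}}\;\sim\;\chi^{2}_{n}.
```

In particular, for $\sigma=1$ this is the chi-squared distribution with $n$ degrees of freedom mentioned in the hint.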
Is it wrong to think of area as a measure of the total number of points?
Consider the following mapping $f: M \to N$ between the points on a hemisphere and the points on a circle of radius r, illustrated by the figure below
Clearly this is a bijection between the two sets, and also if I'm not mistaken, a homeomorphism. Every point on the circle corresponds to a point on the hemisphere and vice versa, and yet one has double the area of the other. How is this possible? How is area rigorously defined in higher mathematics, so as to make sense of this?
A segment and a square have the same "amount" of points.... so the answer to your question is YES, IT IS WRONG. Be careful with infinity.
You cut the surface up into little "rectangles", add up their areas, and take a limit as the dimensions of the "rectangles" go to zero.
You can read about area in any book about measure theory. It's called the $2$-dimensional Lebesgue measure.
Homeomorphisms are too strong to preserve area in any reasonable way. What you want to look at is linear maps, differentiable maps, or in even more generality, Lipschitz maps, the last one appearing in geometric measure theory. Then you get formulas for area in terms of those maps and the original set.
In general, translations and rotations preserve area completely, which are pretty rigid geometric notions.
Well, to define the measure of the hemisphere you need more than the Lebesgue measure. I think you need Riemannian Differential geometry.
@filippo there's something called the $2$-dimensional Hausdorff measure in $\mathbb{R}^3$. You don't need the theory of differentiable manifolds to define it. I think you need it to give explicit formulas, but that too can be circumvented.
Yes, that's an alternative. My point was that the Lebesgue measure is not enough.
Is it wrong to think of area as a measure of the total number of points?
Yes. These concepts are very vaguely related. Especially when infinities are in play.
Clearly this is a bijection between the two sets, and also if I'm not mistaken, a homeomorphism. Every point on the circle corresponds to a point on the hemisphere and vice versa, and yet one has double the area of the other. How is this possible?
Well, homeomorphisms don't have to preserve area. A simpler example is $f:\mathbb{R}^2\to\mathbb{R}^2$, $f(v)=\frac{1}{2}v$ which is a homeomorphism as well, but it doesn't preserve area. For example $f([0,1]^2)=[0,\frac{1}{2}]^2$ and so the image has $4$ times smaller area.
Infinities are weird indeed. This is something you have to accept.
How is area rigorously defined in higher mathematics, so as to make sense of this?
This is done via so called measure theory. You first need a $\sigma$-algebra on which so called measures are defined. Then we define Lebesgue measure which is a measure of subsets of $\mathbb{R}^n$. This measure is tightly related to dimension, and so it won't give you correct answer when trying to calculate "area" ($2$-dimensional Lebesgue measure) of subsets of $3$-dimensional space, like your hemisphere (its Lebesgue measure is simply $0$). But it is important, because it is a building block for other measures.
In the case of a surface this can be done by approximating it with simpler shapes. First you triangulate the surface, meaning you divide it into "triangles". Not real triangles, but "curved" triangles. Meaning each triangle on the surface is simply a set of three points. Now you calculate the area of each triangle. You simply treat them like normal triangles, you ignore the fact that they are curved. This of course won't give you the correct result. But by making the triangulation denser and denser you will get a better and better approximation each time. By going to infinity (taking a limit) you will get the real area.
Of course if you try to do this by hand you will quickly realize that this kind of calculations is not for humans. It is too hard. That's where measure theory helps. It reduces the problem to calculating appropriate integrals.
This method of triangulation can be applied to many different shapes, but not all subsets can be triangulated. For those other subsets, you will need more sophisticated tools. I encourage you to learn the measure theory first.
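To make the triangulation idea concrete, here is a rough Python sketch (my own illustration, not part of the original answer) that approximates the area of a hemisphere by summing the areas of flat triangles on a latitude/longitude grid; as the grid is refined, the total approaches the true area $2\pi r^2$:

```python
import math

def hemisphere_area_triangulated(r=1.0, n=80):
    """Approximate the area of a hemisphere of radius r by triangulating it
    on an n-by-n grid in (polar angle, azimuth) and summing the areas of the
    resulting flat triangles. The exact area is 2*pi*r**2."""

    def point(theta, phi):
        # Point on the hemisphere: theta in [0, pi/2] measured from the pole,
        # phi in [0, 2*pi] around the axis.
        return (r * math.sin(theta) * math.cos(phi),
                r * math.sin(theta) * math.sin(phi),
                r * math.cos(theta))

    def tri_area(a, b, c):
        # Area of the flat triangle abc = |AB x AC| / 2.
        ab = [b[i] - a[i] for i in range(3)]
        ac = [c[i] - a[i] for i in range(3)]
        cross = (ab[1] * ac[2] - ab[2] * ac[1],
                 ab[2] * ac[0] - ab[0] * ac[2],
                 ab[0] * ac[1] - ab[1] * ac[0])
        return 0.5 * math.sqrt(sum(v * v for v in cross))

    total = 0.0
    for i in range(n):
        t0 = (math.pi / 2) * i / n
        t1 = (math.pi / 2) * (i + 1) / n
        for j in range(n):
            p0 = 2 * math.pi * j / n
            p1 = 2 * math.pi * (j + 1) / n
            # Split each curved "rectangle" into two flat triangles.
            a, b = point(t0, p0), point(t1, p0)
            c, d = point(t1, p1), point(t0, p1)
            total += tri_area(a, b, c) + tri_area(a, c, d)
    return total
```

Doubling `n` roughly quarters the error; the limit of this process is exactly the area the answer describes, and measure theory replaces the hand computation with an integral.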
How do I draw the points in an ESRI Polyline, given the bounding box as lat/long and the "points" as radians?
I'm using OpenMap and I'm reading a ShapeFile using com.bbn.openmap.layer.shape.ShapeFile. The bounding box is read in as lat/long points, for example 39.583642,-104.895486. The bounding box is a lower-left point and an upper-right point which represents where the points are contained. The "points," which are named "radians" in OpenMap, are in a different format, which looks like this: [0.69086486, -1.8307719, 0.6908546, -1.8307716, 0.6908518, -1.8307717, 0.69085056, -1.8307722, 0.69084936, -1.8307728, 0.6908477, -1.8307738, 0.69084626, -1.8307749, 0.69084185, -1.8307792].
How do I convert the points like "0.69086486, -1.8307719" into x,y coordinates that are usable in normal graphics?
I believe all that's needed here is some kind of conversion, because bringing the points into Excel and graphing them creates a line whose curve matches the curve of the road at the given location (lat/long). However, the axes need to be adjusted manually, and I have no reference for how to adjust the axes, since the given bounding box appears to be in a different format than the given points.
The ESRI Shapefile technical description doesn't seem to mention this (http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf).
OpenMap can convert to degrees for you - you’ll have to drill down into the following classes and methods.
*com.bbn.openmap.layer.shape.ESRIPolygonRecord (see variable “polygons” of type ESRIFloatPoly[])
*com.bbn.openmap.layer.shape.ESRIPoly.ESRIFloatPoly
*getDecimalDegrees(); (method)
0.69086486, -1.8307719 is a latitude and a longitude in radians.
First, convert to degrees (multiply by (180/pi)), then you will have common units between your bounding box and your coordinates.
Then you can plot all of it in a local frame with the following :
x = (longitude-longitude0)*(6378137*pi/180)*cos(latitude0*pi/180)
y = (latitude-latitude0)*(6378137*pi/180)
(latitude0, longitude0) are the coordinates of a reference point (e.g. the lower-left corner of the bounding box)
units are degrees for angles and meters for distances
Edit -- explanation :
This is an orthographic projection of the Earth considered as a sphere whose radius is 6378137.0 m (semi-major axis of the WGS84 ellipsoid), centered on the point (lat0, lon0)
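Here is a small Python sketch of the same conversion (my own translation of the formulas above; the function and variable names are illustrative, not from OpenMap):

```python
import math

R = 6378137.0  # WGS84 semi-major axis in metres, as used in the answer

def radians_to_local_xy(lat_rad, lon_rad, lat0_deg, lon0_deg):
    """Convert a shapefile point given in radians to (x, y) metres in a flat
    local frame centred on a reference point (lat0_deg, lon0_deg) in degrees,
    e.g. the lower-left corner of the bounding box."""
    lat_deg = math.degrees(lat_rad)   # degrees = radians * 180 / pi
    lon_deg = math.degrees(lon_rad)
    metres_per_degree = R * math.pi / 180.0
    x = (lon_deg - lon0_deg) * metres_per_degree * math.cos(math.radians(lat0_deg))
    y = (lat_deg - lat0_deg) * metres_per_degree
    return x, y
```

The reference point maps to (0, 0), and a point 0.001 degrees further north lands about 111.3 m up the y axis, which matches the metres-per-degree scale in the formulas above.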
Looks like you're right about the radians and the equations. How about an explanation of what the equations are doing?...
Aaron would you mind sharing your ColdFusion code to read the Shapefile? Thanks find me on twitter same name
In OpenMap, there are a number of ways to convert from radians to decimal degrees:
Length.DECIMAL_DEGREE.fromRadians(radVal);
Math.toDegrees(radVal) // Standard java library
For an array, you can use ProjMath.arrayDegToRad(double[] radvals);
Be careful with that last one, it does the conversion in place. So if you grab a lat/lon array from an OMPoly, make a copy of it first before converting it. Otherwise, you'll mess up the internal coordinates of the OMPoly, which it expects to be in radians.
OpenJDK 1.8.0_242, MaxRAMFraction setting not reflecting
I am running a Spring Boot application in the Alpine OpenJDK image and facing OutOfMemory issues. The max heap is being capped at 256MB. I tried updating the MaxRAMFraction setting to 1 but did not see it reflected in the Java process. I have the option to increase the container memory limit to 3000m but would prefer to use the cgroup memory with MaxRAMFraction=1. Any thoughts?
Java-Version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (IcedTea 3.15.0) (Alpine 8.242.08-r0)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
bash-5.0$ java -XX:+PrintFlagsFinal -version | grep -Ei "maxheapsize|MaxRAMFraction"
uintx DefaultMaxRAMFraction = 4 {product}
uintx MaxHeapSize := 262144000 {product}
uintx MaxRAMFraction = 4 {product}
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (IcedTea 3.15.0) (Alpine 8.242.08-r0)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
Container Resource limits
ports:
- containerPort: 8080
name: 8080tcp02
protocol: TCP
resources:
limits:
cpu: 350m
memory: 1000Mi
requests:
cpu: 50m
memory: 1000Mi
securityContext:
capabilities: {}
Container JAVA_OPTS screenshot
Max heap is being capped at 256MB.
You mean via -m in docker? If so, this is not the Java heap you are specifying, but the total memory.
I tried updating the MaxRAMFraction setting to 1
MaxRAMFraction is deprecated and un-used, forget about it.
UseCGroupMemoryLimitForHeap
is deprecated and will be removed. Use UseContainerSupport, which was also backported to Java 8.
MaxRAM=2g
Do you know what this actually does? It sets the value for the "physical" RAM that the JVM is supposed to think you have.
I assume that you did not set -Xms and -Xmx on purpose here, since you do not know how much memory the container will have? If so, we are in the same shoes. We do know that the minimum we are going to get is 1g, but I have no idea of the max; as such, I prefer not to set -Xms and -Xmx explicitly.
Instead, we do:
-XX:InitialRAMPercentage=70
-XX:MaxRAMPercentage=70
-XX:+UseContainerSupport
-XX:InitialHeapSize=0
And that's it. What this does?
InitialRAMPercentage is used to calculate the initial heap size, BUT only when InitialHeapSize/Xms are missing. MaxRAMPercentage is used to calculate the maximum heap. Do not forget that a java process needs more than just heap, it needs native structures also; that is why that 70 (%).
The memory is capped at 256MB because by default the JVM uses MaxRAMFraction=4. My container has 1024MB of memory and the JVM caps the heap at 256MB (please refer to the attached screenshot in the question), and I was trying to override it by passing MaxRAMFraction=1. I did not realize it was deprecated and unused. I will try UseContainerSupport and post back feedback. Thanks.
Unable to login with email or mobile
I am trying to login with email or mobile (in PHP and MySQL). But I am unable to login.
Here is my code:
$sql = "SELECT * FROM tb_users WHERE (email='$loginEmailOrMobile' AND password='$loginPassword') OR (mobile_number='$loginEmailOrMobile' AND password='$loginPassword')";
$mysql = mysql_query($sql) or die(mysql_error());
if(mysql_num_rows($mysql) == 1)
{
echo "login successful";
$user = mysql_fetch_object($mysql);
echo $user->username;
}
else
{
die(mysql_error());
}
What is the error you got?
Did you know mysql_ functions are deprecated and fully removed since PHP 7.0? You should consider switching over to mysqli or PDO. It's actually a well known fact that Jesus sacrificed himself so mysql_ would be removed. Don't let his death be in vein.
Also look into the prepare method as well, so you can protect yourself better from SQL injection (which you're currently vulnerable to, btw).
I am getting a blank page
I can't see any syntax errors... try putting error_reporting(E_ALL); ini_set('display_errors', 1); at the top of your script and return here with the output (amend it to your OP)
You'll get a blank page if you fill in the wrong username/password so your mobile might be adding an extra space to either the username or password.
There is soooooo much wrong here: deprecated mysql_* functions, sql injection, plain-text password storage, logged-in state not saved. You should start over.
@IsThisJavascript: in the same vein as your comment, I sometimes give warnings about SQL injection in vain, even if my volunteering to be a teacher makes me look vain. Sadly some code on Stack Overflow is a weather-vane for how hard it is to eradicate insecure practices.
@maththi-siva-charan, you have two inputs: one for email/mobile, the other for password, plus a submit button. Is that right?
I think you have many problems in your code.
The mysql* extension is deprecated, and removed in PHP 7.
Plain passwords have poor security.
Prepared statements are recommended when using variables.
You are handling an error when there is none.
You probably have no row (or more than 1) matching your credentials, so it's neither a success nor a MySQL error.
I will not cover all of these points in this answer, in order to respect your original code, but I will try to explain many of them.
First, here is tested and working code that does the job.
<?php
//Connect to the database
$mysqli = new mysqli("localhost", "username", "password", "databaseName");
//These variables may be set from $_POST or anywhere
//it is only an example
$loginEmailOrMobile = '<EMAIL_ADDRESS>';
$loginPassword = '123456';
// !! Vulnerable to SQL injection !!
$sql = "SELECT * FROM tb_users WHERE password = '$loginPassword' AND (email='$loginEmailOrMobile' OR mobile_number = '$loginEmailOrMobile')";
$result = $mysqli->query($sql);
//Only if we got 1 result
if (1 === mysqli_num_rows($result)) {
$user = mysqli_fetch_object($result);
echo $user->email;
} else {
die('We have not got only one result');
}
In this code you can see that:
Mysqli is used to perform the queries. It's really recommended to use mysqli instead of mysql for security reasons. Also, I'm using PHP 7 and I do not have the mysql extension.
Because the case you were handling with mysql_error() was not an error, I have removed it and handle the case differently. If I get 0 or even 2 results, that's not a MySQL error; it just means there is no user, or more than one, with those credentials.
I have updated your SQL query in order to be less redundant but the logic is the same.
Your SQL query is not a prepared statement. I haven't touched it, in order to respect your code and not rewrite it entirely, but I really advise you to have a look at mysqli prepared statements.
It seems you are currently using a plain password (unless you have already hashed it). It is recommended to hash your passwords before inserting them into the database, and compare only the hashes. Please have a look at crypt().
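To make the prepared-statement and hashed-password advice concrete, here is a small language-neutral sketch (Python with sqlite3 standing in for mysqli; sha256 keeps the sketch dependency-free, but real PHP code should use password_hash()):

```python
import hashlib
import hmac
import sqlite3

# Illustrative stand-in for the advice above: parameterized queries plus
# hashed passwords. Table/column names mirror the question; the data is
# made up.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tb_users (email TEXT, mobile_number TEXT, password_hash TEXT)"
)
conn.execute(
    "INSERT INTO tb_users VALUES (?, ?, ?)",
    ("a@b.com", "5551234", hashlib.sha256(b"123456").hexdigest()),
)

def find_user(login, password):
    # The ? placeholders keep user input out of the SQL text entirely,
    # which is what prevents SQL injection.
    row = conn.execute(
        "SELECT email, password_hash FROM tb_users "
        "WHERE email = ? OR mobile_number = ?",
        (login, login),
    ).fetchone()
    if row is None:
        return None
    candidate = hashlib.sha256(password.encode()).hexdigest()
    # Compare the stored and candidate hashes in constant time.
    return row[0] if hmac.compare_digest(candidate, row[1]) else None

print(find_user("a@b.com", "123456"))  # -> a@b.com
print(find_user("a@b.com", "wrong"))   # -> None
```

Note that only the hashes are compared; the plain password is never stored or interpolated into SQL.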
Sum of new occurrences from previous five years
I have a data frame that looks like this toy data frame:
df <- data.frame(company=c("company_a","company_b","company_b", "company_a","company_b","company_a"),
fruit=c("peaches, apples; oranges","apples; oranges; bananas","oranges; pears","bananas; apples; oranges; pears","apples; oranges; pears","bananas; apples; oranges; pears; peaches"),
year=c("2010","2011","2014","2014", "2016","2018"))
> df
company fruit year
1 company_a peaches; apples; oranges 2010
2 company_b apples; oranges; bananas 2011
3 company_b oranges; pears 2014
4 company_a bananas; apples; oranges; pears 2014
5 company_b apples; oranges; pears 2016
6 company_a bananas; apples; oranges; pears; peaches 2018
Desired Outcome
I would like a column (new_occurrences) with the sum of fruits that has never appeared in the previous five years.
For example, row 4: for company_a, bananas and pears never appeared in the previous 5 years, thus new_occurrences = 2.
That will look like this:
> df
company fruit year new_occurrences
1 company_a peaches; apples; oranges 2010 3
2 company_b apples; oranges; bananas 2011 3
3 company_b oranges; pears 2014 1
4 company_a bananas; apples; oranges; pears 2014 2
5 company_b apples; oranges; pears 2016 0
6 company_a bananas; apples; oranges; pears; peaches 2018 1
Attempt
I tried the answer from this question, for which I created a function that is the opposite of '%in%' and used it in df3.
'%!in%' <- function(x,y)!('%in%'(x,y))
# clean up column classes
df[] <- lapply(df, as.character)
df$year <- as.numeric(df$year)
library(data.table)
setDT(df)
# create separate column for vector of fruits, and year + 5 column
df[, fruit2 := strsplit(gsub(' ', '', fruit), ',|;')]
df[, year2 := year + 5]
# Self join so for each row of df, this creates one row for each time another
# row is within the year range
df2 <- df[df, on = .(year <= year2, year > year, company = company)
, .(company, fruit, fruit2, i.fruit2, year = x.year)]
# create a function which is the opposite of '%in%'
'%!in%' <- function(x,y)!('%in%'(x,y))
# For each row in the (company, fruit, year) group, check whether
# the original fruits are in the matching rows' fruits, and store the result
# as a logical vector. Then sum the list of logical vectors (one for each row).
df3 <- df2[, .(new_occurrences = do.call(sum, Map(`%!in%`, fruit2, i.fruit2)))
, by = .(company, fruit, year)]
# Add sum_occurrences to original df with join, and make NAs 0
df[df3, on = .(company, fruit, year), new_occurrences := i.new_occurrences]
df[is.na(new_occurrences), new_occurrences := 0]
#delete temp columns
df[, `:=`(fruit2 = NULL, year2 = NULL)]
Unfortunately this attempt does not give me my desired outcome.
Any help would be much appreciated, also solutions with dplyr are welcome! :)
Is the first comma in first row after peaches a typo? Should it have been a semicolon?
Assuming the input shown reproducibly in the Note at the end, define two functions to convert a semicolon-separated string to a vector and back again. Then, for each row, determine the fruit seen in the previous 5 years for the current company and compute the required set difference. In a second transform, compute the number of new fruit. No packages are used.
char2vec <- function(x) scan(text = x, what = "", sep = ";", strip.white = TRUE,
quiet = TRUE)
vec2char <- function(x) paste(x, collapse = "; ")
df2 <- transform(df, new = sapply(1:nrow(df), function(i) {
year0 <- df$year[i]; company0 <- df$company[i]; fruit0 <- df$fruit[i]
prev_fruit <- char2vec(subset(df,
year < year0 & year >= year0 - 5 & company == company0)$fruit)
vec2char(Filter(function(x) !x %in% prev_fruit, char2vec(fruit0)))
}), stringsAsFactors = FALSE)
transform(df2, num_new = lengths(lapply(new, char2vec)))
giving:
company fruit year new num_new
1 company_a peaches; apples; oranges 2010 peaches; apples; oranges 3
2 company_b apples; oranges; bananas 2011 apples; oranges; bananas 3
3 company_b oranges; pears 2014 pears 1
4 company_a bananas; apples; oranges; pears 2014 bananas; pears 2
5 company_b apples; oranges; pears 2016 0
6 company_a bananas; apples; oranges; pears; peaches 2018 peaches 1
Note
This is taken from question. One comma is changed to a semicolon.
df <- data.frame(company=c("company_a","company_b","company_b",
"company_a","company_b","company_a"),
fruit=c("peaches; apples; oranges","apples; oranges; bananas",
"oranges; pears", "bananas; apples; oranges; pears",
"apples; oranges; pears", "bananas; apples; oranges; pears; peaches"),
year = c("2010","2011","2014","2014", "2016","2018"))
df[] <- lapply(df, as.character)
df$year <- as.numeric(df$year)
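For comparison, the same previous-five-years logic can be sketched outside R; this Python version (purely illustrative) reproduces the expected new_occurrences column:

```python
# Count fruits not seen for the same company in the previous five years,
# i.e. in the window year-5 <= y < year (matching the R join condition).
rows = [
    ("company_a", "peaches; apples; oranges", 2010),
    ("company_b", "apples; oranges; bananas", 2011),
    ("company_b", "oranges; pears", 2014),
    ("company_a", "bananas; apples; oranges; pears", 2014),
    ("company_b", "apples; oranges; pears", 2016),
    ("company_a", "bananas; apples; oranges; pears; peaches", 2018),
]

def split_fruits(s):
    # Tolerate both "; " and ", " separators, as in the question's data.
    return {f.strip() for f in s.replace(",", ";").split(";")}

new_counts = []
for company, fruit, year in rows:
    seen = set()
    for c2, f2, y2 in rows:
        if c2 == company and year - 5 <= y2 < year:
            seen |= split_fruits(f2)
    new_counts.append(len(split_fruits(fruit) - seen))

print(new_counts)  # -> [3, 3, 1, 2, 0, 1]
```

The output matches the desired new_occurrences column in the question.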
this is a brilliant and time efficient solution, thank you very much
A tidyverse attempt:
library(tidyverse)
years_window <- 5
df %>%
separate_rows(fruit, sep = "; |, ") %>%
mutate(tmp = 1,
year = as.integer(as.character(year))) %>%
complete(company = unique(.$company),
year = (min(year) - years_window):max(year),
fruit = unique(.$fruit)) %>%
arrange(year) %>%
group_by(company, fruit) %>%
mutate(check = zoo::rollapply(tmp,
FUN = function(x) sum(is.na(x)),
width = list(-(1:years_window)),
align = 'right',
fill = NA,
partial = TRUE)) %>%
group_by(company, year) %>%
mutate(new_occurrences = sum(check == years_window & !is.na(tmp))) %>%
filter(!is.na(tmp)) %>%
distinct(company, year, new_occurrences) %>%
arrange(year) %>%
left_join(df %>%
mutate(year = as.integer(as.character(year))),
by = c("company", "year")) %>%
select(company, fruit, year, new_occurrences)
Output:
# A tibble: 6 x 4
# Groups: company, year [6]
company fruit year new_occurrences
<fct> <fct> <int> <int>
1 company_a peaches, apples; oranges 2010 3
2 company_b apples; oranges; bananas 2011 3
3 company_a bananas; apples; oranges; pears 2014 2
4 company_b oranges; pears 2014 1
5 company_b apples; oranges; pears 2016 0
6 company_a bananas; apples; oranges; pears; peaches 2018 1
The order in my dataset is not alphabetical. Could you show me how to join with the original df, please?
May I declare a member type alias to a type in a surrounding scope, using the same name?
I want a struct to contain a type alias to another type for metaprogramming purposes:
struct Foo {};
struct WithNestedTypeAlias {
using Foo = Foo;
};
Then I can do stuff like WithNestedTypeAlias::Foo in a template etc.
As I understand, this type alias is valid because it does not change the meaning of the Foo type. Clang compiles this happily.
However, GCC complains:
test-shadow-alias.cpp:4:20: error: declaration of ‘using Foo = struct Foo’ [-fpermissive]
using Foo = Foo;
^
test-shadow-alias.cpp:1:8: error: changes meaning of ‘Foo’ from ‘struct Foo’ [-fpermissive]
struct Foo {};
^
Now I'm confused because I'm explicitly not changing the meaning of Foo from struct Foo.
What is the correct behaviour for C++14? I know I can work around this by renaming the struct Foo, but I'd like to understand whether GCC's error is correct here.
Notes:
Tested with clang++ 3.8 and gcc 5.4, but Godbolt suggests this hasn't changed in more recent GCC versions.
I looked at Interaction between decltype and class member name shadowing an external name, where the name of a variable may refer to either a variable in the outer scope or to a class member. In contrast, my question here is about a type alias. There is no ambiguity since Foo always refers to ::Foo within the class scope. I don't see how the answer there applies to my problem.
This is probably due to a misunderstanding of what type aliases actually are.
Changing the statement to using Foo = ::Foo; fixes it, but I cannot explain why.
Changing to using Foo = struct Foo; fixes it, too. Note that the equivalent typedef struct Foo Foo; is idiomatic C code.
The rule GCC is enforcing is in [basic.scope.class]:
2) A name N used in a class S shall refer to the same declaration in its context and when re-evaluated in the completed scope of S. No diagnostic is required for a violation of this rule.
The standard says violating this doesn't require a diagnostic, so it's possible that both GCC and Clang are conforming, because (if GCC is right) the code is not valid, but the compiler is not required to diagnose it.
The purpose of this rule is so that names used in a class always mean the same thing, and re-ordering members doesn't alter how they are interpreted e.g.
struct N { };
struct S {
int array[sizeof(N)];
struct N { char buf[100]; };
};
In this example the name N changes meaning, and reordering the members would change the size of S::array. When S::array is defined N refers to the type ::N but in the completed scope of S it refers to S::N instead. This violates the rule quoted above.
In your example the name Foo changes in a far less dangerous way, because it still refers to the same type, however strictly speaking it does change from referring to the declaration of ::Foo to the declaration of S::Foo. The rule is phrased in terms of referring to declarations, so I think GCC is right.
Hmm, that rule does make sense. But this seems to imply that I shouldn't use the using Foo = ::Foo trick mentioned by François Andrieux in a comment above? While that currently happens to work, it is still a new declaration of Foo and thus in violation of your quoted rule, right? So that would leave me with the only solution struct AnotherFoo{}; struct Nested { using Foo = AnotherFoo; }; using Foo = AnotherFoo; for doing the nested Foo declaration portably.
@amon: The issue in your original code is that the Foo on the right hand side changes meaning because it is unqualified. Before the using declaration, it would have meant the global Foo, later it would have meant the alias. If it is qualified, then that is no longer an issue.
@amon Also note CWG 42 that will answer your question about Foo, ::Foo and everything.
@amon The rule is not that Foo mustn't change its meaning, but that it mustn't be used before having its meaning changed. So using Foo = ::Foo is OK because the unqualified Foo is not used before having its meaning changed. Similarly, the example in the answer would be OK if we simply replace sizeof(N) with 1.
@Oktalist OK, I understand that now. Thank you for your assistance :)
false = true problem when solving Lemma in Coq
I have definition:
Definition f (b1 b2 : bool) :bool :=
match b1 with
| true => true
| false => b2
end.
And Lemma...
Lemma l1: forall p : bool, f p false = true.
What I have tried is this:
Lemma l1: forall p : bool, f p false = true.
Proof.
intros p.
destruct p.
simpl.
reflexivity.
simpl.
And then I get false=true. What to do?
You cannot prove l1, since f false false = false.
Perhaps you would like to prove:
Lemma l1: forall p : bool, f p false = p.
instead.
Actually the definition was not good. Here it is.
Definition f (b1 b2 : bool) : bool :=
match b1, b2 with
| false, true => false
| _, _ => true
end.
And then I did the proof this way. Am I right?
Lemma l1: forall p : bool, f p false = true.
Proof.
intros p.
destruct p.
simpl.
reflexivity.
simpl.
reflexivity.
Qed.
Your proof is correct. Using tacticals, you can make the proof script shorter and more readable: Proof. destruct p; reflexivity. Qed..
Please note that it's important not to build proofs about erroneous definitions ...
What is torrified wheat?
What is torrified wheat and in what styles is it used?
Are there any restriction or special procedures you need to follow to when using torrified wheat?
Can you substitute torrified wheat with malted wheat?
Torrified wheat has been heat treated (kind of "popped") to break the cellular structure, allowing for rapid hydration and letting malt enzymes more completely attack the starches and protein. Torrified wheat can be used in place of raw wheat in Belgian-style wit beers, and is also very good for adding body and head, especially to English ales. Since it has not been malted, you can't substitute it for malted wheat. Because it's not malted, it needs to be mashed with a diastatic malt in order to convert the starches.
When a recipe calls for torrified wheat can you use malted wheat instead?
Well, yes and no....you'll get some "wheatiness" from the malted wheat, but it will be a different flavor than torrified.
Cluster backup error in postgresql 9.6.2
I'm trying to take a cluster backup of my localhost but throwing an exception like below.
Currently using postgres 9.6.2
pg_basebackup -U repuser -h localhost -D backup -Ft -z -P
Error message:
pg_basebackup: unsupported server version 9.6.2
Can anyone suggest me to resolve.
You are using an old pg_basebackup version - the command line is probably picking up an old installation that is still in the PATH. What does pg_basebackup --version show you?
It's showing the old version, pg_basebackup (PostgreSQL) 9.2.20. So how do I resolve this?
Install the new version
Can you please share the links to install pg_basebackup? Actually I am using Postgres 9.6.2.
pg_basebackup is part of the regular Postgres installation. You either need to remove the 9.2 installation or change your PATH if you have 9.6 installed
If you have mlocate installed, run locate pg_basebackup, if not run find / -name pg_basebackup. It will give you the list of binaries you have, like:
-bash-4.2$ locate pg_basebackup
/usr/bin/pg_basebackup93
/usr/lib64/pgsql93/bin/pg_basebackup
/home/pg/9.1/bin/pg_basebackup
/usr/share/locale/cs/LC_MESSAGES/pg_basebackup-9.3.mo
/usr/share/locale/de/LC_MESSAGES/pg_basebackup-9.3.mo
then run pg_basebackup with full path, like:
/usr/lib64/pgsql96/bin/pg_basebackup -U repuser -h localhost -D backup -Ft -z -P
Check if a node is fully selected
I'm using rangy in my project. I want to test if an element is fully selected. I tried the following:
selRange.containsNode( el );
Note: selRange is simply a reference to window.getSelection().getRangeAt(0);
When the selection was made from top to the bottom, it works but not the other way around.
Safari and Chrome return different results depending if you made a bottom up or top down selection. Any ideas?
Exactly what is considered selected depends on how the selection was made and which browser you're using. If you're just interested in whether all of the text within an element is selected, Rangy's Range object provides a non-standard convenience method for this:
selRange.containsNodeText(el);
Here is a non-Rangy equivalent. It isn't fully tested but I think it will work.
function containsNodeText(range, node) {
var treeWalker = document.createTreeWalker(node,
NodeFilter.SHOW_TEXT,
{ acceptNode: function(node) { return NodeFilter.FILTER_ACCEPT; } },
false
);
var firstTextNode, lastTextNode, textNode;
while ( (textNode = treeWalker.nextNode()) ) {
if (!firstTextNode) {
firstTextNode = textNode;
}
lastTextNode = textNode;
}
var nodeRange = range.cloneRange();
if (firstTextNode) {
nodeRange.setStart(firstTextNode, 0);
nodeRange.setEnd(lastTextNode, lastTextNode.length);
} else {
nodeRange.selectNodeContents(node);
}
return range.compareBoundaryPoints(Range.START_TO_START, nodeRange) < 1 &&
range.compareBoundaryPoints(Range.END_TO_END, nodeRange) > -1;
}
You can read minds. That's exactly what I needed. containsNode doesn't behave consistently across browsers. Safari only reports that it contains the node if the selection actually goes beyond the element itself. Thank you!!
what would be the easiest approach to handle this without rangy?
@AdrianZumbrunnen: I've knocked up an equivalent.
Interesting approach. I've come up with the following:
Selection.prototype.isFullySelected = function( range, element ){
var result = this.getNodes().some( function( node ){
var el = node.nodeType === 1 ? node : node.parentNode;
return el === element && ( range.toString().indexOf( element.textContent ) >= 0 ) && this.selection.containsNode( node, false );
}.bind( this ));
return result;
}
where getNodes is similar to rangy.getNodes; it gets me an array of all the selected nodes within the range.
it's certainly not as accurate though
Saved me lots of time, exactly what I needed. Thanks a lot!
Is there a ml like language(standard ml/ocaml/f#/haskell/etc) in which list elements are option types
It seems to me like [] (empty list) and None/Nothing are so similar.
I was wondering if any of that family of languages had the basic list type where each element is an option and the tail guard is Nothing?
Is it(not done this way, in languages that have separate types) because it would make pattern matching lists overly verbose?
What would be the benefit? Defining different types for different purposes is A Good Thing.
You have something similar in Lisp, where nil is used both to represent both the absence of an optional value and the empty list. You can do something like this in Haskell (and I assume most ML-like languages),
newtype MyList a = MyList (Maybe (a, MyList a))
but it's more verbose and doesn't have any obvious benefits over using its own data type.
Hmm, not that I'm aware of. That would be difficult for a Hindley-Milner type system, which forms the basis of the type systems for those languages. (In Haskell nomenclature) Nothing would have to have type Maybe a and [] a simultaneously.
Something similar (but unfortunately too unwieldy to use in practice IMO) can be constructed using a fixed point type over Maybe:
-- fixed point
newtype Mu f = In (f (Mu f))
-- functor composition
newtype (f :. g) a = O (f (g a))
type List a = Mu (Maybe :. (,) a)
This is isomorphic to what you are asking for, but is a pain in the butt. We can make a cons function easily:
In (O (Just (1, In (O (Just (2, In (O Nothing)))))))
In and O are "identity constructors" -- they only exist to guide type checking, so you can mentally remove them and you have what you want. But, unfortunately, you cannot physically remove them.
We are not so lucky with pattern matching, though. I can't speak for the other ML family languages, but IIRC they can't even represent higher-kinded types like Mu.
You know, or you could forget all that Mu garbage and do what @hammar said. Sigh, still an overengineer :-)
Where do Poco and Boost overlap?
I have no experience with the Poco libraries, but some experience with Boost. From a brief look at the Poco documentation, it seems they document the higher-level functionality they offer more than abstract fundamentals or language-augmenting facilities. For example: Boost touts a library of iterators, a library of integer-type-related facilities, etc.
Other than extensively browsing the Poco sources - how can I determine what's the overlap of the functionality and facilities offered by Poco and by Boost?
Notes:
I'm not asking about pros and cons - which are relevant to the overlap part - but rather for an identification of the overlap vs the "symmetric difference" in what they offer. It seems like there might be less overlap than you would expect, given that some people present Poco and Boost as alternatives to each other.
If Poco provides something as an "internal" utility, but which is usable by whoever uses Poco, I would still count that in the overlap.
I did some searching, I didn't find a good comparison that discussed the overlap and/or differences. (And the twisty link-chain alley ended up comparing against Perforce RogueWave, of all things.) (I did not downvote.)
Live Twitter Stream
I'm programming an application in Ruby on Rails v(2.0.0/4.0.0) and trying to integrate a live twitter feed from a specific source (@u101) onto a basic HTML page and don't understand how. After Googling around for a few hours I've seen a few different gems that seem to integrate Twitter (tweetstream, twitter api, twitter), however, I can't get any of these to work. I've followed guides such as this http://www.phyowaiwin.com/how-to-download-and-display-twitter-feeds-for-new-year-resolution-using-ruby-on-rails
and that one in particular gives me this error:
irb(main):002:0> Tweet.get_latest_new_year_resolution_tweets
NameError: uninitialized constant Twitter::Search
from C:/Sites/EasyBakeOven/blog/app/models/tweet.rb:6:in `get_latest_new
_year_resolution_tweets'
Which I don't understand. If someone could walk me through the necessary steps to accomplish my goal or point me in the right direction I'd be very thankful.
UPDATE:
Model: tweet.rb
class Tweet < ActiveRecord::Base
#A method to grab latest tweets from Twitter
def self.get_latest_new_year_resolution_tweets
#create a Twitter Search object
search = Twitter::REST::Client.new
#grab recent 100 tweets which contain 'new year resolution' words, and loop each of them
search.containing("u101").result_type("recent").per_page(100).fetch.each do |tweet_results|
#parsing the string 'created_at' to DateTime object
twitter_created_at = DateTime.parse(tweet_results.created_at)
#making sure we are not saving exact same tweet from a person again
unless Tweet.exists?(['twitter_created_at = ? AND from_user_id_str = ?', DateTime.parse(tweet_results.created_at), tweet_results.from_user_id_str])
#finally creating the tweet record
Tweet.create!({
:from_user => tweet_results.from_user,
:from_user_id_str => tweet_results.from_user_id_str,
:profile_image_url => tweet_results.profile_image_url,
:text => tweet_results.text,
:twitter_created_at => twitter_created_at
})
end
end
end
end
tweets_controller.rb
class TweetsController < ApplicationController
def index
#Get the tweets (records) from the model Ordered by 'twitter_created_at' descending
@tweets = Tweet.order("twitter_created_at desc")
end
end
index.html.erb:
<h1>Tweets#index</h1>
<div id="container">
<ul>
<% @tweets.each do |tweet| %>
<li class="<%=cycle('odd', '')%>">
<%= link_to tweet.from_user, "http://twitter.com/#{tweet.from_user}", :class => "username", :target => "_blank" %>
<div class="tweet_text_area">
<div class="tweet_text">
<%= tweet.text %>
</div>
<div class="tweet_created_at">
<%= time_ago_in_words tweet.twitter_created_at %> ago
</div>
</div>
</li>
<% end %>
</ul>
</div>
Also again I'm trying to display tweets with '#u101'
In the particular tutorial you used, go to line 7 in app/models/tweet.rb and you'll see:
search = Twitter::Search.new
Replace that with
search = Twitter::REST::Client.new
This was a recent change to the ruby gem. You can always see what's most current in the documentation!
I made the change you suggested and it seems to have fixed my problem. When I run the app and pull the webpage I do not get any errors or anything but I also do not get any results. I just have a header and then nothing. I updated above to include my controller, index.html.erb, and model.
Nothing happens because too much of that code has been deprecated in version 5 (i.e. tweet.from_user no longer exists, so no error, but no output). Here's the code for v. 5's experimental streaming that you should use instead: https://github.com/sferik/twitter#streaming-experimental
Use TwitterFetcher (JS) (no gems)
This works by taking a widget (which is live & fully supported by Twitter) & parsing out any styling / HTML you may not want. We've implemented this numerous times:
#app/assets/javascripts/twitterfetch.js
[[twitter fetcher JS here]]
#app/assets/javascripts/application.js
//= require twitterfetch
$(function() {
twitterFetcher.fetch('WIDGET_ID', 'twitter_feed', 3, true, true, true, '', false, handleTweets, false);
function handleTweets(tweets){
var x = tweets.length;
var n = 0;
var element = document.getElementById('twitter_feed');
var html = '<ul>';
while(n < x) {
html += '<li>' + tweets[n] + '</li>';
n++;
}
html += '</ul>';
element.innerHTML = html;
};
});
I can help you more if you'd like to use this method
"Live" Streaming
There are several problems with "live" streaming from twitter:
Their 1.1 API is rate-limited
How will you store the data?
I think I get what you're trying to achieve, but in reality, their API policies have made it very difficult to implement any "live" functionality. If you want to just display your tweets in your own way, I'd recommend using a widget & using the JS provided above
What is the idea behind an expansion valve?
I am watching this video(https://www.youtube.com/watch?v=gVLhrLTF878) and beginning at around the 2:20 mark, it explains the significance of an expansion valve with the following ideas:
Restricts refrigerant flow to lower refrigerant pressure.
Surrounding pressure dropping below the liquid's (refrigerant's) vapor pressure implies the liquid boils.
Decreasing surrounding pressure around a liquid allows the liquid to evaporate. The evaporation takes some of the kinetic energy from the liquid which consequently lowers the temperature of the liquid.
I have listed the ideas as to how I have understood it, but number 2 and 3 seem contradictory to me. Maybe temperature and pressure do not have a linear relationship? Please, anyone, clarify any misunderstandings I have.
Could you be more specific on what ideas between items 2 and 3 confuse you?
Have you ever held your hand in front of a spray of atomized liquid from a can of liquid sprayer such as an air freshener?
It's cold!
Because the liquid's molecules consume heat energy from the remaining liquid to pick up speed: pressure and heat are converted into the kinetic energy of the streaming spray molecules. The same thing happens in an expansion valve.
After watching this video(https://www.youtube.com/watch?v=kjraelDMrFQ), I have understood your explanation. So it's intuitive that the remaining liquid gets cold because of the loss of kinetic energy carried out of the can by the streaming spray molecules, but on an expansion valve, would we allow vapor to escape like in a spray can? I mean that would be wasteful if that is the case.
Both boiling and evaporation involve changing a liquid to a gaseous state. Both are driven by the liquid/vapor system not being in equilibrium: the actual vapor pressure differs from the vapor pressure associated with the saturated state.
Boiling means the conversion is happening throughout the bulk of the liquid and bubbles are forming. Evaporation is the same process, but it happens only at the surface of the fluid.
The refrigerant compressor pumps vapor. Let's assume it operates on a constant volume basis. The pumping rate is proportional to vapor density at the pump inlet. Cooling the vapor at constant pressure increases density and flow rate. Increasing the pressure at constant temperature increases density and flow rate.
At the expansion valve, fluid is being pulled through because the compressor is removing vapor from the exit side of the evaporator coil. The fluid at the expansion valve inlet is liquid. By design, you don't want vapor there. The exit of the expansion valve is mixed phase. There is a big pressure drop across the valve. As soon as the fluid enters the low pressure side, some of it flashes to vapor. The refrigerant will be in saturated equilibrium at the low-side pressure. Most of it is still liquid, but it is bubbly. The remaining liquid gets vaporized in the evaporator and cools the room.
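A rough feel for how much of the liquid flashes to vapor comes from an energy balance across the (isenthalpic) valve. The sketch below uses assumed, round-number properties (roughly R-134a-like), not values from property tables:

```python
# Rough flash-fraction estimate for an expansion valve (illustrative).
# Energy balance for an isenthalpic throttle: liquid enters at t_in_c,
# the low-side saturation temperature is t_sat_c, and the fraction x that
# flashes to vapor satisfies  cp_liquid * (t_in - t_sat) = x * h_fg.
def flash_fraction(t_in_c, t_sat_c, cp_liquid_kj_per_kg_k, h_fg_kj_per_kg):
    return cp_liquid_kj_per_kg_k * (t_in_c - t_sat_c) / h_fg_kj_per_kg

# Assumed ballpark numbers: liquid cp ~1.4 kJ/(kg.K), latent heat
# ~190 kJ/kg, 40 C liquid in, 5 C evaporator saturation temperature.
x = flash_fraction(40.0, 5.0, 1.4, 190.0)
print(round(x, 3))  # -> 0.258
```

So on these assumed numbers roughly a quarter of the refrigerant flashes immediately, consistent with the "mostly liquid, but bubbly" description above.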
A TXV valve like in the video has two inputs. There is a knob that adjusts the pressure drop at a given sense temperature (only to be adjusted by a qualified tech), and a temp bulb that senses the evaporator vapor temperature and changes the size of the passage in the valve. The result is a complicated balancing act.
If the valve closes a bit (because the bulb senses too low a temp in the vapor line), the pressure difference increases. After a bit, the low side of the valve (bubbly liquid line) will be cooler and lower pressure. And the flow rate will be less as well. The density of the vapor on the low side will be less. This is the key result. By design, closing the valve a little results in a lower vapor density at the compressor inlet, reducing flow rate, and ensuring that only vapor is getting sent to the compressor. The lower flow rate means the vapor line temp rises, providing the system's negative feedback. You can't show this by just looking at the valve - you have to look at how the entire system responds to the valve, and it is designed so that it responds this way when working properly and properly charged. It may not work this way if there are problems in the system. Lots of these valves get replaced when they aren't bad, but the tech didn't find the real problem.
The TXV valve effects a change to the refrigerant flow rate. It senses the vapor line temperature, and changes the flow rate to keep the vapor line temperature above the saturation temperature. Most are pressure compensating so that in effect, they do a decent job of maintaining a fixed superheat in the low side vapor line.
Compare with AXV, automatic expansion valve - https://neilorme.com/AEV.shtml
Looking at those ground squirrels over there - the one that just gave that sharp warning call basically painted a target on itself for that hawk circling above.
That's exactly what puzzles me about animal behavior. Why would any creature do something that puts itself at greater risk just to help others?
Think about it though - those other squirrels that scattered when they heard the alarm, they're probably siblings or cousins. The caller shares genetic material with them.
But genes don't think ahead like that. How could natural selection favor a trait that literally decreases an individual's own survival chances?
Here's where the math gets interesting. If I share half my genes with my siblings, then helping two siblings survive is genetically equivalent to me surviving. The specific calculation depends on the coefficient of relatedness - siblings share fifty percent of their genes, cousins only twelve and a half percent.
You're suggesting there's some kind of genetic cost-benefit analysis happening? That seems like a stretch.
Not conscious calculation, but the underlying logic works out. Organisms that carry genes promoting this kind of family-directed altruism end up with more copies of their genes in the next generation, even if they personally don't reproduce as much.
What about those birds we saw earlier - the ones where some adults never breed but instead help feed their parents' new chicks?
Perfect example. Those helper birds are essentially investing in their siblings' survival rather than their own reproduction. From a purely individual perspective, it looks wasteful. But when you factor in indirect fitness - the reproductive success they enable in relatives - the behavior makes evolutionary sense.
This raises another question though. How does an animal actually recognize relatives versus non-relatives? Scent markers? Behavioral cues?
Usually a combination of factors. Many species use familiarity during early development - animals they grew up with are likely relatives. Some use chemical signatures. The key is that the recognition mechanism doesn't need to be perfect, just better than random.
But wouldn't this lead to increasingly nepotistic societies where cooperation only happens within family groups?
That's one of the fascinating tensions in the theory. Strong kin preference can actually reduce broader social cooperation, but it might also enable species to coexist by making intra-family competition more intense than competition between different species entirely.
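The relatedness arithmetic in the exchange above is Hamilton's rule: an altruistic act is favored when r × B > C, with r the coefficient of relatedness, B the benefit to the recipient, and C the cost to the actor (both in offspring equivalents). A minimal sketch:

```python
# Hamilton's rule: altruism is favored when r * B > C.
RELATEDNESS = {"sibling": 0.5, "cousin": 0.125, "offspring": 0.5}

def favored(r, benefit, cost):
    return r * benefit > cost

# Saving two siblings (B = 2) at the cost of one's own reproduction
# (C = 1) sits exactly at the break-even point:
print(RELATEDNESS["sibling"] * 2)             # 1.0
# Saving three siblings tips the balance:
print(favored(RELATEDNESS["sibling"], 3, 1))  # True
# The same cost for cousins needs a benefit greater than 8:
print(favored(RELATEDNESS["cousin"], 8, 1))   # False
```

This is why cousins, at one eighth relatedness, demand a much larger benefit before helping pays off genetically.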
Syncing videos in separate clients
Two video players in separate clients are being synced by exchanging play/pause requests:
$(videoA).on("pause", e => sendRequestToPauseVideoB());
$(videoA).on("play", e => sendRequestToPlayVideoB());
$(videoB).on("pause", e => sendRequestToPauseVideoA());
$(videoB).on("play", e => sendRequestToPlayVideoA());
When the respective video receives a request to play or pause, it will call video.play() or video.pause(), respectively.
The problem
When videoA is paused, it will pause videoB, which will then try to pause videoA again.
This usually works fine, but suppose videoA pauses, sending a request for videoB to pause, but meanwhile videoB has played, sending a request for videoA to play.
Thus there is a pause request incoming to videoB and a play request incoming to videoA, which will in turn result in both videos switching (play/pause) state and telling the other to change to this new state, resulting in an infinite loop.
Why it happens
Essentially, if a video receives a play/pause request, it should not send a play/pause request back to the other video.
Would it be possible for a video, within the on Play and on Pause event to deduce if that event was due to a request? Or somehow preventing the play/pause event from firing at all when a request calls video.play() or video.pause()?
Do you have any good solutions to this?
such logic would need to be written by you
Where's the logic that handles the requests (and alters the video state)?
Try console.log(e) to see whether the event has any useable information about its origin
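One standard pattern for this (a sketch, not taken from the answers above) is a suppression flag: set it before applying a remote request, and skip re-broadcasting while it is set. The real fix would live in the JavaScript handlers; here it is modeled in Python purely for illustration:

```python
class Player:
    def __init__(self, name):
        self.name = name
        self.playing = False
        self.peer = None
        self._applying_remote = False  # suppression flag

    def on_state_change(self, playing):
        # Analogous to the jQuery play/pause event handlers.
        self.playing = playing
        if not self._applying_remote:
            self.peer.apply_remote(playing)  # send request to other client

    def apply_remote(self, playing):
        if self.playing == playing:
            return  # already in sync; nothing to do
        self._applying_remote = True
        try:
            self.on_state_change(playing)  # play()/pause() refires the event
        finally:
            self._applying_remote = False

a, b = Player("A"), Player("B")
a.peer, b.peer = b, a
a.on_state_change(True)      # user presses play on A
print(a.playing, b.playing)  # True True -- and no request loop
```

The flag distinguishes "event caused by a remote request" from "event caused by the user," which is exactly the deduction the question asks for.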
Formula for square of the number of divisors $\sum_{r\mid n} d(r^2) = d^2(n)$
I am trying to prove or disprove the following statement:
$$\sum_{r|n} d(r^2) = d^2(n),$$
where $d$ is the number of divisors function.
Computing it for small numbers yields equality, so I at least think it's true. But I can't see how to prove it formally. I have tried working with prime factorisations, but finding equality between terms seems near-impossible at best. My best guess is there is a way to manipulate
$$\sum_{r\mid n} \sum_{k\mid r^2} 1$$
into a form which becomes the RHS, although my attempts at writing $n=ar$ and $r^2=bk$ for $a,b\in\mathbb{Z}$ become very messy when I attempt substitution.
Any assistance or hints would be greatly appreciated! Thank you!
What divisor function do you mean? just divisors? unique divisors? prime divisors? unique prime divisors?
Just divisors - for example $d(6) = 4$ since it has four distinct divisors.
Write $n=p_1^{\alpha_1}\cdots p_k^{\alpha_k}$. Then
\begin{align}\sum_{r|n} d(r^2)&=\sum_{0\leq i_1 \leq \alpha_1,\ldots,0\leq i_k \leq \alpha_k} d((p_1^{i_1}\cdots p_k^{i_k})^2)\\&=\sum_{0\leq i_1 \leq \alpha_1,\ldots,0\leq i_k \leq \alpha_k} d(p_1^{2i_1}\cdots p_k^{2i_k})\\&=\sum_{0\leq i_1 \leq \alpha_1,\ldots,0\leq i_k \leq \alpha_k}(2i_1+1)\cdots(2i_k+1)\\&=\left(\sum_{0\leq i_1 \leq \alpha_1}(2i_1+1)\right)\cdots \left( \sum_{0\leq i_k \leq \alpha_k}(2i_k+1)\right)\\&=\left(2\frac{\alpha_1(\alpha_1+1)}{2}+(\alpha_1+1)\right)\cdots \left( 2\frac{\alpha_k(\alpha_k+1)}{2}+(\alpha_k+1)\right)\\&=(\alpha_1+1)^2\cdots(\alpha_k+1)^2\\&=d(n)^2.\end{align}
Thank you! Just to clarify, the limits of summation in the fifth line should read $0\leq i_k \leq a_k$, yes?
Yes, that's right.
The divisor counting function $d$ is multiplicative, so it is straightforward to verify that both $d(n^2)$ and $d^2(n)$ are multiplicative. Furthermore, by a fundamental theorem of multiplicative functions, $\sum_{r|n} d(r^2)$ is also multiplicative. Hence, because both sides of the statement are multiplicative, to establish that the statement is true for all positive integers $n$, it suffices to prove that it is true for all prime powers $p^\alpha$:
\begin{align}
\sum_{r|p^\alpha} d(r^2) & = d(1^2) + d(p^2) + d(p^4) + \cdots + d(p^{2\alpha}) &&\text{positive divisors of } p^\alpha \text{are } 1, p, p^2, \dots, p^\alpha\\
& = 1 + 3 + 5 + \cdots + 2\alpha + 1 && d(p^i) = i + 1\\
& = (\alpha + 1)^2 &&\text{sum of the arithmetic series}\\
& = d^2(p^\alpha) && i + 1 = d(p^i)
\end{align}
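Both proofs can also be sanity-checked numerically; a brute-force Python verification of the identity for small n:

```python
def d(n):
    """Number of positive divisors of n."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def lhs(n):
    """Sum of d(r^2) over the divisors r of n."""
    return sum(d(r * r) for r in range(1, n + 1) if n % r == 0)

for n in range(1, 201):
    assert lhs(n) == d(n) ** 2, n
print("identity holds for n = 1..200")
```

For example n = 12: divisors 1, 2, 3, 4, 6, 12 give d(1)+d(4)+d(9)+d(16)+d(36)+d(144) = 1+3+3+5+9+15 = 36 = d(12)².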
As of Java 7 update 45, one can no longer lookup manifest information without triggering a warning?
Unfortunately, the workaround given by Oracle and others here (Java applet manifest - Allow all Caller-Allowable-Codebase) for getting around the 7 update 45 problem does NOT work if your app needs to access its loaded signed jar manifests. In my case, our app does this so as to log the relevant manifest info.
With my web start app, everything worked fine and dandy with the "Trusted-Library" attribute that needed to be added for 7u21. With 7u45, removing the "Trusted-Library" attribute and adding in all the additional attributes talked about in other workarounds will NOT work -- I will get the same warning that you would get if you were running 7u21 without the Trusted-Library attribute (stating the application contains both signed and unsigned code):
I've tried just about every manifest/JNLP permutation possible -- nothing cuts the mustard.
What I've found is that, basically, when we load one of our applets' jar manifests (not the JRE jars), and call hasMoreElements, additional security checks occur which trigger the warning:
public List<String> getManifests() {
...
Enumeration<URL> urls = getClass().getClassLoader().getResources("META-INF/MANIFEST.MF");
while (urls.hasMoreElements()) {
.... a bunch of loop stuff
// at the end of the loop...
System.out.println("Checkpoint SGMLOOP8.");
System.out.println("Breaking....");
//break; <<<<<<---- if the next jar is one of our signed jars, the next line will trigger the warning. If instead we choose to break, the app works perfectly with no warning.
System.out.println("urls.hasMoreElements(): " + (urls.hasMoreElements() ? "true" : "false")); <<<<<<-------- will evaluate to false if user clicks Block on the warning, otherwise will evaluate to true when our signed jars are next
System.out.println("Checkpoint SGMLOOP9.");
}
...
}
This is what prints out in the Java console at maximum tracing:
Checkpoint SGMLOOP8.
Breaking.... <<<<---- console output pauses here until user answers warning
security: resource name "META-INF/MANIFEST.MF" in http://<path_to_jar> : java.lang.SecurityException: trusted loader attempted to load sandboxed resource from http://<path_to_jar>
(message repeats for all our signed jars)
urls.hasMoreElements(): false <<<<---- false because user clicked blocked, otherwise true when user clicks don't block
Checkpoint SGMLOOP9.
It took me FOREVER to figure this out, because when a signed manifest passes security checks earlier in the startup process and is only complained about when it is accessed later, I don't naturally think the complaint is about the manifest itself, but rather about the resources referenced by the manifest. Go figure!
Looking into the Java source code, I can see why the warning could possibly happen (hasMoreElements leads to more security checks):
// getResources is called in my code above
java.lang.ClassLoader
public Enumeration<URL> getResources(String name) throws IOException {
Enumeration[] tmp = new Enumeration[2];
if (parent != null) {
tmp[0] = parent.getResources(name);
} else {
tmp[0] = getBootstrapResources(name);
}
tmp[1] = findResources(name); <<<<------ This returns a new Enumeration<URL> object which has its own “hasMoreElments()” method overwritten – see below code
return new CompoundEnumeration<>(tmp);
}
java.net.URLClassLoader
public Enumeration<URL> findResources(final String name)
throws IOException
{
final Enumeration<URL> e = ucp.findResources(name, true);
return new Enumeration<URL>() {
private URL url = null;
private boolean next() {
if (url != null) {
return true;
}
do {
URL u = AccessController.doPrivileged( <<-- Security context could block this
new PrivilegedAction<URL>() {
public URL run() {
if (!e.hasMoreElements())
return null;
return e.nextElement();
}
}, acc);
if (u == null)
break;
url = ucp.checkURL(u); <<-- Security checks done in here
} while (url == null);
return url != null;
}
public URL nextElement() {
if (!next()) {
throw new NoSuchElementException();
}
URL u = url;
url = null;
return u;
}
public boolean hasMoreElements() {
return next();
}
};
}
Yes, the manifests are properly signed! Yes, the manifests do have the appropriate attributes! In fact, this is proven by the fact that the jars load just fine and execute, as long as we don't try to directly access their manifests! Just to assuage your fears though, here are the relevant manifest attributes (I've tried MANY additions/subtractions of the below attributes):
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.0
Created-By: 24.45-b08 (Oracle Corporation)
Application-Name: AppName
Codebase: *
Permissions: all-permissions
Application-Library-Allowable-Codebase: *
Caller-Allowable-Codebase: *
Trusted-Only: false
Class-Path: jar1.jar jar2.jar jar3.jar
Specification-Title: AppName
Specification-Version: 1.0
Specification-Vendor: CompanyName
Implementation-Title: AppName
Implementation-Version: 1.0
Implementation-Vendor: CompanyName
The question is: Should the warning be happening when we try to access the manifests? As it stands, we either have to choose to force users see the warning every time, or we have to remove our logging of our signed jar manifest info. Seems like a bad choice, especially since this manifest info is very useful for debugging issues as it is really the only way to verify an end-user is running the correct version of the app (short of direct physical inspection on-site). This is especially true of our applet since the jars are allowed to be cached on client systems (along with the corresponding JavaScript to access the applet), meaning they could very easily be running the wrong jars after upgrades/downgrades, etc. Not having this info in our logs could lead to large headaches in the future.
Any ideas? This is especially frustrating since Oracle intends to fix the Trusted-Library issue anyway, so putting all this investigative work into this may be doing nothing more than wasting my weekend. Grrr....
EDIT: One observation I had was that first jar that ran into the security exception actually had a dependency on a different jar in my app. I thought, "maybe the dependent jar's manifest should be read in first?" So I forced the load-order so that non-dependent jars would be loaded first. End result? I could see the non-dependent jar now threw the security exception first... and there is still a warning.
Did you try to pack the contents of the sandboxed jar to the signed jar? I mean exporting every dependency class and putting them in a single one and sign it?
Well that's just it -- there shouldn't be any sandboxed jars in our app. All of them are signed. The jars the console says are sandboxed really aren't, and were loaded just fine earlier during the startup process. That said, I haven't tried packing everything into a single jar, so it is something to try at least. But in any event that would be less than desirable since we wouldn't be able to update individual components of the app anymore.
Wanted to update this question to say that as of Java 7 Update 55, this issue has been rendered moot since it is possible again to put in both "Trusted-Library" and "Caller-Allowable-Codebase" manifest attributes simultaneously. With both of these attributes in the manifest, the warning will not be triggered.
I'm getting this problem with applets, and found that I get the warning on hasMoreElements on my own applet .jar files (not the Java system .jars that are returned the first few times through the loop), and only when applet .jar file caching is enabled in the Java Control Panel.
I can't get all my customers to disable .jar file caching, but those that do are happy for the moment, as the mixed code warning does not appear for them. It's not an answer, it's a workaround at best.
Right, this is the exact same thing I am running into. I haven't tried out disabling jar caching; I'll have to give that a try.
So, tried out disabling the jar cache. That causes the applet to fail entirely. It blows up with a JARSignerException stating that the first signed jar it attempts to load has "unsigned entries". Which is patently false because everything works just fine when the cache is on... Java FTW.
Why do `tf.pad` padding arguments need additional incrementation for accuracy?
I'm trying to implement a symmetric padding layer in Keras, which is just like how Caffe implements it and I've encountered a weird problem.
Let's say we have an 1x1280x1280x3 image with 3 channels, and we want to perform a convolution to it so that it returns an object of a shape 1x320x320x96 with 96 channels. In Caffe, we can set pad parameter right in the convolution layer:
input: "image"
input_shape {
dim: 1
dim: 3
dim: 1280
dim: 1280
}
layer {
name: "conv1"
type: "Convolution"
bottom: "image"
top: "conv1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 96
kernel_size: 11
pad: 5 # Padding parameter
stride: 4
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}
If you try to compile this with Caffe, output shape of conv1 will indeed be 1x320x320x96.
Now let's try the same thing with Keras using tf.pad and Lambda layer:
from keras.layers import Input, Lambda, Conv2D
import tensorflow as tf
image = Input(shape=(1280, 1280, 3),
              dtype='float32',
              name='image')
sym_pad = Lambda(lambda x: tf.pad(x, [[0, 0], [0, 5], [0, 5], [0, 0]]))(image) # padding = 5
conv1 = Conv2D(filters=96,
               kernel_size=11,
               strides=(4, 4),
               activation='relu',
               padding='valid', # valid instead of 'same'
               name='conv1')(sym_pad)
Problem:
If we measure the shape of conv1 defined from the code above, it will be 1x319x319x96 instead of1x320x320x96.
But if we increment our padding with 2, therefore utilize a 7x7 pad instead of 5x5, like this:
sym_pad = Lambda(lambda x: tf.pad(x, [[0, 0], [0, 5+2], [0, 5+2], [0, 0]])) # padding = 7
conv1 will have a desired shape of 1x320x320x96 when we pass a padded input of image with a shape of 1x1287x1287x3 instead of 1x1285x1285x3 (notice that only odd padding on the even-shaped image alters the shape of convolution, this might be related to strides).
Why is this happening? Does Caffe automatically increment every padding parameter by 2? Or am I doing something wrong?
Thank you!
P.S
I am aware of padding=same parameter in Keras layers, but I'm looking for symmetric padding instead of asymmetric one.
If you are talking about symmetric padding, I assume that you want to pad the same number of pixels to the left side of the image as to the right side (same for top and bottom). What you are doing currently with tf.pad is padding 5 pixels to the right and 5 pixels to the bottom. Therefore you are padding 2.5 pixels to each side (in theory).
The output shape is given by:
floor((input_size-kernel_size+2*padding_size)/stride_size) + 1
So in your case, when padding 2.5 pixels this yields an output shape of 319.
If you pad 5 pixels to both sides you obtain what you would expect, i.e. 320.
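Plugging the question's numbers into that formula makes the 319-vs-320 discrepancy concrete (total padding per spatial dimension is what matters: 0+5 = 5 for the original asymmetric Lambda, 5+5 = 10 for even padding matching Caffe's pad: 5):

```python
def conv_out(input_size, kernel=11, stride=4, pad_total=0):
    # floor((input + total_padding - kernel) / stride) + 1
    return (input_size + pad_total - kernel) // stride + 1

# tf.pad [[0,0],[0,5],[0,5],[0,0]] adds 5 in total per spatial dim:
print(conv_out(1280, pad_total=5))   # 319
# tf.pad [[0,0],[5,5],[5,5],[0,0]] adds 10 in total:
print(conv_out(1280, pad_total=10))  # 320
```

The "+2 increment" in the question is just making up the missing 5 pixels on the other two sides, rounded through the floor.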
I apologize if I'm misunderstanding anything. But let's say we have an image of shape (1, 1280, 1280, 3), my code would return a shape (1, 1285, 1285, 3), so that it symmetrically pads height and width dimensions of image. Float values can't be passed to tf.pad.
But I've noticed, that array shapes are used for that purpose exactly. So I should use [[0, 0], [5, 5], [5, 5], [0, 0]] as a padding parameter instead, to evenly pad from all sides! I didn't know the purpose of this before... Thank you!
In your example you pad the input only to the bottom and to the right. Use:
sym_pad = Lambda(lamda x: tf.pad(x, [[0, 0], [5, 5], [5, 5], [0, 0]]))
to get the same padding like in Caffe.
I was unaware of that due to slightly complex documentation of tf.pad on tensorflow. But now it returns a desired output. Thank you!
File upload PHP script not working
I am having issues getting a file upload script to work. The strange thing is that it works fine for an image field I have setup, but not for the form field intended for .mp3 or similar files, and I am using the same script for each. Here is my code:
$download = $_FILES['download']['name'];
$downloadtarget = "events/" . $name . "/";
if(move_uploaded_file($download, $downloadtarget)) {
echo 'do stuff';
}else{
echo 'don't do stuff';
I always seem to get the latter "dont do stuff" with the file upload.
The same script except interchanging:
$download = $_FILES['download']['name'];
with
$download = $_FILES['picture']['name'];
earlier in the script works fine, the file uploads without issue.
what is the size of mp3 file?
Posting form this type
I think it should be $download = $_FILES['download']['tmp_name'];
Sidenote: This echo 'don't do stuff'; will surely throw a parse error. Change to either echo 'don\'t do stuff'; or echo "don't do stuff";
what is print_r($_FILES) output
Try code like this
if ($_FILES["file"]["error"] == 4)
{
$file_attachement_message="No Files attached";
}
else if ($_FILES["file"]["error"] > 0){
$attachement_error=$_FILES["file"]["error"];
$file_attachement_message="File attachement failed with error code:$attachement_error";
}
else
{
if(!is_dir('../attachements'))
{
mkdir('../attachements');
}
if (file_exists("../attachements/".$id))
{
// echo $_FILES["file"]["name"] . " already exists";
$file_attachement_message="Attached file already exists in db";
}
else
{
$info=pathinfo($_FILES['file']['name']);
$ext = $info['extension']; // get the extension of the file
$newname="$risk_id.".$ext;
move_uploaded_file($_FILES["file"]["tmp_name"],"../attachements/".$newname);
$file_attachement_message="and File Attached successfully";
}
}
please check the syntax errors before posting answer there are too many of them.
<?php
$download = $_FILES['download']['name'];
$downloadtarget = "events/" . $download . "";
$temp_name = $_FILES['download']['tmp_name'];
if(move_uploaded_file($temp_name, $downloadtarget)) {
echo 'do stuff';
} else {
echo 'don\'t do stuff';
}
?>
you are missing $_FILES['download']['tmp_name'];
try
move_uploaded_file($_FILES["download"]["tmp_name"],
"events/" . $_FILES["download"]["name"]);
also form has enctype="multipart/form-data" property
for more info about file upload follow link :- http://www.w3schools.com/php/php_file_upload.asp
tried this, same issues persist. also, file size of mp3 thus far has never been in excess of 5mb, however my host places no limits on this, and there are no statements in my script that limit file size, or even type. also, the form certainly does have enctype="multipart/form-data".
so have you any error? also check your upload dir is writable
@RakeshSharma some good programmers says http://www.w3fools.com/. So avoid using it. :)
@RakeshSharma, please avoid w3schools for reference too many errors and misleading info there.
Issue appears to have been file size. Tested with picture vs an mp3, works without issue. Further reading shows php.ini defaults to 2mb limit.
Why is kafka not creating a topic? - is not a recognized option
I am new to Kafka and trying to create a new topic on my local machine.
I am following this https://medium.com/@maftabali2k13/setting-up-a-kafka-cluster-on-ec2-1b37144cb4e
Start zookeeper
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
Start kafka-server
bin/kafka-server-start.sh -daemon config/server.properties
Create a topic
bin/kafka-topics.sh --create -–bootstrap-server localhost:9092 -–replication-factor 1 -–partitions 1 --topic jerry
but when creating the topic, I am getting the following error
Exception in thread "main" joptsimple.UnrecognizedOptionException: – is not a recognized option
at joptsimple.OptionException.unrecognizedOption(OptionException.java:108)
at joptsimple.OptionParser.validateOptionCharacters(OptionParser.java:633)
at joptsimple.OptionParser.handleShortOptionCluster(OptionParser.java:528)
at joptsimple.OptionParser.handleShortOptionToken(OptionParser.java:523)
at joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:59)
at joptsimple.OptionParser.parse(OptionParser.java:396)
at kafka.admin.TopicCommand$TopicCommandOptions.<init>(TopicCommand.scala:552)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:49)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
I had seen the following Why is kafka not creating a topic? bootstrap-server is not a recognized option
But I cant find an answer to my problem here as the error given is different. Is there some stuff that I am missing here?
I've used your command and had same issue:
bin/kafka-topics.sh --create -–bootstrap-server localhost:9092 -–replication-factor 1 -–partitions 1 --topic jerry
If you look closely at your command you will see that, before the options bootstrap-server, replication-factor and partitions, there are strange characters.
I think you used copy/paste and some strange characters were added:
Following command should work:
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic jerry
Best way is to write it on your own :).
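The culprit can also be spotted programmatically: the broken command contains U+2013 (en dash) where ASCII hyphen-minus (U+002D) is required. A quick Python check (the `bad` string copies the dash from the question):

```python
bad = "--create -\u2013bootstrap-server localhost:9092"   # en dash sneaked in
good = "--create --bootstrap-server localhost:9092"       # plain hyphens

def non_ascii_chars(s):
    """Return (index, char, codepoint) for every non-ASCII character."""
    return [(i, c, hex(ord(c))) for i, c in enumerate(s) if ord(c) > 127]

print(non_ascii_chars(bad))   # [(10, '–', '0x2013')]
print(non_ascii_chars(good))  # []
```

Editors and web pages often "smartly" convert `--` into an en dash, which is exactly what joptsimple then fails to parse.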
Yaaa Thanks ..It worked . Yes I had done copy/paste. Just like u told, I there were some strange characters added.
Newer versions of Kafka no longer require a ZooKeeper connection string. Use --bootstrap-server localhost:9092 instead of --zookeeper localhost:2181.
For Windows: kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic topic-name
For Linux: kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic topic-name
In the latest version kafka_2.12-3.1.0 (2022), after unzipping and setting the properties and logs, keep the Kafka folder on the C: drive and always run the command prompt with 'Run as administrator'.
The .bat file is for windows
Terminal 1
C:\kafka\bin\windows>zookeeper-server-start.bat
..\..\config\zookeeper.properties
Terminal 2
C:\kafka\bin\windows>kafka-server-start.bat
..\..\config\server.properties
Terminal 3
C:\kafka\bin\windows>kafka-topics.bat --create --topic tutorialspedia
--bootstrap-server localhost:9092
Created topic tutorialspedia.
To checklist of topic created
C:\kafka\bin\windows>kafka-topics.bat --list --bootstrap-server
localhost:9092
tutorialspedia
Root keeps redirecting url to non existing file
I was playing around with my htaccess file a bit and now every time I go to the root of my website it adds: /home/websitename/public_html/index.html
It is very annoying since I don't have any rule that says it should do this, index.html doesn't even exist on my host. What can I do about it?
My htaccess file:
DirectoryIndex
RewriteEngine on
#Disable directory indexes
Options -Indexes
#Allow cross-site access
Header set Access-Control-Allow-Origin "*"
Header add Access-Control-Allow-Headers "origin, x-requested-with, content-type"
#Websitename
DirectoryIndex index.php
RewriteRule ^info/(.*).html catlisting.php?alias=$1 [L]
#SUBCAT
RewriteRule ^catalogus/(.*)/(.*).html cataloguslisting.php?alias_hoofd=$1&alias_sub=$2 [L]
#HOOFDCAT
RewriteRule ^catalogus/(.*).html cataloguslisting.php?alias=$1 [L]
RewriteRule ^offerte.html/(.*)/ /offerte.php?items=$1 [L]
RewriteRule ^nieuws/(.*).html nieuws.php?alias=$1 [L]
RewriteRule ^producten/(.*).html product2.php?alias=$1 [L]
RewriteRule ^lp/(.*).html lp.php?alias=$1 [L]
RewriteRule ^(.*).html content.php?alias=$1 [L]
I was playing around a bit with this answer (2nd one) because of an issue I had. I added [R=301,L] to a rule and now I can't get it back to how it was.
How to automatically mark emails as unread if they are older than 3 days and have not been replied to or forwarded?
I'm trying to keep track of emails that I haven't dealt with.
I would like to mark any email that is older than 3 days as unread but only if the email hasn't been replied to or forwarded.
I managed to cobble together the script below from some found online. It did the date detection but not the replied status. This resulted in it marking everything older than 3 days as unread.
I haven't added the forwarded status and I would like to add a variable for emails that have attachments with a specific file name but I am trying to get it working with just replies first.
I'm running Outlook 365.
Option Explicit ' Consider this mandatory
' Tools | Options | Editor tab
' Require Variable Declaration
' If desperate declare as Variant
Sub HelpMeRemember()
Dim objInbox As Folder
Dim objInboxItems As Items
Dim i As Long
Set objInbox = Session.GetDefaultFolder(olFolderInbox)
Debug.Print objInbox.Name
Set objInboxItems = objInbox.Items
objInboxItems.Sort "[ReceivedTime]", True
For i = objInboxItems.Count To 1 Step -1
If TypeOf objInboxItems(i) Is MailItem Then
With objInboxItems(i)
If .PropertyAccessor <> 102 And .ReceivedTime < Date - 3 Then
.UnRead = True
.Save
'Debug.Print "Not replied to."
'Debug.Print "Older mail."
'Debug.Print " Subject: " & .Subject
'Debug.Print " ReceivedTime: " & .ReceivedTime
ElseIf .PropertyAccessor = 102 Then
'Debug.Print "Reply Sent."
'Debug.Print " Subject: " & .Subject
'Debug.Print " ReceivedTime: " & .ReceivedTime
ElseIf .ReceivedTime > Date - 3 Then
'Debug.Print "Newer mail."
'Debug.Print " Subject: " & .Subject
'Debug.Print " ReceivedTime: " & .ReceivedTime
Exit For ' Stop when newer mail encountered.
End If
End With
Else
Debug.Print "Non-mailitem ignored."
End If
Next i
Debug.Print "Done."
End Sub
First of all, iterating over all items in the folder is not really a good idea:
Set objInboxItems = objInbox.Items
objInboxItems.Sort "[ReceivedTime]", True
For i = objInboxItems.Count To 1 Step -1
If TypeOf objInboxItems(i) Is MailItem Then
Use the Find/FindNext or Restrict methods of the Items class instead. They allow getting items that correspond to your conditions, so you can iterate over only the items found. Read more about these methods in the following articles:
How To: Use Find and FindNext methods to retrieve Outlook mail items from a folder (C#, VB.NET)
How To: Use Restrict method to retrieve Outlook mail items from a folder
As soon as you find items you may check the PR_LAST_VERB_EXECUTED property value, or just can add one more condition to the search criteria. If that property is not set then you have never replied or forwarded the original email. The Replied mail flag has the following value - 0x00000105. And the Forwarded mail has the following meaning - 0x00000106.
This demonstrates how you could apply .PropertyAccessor.
Option Explicit ' Consider this mandatory
' Tools | Options | Editor tab
' Require Variable Declaration
' If desperate declare as Variant
Sub HelpMeRemember()
Dim objInbox As Folder
Dim objInboxItems As Items
Dim i As Long
Dim intLastVerb As Long
Set objInbox = Session.GetDefaultFolder(olFolderInbox)
Debug.Print objInbox.name
Set objInboxItems = objInbox.Items
objInboxItems.Sort "[ReceivedTime]", True
For i = objInboxItems.count To 1 Step -1
If TypeOf objInboxItems(i) Is mailItem Then
With objInboxItems(i)
Debug.Print "Subject: " & .subject
intLastVerb = .propertyAccessor.GetProperty("http://schemas.microsoft.com/mapi/proptag/0x10810003")
Debug.Print intLastVerb
Debug.Print .ReceivedTime
Debug.Print Date - 3
' Source unknown
' 102 "Reply to Sender"
' 103 "Reply to All"
' 104 "Forward"
' 108 "Reply to Forward"
If .ReceivedTime > Date - 3 Then
Debug.Print "Newer mail."
'Debug.Print " Subject: " & .subject
'Debug.Print " ReceivedTime: " & .ReceivedTime
Exit For ' Stop when newer mail encountered.
ElseIf intLastVerb = 102 Then
Debug.Print "Reply to Sender"
'Debug.Print " Subject: " & .subject
'Debug.Print " ReceivedTime: " & .ReceivedTime
ElseIf intLastVerb = 103 Then
Debug.Print "Reply to All"
'Debug.Print " Subject: " & .subject
'Debug.Print " ReceivedTime: " & .ReceivedTime
ElseIf intLastVerb = 104 Then
Debug.Print "Forward"
'Debug.Print " Subject: " & .subject
'Debug.Print " ReceivedTime: " & .ReceivedTime
ElseIf intLastVerb = 108 Then
Debug.Print "Reply to Forward"
'Debug.Print " Subject: " & .subject
'Debug.Print " ReceivedTime: " & .ReceivedTime
Else
.UnRead = True
'.Save ' If necessary
End If
End With
Else
Debug.Print "Non-mailitem ignored."
End If
Next i
Debug.Print "Done."
End Sub
You could Restrict by date and by mailitem to apply code to older mail only. Since this code limits processing to the applicable items already there would probably be little change in efficiency.
What is the "git suite"?
I've seen this term being used in a couple of places. The official documentation doesn't define this explicitly and there's also no Stack Overflow question answering this so I think it's good to have a source defining this.
I think the git suite is a collection of commands and tools you get when you install git. Some of the tools in the git suite are: git, git gui and gitk.
Git commands work in an odd way. Every command, such as git add and git commit, is actually a distinct tool, git-add and git-commit and so on. Thus for example if you look carefully you will see that the git add documentation page actually is documenting the separate tool git-add. And if you look at your Git installation you will see all the separate tools it comprises!
Well then, this collection of tools (git-add, git-commit, and so on) constitutes the suite, i.e., the tools you can access by typing git followed by a command name. That syntax is git[1]. That is why the suite is mentioned at the bottom of every documentation page: git-add is said to be
Part of the git[1] suite
because that's exactly what it is.
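You can see this on disk: a standard Git installation keeps the individual git-* tools in a single libexec directory (the exact path varies by platform):

```shell
# Print the directory holding the git-* tools, then list a few of them.
git --exec-path
ls "$(git --exec-path)" | grep '^git-' | head -n 5
```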
The official documentation mentions the suite at the bottom of every page:
GIT
Part of the git[1] suite
But without further explanation.
There is more info on the download page:
Git comes with built-in GUI tools for committing (git-gui) and browsing (gitk), but there are several third-party tools for users looking for platform-specific experience.
So git, git-gui and gitk are parts of the git suite.
"official documentation mention suite only in one place". It turns out it's on the bottom of every page in the docs, eg for git-add.
"git[1] suite" site:git-scm.com It's the same thing repeated over 8000 times. But without further explanation.
Well, that's the explanation then - these "over 8000" commands are what constitutes the "git suite", and the page for git[1] is the git suite's main page, linking to all the programs that are part of the suite, such as git-add, git-commit, etc.
What does Jesus mean when he said you are all gods?
Jesus seem to be quoting
http://biblehub.com/text/psalms/82-6.htm
In the psalm, Yahweh addresses the other gods, saying that they are all sons of Elyon.
In John 10:34, what was Jesus trying to teach when he said that you are all gods?
Is he trying to say that Elyon did have many sons? And that he is one of them?
Jesus also said that it is written in your law. The thing is, the Psalms aren't exactly law books; only the Torah is the law. So why did Jesus refer to a Psalm as law?
http://biblehub.com/text/john/10-34.htm
In the previous verse, the Jews had accused Jesus of blasphemy, because he had said (John 10:30) "I and my Father are one":
John 10:33: The Jews answered him, saying, For a good work we stone thee not; but for blasphemy; and because that thou, being a man, makest thyself God.
The author of John has Jesus respond by citing Psalm 82:6. First century Judaism was strictly monotheistic, so neither our author nor the Jews could interpret the Psalm as referring to the assembly of gods. Taken out of context, the citation implies that God had been describing all people as gods, in which case it could not be blasphemy for a man to speak of himself as the Son of God. Although Jesus has cleverly countered the claim of blasphemy, the Jews try to hold him, but he eludes them (John 10:39).
An inadvertent clue that the author was not a Jew is that he refers to "your law". He was no doubt aware that the Psalms were not part of the Law, although referring to them as such strengthens the case Jesus is putting.
@SimplyaChristian Perhaps approximately. But, would you remember whether someone said "your law" or just "the law" decades ago? If not, would you use the words most culturally attuned to your own context?
I agree with @SimplyaChristian - I'd call that 'authorial clue' suggestion a stretch, when the phrase seems more likely a figure of speech. For instance, if I were in a similar debate with a fellow Christian over a theological point, it would be perfectly natural for me to say "Doesn't your Bible say..." - this is not a suggestion that I don't have a Bible or find any belonging with the same group, but rather makes the point that the individual I'm talking to should take ownership and deeper consideration of the text I'm highlighting.
Deploying SQLCE.EntityFramework 4.0.8435.1
I've applied SQLCE in a project I've been working on.
It works fine in Visual Studio, and when I run it locally (http://localhost:####) it runs perfectly.
But when I publish it to my remote host, I get the "Yellow Screen of Death" with the following error message:
Failed to find or load the registered .Net Framework Data Provider.
My Web.config and references are OK (as I said, it works fine at localhost), so there is no need to change them.
The sdf file is deployed in the correct path.
What is missing?
Duplicate of http://stackoverflow.com/questions/3223359/cant-get-sql-server-compact-3-5-4-to-work-with-asp-net-mvc-2/3223450#3223450
I found the answer.
When SQLCE is installed into your project, it adds some files and folders under the bin directory of your web app.
The following files and folders must be deployed along with your app in the bin folder.
Microsoft.Data.Entity.CTP.dll
System.Data.SqlServerCe.dll
System.Data.SqlServerCe.Entity.dll
WebActivator.dll
[x86] (folder)
[x86]\sqlcecompact40.dll
[x86]\sqlceer40EN.dll
[x86]\sqlceme40.dll
[x86]\sqlceqp40.dll
[x86]\sqlcese40.dll
[amd64] (folder)
[amd64]\sqlcecompact40.dll
[amd64]\sqlceer40EN.dll
[amd64]\sqlceme40.dll
[amd64]\sqlceqp40.dll
[amd64]\sqlcese40.dll
The files in the root of the bin folder (the first four I mentioned above) were deployed but for some reason the x86 and amd64 folders were not sent.
After I deployed those files the app worked fine in the remote host also.
I think you won't need to copy the SQLCE DLLs into your bin directory once .NET 4.0 SP1 is released, since they are going to come with the standard .NET Framework installation.
I used the Private File–Based Deployment as described here: http://msdn.microsoft.com/en-us/library/aa983326%28v=vs.80%29.aspx
Easiest solution is to add a post-build step to copy these from the NuGet package NativeBinaries folder.
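A sketch of such a post-build event follows. The package folder name is illustrative; it depends on the version of the Microsoft.SqlServer.Compact package you actually have installed, so adjust the path accordingly:

```bat
rem Illustrative post-build event: copy the native x86/amd64 binaries
rem from the NuGet package's NativeBinaries folder into the output dir.
xcopy /Y /E "$(SolutionDir)packages\Microsoft.SqlServer.Compact.4.0.8876.1\NativeBinaries\*" "$(TargetDir)"
```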
The problem is that your remote host does not have the provider for SQLCE. If you look at your connection string, it is something like this:
<connectionStrings>
<add name="name"
connectionString="Data Source=|DataDirectory|yourDbFileName.sdf"
providerName="System.Data.SqlServerCe.4.0"/>
</connectionStrings>
Please note that SQL CE and its System.Data.SqlServerCe.4.0 provider were released after .NET 4.0, so they were not included in the standard .NET Framework 4.0. Hence the SqlServerCe.4.0 provider is missing.
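When the provider isn't registered machine-wide, one possible workaround is to register it in your own Web.config. The registration below uses the type and PublicKeyToken that ship with SQL CE 4.0, but verify them against the System.Data.SqlServerCe.dll you actually deploy:

```xml
<system.data>
  <DbProviderFactories>
    <remove invariant="System.Data.SqlServerCe.4.0" />
    <add name="Microsoft SQL Server Compact Data Provider 4.0"
         invariant="System.Data.SqlServerCe.4.0"
         description=".NET Framework Data Provider for Microsoft SQL Server Compact"
         type="System.Data.SqlServerCe.SqlCeProviderFactory, System.Data.SqlServerCe, Version=4.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />
  </DbProviderFactories>
</system.data>
```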
No. The connection string is correct. See the correct answer I posted.
I did not say your connection string is wrong, I said your remote host does NOT have SqlServerCe.4.0 provider installed because it has been released after .Net 4.0. Sometimes it's a good idea to read the answer first before voting it down :)
They don't need to have SqlServerCe 4.0 installed. The DLLs I mentioned above are enough to use the .sdf file and SQLCE functionality.
I think this is the best thing about SQLCE 4.0: I'm no longer dependent on the host's DB configuration and restrictions.
I agree, it's great that it can be configured without any dependency to the host.