How to get 1st nested object into a 2nd level nested object controller?
I have a Character model that has a show page. On the show page, I have a loop of comments that are dynamically generated via a partial. In that comments partial, I have another partial for votes, which contains voting buttons. Naturally, I want to allow votes on comments.
I am unsure how to get the comment object into the VotesController for creating a vote.
Getting the character object's id to the votes controller is simple enough, since the view is the character show page. But obtaining the id of a specific comment that is generated from a partial, by clicking a vote button in a partial nested inside the comments partial, has me drawing a blank on the syntax for accessing that comment.
(I am using acts_as_votable for votes, and acts_as_commentable for comments.)
app/views/characters/show.html.haml
= render partial: 'comments/comment', collection: @comments, as: :comment
app/views/comments/_form.html.haml
.comment{ :id => "comment-#{comment.id}" }
%hr
= render partial: 'votes/vote_comment'
%h4
#comment body
app/views/votes/_vote_comment.html.haml
.vote-comment-buttons
= link_to image_tag("upvote.png"), votes_upvote_path(), method: :post, remote: true
= link_to image_tag("downvote.png"), votes_downvote_path(), method: :post, remote: true
app/controllers/votes_controller.rb
class VotesController < ApplicationController
def upvote
# Need the specific comment or comment id whose vote button was clicked.
end
def downvote
# Need the specific comment or comment id whose vote button was clicked.
end
end
Well, here are the basic tips:
You cannot pass Ruby objects through HTTP, but you can pass their id and type and rebuild them in your controller.
Even when you write something like comment_path(comment), only the id of that comment is passed to your action. That is easily checked by looking at your action code (it should contain something like Comment.find(params[:id])).
Passing any desired amount of additional parameters can be done by just providing them to your route helpers, like this: some_voting_path(commentable_id: 14, commentable_type: 'character').
You can access those params inside your action with params['commentable_type'], or whatever values you pass with your URL. If you follow the id-and-type approach, you can do some metaprogramming:
def upvote_method
model = params[:commentable_type].camelize.constantize # => e.g., Character
object = model.find(params[:commentable_id]) # => the character object
# your inner logic goes here
end
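A minimal plain-Ruby sketch of that id-and-type lookup, with no Rails loaded; camelize here is a hand-rolled stand-in for ActiveSupport's String#camelize, and Character is a stand-in model class:

```ruby
# Stand-in for ActiveSupport's String#camelize ('some_model' -> 'SomeModel')
def camelize(str)
  str.split('_').map(&:capitalize).join
end

class Character; end  # stand-in model class

# What the controller action receives from the route helper
params = { commentable_type: 'character', commentable_id: 14 }

# camelize + const_get is the plain-Ruby equivalent of camelize.constantize
model = Object.const_get(camelize(params[:commentable_type]))
puts model  # Character
```

The real Rails version then calls model.find(params[:commentable_id]) to load the record.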
Beware that if you send your request using the GET method, these params will be shown in the browser URL. However, you should not use GET for your purpose here, as voting changes the state of objects in your database.
How to iterate an array with forEach and get each value in JS
I need to write to the DOM for each idol in the array. How can I do this with forEach? This is how I am doing it right now, but I need all the idols in the array.
for (var i = 0; i < msg.response.idols.length; i++) {
AddArtistField();
document.getElementById("idols_" + i).value = msg.response.idols[i];
}
storing the response after the loop
Are you just trying to do this?
msg.response.idols.forEach( (idol, index) => {
AddArtistField();
document.getElementById("idols_" + index).value = idol;
});
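A minimal sketch of the same pattern without the DOM, assuming a plain array; the fields object stands in for the input elements being filled in:

```javascript
// forEach hands the callback each element and its index, in order
const idols = ['A', 'B', 'C'];
const fields = {};
idols.forEach((idol, index) => {
  fields['idols_' + index] = idol;  // stands in for setting each input's value
});
console.log(fields); // { idols_0: 'A', idols_1: 'B', idols_2: 'C' }
```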
How to get only 1st element of JSON data?
I want to fetch only the 1st element of a JSON array.
My JSON data:
{
id:"1",
price:"130000.0",
user:55,
}
{
id:"2",
price:"140000.0",
user:55,
}
I want to access the price of the 1st JSON element:
price : "13000.0"
My code:
$.each(data_obj, function(index, element) {
$('#price').append(element.price[0]);
});
but my output
is '1'
Possible duplicate of How to access first element of JSON object array?
Try element.price instead of element.price[0]. Indexing into the price string gives you its first character, which is why you get '1'.
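To see why, here is a short sketch of what indexing a string does:

```javascript
// Indexing into a string returns a single character, not an array element
const price = "130000.0";
console.log(price[0]); // "1"  (the first character)
console.log(price);    // "130000.0"  (the whole value)
```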
Assuming that you have array of objects
var arr = [{
id:"1",
price:"130000.0",
user:55,
},
{
id:"2",
price:"140000.0",
user:55,
}]
console.log(arr[0].price)
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
@c.grey I guess you have a JSON string; you need to parse it first, then you'll be able to get the data according to my example. Parse it with JSON.parse(data).
I have parsed the JSON data
Your data isn't valid JSON. JSON keys must be wrapped in double quotes, but yours aren't.
var data = [{
"id":"1",
"price":"130000.0",
"user":55
},{
"id":"2",
"price":"140000.0",
"user":55
}]
console.log(data[0]["price"]);
Hello, you just need to add [ and ] at the start and end of your JSON string. See here: var data = JSON.parse( '[{ "id":"1","price":"130000.0","user":55},{"id":"2","price":"140000.0","user":55}]');
var priceValue = 0;
$.each(data, function(index, element) {
    if (index == 0) { priceValue = element.price; }
});
console.log(priceValue);
Your answer will be 130000.0
You are using an each loop, and in the callback you get 2 params: the first one is the index and the second is the element itself. So this will iterate through all elements.
$.each(data_obj, function(index, element) {
$('#price').append(element.price);
});
If you just want to get first element
$('#price').append(data_obj[0].price);
I want only the 1st obj
So why are you using a loop then? $('#price').append(data_obj[0].price);
If element holds your JSON data, you can use the code below to get the first item's price.
element[0].price
Thanks,
If you want only the first item's price, you don't need a loop here.
$('#price').append(data_obj[0].price);
would work here.
For further reading you can refer here
var my_first_json_obj = data_obj[0]; // Your first JSON obj (if it's an array of json object)
var my_price = my_first_json_obj.price; // Your price
$('#price').append(my_price);
The following is the solution that worked for my problem.
I use return false;:
$.each(data_obj, function(index, element) {
$('#price').append(element.price[0]);
return false;
});
which gives only the 1st value of the array elements.
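A plain-JS sketch of that stop-early approach, combined with the element.price fix from the earlier answers; a break statement plays the role that return false plays inside $.each, so jQuery itself is not needed here:

```javascript
// Take only the first element's price, then stop iterating
const data = [{ price: "130000.0" }, { price: "140000.0" }];
let first;
for (const element of data) {
  first = element.price;
  break; // like `return false` in $.each
}
console.log(first); // "130000.0"
```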
How to bind a MultiDataTrigger with bindings using different data contexts
I have a custom user control in the main window of my WPF application. Within the window is an ItemsControl. I have created a style so that I can bind to an array of items, which is a property of my view-model class. The array holds indexes into the positions of the items control. I should add that the custom control inherits from Shape, so it has the Stroke property.
public class ViewModel
{
...
public List<int> Selections
{
get => _selections;
set
{
if (value == _selections) return;
_selections = value;
OnPropertyChanged();
}
}
public HypercombState State
{
get => _state;
set
{
if (value == _state) return;
_state = value;
OnPropertyChanged();
}
}
}
Here is the converter responsible for identifying whether the view-model is holding selected indexes or not. It should return true if the item is selected during data binding.
public class ArrayContainsConverter : IMultiValueConverter
{
public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
{
if (values[0] is not int id || values[1] is not List<int> array) return null;
return array?.Contains(id);
}
...
}
<Style.Triggers>
<MultiDataTrigger>
<MultiDataTrigger.Conditions>
<Condition Value="True">
<Condition.Binding>
<MultiBinding Converter="{StaticResource ArrayContainsConverter}">
<Binding Path="(ItemsControl.AlternationIndex)" RelativeSource="
{RelativeSource AncestorType=ContentPresenter}" />
<Binding Path="DataContext.Selections" />
</MultiBinding>
</Condition.Binding>
</Condition>
...
</MultiDataTrigger.Conditions>
<Setter Property="Stroke" Value="Chartreuse"></Setter>
</MultiDataTrigger>
</Style.Triggers>
Whenever the Selections or State properties change, I would like the ItemsControl to update so that a Stroke serves as a visual cue for the item's selected state when true. I can see the Selections property changing if I use breakpoints, but the converter is not triggering when the list changes.
Try changing the data type from List to ObservableCollection. There is a difference between those 2 data types in WPF, and I usually recommend the latter for any trigger updates.
Also try updating the binding mode, for example changing
<Binding Path="DataContext.Selections" />
To
<Binding Path="DataContext.Selections" Mode="TwoWay" />
The difference between the two is that:
List - Implements IList interface
ObservableCollection - Implements INotifyCollectionChanged
I will take a look at this but I have settled on a different pattern. I didn't want to add an IsSelected property on the model type since it just seemed wrong. After further thought, I have decided to add a Flags enum property called State on the model which can be used for multiple states including Selected, if needed. This seems like a good alternative rather than carrying a list of selections in the view-model. I am not saving the State in the database but it might actually be useful in the future to do just that albeit not for the selections.
Running jhipster with Prod profile
I would like to know about the main differences between running a Jhipster app in Dev mode compared to run it in Production mode?
Thanks.
The main difference is that all your static resources are optimized (e.g. minimizing and combining JavaScript files). Furthermore, the production profile enables gzip compression and longer caching of assets. Depending on your configuration, the production profile may also use a different database.
OK, thanks, so there is no difference in logging policies between those two modes?
Oh sorry, you're right. The log level in the production profile is info, while the log level in dev (and fast) is debug.
Thanks, my prod profile doesn't work (I don't know why); nothing appears in my browser. Is there a way to set the log level to info programmatically?
Yes. Have a look at your Maven pom; you can change the log level there. You can also add the log level configuration in your yml config.
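For instance, a minimal sketch of such a yml override; the exact keys depend on your JHipster/Spring Boot version, and the package name is hypothetical:

```yaml
# application-prod.yml (sketch): raise the root log level to INFO
logging:
    level:
        ROOT: INFO
        com.mycompany.myapp: INFO   # your app's base package goes here
```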
As explained here https://jhipster.github.io/production/
Generating an optimized JavaScript application with Gulp
Will process all your static resources (CSS, JavaScript, HTML, images...) in order to generate an optimized client-side application.
This code will be served when you run the application with the "production" profile.
GZipping
With the "production" profile, JHipster configures a Servlet filter that applies gzip compression on your resources.
By default, this filter will work on most static resources (excluding images, which are already compressed) and on all REST requests.
Cache headers
With the "production" profile, JHipster configures a Servlet filter that puts specific HTTP cache headers on your static resources (JavaScript, CSS, fonts...) so they are cached by browsers and proxies.
How do I get the overload through reflection at runtime?
Since my workplace forces me to use stored procedures, and I haven't been around long enough to prevent it, I have customized the LINQ to SQL code generation for all our stored procedures. Yes, that's right: we have stored procedures for ALL our data access, and since we don't have table access it means we often end up with 3 classes for each "entity". Now, I am drawing near completion of the latest changes when I hit a wall.
The following code does not work because it is calling itself, but I need to get the overload. The reason for this is that it's impossible to keep track of 40 parameters, so I generate a "dynamic" class when the project builds.
[Function(Name="dbo.user_insert")]
public IList<ProcedureResult> UserInsert(UserInsertObj userInsertObj)
{
IExecuteResult result = this.ExecuteMethodCall(this, (MethodInfo)MethodInfo.GetCurrentMethod(),
userInsertObj.UserId, userInsertObj.ProductId, userInsertObj.FromIp,
userInsertObj.CampaignName, userInsertObj.Campaign, userInsertObj.Login,
userInsertObj.Password, userInsertObj.Nickname, userInsertObj.Pin, userInsertObj.Language,
userInsertObj.Currency, userInsertObj.Country, userInsertObj.Region, userInsertObj.Birthday,
userInsertObj.FirstName, userInsertObj.MiddleName, userInsertObj.LastName,
userInsertObj.Address1, userInsertObj.Address2, userInsertObj.Address3, userInsertObj.Zip,
userInsertObj.City, userInsertObj.Timezone, userInsertObj.Email, userInsertObj.EmailNotify,
userInsertObj.Gender, userInsertObj.Phone, userInsertObj.MobileCc, userInsertObj.MobileNr,
userInsertObj.MobileModel, userInsertObj.Referral, userInsertObj.GroupId,
userInsertObj.StreetNumber, userInsertObj.StreetName);
return new List<ProcedureResult>
{
new ProcedureResult
{
Name = "userId",
Value = result.GetParameterValue(0),
Type = typeof(System.Nullable<System.Int32>)
}
};
}
I played around with using something like the below but have no idea which overload to use and searching MSDN I haven't come close to anything useful yet.
((MethodInfo)MethodInfo.GetCurrentMethod()).DeclaringType.GetMethod("", new Type[] {})
How would I achieve getting the overload from the CurrentMethod?
EDIT: clarified that we are not allowed to use the database tables.
Seems like a bizarre way to call stored procedures using LINQ to SQL; you do know you can just map them to methods on your DataContext class, don't you?
@Ben, thank you! I have updated the question in regard to your comment.
possible duplicate of How to use Reflection to Invoke an Overloaded Method in .NET
I forgot about LINQ! In this particular scenario I have only two methods, one with 1 parameter and one with all the other parameters, so something simple (see below) works fine:
method.DeclaringType.GetMethods()
.Where(x => x.Name == "UserInsert"
&& x.GetParameters().Count() > 1)
.Single()
Output through json_encode using PDO in php
I have been trying to output data from the database using the code below, but I'm finding it difficult to display the fetched data on screen. I'm using PDO in PHP to run the selection. What should I do to resolve this issue?
<?php
include ("core.php");
$output = array('data' => array());
$sql = "SELECT categories_id, categories_name, categories_active, categories_status FROM categories WHERE categories_status = ?";
$result = $connect->prepare($sql);
$result->execute([1]);
if($result->rowCount() > 0) {
// $row = $result->fetch_array();
$activeCategories = "";
while($row = $result->fetchAll()) {
$categoriesId = $row[0];
// active
if($row[2] == 1) {
// activate member
$activeCategories = "<label class='label label-success'>Available</label>";
} else {
// deactivate member
$activeCategories = "<label class='label label-danger'>Not Available</label>";
}
$button = '<!-- Single button -->
<div class="btn-group">
<button type="button" class="btn btn-default dropdown-toggle" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
Action <span class="caret"></span>
</button>
<ul class="dropdown-menu">
<li><a type="button" data-toggle="modal" id="editCategoriesModalBtn" data-target="#editCategoriesModal" onclick="editCategories('.$categoriesId.')"> <i class="glyphicon glyphicon-edit"></i> Edit</a></li>
<li><a type="button" data-toggle="modal" data-target="#removeCategoriesModal" id="removeCategoriesModalBtn" onclick="removeCategories('.$categoriesId.')"> <i class="glyphicon glyphicon-remove-sign"></i> Remove</a></li>
<li><a type="button" data-toggle="modal" data-target="#deleteCategoriesModal" id="deleteCategoriesModalBtn" onclick="deleteCategories('.$categoriesId.')"> <i class="glyphicon glyphicon-trash"></i> Delete</a></li>
</ul>
</div>';
$output['data'][] = array(
$row[1],
$activeCategories,
$button
);
} // /while
}// if num_rows
$connect = null;
echo json_encode($output);
?>
Below is the HTML code which accepts the output. I'm just a beginner, so kindly help me.
<thead>
<tr>
<th>Categories Name</th>
<th>Status</th>
<th style="width:15%;">Options</th>
</tr>
</thead>
</table>
It's not very clear what you're trying to do. Currently you're populating the array $output and then encoding it in JSON before printing it.
What would you like to do exactly? Print the HTML code so it appears on the page?
Please i have the html code to be <table class="table" id="manageCategoriesTable"> <thead> <tr> <th>Categories Name</th> <th>Status</th> <th style="width:15%;">Options</th> </tr> </thead> </table>
Where is the code where you try to process the JSON and put it into the page?
The fetchAll() function will get you all the rows and columns, not just one. With that said, move the $row assignment outside of the while loop, name it differently (say, $rows), and then iterate over it using something like foreach ($rows as $row). Also, your $row will not take numerical indices, but rather indices named after what you selected in the SQL query, like $row['categories_id'].
This should sort out iterating over the rows that you got. One more thing I'm not sure about is $result->execute([1]); that function takes parameters, and you're giving it a 1-element array with the number 1, although I really can't tell because I don't have the rest of the code. My guess is that you're trying to use JavaScript with AJAX, since you're printing the encoded $output variable. If so, you'll need to read the input at the very beginning using php://input, where you'll get all the parameters sent by AJAX, and, before encoding, set the content type to application/json using header("Content-type: application/json");
Can't get layout to work correctly with dynamic textviews in a nested linearlayout
In an XML layout that is a RelativeLayout I have a nested LinearLayout. Within this LinearLayout I dynamically add a few TextViews and buttons. My problem is, I can't get the items to appear under each other like would naturally happen within a LinearLayout. Here is the basic setup:
LinearLayout mobLayout = (LinearLayout) findViewById(R.id.mobButtons);
mobLayout.removeAllViewsInLayout();
I remove all previous junk inside the layout because I reuse it.
mobLayout.addView(mobName);
mobLayout.addView(mobTextHP);
mobLayout.addView(fightButton);
mobLayout.addView(goBackButton);
These should appear one on top of the other, but instead they all appear side by side. When I tried adding LinearLayout.LayoutParams to the first one, it either wiped everything after it or pushed it off screen; I couldn't tell.
Lastly, here is the LinearLayout XML area where these items are added:
<LinearLayout
android:id="@+id/mobButtons"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:layout_below="@+id/fightText">
</LinearLayout>
Thanks in advance!
Use this option for your LinearLayout
android:orientation="vertical"
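Applied to the layout from the question, that gives:

```xml
<LinearLayout
    android:id="@+id/mobButtons"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:layout_below="@+id/fightText"
    android:orientation="vertical">
</LinearLayout>
```

Without an explicit orientation, LinearLayout defaults to horizontal, which is why the views lined up side by side.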
That was it, didn't think I had to add an orientation to the nested layout!
VS2010 Find and Replace
I need help using regular expressions with Find and Replace in VS2010. I want to
find request("somevar") and replace it with html_encode(request("somevar")).
somevar will be different for each request("").
Replace request({:q}) with html_encode(request(\1)).
Using request({:q}) in the search box, I received an error: "The following text was not found."
Then you should accept this answer by clicking the hollow check.
Why are the checkboxes different on different devices?
Why is there a difference in the look of the checkboxes between this device and this Samsung device?
Is this affected by the launcher, in which case the reason is clear?
Those images are kind of big for that, I think.
Fair enough, I've inlined the imgur versions so that we're not relying on two third-party sites.
Theming is independent of the launcher, although both may be considered part of the UI; for example, the TouchWiz launcher and theme are elements of Samsung's TouchWiz UI.
My Samsung Vibrant is running a custom ROM called Bionix, and they've re-themed the device so my checkboxes look like this (perhaps the only thing I dislike about the ROM):
I see. How can I change the checkboxes (and anything else that goes with them)? Is it a different application from the Launcher?
What's the (64) thing in your screenshot? Looks mysterious.
@Francisc I don't know the specifics, but it involves editing various images in the ROM; on most phones you'd need root to do it. I still have the TouchWiz launcher but the images and some color settings have been changed. The (64) is my battery percentage.
Very Bionix I must say. Thank for the answer.
@Francisc: The drawables generally live in a /system/framework/framework-res.apk file (the pathing might be different depending on the device/ROM). Here's a screenshot of the drawable-hdpi folder in the framework-res.apk file that ships with CM7. The PNGs that I've outlined are the drawables used for the checkboxes, as an example. Editing can be a little tedious, and has a tendency to cause bootloops (for me, at least...)
Not getting radiobutton 'value' from another function (def) in tkinter; how to achieve this without using a class?
Not getting the radiobutton value from another function in tkinter. How can I achieve this without using a class?
In this case, a=a1.get() is not getting the value when called from the command of the sub1 button created in the ques1() function.
from tkinter import *
global root
root=Tk()
root.geometry("500x500")
a1=StringVar()
ans1=StringVar()
def ans1():
a=a1.get() #not getting it from ques1()
print(a)
def ques1():
root.destroy()
global window1
window1=Tk()
window1.geometry("500x500")
question1=Label(window1, text="How many Planets are there in Solar System").grid()
q1r1=Radiobutton(window1, text='op 1', variable=a1, value="correct").grid()
q1r2=Radiobutton(window1, text='op 2', variable=a1, value="incorrect").grid()
sub1=Button(window1, text="Submit", command=ans1).grid()
next1But=Button(window1, text="Next Question", command=ques2).grid()
def ques2():
window1.destroy()
window2=Tk()
window2.geometry("500x500")
question2=Label(window2, text="How many Planets are there in Solar System").grid()
next2But=Button(window2, text="Next Question")
button=Button(root,text="Start Test", command=ques1).grid()
Remove global root from your code. It is doing nothing for you; root already exists in the global namespace.
This is a side effect of using Tk more than once in a program. Basically, a1 is tied to the root window, and when you destroy root, a1 will no longer work.
You have a couple options:
Keep the same window open all the time, and swap out the Frames instead.
Use Toplevel() to make new windows instead of Tk.
Option 1 seems the best for you. Here it is:
from tkinter import *
root=Tk()
root.geometry("500x500")
a1=StringVar(value='hippidy')
ans1=StringVar()
def ans1():
a=a1.get() #not getting it from ques1()
print(repr(a))
def ques1():
global frame
frame.destroy() # destroy old frame
frame = Frame(root) # make a new frame
frame.pack()
question1=Label(frame, text="How many Planets are there in Solar System").grid()
q1r1=Radiobutton(frame, text='op 1', variable=a1, value="correct").grid()
q1r2=Radiobutton(frame, text='op 2', variable=a1, value="incorrect").grid()
sub1=Button(frame, text="Submit", command=ans1).grid()
next1But=Button(frame, text="Next Question", command=ques2).grid()
def ques2():
global frame
frame.destroy() # destroy old frame
frame = Frame(root) # make a new frame
frame.pack()
question2=Label(frame, text="How many Planets are there in Solar System").grid()
next2But=Button(frame, text="Next Question")
frame = Frame(root) # make a new frame
frame.pack()
button=Button(frame,text="Start Test", command=ques1).grid()
root.mainloop()
Also, don't be scared of classes. They are great.
Also, the way you have a widget initialization and layout on the same line is known to cause bugs. Use 2 lines always. So instead of this
button=Button(frame,text="Start Test", command=ques1).grid()
Use this:
button=Button(frame,text="Start Test", command=ques1)
button.grid()
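To see why the one-liner loses the widget: grid() returns None, so the variable ends up holding None instead of the widget. A stand-in class is used in this sketch so no display is needed; tkinter behaves the same way.

```python
class Widget:
    def grid(self):
        return None  # like tkinter's grid(), this returns None

button = Widget().grid()   # one-line form
print(button)              # None: the reference to the widget is lost

button = Widget()          # two-line form keeps the reference
button.grid()
print(type(button).__name__)  # Widget
```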
You need to use a single instance of Tk. Variables and widgets created in one cannot be accessed from another.
Your code has some common mistakes. You are creating a new window for each question; that is not a good idea. You could use Toplevel, but I suggest you reuse the root: destroy all of your previous widgets and place new ones. On the first question, both radiobuttons are unchecked, and the variable returns 0 when none is selected. You were creating the buttons in window1, so you would have to tie the variable to that window.
from tkinter import *
global root
root=Tk()
root.geometry("500x500")
a1=StringVar(root)
a1.set(0) #unchecking all radiobuttons
ans1=StringVar()
def ans1():
a=a1.get()
print(a)
def ques1():
for widget in root.winfo_children():
widget.destroy() #destroying all widgets
question1=Label(root, text="How many Planets are there in Solar System").grid()
q1r1=Radiobutton(root, text='op 1', variable=a1, value="correct").grid()
q1r2=Radiobutton(root, text='op 2', variable=a1, value="incorrect").grid()
sub1=Button(root, text="Submit", command=ans1).grid()
next1But=Button(root, text="Next Question", command=ques2).grid()
def ques2():
for widget in root.winfo_children():
widget.destroy()
question2=Label(root, text="How many Planets are there in Solar System").grid()
next2But=Button(root, text="Next Question")
button=Button(root,text="Start Test", command=ques1)
button.grid()
Gravitational potential in a system of two particles
Suppose two particles with masses $m_1$ and $m_2$ are interacting via a central force. Lets work in the center-of-mass frame, and let $r$ be the distance from the masses to the center of mass which lies at the origin. The distance between the masses is now $2r$. The potential of the first mass $m_1$ is the amount of negative work done needed to bring $m_1$ to a distance of $2r$ from $m_2$ all the way from infinity. That is
$$U_1 = - \frac{1}{2}\bigg(\frac{m_1 m_2 k}{\vert\bar{r}\vert}\bigg).$$
The potential of $m_2$ is by the exact same arguments
$$U_2 = - \frac{1}{2}\bigg(\frac{m_1 m_2 k}{\vert\bar{r}\vert}\bigg).$$
where $\bar{r}$ is the position vector of the center of mass, and $k$ is a constant. On the other hand, the total potential energy of this system is the amount of negative work needed in order to bring an object from infinity to the center of mass. So the total potential is the quantity written in parenthesis. Therefore
$$U_1 + U_2=\frac{1}{2}U_{tot}+\frac{1}{2}U_{tot}=U_{tot}.$$
Is this reasoning valid?
If $m_1\ne m_2$ your $r$ and $2r$ distances don't make sense. The larger mass must be closer to your origin.
Saying "the potential of one is this and of the other is that" is a misnomer at best. The potential energy is stored in the fields and is simply given by
$$U=-\dfrac{Gm_1m_2}{r}$$
where $r$ is the distance between them.
There is no meaning to the potential of mass $m_1$. Potential of this system at a point would be equal to the work required to bring a test mass to that point.
I hope you can work out the details. If not let me know in the comments.
Suppose we look at the total energies of both particles individually. Then for particle 1, E1 = K1 + U1, right? Moreover, the energy of the system is E1 + E2 = K1 + K2 + ( ? ). Certainly we won't have U1 + U2 here, nor 2U, right? Rather, the sum of the potentials in this calculation should add up to the total potential of the system? In other words, each particle should contribute half of the total potential energy, correct?
Sarthak: Your formula is right, but it's an interesting philosophical question to wonder where the potential energy is stored. If it's in the field, then from E=mc^2 the field should have some mass, but where is this mass? Are extra particles created, or is it stored in various proportions in the two original masses?
@variations "Each particle contributes half of the potential energy": this statement is again missing the point. Each particle has no significance without the other. When you want the energy of this system, you write $E=K_1+K_2+U$. Each particle contributing half of the potential energy would acquire meaning if removing one particle halved the PE. It doesn't; it makes the PE of the system 0, since there are no interactions.
@John Hunter: Yes, the gravitational field does carry mass, but in general relativity energy and mass are the same thing. You look at the energy and you realise it must have some mass.
If you have two equal masses, each at a distance r from the center of mass, and you move them out to infinity one at a time, then the work to move the first one will be W = Gmm/(2r), but the work to move the second one will be zero, since there is no longer a force on it from the other mass. The initial total potential energy was -Gmm/(2r).
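For concreteness, that removal argument can be written out (assuming two equal masses $m$ separated by $2r$):

```latex
U_{\text{initial}} = -\frac{G m^2}{2r}, \qquad
W_1 = \int_{2r}^{\infty} \frac{G m^2}{s^2}\,\mathrm{d}s = \frac{G m^2}{2r}, \qquad
W_2 = 0,
```

so the total work to disassemble the system, $W_1 + W_2 = G m^2/(2r)$, equals $-U_{\text{initial}}$ and is paid only once, not once per particle.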
Although mine is not an atomic vector I'm still getting the error: "$ operator is invalid for atomic vectors"
I'm trying to perform Text mining on a csv datafile.
The source I'm referring to has this performed on the Twitter data. But I want to do a similar thing on text data stored in a csv.
I'm trying out the following code:
data <- read.csv("Joined_Tab.csv")
dtweets <- data[1:30,]
for(i in 1:20)
{
cat(paste("[ [", i, "] ]", sep=""))
writeLines(strwrap(dtweets[[i]]$getText(), width=25))
}
Here both data and dtweets are dataframes. But when I'm trying to use the getText() I'm getting the following error:
Error in dtweets[[i]]$getText : $ operator is invalid for atomic vectors
But a dataframe is not an atomic vector. (I also tried converting the dataframe to a list, still the same error, although a list is also not an atomic vector)
Here is the str()
'data.frame': 30 obs. of 2 variables:
$ S.No : int 1 2 3 4 5 6 7 8 9 10 ...
$ Tweet: chr "Good cooperation" "engaged team u always get valuable support" "Still klm with domain specific dcm helps to attract clients" "Support with gdp also on short note works nice"
Where am I going wrong??
And are there better ways to perform Text mining in a csv file?
I'm a beginner in R so please suggest me accordingly. Thanks.
Please post the output of str(dtweets).
Maybe want dtweets[[i]][[ getText() ]]. Whatever follows $ is not evaluated.
output of str(dtweets) is:
'data.frame': 30 obs. of 2 variables:
$ S.No : int 1 2 3 4 5 6 7 8 9 10 ...
$ Tweet: chr "Good cooperation" "engaged team u always get valuable support" "Still klm with domain specific dcm helps to attract clients" "Support with gdp also on short note works nice" ...
dtweets[[i]][[ getText() ]] didn't work either:
for(i in 1:20)
{
  cat(paste("[ [", i, "] ]", sep=""))
  writeLines(strwrap(dtweets[[i]][[ getText() ]], width=25))
}
When you use the [[ operator on a data.frame (which is also a list) you get the corresponding variable, so dtweets[[1]] gives the numbers (1 2 3...) and dtweets[[2]] gives you the tweets ("Good cooperation", ...). For i greater than 2, you should get an error (Error in .subset2(x, i, exact = exact) : subscript out of bounds) because there are no more variables.
Also, after you use dtweets[[2]] the result is a vector, not a data.frame anymore! Additionally, whatever you put after the $ is treated as a literal name and is not evaluated; in R, functions need parentheses to be called, even without arguments.
So, even if dtweets[[i]] gave you something that can be indexed with $, and that sub-element were named getText(), you would need quotes, dtweets[[i]]$'getText()', in order to index over special characters. My guess is that getText is actually a function, so you need something like getText(dtweets[i,2]). That's a guess, unless you show the code of the getText function.
Update two columns of two different tables in a single query with ORDER BY and LIMIT (MySQL)
Query
UPDATE vcd_resorts AS resorts,
vcd_deals AS deals
SET resorts.rst_live_date = Date_add(Curdate(), INTERVAL 4 day),
deals.del_date = Date_add(Curdate(), INTERVAL 4 day)
WHERE 0 = (SELECT resort_id_count
FROM (SELECT Count(rst_id) AS resort_id_count
FROM vcd_resorts
WHERE rst_supersaver_resort = 1
AND rst_live_date BETWEEN Curdate() + 1 AND
Curdate() + 4)
temp)
AND resorts.rst_supersaver_resort = 1
AND resorts.rst_id = deals.del_resort_id
AND deals.del_supersaver_deal = 1
ORDER BY resorts.rst_live_date ASC
LIMIT 1
Error
#1221 - Incorrect usage of UPDATE and ORDER BY
What is wrong with this, and is there any other way to do it?
Why do you need an ORDER BY?
I have done this without the ORDER BY and LIMIT.
I found a solution to this question:
UPDATE vcd_resorts AS resorts,
vcd_deals AS deals
SET resorts.rst_live_date = Date_add(Curdate(), INTERVAL 4 day),
deals.del_date = Date_add(Curdate(), INTERVAL 4 day)
WHERE 0 = (SELECT resort_id_count
FROM (SELECT Count(rst_id) AS resort_id_count
FROM vcd_resorts
WHERE rst_supersaver_resort = 1
AND rst_live_date BETWEEN Curdate() + 1 AND
Curdate() + 4)
temp)
AND resorts.rst_supersaver_resort = 1
AND resorts.rst_id = deals.del_resort_id
AND deals.del_supersaver_deal = 1
AND resorts.rst_id = (SELECT resort_id
FROM (SELECT rst_id AS resort_id
FROM vcd_resorts
WHERE rst_supersaver_resort = 1
ORDER BY rst_live_date ASC
LIMIT 1) temp1)
I have placed one more condition, which replaces the ORDER BY and LIMIT.
There is no sense in using ORDER BY in an UPDATE, because the command updates all the matching rows no matter in what sequence (and MySQL's multiple-table UPDATE syntax does not allow ORDER BY at all, which is what error #1221 is telling you).
Use the UPDATE command without the ORDER BY clause:
UPDATE vcd_resorts AS resorts,
vcd_deals AS deals
SET resorts.rst_live_date = Date_add(Curdate(), INTERVAL 4 day),
deals.del_date = Date_add(Curdate(), INTERVAL 4 day)
WHERE 0 = (SELECT resort_id_count
FROM (SELECT Count(rst_id) AS resort_id_count
FROM vcd_resorts
WHERE rst_supersaver_resort = 1
AND rst_live_date BETWEEN Curdate() + 1 AND
Curdate() + 4)
temp)
AND resorts.rst_supersaver_resort = 1
AND resorts.rst_id = deals.del_resort_id
AND deals.del_supersaver_deal = 1
LIMIT 1
Interact with client software
I know there isn't a single answer for this, but I'm a newbie at computer engineering and I want to know what I should read or study to be capable of interacting with desktop client software. Here is an example:
I play poker, and I want to be able to interact with my poker client. I don't want to do anything fancy or illegal such as a bot. For example, I want to have a program that reads my hands so that after playing I can analyze some of the hands, or maybe have an online HUD helper.
For those who play poker: I'd like to be able to auto-seat at some tables automatically, or have a mini HUD telling me my BB and my opponents'. I know that there are very good programs for this and I use them. I just want to learn how to read info from these clients and how to interact with them. I talk about poker, but I want to learn how to interact with other programs too. If anyone could tell me where to begin my studies, it would be nice to know something even if I never put my knowledge into practice. I just like to know how things work.
PS: Should I use C/C++? I've learned Java and now I am learning Python and JS.
I hope I explained myself; sorry about my English.
And thank you.
There are several alternatives to interact with a poker client, more or less difficult or sophisticated, and effective depending on what you want to accomplish.
For getting the information you could sniff the data over the network, inject your code via API hooks, read the information with screen scraping and OCR, parse the hand histories...
To emulate user actions you can programmatically perform mouse clicks and key strokes, send messages directly to the UI components of the poker client, or even interact directly with the poker server, sending it the expected information on your own (this option, as well as sniffing the data from the network to get the information, may be quite difficult since you will have to deal with (maybe private) protocols, data encryption, etc.).
If you know Java, give AWT's Robot class a try. With it you can read pixels of the screen, get screen captures, perform mouse clicks and key strokes... I am sure that there are similar tools in Python, but I don't know them.
Another higher level tool used for UI automation is Sikuli. It may be useful for your purpose.
I hope this information is useful for you.
Thank you very much for your answer. I will begin to try the Robot class in Java and to learn how to sniff data from the network. The problem is, when I search for screen scraping, "my friend" Google always shows me results about web scraping, so I'm a bit lost as to what technology or language to use. The same goes for data sniffing over the network, but that is because I'm a newbie, so I have a huge way ahead; still, I think this project is a great way to learn computer science.
As I told you before, sniffing data may be quite difficult since the data the poker client is receiving is most probably encrypted... You had better try screen scraping or hand-history parsing. Some poker clients show you the hand's events in the chat, so maybe you can use the Robot class to select that text, copy it to the clipboard and then parse it to get the information... It's just an idea. I have only used pixel/image matching and OCR, with the mentioned class, and it worked more or less fine. Also have a look at Sikuli; I am sure it may be useful, and you can use it from Java, I think.
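To make the chat/hand-history parsing idea concrete, here is a minimal Java sketch. The chat-line format ("Player1 raises to 50") is invented for illustration; every poker client prints something different, so the regex would need adapting to whatever your client actually shows:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal sketch: split a (hypothetical) poker-chat line into
 *  player, action, and optional amount. Adapt the pattern to the
 *  real text your client prints. */
public class ChatLineParser {
    // Matches e.g. "Player1 raises to 50" or "Player2 folds"
    private static final Pattern ACTION =
        Pattern.compile("^(\\w+) (folds|checks|calls|bets|raises to)(?: (\\d+))?$");

    public static String[] parse(String line) {
        Matcher m = ACTION.matcher(line.trim());
        if (!m.matches()) return null;              // not an action line
        String amount = m.group(3) == null ? "" : m.group(3);
        return new String[] { m.group(1), m.group(2), amount };
    }

    public static void main(String[] args) {
        String[] r = parse("Player1 raises to 50");
        System.out.println(r[0] + "|" + r[1] + "|" + r[2]); // Player1|raises to|50
    }
}
```

Once lines like this are parsed, building a running BB count or a simple HUD on top is just bookkeeping.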
Thanks again for your answer. Sorry about not answering before; I was preparing for some exams and didn't have the time to check the Robot class and Sikuli. I've been trying them out and they are very nice tools. Now I have to try to parse the text in the chat; for that I need an OCR library for Java, and I'm now looking for one in other topics. The next step is a tool for sniffing data, but little by little; for now I have good things to play with.
Thanks for your answer.
UserControl in Windows Forms closes when my BackgroundWorker is running
I made a form with 3 UserControls and display them with buttons.
In 2 of them I have a form to make an SQL backup and a restore.
For the backup, my UserControl gets closed after the backup is complete.
For the restore, I run a BackgroundWorker and my UserControl gets closed in the middle of the process.
There is no error, so I don't know what to do.
I haven't tried anything because I have no error.
I add my user-control in my form with this
if (!Instance.PanelContainer.Controls.ContainsKey("Backup"))
{
Backup backup = new Backup();
backup.Dock = DockStyle.Fill;
PanelTools.Controls.Add(backup);
}
Instance.PanelContainer.Controls["Backup"].BringToFront();
Then I just have a button that run a SQL Query
BACKUP DATABASE DBPDM
TO DISK = 'PathBackupNameBackup'
WITH
FORMAT,
MEDIANAME = 'SQLServerBackups',
NAME = 'Full Backup of SQLTestDB';
And the crash is at the end of the backup.
UPDATE :
I found the issue with the restore UserControl: it's because I kill explorer.exe.
Is a UserControl part of explorer.exe? Can I kill explorer without it closing my UserControl?
There's not much to go on here. A [mcve] would be useful.
Sorry, it's my first question on this site; I'm not good yet at posting my code and explanation, plus I'm a noob in development and my English is not perfect ^^
Thanks for your answer btw !!!
Please don't add relevant information in comments; you can edit your question and add all relevant and useful information there.
Where is the code that starts the BackgroundWorker, and where is the code that closes the form? How do you know there is no exception? Perhaps you have an unhandled error on the background thread which is shutting down the application. Try running it in the debugger.
Sorry, I never close the UserControl with code. It closes by itself.
Maybe we can forget BackgroundWorker and just focus on Backup?
Can I attach a video on this site?
"I find the issue with the restore UserControl. It's because I kill explorer.exe. Usercontrol is an explorer.exe ? Can I kill explorer without that close my usercontrol ?" makes no sense. What has a UserControl which is a WinFroms control, got to do with explorer.exe which is an application? And why are you killing it?
I don't know... I kill explorer.exe because I've got SolidWorks PDM (it uses Explorer) and I restore a DB (I can't restore it if it's in use).
But when I comment out my code that kills explorer.exe, my UserControl doesn't get closed anymore…
Update on my issue.
My UserControl doesn't get closed, but like I said, I've got 3 UserControls.
When my app starts, I load UserControl 1.
With a button I load UserControl 2.
When explorer.exe gets killed, UserControl 1 is brought back to front.
I see it's because I load this one in a Paint event, like this:
private void PanelTools_Paint(object sender, PaintEventArgs e)
{
_obj = this;
if (!Instance.PanelContainer.Controls.ContainsKey("Settings"))
{
Settings settings = new Settings();
settings.Dock = DockStyle.Fill;
PanelTools.Controls.Add(settings);
}
Instance.PanelContainer.Controls["Settings"].BringToFront();
}
To fix it, I just load my UserControl 1 in the Load event:
private void SupportTools_Load(object sender, EventArgs e)
{
_obj = this;
if (!Instance.PanelContainer.Controls.ContainsKey("Settings"))
{
Settings settings = new Settings();
settings.Dock = DockStyle.Fill;
PanelTools.Controls.Add(settings);
}
Instance.PanelContainer.Controls["Settings"].BringToFront();
}
Thanks a lot for your time, all, and sorry for not being clear when I explained my issue.
Proving $f$ is a constant function
Let $f$ be a $2\pi$ periodic entire function satisfying $|f(z)|\leq 1+|{\rm Im}\; z|$.
I am trying to show $f$ is constant.
Initially I thought it was very easy and that I could apply Liouville's Theorem. But I realized proving $f$ is bounded is not straightforward. This must have something to do with the period of the function; I do not see what I can do with that.
One approach: by the boundedness of its growth at infinity (less than linear), you know it's a polynomial, by periodicity you know it's constant
By the boundedness of its growth at infinity (at most linear) and Cauchy's inequality, you know it's a polynomial, by periodicity you know it's constant
Okay, first the bounded growth at infinity implies polynomial: suppose $|f(z)| \leqslant C|z|^n$ for $|z|$ sufficiently large; then I claim that $f^{(n+1)}(z) \equiv 0$. Indeed, to see this, for any $w \in \mathbb{C}$ we have $$f^{(n+1)}(w) = c\int_\gamma \frac{f(z)}{(z - w)^{n+2}} dz$$ where we can take $\gamma$ to be a circle of radius $R$ about $w$. Now, the integral is bounded by $$2\pi R \, c \max_{|z-w| = R} \frac{|f(z)|}{|z-w|^{n+2}} = c \, \frac{\max_{|z - w| = R} |f|}{R^{n+1}}$$ where I lumped the constants together. Now applying our bound on $f$ and taking $R$ to infinity gives the result.
(In our case we have $|f(x+iy)| \leqslant 1 + |y| \leqslant x^2 + y^2 = |z|^2$ for big enough $|z|$, so I guess we know $f$ is quadratic.)
For the second bit, a periodic polynomial is constant: if it's not constant, it has a root $w$, so infinitely many roots $w + 2\pi \mathbb{Z}$.
Aww thanks guys!
Zentest - How to stop automatic testing when red
I am trying to configure autotest so that when I run my test suite and I have a failing test, autotest stops testing and waits for me to make a change before testing again. With my current configuration, autotest keeps testing indefinitely when it encounters a failing test, making it a bit of a hassle to deal with (having to tab into the terminal and stop the autotest server every time I get a failing test).
I am working on a rails app using RSpec, Zentest and Spork.
Relevant Gem Versions:
autotest (4.4.6)
rspec (2.6.0)
rspec-core (2.6.4)
rspec-expectations (2.6.0)
rspec-mocks (2.6.0)
rspec-rails (2.6.1)
spork (0.9.0.rc8)
ZenTest (4.5.0)
My .autotest file:
module Autotest::Notify
def self.notify title, msg, img, pri='low', time=3000
`notify-send -i #{img} -u #{pri} -t #{time} '#{title}' '#{msg}'`
end
Autotest.add_hook :ran_command do |autotest|
results = [autotest.results].flatten.join("\n")
output = results.slice(/(\d+)\s+examples?,\s*(\d+)\s+failures?(,\s*(\d+)\s+pending)?/)
folder = "~/.autotest_icons/"
if output =~ /[1-9]\d*\sfailures?/
notify "FAIL:", "#{output}", folder+"failed.png", 'critical', 10000
elsif output =~ /[1-9]\d*\spending?/
notify "PENDING:", "#{output}", folder+"pending.png", 'normal', 10000
else
notify "PASS:", "#{output}", folder+"passed.png"
end
end
end
Note: Most of my .autotest file was to get popups working in linux to display if my tests are passing or failing.
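As an aside, the summary-parsing regex from the .autotest file above can be sanity-checked outside of autotest. This standalone sketch feeds it typical RSpec summary lines; the classification logic mirrors the hook's pass/fail/pending branches:

```ruby
# Standalone check of the summary-parsing regex used in the .autotest hook.
SUMMARY = /(\d+)\s+examples?,\s*(\d+)\s+failures?(,\s*(\d+)\s+pending)?/

# Mirrors the hook's branching: failures win, then pending, else pass.
def classify(results)
  output = results[SUMMARY]          # String#[] with a Regexp returns the match
  return nil unless output           # no RSpec summary line found
  if output =~ /[1-9]\d*\sfailures?/
    :fail
  elsif output =~ /[1-9]\d*\spending/
    :pending
  else
    :pass
  end
end

puts classify("10 examples, 0 failures")            # => pass
puts classify("10 examples, 2 failures")            # => fail
puts classify("10 examples, 0 failures, 1 pending") # => pending
```

Checking the regex in isolation like this makes it easier to see that the notification branches (not the regex) decide what counts as red.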
I have been searching for an answer to this problem for a while and have had no luck and I have found it very difficult to get my hands on some good documentation for Autotest. I have been staring at the RDoc for Zentest for quite a while now as well and I must just be missing something.
Any help, links to examples, etc, would be greatly appreciated.
I am wondering if your .autotest file is doing something to make autotest recurse. I would try to remove possible causes one at a time. For example, remove Spork from the mix, run autotest with a known error and see if it keeps going. If it does, move your .autotest file somewhere temporarily, effectively running without an .autotest file, and try again. Whenever I run into a non-obvious problem I try to walk the logic chain and make small changes until I can isolate the problem.
PS: Autotest should never keep running tests after a given code change. It should run the affected tests, report the results (green or red) and then stop and wait for more changes. Something in your configuration is causing this anomaly.
Thanks for the response. To be honest the problem seems to be rather random. I have been running with the same configuration now for a while and autotest will only re-run tests when they are red once in a while (maybe 1/10 times). I can't really find a relation between the tests I am running and the strange autotest behavior but I will keep trying. Thanks for your advice and if I end up getting a definitive answer then I will post it. My main problem now is that every time I think I have fixed it, it ends up behaving the same just less often.
Interesting. I'd like to see the answer if you find one. Good luck.
Try running with "autotest -v", which will show you what file change triggered the rerun. Then add an exception to .autotest.
I had the same problem and found that my development server was writing to the log file. After I added this to my .autotest file, the problem went away:
Autotest.add_hook :initialize do |at|
at.add_exception(%r{^\./\.git})
at.add_exception(%r{^\./log})
end
I saw a similar problem with ZenTest when I had a gem that was writing data to a directory that ZenTest was monitoring. IIRC, it was a gem that did full-text searching -- the index file the search generated was triggering ZenTest to run again, thereby regenerating the index.
I tracked down the problem by modifying the Growl notifications to tell me which files were triggering the autotest run (I was running on a Mac at the time).
The solution was to add an exception/exclude to the .autotest file, to tell it to ignore the index.
(I've just seen : Spork is repeatedly re-running failing tests in autotest which sounds very similar to your problem)
How can I prevent frost buildup on a concentric water heater intake?
I have a hot water tank with a concentric PVC vent that's both an intake and exhaust for the tank.
Whenever it's -30 Celsius and below (I'm from Canada), the air intake on the concentric vent clogs up with frost, causing the hot water tank to stop working. When this happens, I need to go outside to clear out the frost.
Does anyone have any tips for this problem? I've seen some posts where people recommend insulating the pipe. What kind of material would you recommend to insulate the vent to ensure it doesn't frost?
Does this answer your question? How do I get an exterior vent to stop frosting over in a cold Canadian winter?
This can be a tough one, as a heat pump is going to make more cold, and with a sealed-combustion unit, pulling in air through the outer chamber will cool it quickly. Adding a heat tape may be your best option, with insulation on the outside of that. I like neoprene insulation; it is usually the most expensive, though.
Edit: Be wary that I am in no way suggesting putting fiberglass insulation in a position to be exposed to weather, nor am I in any way suggesting there is no value in insulating your existing PVC. Fiberglass insulation should not be exposed to weather, and by all means start by insulating your existing ductwork if you want to start that way; even if that doesn't solve your problem, you might be able to reuse it if you move on to another approach such as may be outlined below or something similar.
You have a lot of input here, between the various answers and comments in response to your question, that is the equivalent of throwing parts at car trouble. To carry that analogy: just because you get an emissions fault doesn't mean it's your evap canister; your gas cap seal might be compromised. That is, just because you're getting ice doesn't mean it's a lack of insulation. Try looking at all parts of the problem; ice needs moisture and low temps to form.
In layman's terms, are you sure this ice is forming from natural precipitation? If you built a little roof over your wall cap (or, if you had a roof vent, above your roof cap) to keep snow and rain from falling on it, would you still have a problem? If not, then it's the hot air from your exhaust, carrying moisture from inside and absorbing humidity from outside, then condensing and ultimately freezing on the cold surface of your intake cap.
Here are some resources that will help you understand how to approach your problem; for others who don't understand my answer, please read these so you become an informed user:
Exhaust ducting that carries moisture, same principle as your concentric exhaust duct
You might want to find your specific exhaust model's installation instructions something like the document this links to and check that it was properly installed, especially that it is far enough away from other moisture sources
Another, Canadian-specific, incredible resource to help you understand weatherization and how insulation functions: where it is effective and how to use it. No reputable resource I know of ever recommends applying insulation outside the thermal envelope of a building, for what I thought were obvious reasons.
Figure 2-7 from this resource demonstrates the concept of thermal "short circuiting", where the insulation plane is interrupted at the top of the foundation wall over the plate and the boldest arrow shows the greatest transmittance (although in the context of the article it is demonstrating moisture, the same is true of heat). This is the same principle that renders insulation outside the building envelope practically useless. That is, there's no sense in insulating an open pipe; without a cap, heat is just going to radiate away out the end. And when the exhaust blows, there is already a hot envelope around the intake, so there's no heat lost to the inner pipe; neither are you going to insulate the cap itself, for obvious reasons, so an uninsulated exterior portion is getting as much warming as it can from the exhaust regardless of whether it's insulated. Best practice, and building science, puts insulation inside the building except in the case of conveyance components of mechanical systems (like refrigerant supply lines, for example).
If you can give us your make/model information for the ducting and some photos, folks might be able to give you an exact solution. Not always, but it usually helps a lot, in your situation there are quite a number of concentric exhaust designs and figuring the best way to deal with your problem heavily depends on the specifics of the design.
Without specifics, some general notes then in order from what you should do first:
I think you may be thinking about the problem wrong if you're assuming insulation is your answer. The first line should be mitigating the moisture that is causing the problem, not preventing the moisture from freezing... you need to find a way to direct the moisture away from your intake. Once you get a cap, aka concentric exhaust "head" (the part that projects out of the structure), or find a means to install a deflector to keep the moist exhaust air from getting pulled back to the intake, then your next step should be to insulate the ducting on the interior of your building, starting at the exterior wall and working your way back to conditioned space. Keep combustible materials an appropriate distance from any ignition (including exposed exhaust and/or heat) sources.
As to your specific problem, I'm not sure, but your post might be a duplicate. I'm fairly certain you'll find your problem along with a solution here: Someone with what sounds like a nearly identical problem in a nearly identical climate posted here on DIY.SE before your post. Maybe you saw this and it's not entirely what you're dealing with, but maybe you didn't.
If you did read that, and that's not your huckleberry, before you insulate, try replacing your PVC with typical metal duct and exhaust cap. That way the exhaust heat will easily transfer to the intake parts. PVC is not a great conductor, so the heat from your exhaust is not transferring as much as it could to the intake. Then insulate. Insulating your PVC duct might help, but the extreme cold might be overwhelming the lower conductivity of the PVC... that is, you might be dealing with a material limitation: Even with infinite insulation on the PVC ducting, the material properties itself may not be able to transmit enough heat through itself to overwhelm the extreme cold to the point of melting. Metal ducting and wall cap will transmit drastically more (that goes both ways), so when your exhaust is running, the material itself will be heating up and radiating that out to the fins on your wall cap far better than PVC ever could.
If neither of those steps are where you want to start, go ahead and insulate the duct with whatever you want. Generally fiberglass unfaced rolls are an easy wrap-it-yourself solution because you can build it up as much as you like. This goes inside the building. No insulation should go on the wall cap (the part outside the building). Just take care to note any specified offsets required to stay fire safe between insulation and heat source, as mentioned above - think fire safety.
Sure, my post might be frustrating to everyone out there that says "well, I've always insulated my PVC concentric exhausts - dual, single, and otherwise - and that's always taken care of the problem." Go ahead, try insulation first before you rip your existing exhaust out and replace with metal. But I want to give you insight in case insulation alone isn't enough, and I want to help the next person searching the internet saying "I live at the north pole, and no matter how much insulation I put on this thing, it still freezes up." Downvote me to oblivion if you want to, but my answer and the order of recommended operations still stands as best practice.
Foam plastic pipe insulation for the size of the outer pipe.
You want something closed-cell / waterproof, since it's out in the weather. Possibly an overwrap of aluminum tape to prevent sun damage.
Definitely closed cell. I use neoprene wrap. The aluminum tape would extend the life but not used with most neoprene jobs but at this extreme temp a good idea.
This does very little, short of marginally improving the extent to which the exterior portion of your ducting radiates the heat from the exhaust through the material itself and any heat from a conditioned/heated space directly inboard of the wall. The fact that cold air is surrounding and entering the exterior portion of the duct (that portion exterior of any baffle, such as the cap) will render any insulation outside the building envelope what we call "short circuited" - think of a cold beer with a beer cozy that just wraps the sides, which you set on the hot sand at the beach. Not worth much.
I have heat tape on the intake pipe, but when the weather drops to -25 the hot water tank will not start.
SO----- I get dressed up and go out with my wife's hair dryer and a power cord, and put it into the intake pipe. Turn the hair dryer on HIGH, and after 3 to 5 minutes the hot water tank will start and heat the water.
I have been doing this for the past 6 years whenever the weather is below -25 C and the wind is coming from the southeast.
You get a little chilled being outside, but this works.
Please see [answer] and take the [tour]. We're not a discussion forum and this shouldn't have been posted as an answer.
Is it possible to track how many event listener does the specific smart contract have?
I'm wondering if the above question is possible in ethers or web3? If yes, is it possible to get their IP as well? Thanks!
Umh, no.
Each event listener uses a node to read its data. The closest thing you can do is track nodes with something like https://ethernodes.org/ , but you won't have much knowledge about what the nodes are used for. In theory you might be able to (somewhat) track which node broadcasts which transaction, if you have direct access to many nodes, but in reality you can't do even that. And for read-only operations (such as event listening) there is no way to know.
"process launch failed: timed out trying to launch app" when launching AdHoc app
When I have Xcode run the AdHoc version of my app on my test iPhone, the app starts launching, but then crashes and gives me the following error:
process launch failed: timed out trying to launch app
Here are my Code Signing settings:
Any idea what could be causing this crash? It doesn't crash like this when I run it in iOS simulator with the same settings.
Here is the console:
Sep 8 13:52:32 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Fetching NTP time from "time.apple.com" count:3 timeout:60.00
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>96 wlan.A[4990] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -70
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>68 wlan.A[4991] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -71
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>54 wlan.A[4992] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -70
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>49 wlan.A[4993] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -71
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>39 wlan.A[4994] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -70
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>58 wlan.A[4995] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -71
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>19 wlan.A[4996] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -70
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>90 wlan.A[4997] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -71
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>88 wlan.A[4998] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -69
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>36 wlan.A[4999] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -71
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>10 wlan.A[5000] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -70
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>03 wlan.A[5001] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -71
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>13 wlan.A[5002] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -70
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>25 wlan.A[5003] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -71
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>78 wlan.A[5004] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -70
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>17 wlan.A[5005] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -71
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>80 wlan.A[5006] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -70
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>08 wlan.A[5007] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -72
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>91 wlan.A[5008] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 100, fAverageRSSI -70
Sep 8 13:52:32 Andrew-Ghobrials-iPhone kernel[0] <Debug>:<PHONE_NUMBER>59 wlan.A[5009] AppleBCMWLANNetManager::updateLinkQualityMetrics(): Report LQM to User Land 50, fAverageRSSI -71
Sep 8 13:52:32 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Got NTP time 2014-09-08 17:52:32 +0000 ± 0.04 at<PHONE_NUMBER>325 (delay 0.05, dispersion 0.01)
Sep 8 13:52:32 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: NTP succeeded with 431891552.62±0.04 at<PHONE_NUMBER>325
Sep 8 13:52:32 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Received time 09/08/2014 17:52:32±0.04 from "NTPLowConfidence"
Sep 8 13:52:32 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Not setting system time to 09/08/2014 17:52:32 from NTP because time is unchanged
Sep 8 13:52:32 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Want active time in -0.00hrs. Need active time in 83.33hrs.
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Got NTP time 2014-09-08 17:52:32 +0000 ± 0.03 at<PHONE_NUMBER>670 (delay 0.05, dispersion 0.01)
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: NTP succeeded with 431891552.62±0.03 at<PHONE_NUMBER>325
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Received time 09/08/2014 17:52:32±0.03 from "NTP"
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Not setting system time to 09/08/2014 17:52:32 from NTP because time is unchanged
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Want active time in 41.23hrs. Need active time in 124.56hrs.
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Got NTP time 2014-09-08 17:52:32 +0000 ± 0.11 at<PHONE_NUMBER>172 (delay 0.20, dispersion 0.02)
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: NTP got responses from 3 servers total
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: NTP succeeded with 431891552.62±0.03 at<PHONE_NUMBER>325
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Received time 09/08/2014 17:52:32±0.03 from "NTP"
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Not setting system time to 09/08/2014 17:52:32 from NTP because time is unchanged
Sep 8 13:52:33 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Want active time in 41.23hrs. Need active time in 124.56hrs.
Sep 8 13:54:59 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Received timezone "America/New_York" from "Location"
Sep 8 13:54:59 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Current mcc: '0' simulated:'0'.
Sep 8 13:54:59 Andrew-Ghobrials-iPhone timed[18] <Notice>: (Note ) CoreTime: Not setting time zone to America/New_York from Location
Turns out it was because I had to change the debug to development profiles.
You had to change the Distribution to development profile in the Debug Signing Identity - please don't fix up the correct naming, as it is crucial for others to understand the solution. Besides, this is what the other guy also pointed out ;D
| common-pile/stackexchange_filtered |
Relocate filters on an Excel worksheet with a Pivot Table
In my worksheet where I have my pivot table I have many different filters to chose between.
To the eye it doesn't really look nice and I want to be able to maybe split that long list of filters into a few shorter ones. But I can't figure out how to do this. I've seen where I can move the whole pivot table, but then it's all included and as one piece that can't be split.
Does anyone know if this is possible?
In Excel 2010 you have an option to "replace" filters with Slicers. You can choose the slicer in the Pivot Table's Options Tab (Sort & Filter group). Slicers can be moved and formatted much easier than traditional pivot filters. They also have an extra advantage of being able to feed multiple Pivot Tables with the same filtering.
Note: If you're using Powerpivot, slicers are built into the Pivot Table Field List, making them even easier to get to and use.
Thanks for the reply! Yeah, I know about slicers and I am using them to filter some variables that I want to apply across all the pivot tables. I also know they can be moved etc., but I still want to use the "normal" filters. So I'm guessing you don't know how to relocate the traditional filters on the sheet, if it's even possible?
Take a normal PivotTable with multiple Report Filters (In this example – Region, Product Description, Business Segment, Customer Name).
Right click anywhere inside the PivotTable and select PivotTable options.
In the Options dialog box, you’ll see a setting called ‘Report filter fields per column’.
Change this setting to specify the number of filter fields you’d like to have in each column. In this example, I’d like to break up my Report filters so that only two are showing in each column.
Once implemented, the Report filters are broken out so that only two show up in each column.
This is a nice feature if you want your Report filters to conform to the general shape of your PivotTable.
Here is a PivotTable where the Report filters are set to show only one per column.
What example are you referring to? Please edit your answer to include the example.
| common-pile/stackexchange_filtered |
How to customize the customer account after login according to user group
I need to show the different options according to group of the customer in MY ACCOUNT section.
For example, I need to show only My profile, Account Information, and social media account links in the MY ACCOUNT section of the customer dashboard. Some of the links will be custom controller actions, so how can I show these links and manage the session so that all of this happens only when the customer is logged in?
you want to change navigation for the customer?
Yes, I want to show different links according to the group of the user. These links contain custom controller actions, so I also need to manage the session on those pages.
@QaisarSatti Can you please have a look at my question http://magento.stackexchange.com/questions/133806/magento-2-1-how-to-extend-product-listing-in-your-custom-product-list-and-appl
Step 1: Go To ( YourTemplate/customer/account/navigation.phtml )
Step 2: Replace This Line: <?php $_count = count($_links); ?>
<?php $_count = count($_links); /* Add or Remove Account Left Navigation Links Here -*/
unset($_links['account']); /* Account Info */
unset($_links['account_edit']); /* Account Info */
unset($_links['tags']); /* My Tags */
unset($_links['invitations']); /* My Invitations */
unset($_links['reviews']); /* Reviews */
unset($_links['wishlist']); /* Wishlist */
unset($_links['newsletter']); /* Newsletter */
unset($_links['orders']); /* My Orders */
unset($_links['address_book']); /* Address */
unset($_links['enterprise_customerbalance']); /* Store Credit */
unset($_links['OAuth Customer Tokens']); /* My Applications */
unset($_links['enterprise_reward']); /* Reward Points */
unset($_links['giftregistry']); /* Gift Registry */
unset($_links['downloadable_products']); /* My Downloadable Products */
unset($_links['recurring_profiles']); /* Recurring Profiles */
unset($_links['billing_agreements']); /* Billing Agreements */
unset($_links['enterprise_giftcardaccount']); /* Gift Card Link */
?>
if you want to do this in module then override
<customer_account>
<reference name="customer_account_navigation">
<action method="setTemplate"><template>modulename/customer/account/navigation.phtml</template></action>
</reference>
</customer_account>
for checking the session of the user:
if(!Mage::getSingleton('customer/session')->isLoggedIn()){
$this->setFlag('', 'no-dispatch', true);
$this->_redirect('customer/account');
return ;
}
getting the customer group
$customerData = Mage::getSingleton('customer/session')->getCustomer();
$customerData->getGroupId()
Thanks for the answer. Let me do this way. will get back to you. Thanks again.
for Step 1 you mean this path: app/design/frontend/base/default/template/customer/account/navigation.phtml
@muditmehrotra yes
Now I know how to show/hide the navigation link according to the group but how to add a custom controller action as a link and open its content in the dashboard. For example, Controller action->Profile View->template/modulename/profile.phtml
Now I need to add the profile as link in the navigation and show the profile.phtml contents inside the dashboard.
for that you have to rewrite the customer controller.
Is there any example for that, so I can do it as well?
here http://magento.stackexchange.com/questions/92172/how-to-override-the-controller-accountcontroller
Please, check my question. Would be great help. Question Link
any suggestion on my question?
http://magento.stackexchange.com/questions/134448/how-to-add-price-for-the-customer-group-and-add-it-to-the-customer-registration
Can you please check my another question here :
http://magento.stackexchange.com/questions/134782/magento-1-9-how-to-add-customer-group-on-registration-page
any suggestion on my question? Why does my registration page not show group names?
| common-pile/stackexchange_filtered |
Team Foundation Server branching not working in any form
I have tried branching in a very simple sample project and it worked. Now I want to branch a real life project and it is simply not working.
When I try to branch the whole team project, TFS asks for a destination. If I choose a new destination, it tells me that the destination does not exist. If I create a new one and point to it, it tells me that the folder already exists.
When I try to branch the team project into a sub-folder within the team project, it tells me that this procedure can't be done, fair enough.
But when I try to branch a single project within the team project to another sub-folder it tells me that there was no correct mapping ('keine passende Zuordnung', in German I don't know the exact English error message).
Any help on this is very appreciated. I fail to see what I do differently here than I did before in my test project.
Edit: As suggested, I post an image of my project structure. The upper folder is my actual project, which I have converted from a team project to a branch. The second one is the destination folder, which is empty.
When branching a whole team project ($/ProjectName), you need to use the New Project Wizard to create a new project and specify that it should branch from your current project.
When branching a sub-folder of your team project, that should work, unless a parent of that sub-folder is marked as a branch root, in which case there is no location to branch to.
Any folder that either holds a branch root as a child, or has a branch root as a parent, cannot be used to create a new branch:
On the commandline try running tf branches . from the folder that you want to branch (to see if it is part of a branch) and from the folder you want to branch to. If the target folder is already under a branch, you can't branch to it. You might need to use the Convert to Folder option in the Source control explorer to allow branches to be created there.
It looks like you've already created the target folder, and the target folder is already a branch. You haven't described how that came to be, if it's a result from a Branch action on the source folder, then instead of choosing Branch pick Merge instead.
If there is no relationship between the two folders then it won't be pre-populated in the list of possible merge targets. If you're using Visual Studio 2013 you can enter the path manually and TFS will create the relationship by doing a baseless merge. If you're using an older version of Visual Studio you may need to create this relationship from the commandline:
tf merge "$/TeamProject/Machinenzustandsanzeige" "$/teamproject/Machinenzustandsanzeige NC-Prä" /baseless /recursive /collection:{uri}
You can also destroy the target branch that you've created using the commandline and then re-attempt the branch, which should then succeed.
tf destroy "$/teamproject/Machinenzustandsanzeige NC-Prä" /recursive /collection:{uri}
tf branch "$/teamproject/Machinenzustandsanzeige;T" "$/teamproject/Machinenzustandsanzeige NC-Prä" /recursive /collection:{uri}
tf checkin "$/teamproject/Machinenzustandsanzeige NC-Prä" /recursive /collection:{uri}
In case the workspace isn't setup correctly yet you can either do it through the UI using the steps outlined here or from the commandline using:
tf workfold /map "$/teamproject/Machinenzustandsanzeige NC-Prä" c:\path\where\you\want\it
followed by:
tf get "$/teamproject/Machinenzustandsanzeige NC-Prä" /recursive
to effectuate the addition of the folder.
There is a very slim chance that the a-umlaut is causing the issue. Have you tried a path without special characters?
What you want to achieve is a standard operation and TFVC supports it, but somehow you ended up in a situation that is non-default. Even in such a situation you can get it fixed, but you might need to resort to advanced features such as /baseless or /force or tf destroy which are not available from the UI.
To further improve the answer it would be helpful if you could post a screenshot/textual representation of your project structure and from where you want to branch to where.
I put an image in my initial post. But since no one else is answering, this seems to be broken. My structure is far from being complicated. If TFS can't do THIS SIMPLE TASK, we will completely forget about creating branches, and look for another solution. It is SIMPLY IMPOSSIBLE FOR ME to spend my worktime on this!!!!!!!
This is not broken. Your structure is pretty simple, but there seems to be an incorrect order to the actions which brought you in your current situation. I added some 'advanced' commands that may get you out of there. If you want additional support, I might have time to chat later.
It sounds like the workspace mappings are not there for the folder being branched to. Can you show your workspace mapping as well?
Added some guidance to setup workspaces. See also: http://msdn.microsoft.com/en-us/library/ms181383.aspx#edit
| common-pile/stackexchange_filtered |
How to listen to active tab change cross windows in a chrome extension
In a chrome extension you can listen for tab changes like this
chrome.tabs.onActivated.addListener(function (activeInfo) {
console.log(activeInfo.tabId);
});
So, if you select a tab in the same browser window (a browser window has 1 or more tabs), this code will trigger. But if you have 2 windows (both Chrome), the above code will not trigger if you switch focus between windows. I hoped that this would also count as a tab activity change, but it's not. So my question is, how can I detect a change in focus between browser windows?
To provide a little bit of context of what I'm trying to achieve: I need the URL of the active tab, so if the user switches to another window, my extension needs to know this and extract the URL of the new active tab.
cross browser in a chrome extension? chrome extensions don't work cross browser, they're for "chrome" - the clue is in the name
I mean within the same browser, so chrome -> chrome. So you have two chrome windows. Each with tabs open, and I would like a trigger when the users switches windows, not tabs
oh, - ok - seems intrusive to monitor what a user has open in other tabs though
I have updated the question a bit to make this more clear
Maybe you can find something useful is this question: Chrome Extension: What is the active tab in 2 opened windows?. Maybe you can combine with chrome.windows to get a trigger when the window is changed?
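The question never got a full answer in this thread, but building on the chrome.windows suggestion in the comment above, one possible sketch (untested in a real extension; API names taken from the Chrome extensions documentation: chrome.windows.onFocusChanged, chrome.windows.WINDOW_ID_NONE, chrome.tabs.query) listens for window focus changes and then re-queries the active tab of the newly focused window:

```javascript
// Sketch: react to focus moving between Chrome windows, then look up the
// active tab of the newly focused window. `chromeApi` is the extension's
// `chrome` object; it is injected as a parameter purely so the logic can be
// exercised outside a browser.
function watchActiveTab(chromeApi, onUrl) {
  chromeApi.windows.onFocusChanged.addListener((windowId) => {
    // WINDOW_ID_NONE is reported when all Chrome windows have lost focus.
    if (windowId === chromeApi.windows.WINDOW_ID_NONE) return;
    chromeApi.tabs.query({ active: true, windowId: windowId }, (tabs) => {
      if (tabs[0]) onUrl(tabs[0].url);
    });
  });
}

// In a background script this would be: watchActiveTab(chrome, console.log);
```

Combined with the chrome.tabs.onActivated listener from the question, this should cover both tab switches within a window and focus switches between windows.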
| common-pile/stackexchange_filtered |
How to convert nvarchar value '********' to data type int
How do I convert a nvarchar value '********' to data type int?
There is a column, TransactionDate, some of whose values are '********'.
select
PolicyNumber
,PlanType
,Premium
,TransactionDate
,PostDate
from
EasyCover
where
transactionDate >= 20170101
Do you want to convert it from nvarchar to int or nvarchar to datetime?
Is transactionDate of type int or datetime?
Do not store dates as strings.
data type of TransactionDate is nvarchar(50)
when I run the code it gives this error "Conversion failed when converting the nvarchar value '********' to data type int"
You can't convert that value to int, it's not possible.
'********' isn't a number, so what number are you expecting? Why is your TransactionDate column an nvarchar(50) and why are you passing it an int? A date should be stored as a date, and when you pass a value to it it should be a literal string in the format yyyyMMdd or yyyy-MM-ddThh:mm:ss(.sssssss).
You should indeed not store dates in varchar types - store them where they belong, in a DATE type column!
You have to use single quotes for a date literal like that, i.e. '20170101'.
I suggest that you read about SQL Server's datatypes, https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-types-transact-sql?view=sql-server-2017. The purpose of datatypes is to limit the kinds of data that you can store in a column. Each datatype also has a related set of operators. For example you can use arithmetic operators such as plus, and minus with integer datatypes but you cannot use plus and minus with character datatypes.
Even if there were no rows in your data where TransactionDate had a value of **********, your query as written would still throw an error.
Consider this query:
declare @t table
(
NotADate nvarchar(50)
);
insert @t
values (N'2018-01-01 14:30:00.000'),(N'2019-01-01 13:30:00.000');
select *
from @t
where NotADate > 20170101
Msg 245, Level 16, State 1, Line 4
Conversion failed when converting the nvarchar value '2018-01-01 14:30:00.000' to data type int.
In your table, you don't have any actual date values. You have strings. Date functions won't work, or, worse, will sort of work but will produce unexpected, probably incorrect results. Also, in your WHERE clause, you're using an integer value, 20170101. That's a number, not a date.
To accomplish what you want, first you have to tell SQL Server to treat the string field TransactionDate as though it were a date field, at least as often as possible, i.e. ignoring those asterisk values. You'll accomplish this with TRY_PARSE (or TRY_CAST or TRY_CONVERT if you prefer).
Then, you'll want to tell the engine to do an implicit conversion on your WHERE clause, which you can accomplish just by wrapping the predicate in single quotes.
Using the same set up as above, but with an asterisk entry this time, here's how your WHERE clause should look, and work:
declare @t table
(
NotADate nvarchar(50)
);
insert @t
values (N'*****************'),(N'2019-01-01 13:30:00.000');
select *
from @t
where TRY_PARSE(NotADate AS datetime USING 'en-US') > '20170101'
Results:
+-------------------------+
| NotADate |
+-------------------------+
| 2019-01-01 13:30:00.000 |
+-------------------------+
You cannot convert '********' to int, because it's not a number. You can use try_cast, try_convert etc. to avoid the error that you mentioned, but you will get a NULL instead of '********'.
TRY_CAST(TransactionDate as int)
| common-pile/stackexchange_filtered |
Run all csv columns at the same time
I have a big block of code that repeats the same thing, for example:
What I did was create a csv with the following:
column1, column2, column3
I read the csv in a for loop
for row in reader:
However, the above gets me the value one by one, not all columns at once.
Do you mind sharing an MCVE?
Yes,
Right now. With the above code, I get all the values (temptreevalue, tempwatervalue, tempplantvalue, tempxvalue) and publish them using Json.
What I'm trying to accomplish is instead of doing everything manually, to have a csv file that can replace the manual function.
Is that what you are looking for? It will assign the column1, column2, column3 values to the x, y, and z variables. In Python it's called unpacking. Basically, you assign each value in the list (row) to the variables.
x, y, z = row
Here is an example of a complete code:
import csv
with open("some_file.csv") as f:
reader = csv.reader(f)
for row in reader:
x, y, z = row
print(x, y, z)
you can do the same if you just need to print output instead of variables:
print(*row)
Hey Vlad, Thanks for the help. That gives me all the rows at once, if I do a time.sleep(1) I get one row every second, not all rows in one second.
What I'm trying to do is publish json
"temptreevalue": temptreevalue, "tanklevel": tanklevel, "watervalue": watervalue
Take a look at yield. It will allow you to pull one row at a time. Let me know if that is what you're looking for. https://pythontips.com/2013/09/29/the-python-yield-keyword-explained/
If you don't expect the index or structure of the csv to change, I would create a list and enumerate or zip through them.
listValues = [x,y,z]
csvCols = ['column1','column2','column3']
mapped = set(zip(listValues,csvCols))
For reference: https://www.geeksforgeeks.org/zip-in-python/
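Since the larger goal mentioned above is to publish each row as JSON, here is a sketch (not from either answer; io.StringIO stands in for the real csv file) using csv.DictReader, which keys each row by the header line so a row can be serialized directly:

```python
import csv
import io
import json

# csv.DictReader maps each data row to a dict keyed by the header row, so a
# whole row can be serialized straight to JSON instead of being unpacked into
# separately named variables. io.StringIO stands in for the real csv file.
data = io.StringIO("column1,column2,column3\n1,2,3\n4,5,6\n")
payloads = [json.dumps(row) for row in csv.DictReader(data)]
print(payloads[0])  # {"column1": "1", "column2": "2", "column3": "3"}
```

Each payload string could then be handed to whatever publish call the project already uses.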
| common-pile/stackexchange_filtered |
Java Caesar Cipher replacing a char in a string with an array value
Okay, so for a school project I am attempting to make a method that will take a file, use a caesar cipher to 'encrypt' it, and then output the new encrypted word to an output file, reading in all the words of the file to leave us with a separate encrypted file.
The problem I'm running into is I get a
"Unexpected type. Required: variable. Found: value" error
whenever I try to replace the character with the new one.
Here's my encryption method, so hopefully that will be enough to find the problem.
public static void encrypt(File inputFile, int key) throws FileNotFoundException
{
char[] alpha = new char[26];
for(char ch = 'a'; ch <= 'z'; ch++)
{
int i = 0;
alpha[i] = ch;
i++;
}
Scanner in = new Scanner(inputFile);
while(in.hasNext())
{
String word = in.next();
word = word.toLowerCase();
for(int i = 0; i < word.length(); i++)
{
for(int j = 0; j < alpha.length; j++)
{
if(word.charAt(i) == alpha[j])
{
word.charAt(i) = alpha[j + key];
}
}
}
}
}
As Strings are immutable you can not do
word.charAt(i) = alpha[j + key];
Consider using a StringBuilder instead, like
StringBuilder buf = new StringBuilder();
for(int i = 0; i < word.length(); i++)
{
    char c = word.charAt(i);
    boolean shifted = false;
    for(int j = 0; j < alpha.length; j++)
    {
        if(c == alpha[j])
        {
            buf.append(alpha[(j + key) % alpha.length]); // wrap around past 'z'
            shifted = true;
            break;
        }
    }
    if(!shifted)
    {
        buf.append(c); // keep non-alphabetic characters unchanged
    }
}
String encrypted = buf.toString(); // write this to the output file
Thanks, that seems to have solved that problem. Just curious, how would you print this out to a file now? I need to print one word at a time obviously, but how would I go about doing that without overwriting the save file each time?
@Redfox2045 I am glad it helps. Please consider to upvote and/or accept this answer.
@Redfox2045 As per your other question, please review http://stackoverflow.com/questions/8563294/modifying-existing-file-content-in-java
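As for the follow-up about writing one word at a time without overwriting the save file: a minimal sketch (file name and words hypothetical) using FileWriter's append mode, i.e. the `true` flag in its constructor:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AppendDemo {
    // Append one word plus a separating space. The `true` flag opens the file
    // in append mode, so repeated calls add to the file instead of replacing it.
    public static void appendWord(Path file, String word) throws IOException {
        try (FileWriter out = new FileWriter(file.toFile(), true)) {
            out.write(word + " ");
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("encrypted", ".txt"); // hypothetical output file
        appendWord(file, "ifmmp");
        appendWord(file, "xpsme");
        System.out.println(Files.readString(file)); // prints "ifmmp xpsme "
    }
}
```

For many words it would be cheaper to keep one FileWriter open for the whole loop rather than reopening the file per word, but append mode illustrates the non-overwriting behaviour.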
| common-pile/stackexchange_filtered |
Compiler error after deploying to Azure
I have a cloud service which has a web role and a worker role, using queues, storage and tables. I published it to Azure and everything seemed fine. When I click on the web app URL I get the following error:
Compiler Error Message: The compiler failed with error code -2146232576.
If I deploy the web app locally everything works fine! Can someone help me please?
I tried looking online but I couldn't find a solution and I'm very new to this.
What's your Roslyn compiler version? If you are using Roslyn compiler 2.0+, you need to install .net framework 4.6+ on your server.
Are you doing a "Publish" directly from Visual Studio?
Try nuking the web root and publishing again. It's likely due to file conflicts.
https://<yoursite>.scm.azurewebsites.net/DebugConsole/?shell=powershell
Get-ChildItem 'D:\home\site\wwwroot' | Remove-Item -Recurse -Force
Worked for me... thanks for saving me two hours of troubleshooting.
This worked for me once, but the same issue arose again and I was not able to fix it this time.
| common-pile/stackexchange_filtered |
difference between User.username and username?
In CakePHP 2, what's the difference between:
echo $this->Form->input('User.username');
and
echo $this->Form->input('username');
??
Thanks.
depends on your form
if you use $this->Form->create('User') there is no difference, because the field belongs to the same model
but if you save related data then you need to use the descriptive version to tell cake which model the field belongs to.
The first explicitly defines the User model's username field, while the second falls back to the controller's default model, which is User if you are in your UserController. But if your controller is e.g. PostsController, your Post model would also need a username field in order to e.g. save the form data.
| common-pile/stackexchange_filtered |
Showing that $\bigcap A$ is the least element for the set $A$ where $A$ is a set of ordinals.
The notes I am reading define a set $x$ to be an ordinal provided $x$ is transitive and every element in $x$ is transitive.
Let $A$ be a set of ordinals. I have shown that $\bigcap A$ is an ordinal. I am having trouble showing that $\bigcap A \in A$.
I realized this is obvious in the finite case. Let $x$ and $y$ be ordinals. Then $\bigcap \{x, y\} = x \cap y$. Since ordinals are linearly ordered (which I have shown), either $x \in y$ or $x = y$ or $y \in x$. Suppose $x \in y$; then I can show that $x \cap y = x$.
Sorry if this is a dumb question, since this seems simple.
Perhaps one way to do it is to note that if $\alpha$ and $\beta$ are ordinals, and $\alpha\subseteq \beta$, then either $\alpha=\beta$ or $\alpha\in\beta$. For example, see here.
You know that $\cap A$ is an ordinal. We know that $\cap A\subseteq \beta$ for all $\beta\in A$. If $\cap A$ is equal to some $\beta$ in $A$, then we are done. Otherwise, we have $\cap A\subsetneq \beta$ for all $\beta\in A$, and therefore, by the result noted above, $\cap A\in\beta$ for all $\beta\in A$.
But that would mean that $\cap A\in\beta$ for all $\beta\in A$, and therefore that $\cap A\in\cap A$... which is impossible for ordinals.
this makes complete sense thanks. I kept trying to look for $\beta \in A$ such that $\beta = \bigcap A$.
| common-pile/stackexchange_filtered |
How to wait for the user to click to continue the remaining automation steps
I have code to automate the filling of a form. I want to wait for the user to finish reading the notice and click manually, and then continue the automation.
codigopostal = driver.find_element(By.NAME, 'codigoPostal')
codigopostal.clear()
codigopostal.send_keys(line[4])
time.sleep(2)
enter = driver.find_element(By.XPATH, "/html/body/div[1]/div[2]/section/form/div[2]/div[2]/table[2]/tbody/tr/td[1]/input")
enter.click() # I want to wait for the user to manually click
#continue with autofill form code....
To wait for the user to finish reading the notice and click manually and then continue the automation, an elegant approach would be to wait for the user to press the RETURN key on the console as follows:
codigopostal = driver.find_element(By.NAME, 'codigoPostal')
codigopostal.clear()
codigopostal.send_keys(line[4])
time.sleep(2)
enter = driver.find_element(By.XPATH, "/html/body/div[1]/div[2]/section/form/div[2]/div[2]/table[2]/tbody/tr/td[1]/input")
input('Press RETURN key when finished reading')
#continue with autofill form code....
Thanks for your answer. This method works perfectly in the console, but not on the webpage; I want to manually click directly on the button on the webpage and then continue with the automation. It is not very practical to be jumping between console and webpage for each element in the automation. I have read a lot of different examples, such as solving a captcha manually and then continuing with the automation, etc., but I haven't found an answer.
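One possible workaround for waiting on an in-page click instead of console input (a sketch, not from the answer above; the Selenium lines in the trailing comment are hypothetical usage): inject a JavaScript click listener on the real button, then poll a flag until the user has clicked it. The polling helper itself is plain Python:

```python
import time

def wait_until(predicate, timeout=300.0, poll=0.5):
    """Poll predicate() until it returns True, or until timeout seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll)
    return False

# Hypothetical Selenium usage: register a click listener on the real button,
# then block the script until the user has actually clicked it on the page.
#   driver.execute_script(
#       "window._clicked = false;"
#       "arguments[0].addEventListener('click', () => { window._clicked = true; });",
#       enter)
#   wait_until(lambda: driver.execute_script("return window._clicked === true"))
#   # ...continue with autofill form code
```

This keeps the user on the webpage: the script simply pauses until the flag set by the real click flips to true.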
| common-pile/stackexchange_filtered |
How to replace url in js without jquery?
Guys i have many links:
"img/dasda/ja.png", "../img/no.png", "wild/rpm/gulp/in.png",
How can I cut the string at the last "/" in JS, to get just the name of the image?
What research have you done and what have you tried?
You could use String#match together with RegExp.
var links = ["img/dasda/ja.png", "../img/no.png", "wild/rpm/gulp/in.png"];
links.forEach(v => console.log(v.match(/(?!\/)\w+(?=\.)/g)[0]));
var url = "img/dasda/ja.png";
var idx = url.lastIndexOf('/');
console.log(url.substring(idx + 1));
thanks, that works
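For what it's worth, the same result can be had without a regex or index arithmetic by splitting on "/" and taking the last segment:

```javascript
// Split the path on "/" and keep the last segment, i.e. the file name.
const fileName = (url) => url.split('/').pop();

console.log(fileName("img/dasda/ja.png"));     // ja.png
console.log(fileName("../img/no.png"));        // no.png
console.log(fileName("wild/rpm/gulp/in.png")); // in.png
```

This keeps the extension; strip it afterwards with another split on "." if only the base name is wanted.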
| common-pile/stackexchange_filtered |
Mix Python script with JavaScript for HTML
I'm trying to build up my personal project and wish to have a website that has some interactivity with one of the applications I wrote as a Python script. Currently, my personal website is working and published; it mainly uses JavaScript as the scripting language. My question is: is it possible to add a Python script to the existing website without changing too much of the work that has been done already (index.html, etc.)?
What do you want to achieve in python that you can't in javascript?
Yes, you can for example use uWSGI.
| common-pile/stackexchange_filtered |
How to determine View's background colour?
I'm trying to get the colour values of my background. The background colour of my application is set to randomise on a button press, but I want to be able to then get the values of that colour in the background and determine what colour that is. This is what I got from my research but whenever I print the results to the console I just get 0s. Any help is really appreciated!
func getBackgroundColor(){
let color = self.view.backgroundColor
let colour = UIColor(red: 0.0, green: 1.0, blue: 0.0, alpha: 1.0)
self.view.backgroundColor = color
var red = CGFloat()
var green = CGFloat()
var blue = CGFloat()
var alpha = CGFloat()
print(red, green, blue, alpha)
color!.getRed(&red, green: &green, blue: &blue, alpha: &alpha)
}
Check the return value of the call to getRed(_:green:blue:alpha:). If it returns false then the color couldn't be treated as an RGB color. It might help if you show how you set the background color.
let red = self.view.backgroundColor?.ciColor.red
let blue = self.view.backgroundColor?.ciColor.blue
let green = self.view.backgroundColor?.ciColor.green
let alpha = self.view.backgroundColor?.ciColor.alpha
print("r \(red) b \(blue) g \(green) a \(alpha) ")
That didn't work for me, it crashes whenever I call the function
Try this if that code crashes:
var color = self.view.backgroundColor
var fRed: CGFloat = 0
var fGreen: CGFloat = 0
var fBlue: CGFloat = 0
var fAlpha: CGFloat = 0
if (color?.getRed(&fRed, green: &fGreen, blue: &fBlue, alpha: &fAlpha))! {
print("color \(fRed) \(fGreen) \(fBlue)")
} else {
print("error: color could not be converted")
}
| common-pile/stackexchange_filtered |
How to scale things in OpenGL to real sizes?
I am trying to make a program in OpenGL 3.3. Everything is working, but the problem is that everything looks too small. I measured out my field of view in real life and I came to the result of 46.9°. So I applied that to my projection matrix like this:
projection = infinitePerspective(radians(46.9f), float(windowWidth) / float(windowHeight), 0.005f);
but it hasn't helped much.
I made a screenshot for you that shows a one by one by one meter cube that is 3 meters away from the camera:
If I measure out one meter in real life and view it from three meters away it looks way bigger than that. Is there some coefficient needed to scale everything, so that it looks real?
"it looks way bigger than that." -- Could you explain how you came to that conclusion? What is your method for comparing the apparent size of the object in real life vs the object on your screen? What size do you expect it to be on the screen?
@Romen Just by simply comparing it. I expect some coefficient that is dependent on the screen size(real world measurements)
@t.niese What do you mean?
@httpdigest I measured my view angle using arcsin, not arctan; with arctan I get 42°.
@user11914177, Please humour me and try to explain your method for "simply comparing it". A formally defined approach to comparing an on-screen size to the real-life size you perceive will give us a framework to determine what the correct coefficient will be.
@Romen I laid down a one meter long ruler 3 meters away from my eyes. I remembered that size and then compared it to my program running on fullscreen on a 13" and 26" monitor.
@user11914177, How far away was your eye from the monitors you checked? Are you able to estimate a factor representing the difference you did see? Are you expecting the object on the screen to appear the same apparent size as in real-life, or, are you expecting it to occupy the same percentage of the field-of-vision (on screen) as it does in real life?
@httpdigest, There may be a fundamental problem with the question here that has nothing to do with the code. If I have a 3m wide TV, should the cube be rendered at 100% the width of the TV when viewed from any distance? If not, then there are real-life factors that make this problem impossible to solve without head-tracking.
@user11914177, To reiterate my question in the comment above. How big should the cube appear on a TV that is 1 metre wide when the user stands 3m away from the TV? Should it fill 100% of the width of the TV?
@Romen My 13" monitor is 58cm away from my eyes; the 26" monitor is 70cm away. As you may have noticed, I measured my view angle wrongly. With my correct view angle it looks better on both screens. I expect it to occupy the same percentage of space.
@httpdigest How to feed the information in?
@httpdigest Yes, but how to use the four lengths measured? How do you get them into infinitePerspective(...)?
As it turns out, a real scale is impossible. But with the right view angle it looks good.
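For reference, a sketch (not from the thread; the 0.18 m screen height is an assumed value) of the angle a screen actually subtends at the eye, which is what the projection FOV would have to match for truly 1:1 apparent sizes:

```python
import math

def subtended_fov_degrees(screen_extent_m, eye_distance_m):
    """Angle subtended by a screen of the given extent, viewed head-on:
    2 * atan((extent / 2) / distance)."""
    return math.degrees(2.0 * math.atan((screen_extent_m / 2.0) / eye_distance_m))

# A display roughly 0.18 m tall (assumed height) viewed from 0.58 m subtends
# only about 17.6 degrees, far less than a 46.9-degree projection FOV; this is
# one reason rendered objects can look smaller than expected.
print(round(subtended_fov_degrees(0.18, 0.58), 1))
```

Matching the projection FOV to this subtended angle makes on-screen angular sizes agree with real life, at the cost of a very narrow view.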
| common-pile/stackexchange_filtered |
Why this is a Fourier expansion if the Fourier inversion doesn't contain any $e^{-ikx}$ term?
I'm studying Quantum Field Theory and in the book the author states the following:
To connect special relativity to the simple harmonic oscillator we note that the simplest possible Lorentz-invariant equation of motion that a field can satisfy is $\Box \ \phi = 0$. That is:
$$\Box \ \phi = (\partial_t^2-\nabla^2)\phi=0.$$
The classical solutions are plane waves. For example, one solution is:
$$\phi(x)=a_p(t)e^{i\mathbf{p}\cdot \mathbf{x}},$$
where
$$(\partial_t^2+\mathbf{p}\cdot \mathbf{p})a_p(t)=0.$$
This is exactly the equation of motion of a harmonic oscillator. A general solution is:
$$\phi(x,t)=\int \dfrac{d^3 p}{(2\pi)^3}[a_p(t)e^{ip\cdot x}+a_p^\ast(t) e^{-ip\cdot x}],$$
with $(\partial_t^2+\mathbf{p}\cdot \mathbf{p})a_p(t)=0$, which is just a Fourier decomposition of the field into plane waves. Or more simply:
$$\phi(x,t)=\int \dfrac{d^3p}{(2\pi)^3}(a_p e^{-ipx}+a_p^\ast e^{ipx})$$
with $a_p$ and $a_p^\ast$ now just numbers and $p_\mu=(\omega_p,\mathbf{p})$ with $\omega_p=|\mathbf{p}|$ and $px = p^\mu x_\mu$.
Now, the Fourier transform of a function $f : \mathbb{R}^3\to \mathbb{C}$ is defined as:
$$\hat{f}(\mathbf{k})=\int f(\mathbf{x})e^{-i\mathbf{k}\cdot\mathbf{x}}d^3{\mathbf{x}}$$
where I am not considering the $2\pi$ factors, nor the question of convergence, which would involve restricting $f\in \mathcal{S}(\mathbb{R}^3)$. Anyway, the Fourier inversion formula is:
$$f(\mathbf{x})=\int \hat{f}(\mathbf{k})e^{i\mathbf{k}\cdot \mathbf{x}}d^3\mathbf{k}.$$
Now, here comes the thing. The way I know to solve $\Box \ \phi = 0$ with Fourier transform is to take the Fourier transform on both sides. This leads to:
$$\partial_t^2\hat{\phi}(p,t)+|p|^2\hat{\phi}(p,t)=0$$
This determines $\hat{\phi}(p,t)$. Then Fourier inversion leads to:
$$\phi(x,t)=\int \dfrac{d^3p}{(2\pi)^3}\hat{\phi}(p,t)e^{i p\cdot x}$$
To solve the DE we notice that $\mathbf{p}$ is just a parameter, so that
$$\hat{\phi}(\mathbf{p},t)=A_p e^{-i|p|t}+B_p e^{i|p|t}$$
From which we get
$$\phi(\mathbf{x},t)=\int \dfrac{d^3 \mathbf{p}}{(2\pi)^3}(A_p e^{-i|p|t}+B_p e^{i|p|t})e^{i\mathbf{x}\cdot\mathbf{p}}=\int \dfrac{d^3 \mathbf{p}}{(2\pi)^3}(A_p e^{-i|p|t}e^{i\mathbf{x}\cdot\mathbf{p}}+B_p e^{i|p|t}e^{i\mathbf{x}\cdot\mathbf{p}}).$$
This is different than the book. Indeed, in the Fourier transform I know, we integrate $\hat{\phi}(\mathbf{p},t)e^{i\mathbf{p}\cdot\mathbf{x}}$. On the book a term with $e^{-i\mathbf{p}\cdot \mathbf{x}}$ appears in the inversion formula which I don't know where it came from.
In summary, what I get inside the integral is:
$$A_p e^{-i|p|t}e^{i\mathbf{x}\cdot\mathbf{p}}+B_p e^{i|p|t}e^{i\mathbf{x}\cdot\mathbf{p}}$$
while the book presents:
$$a_p e^{-ipx}+a_p^\ast e^{ipx}$$
How does this result from the book relate to the solution via Fourier transform that I know? How did this $a_p^\ast$ appear? And how did the $e^{-i\mathbf{p}\cdot\mathbf{x}}$ get there? Why am I getting a different result, and how do I bridge the gap between the two?
What the book probably has done to get to that form is to use that $\phi$ is a real field, so $\phi = \phi^\ast$. This implies that $\hat{\phi}({\bf p}) = \hat{\phi}^\ast(-{\bf p})$, which implies a relation between $A_p$ and $B_p$.
If you write $\hat{\phi}(\mathbf{p},t) = A(\mathbf{p}) \exp(-i t \vert \mathbf{p} \vert) + B(\mathbf{p}) \exp(i t \vert \mathbf{p} \vert)$
and demand that $\phi$ is real, you find that you need to have $B(\mathbf{p}) = A^{*}(-\mathbf{p})$. Thus (up to the Fourier transformation coefficient)
\begin{align} \phi(\mathbf{x},t) &= \int\limits_{\mathbb{R}^{3}} (A(\mathbf{p}) e^{-i t \vert \mathbf{p} \vert} + B(\mathbf{p}) e^{i t \vert \mathbf{p} \vert}) \, e^{i \langle \mathbf{p}, \mathbf{x} \rangle} \, \mathrm{d} \mathbf{p} \\ &=\int\limits_{\mathbb{R}^{3}} (A(\mathbf{p}) e^{-i t \vert \mathbf{p} \vert} + A^{*}(-\mathbf{p}) e^{i t \vert \mathbf{p} \vert}) \, e^{i \langle \mathbf{p}, \mathbf{x} \rangle} \, \mathrm{d} \mathbf{p} \\ &= \int\limits_{\mathbb{R}^{3}} A(\mathbf{p}) e^{-i t \vert \mathbf{p} \vert} \, e^{i \langle \mathbf{p}, \mathbf{x} \rangle} + A^{*}(\mathbf{p}) e^{i t \vert \mathbf{p} \vert} \, e^{-i \langle \mathbf{p}, \mathbf{x} \rangle} \, \mathrm{d} \mathbf{p}, \end{align}
where in the last step I performed the substitution $\mathbf{q} = -\mathbf{p}$ in the second summand and relabeled $\mathbf{q}$ to $\mathbf{p}$ again (!). This expression may be rewritten to the expression stated by the book by means of the Minkowski Pseudo Inner Product.
Thank you, I understood your point. Just one more silly doubt: when applying the Fourier transform reality condition I end up with $(A(-\mathbf{p})-B^\ast(\mathbf{p}))e^{i|\mathbf{p}|t}+(B(-\mathbf{p})-A^\ast(\mathbf{p}))e^{-i|\mathbf{p}|t} = 0$. Now I would need to show that the coefficients are zero so that I get $B(\mathbf{p})=A^\ast(-\mathbf{p})$, however I don't know why this happens. I mean, I know the complex exponentials $e^{inx}$ are orthogonal, but this doesn't help here because $|\mathbf{p}|$ need not be integer. How do I show this last part?
The functions $f^{\pm}_{c}(t):=\exp(\pm i c t)$ are linearly independent for $c \neq 0$, and $\vert f^{\pm}_{c}(t) \vert = 1$.
True, I remembered that this can be shown with the Wronskian. Thanks again for the answer!
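For reference, the Wronskian computation mentioned here: for $c \neq 0$,

$$W(e^{ict},e^{-ict})=\begin{vmatrix} e^{ict} & e^{-ict}\\ ic\,e^{ict} & -ic\,e^{-ict}\end{vmatrix}=-ic-ic=-2ic\neq 0,$$

so $e^{ict}$ and $e^{-ict}$ are linearly independent, and the coefficient multiplying each exponential in the reality condition must vanish separately, which gives $B(\mathbf{p})=A^\ast(-\mathbf{p})$.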
You are misunderstanding the notation $px$ used in the textbook. It refers to the "Minkowski inner product" or "Lorentzian inner product" which includes both the space component and the time component. This is exactly what is needed to match up your answers.
Thanks for the answer @QiaochuYuan. It took me some time indeed to identify the meaning of the notation, but it's not only this. Notice that I have $A_p e^{-i(|p|t-\mathbf{x}\cdot\mathbf{p})}+B_p e^{i(|p|t+\mathbf{x}\cdot \mathbf{p})}$. The first exponential really can be written as $e^{-i px}$, but the second can't, since there isn't the negative sign. Furthermore, the book answer would mean $B_p = A_p^\ast$, which doesn't seem to be necessary. What am I missing?
| common-pile/stackexchange_filtered |
"OutOfMemoryError: Java Heap Space" error in my program
Please have a look at the following code
public void createHash() throws IOException
{
    System.out.println("Hash Creation Started");
    StringBuffer hashIndex = new StringBuffer("");
    AmazonS3 s3 = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());
    Region usWest2 = Region.getRegion(Regions.US_EAST_1);
    s3.setRegion(usWest2);
    strBuffer = new StringBuffer("");
    try
    {
        //List all the Buckets
        List<Bucket> buckets = s3.listBuckets();
        for(int i=0;i<buckets.size();i++)
        {
            System.out.println("- "+(buckets.get(i)).getName());
        }
        //Downloading the Object
        System.out.println("Downloading Object");
        S3Object s3Object = s3.getObject(new GetObjectRequest("JsonBucket", "Articles_4.json"));
        System.out.println("Content-Type: " + s3Object.getObjectMetadata().getContentType());
        //Read the JSON File
        BufferedReader reader = new BufferedReader(new InputStreamReader(s3Object.getObjectContent()));
        while (true) {
            String line = reader.readLine();
            if (line == null) break;
            // System.out.println(" " + line);
            strBuffer.append(line);
        }
        JSONTokener jTokener = new JSONTokener(strBuffer.toString());
        jsonArray = new JSONArray(jTokener);
        System.out.println("Json array length: "+jsonArray.length());
        for(int i=0;i<jsonArray.length();i++)
        {
            JSONObject jsonObject1 = jsonArray.getJSONObject(i);
            //Add Title and Body Together to the list
            String titleAndBodyContainer = jsonObject1.getString("title")+" "+jsonObject1.getString("body");
            //Remove full stops and commas
            titleAndBodyContainer = titleAndBodyContainer.replaceAll("\\.(?=\\s|$)", " ");
            titleAndBodyContainer = titleAndBodyContainer.replaceAll(",", " ");
            titleAndBodyContainer = titleAndBodyContainer.toLowerCase();
            //Create a word list without duplicated words
            StringBuilder result = new StringBuilder();
            HashSet<String> set = new HashSet<String>();
            for(String s : titleAndBodyContainer.split(" ")) {
                if (!set.contains(s)) {
                    result.append(s);
                    result.append(" ");
                    set.add(s);
                }
            }
            //System.out.println(result.toString());
            //Re-Arranging everything into Alphabetic Order
            String testString = "acarus acarpous accession absently missy duckweed settling";
            String testHash = "058 057 05@ 03o dwr 6ug i^&";
            String[] finalWordHolder = (result.toString()).split(" ");
            Arrays.sort(finalWordHolder);
            //Navigate through text and create the Hash
            for(int arrayCount=0;arrayCount<finalWordHolder.length;arrayCount++)
            {
                Iterator iter = completedWordMap.entrySet().iterator();
                while(iter.hasNext())
                {
                    Map.Entry mEntry = (Map.Entry)iter.next();
                    String key = (String)mEntry.getKey();
                    String value = (String)mEntry.getValue();
                    if(finalWordHolder[arrayCount].equals(value))
                    {
                        hashIndex.append(key); //Adding Hash Keys
                        //hashIndex.append(" ");
                    }
                }
            }
            //System.out.println(hashIndex.toString().trim());
            jsonObject1.put("hash_index", hashIndex.toString().trim()); //Add the Hash to the JSON Object
            jsonObject1.put("primary_key", i); //Create the primary key
            jsonObjectHolder.add(jsonObject1); //Add the JSON Object to the JSON collection
            System.out.println("JSON Number: "+i);
        }
        System.out.println("Hash Creation Completed");
    }
    catch(Exception e)
    {
        e.printStackTrace();
    }
}
I am not able to run this code either on my local machine or on Amazon EC2; I get the following error.
I am worried because this "test" is running on a 6 MB JSON file, while the original file will be terabytes. I am using a Linux instance on EC2, but I am not a Linux guy. How can I get rid of this?
All answers suggesting to increase the heap size should be deleted, IMO. Such advice is like instructing a victim with a severed limb where to get more blood.
You are declaring hashIndex outside of the loop
StringBuffer hashIndex = new StringBuffer("");
...
for(int i=0;i<jsonArray.length();i++) {
    hashIndex.append(...);
This means that the StringBuffer keeps getting bigger and bigger as you iterate the buckets until it finally explodes!
I think you meant to declare hashIndex inside the loop.
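A minimal sketch of the fix suggested above: one fresh builder per document, so the index for article i is not concatenated onto the indexes of articles 0..i-1, and each buffer becomes garbage-collectable after its iteration (class and method names here are illustrative):

```java
public class PerDocumentBuffer {
    static String[] buildIndexes(String[][] wordsPerDocument) {
        String[] indexes = new String[wordsPerDocument.length];
        for (int i = 0; i < wordsPerDocument.length; i++) {
            StringBuilder hashIndex = new StringBuilder(); // declared inside the loop
            for (String word : wordsPerDocument[i]) {
                hashIndex.append(word);
            }
            indexes[i] = hashIndex.toString();
        }
        return indexes;
    }

    public static void main(String[] args) {
        String[] result = buildIndexes(new String[][] {{"ab", "cd"}, {"ef"}});
        System.out.println(result[0] + " " + result[1]); // prints: abcd ef
    }
}
```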
The answer is a combination of you and @Andremoniy. I would be glad if I could select both as the accepted answer :)
It's a very bad idea to construct a StringBuffer object just to pass it into JSONTokener. This class has a constructor directly from a Reader or InputStream, so your code should be something like this:
JSONTokener jTokener = new JSONTokener(new BufferedReader(new InputStreamReader(s3Object.getObjectContent())));
Does that mean this will cause memory issue? Just curious to know?
The string buffer with all of the data loaded into it will certainly take up extra space.
The answer is a combination of you and @Lance Java. I would be glad if I could select both as the accepted answer :)
Your Java process has run out of heap memory. You can increase the heap up to 4 GB on a 32-bit system; on a 64-bit system you can go higher. If you ask for more than 4 GB on a 32-bit system, Java will report an invalid value and exit.
Below is how you can set your maximum heap to 6 GB on a 64-bit system from the command line:
java -Xmx6144M -d64
Increasing java heap space is by no means a fix, only a temporary stop-gap.
@hexafraction: I do not appreciate providing downvotes, but I agree with you. I am navigating through 60 billion records, so the Java heap size means "nothing".
@GloryOfSuccess I did not downvote this answer, although I would see reasoning behind that. +1 for technical correctness.
| common-pile/stackexchange_filtered |
Jsoup get contents of javascript that has CDATA tags?
I am using Jsoup to parse a webpage. But some of the info that I want to parse is inside a CDATA tag that prevents the parser from extracting the data inside. How would I go about extracting data from within a CDATA tag?
EXAMPLE:
<script type='text/javascript'><!--// <![CDATA[
OA_show('300x250');
// ]]> --></script>
<script type='text/javascript'>alert("Hello");</script>
If I use Jsoup to parse this page and try selecting all the matching elements in the page with "script[type=text/javascript]", I get the contents of the other scripts in the page that do not have CDATA tags, but not the alert("Hello"); value.
How would I go about getting that a value inside a CDATA tag with Jsoup?
Thanks!
I don't think the problem is the CDATA, but the comment surrounding it. Can't you just strip the comment and CDATA crap (with String.replace()) before you ship the webpage text to JSoup? It shouldn't affect anything, a tolerant HTML parser should know how to deal with unescaped Javascript inside <script> tags.
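Following that suggestion, a stdlib-only sketch of stripping the comment/CDATA wrapper from a script's contents before (or after) handing the page to Jsoup; the regexes are illustrative and tied to this exact wrapper format:

```java
public class CdataStrip {
    static String stripCdataWrapper(String script) {
        return script
            .replaceAll("<!--//\\s*<!\\[CDATA\\[", "") // opening comment + CDATA marker
            .replaceAll("//\\s*\\]\\]>\\s*-->", "")    // closing CDATA marker + comment
            .trim();
    }

    public static void main(String[] args) {
        String s = "<!--// <![CDATA[\n OA_show('300x250');\n// ]]> -->";
        System.out.println(stripCdataWrapper(s)); // prints: OA_show('300x250');
    }
}
```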
@Shredder2794 Could you post and accept your own answer?
String page = "<script type='text/javascript'><!--// <![CDATA[\n" +
" OA_show('300x250');\n" +
"// ]]> --></script>\n" +
" <script type='text/javascript'>alert(\"Hello\");</script>";
String html = Jsoup.parse(page).select("script").get(0).html();
html = html.replace("<!--// <![CDATA[", "");
html = html.replace("// ]]> -->", "");
System.out.println(html);
Result
OA_show('300x250');
| common-pile/stackexchange_filtered |
Are include()s faster or database queries?
A client is insisting that we store some vital and complex configuration data as php arrays while I want it to be stored in the database. He brought up the issue of efficiency/optimization, saying that file i/o will be much faster than database queries. I'm pretty sure I heard somewhere that file includes are actually slow in PHP.
Any stats/real info on this?
There are some more answers from a [previous question] about the same topic.
I don't think that performance is a compelling argument either way. On my Mac, I ran the following tests.
First 10,000 includes of a file that doesn't do anything but set a variable:
<?php
$mtime = microtime();
$mtime = explode(' ', $mtime);
$mtime = $mtime[1] + $mtime[0];
$starttime = $mtime;
for ($i = 0; $i < 10000; $i++) {
    include("foo.php");
}
$mtime = microtime();
$mtime = explode(" ", $mtime);
$mtime = $mtime[1] + $mtime[0];
$endtime = $mtime;
$totaltime = ($endtime - $starttime);
echo 'Rendered in ' .$totaltime. ' seconds.';
?>
It took about .58 seconds to run each time. (Remember, that's 10,000 includes.)
Then I wrote another script that queries the database 10,000 times. It doesn't select any real data, just does a SELECT NOW().
<?php
mysql_connect('<IP_ADDRESS>', 'root', '');
mysql_select_db('test');
$mtime = microtime();
$mtime = explode(' ', $mtime);
$mtime = $mtime[1] + $mtime[0];
$starttime = $mtime;
for ($i = 0; $i < 10000; $i++) {
    mysql_query("select now()");
}
$mtime = microtime();
$mtime = explode(" ", $mtime);
$mtime = $mtime[1] + $mtime[0];
$endtime = $mtime;
$totaltime = ($endtime - $starttime);
echo 'Rendered in ' .$totaltime. ' seconds.';
?>
This script takes roughly 0.76 seconds to run on my computer each time. Obviously there are a lot of factors that could make a difference in your specific case, but there is no meaningful performance difference in running MySQL queries versus using includes. (Note that I did not include the MySQL connection overhead in my test -- if you're connecting to the database only to get the included data, that would make a difference.)
For a relevant benchmark, you should have only one include statement in the script and then run the script 10000 times, rather than having 10000 includes in a single PHP script and running the script once.
For one thing, repeated includes do not lend themselves to opcode caching, because each inclusion could theoretically change how the next include should be parsed. In my tests, the include benchmark actually runs 7 times faster when APC is disabled. :-)
It's gonna vary heavily based on your specific case.
If the database is stored in memory and/or the data you're looking for is cached, then database I/O should be pretty fast. A really complex query on a large database can take a fair bit of time if it's not cached or it has to go to disk, though.
File I/O does have to read from the disk, which is slow, though there are also smart caching mechanisms for keeping often-accessed files in memory as well.
Profiling on your actual system is gonna be the most definitive.
The data which will be pulled would be 30-50 rows and not a complex query, just a select * from. Any thoughts in that case?
And +1 for the profiling suggestion. There are lots of variables--is the DB in memory? Do you use a bytecode cache like APC? Is the DB local?
This is a pretty obvious case of premature optimization. Don't ever try to optimize things like this unless you've actually identified it as a real bottleneck in a production environment.
That said, using an opcode cache like APC (you are using an opcode cache, right? Because that is the very first thing you should do to optimize PHP), my money is on the include file.
But again, the difference will likely be neglible, so pick the solution which requires 1) the least code and 2) the least maintenance. Programmer time is much more expensive than CPU time.
Update: I did a quick benchmark of the inclusion of a PHP file defining a 1000-entry array. The script ran 5 times faster using APC than without.
A similar benchmark, fetching 1000 rows from a MySQL database (on localhost), only ran 15% faster using APC (since APC doesn't do anything for database queries).
However, once APC was enabled, there was no significant difference between using an include file and using a database.
Given that most people will include 10-20 files into their script for a regular page, I have a feeling that includes are much faster than MySQL queries.
I could though, be wrong.
The question is that if those values will never change without you doing other modifications (moving files, etc), it should probably be stored in an include file.
If the data is dynamic in any way, it should be pulled from a database.
I don't think this decision should be based on performance. The question I'd ask myself: is this data going to be updated by the application? If the answer is "no", consider how much quicker and simpler it will be to implement and use as an included array.
I work with a large system where almost every possible thing is stored in the database, even data that has to be manually changed via a database alter written by the developer, and I can tell you it has led to way more coding and way more complexity than if the information was stored the way your client is suggesting.
If the data won't change often, has to be changed via manual intervention anyway, and doesn't need to be made available in the database (for other systems, for instance), give the array a try. You can always put it in the database later and write all the necessary SQL.
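To make the include-file option concrete, here is a minimal sketch of the pattern being recommended (the file name and keys are hypothetical):

```php
<?php
// config.php -- vital configuration stored as a plain PHP array.
// With an opcode cache such as APC, the compiled file stays in memory,
// so repeated include()s of it involve no disk I/O at all.
return array(
    'max_items'   => 50,
    'admin_email' => 'admin@example.com',
);
```

The application then loads it with $config = include 'config.php'; and treats $config as read-only.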
| common-pile/stackexchange_filtered |
Who are Gaston's "five hangers-on"?
In the song "Gaston", it is said that the eponymous character has "five hangers-on". I imagine LeFou, Tom, Dick and Stanley are part of them?
Who are Gaston's 'five hangers-on'?
Note this is originally from the extended original song, which wasn't used in the original cartoon movie. Seems like they added it to the live action film? I don't recall hearing it though.
Here are the full lyrics to the mentioned version of the song: http://www.fpx.de/fp/Disney/Lyrics/BeautyAndTheBeast.html#Gaston
Funny story with this one. The five hangers-on are LeFou, Tom, Dick, Stanley, and the nameless elderly man who sings along too. However, they got rid of the elderly man in the live-action version and kept the lyric the same, which makes no sense, as it would be four hangers-on without the elderly man. But yeah, there's the answer.
| common-pile/stackexchange_filtered |
Why aren't persistent connections supported by URLLib2?
After scanning the urllib2 source, it seems that connections are automatically closed even if you do specify keep-alive.
Why is this?
As it is now I just use httplib for my persistent connections... but wonder why this is disabled (or maybe just ambiguous) in urllib2.
It's a well-known limit of urllib2 (and urllib as well). IMHO the best attempt so far to fix it and make it right is Garry Bodsworth's coda_network for Python 2.6 or 2.7 -- replacement, patched versions of urllib2 (and some other modules) to support keep-alive (and a bunch of other smaller but quite welcome fixes).
Alex, finally a straight answer on this one ("it's a well-known limit"). The question remains though: why is urllib2 written this way?
@sbartell, because nobody felt the problem was important enough to submit a patch to the Python code and have it accepted -- I didn't, neither did you, and so on for millions of people who could and no doubt would if they felt the problem was important (assuming they're decent citizens of the open-source community, of course, but, hey, aren't we all?-). I think Gary took the right approach by releasing a third-party solution so that lots of real-world "field" experience in a variety of uses can be accumulated before things are "frozen" into the standard library.
You might also check out httplib2, which supports persistent connections. Not quite the same as urllib2 (in the sense that it only does http and not "any kind of url"), but easier than httplib (and imho also easier than urllib2 if you really want to do http).
httplib does support them, we just use the same httpconnection object over again.
It just boggles me why urllib2 doesn't support this.
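To illustrate the httplib pattern mentioned above in modern terms, here is a self-contained sketch using http.client (httplib's Python 3 name); the throwaway local server exists only so the example runs without network access:

```python
import http.client
import http.server
import threading

class KeepAliveHandler(http.server.SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 defaults to persistent connections

def two_requests_one_connection():
    # Throwaway local server so the sketch is self-contained.
    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), KeepAliveHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
        statuses = []
        for _ in range(2):
            conn.request("GET", "/")       # same connection object both times
            resp = conn.getresponse()
            resp.read()                    # drain the body so the socket can be reused
            statuses.append(resp.status)
        conn.close()
        return statuses
    finally:
        server.shutdown()

print(two_requests_one_connection())  # [200, 200]
```

The key point, as in the old httplib: keep one HTTPConnection object around and issue several request/getresponse cycles on it, draining each response before the next request.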
| common-pile/stackexchange_filtered |
How to say "If I can be"?
I'm studying Japanese by myself.
To become is なる.
Able to become / can be = なれる
If I say なれば it would mean ”If I became” (conditional)
so "If I can be " should be "なれれば"?
The sentence I'm trying to formulate is :
It's no problem if that would make me rich.
My try was :
あんまり大変じゃない、お金持ちになれば。
Which might get interpreted as :
"it's no problem if I was rich."
I appreciate your help.
The -えば clause should come before the main clause.
I think you could say like:
(もし)それで(お)金持ちになれるのなら / なれるなら / なれたら / なれれば、問題(は)ないだろう。(or 構わないだろう / いいだろう etc. depending on the context.)
それで = それ (that) + particle で (with; because of)
I think the use of the word なる you described is right, but I am not so sure about the translation you did.
Firstly, "no problem" does not mean 大変じゃない; that sounds a little weird to me because, in my opinion, 大変じゃない means "it is not exhausting, I can still take it". For "no problem", I would say 大丈夫だ, which means "it is not a problem, it's OK".
And for "if that would make me rich", because of the word "make", I think it's better to translate it with する. So the sentence will be something like "もしそれが私をお金持ちにしてくれれば、大丈夫だ".
As for your last sentence, "was" is the past tense of "is", right? So it corresponds to である or です or だ in Japanese. There is no change of state; it is just a condition I was in, right? So it is not "become", and the sentence will be 私がお金持ちだったら/であったら/でしたら、大丈夫だった。
I hope this is helpful to you. Enjoy learning Japanese.
この事で自分自身がお金持ちになっても、こんなに問題と言うわけではあるまい。
Sorry, I accidentally ticked 'recommend for deletion'. Is it possible to undo this?
| common-pile/stackexchange_filtered |
date prefix auto increment id In JPA + Hibernate
I have to generate a custom Auto Increment Which reset every day at time 00:00 to date prefixed auto increment value
for example, on Jan 9, 2016 the ids should be in the range 1601090001 to 1601099999,
and on Dec 15, 2016 they should be like 1612150001 to 1612159999.
E.g., the characteristics of 1601090001 are as below:
The id is an INT
the first two digits represent the year; 16 means 2016
digits 3 and 4 denote the month,
5 and 6 represent the day,
and the remaining digits are just a counter, so I expect at most 9999 records generated in a day.
How can I achieve this in JPA, and what is the best way? Currently I do it using another table to hold the current counter value, and I use the last-updated time and the current time to compute the prefix and to decide whether to reset the auto-increment part to 1. This does not feel like the right solution.
Is there a way I can use the @GeneratedValue annotation with a custom strategy for this? If so, how can I do it without causing an error where it fails to generate an ID?
Why are there no comments? Is it because there is no direct solution, or something else?
This is crazy: so there is either no solution for this problem or no one knows a direct solution... Then I think it's time to adopt some technologies other than Java and Hibernate...
Did you get this to work yet?
I didn't do it, since it took a long time to get this reply. Currently I do it using a counter in a different table and things are working. I will try the following approach when I get a chance to refactor my existing code.
I found 2 options that would do the trick
You can use the Hibernate @GenericGenerator annotation and create a custom sequence generator. There is an example in this SO answer.
You can also use the JPA @PrePersist annotation.
@PrePersist
public void getNextIdFromGenerator() {
id = yourInjectedGeneratorOrSomething.nextId();
}
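A hedged sketch of what such a generator could look like behind the @PrePersist hook above. The class and method names are made up, and a production version would keep the current date and counter in a database table or sequence (as the question describes) so ids survive restarts and concurrent instances; this in-memory version only shows the id arithmetic:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DailyIdGenerator {
    private static final DateTimeFormatter YYMMDD = DateTimeFormatter.ofPattern("yyMMdd");
    private LocalDate current = LocalDate.MIN;
    private int counter = 0;

    public synchronized long nextId(LocalDate today) {
        if (!today.equals(current)) { // day rolled over: reset the counter
            current = today;
            counter = 0;
        }
        if (++counter > 9999) {
            throw new IllegalStateException("more than 9999 ids generated in one day");
        }
        // long, not int: yyMMdd * 10_000 + counter overflows int from year 2022 on
        return Long.parseLong(today.format(YYMMDD)) * 10_000L + counter;
    }
}
```

So for Jan 9, 2016 the first call yields 1601090001, the second 1601090002, and the counter resets when the date changes.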
| common-pile/stackexchange_filtered |
What do complexity classes look like, if we use Turing reductions?
For reasoning about things like NP-completeness, we typically use many-one reductions (i.e., Karp reductions). This leads to pictures like this:
(under standard conjectures). I'm sure we're all familiar with this sort of thing.
What picture do we get, if we work with Turing reductions (i.e., Cook reductions)? How does the picture change?
In particular, what are the most important complexity classes, and how do they relate? I am guessing that $P^{NP}$ plays the role that used to be taken up by $NP$ and $coNP$ (because $P^{NP}$ is closed under Turing reductions in the same way that $NP$ is closed under Karp reductions); is that about right?
So should the picture look like $P \subset P^{NP} \subset PH \subset PSPACE$ now, i.e., something like the following?
Is there some new sequence that plays a role that corresponds to the polynomial hierarchy? Is there a natural sequence of complexity classes $C_0=P$, $C_1=P^{NP}$, $C_2=?$, ..., such that each complexity class is closed under Turing reductions? What is the "limit" of this sequence: is it $PH$? Is it expected that each class in the sequence is different from the previous one? (By "expected", I mean under plausible conjectures, similar to the sense in which it is expected that $P \ne NP$.)
Related: Many-one reductions vs. Turing reductions to define NPC. That article explains that the reason we work with Karp reductions is that it gives us a finer-grained, richer, more precise hierarchy. Essentially, I am wondering what the hierarchy would look like if we worked with Turing reductions: what the coarser, less rich, less precise hierarchy would look like.
We may have some information spread out over several questions.
From that question, e.g. this answer: "they are conjectured to be distinct notions; the distinction of coNP vs NP disappears with Turing reductions." Also note that coNP ≠ NP (widely conjectured) implies P ≠ NP (P is closed under complement), so it's tied up with some deep open questions in complexity theory.
Thanks, @Raphael, I've reviewed all of those, and I don't think they answer any of my questions.
You can use $\mathsf{P}^{\Sigma^\mathsf{P}_i}$.
Some authors denote them by $\square^\mathsf{P}_i$
(similar to $\Delta^\mathsf{P}_i$ and
$\nabla^\mathsf{P}_i$ notations).
It's essentially the Turing closure of the polynomial hierarchy.
We have
$$\mathsf{P}^{\Sigma^\mathsf{P}_i}\subseteq \mathsf{NP}^{\Sigma^\mathsf{P}_i}=
\Sigma^\mathsf{P}_{i+1}\subseteq
\mathsf{P}^{\Sigma^\mathsf{P}_{i+1}}$$
Therefore
$\mathsf{P^{PH}} =
\bigcup_{i\geq 0}\mathsf{P}^{\Sigma^\mathsf{P}_i} =
\bigcup_{i\geq 0}\Sigma^\mathsf{P}_{i} =
\mathsf{PH}$.
If the polynomial hierarchy does not collapse, all inclusions are strict.
| common-pile/stackexchange_filtered |
pip install errors out: SyntaxError: invalid syntax
pip install does not work when trying to install virtualenv, requests or pex on CentOS 6. I am on Python 2.6 and pip 9.0.1. Can anyone tell me why this is happening?
(pex_build)[root@pex pex_build]# pip install virtualenv
Output:
Traceback (most recent call last):
File "/opt/pex_build/bin/pip", line 7, in <module>
from pip._internal import main
File "/opt/pex_build/lib/python2.6/site-packages/pip/_internal/__init__.py", line 42, in <module>
from pip._internal import cmdoptions
File "/opt/pex_build/lib/python2.6/site-packages/pip/_internal/cmdoptions.py", line 16, in <module>
from pip._internal.index import (
File "/opt/pex_build/lib/python2.6/site-packages/pip/_internal/index.py", line 526
{str(c.version) for c in all_candidates},
^
SyntaxError: invalid syntax
Command:
(pex_build) [root@pex pex_build]# pip install requests pex
Output:
Traceback (most recent call last):
File "/opt/pex_build/bin/pip", line 7, in <module>
from pip._internal import main
File "/opt/pex_build/lib/python2.6/site-packages/pip/_internal/__init__.py", line 42, in <module>
from pip._internal import cmdoptions
File "/opt/pex_build/lib/python2.6/site-packages/pip/_internal/cmdoptions.py", line 16, in <module>
from pip._internal.index import (
File "/opt/pex_build/lib/python2.6/site-packages/pip/_internal/index.py", line 526
{str(c.version) for c in all_candidates},
^
SyntaxError: invalid syntax
Also, curl gives a similar error when trying to get get-pip.py:
Command:
(pex_build) [root@pex pex_build]# curl https://bootstrap.pypa.io/get-pip.py | python
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1602k 100 1602k 0 0 7373k 0 --:--:-- --:--:-- --:--:-- 14.6M
Traceback (most recent call last):
File "<stdin>", line 20636, in <module>
File "<stdin>", line 197, in main
File "<stdin>", line 82, in bootstrap
File "/tmp/tmp5zrn_f/pip.zip/pip/_internal/__init__.py", line 42, in <module>
File "/tmp/tmp5zrn_f/pip.zip/pip/_internal/cmdoptions.py", line 16, in <module>
File "/tmp/tmp5zrn_f/pip.zip/pip/_internal/index.py", line 526
{str(c.version) for c in all_candidates},
^
SyntaxError: invalid syntax
Python 2.6 is very very very old, and pip 9.x is not compatible with it.
Consider upgrading to python 2.7 or 3.5. If you cannot do it on the entire machine because of dependency/permissions issues, consider installing it into a local directory, then installing pip and all your required packages there
Forgot to revoke. I realized it's not a duplicate once I dug more into this.
The problem is your version of pip is broken with Python 2.6. If you upgrade to 9.0.3 it should work again.
pip install pip==9.0.3
If you are unable to upgrade pip using pip, you could re-install the package as well using your local package manager, and then upgrade to pip 9.0.3.
Keep in mind that if you are using virtual environments it is recommended that you upgrade virtualenv as well so that your virtual environments will have pip 9.0.3 as well.
pip install virtualenv==15.2.0
Be careful though to not upgrade to a version of pip higher than 9.0.3 or virtualenv higher than 15.2.0 as Python 2.6 support was removed with those versions, as mentioned by Prateek.
pip works with CPython versions 2.7, 3.3, 3.4, 3.5, 3.6 and also pypy.
This means pip works on the latest patch version of each of these
minor versions. Previous patch versions are supported on a best effort
approach.
Just use the below command once you upgrade to a compatible Python version.
pip install requests
check pip-documentation for more details.
Including @eandersson's comments
Or you can upgrade pip to 9.0.3
pip install pip==9.0.3
This is incorrect. He mentioned that he using pip 9.0.1 which still supports Python 2.6. Pip 10 however does not. Might be worth including instructions on how to correctly identify the version he is running and how to downgrade.
Actually, this is just a bug in pip 9.0.1. It's fixed in pip 9.0.3.
| common-pile/stackexchange_filtered |
How does wmode="transparent" work?
I have an ActiveX object (developed in Delphi 6) and a floating div message. When I drag the floating div over the ActiveX object, it blinks.
I'd like to add a wmode param to my ActiveX object, like in Flash objects, to overcome this blinking. How exactly does this feature work?
By the way, I am developing the ActiveX in Delphi 6 and have already tried DoubleBuffered and VCLFixPack; there was a little improvement, but it didn't fix the problem.
As far as I understand the concept, it does not do anything by itself.
The HTML will just send all the parameters to the embedded object (be it Flash or your own custom control) so wmode is just another parameter.
The real work must be done on your side. HTML can't make it transparent for you; it treats the object as a "black box" and can only pass it parameters.
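To illustrate, this is all the markup side amounts to: the param elements are simply handed to the embedded control (the classid below is a placeholder, not a real control ID):

```html
<object classid="clsid:00000000-0000-0000-0000-000000000000" width="300" height="200">
  <!-- The browser does not interpret these; they are passed to the control. -->
  <param name="wmode" value="transparent">
</object>
```

Your ActiveX control would then have to read the wmode property bag entry itself (e.g. via its persistence interface) and implement windowless/transparent painting; nothing happens automatically.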
I need to know how wmode works inside the activex or at least what command in activex will solve this blinking.
Like I said, it doesn't do any work. It's just a parameter passed to your object.
transfer domain on bluehost to point to heroku app
I have a domain registered on Bluehost and I'm trying to point the DNS to my Heroku app. I have changed the www entry to the provided target, and I also added the domain in Heroku (once with the www and once without). The domain name still responds with the Bluehost-hosted app instead of the Heroku instance. Please advise?
Please check if there is any A record.
Calculus of variations for implicitly defined functional
I would like to minimize a functional of the type:
$$L[\gamma]=\int_a^b F(T(\gamma(t)))\,dt$$
on the space of paths $\gamma$, where $T=T(\gamma,t)$. Now, usually I would simply apply Euler-Lagrange's equations, but in this case $T$ is defined implicitly by an equation of type:
$$f(t,\gamma,\dot\gamma,\ddot\gamma,T)=0$$
So, how can I do to find a path $\gamma$ minimizing $L$?
A possible approach might be the following (I will skip all formality, à la physicist):
Assuming $f$ to be well behaved (i.e. a small variation in the path $\gamma$ leads to a small variation of the solution $T$), take a variation $\delta\gamma$ of the path. It will lead to a variation of $T$ in the following way:
$$0=f(t,\gamma+\delta\gamma,\dot\gamma+\delta\dot\gamma,\ddot\gamma+\delta\ddot\gamma,T+\delta T)$$
And this (in first order in the variation) gives:
$$0=f+f_\gamma\delta\gamma+f_{\dot\gamma}\delta\dot\gamma+f_{\ddot\gamma}\delta\ddot\gamma+f_T\delta T$$
where $f_x=\frac{\partial f}{\partial x}$ and $f=f(t,\gamma,\dot\gamma,\ddot\gamma,T)$. Assuming $f_T$ to be invertible (in particular this gives us uniqueness of the solution, by the implicit function theorem), we find:
$$\delta T = -f_T^{-1}(f+f_\gamma\delta\gamma+f_{\dot\gamma}\delta\dot\gamma+f_{\ddot\gamma}\delta\ddot\gamma)$$
And thus we obtain:
$$\begin{array}{ll}\delta L=L[\gamma+\delta\gamma]-L[\gamma]&=\int F(T+\delta T)-F(T)\,dt\\&=\int F_T(T)\delta T\,dt\\&=\int -F_T(T)f_T^{-1}(f+f_\gamma\delta\gamma+f_{\dot\gamma}\delta\dot\gamma+f_{\ddot\gamma}\delta\ddot\gamma)\,dt\end{array}$$
Integrating by parts (and omitting boundary terms) the term in the integral becomes:
$$\left(-F_T(T)f_T^{-1}f+\frac{d}{dt}(F_T(T)f_T^{-1}f_\gamma)-\frac{d^2}{dt^2}(F_T(T)f_T^{-1}f_{\dot\gamma})+\frac{d^3}{dt^3}(F_T(T)f_T^{-1}f_{\ddot\gamma})\right)\delta\gamma$$
By the variational principle, $\delta L=0$ if and only if the term in the big brackets is $0$.
Is this correct? And does this give us an effective way to find $\gamma$ (at least numerically)?
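As a numerical sanity check of the linearization (assuming, for the toy example, an $f$ that depends only on $\gamma$ and $T$), take $f(\gamma,T)=T^3+T-\gamma=0$, so $f_T=3T^2+1$ is always invertible, and compare the exact change in $T$ with the first-order prediction $\delta T=-f_T^{-1}f_\gamma\,\delta\gamma$:

```python
def solve_T(gamma, T0=1.0, tol=1e-12):
    """Solve f(gamma, T) = T**3 + T - gamma = 0 for T with Newton's method."""
    T = T0
    for _ in range(100):
        f = T**3 + T - gamma
        f_T = 3 * T**2 + 1          # always invertible, as the argument requires
        step = f / f_T
        T -= step
        if abs(step) < tol:
            break
    return T

gamma, dgamma = 2.0, 1e-6
T = solve_T(gamma)
dT_exact = solve_T(gamma + dgamma) - T
dT_linear = dgamma / (3 * T**2 + 1)     # -f_T^{-1} f_gamma dgamma, with f_gamma = -1
print(abs(dT_exact - dT_linear) < 1e-9)  # True: the first-order prediction matches
```

The agreement to first order is exactly what the variation argument above relies on.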
@FrauenarztDoktorFotzenglotz I don't think so. Even though I used a notation coming from physics (to give an intuition of my reasoning), the question is purely mathematical. Also, all the ideas I expressed can be formalized.
@FrauenarztDoktorFotzenglotz Well, I will try and cross-post, then.
Full node or Full node third-party services pros and cons
I'm setting up a P2P Bitcoin website similar to LocalBitcoins for a school project.
Before I go deep into this journey I'm doing some planning.
I have lots of questions in my head and have been studying Bitcoin technology.
I would like to know if it's better to run a full node on my server and execute RPC commands, or should I just look for companies that offer such services to save me some time?
What are the pros and cons of running a full node?
I will also appreciate recommendations for third-party services for a Bitcoin P2P market.
What are the pros and cons of running a full node?
It all comes down to the level of trust. Whom do you trust? Your own system, or someone else's? And then, how much money are we talking about? Several low values, or monthly or yearly values?
For sure I recommend a full node. And yes, it has the disadvantage of downloading the blockchain (easily up to three weeks), and you need to keep that infrastructure up and running. For this you gain the absolute certainty that no one faked a block or a transaction in the blockchain that is assembled on your system. The level of privacy is also higher than with, e.g., SPV nodes. A full node supports the decentralization of the network: the more full nodes there are, the more tamper-proof the whole Bitcoin ecosystem is.
Using RPC calls to your local Bitcoin node then requires some developer knowledge, or a dev team. That can add to the cost of setting up your environment.
Using services like BitPay or similar has the advantage of getting started quickly. This way you can start to set up your business, and later on, if you handle really big values, you are still able to convert to a full node.
So from a risk perspective: when it is low values, go ahead with services from third parties, but when you achieve higher values (above monthly income), think about securing your environment with an infrastructure, that is under your own control.
Thanks a lot for your answer. I was told exactly the same thing when I asked in the Bitcoin IRC chatroom. Since then I have gained more knowledge and have built an API that talks to my full node using RPC. Thanks
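For readers going the same route: talking to bitcoind over RPC is just HTTP POSTs of JSON-RPC 1.0 bodies. A minimal sketch of building such a body (getblockcount/getblockhash are standard bitcoind RPC methods; the endpoint and credentials depend on your bitcoin.conf):

```python
import json

def make_rpc_payload(method, params=(), rpc_id=1):
    """Build the JSON-RPC 1.0 request body that bitcoind expects.

    You would POST this to your node's RPC port (8332 by default)
    with the HTTP basic auth credentials configured in bitcoin.conf.
    """
    return json.dumps({
        "jsonrpc": "1.0",
        "id": rpc_id,
        "method": method,
        "params": list(params),
    })

print(make_rpc_payload("getblockcount"))
print(make_rpc_payload("getblockhash", [0]))
```

Any HTTP client can then send the payload; the node answers with a JSON object containing `result` and `error` fields.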
There is no company offering reliable full node services (i.e. they all have various rate limits). Best is to use your own node. Unfortunately the Bitcoin nodes suck big time (e.g. BitPay Insight is broken, bitcoind indexes only one address/wallet, etc.), so you will have to build a lot from scratch. If Bitcoin is not a hard requirement, I suggest you look at a "localethereum" project. Ethereum has better software.
Can't echo $CATALINA_HOME
When I echo $CATALINA_HOME I get a blank line, but when I ls $CATALINA_HOME I correctly get a listing of the directory. Why can't I echo it? I am running Ubuntu 11.04.
Because ls $CATALINA_HOME is expanding $CATALINA_HOME to the empty string and accordingly simply doing ls (or ls . to be pedantic). Either you're in the correct directory already or you're not getting what you think you are.
Ah! Of course. Turns out I am not getting what I thought I was. I have a regular Tomcat setup - any idea why these environment variables aren't being set? Or maybe they are just set for a tomcat user/group? Because it seems to be working correctly, I was just curious.
I would guess they're set right by whatever starts up Tomcat, but not for anything else. But I don't know Tomcat so it's just a guess.
There is no environment variable $CATALINA_HOME unless you defined it beforehand. You can see that ls will always display the current directory (which is not necessarily your home directory!) if invoked with an empty or undefined variable:
ls $BLABLABLA
The current user's home directory is stored in $HOME:
echo $HOME
The current user name is stored in $USER:
echo $USER
Only invoking ls with $HOME will always list the contents of the current user's home directory.
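A quick shell session (assuming CATALINA_HOME is genuinely unset, as in the question) makes the expansion visible:

```shell
unset CATALINA_HOME
echo "[$CATALINA_HOME]"                   # prints []: the variable expands to nothing
ls $CATALINA_HOME > /dev/null && echo ok  # identical to a plain ls of the current directory
```

So the "working" `ls $CATALINA_HOME` was simply listing whatever directory you happened to be in.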
How to pass in a dynamic parameter to a function inside of a addEventListener scroll as it seems to get memoized?
So I have state like this:
const [pointer, setPointer] = useState("");
And I update this state whenever I fetch data from an API as such:
const fetchData = (fileType) => {
if (pointer === null) return;
API.call(pointer).then((res) => setPointer(res));
};
const triggerRefetch = () => {
if (!hasMore) return;
const element = loadingRef?.current?.getBoundingClientRect();
if (element?.top < window.innerHeight && element.bottom >= 0) {
fetchData(fileType);
}
};
useEffect(() => {
triggerRefetch();
const el = document.getElementById("room-detail-container");
el?.addEventListener("scroll", triggerRefetch);
return () => el?.removeEventListener("scroll", triggerRefetch);
}, []);
So my idea is to implement some sort of infinite scrolling feature, where if the Loader component is visible on the screen, it triggers a refetch from the API. That's why I use addEventListener("scroll", ...). And I'm using this pointer to know exactly what the next data I want to fetch is. However, whenever triggerRefetch fires, the fetchData function seems to have memoized the pointer state: even though the pointer state has been updated through setPointer(res), if I console.log(pointer) inside the function, it returns an empty string. Is there a way I can get around this?
setPointer doesn't update pointer until the next render.
@James I'm still a little confused, because if I try to log pointer outside of the function, it does get updated. Unless you mean something else.
See for example https://stackoverflow.com/questions/54069253/the-usestate-set-method-is-not-reflecting-a-change-immediately
you likely want another useEffect to check that the scroll position and pointer have changed before making the new request. You might also want to wait until the scroll ends before making the request; multiple scroll events will likely fire before the API returns anything from the first request.
You can think of it this way:
fetchData only knows about the variables in its surrounding scope where it is declared. That means that on the initial render, it can see the original value of pointer.
triggerRefetch only knows about the variables in its surrounding scope where it is declared. That means that on the initial render, it can see the original fetchData function (which was created on the initial render and which, again, can see the original pointer state).
Your useEffect() only triggers once on the initial render, so addEventListener() is only called once and is passed the original triggerRefetch function created on the initial render, which in turn knows about the original fetchData function, which knows about the original pointer.
fetchData and triggerRefetch are created again on each rerender of your component, so they can then access the new version of the pointer state, however, since your useEffect triggers once on mount and only adds the event listener once with the original triggerRefetch, the new versions of those functions which see the latest state are never invoked.
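The same capture behavior can be shown without React at all (hypothetical names; each call to render plays the role of one execution of a function component's body):

```javascript
// Each "render" creates a fresh scope, just like one run of a component body.
function render(pointer) {
  // This handler closes over THIS render's pointer value.
  return () => `pointer is: "${pointer}"`;
}

const firstRenderHandler = render("");        // created on mount, captures ""
const secondRenderHandler = render("abc123"); // created on a later re-render

// addEventListener was only ever given the first handler, so it keeps
// reporting the stale value:
console.log(firstRenderHandler());  // pointer is: ""
console.log(secondRenderHandler()); // pointer is: "abc123"
```

React's fix (below) is just making sure the listener is replaced whenever a newer handler exists.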
To help resolve this, you can re-add your event listener each time triggerRefetch changes by updating your useEffect dependencies. That would normally be on every rerender (as it recreates a new function object), but with the help of useCallback() you can memoize it so that it only gets recreated when fetchData changes. Similarly, you can limit fetchData to only be recreated when the pointer state changes:
const fetchData = useCallback((fileType) => {
if (pointer === null) return;
API.call(pointer).then((res) => setPointer(res));
}, [pointer]);
const triggerRefetch = useCallback(() => {
if (!hasMore) return;
const element = loadingRef?.current?.getBoundingClientRect();
if (element?.top < window.innerHeight && element.bottom >= 0) {
fetchData(fileType);
}
}, [fetchData /*, hasMore <--- depends on what hasMore is */]);
useEffect(() => {
triggerRefetch();
const el = document.getElementById("room-detail-container");
el?.addEventListener("scroll", triggerRefetch);
return () => el?.removeEventListener("scroll", triggerRefetch);
}, [triggerRefetch]);
Thanks this seems to solve the issue, it's just that I want to add a debounce function to the scroll event listener: el?.addEventListener("scroll", debounce(() => triggerRefetch, 500)); but if I scroll maybe 20 units, then the debounce will get triggered 20 times as well. I'm guessing this is because of triggerRefetch being a dependency?
Hey, it may be, although triggerRefetch should only cause your useEffect to remount when pointer changes, unless triggerRefetch includes other dependencies other than fetchData. What does your debounce function look like?
I'm using lodash debounce and nope, its only dependency is fetchData
@Owenn perhaps you can try creating a custom debounce hook (eg: https://jsfiddle.net/hsvn4cra/, note, I haven't tested this). Alternatively, you can try memoizing the debounce function that you get back to avoid it changing on each run of the useEffect (example: Problems with debounce in useEffect). But I'm just guessing what the issue is; it might be easier if you can create a reproducible example using a CodeSandbox or something like that, which will help.
Can I get a visa to Germany outside my home country as a student in the USA?
I'm a Kenyan college student currently in the United States on an F1 visa, but I would like to go to Germany to visit my family in the summer of next year. Can I apply for a German visa in the USA?
Yes, you not only can, but actually must apply for the German visa in the USA. The general rule is that you should apply for a German visa at your place of residence. Your citizenship is not relevant for that decision.
use more than 1 input inference in Virtuoso
I have one input inference in Virtuoso Open Source, that was defined from goodrelations site --
rdfs_rule_set('http://purl.org/goodrelations/v1', 'http://purl.org/goodrelations/v1');
-- that I used in query using --
define input:inference <http://purl.org/goodrelations/v1> .
Now I want to consolidate all brands which have the same name, give them an owl:sameAs inference, and insert it into the rule set --
rdfs_rule_set('samebrands', 'samebrands');
However, when I add the second inference, Virtuoso tells me I can't add more than one inference to a query.
How should I do it? Thank you :).
You have to use another pragma:
DEFINE input:same-as "yes"
See the documentation.
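Put together, a query using both pragmas might look like this (the gr: prefix and the brand pattern are illustrative; adjust to your data):

```sparql
DEFINE input:inference <http://purl.org/goodrelations/v1>
DEFINE input:same-as "yes"
PREFIX gr: <http://purl.org/goodrelations/v1#>

SELECT DISTINCT ?brand ?name
WHERE { ?brand a gr:Brand ; gr:name ?name }
```

With input:same-as enabled, resources that you linked with owl:sameAs (the consolidated brands) are treated as a single node during query evaluation, in addition to the schema-level inference from the GoodRelations rule set.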
Another approach is to define an inference rule which contains two graphs (GR and samebrands).
You might need to create a separate ontology that includes the terms that you want to use for inference. If it is all terms from another ontology, then use owl:imports.
"ValueError: Precision not allowed in integer format specifier" in f-string pops up even when using only floats
I'm training a Neural Network, but that's not relevant. The only relevant info is this line:
print(f"{it*100/iters}%, Pos: ({Xcg:.4},{Ycg:.4}), Ideal position: ({Xcg:.4},{Xcg**2-Xcg:.4}), Rew: {c_reward:.3}, Ang: {ang:.3}, ε: {self.eps:.3}")
where Xcg, Ycg and ang are floats (either 0. or something like 0.5-0.5, which is also a float);
c_reward is calculated with this expression:
r = math.exp(-abs(posy-posx**2+posx))*math.sqrt(posx**2+posy**2)*abs(posx)/posx if posx!= 0 else 0
and self.eps is calculated using this expression:
self.eps = 0.8 + 0.8 * math.exp(-0.001 * self.steps)
where steps goes from 0 to 4000. So, eps is also a float.
None of those can possibly be ints. What's going on here? It doesn't happen at a specific value of self.steps either; right now it looks random.
I can see at least one int already without seeing all the details: your else is for 0, which is an int. You need to use 0.0 for a float.
r = math.exp(-abs(posy-posx**2+posx))*math.sqrt(posx**2+posy**2)*abs(posx)/posx if posx!= 0 else 0.0
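A minimal reproduction of the error, plus the two obvious fixes (a float literal, or an explicit f type in the format spec):

```python
x = 0  # an int sneaking in where a float was expected
try:
    f"{x:.3}"                # precision without a type is invalid for ints
except ValueError as e:
    print(e)                 # Precision not allowed in integer format specifier

print(f"{0.0:.3}")           # fine for floats: 0.0
print(f"{x:.3f}")            # or force float presentation: 0.000
```

This is why the error only appears "randomly": it fires on exactly those iterations where one of the interpolated values happens to be an int.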
Good catch! That was the problem (well, not actually there, but your response made me look at other parts of the code I previously thought wouldn't remain active). Thanks
What is more compact equation of this relationship?
What is more compact equation of this relationship?
$\sum |x_i|^2\sum |y_j|^2+\sum |x_j|^2\sum |y_i|^2-2|\sum x_i \overline y_i||\sum x_j \overline y_j|$
Remark:
Euclidean space
$\sum x_i^2\sum y_j^2+\sum x_j^2\sum y_i^2-2\sum x_i y_i\sum x_j y_j=\sum_i\sum_j(x_iy_j-x_jy_i)^2$
Are you working in $\mathbb{C}^n$?
@Stucky I am working with complex numbers
$2\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}|x_i y_j-x_j y_i|^2$
I think this is the more compact form of that relationship.
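As a numerical sanity check: by the Binet-Cauchy identity the expression equals $2\sum_{1\le i<j\le n}|x_iy_j-x_jy_i|^2$ (note the factor of 2, since the first two terms of the original expression are equal, and no conjugation inside the double sum). A quick check with random complex vectors:

```python
import random

def rand_vec(n):
    """A random complex test vector."""
    return [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

n = 5
x, y = rand_vec(n), rand_vec(n)

norm_x = sum(abs(v) ** 2 for v in x)
norm_y = sum(abs(v) ** 2 for v in y)
inner = abs(sum(xi * yi.conjugate() for xi, yi in zip(x, y)))

# The original three-term expression (its first two terms coincide):
lhs = 2 * norm_x * norm_y - 2 * inner ** 2

# Compact double sum over i < j (no conjugation inside):
rhs = 2 * sum(abs(x[i] * y[j] - x[j] * y[i]) ** 2
              for i in range(n) for j in range(i + 1, n))

print(abs(lhs - rhs) < 1e-9)  # True
```

The identity is exact, so the two sides agree up to floating-point error for any choice of vectors.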
Lua - My documents path and file creation date
I'm planning to write a program with Lua that will first of all read specific files and get information from those files. So my first question is: what's the "My Documents" path name? I have searched in a lot of places, but I'm unable to find anything. My second question is: how can I use the first four letters of a file name to see which one is the newest?
Finding the files in "my documents" then find the newest created file and read it.
The reading part shouldn't be a problem, but navigating to "My Documents" and finding the newest created file in a folder is.
Can you please clarify your questions? 1) Do you mean a specific place on a Windows computer (that's what it sounds like to me, but I'm not very familiar with Windows; maybe it will be clear to others). 2) How would you ever be able to tell order of file creation based on file names? Were the files named in some specific way that guarantees that?
C:\Users\USERNAME\Documents, but USERNAME for me would be Jakob, and what about for others? Isn't there something that works everywhere? And I can see the newest file by a date added at the end of it, even the time.
For your first question, it depends how robust you want your script to be. You could use Lua's builtin os.getenv() to get a variety of environment variables related to the user, such as USERNAME, USERPROFILE, HOMEDRIVE, HOMEPATH. Example:
username = os.getenv('USERNAME')
dir = 'C:\\users\\' .. username .. '\\Documents'
For the second question, there is no builtin mechanism in Windows to put the file creation or modification timestamp in the filename. You could read the creation or modification timestamp via a C extension you create, or using an existing Lua library like lfs. Or you could read the contents of a folder and parse the filenames if they were named according to the pattern you mention. Again, there is nothing built into Lua to do this; you would either use os.execute() or lfs or, again, your own C extension module, or combinations of these.
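For example, with the LuaFileSystem (lfs) library, a sketch of finding the most recently modified file in a folder might look like this (the helper name and the Documents path construction are my own; error handling omitted):

```lua
local lfs = require("lfs")

-- Return the name of the most recently modified file in dir (nil if empty).
local function newest_file(dir)
  local newest_name, newest_time
  for name in lfs.dir(dir) do
    if name ~= "." and name ~= ".." then
      local path = dir .. "\\" .. name
      if lfs.attributes(path, "mode") == "file" then
        local mtime = lfs.attributes(path, "modification")
        if mtime and (not newest_time or mtime > newest_time) then
          newest_name, newest_time = name, mtime
        end
      end
    end
  end
  return newest_name
end

local docs = "C:\\Users\\" .. os.getenv("USERNAME") .. "\\Documents"
print(newest_file(docs))
```

Note this uses the filesystem's modification timestamp, which is more reliable than anything encoded in filenames.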
thanks! well I thought I could get around the .dll thing or anything with c/c++
Not sure what you mean by "get around the .dll thing or anything with c/c++": you build a .dll using C/C++. I don't think it is clear based on your comment whether this answers your question or not. If it does, please mark it as accepted; otherwise please comment clearly on what is lacking so we can help.
I got it all sorted out
@SpecialLUANewbie Great. Feel free to pay forward by posting what worked (via a comment or an answer) so others can benefit from everyone's efforts.
Prolog for the Mac
For one of my classes I need to write several Prolog programs. Can someone suggest a Mac-friendly Prolog compiler? I tried GNU Prolog but it doesn't work for me.
As I said there, I would try packaged versions. Fink has gprolog and swi-prolog.
Hey, I just tried that. But not having any luck. Created a new post: http://superuser.com/questions/146988/mac-os-x-trying-to-install-prolog-using-fink
Artificial Intelligence for a simple game
So currently I am working on a simple game for fun. In the game you walk around a map, meet foes, and enter battles similar to those in Pokemon. I have finished almost everything in the game and now want to implement AI for the non-playable fighters (the foes). The game is pretty simple and there is very little complexity in terms of the actions that I or the foe can take. The possible actions the foe can take are to block and to use attacks (the attacks vary in damage and cost). So my question is: what data structure or method would be most suitable for this purpose? As I said, it is not meant to be really complicated, but it should be somewhat challenging. I have read about some ways in which AI is implemented in games, such as action lists, but I feel these methods are way too complicated for a game like this.
You've given too little information to give an intelligent answer.
What more information is needed? My question is simply: What is a basic way to implement simple AI?
AI can be implemented MILLIONS of ways based on what you want it to do. What are the foe's abilities? "The possible actions the foe can do are to block and use attacks (the attacks vary in terms of damage and cost)" isn't a lot of information. Can they move? If so, how? In which directions? How are they going to determine where they move? How are they going to determine which attacks to use? When I think "game AI", I think algorithms over data structures.
The possible actions are only the ones I mentioned, no moving or anything. Would using many "if" statements be a good idea then?
As pointed out by @Frecklefoot, your question is extremely broad. However, if you've got a simple game, start by putting together a decision tree for you foes (given a state of the game, what are the possible actions a foe can take and how would it decide between them) and then implement using if-then or switch statements.
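Expanding on the decision-tree suggestion, here is a tiny, self-contained sketch of such a rule (the Attack fields and the class names are assumptions, not from the question): block when no attack is affordable, otherwise use the strongest affordable attack.

```java
import java.util.List;

public class SimpleFoeAI {
    // Hypothetical attack: a name, the damage it deals, and its energy cost.
    public record Attack(String name, int damage, int cost) {}

    // Decision rule: block when no attack is affordable,
    // otherwise pick the highest-damage attack we can pay for.
    public static String decide(int energy, List<Attack> attacks) {
        Attack best = null;
        for (Attack a : attacks) {
            if (a.cost() <= energy && (best == null || a.damage() > best.damage())) {
                best = a;
            }
        }
        return best == null ? "block" : best.name();
    }

    public static void main(String[] args) {
        List<Attack> moves = List.of(
                new Attack("jab", 2, 1),
                new Attack("fireball", 8, 5));
        System.out.println(decide(10, moves)); // fireball
        System.out.println(decide(3, moves));  // jab
        System.out.println(decide(0, moves));  // block
    }
}
```

A few such if/else rules per foe type already give noticeably different "personalities" without any heavy AI machinery.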
Here is a very simple class for "AI" for a game you described. As you can see, this is just a partial solution. You'll have to come up with the Attack class (just damage and cost, right?) and the AttackResult class.
I would use this as a dependency for your Foe, so you can just inject his brain when you construct him. That way, you can inherit from this class and create different types of enemies (some smart, some stupid, etc.):
public class FoeBrain {
    private boolean block;
    private ArrayList<Attack> attacks;

    public FoeBrain() {
        block = false;
        attacks = new ArrayList<Attack>();
        ...
    }

    public AttackResult determineAttack() {
        ...
    }
}
Add whatever other methods you think it needs. It might need some massaging; it's been a while since I developed in Java.
Orbeon Forms: Dynamic label not working in nested section
I am using Orbeon Forms 2<IP_ADDRESS>912301747 CE.
My form structure looks like this:
<s-2>
<s-2-iteration>
<s-2-position>
...
<s-2.7>
<s-2.7-iteration>
<s-2.7-position>
...
</s-2.7-position>
</s-2.7-iteration>
</s-2.7>
...
</s-2-position>
</s-2-iteration>
</s-2>
The <...-position> tags are sections in which I would like to have dynamic labels like "Position no X", where X is the repeat number. I've done that for the <s-2-position> tag using xxf:repeat-position() in Section Settings/Label/Template Parameters (screen).
But when I tried to do that in the <s-2.7-position> tag, which is nested in <s-2-position>, it did not work. The label was blank when running the form. Furthermore, when I tried to use any sort of dynamic label in <s-2.7-position>, or in any other tag inside <s-2.7-position>, it did not work either.
So, have you ever encountered this problem? What is the solution/workaround? Is that an Orbeon Forms error?
Thank you!
I have found the cause of the problem. After renaming the tag <s-2.7-position> into anything not starting with s-2.7, dynamic labels with template parameters are working, including xxf:repeat-position().
Or rather, anything not including s-2.7, even separated.
I have found the cause of the problem: after renaming the tag <s-2.7-position> into anything not including s-2.7 (even separated), dynamic labels with template parameters are working, including xxf:repeat-position().
Indeed, it would make sense for xxf:repeat-position() to work in this case, or to have higher-level Form Runner-specific functions. This is covered by the request for enhancement (RFE) #4144. In the meantime, you can try, as mentioned in the RFE, something like:
count(../preceding-sibling::*) + 1
I also added a +1 from OP on #4144. ‑Alex
Thanks for the answer, but my problem was even more general. Any template parameter (even the XPath expression 1) was giving an empty label as a result. I found another cause of it and described it in a comment to the question. After the fix, even xxf:repeat-position() is working.
And I think the above is a good candidate for a to-fix issue on your GitHub as well.
@KamilKoszarny I read your comments on the question, but don't quite understand what the situation is. I imagine that this is with a form you created in Form Builder? Does the problem also happen without doing any changes to the source of the form? If so, would it be possible for you to create a minimal example, and share the source code with us?
Sure, I've posted example in today's answer below. Form was created in Form Builder and can be reproduced without direct changes in source (only using GUI) as well.
@KamilKoszarny I read your comments on the question, but don't quite understand what the situation is. I see you posted the source of a form under "Not working form" in an answer. (BTW, answers on Stack Overflow are supposed to be for answers, not for additional information.) I imported this as a form on https://demo.orbeon.com/demo/fr/orbeon/builder/edit/6fb69d9054577b13915ee295a2d759b024a35962. What should I do next to reproduce the problem, i.e. what are the steps? What is the behavior you observe? And what is the behavior you expect?
Sorry, I did not manage to fit the form here. If you change s-foo in this form into s-2.7-pozycja, then the label of this section is not shown (again, sorry for the confusion). The expected behavior is a label "Position X", where X is the repetition number.
In the edited answer below I uploaded images with the wrong and the expected behavior. I also changed "Not working form" into "Working form", as the posted form includes the described fix (again, sorry for the confusion).
After changing the names of the s-2.7-pozycja tags in the form below into s-foo, the labels of those tags show correctly. Before, the labels were not shown at all (empty).
Working form:
<xh:html xmlns:xh="http://www.w3.org/1999/xhtml"
xmlns:xxf="http://orbeon.org/oxf/xml/xforms"
xmlns:fr="http://orbeon.org/oxf/xml/form-runner"
xmlns:xf="http://www.w3.org/2002/xforms"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:fb="http://orbeon.org/oxf/xml/form-builder"
fr:data-format-version="4.0.0">
<xh:head>
<xh:title>PL825 - Projekt podziału przemieszczenia</xh:title>
<xf:model id="fr-form-model" xxf:expose-xpath-types="true" xxf:analysis.calculate="true"
xxf:hint.appearance="tooltip">
<!-- Main instance -->
<xf:instance id="fr-form-instance" xxf:exclude-result-prefixes="#all" xxf:index="id">
<form>
<s-2>
<s-2-iteration>
<s-2-pozycja>
<s-2.7>
<s-2.7-iteration>
<s-foo>
<grid-7>
<control-1/>
</grid-7>
</s-foo>
</s-2.7-iteration>
</s-2.7>
</s-2-pozycja>
</s-2-iteration>
</s-2>
</form>
</xf:instance>
<!-- Bindings -->
<xf:bind id="fr-form-binds" ref="instance('fr-form-instance')">
<xf:bind id="s-2-bind" ref="s-2" name="s-2">
<xf:bind id="s-2-iteration-bind" ref="s-2-iteration" name="s-2-iteration">
<xf:bind id="s-2-pozycja-bind" ref="s-2-pozycja" name="s-2-pozycja">
<xf:bind id="s-2.7-bind" ref="s-2.7" name="s-2.7">
<xf:bind id="s-2.7-iteration-bind" ref="s-2.7-iteration" name="s-2.7-iteration">
<xf:bind id="s-foo-bind" ref="s-foo" name="s-foo">
<xf:bind id="grid-7-bind" ref="grid-7" name="grid-7">
<xf:bind id="control-1-bind" ref="control-1" name="control-1" xxf:whitespace="trim"/>
</xf:bind>
</xf:bind>
</xf:bind>
</xf:bind>
</xf:bind>
</xf:bind>
</xf:bind>
</xf:bind>
<!-- Metadata -->
<xf:instance id="fr-form-metadata" xxf:readonly="true" xxf:exclude-result-prefixes="#all">
<metadata>
<application-name>EMSC</application-name>
<form-name>PL825</form-name>
<title xml:lang="pl">PL825 - Projekt podziału przemieszczenia</title>
<description xml:lang="pl"/>
<created-with-version>2<IP_ADDRESS>912301747 CE</created-with-version>
<updated-with-version>2<IP_ADDRESS>912301747 CE</updated-with-version>
<library-versions>
<orbeon>1</orbeon>
</library-versions>
</metadata>
</xf:instance>
<!-- Attachments -->
<xf:instance id="fr-form-attachments" xxf:exclude-result-prefixes="#all">
<attachments/>
</xf:instance>
<!-- All form resources -->
<xf:instance xxf:readonly="true" id="fr-form-resources" xxf:exclude-result-prefixes="#all">
<resources>
<resource xml:lang="pl">
<control-1>
<label/>
<hint/>
</control-1>
<s-2.7>
<label>2.7 </label>
</s-2.7>
<s-foo>
<label>Position {$nr_poz}</label>
</s-foo>
<s-2>
<label>2. </label>
</s-2>
<s-2-pozycja>
<label>Position {$nr_poz}</label>
</s-2-pozycja>
</resource>
</resources>
</xf:instance>
<xf:instance xxf:readonly="true" xxf:exclude-result-prefixes="#all" id="s-2-template">
<s-2-iteration>
<s-2-pozycja>
<s-2.7>
<s-2.7-iteration>
<s-foo>
<grid-7>
<control-1/>
</grid-7>
</s-foo>
</s-2.7-iteration>
</s-2.7>
</s-2-pozycja>
</s-2-iteration>
</xf:instance>
<xf:instance xxf:readonly="true" xxf:exclude-result-prefixes="#all" id="s-2.7-template">
<s-2.7-iteration>
<s-foo>
<grid-7>
<control-1/>
</grid-7>
</s-foo>
</s-2.7-iteration>
</xf:instance>
</xf:model>
</xh:head>
<xh:body>
<fr:view>
<fr:body xmlns:p="http://www.orbeon.com/oxf/pipeline" xmlns:xbl="http://www.w3.org/ns/xbl"
xmlns:oxf="http://www.orbeon.com/oxf/processors">
<fr:section id="s-2-section" bind="s-2-bind" repeat="content" min="2"
template="instance('s-2-template')"
apply-defaults="true"
fb:initial-iterations="first">
<xf:label ref="$form-resources/s-2/label"/>
<fr:section id="s-2-pozycja-section" bind="s-2-pozycja-bind">
<xf:label ref="$form-resources/s-2-pozycja/label">
<fr:param type="ExpressionParam">
<fr:name>nr_poz</fr:name>
<fr:expr>xxf:repeat-position()</fr:expr>
</fr:param>
</xf:label>
<fr:section id="s-2.7-section" bind="s-2.7-bind" repeat="content" max="99"
template="instance('s-2.7-template')"
apply-defaults="true"
fb:initial-iterations="first">
<xf:label ref="$form-resources/s-2.7/label"/>
<fr:section id="s-foo-section" bind="s-foo-bind">
<xf:label ref="$form-resources/s-foo/label">
<fr:param type="ExpressionParam">
<fr:name>nr_poz</fr:name>
<fr:expr>xxf:repeat-position()</fr:expr>
</fr:param>
</xf:label>
<fr:grid id="grid-7-grid" bind="grid-7-bind">
<fr:c x="1" y="1" w="12">
<xf:input id="control-1-control" bind="control-1-bind">
<xf:label ref="$form-resources/control-1/label"/>
<xf:hint ref="$form-resources/control-1/hint"/>
<xf:alert ref="$fr-resources/detail/labels/alert"/>
</xf:input>
</fr:c>
</fr:grid>
</fr:section>
</fr:section>
</fr:section>
</fr:section>
</fr:body>
</fr:view>
</xh:body>
</xh:html>
Wrong behavior
Expected behavior
Why does U.S. airspace revert to class E above flight level 600?
According to Wikipedia, airspace defaults to class E below 18,000 feet AMSL (that is, all airspace below 18,000 feet AMSL is class E unless otherwise specified), whereas all airspace from 18,000 feet AMSL through FL600 is class A... but, above FL600, U.S. airspace reverts to class E.
Why does U.S. airspace revert to class E above FL600, instead of simply staying class A all the way up?
@abelenky: Quoth the Federal Aviation Administration's Aeronautical Information Manual (12 October 2017 edition), page 3-2-9 (page 137\732 of the PDF): "The airspace above FL 600 is Class E airspace."
You ask why doesn't it go all the way up... but all the way up to where? It's somewhat of an arbitrary limit, but FL600 tends to generally be the top of the troposphere, so it seems like a good choice if we're picking numbers.
One reason could be that the FAA AIM (Airman's Information Manual) says that the top of the service volume for the high-altitude VORs is 60,000 feet above the VOR facility, so it seemed like a good altitude at which to end the Class A requirement that all aircraft be on an IFR flight plan. If you're tooling around above FL600, perhaps you don't want/need to be on an IFR flight plan (think experimental planes, rocket launches, etc.).
It could also be that when they set up the airspace, there wasn't a whole lot going on above 60,000 feet: survivability at that altitude without a space suit is low, so there weren't many aircraft that high, and with the top of the troposphere around that altitude, it just seemed like a good number. Most commercial aircraft have a service ceiling well below FL600 (the Boeing 777's ceiling is FL431), and since Class A is positively controlled airspace with everyone on an IFR flight plan, aircraft above FL600 are uncommon, so it didn't make sense (at the time) to spend resources forcing things that high through some kind of waiver process, or to maintain the services required to provide IFR separation for aircraft that high.
However, as we look to the future, there is a great report from Embry-Riddle about Safe Operations Above FL600 that sheds some light on what goes on above FL600 and considerations that should be taken into account going forward as the airspace starts to get more crowded with drones, and planned spaceflight and supersonic transports.
"... but all the way up to where?" The Kármán line would seem to be a sensible upper boundary.
No, it is not a sensible boundary. There is no traffic up that high to manage.
@abelenky yeah, most airplanes used for commercial service have a service ceiling well below FL600 (777 ceiling is FL431), so if you think about what's up there above FL600 it's probably stuff that the FAA doesn't need to force on an IFR flight plan. It'd be military stuff or rocket launches, mostly.
@Canuk: The link you provided to the 777X's specifications does not have the service ceiling listed; were you perhaps meaning to link to the specifications section of the main 777 page, which does?
Average top of troposphere is FL360. FL600 is not related to any interesting property of atmosphere as it is usually somewhere in the middle of stratosphere.
Airspace definitions are primarily about how Air Traffic Control handles mostly commercial and GA traffic, with occasional military traffic. They manage traffic primarily so they don't hit each other and have a mid-air collision. I don't know of any commercial traffic that goes above 60,000 ft. Even the Concorde's altitude record was FL600.
Only a few rare military and space-exploration flights are capable of flight above FL600, and that is on relatively rare occasions, so there are no traffic conflicts.
Even if there were traffic conflicts, ATC's typical control mechanisms (radio voice call and response) probably wouldn't be fast enough to de-conflict airplanes travelling at the hypersonic speeds involved at extremely high altitudes.
In one of your comments, you suggested the Kármán line (around 330,000ft) would be a sensible boundary point.
Why would the FAA spend people, time, money and resources to monitor empty space?
How to tile an ASP.NET Repeater control horizontally
I am using the ASP.NET Repeater control to get pictures from a picture library in SP2010. I am able to display them vertically, one per row. I now want to tile the pictures as follows:
How do I do that? Any help would be appreciated.
Here is the markup
<table width="60%" cellpadding="0" cellspacing="0" border="0">
<tr>
<asp:Repeater ID="rptrSearchResults" runat="server" OnItemDataBound="rptrSearchResults_ItemDataBound">
<HeaderTemplate>
</HeaderTemplate>
<ItemTemplate>
<td >
<div>
<a id="aItemURL" runat="server" href='<%#DataBinder.Eval(Container,"DataItem.itemURL")%>'>
<img width="100" height="100" id="imgPhoto" runat="server" src='<%#DataBinder.Eval(Container,"DataItem.PhotoURL")%>' /></a>
</div>
</td>
</ItemTemplate>
</asp:Repeater>
</tr>
</table>
With CSS? What markup is generated?
code is inserted in the body of this topic. Yes with css or any other methods
I used DIV and applied css.
I wrapped the images in a div with an ID of "repeat" and float: left CSS on the div.
That worked.
In this case, I recommend using an ASP.NET DataList instead of the Repeater. You can still specify all the templates like in the case of the Repeater (Header, Footer, Item, Separator, Alternating) but you have more control over the direction of the flow and the number of columns. Check out please the RepeatDirection, RepeatLayout and RepeatColumns properties of the control.
In your case I would do:
<asp:DataList ID="dlistSearchResults" runat="server" OnItemDataBound="rptrSearchResults_ItemDataBound" RepeatDirection="Horizontal" RepeatLayout="Flow">
<HeaderTemplate>
</HeaderTemplate>
<ItemTemplate>
<a id="aItemURL" runat="server" href='<%#DataBinder.Eval(Container,"DataItem.itemURL")%>'>
<img width="100" height="100" id="imgPhoto" runat="server" src='<%#DataBinder.Eval(Container,"DataItem.PhotoURL")%>' /></a>
</ItemTemplate>
</asp:DataList>
It will automatically fill the available horizontal space with pictures and continue on the next row when it reaches the right margin.
I hope it helped!
Is there a way to disable video rendering in OpenAI gym while still recording it?
When I use the atari environments and the Monitor wrapper, the default behavior is to not render the video (the video is still recorded and saved to disk). However in simple environments such as MountainCarContinuous-v0, CartPole-v0, Pendulum-v0, rendering the video is the default behavior and I cannot find how to disable it (I still want to save it to disk).
I am running my jobs on a server and the officially suggested workaround with xvfb does not work. I saw that a lot of people had problems with it as it clashes with nvidia drivers. The most common solution I found was to reinstall nvidia drivers, which I cannot do as I do not have root access over the server.
Yes, you can pass the video_callable=False kwarg to gym.wrappers.Monitor():
import gym
from gym import wrappers
env = gym.make(env_name) # env_name = "Pendulum-v0"
env = wrappers.Monitor(env, aigym_path, video_callable=False, force=True)
then you can use
s = env.reset() # do this for initial time-step of each episode
s_next, reward, done, info = env.step(a) # do this for every time-step with action 'a'
to run your episodes
NameError: name 'aigym_path' is not defined
aigym_path is the directory where the rendered videos are saved, unless you set video_callable to False. Give it a directory string, something like '/home//bla/bla/savedFilms'
Call this function before calling env.render(), since the rendering module is not imported before your first render() call, and this function replaces the default viewer constructor.
def disable_view_window():
from gym.envs.classic_control import rendering
org_constructor = rendering.Viewer.__init__
def constructor(self, *args, **kwargs):
org_constructor(self, *args, **kwargs)
self.window.set_visible(visible=False)
rendering.Viewer.__init__ = constructor
How do I get to the value in the 'scriptPubKey' part of the transaction ?
The sender knows only the address (1A3XjcuZmcszX2tGoVn1TrMNchuHwdAxZV), so how did he get to 63339ebf4914d964fbc729b0dc4c1a44c4fb8f80?
I've tried ripemd160(sha256(address)); it didn't work.
I've tried sha256(sha256(address)); that didn't work either.
Addresses are kept in Base58Check format. Here's how you decode it.
Decode the base58 encoding (similar to Base64). You should have 25 bytes.
Check that the 1st byte is 0x00 (the version byte of Bitcoin)
Check that the last 4 bytes are a correct checksum of the rest. This is done (in Python) by:
sha256(sha256(data[0:21]))[:4] == data[-4:]
(Or, "take the first 4 bytes of a double-SHA256 of the first 21 bytes of the decoded data, then compare to the last 4 bytes of the decoded data.")
Take the middle 20 bytes (data[1:21]) and insert it into the following scriptPubKey.
OP_DUP OP_HASH160 <x> OP_EQUALVERIFY OP_CHECKSIG
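For illustration, the four steps above can be sketched in Python using only the standard library. (This is just a sketch, not a substitute for the libraries listed below; the alphabet is the standard Bitcoin base58 alphabet.)

```python
import hashlib

# Standard Bitcoin base58 alphabet (no 0, O, I, l)
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def address_to_hash160(addr):
    # Step 1: decode base58 into an integer, then into 25 bytes
    n = 0
    for ch in addr:
        n = n * 58 + ALPHABET.index(ch)
    data = n.to_bytes(25, "big")
    # Step 2: the version byte must be 0x00 for a mainnet P2PKH address
    assert data[0] == 0x00
    # Step 3: the last 4 bytes are the first 4 bytes of a double-SHA256 of the first 21
    checksum = hashlib.sha256(hashlib.sha256(data[:21]).digest()).digest()[:4]
    assert checksum == data[-4:]
    # Step 4: the middle 20 bytes are the hash160 that goes into the scriptPubKey
    return data[1:21]
```

Decoding the address from the question this way yields the 20-byte value that appears between OP_HASH160 and OP_EQUALVERIFY.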
Here are some libraries that will do steps 1-3:
libbase58 (C)
base58perl (Perl)
python-bitcoinlib (Python)
bitcoinj (Java)
byte[] hash160 = new Address(NetworkParameters.prodNet(),
<addr>).getHash160();
coinstring.js (node.js)
require('coinstring').decode(<addr>)
Various examples on rosetta code
Worked well, except:
"Check that the last four bytes are a correct checksum"... of what?
@HaddarMacdasi Does the edit make it more clear?
I'm trying with an online tool. can't seem to work it out. The checksum part.
If you are using C# with NBitcoin
var address = new Script("OP_DUP OP_HASH160 ... OP_EQUALVERIFY OP_CHECKSIG").GetDestinationAddress(Network.Main);
It has the nice effect to work with P2SH addresses too.
QNAP MySQL container: "You need to specify one of the following as an environment variable"
In my QNAP NAS, I've installed a MySQL container, but after trying to start it, there is a console error:
You need to specify one of the following as an environment variable: MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD, MYSQL_RANDOM_ROOT_PASSWORD.
I can't see any way to configure the container with such variables. I can see the configuration in Inspect, but this is not editable. Separate to Inspect, if I click Edit, I cannot add any such variable.
If I click Attach Terminal, there is an error "Container not running", and I can't start the container due to the error above.
Help appreciated.
Can you add what you did during "I've installed a MySQL container"? When creating it there should be an Environment section in Advanced Settings; there you can specify, for example, MYSQL_ROOT_PASSWORD with the value root
Thanks @zori. That fixed that. If I deploy a phpMyAdmin Docker container using the same process, is there any configuration I need to make to have it connect to the MySQL server? When trying to login with root/root to phpMyAdmin, I receive an error Cannot log in to the MySQL server.
You need to provide more details, and probably a new question about what you have done. Normally, as the server name you should provide the mysql container name; you also need a connection between the two containers (if I'm correct, it's called a virtual switch in QNAP). More info can be found on the Docker Hub phpMyAdmin page; it's good to provide the env variable PMA_ARBITRARY=1, which allows you to provide a server name, so it can be an IP or hostname. Unfortunately I don't have a QNAP to give more details
New question here @zori.
As we established in the comments, when creating the MySQL container there should be an Environment section in Advanced Settings; there you can specify, for example, MYSQL_ROOT_PASSWORD with the value root
Before creating any container I encourage you to read the documentation on https://hub.docker.com; in this case it will be https://hub.docker.com/_/mysql. There you can find a lot of information that will help you prepare it correctly
official icon for Touch ID and Face ID
Does Apple provide icons for Touch ID and Face ID that can be used in Apps?
I cannot find them. Searched in:
https://developer.apple.com/ios/human-interface-guidelines/icons-and-images/system-icons/
https://developer.apple.com/ios/human-interface-guidelines/user-interaction/authentication/
https://developer.apple.com/documentation/localauthentication
The Apple MacBook Pro site provides one, but I think it's not allowed to use it.
How do you solve that in your code? Designing an own icon?
The icons for Touch ID and Face ID now can be found in SFSymbols.
Use the symbol names "touchid" (iOS 14.0+, macOS 11.0+) and "faceid" (iOS 13.0+, macOS 11.0+).
No. Apple recommends (in the links you provided) that you refer to the technology by name. eg:
Sign In with Face ID
or
Sign In with Touch ID
Get the list of registered view for navigation with Prism
I am working on a modular application with Prism DryIoc on WPF
In my modules I have views registered for navigation like this
public void RegisterTypes(IContainerRegistry containerRegistry)
{
containerRegistry.RegisterForNavigation<Manual1View>();
containerRegistry.RegisterForNavigation<Manual2View>();
containerRegistry.RegisterForNavigation<ProductionView>();
}
Is there a way to find the list of currently registered views for navigation directly from Prism or the Container ?
What do you want to achieve? If you want those views to automatically show up in a region, register them for that region (instead of registering for navigation and navigating by hand).
I want to be able to list them in some configurable HMI, for exemple I have some items controls with buttons that will perform RequestNavigate to a view and I want the user to be able to choose a registered view.
Better not abuse the container, then, and roll out your own registry...
You should roll your own registry for those views you want the user to be able to select from. That one could also do the registration for navigation, so you don't have to duplicate the registration code.
internal class MachineModeRegistry : IMachineModeRegistry
{
public MachineModeRegistry(IContainerRegistry containerRegistry)
{
_containerRegistry = containerRegistry;
}
#region IMachineModeRegistry
public void RegisterView<T>()
{
_containerRegistry.RegisterForNavigation<T>(typeof(T).Name);
_listOfViews.Add( typeof(T).Name );
}
public IReadOnlyCollection<string> RegisteredViews => _listOfViews;
#endregion
#region private
private readonly List<string> _listOfViews = new List<string>();
private readonly IContainerRegistry _containerRegistry;
#endregion
}
and in the app or bootstrapper's RegisterTypes
_containerRegistry.RegisterInstance<IMachineModeRegistry>(new MachineModeRegistry(_containerRegistry));
and in the modules'
_containerRegistry.Resolve<IMachineModeRegistry>().RegisterView<Manual1View>();
Note: Resolve in RegisterTypes is evil and error-prone, but can't be avoided here.
Note: you can't inject IContainerRegistry, therefore we use RegisterInstance (registering the container registry instead would have been very evil)
Is there a way to find the list of currently registered views for navigation directly from Prism or the Container ?
Not using the IContainerRegistry interface but using the DryIoc implementation:
if (containerRegistry is DryIocContainerExtension dryIoc)
{
IContainer container = dryIoc.Instance;
Type[] types = container.GetServiceRegistrations()?
.Where(x => !string.IsNullOrEmpty(x.OptionalServiceKey?.ToString()))
.Select(x => x.ImplementationType)
.ToArray();
}
Doesn't this get all named registrations (not just the views)?
I will try this, but I think I will also find all the other registrations (services and so on, but may be it can be filtered)
GetServiceRegistrations gets all registrations, so you have to filter them, for example using the OptionalServiceKey as I showed in the answer. If this filtering isn't good enough, you'll have to keep track of the views yourself one way or another.
Convert WPF BitmapSource to Icon for window
I have a 16x16 .png file which I have loaded as an ImageSource (BitmapSource) and it is working fine when I use it on an Image in a tabcontrol header.
I now want to use that same image in a floating window (inherited from the WPF Window class) when the user drags the document tab. (This is AvalonDock which I have tweaked to allow images in the tab header)
After many searches on the web, I understand that Window.Icon requires a BitmapFrame but all the sample code seems to assume that a .ico file is available which it isn't in my case.
I have tried the following code (plus variants including cloning, freezing etc):
var image = (Image) content.Icon;
var bitmapSource = (BitmapSource) image.Source;
Icon = BitmapFrame.Create(bitmapSource);
but when the Show() method is called, an exception is thrown: "Exception of type 'System.ExecutionEngineException' was thrown."
How can a I create a compatible bitmap on the fly to allow the Window to display the icon?
Do you need to be able to load the image dynamically, or will you be using the same icon all the time?
If the image isn't determined at runtime then you could always just convert the image manually to produce a .ico file using Microangelo or something similar. Obviously this doesn't help if you actually do need to create the icon on the fly.
I suppose I'm asking how to create an icon resource in memory on the fly at runtime.
It turns out that BitmapFrame.Create was correct; it just needs to be passed a Uri rather than an existing BitmapSource.
Let $a,x \in \mathbb{Z}$ and $n \in \mathbb{N}$. Suppose that $ax ≡ 1 \mod n$. Prove that $a$ is coprime to $n$
Let $a,x \in \mathbb{Z}$ and $n \in \mathbb{N}$. Suppose that $ax ≡ 1
\mod n$. Prove that $a$ is coprime to $n$.
I have shown that $1=ax-ny$ for some $y \in \mathbb{Z}$, but I don't know if this is sufficient (i.e. whether Bézout's lemma is an $\iff$ situation).
Help please.
$d\mid\color{#c00}{a,n}\,\Rightarrow\, d\mid \color{#c00}{a}x\!-\!\color{#c00}{n}y = 1$, i.e. $d\mid 1$, so $a,n$ have only $d=\pm1$ as common divisors.
Beautiful answer.
With $d=\gcd(a,n)$, we conclude from $ax\equiv 1\pmod n$ that $0\equiv ax\equiv 1\pmod d$.
With your ansatz: Assuming that $a$ and $n$ are multiples of $d$, then so is $ax-ny$, so $d\mid 1$.
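The equivalence can also be sanity-checked numerically. Here is a small Python sketch (purely illustrative; the modulus 35 is an arbitrary choice) confirming that $a$ has an inverse mod $n$ exactly when $\gcd(a,n)=1$:

```python
from math import gcd

n = 35
for a in range(1, n):
    # a is invertible mod n iff some x gives a*x ≡ 1 (mod n)
    has_inverse = any((a * x) % n == 1 for x in range(1, n))
    assert has_inverse == (gcd(a, n) == 1)
```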
Well, how about going the other way? Contrapositive proof:
If gcd(a,n) = b $\neq$ 1
Then we have the set:
$\{at + nu : t,u \in \mathbb{Z}\} = b\mathbb{Z}$ (quite easy to show why they are equal)
So, let's say $n = bq$. Every element of $b\mathbb{Z}$ is a multiple of $b$, but if some $ax \equiv 1 \pmod n$, then $1 = ax - kn$ for some $k \in \mathbb{Z}$ would itself lie in $b\mathbb{Z}$, which is impossible for $b \neq \pm 1$. Due to the way I defined the set, it contains all $ax$ such that $x \in \mathbb{Z}$, so no $ax$ can be $\equiv 1 \pmod n$.
Ubuntu shows wrong graphics card
I installed Ubuntu 12.04.2 LTS on my Dell Inspiron laptop. I have an Nvidia GeForce GT 525M graphics card but lspci shows 540M? I am unable to install nvidia drivers for this card now.
I also asked another question about unable to install nvidia drivers here.
I tried the solution mentioned in this answer, but doesn't work.
Sorry if it is inappropriate to ask two questions about the same thing...
Edit:
Result of
lspci| grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev a1)
optirun glxspheres
[ 1053.429303] [ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to initialize the NVIDIA kernel module. Please see the
[ 1053.429348] [ERROR]Aborting because fallback start is disabled.
These two cards are identical aside from how fast the core/processor and memory runs on the cards.
See the comment made by Lekensteyn here: http://askubuntu.com/questions/240825/how-to-make-nvidia-optimus-gt540m-work-on-ubuntu-12-10 Here too the next comment states it DOES work. If not it might be worth while to edit your question with the results from the commands used in that topic.
Rinzwind: Thanks for the solution, now I do have some drivers installed, but on running a simple CUDA program it shows CUDA driver version is insufficient for CUDA runtime version. Would you be able to help out with this?
ordered list two column flow
I want a preferably CSS-only technique for getting an ol to flow into two columns if it is longer than the height of the container. The number of items in the list may vary and the height of the container may change.
When I try setting the li to width:50% and float:left it goes in two columns but 2 goes beside 1 instead of below it.
what I want to achieve is this:
1. abcdef 4. abcdef
2. abcdef 5. abcdef
3. abcdef
Have you tried this? link Automatic two columns with CSS or JavaScript
This will work for modern browsers (i.e. not IE) using the column-count and column-gap rules:
JSFiddle demo
ol#two-columns {
-moz-column-count: 2;
-moz-column-gap: 20px;
-webkit-column-count: 2;
-webkit-column-gap: 20px;
column-count: 2;
column-gap: 20px;
}
The cross-browser option would be:
define two DIVs inside the OL and float them to the left
pre-calculate the total number of LI's
if the total exceeds the capacity of one DIV, put the rest in the second DIV
Starting to use PHP namespaces in an existing repository, all files give syntax errors
Having replaced the normal <?php with <?php namespace foo; on each file in my main source I received the error:
PHP Fatal error: Namespace declaration statement has to be the very first statement in the script in src/admin_house_videos.php on line 1
On a majority of files. Looking around the internet people suggest that something called UTF8-BOM is to blame, but how can I get rid of this?
Why do you only ask and answer your own questions? Very weird...
I answer a lot of questions and many of my questions have been answered by other people. However when I solve problems for which I haven't discovered a complete end-to-end solution to online I like to post them to help others, which is why the site encourages Q&A style answers.
Not saying there's anything wrong with it - I just went back through your post history and 99% of your questions were answered and accepted by yourself. No big deal, they seem to add value, just odd.
It turned out, helpfully, that Sublime Text was able to fix this problem on a per-file basis:
File -> Save with Encoding -> UTF8
However on a large repository this would be time consuming. I then found this guide, which suggested that the unix tool awk could be used to strip the BOM from the file:
awk '{if(NR==1)sub(/^\xef\xbb\xbf/,"");print}' file.php
However trying to write this back to the file by appending > file.php blanked the file: the shell truncates the output file before awk reads it, so the input is empty by the time awk runs.
So a bash script needed to be written to solve the issue; in my case it was run from a root where the files were in ./src/ but change the $dir parameter to alter this. The echo line is just to report on progress.
#!/bin/bash
dir="./src/"
for file in `find $dir -name "*.php"` ; do
echo $file
awk '{if(NR==1)sub(/^\xef\xbb\xbf/,"");print}' $file > $file.awk
mv $file.awk $file
done
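If you prefer not to depend on awk's escape handling, the same BOM stripping can be sketched in Python (a hypothetical alternative, not part of the original workflow):

```python
import os

BOM = b"\xef\xbb\xbf"

def strip_bom(path):
    """Remove a leading UTF-8 BOM from the file at `path`, in place, if present."""
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(BOM):
        with open(path, "wb") as f:
            f.write(data[len(BOM):])

def strip_tree(root="./src/"):
    # Walk the source tree, like the bash loop above
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".php"):
                strip_bom(os.path.join(dirpath, name))
```

Unlike the awk one-liner, this reads the whole file into memory, which is fine for source files but worth knowing about.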
Is there more police brutality in cities controlled by the Democratic party?
Shaun King posted this claim on Twitter:
Democrats, from top to bottom, are running the cities with the worst
police brutality in America right now.
Is this true? It seems to have been picked up by several right wing news outlets, including Fox, but none of them seem to have actually verified it or provided any statistical evidence.
https://twitter.com/shaunking/status/1268911183878410246?s=20
Obvious caveat: Democrats are running most big cities in the USA...
True, and also areas with large black populations tend to vote Democrat.
Lots of related talk (regarding cities being mostly Democrat-ruled) in this question.
@Evargalo, are you saying that police brutality is more common in larger cities than smaller ones?
@PaulDraper I have no statistics to support this claim, but intuitively I think that's very possible.
A major problem here is how to define "worst police brutality in America"? I don't think we can have any conclusive evaluation of Shaun King's statement without a clear definition and an appropriate collection of data. However, as a first approximation, let's take the following list of cities ranked by the rate of people killed by police per million inhabitants, 2013-2019.
St Louis is extreme, and there are many other Democrat-run cities on this list. But Oklahoma City (#2) has had a Republican mayor throughout this period (mostly Mick Cornett), as had Tulsa (#7) (Dewey F. Bartlett Jr. and G. T. Bynum). This is far from conclusive proof of anything, but I do think it suggests there are exceptions to King's generalization.
This dataset uses the 100 largest cities by population. Wikipedia has a list of the mayors for the top 50, of which 35 are Democrats, 13 Republicans and 2 Independents.
I just want to remark that the term police brutality extends far beyond merely 'police killings' (e.g. a suspect's jaw broken while 'resisting arrest', pepper-spraying protestors in the face, or medics successfully saving what would otherwise have been a dead victim, etc.), so it strongly depends on what Shaun King defines as police brutality. All the above data shows is which cities have the highest number of killings per city, not necessarily the highest amount of police brutality.
Good links, but this answer doesn't make clear if the exceptions are exceptions to a trend or a trend.
@PaulDraper It's not clear to me that the original claim is merely about a trend. To me "top to bottom" means an entire list of the worst cities (although it could also mean from the top to the bottom of the local governments). Either way, the claim is weak if the #2 worst city has a Republican mayor.
@BrianZ "top to bottom" is a common phrase for "entire organization": mayor, city council, commissioner, etc. Re "merely about a trend," as you say yourself, it's a "generalization." I believe a good quality answer would help understand whether Shaun King was correct 2/10 or 8/10.
Yes
It's not a very technical or precise claim, but its essence is factually sound.
Using the common measure for "police brutality" as police killings,* a large majority of the cities with high police brutality have Democratic mayors. While King did not qualify it further, this claim remains mostly true even when taking into account party prevalence among mayors.
As usual, do not mistake King's claim of correlation as causation. There might be an underlying factor yielding both results; changing party control wouldn't necessarily reduce policy brutality.
Also, there isn't any claim of statistical significance. Several dozen cities is a relatively small sample size.
Cities with high police killings
Earlier this year, security.org looked at cities with high police brutality: Which cities have the biggest problems with police brutality? (The source data comes from the aggregator Mapping Police Violence.)
(source: security.org)
(The criteria for a "major city" isn't given, but Scottsdale is 84th most populous in the U.S., with 262k people.)
Of the fifteen cities with the most police killings, 12 have Democratic mayors and 3 have Republican mayors.
Laredo - D
Orlando - D
Las Vegas - D
Tucson - D
Scottsdale - R
Mesa - R
Albuquerque - D
Phoenix - D
Columbus - D
Denver - D
Louisville - D
Aurora - R
Atlanta - D
St. Louis - D
Kansas City - D
Cities with low police killings
While the above renders King's statement true, it's relevant context to understand how much of this is due simply to prevalence of Democratic mayors.
Of the fifteen cities with the least police killings, 8 have Democratic mayors and 6 have Republican mayors:
New York - D
Indianapolis - D
El Paso - R
District of Columbia - D
Chicago - D
San Jose - D
Omaha - R
Virginia Beach - R
San Francisco - D
Minneapolis - Other
Wichita - R
Seattle - D
Nashville - D
Corpus Christi - R
Mayor party prevalence
Of the top fifty U.S. cities, there are 35 Democrats and 13 Republicans. It would seem that large cities have mostly Democratic mayors, but not quite so many as to be proportionate to their presence in the earlier list.
* These numbers are for all police killings, whether officially ruled "justified" or not. This approach addresses a common concern about bias in judicial treatment.
Ballparking a 3/4 chance of a large-city mayor being Democratic, the standard deviation for a sample of 15 cities is sqrt(15 * 3/4 * 1/4) = sqrt(45)/4, or approximately 1.7. Since the expected value for the sample would be 3/4 * 15, or roughly 11, having 12 Democratic mayors out of those 15 cities is well within one standard deviation, not even remotely statistically significant.
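That back-of-the-envelope arithmetic can be checked with a few lines of Python (illustrative only, using the comment's own assumptions):

```python
import math

n, p = 15, 3 / 4                  # 15 cities, assumed 3/4 chance of a Democratic mayor
expected = n * p                  # expected Democratic mayors in the sample: 11.25
sd = math.sqrt(n * p * (1 - p))   # binomial standard deviation: sqrt(45)/4, about 1.68
z = (12 - expected) / sd          # 12 observed is under half a standard deviation above
```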
@StevenStadnicki the D-heavy majorships go down as you move towards the top 100 instead of the top 50 cities, but yeah overall I agree with you. Note the claim made no mention of statistical significance of the relationship; just that "Democrats are running the cities with the worst police brutality," without regard to the random or non-random cause of this fact. (Because his intent was to show that partisanship was insufficient.)
Yes, but very likely not for the reason implied by the question and its source. As others have pointed out, you get higher rates of everything in denser populations, which tend not to vote for Republicans, for reasons completely unrelated to police violence. The connecting thread is precisely what #BLM proponents claim to be the problem: the hiring practices, nature, and militarization of police forces across the country. This has not been an issue on the docket of the cities' mayors. It's not that they can't so much as they haven't, and instead rely on internal processes.
It's like comparing charts that have a similar curve and then trying to assume a correlation between them - or asking for proof to explain a phenomenon that might be better explained through more direct means.
The mayors of America’s larger cities, nearly all members of the
Democratic Party and some of whom are black or Latino themselves, must
reckon with political priorities that appear in conflict — living up
to their rhetoric as champions of marginalized communities while
maintaining a close working relationship with police departments often
accused of inflicting harm.
Beyond answering the question is understanding the context in which it exists. The root implication of this claim is that there is some correlation between the party of mayors and police violence, and for that I have yet to see any evidence. There's a more direct connection between police violence and Chiefs of Police.
Install and configure new php version on Mac Catalina
When i got mac, it had already installed php 7.3. I installed new version of php with Brew to /usr/local/Cellar/php/7.4.9/bin/php and edited ~/bash_profile, where i added
export PATH="/usr/local/Cellar/php/7.4.9/bin:$PATH"
export PATH="/usr/local/Cellar/php/7.4.9/sbin:$PATH"
then did source .bash_profile. It worked for the current terminal window, but other terminal windows are still on the older PHP version /usr/bin/php, even after a restart. Even the PhpStorm plugin claims the PHP version is older. Any help?
You can try the following
brew unlink<EMAIL_ADDRESS>brew link<EMAIL_ADDRESS>--force --overwrite
brew services start<EMAIL_ADDRESS>
sadly, as first installation of php was not done with Brew, unlinking with Brew has no effect.
In that case brew upgrade php and then brew unlink php && brew link --overwrite<EMAIL_ADDRESS>may help.
| common-pile/stackexchange_filtered |
shuffle an array with a "limit" in PHP
What I try to achieve: randomize the order of all elements in an array, but allow each element to change its position only by a limited number of "steps".
Say I have an array like below, and I wish to randomize with a limit of 2 steps:
$array = [92,12,2,18,17,88,56];
An outcome could be: [2,12,92,17,18,56,88] (all elements of the array moved a maximum of 2 steps), but it could not be: [56,92,2,12,17,18,88] because in this example 56 moved too far.
I considered using a combination of array_chunk and shuffle, but this is problematic because elements will be shuffled inside their chunk, resulting in elements at the beginning or end of a chunk only moving in one direction. This is what I came up with (and problematic):
// in chunks of 3 an element can move a max. of 2 steps.
$chunks = array_chunk($array, 3);
$newChunks = [];
foreach ($chunks as $chunk){
$keys = array_keys($chunk);
shuffle($keys);
$newChunk = [];
foreach ($keys as $key){
$newChunk[$key] = $chunk[$key];
}
$newChunks[] = $newChunk;
}
Another idea I had was to get the key of the item in the array and with rand add or subtract my limit. For example:
foreach ( $array as $key => $value ) {
$newArray[] = ["key" => $key+rand(-2,2), "value" => $value];
};
This creates a new array with each of its elements being an array with the original value plus a value key that is the original key plus or minus 2. I could flatten this array, but the problem with this is that I can have duplicate keys.
Why not write some code to achieve this? Looks like a pretty good exercise to learn TDD
@NicoHaase, I understand what you are saying. My problem is thinking of a solution and not so much writing code. Meaning I cannot write code that works without having come up with a solution.
Here is a possible solution in one pass :
Try to swap each element at position i with an element between i (stay in place) and i+x. I look only forward to avoid swapping an element several times, and I need an extra array to flag the already swapped elements. I don't need to process them later, as they have already been moved.
function shuffle_array($a, $limit)
{
$result = $a ;
$shuffled_index = array() ; // list of already shuffled elements
$n = count($result);
for($i = 0 ; $i < $n ; ++$i)
{
if( in_array($i, $shuffled_index) ) continue ; // already shuffled, go to the next element
$possibleIndex = array_diff( range($i, min($i + $limit, $n-1)), $shuffled_index) ; // get all the possible "jumps", minus the already-shuffled indexes
$selectedIndex = $possibleIndex[ array_rand($possibleIndex) ]; // randomly choose one of the possible index
// swap the two elements
$tmp = $result[$i] ;
$result[$i] = $result[$selectedIndex] ;
$result[$selectedIndex] = $tmp ;
// element at position $selectedIndex is already shuffled, it needs no more processing
$shuffled_index[] = $selectedIndex ;
}
return $result ;
}
$array = [92,12,2,18,17,88,56];
$limit = 2 ;
shuffle_array($array, $limit); // [2, 18, 92, 12, 17, 56, 88]
I expect more elements to stay in place than in Kerkouch's solution, as some elements can have very few remaining free choices.
Although this function is rather slow when shuffling big arrays, it is reliable even with big limits, which is not the case with Kerkouch's solution (which is great for small limits).
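For anyone who wants to sanity-check the displacement bound, here is a hedged Python port of the same forward-swap idea (the function and variable names are mine, not from the answer above); it repeatedly shuffles a sample array and asserts that no element ever moves more than `limit` positions:

```python
import random

def shuffle_limited(items, limit):
    """Forward-swap shuffle: each element ends up at most `limit`
    positions away from where it started (a Python port of the PHP
    answer above; names are mine)."""
    result = list(items)
    shuffled = set()  # slots already filled by an earlier forward swap
    n = len(result)
    for i in range(n):
        if i in shuffled:
            continue  # already placed, skip it
        # candidate target slots: i .. i+limit, minus already-used ones
        candidates = [j for j in range(i, min(i + limit, n - 1) + 1)
                      if j not in shuffled]
        j = random.choice(candidates)
        result[i], result[j] = result[j], result[i]
        shuffled.add(j)
    return result

# brute-force reassurance: the displacement bound holds on every run
original = [92, 12, 2, 18, 17, 88, 56]
for _ in range(1000):
    out = shuffle_limited(original, 2)
    assert sorted(out) == sorted(original)
    assert all(abs(i - original.index(v)) <= 2 for i, v in enumerate(out))
```

Because each element is swapped at most once (forward into an unused slot, or backward into the current slot), the bound holds by construction; the brute-force loop is just reassurance.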
I created this function to do this, but I guess it needs more improvements:
/**
* @param array $array
* @param int $limit
* @return array
*/
function shuffleArray(array $array, int $limit): array
{
$arrayCount = count($array);
$limit = min($arrayCount, $limit);
for ($i = 0; $i < $limit; $i++) {
for ($j = 0; $j < $arrayCount;) {
$toIndex = min($arrayCount - 1, $j + rand(0, 1));
[$array[$j], $array[$toIndex]] = [$array[$toIndex], $array[$j]];
$j += (($toIndex === $j) ? 1 : 2);
}
}
return $array;
}
Test:
$array = [92, 12, 2, 18, 17, 88, 56];
$limit = 2;
$result = shuffleArray($array, $limit); // [12, 92, 17, 2, 18, 56, 88]
This seems to work well. Could you perhaps explain what happens here: [$array[$j], $array[$toIndex]] = [$array[$toIndex], $array[$j]] ?
I swapped the values using symmetric array destructuring, which was introduced in PHP 7.1; otherwise, you can use a temporary variable to do that.
Recently acquired a Macintosh Plus M0001A, and it won't boot
I recently got a Mac Plus model M0001A. It came with an Ultra Drive 20 external hard drive, a keyboard and a mouse.
Both the Mac and the external drive seem to work fine other than that it won't boot up: I hear the hard drive spin up, the screen comes on, and the speaker works.
When I turn on the Mac it beeps, about 50 percent of the time a checker board pattern comes up, and I get the sad Mac icon and error code 010020.
I don't have any floppy disks that work for this system, and I have no way to check the hard drive to see if it is working/has an operating system on it.
When I first tried the external hard drive I did have to cycle power to it a few times before it would spin to life.
I have not opened either of the devices yet: I don't have a screwdriver to open the Mac, and I just haven't taken the time to open the external hard drive because I suspect it's fine, just without an operating system on it. Of course the drive could also be dead, and the only clue I have either way is that it is audibly spinning.
I have done some digging and I couldn't find anything on the error code I got. It seems that I am the only person with this problem.
Here are some recordings of the system
It also makes no difference if the keyboard is plugged in or not.
Has anyone else seen anything like this? Any ideas what the problem is?
Sad Mac error code:
Partial checkerboard which transitions to a Sad Mac:
Back of the machine:
Have you tried it without attaching/powering the hard drive? You should at least get the "insert disk" icon. If you're not, then you have hardware issues not related to the hard drive itself.
Do I understand correctly that half the time you get a checkerboard pattern, and half the time you get a sad Mac? Do you mean large checker board squares, or single pixel checkerboard squares? The single pixel "checkerboard" (not usually described as checkerboard) is the normal startup background. But after that it should eventually show a floppy icon, a happy mac or a sad mac.
The sad mac and the large checkerboard squares indicate hardware problems. Sad Mac codes beginning with 01 indicate a ROM problem. The checkerboard pattern often happens with leaky capacitors causing shorts on the logic board, etc. (Such problems can simultaneously be responsible for the sad mac). These issues can usually be repaired if you're handy with a multimeter and soldering iron.
Unfortunately, this isn't as simple as the hard drive not working. There's no way to know if the hard drive works until you get the computer working. However, you will probably end up needing a modern SCSI replacement solution (SCSI2SD, ZuluSCSI, BlueSCSI, etc). Most SCSI hard drives of that age suffer from "stuckage", where the read/write head gets stuck to the 30-year-old melting rubber parking bumper inside the drive. They will spin up, but the read/write head can't dislodge itself.
I edited some grabs from the linked videos into the question; the poster seems to mean the large chequerboard that indicates hardware issues.
Interesting, if the issue is "stuckage" then it would seem that just the right 'whack' to the side of the drive (in plane with the platters) could perhaps unstick the head?
@GlenYates From the video, it sounds like the hard drive is spinning up (i.e. it's not stuck), though it could just be a noisy fan. I wouldn't do anything violent to the drive until/unless you can confirm that it's actually stuck. Also, if you do need to unstick it, you want to give it a sudden spin (on a vertical axis, ideally centered on the drive's spindle -- you'll need to open the case to find that). Do not just knock it sideways, that won't help.
@GordonDavisson, Stuck platters is not what scott.squires mentioned in his answer. It is acknowledged that the drive is spinning up, stuck heads is the condition he described and what I was commenting on.
Whacking the drive isn't much of a solution, imo. If it does work, you'll have to do it every time to turn the drive on, and cross your fingers each time. You can open the drive and replace the bumper with a DIY fix. Of course if you open it outside of a clean room, you risk dust mucking it up. But for an old drive with no important data stored now or in the future, there's not much to lose.
From your descriptions, I'd say the machinery suffers from worn electrolytic caps. Maybe seek help in a local hacker space?
The 'sad mac' indication is a power-on self-test (POST) error sign. If the same sequence repeats, it may be a failed memory chip, and could respond to something as simple as wiping down the SIMM30 modules' edge-connector pads.
A long (10 inch) Torx T15 screwdriver removes the screws that hold the back case, and a metal tool that fits into the 'crack' (a wide putty knife?) helps push the back off. Be careful: any bump to the picture-tube neck can break the (irreplaceable) vacuum tube.
Once open, avoid contact with the non-logic parts, detach the power connector from the logic board, and slide the board out. There are four sockets with SIMM30 memory modules; taking antistatic precautions is wise at this point, since you want to ground both yourself and the metal bracket of the logic board.
The sockets have (metal or plastic) latches on the edges; the memory module tilts up when you free those, then lifts out. Wipe the card edges until they look bright, maybe clean them with an alcohol swab, and that might fix the problem.
Is there a way to fire a mouseover event in amp-script (JavaScript)?
There are some examples of using JavaScript and Web Workers in the AMP documentation.
https://amp.dev/documentation/components/amp-script/?format=websites#script-hash
For example, I have a function that appends an H1 after a button click is fired:
JS:
const button = document.getElementsByClassName('hello-url')[0];
button.addEventListener('click', () => {
const h1 = document.createElement('h1');
h1.textContent = 'Hello World!';
document.body.appendChild(h1);
});
AMP Virtual DOM:
<amp-script script="hello-world" class="sample">
<button class="hello-url">Short button</button>
</amp-script>
When I try to use mouseover instead of click, errors occur:
[amp-script] Blocked 2 attempts to modify DOM element children, innerHTML, or the like. For variable-sized containers, a user action has to happen first.
[amp-script] amp-script[script="hello-world"].js was terminated due to illegal mutation.
Note that I want to avoid using jQuery.
You can use "mouseenter" in your EventListener: https://developer.mozilla.org/fr/docs/Web/API/Element/mouseover_event
Unfortunately, same error with:
button.addEventListener('mouseenter', () => { ... });
Indeed, I did some research, and it appears that you can encounter this error if your element does not have a fixed size: <amp-script layout="fixed" height="300" script="myscript"></amp-script>. Doc here
That works! The only thing is that width has to be specified as well. BTW layout="responsive" works as well.
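As a summary of the fix: giving the amp-script container a fixed size lifts the restriction on DOM mutations that are not triggered by a user gesture. The dimensions below are illustrative, not taken from the thread:

```html
<!-- A fixed-size container lets amp-script mutate the DOM on events
     like mouseenter, which are not counted as user gestures. -->
<amp-script layout="fixed" width="300" height="300" script="hello-world" class="sample">
  <button class="hello-url">Short button</button>
</amp-script>
```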
Grab the index numbers of a list which have been built by code
I wanted to compare two lists of vector elements, grab the equal elements of the lists, and compose a 3rd list; I've already accomplished that. Now, I want to find out the index numbers of the elements grabbed from the 1st list, e.g.:
vectlist1 = [(0.25,0.65,0.33), (0.43,0.23,0.55), (0.56,0.8, 0.90), (0.34, 0.45, 0.67)]
... vectlist3 = [(0.43,0.23,0.55), (0.56,0.8, 0.90)]
vectlis1tindexnumbers = (1,2)
How do I do that plz?
OBS: The elements are vectors, so for one vector to be equal to another, all three floats inside the two of them must be equal. I omitted the 2nd list in the example above, but it should contain the same elements that were taken to compose the 3rd list.
You want the corresponding index in the first list of the tuples in the second one, is that correct? What isn't working, what have you tried?
Something like this should work:
vectlis1tindexnumbers = [vectlist1.index(vector) for vector in vectlist3]
print(vectlis1tindexnumbers)
>>> [1, 2]
Thanks! Is there a way to do this without loops? Perhaps by grabbing the indices when we extract the values from vectlist1 for the first time?
The code I used for extracting came from this question: https://stackoverflow.com/questions/59106106/compare-vector-lists
I think it's best if you create another question with what you want. Is this answer correct in its original intention? @AAAYerus
yes it is! Thanks! I wonder if that is much more difficult to get the indices from the start instead of with loop.
I checked how the other answer solved it using sets, so you would have to code it with manual for loops in order to get everything in one iteration. So my suggestion is don't; my answer only adds one line of code, so mix both answers. But it's definitely possible. @AAAYerus
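Expanding on that comment, here is a sketch of getting both the matching vectors and their indices from the first list in a single pass; the variable names mirror the question, and the contents of the second list are assumed for illustration:

```python
vectlist1 = [(0.25, 0.65, 0.33), (0.43, 0.23, 0.55),
             (0.56, 0.8, 0.90), (0.34, 0.45, 0.67)]
vectlist2 = [(0.43, 0.23, 0.55), (0.56, 0.8, 0.90), (0.99, 0.99, 0.99)]

wanted = set(vectlist2)  # tuples are hashable, so each lookup is O(1)
pairs = [(i, v) for i, v in enumerate(vectlist1) if v in wanted]

indices = [i for i, _ in pairs]    # [1, 2]
vectlist3 = [v for _, v in pairs]  # [(0.43, 0.23, 0.55), (0.56, 0.8, 0.90)]
```

Note that comparing floats for exact equality only works when both lists hold the very same values; computed floats may need a tolerance instead.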
Best way to cover main water supply valve into house
What is the best way to cover the exterior PVC pipe where the shutoff valve is that leads into a house? I was thinking ahead to when it will be time to winterize.
Cover in what sense? Is this pipe outside and exposed, or inside and exposed? And you're not talking about permanently covering the main shutoff, are you? Because that would be very unwise.
Exterior pipe. I mainly wanted to cover pipe itself to prevent freezing.
This isn't an incoming supply line is it?
Yes, it is incoming line.
A photo would be helpful here. I have used Styrofoam on several occasions where I had to bring the main up outside the home. If it is going into an unheated space like a garage, be prepared to add some heat tape or let the water trickle when temps get below 27F.
I've added a picture.
The best way to cover it is to use a few feet of dirt: bury it below the frost line. I'm guessing you're looking for something that doesn't require major work, though. You talk about winterizing: do you mean you intend to shut the water off over the winter? What's this supplied from? What's the winter low temperature (or where is this)? What's the lowest temperature at which you want to have running water?
The water is supplying the house I am living in, supplied by the county. I live in central GA; the lowest it normally gets is in the 20s.
I would use a plastic valve box (with lid) of the appropriate size. Cut a slot in one end to slip over the horizontal run of PVC that goes into the brick wall. This will protect the pipe from string trimmer, mower, and other damage of that type. These boxes are usually an unobtrusive color and will cover the white pipe.
Inside the box I would put foam tubing insulation around the pipe and cram that into the hole in the brick wall surrounding the pipe. I would put flexible sheet insulation around the inside of the box above ground and also under the top, but maybe not along the side next to the brick wall, to allow heat to flow from the ground and from the brick wall.
EDIT
Plastic valve box
Thanks for info. Could you post a picture of a valve box like the one you are talking about?
@dwhorne see edit. Valve box 12" x 17" x 12".
Java: Parameter specified as non-null is null IllegalArgumentException
Hi, I have a Java Android application with Stripe integrated, and I use Stripe's confirmPayment method. I get the error java.lang.IllegalArgumentException: Parameter specified as non-null is null: method kotlin.jvm.internal.Intrinsics.checkParameterIsNotNull
Does this answer your question? java.lang.IllegalArgumentException : Parameter specified as non-null is null: method kotlin.jvm.internal.Intrinsics.checkParameterIsNotNull
Hi, that is the same question, but my project's code is Java, not Kotlin.
The error is exactly what the message says, you're passing null into the confirmPayment method and you should not be.
I think mikeb is right, but would also be interested to see your code here. Can you add that to your question for more context?
flutter gridview lagging UI
I am getting a list of data from an API, and the length of the list is 18; after every 8 items I show an ad. On every scroll to the end I call the API again and append another 18 items of data to the same list in a staggered GridView. The problem is that UI performance is very bad.
I expect the UI not to rebuild the already-rendered items after adding new data to the list in the GridView.
Meta Box Data added to header redirect
I got a lot of my code from this post: Create more Meta Boxes as needed
What I need is for the user to input a code that redirects them to a specific URL. I know I'm grabbing the data correctly, as I was able to output it as a list, but I can't figure out how to get it to function as the redirect. Here is the code:
<?php
add_action( 'add_meta_boxes', 'dynamic_add_custom_box' );
//Do something with the data entered
add_action( 'save_post', 'dynamic_save_postdata' );
//Adds a box to the main column on the Post and Page edit screens
function dynamic_add_custom_box() {
add_meta_box(
'dynamic_sectionid',
__( 'My Codes', 'myplugin_textdomain' ),
'dynamic_inner_custom_box',
'page');
}
//Prints the box content
function dynamic_inner_custom_box() {
global $post;
// Use nonce for verification
wp_nonce_field( plugin_basename( __FILE__ ), 'dynamicMeta_noncename' );
?>
<div id="meta_inner">
<?php
//get the saved meta as an array
$all_codes = get_post_meta( $post->ID, 'all_codes', true );
$c = 0;
if ( count( $all_codes ) > 0 ) {
foreach ( (array)$all_codes as $track ) {
if ( isset( $track['code'] ) || isset( $track['url'] ) ) {
printf( '<p>Promo Code <input type="text" name="all_codes[%1$s][code]" value="%2$s" /> -- URL : <input type="text" name="all_codes[%1$s][url]" value="%3$s" /><span class="remove">%4$s</span></p>', $c, $track['code'], $track['url'], __( 'Remove Code' ) ); // note: the field name must be all_codes to match the save handler
$c++;
}
}
}
?>
<span id="here"></span>
<span class="add"><?php _e('Add Code'); ?></span>
<script>
var $ =jQuery.noConflict();
$(document).ready(function() {
var count = <?php echo $c; ?>;
$(".add").click(function() {
count = count + 1;
$('#here').append('<p> Promo Code <input type="text" name="all_codes['+count+'][code]" value="" /> -- URL : <input type="text" name="all_codes['+count+'][url]" value="" /><span class="remove">Remove Code</span></p>' );
return false;
});
$(".remove").live('click', function() {
$(this).parent().remove();
});
});
</script>
</div><?php
}
//When the post is saved, saves our custom data */
function dynamic_save_postdata( $post_id ) {
// verify if this is an auto save routine.
// If it is our form has not been submitted, so we dont want to do anything
if ( defined( 'DOING_AUTOSAVE' ) && DOING_AUTOSAVE )
return;
// verify this came from the our screen and with proper authorization,
// because save_post can be triggered at other times
if ( !isset( $_POST['dynamicMeta_noncename'] ) )
return;
if ( !wp_verify_nonce( $_POST['dynamicMeta_noncename'], plugin_basename( __FILE__ ) ) )
return;
// OK, we're authenticated: we need to find and save the data
$all_codes = $_POST['all_codes'];
update_post_meta($post_id,'all_codes',$all_codes);
}
?>
And I'm trying to use this to make the redirect happen. But something isn't connecting since when you enter the code, nothing happens.
<form action="" method="post">
<p>enter your promo code:</p>
<input type="text" name="all_codes" id="all_codes" />
<input type="reset" id="all_codes" style="display:none"/>
<script>$('#track').click();</script>
<input type="submit" />
<?php
# Gets the array of CODE/URL pairs
$all_codes = get_post_meta( $post->ID, 'all_codes', true );
# Get the URL correspondent to the posted 'codes'
$redirect = $all_codes[ $_POST['url'] ];
# Do the redirect
header("Location: $redirect");
?>
</form>
Any ideas?
Do you have debugging enabled? Also note that you can't send headers after content has been sent to the browser.
My original thought was using the switch statement like: switch($_POST['codes']) { case "1234": header("Location: http://www.myfoursquare.net/archives"); break; ---which worked properly.
And this is the error when I turned on debugging: Undefined index: url. So obviously an issue.
The problem is the one already pointed out by @Milo: you can't send headers after the page has started rendering. You need to process the redirect when the form is submitted, not at the tail end of the HTML form itself.
What would happen if a pokemon died instead of fainting?
I have searched for it and would like to know what would happen if a pokemon were to die. Would the trainer go on and get another pokemon? Or maybe they would stop being a trainer forever?
You want to know what would happen to the trainer?
What about Cubone? They use their mother's skull as a mask :v
Yes I would like to know what the trainer would do if all of their pokemon died
I suck at finding specific canon sources, but here is my general understanding.
You are asking one of two things. What happens if pokemon (in general) die instead of fainting? OR What would happen if a pokemon (just one) died instead of fainting?
If you are asking the first, that's a bit opinion based, but you would likely see defensive gear for pokemon to protect them more, tighter regulations on battles to ensure fights don't go past certain limits, and more medical items to help critically injured pokemon and prevent them from dying.
If you're asking the second, the trainer's response depends on the trainer. Some continue to fight. (I would assume most. If their goal is to become a master, it's a loss they learn to deal with. Not that they become emotionless, but they learn to remember and move on and fight in their honor etc.) Some would possibly stop because they don't want to lose more people.
This can be seen as something similar to when people command a division in any kind of combat. You either continue to fight when you've lost a man, or you discover that kind of stress is too much for you.
There's always theories like this one that go over whether Red kills his rival's Raticate. This implies that pokemon CAN die and have died, with your rival being an example of someone who continues on anyway. And with a city like Lavender Town and the cemetery and the natural effect of aging (since there are baby pokemon), you can reasonably assume most pokemon can die of old age and, therefore, other causes of bodily deterioration/harm.
Hope that helps out some. It's a bit long because I wasn't entirely sure what you were asking. Feel free to comment and I'll try to explain more or find better sources :)
Next thoughts are based on manga, so I'm not sure if all of this applies to anime.
First of all, it does not depend that strongly on pokeballs. Pokeballs did not always exist, you know. And even after they appeared, there was a first pokemon captured by a pokeball, which means a trainer managed to capture it without any other captured pokemon.
You see, it may also depend on the attitude of a pokemon towards a particular human. If a pokemon likes someone, it may willingly allow itself to be captured, or at least obey the commands of a human it likes. The latter even applies to pokemon who belong to other trainers. Also, trainers may lend their pokemon to other people. In the manga, several main characters lent their pokemon to people without one to help them catch their first (Red lent his Pika to Yellow to capture a Rattata, Ruby lent his Ralts to Wally).
So, there are numerous solutions to solve situations, at least in manga.
In terms of the trainer's mental state: well, that heavily depends on the character's mentality, and is mostly the same as in real-world situations. Would you take in a new cat if your cat died?
Return statement in try block in Java. What will be returned by the method and why?
I was trying exception handling. I was not able to understand the code below.
Please explain how this works internally.
public int method()
{
try
{
return 1;
}
catch(Exception e)
{
return 2;
}
finally
{
return 3;
}
}
Please explain to me how this works in Java.
I'm voting to close this question as off-topic: read a tutorial and indicate very precisely what isn't clear about it. http://docs.oracle.com/javase/tutorial/essential/exceptions/
Jeroen: I think his question is that it should return 1, but in theory finally always runs.
Raj: in this case, the method will be ended by the return statement.
I guess this is not a duplicate question, as I wanted to know the internal workings of this. Please go through this code:
try
{
return 1;
}
catch(Exception e)
{
return 2;
}
finally
{
System.out.println("hello");
}
For the above code, why is the value 1 returned?
The method will always return 3, because even when a return statement is reached in the try block, control still passes to the finally block before the method exits. Since the finally block itself returns 3, that value replaces the pending return value from the try block, and the value from the try block is lost.
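For what it's worth, Python's try/finally follows the same rule, so both cases from this thread can be demonstrated quickly (this is an illustration of the semantics, not Java code):

```python
def with_return_in_finally():
    try:
        return 1
    finally:
        return 3  # overrides the pending return from the try block

def with_plain_finally():
    try:
        return 1
    finally:
        print("hello")  # runs, but does not change the return value

assert with_return_in_finally() == 3
assert with_plain_finally() == 1  # prints "hello" before returning
```

In Java the mechanics are the same: the try block's return value is saved, finally runs, and a return inside finally discards the saved value, which is why compilers and linters warn against returning from finally.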
Looking for an algebraic structure
I'm looking for the name of algebraic structures (in which the elements are partially ordered) with the following properties:
Top element defined, bottom optional;
Join defined for all elements, meet defined for some;
In other words: I need some of the elements to be atomic (they are minimal, or "bottom-most", elements of the semi-lattice), while others are not (necessarily) minimal in the sense that for such an element $y$ there is an $x$ s.t. $x \leq y$.
I would call this a bounded semi-lattice, although you might have to clarify which bound exists.
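To make the suggestion concrete (my formalization, not part of the original answer), a bounded join-semilattice with top can be written as:

```latex
% (L, \vee, \top): a join-semilattice with a top element.
\begin{align*}
x \vee x &= x                          && \text{(idempotency)}\\
x \vee y &= y \vee x                   && \text{(commutativity)}\\
x \vee (y \vee z) &= (x \vee y) \vee z && \text{(associativity)}\\
x \vee \top &= \top                    && \text{(top element)}
\end{align*}
% Induced partial order: x \le y \iff x \vee y = y.
% Meets (greatest lower bounds) may exist only for some pairs,
% and the minimal elements are the "atomic" ones from the question.
```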
Python str.contains function related to other columns
I have a table with title and genre columns, and I need to change the genre column if the title column contains the word blue. The thing is, I have many blue names in title, so I want to change only a specific genre type, e.g.:
title genre title genre
blue x blue x
red x -------> red x
blue y blue ychanged
red y red y
I used this, but it changed every genre where the title contains blue:
df['genre'][df.title.str.contains('blue')] = 'ychanged'
How can I make a specific str.contains?
You can use &
df.loc[df.title.str.contains('blue') & df.genre.isin(['y']), 'genre'] = 'ychanged'
You can directly index the desired rows using a logical expression like:
Code:
df.loc[df.title.str.contains('blue') & (df.genre == 'y'), 'genre'] = 'ychanged'
Test Code:
df = pd.read_fwf(StringIO(u"""
title genre
blue x
red x
blue y
red y """), header=1)
print(df)
df.loc[df.title.str.contains('blue') & (df.genre == 'y'), 'genre'] = 'ychanged'
print(df)
Results:
title genre
0 blue x
1 red x
2 blue y
3 red y
title genre
0 blue x
1 red x
2 blue ychanged
3 red y
Render form errors with the label rather than field name
I would like to list all form errors together using {{ form.errors }} in the template. This produces a list of form fields and nested lists of the errors for each field. However, the literal name of the field is used. The generated HTML with an error in a particular field might look like this:
<ul class="errorlist">
<li>
target_date_mdcy
<ul class="errorlist">
<li>This field is required.</li>
</ul>
</li>
</ul>
I would like use the errorlist feature, as it's nice and easy. However, I want to use the label ("Target Date", say) rather than the field name. Actually, I can't think of a case in which you would want the field name displaying for the user of a webpage. Is there way to use the rendered error list with the field label?
I don't see a simple way to do this.
The errors attribute of the form actually returns an ErrorDict, a class defined in django.forms.utils - it's a subclass of dict that knows to produce that ul rendering of itself as its unicode representation. But the keys are actually the field names, and that's important to maintain for other behavior. So it provides no easy access to the field labels.
You could define a custom template tag that accepts the form to produce the rendering you prefer, since in Python code it's easy to get the field label given the form and the field name. Or you could construct an error list by label in the view, add it to your context, and use that instead.
edit
Alternately again, you can iterate over the fields and check their individual errors, remembering to display non_field_errors as well. Something like:
<ul class="errorlist">
{% if form.non_field_errors %}
<li>{{ form.non_field_errors }}</li>
{% endif %}
{% for field in form %}
{% if field.errors %}
<li>
{{ field.label }}
<ul class="errorlist">
{% for error in field.errors %}
<li>{{ error }}</li>
{% endfor %}
</ul>
</li>
{% endif %}
{% endfor %}
</ul>
You might want to wrap non_field_errors in a list as well, depending.
Thanks, Peter. I didn't know that about ErrorDict. I went with custom filter, as I'll probably resuse it.
I know this has already been answered, but I ran across the same scenario and found there is a simple way to use the label:
{% if form.errors %}
<ul class="user-msg error">
{% for field in form %}
{% for error in field.errors %}
<li>
{% if field != '__all__' %}
<strong>{{ field.label }}:</strong>
{% endif %}
{{ error }}
</li>
{% endfor %}
{% endfor %}
</ul>
{% endif %}
great answer, the django docs are pretty helpful regarding this too.
I solved this in a custom form class which all my forms inherit instead of django.forms.Form. There I change the way form.errors works by returning a custom ErrorDict whose as_ul method takes labels into account. Thus you don't need to change your templates, but you need to have your forms inherit CustomBaseForm.
class CustomErrorDict(ErrorDict):
def __init__(self, form, iterable=None, **kwargs):
self.form = form
super(CustomErrorDict, self).__init__(iterable, **kwargs)
def as_ul(self):
if not self:
return u''
def humanify(field_name):
try:
return self.form.fields[field_name].label or field_name
except:
return field_name
# main logic is copied from the original ErrorDict:
return mark_safe(u'<ul class="errorlist">%s</ul>'
% ''.join([u'<li>%s%s</li>' % (humanify(k), force_unicode(v))
for k, v in self.items()]))
class CustomBaseForm(forms.Form):
@property
def errors(self):
return CustomErrorDict(self, super(forms.Form, self).errors)
... rest of CustomBaseForm ...
from django import forms
def my_clean(self):
self.my_errors = ''
for x in self.visible_fields():
if x.errors:
self.my_errors += "<p>%s: %s</p>" % (x.label, x.errors)
class SetPwdForm(forms.Form):
pwd= forms.CharField(label='password', required=True, min_length=6)
def clean(self):
...
my_clean(self)
Use myform.my_errors in views.
Just in case anyone is looking do something like this using the django.contrib.messages framework in, for example, a FormView:
def form_invalid(self, form):
for field, errors in form.errors.items():
for error in errors:
messages.error(
self.request,
form.fields[field].label + ": " + error
)
Note this is just a basic template, you'll have to take care of non-field errors et cetera in your code in the case where form.fields[field] doesn't make sense.
The following approach shows verbose_name instead of the field name.
It could be done in get_context_data() too, but personally, I prefer this way:
from django.core.exceptions import FieldDoesNotExist
class ShowVerboseNameInFormsMixin:
def add_error(self, field, error):
super(ShowVerboseNameInFormsMixin, self).add_error(field, error)
for field, message in self._errors.copy().items():
try:
verbose_name = self._meta.model._meta.get_field(field).verbose_name
del self._errors[field]
self._errors[verbose_name] = self.error_class()
self._errors[verbose_name].extend(message)
except FieldDoesNotExist:
pass
and then use it like this:
from django import forms
class FooForm(ShowVerboseNameInFormsMixin, forms.ModelForm):
class Meta:
model = Foo
fields = ['foo', 'bar', 'baz']
With a little extra code, it can display __all__ as 'All' or any other intended string.
Here's the filter I used to render an error list with the field label, following Peter's suggestion.
from django.utils.safestring import mark_safe
from django.template import Library, Context, loader
register = Library()
@register.filter
def form_error_list(form):
result = ""
if form.errors:
fields = form.fields
error_items = []
for error_field in form.errors:
label = fields[error_field].label
if not label:
label = ' '.join([word.capitalize() for word in error_field.split('_')])
errors = form.errors[error_field]
error_li = ''.join(['<li>{0}</li>'.format(error) for error in errors])
error_items.append({'label': label, 'error_li': error_li})
inner = ''.join(['<li>{0}<ul class="errorlist">{1}</ul></li>'.format(item['label'], item['error_li']) for item in error_items])
result = mark_safe('<ul class="errorlist">{0}</ul>'.format(inner))
return result
That will throw an exception if you have any non-field errors - for the full general case you'll want to allow for the possibility that a key in form.errors is not actually a field name. And now that I look into the code, the values of the ErrorDict are instances of ErrorList, a class which has a as_ul method that can construct your error_li method and the surrounding ul element, with escaping enabled - so you could just do error_li = errors.as_ul()
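Following up on that comment, the label lookup in the filter can be made safe against non-field errors by falling back when the key isn't a real field. Here is a framework-free sketch of just that core logic (the helper name and the fallback label are mine, not Django API):

```python
def label_for(field_name, fields, fallback="General"):
    """Return a display label for an error key.

    `fields` maps field names to objects with a `.label` attribute
    (like form.fields in Django); '__all__' and unknown keys get the
    fallback label instead of raising a KeyError.
    """
    field = fields.get(field_name)
    if field is not None and getattr(field, "label", None):
        return field.label
    if field_name == "__all__":
        return fallback
    # mimic the original filter: title-case the raw field name
    return " ".join(word.capitalize() for word in field_name.split("_"))

class FakeField:  # tiny stand-in for a Django form field
    def __init__(self, label=None):
        self.label = label

fields = {"target_date_mdcy": FakeField("Target Date"), "name": FakeField()}
label_for("target_date_mdcy", fields)  # "Target Date"
label_for("name", fields)              # "Name" (title-cased fallback)
label_for("__all__", fields)           # "General"
```

Inside the actual filter you would call this helper instead of indexing `fields[error_field].label` directly.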
How can I print the current time in the kernel?
I'm a beginner with Linux (sorry about my poor English).
I need to print the current time and do something through a system call in Linux.
I managed the other parts but failed to print the current time.
I wrote something like this:
#include<linux/kernel.h>
#include<linux/time.h>
...
asmlinkage long sys_printtime(void) {
...
struct timeval time;
struct tm tm1;
...
do_gettimeofday(&time);
local_time=(u32)(time.tv_sec -(sys_tz.tz_minuteswest * 60));
time_to_tm(local_time,(3600*9),&tm1);
printk(KERN_DEBUG "time @(%04d-%02d-%02d %02d:%02d:%02d)\n", tm1.tm_year+1900,tm1.tm_mon+1,tm1.tm_mday,tm1.tm_hour,tm1.tm_min,tm1.tm_sec);
...
return 0;
}
but it doesn't work.
The error said I cannot use do_gettimeofday, and I eventually found out that do_gettimeofday is no longer available because kernel 5 doesn't support it.
I searched on Google and Stack Overflow, but I still don't know how to print the current time in kernel 5.
Can anybody help me?
Yes, do_gettimeofday has been removed because of the y2038 problem. Instead, the kernel provides time interfaces which you can use as per your need. Check the documentation: https://www.kernel.org/doc/html/latest/core-api/timekeeping.html.
For example, there is ktime_get_ts64(struct timespec64 *ts), which will give you the time in seconds and nanoseconds.
struct timespec64 {
time64_t tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};
If you only want nanoseconds, you can use u64 ktime_get_ns(void). Please check the documentation above for what suits your purpose.
Also you can check timekeeping.h and ktime.h for further information.
If you want to find an example, just search for the function name in the kernel source, either using grep -rni <func name> or cscope. You can also search it online here
Thank you so much! I already solved it using __ktime_get_real_seconds() :) I'll try again with ktime_get_ts64 as you suggested. Thanks again x)
There is also a one-liner to get the RTC time, like this:
#include <linux/ktime.h>
#include <linux/rtc.h>
struct rtc_time t = rtc_ktime_to_tm(ktime_get_real());
printk(KERN_INFO "%ptRs", &t);
Document of ktime_get_real can be found here.
I didn't find any documentation for rtc_ktime_to_tm, even in the code, but it takes the time as a Unix epoch in nanoseconds as input and outputs a struct rtc_time. How to print struct rtc_time can be found here.
Any sample output for printing struct rtc_time ?
PHP simple html dom find after find
How do I execute a find loop within a find loop? I keep getting PHP Notice: Trying to get property of non-object on the line echo $li->find('span')->innertext;.
// get the html
$html = file_get_html('<url>');
// loop through the required element
foreach($html->find('body') as $body){
// loop through the selected element
$lis = $body->find('li');
foreach($lis as $li){
// loop through li to get the required span
echo $li->find('span')->innertext;
}
}
I am using Simple Html Dom http://simplehtmldom.sourceforge.net/manual.htm
which line generates your error?
I guess echo $li->find('span') returns an array of spans inside $li. Use another loop, or implode...
ok so the assumption is $li->find('span') doesn't return an object, so don't call innertext without first checking the object.
what do you expect as output?
I need to execute foreach inside foreach
You get the error because find('span') returns an array, or null if the second parameter is used
// get the html
$html = file_get_html('<url>');
// loop through the required element
foreach($html->find('body') as $body){
// loop through the selected element
$lis = $body->find('li');
foreach($lis as $li){
// loop through li to get the required span
$span = $li->find('span', 0);
if(null !== $span){
echo $span->innertext;
}
}
}
And I think you may want to look into traversing the DOM
You may try something like the following:
// get the html
$html = file_get_html('<url>');
// loop through the required element
foreach($html->find('li > span') as $span){
echo $span->innertext;
}
Some examples (from site) are given below to clarify your understanding:
// Find all <li> in <ul>
$es = $html->find('ul li');
// Find Nested <div> tags
$es = $html->find('div div div');
// Find all <td> in <table> which class=hello
$es = $html->find('table.hello td');
// Find all td tags with attribute align=center in table tags
$es = $html->find('table td[align=center]');
You should read the docs for a better understanding. Also, check each item before accessing any property, i.e.: if ($someItem) and so on.
Please note that iteration over an object - like it's done in the question - does not work (in this case).
I cannot do this, as I need to go through the loops step by step. Is there a way to iterate as mentioned in the question?
@maan81, for a separate loop you may try $lis = $html->find('li'); foreach($lis as $li).
Access Rails EKS Endpoint
I am working on deploying a Kubernetes cluster to AWS and I am having a certificate issue.
I am using Chrome.
And I get this error message when I try to access it:
I can see what I guess is a Kubernetes dashboard if I go through Firefox though.
k8 yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: project # name of the deployment
labels: # these labels apply to the deployment
app: project
component: project
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: project
template:
metadata:
labels: # these labels apply to our container
app: project
component: project
spec:
containers:
- name: project # name of our container
image: id.dkr.ecr.region.amazonaws.com/project_api:latest # the URI that we got from ECR
env:
- name: DB_URL
value: project.1234.region.rds.amazonaws.com # URL of our database endpoint
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-creds
key: username # use the encoded username from the K8s secret db-creds
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-creds
key: password # use the encoded password from the K8s secret db-creds
- name: DB_NAME
value: projectDB # our DB is named projectDB
ports:
- containerPort: 3000 # expose the running container on port 3000
name: project
loadbalance.yml
apiVersion: v1
kind: Service
metadata:
name: load-balancer
labels:
app: project
spec:
selector:
app: project
type: LoadBalancer
ports:
- nodePort: 31000
port: 3000
targetPort: 3000
In Firefox, I was expecting to be able to access it on port 3000, but URLENDPOINT:3000 times out.
Any idea how I can access my app?
Any idea what I need to do to get around the SSL cert issue?
That URL is the API for your EKS control plane, and not the LoadBalancer for the Service of load-balancer; if you run kubectl -n default get -o yaml svc load-balancer it will likely show you the actual DNS name for the Load Balancer that it created, if any (since that Service is a request for a LoadBalancer and your EKS cluster may not have the IAM permissions, or your Account may not have the quota to fulfill the request)
You can tell it's the EKS ELB and not your service's ELB not only from the blatant .eks in the domain name -- something it will for sure not do when it creates your load balancer -- but also from the CN of the SSL certificate it presented:
* subject: CN=kube-apiserver
* start date: May 7 20:52:30 2020 GMT
* expire date: May 7 20:56:46 2021 GMT
* issuer: CN=kubernetes
thanks. My load balancer endpoint also times out: a655902ea8af448338fdb49794ef9eef-441183339.us-east-1.elb.amazonaws.com. I intended to edit the post and add this but forgot. Since it shows up from your command, I believe IAM isn't the issue.
How to execute a slow method asynchronously on a single thread?
Suppose I have the following basic code in a WindowsFormsApplication.
private void ButtonClick(object sender, System.EventArgs e)
{
Stopwatch SW = new Stopwatch();
SW.Start();
WriteToGui("Hello World");
textBox1.Text += String.Format("Updated in {0} ms.{1}",
SW.Elapsed.TotalMilliseconds,
System.Environment.NewLine);
}
private void WriteToGui(string newMessage)
{
textBox1.Text += String.Format("{0}{1}", newMessage, System.Environment.NewLine);
}
I would get a textbox that looks like this:
Hello World
Updated in 0.5000 ms
I would like the output to actually be:
Updated in 0.0005 ms
Hello World
Edit: Took out all previous extra information.
The textbox control itself does have a message pump that can be invoked with BeginInvoke.
How can I replace the text update portion of the provided call with a call that would add the update to the textbox message pump and only handle it later?
Edit 3:
There is too much focus on the UI itself and not the task at hand so I will try explaining the goal.
I have an external third-party database logging and processing some specialized messaging.
I noticed that the basic send-message command takes about 0.5 ms; I believe it waits for a reply from the server confirming that it received the message. The underlying messages happen in bursts, so I can have thousands of them or more to log very quickly. As a result the code can become extremely backed up.
The messages look like this by time:
[ ][ ][ ][ ][ ] and so on and the time keeps growing with more messages.
Each of the brackets represents the time to return from each message call.
I want it to be:
[ ]
[ ]
[ ] and so on so I am done with them quickly and waiting for replies.
The software and database can (supposedly) handle the much higher message rate that I am trying to achieve. Either way I will do all I can as fast as I can and wait.
The software requires that any updates be sent from the thread that created the connection, or that BeginInvoke be used to properly send the message. That is why I gave the UI as an example.
I do not need to wait for any responses or errors from the database as that is handled separately, I just need to send the messages as fast as possible and wait for replies.
I can see two ways of doing that: create a new thread for every message, which will leave me with thousands of threads, or use a thread pool with a limited number of threads, which would again be slow, as it will eventually sit and wait before sending more messages.
But I don't see any need to create an endless number of threads. I should be able to use BeginInvoke to add the message to the message pump, even though I don't need to. The stupid part is that I don't know how to do it when I'm already on the same thread. I just need the syntax for a forced BeginInvoke for any method.
Then I am free to do what I like and am only limited by the speed the message pump is handled.
Update:
Looking at the post here: How to update the GUI from another thread in C#?
Changing the line in WriteToGUI to:
textBox1.BeginInvoke(new Action(() => textBox1.Text += String.Format("{0}{1}", newMessage, System.Environment.NewLine)));
Actual output before:
Hello World
Updated in .2783 ms.
and after:
Updated in 0.0494 ms.
Hello World
this gives the desired result of returning immediately and printing "Hello World" after the time.
However, does the Action in the example start a new thread, or is this the best method to queue a new operation?
Your entire question is pretty badly worded. You need to extract the long-running operation (whatever it is), put it onto its own thread and then call BeginInvoke() to call the UI to update.
As in the question and in other cases, the long running operation is a single line, not an entire method. That is why I gave updating the UI as an example. Thank you.
Half a millisecond is a really, really short period of time, on a human timescale. It's not like it's taking half a second or anything like that.
You can't update a textbox's text from another thread. You can only do that from the UI thread.
If you had a long running non-UI operation (currently you have very fast UI operation) then you can sensibly consider looking into alternatives.
On a human time scale, yes, but for a computer it is an eternity. I thought a textbox or control can be updated from outside the thread using the Control.BeginInvoke approach. Yes, it still executes the code on the UI thread, but it does so some time later, which is what I am trying to do.
@MichaelElkin You shouldn't be doing computational work in the UI thread to begin with. If you actually have long running non-UI work to do then it shouldn't be done in the UI thread, and so shouldn't be waiting on this; it should be done in another thread.
Thank you, but I want to clarify the UI is just an example. I am looking for a generic way to run single slow command (that must be run on the same thread) in a asynchronous way, meaning I don't wait for the result and can continue.
@MichaelElkin It's impossible in the general case. A lot of work needs to be put in place for a thread to be able to schedule operations to be executed on that thread; it needs a message pump, or some analogous concept.
Yes exactly, but how can I post a message to the message pump of a GUI control when I can update it directly? Every example I have seen involves different threads and I just cannot get it working.
@MichaelElkin What possible reason would you have for doing so here? If the UI operation is preventing some non-UI code from being run then that other code should be run in another thread. What possible reason do you have for not executing the UI operation immediately?
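For the burst pattern described in the question (enqueue and return immediately, then let one long-lived thread do the slow sends), the usual shape is a single consumer draining a queue. Below is a minimal sketch in Python; the C# analogue would be a BlockingCollection<T> (or the connection thread's message pump via BeginInvoke) consumed by one dedicated thread. The names here are illustrative, not from the question's third-party library.

```python
import queue
import threading

def start_sender(send_one):
    """Start a single worker thread that drains a queue of messages.

    Callers enqueue and return immediately; only the worker waits on
    the slow per-message send (send_one). Enqueue None to stop.
    """
    q = queue.Queue()

    def worker():
        while True:
            msg = q.get()
            if msg is None:
                break
            send_one(msg)  # the slow, must-run-on-one-thread call
            q.task_done()

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return q, t

# Demo: the "slow" send just records what it saw.
sent = []
q, t = start_sender(sent.append)
for i in range(5):
    q.put(f"msg-{i}")  # returns immediately, like posting via BeginInvoke
q.put(None)
t.join()
print(sent)
```

Because all sends run on one worker, ordering is preserved and the caller is only limited by how fast it can enqueue.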
How to load cookies that were collected by Selenium into a non-automated session?
I collect cookies with Selenium and store them in a file, and then I try to load them in a non-automated browser console by executing: document.cookie="key=value;...";.... Is there any other way?
Selenium has its own methods for this (essentially the same as document.cookie...), but maybe you should explain a bit more about what you are trying to do here... are you trying to transfer cookies from one profile to another?
@pcalkins I want to collect cookies with Selenium from an automated session and then load them in a non-automated session
You can set Selenium to use a certain profile. Then there's no need to store them as they will already be available to that profile after Selenium methods are run.
@pcalkins as far as I know, an automated session does not save new cookies; you can only use old ones, which were obtained by the user in a non-automated session
hmm... looking into this further, you might be right. At least it used to be true for geckodriver that it would create a temporary copy of that profile and then remove it when the driver quit. I couldn't find out if Chromedriver did the same thing. I guess an option would be to copy the copied directory (overwriting the original profile) before quitting the driver? (Though I think it'll prevent copying some files because they will still be in use by the browser...)
Just a note that session-based cookies couldn't be re-used as they are tied to a session. The "remember me" types could and you should be able to just export them from the browser to import into another profile. (Each browser has their own methods for that... so export manually before the driver quits. Then import manually on your manual session.)
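One concrete route, sketched below in Python: dump the cookie dicts that Selenium's driver.get_cookies() returns (that list-of-dicts shape is the only Selenium detail assumed here) to a JSON file, and render them as document.cookie assignments to paste into the manual session's console. As noted above, session-bound cookies won't survive this transfer.

```python
import json

def save_cookies(cookies, path):
    """Persist the list of cookie dicts (as returned by Selenium's
    driver.get_cookies()) to a JSON file for later reuse."""
    with open(path, "w") as f:
        json.dump(cookies, f)

def to_document_cookie(cookies):
    """Render each cookie as a document.cookie assignment you can paste
    into the non-automated browser's console."""
    lines = []
    for c in cookies:
        parts = [f"{c['name']}={c['value']}"]
        if c.get("path"):
            parts.append(f"path={c['path']}")
        if c.get("domain"):
            parts.append(f"domain={c['domain']}")
        lines.append('document.cookie="' + "; ".join(parts) + '";')
    return "\n".join(lines)

cookies = [{"name": "token", "value": "abc123",
            "path": "/", "domain": "example.com"}]
print(to_document_cookie(cookies))
```

Keep in mind that HttpOnly cookies cannot be set from JavaScript via document.cookie at all; those would have to be imported through the browser's own cookie tooling instead.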
Loop through an array to find the index of a user input, then replace the value at the same index of another array
I am creating a Hangman game (with a twist) in JS/jQuery. I have figured out how to identify the index of the array of the word being guessed based on the user input (when they guess a letter). However, I cannot figure out how to take that index and replace the blank (underscoreArray) with the value of the user input at the same index.
I have tried to do this with a conditional (at the end of the JS code), but I'm not sure how to make it work.
Upon a correct guess, this conditional should change the html from (for example, if the word is "drag" and the user guesses "d") " _ _ _ _ " to "d _ _ _".
Here is the fiddle and here is the JS:
$(document).ready(function() {
var words = ["shade", "she mail", "queen"];
var usedLetters = [];
var wrongLetters = [];
$('.play').click(function() {
var word = words[Math.floor(Math.random() * words.length)];
var wordLength = word.length;
var underscores = "";
for (i = 0; i < wordLength; i++) {
underscores = underscores + "_ ";
}
$('.btn-default').on('click', function() {
var guess = $(this).text();
if (jQuery.inArray(guess, usedLetters) === -1) {
usedLetters.push(guess);
$('.used').html('<p class="used">Letters used:</p><span>' + usedLetters + '</span>');
} else {
alert("You already guessed \"" + guess + "\"!");
}
/*Find out if guess = an index of word, and if it does, replace the underscore at that index with the guess*/
var find = word.indexOf(guess);
var wordArray = [word];
//loop through wordArray and where wordArray === find replace same index of underscores with guess.
for (i = 0; i < wordArray.length; i++) {
var underscoresArray = [underscores];
var index = wordArray.indexOf(guess);
if (index !== -1) {
underscoresArray[index] = guess;
$('#words').after(underscoresArray).toString();
};
}
});
});
});
Please scale this down to a minimal representation of the problem and remove irrelevant code. It's really not clear what you are trying to do
The problem is that you seem to think that the [] syntax converts a string into a list of characters. When you're doing [word] you're creating a list of 1 element, namely 'word'. What you need to do is create a list of characters of your word, which you can do by using the split() method: var characters = word.split('');
You shouldn't have deleted that code, part of your problem lies within the part you deleted (@charlietfl just went tl;dr on you). Just remove comments, console.logs, script that renders html on the screen (i.e. "gentlemen start your engine"), or put the old code back up there and I'll edit the irrelevant part out for you.
That's what I thought! I'll put it back up!
There are a few problems:
As @Glubus pointed out, [word] doesn't make a character array. Use word.split('').
You set up your underscores string to have spaces in it, making it twice as long as word, so the index needs to be multiplied by two to compensate
You aren't catching every instance of a guessed letter, because indexOf only returns the first one
This should be closer:
var wordArray = word.split('');
var underscoresArray = underscores.split('');
for (var i = 0; i < wordArray.length; i++) {
// Catch every letter, eg. the "p"s in "popcorn"
if (wordArray[i] == guess) {
// Underscored indices are twice the word indices
// "popcorn" -> "p _ p _ _ _ _"
underscoresArray[i * 2] = guess;
};
}
// Only need to do this once after all changes are made
underscores = underscoresArray.join(''); // Collapse back to string
$('#words').after(underscores);
console.log(underscores);
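The loop above can also be wrapped in a small pure function (the name revealGuess is illustrative), which makes the index-doubling easy to test without the DOM:

```javascript
// Reveal every occurrence of `guess` in the spaced underscore string.
// word: "popcorn", underscores: "_ _ _ _ _ _ _ " (two chars per letter)
function revealGuess(word, underscores, guess) {
  var wordArray = word.split('');
  var underscoresArray = underscores.split('');
  for (var i = 0; i < wordArray.length; i++) {
    if (wordArray[i] === guess) {
      underscoresArray[i * 2] = guess; // underscored indices are doubled
    }
  }
  return underscoresArray.join('');
}

console.log(revealGuess('popcorn', '_ _ _ _ _ _ _ ', 'p'));
// "p _ p _ _ _ _ "
```

The click handler can then just do underscores = revealGuess(word, underscores, guess) and render the result.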
Thanks @Kristján. This is really helpful.
Agreed with @Glubus and @Kristjan,
His solution is good, except you should also use join to bring the array back into a string before adding it into the #words element.
Edit: Kristjan updated his solution with join.
For reference, see:
Split
Join
icomoon font on iOS 8 does not work anymore
I am using icomoon font in my app, it works very well.
This way we removed all our PNG icons and saved disk space, but on iOS 8 beta 3 none of my icons appear.
It seems there is a problem in the font generated by icomoon, since the same icons work when generated with http://fontello.com.
Have you experienced this problem?
Thanks for your help.
Thierry
Azure SignalR Blazor app not receiving messages
I'm looking at incorporating Azure SignalR functionality into my .NET Core Blazor web application. To this end I've been following this tutorial - Azure SignalR Serverless. This is working fine - I have a project running the Azure Functions app and can start up two browsers and have a chat session. What I'm trying to do is add the ability to receive these message notifications, from the Azure SignalR hub that has been configured, in my Blazor app. I've added the following code in Index.razor.cs that mimics the JavaScript code in the example client:
public class IndexComponent : ComponentBase
{
private HubConnection _connection;
public string Message;
protected override Task OnInitializedAsync()
{
_connection = new HubConnectionBuilder()
.WithUrl("http://localhost:7071/api")
.Build();
_connection.On<string, string>("ReceiveMessage", (user, message) =>
{
Message = $"Got message {message} from user {user}";
this.StateHasChanged();
});
_connection.StartAsync();
return base.OnInitializedAsync();
}
}
The example JavaScript code, by the way, is:
const connection = new signalR.HubConnectionBuilder()
.withUrl(`${apiBaseUrl}/api`)
.configureLogging(signalR.LogLevel.Information)
.build();
connection.on('newMessage', newMessage);
connection.onclose(() => console.log('disconnected'));
console.log('connecting...');
connection.start()
.then(() => data.ready = true)
.catch(console.error);
So the problem is that my Blazor app never receives any message notifications sent from the JavaScript chat clients (so the _connection.On handler is never hit). What am I missing in my Blazor code?
Blazor Server or WebAssembly?
@HenkHolterman Blazor Server
I thought Blazor Server was already connected and the code was executed on the server?
@MichaelPuckettII I want to connect to a Azure SignalR service, which is what the clients in the example are doing.
@auburg I think I see. So instead of connecting backend and using the built in SignalR you want to add a new SignalR connection so the client side can both be connected to the backend and to another SignalR service. Is that right?
@MichaelPuckettII Yes - that's right. The example i've given is a half way house - ultimately i want my blazor client(s) to receive updates from my back end when inserts are made to a azure hosted sql server via a webapi. For now i just want to hook up to this simpler scenario where the blazor app receives the same messages from a chat azure signalr service.
@auburg Just a thought, since you're using SignalR to talk to the server already, would it be easier to add the connection on the server as well and then relay that information to the client in the same format as BlazorServer is doing currently?
@MichaelPuckettII What i'm looking for is an example of a Blazor Web UI app receiving events from a serverless Azure SignalR service via function bindings - not dissimiliar to this https://stackoverflow.com/questions/59995067/how-can-i-create-an-output-binding-to-signalr-service-from-azure-functions My binding would be different because i want it to be triggered when there are changes to a SQL database. So assuming such a binding was in place, what would my Blazor code (either client side or server side) be?
@auburg So to me, and this may not be the right answer just preference, I would place all of this logic on the Server and relay to the client the same as I do in a typical Blazor Server app. I believe this would be easier to maintain and scale since you're not running essentially two logical processes for the same view(s). So IMO, place it all on the server but that doesn't mean using the client is wrong.
OK, so this is what I needed to do to get it to work in my Blazor app:
_connection.On<object>("newMessage", update =>
{
Console.WriteLine(update);
//Message = update;
});
I needed to subscribe to the 'newMessage' target (since that's what the JS is sending on), and also the type being posted isn't a string but a JObject, which I need to deserialize to the correct type.
I've spent two hours trying to understand why my Blazor app wasn't receiving any notifications. Just in case anyone else has the same problem: I found that using Newtonsoft instead of the default System.Text.Json solved the serialization problem (AddNewtonsoftJsonProtocol).
spent a few hours on this today... thanks!!
How to encrypt data so only clients can read, but not the server
I am building a system where the clients should be able to communicate with each other.
However I want all the data to be hidden from the server where the information is stored in a database.
How can the users invite more users and share the encryption key without that the server also get the key?
First user creates a key (locally on the device) to encrypt the messages (symmetric encryption).
First user invites another user. How can the key be distributed from the first user to the second user without that the server also get this information? Same question goes for user three etc.
The idea is that the second user, after accepting invite, can read all the data that the first user have created and that now user two and three can create new data that user one can read.
I don't want the users having to enter passwords or such things.
The clients cannot communicate directly with each other, it has to go through the server.
I suppose that when the key has been distributed it's quite easy to just store that on the clients and then the key can be used to both decrypt and encrypt the messages. If a client loses the key I suppose that that user needs a new invite to be able to read the data again.
For this system it is not required that the key needs to be changed if someone leaves the group.
What do you need more than the Signal Protocol?
That, and why do you want to use symmetric encryption?
Note that, for full security, either the users must be able to communicate outside of the service, or there must be some trusted entity (typically a server or certificate authority) authenticating the users (or both). If the server is malicious, it can substitute one user's public key / public [EC]DH parameters with its own, and the only way the victim will know they're being man-in-the-middle attacked is if they have some alternative way to verify the other party's public key is the same as the one they see.
Matrix.org have an implementation of the double ratchet algorithm used by signal for end to end encryption and they have an implementation for groups as well. It is available as a free library (Apache license) on their site with bindings for JavaScript, Python, Java (Android), and Objective-C as well as some others provided by third parties.
https://gitlab.matrix.org/matrix-org/olm
Signal protocol >= The protocol combines the Double Ratchet algorithm, prekeys, and a triple Elliptic-curve Diffie–Hellman (3-DH) handshake,[5] and uses Curve25519, AES-256, and HMAC-SHA256 as primitives.
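To make the key-agreement idea concrete, here is a toy Diffie-Hellman sketch in Python. The numbers are deliberately tiny and this is not production cryptography (use a vetted implementation such as the Signal protocol libraries mentioned above); it only illustrates that the server can relay the public values without ever being able to compute the shared key:

```python
# Toy Diffie-Hellman: the server relays pub_a and pub_b but cannot
# derive the shared secret without the private exponents a and b.
# Parameters are deliberately tiny; real systems use Curve25519 etc.
p = 0xFFFFFFFB  # a small prime (2**32 - 5), illustrative only
g = 5

a = 123456789          # user 1's private value (never leaves the device)
b = 987654321          # user 2's private value (never leaves the device)

pub_a = pow(g, a, p)   # sent to user 2 via the server
pub_b = pow(g, b, p)   # sent to user 1 via the server

shared_1 = pow(pub_b, a, p)  # computed locally by user 1
shared_2 = pow(pub_a, b, p)  # computed locally by user 2

assert shared_1 == shared_2  # both sides derive the same key material
print(shared_1)
```

The derived value (or a group key wrapped under it) can then encrypt the stored messages; as the comments note, without out-of-band verification of the public values, a malicious server could still mount a man-in-the-middle attack.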
Terraform CDKTF Unable to Find remote state
It has not been long since I started working with C# and Terraform. Some things are not entirely clear yet, and it shows: Terraform refuses to plan my generated cloud environment stacks.
Example Storage account stack:
internal class StorageAccounts : TerraformStack
{
internal readonly StorageAccount _storageAccount;
internal readonly StorageShare _storageShare;
/// <summary>
/// The StorageAccounts class contains all the definitions of the Azure resources which are directly related to StorageAccounts
/// </summary>
/// <param name="scope">an object of class Construct</param>
/// <param name="id">a string identifier</param>
/// <param name="nameGenerator">an object of class NameGenerator</param>
/// <param name="resourceGroup">an object of class ResourceGroups</param>
internal StorageAccounts(Construct scope, string id, NameGenerator nameGenerator, ResourceGroups resourceGroup ) : base(scope, id)
{
string[] _ips = new string[] { "<IP_ADDRESS>", "<IP_ADDRESS>" };
AzurermProvider azurermProvider = new(this, "AzureRm", new AzurermProviderConfig
{
Features = new AzurermProviderFeatures(),
});
StorageAccount storageAccount = new StorageAccount(this, "azurerm_storage_account", new StorageAccountConfig
{
Name = nameGenerator.GetResNames()["Storage"],
ResourceGroupName = resourceGroup._resourceGroup.Name,
Location = nameGenerator._region[1],
AccountTier = "Standard",
AccountKind = "StorageV2",
AccountReplicationType = "RAGRS",
AllowBlobPublicAccess = true,
NetworkRules = new StorageAccountNetworkRules
{
DefaultAction = "Deny",
VirtualNetworkSubnetIds = new string[] { "test" },
IpRules = _ips
}
});
StorageShare storageShare = new StorageShare(this, "azurerm_storage_share", new StorageShareConfig
{
Name = "authtickets",
StorageAccountName = storageAccount.Name,
Quota = 5
});
_storageAccount = storageAccount;
_storageShare = storageShare;
}
}
As per my definition of the StorageAccounts class, its constructor takes two more objects than usual, one of which is of importance, and that is the ResourceGroups object.
The ResourceGroups class is, as it sounds, a class dedicated to resource groups, and I pass it with the constructor in order to get the necessary name.
Once I synthesise the stacks through the following (which I thought was the way to go, since passing the objects with their own attributes should produce the right references between values):
internal static void Synthesise(NameGenerator nameGen)
{
HashiCorp.Cdktf.App app = new();
ResourceGroups resourceGroup = new(app, "ResourceGroup", nameGen);
StorageAccounts storageAccounts = new(app, "Storage", nameGen, resourceGroup);
Networking networking = new(app, "Network", nameGen, resourceGroup, storageAccounts);
Kubernetes kubernetes = new(app, "Kubernetes", nameGen, networking, resourceGroup);
app.Synth();
}
I get the relevant JSON files with the correct output and expected name values, etc.
If I initialise and plan the ResourceGroups using terraform.exe, it will complete both processes successfully.
However, once I try to initialise and plan the StorageAccounts stack, I get the following error:
data.terraform_remote_state.Storage_crossstackreferenceinputResourceGroup_C0FA14C2: Reading...
data.terraform_remote_state.Storage_crossstackreferenceinputResourceGroup_C0FA14C2: Still reading... [10s elapsed]

Error: Unable to find remote state

  with data.terraform_remote_state.Storage_crossstackreferenceinputResourceGroup_C0FA14C2,
  on cdk.tf.json line 21, in data.terraform_remote_state.Storage_crossstackreferenceinputResourceGroup_C0FA14C2:
  21: "workspace": "${terraform.workspace}"

No stored state was found for the given workspace in the given backend.
I have been searching the Terraform documentation for possible fixes, but the examples vary a lot and are given for AWS, HCL, and TypeScript, which I find hard to translate back into my own context.
I've been looking at remote backends, but they are hard to grasp, as there is minimal information available for C#; the same goes for remote state.
Source thread in HashiCorp's discuss:
https://discuss.hashicorp.com/t/cdktf-c-terraform-plan-gives-unable-to-find-remote-state/41352/3
Any help is appreciated!
It looks like the problem stems from a cross-stack reference, that is, a value originating in one stack being referenced from another. It seems the remote state data source cannot find the value in the stack it originates from. A common reason for this is that you have not applied the source stack yet before trying to plan one that depends on it, or that the execution environment has no access to the backend.
Regarding featuretools, the rank results are wrong
Using Featuretools, I want to convert the values of a certain feature to ranks.
The following is the exact question. If anyone can help me, please answer.
First, the following code uses the pandas rank function and displays the result. I believe this result is correct.
import pandas as pd
df = pd.DataFrame({'col1': [50, 80, 100, 80,90,100,150],
'col2': [0.3, 0.05, 0.1, 0.1,0.4,0.7,0.9]})
print(df.rank(method="dense",ascending=True))
However, when I create a custom primitive and run the following code, the results are different. Why does this happen? Please correct my code if it is wrong. Thank you very much for your help.
from featuretools.primitives import TransformPrimitive
from featuretools.variable_types import Numeric
import pandas as pd
class Rank(TransformPrimitive):
name = 'rank'
input_types = [Numeric]
return_type = Numeric
def get_function(self):
def rank(column):
return column.rank(method="dense",ascending=True)
return rank
df = pd.DataFrame({'col1': [50, 80, 100, 80, 90, 100, 150],
                   'col2': [0.3, 0.05, 0.1, 0.1, 0.4, 0.7, 0.9]})

import featuretools as ft

es = ft.EntitySet(id="test_es",
                  entities=None,
                  relationships=None)
es.entity_from_dataframe(entity_id="data",
                         dataframe=df,
                         index="index",
                         variable_types=None,
                         make_index=True,
                         time_index=None,
                         secondary_time_index=None,
                         already_sorted=False)
feature_matrix, feature_defs = ft.dfs(entities=None,
                                      relationships=None,
                                      entityset=es,
                                      target_entity="data",
                                      cutoff_time=None,
                                      instance_ids=None,
                                      agg_primitives=None,
                                      trans_primitives=[Rank],
                                      groupby_trans_primitives=None,
                                      allowed_paths=None,
                                      max_depth=2,
                                      ignore_entities=None,
                                      ignore_variables=None,
                                      primitive_options=None,
                                      seed_features=None,
                                      drop_contains=None,
                                      drop_exact=None,
                                      where_primitives=None,
                                      max_features=-1,
                                      cutoff_time_in_index=False,
                                      save_progress=None,
                                      features_only=False,
                                      training_window=None,
                                      approximate=None,
                                      chunk_size=None,
                                      n_jobs=-1,
                                      dask_kwargs=None,
                                      verbose=False,
                                      return_variable_types=None,
                                      progress_callback=None,
                                      include_cutoff_time=False)
feature_matrix
Here is the result.
However, when I tried the following code, I was able to get the correct data.
Why are the answers different?
import pandas as pd

df = pd.DataFrame({'col1': [50, 80, 100, 80, 90, 100, 150],
                   'col2': [0.3, 0.05, 0.1, 0.1, 0.4, 0.7, 0.9]})
print(df.rank(method="dense", ascending=True))
pd.set_option('display.max_columns', 2000)

import featuretools as ft

es = ft.EntitySet()
es.entity_from_dataframe(entity_id='data',
                         dataframe=df,
                         index='index')
fm, fd = ft.dfs(entityset=es,
                target_entity='data',
                trans_primitives=[Rank])
fm
NEW ANSWER:
Based on your updated code, the problem is arising because you are setting n_jobs=-1. When you do this, behind the scenes, Featuretools distributes the calculation of the feature matrix to multiple workers. In doing so, it breaks up the dataframe used for calculating the transform feature values and sends a piece to each worker.
This creates a problem with the Rank primitive you have defined as this primitive requires all of the data to be present to get a correct answer. For situations like this you need to set uses_full_entity=True when defining the primitive to force featuretools to include all of the data when the primitive function is called to compute the feature values.
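The effect of splitting the data can be reproduced with plain pandas (this is a sketch of what per-worker chunking does conceptually, not Featuretools' actual partitioning logic): ranking each chunk separately gives different values than ranking the full column.

```python
import pandas as pd

col = pd.Series([50, 80, 100, 80, 90, 100, 150])

# Dense rank over the full column -- the correct result.
full_rank = col.rank(method="dense", ascending=True)

# Simulate two workers, each ranking only its own chunk of the data.
# The 4/3 split here is arbitrary and just for illustration.
chunked_rank = pd.concat([
    col.iloc[:4].rank(method="dense", ascending=True),
    col.iloc[4:].rank(method="dense", ascending=True),
])

print(full_rank.tolist())     # [1.0, 2.0, 4.0, 2.0, 3.0, 4.0, 5.0]
print(chunked_rank.tolist())  # [1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0]
```

This is why uses_full_entity=True matters here: rank is a global computation, so evaluating it per chunk changes the answer.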
If you update the Rank primitive definition as follows, you will get the correct answer:
class Rank(TransformPrimitive):
    name = 'rank'
    input_types = [Numeric]
    return_type = Numeric
    uses_full_entity = True

    def get_function(self):
        def rank(column):
            return column.rank(method="dense", ascending=True)
        return rank
OLD ANSWER:
In the custom primitive function you define, the parameters you are passing to rank are different from the parameters you are using when you call rank directly on the DataFrame.
When calling directly on the DataFrame you are using the following parameters:
.rank(method="min", ascending=False, numeric_only=True)
In the custom primitive function you are using different values:
.rank(method="dense", ascending=True)
If you update the primitive function to use the same parameters, the results you get from Featuretools should match what you get when calling rank directly on the DataFrame.
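To see how far apart those two parameter sets are, here is a quick comparison on the question's col1 values (plain pandas, no Featuretools involved):

```python
import pandas as pd

col = pd.Series([50, 80, 100, 80, 90, 100, 150])

# Parameters used directly on the DataFrame in the original question.
min_desc = col.rank(method="min", ascending=False)

# Parameters used inside the custom primitive.
dense_asc = col.rank(method="dense", ascending=True)

print(min_desc.tolist())   # [7.0, 5.0, 2.0, 5.0, 4.0, 2.0, 1.0]
print(dense_asc.tolist())  # [1.0, 2.0, 4.0, 2.0, 3.0, 4.0, 5.0]
```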
I'm sorry. I made a careless mistake and asked the wrong question. I have updated the code above; please check it.
I don't see any difference in your code. What did you change?
I'm sorry, my question was deleted automatically. I updated above my question. Please check it out again.
I have updated the answer to reflect the updated question.
I was able to confirm it. Thank you very much for your kindness and support.
Call Equation Editor plugin when custom button is clicked
I have a very basic text editor that uses CKEditor. The default CKEditor toolbar is hidden because for editing I need only image upload (which is completely custom) and Equation Editor plugin for formulas.
My goal is to use a custom button for the Equation Editor and call the plugin on click. So, somewhere in the UI of the editor, I would have this:
<a href="#" class="custom-formula-button">Insert formula</a>
Clicking on the button should open the Equation Editor.
How do I achieve this?
Note: I have multiple CKEditor instances on the page.
Please see "Should questions include “tags” in their titles?", where the consensus is "no, they should not"!
You should use CKEDITOR.editor.execCommand() like this:
CKEDITOR.instances.myEditorInstance.execCommand( 'mathjax' )
And this is where you'll find how to do this.
You can list available commands of the CKEditor instance by browsing CKEDITOR.instances.myEditorInstance.commands object.
Nested animated routes in React Router V6
I'm really struggling to implement something in React Router Version 6, that was relatively straight-forward in Version 5.
I'd like to not only animate between routes (which is now finally possible in Version 6 due to a recent commit to the library), but also to animate nested routes. Think sliding pages, within sliding pages.
Here is a CodeSandbox with what I have so far.
I'm using Framer Motion's AnimatePresence to detect exit animations. But React Router wants to re-animate the parent route, even if just the nested route has changed.
As you can see in the CodeSandbox demo, the routes are animating okay, but when clicking the "Next nested page" link, it also re-animates the whole page, which is not what's desired. What I'd like is for Page 1 to stay static while its nested route animates. Obviously, when you click "Next page", the whole page should animate (moving from Page 1 to Page 2). But I can't seem to have my cake and eat it.
Has anyone been able to do this? Grumblings in the library and on the wider web are indicating that this may be impossible currently?
The reason why AnimatePresence is re-animating the parent route upon child navigation is because of the key prop. When the key changes, React treats it as an entirely new component. Since the location.key will always change when a route changes, passing location.key to both Routes components will trigger the route animations every time.
The only solution I can think of is to manage the keys for both Routes components manually. If you change the URL structure to make the base route path /page1, the solution is pretty simple:
Forked Demo
To keep your current URL structure, you'll need to create a key for the top-level Routes component that changes when navigating from / to /page2, but doesn't change when navigating from / to /nested1 or from /nested1 to /nested2.