All files in my PC are stuck on "read only", even as the only admin
Windows 7, 32-bit: There is 1 user account in this PC, and that is my account.
What I do is right-click a folder, uncheck the Read-only option, click Apply, and close. When I reopen the folder's properties I see that the box is restored and the files are still read-only.
What should I do?
Have you considered opening up process monitor and seeing who's setting the readonly bit?
Can you give details? I am using Windows in another language, so I couldn't find Process Monitor. Do you mean Ctrl+Alt+Del?
Could you be looking at an SSD that's reached the end of its rated life and gone into read-only mode?
The question is inconsistent: you say you remove Read-only from the folder, but your title is about files. Folders in Win 7 can show in File Explorer as "Read-only" (a black square, as opposed to a checkmark, means that only some of the files underneath that folder are Read-only), but that doesn't actually mean all the files are Read-only. Please clarify the question to address 1) what folders you are looking at; 2) how you are seeing the Read-only box (File Explorer, right-click folder and choose Properties?); and 3) whether you are checking files or just folders. This type of detail is important.
From your question it seems this issue is on folders, and you do not mention any inability to access these files. If this is the case the below should apply to you:
Unlike the Read-only attribute for a file, the Read-only attribute for a folder is typically ignored by Windows, Windows components and accessories, and other programs. For example, you can delete, rename, and change a folder with the Read-only attribute by using Windows Explorer.

The Read-only and System attributes are only used by Windows Explorer to determine whether the folder is a special folder, such as a system folder that has its view customized by Windows (for example, My Documents, Favorites, Fonts, Downloaded Program Files), or a folder that you customized by using the Customize tab of the folder's Properties dialog box. As a result, Windows Explorer does not allow you to view or change the Read-only or System attributes of folders.

When a folder has the Read-only attribute set, it causes Explorer to request the Desktop.ini of that folder to see if any special folder settings need to be set. It has been seen that if a network share has a large number of folders set to Read-only, it can cause Explorer to take longer than expected to render the contents of that share while it waits on the retrieval of the Desktop.ini files. The slower the network connectivity to the share, the longer this process can take, to the point where Explorer may time out waiting for the data and render nothing or appear to hang.
I have run into the issue you are describing in the past where you uncheck the box and after closing and opening properties again it is still checked. The above is what I found when looking into this issue.
Source
I've spent a lot of time trying to make my folders not read only too. Even talked to my computer support staff and they were unable to resolve it. Guess what! There wasn't anything to fix.
I did a Google search and found out that while a read-only file restricts changes, a read-only folder is marked that way so different programs know how to treat it (it's a bit technical for me to explain more, since that's all I understand). Kyle's answer has the more technical details about it all.
But the real eye opener was this website, which had pictures. It seems that even when the read-only box has a little square in it, the folder isn't read-only. Only when the box has a check mark in it is it read-only. So all this time and energy, and they weren't even read-only; and even if they were, the attribute only matters to programs.
Hope this is the case with you and they aren't read-only, so you won't have to worry. Cheers! Here are the pictures showing which are and aren't read-only.
http://www.sevenforums.com/tutorials/63741-read-only-file-folder-attribute.html
The link to the site with pictures c/should have been a comment on Kyle's answer.
Please note that a black box, vs. a black checkmark, means that -some- of the files underneath that folder structure are readonly. If all of them were, it would be a black checkmark. This difference (black box vs checkmark) is becoming a standard for Windows selection checkboxes.
Are you checking files or folder? Windows ignores read-only attributes for folders. If you un-check the read-only checkbox for a folder, it just appears greyed out the next time you open the folder properties. Check the "Cause" section of this KB article: http://support.microsoft.com/kb/326549
It is not grayed out; there is a blue square in the read-only checkbox, and it remains when I reopen the folder properties.
Try using the attrib tool from the command prompt. You can type attrib /? to see usage guidelines.
Try this:
attrib -r -s c:\YourFolder /S /D
-r means remove readonly attribute
-s means remove system attribute
/S means process files and all subfolders
/D means process folders also
I just ran into the same issue as the OP. Applying this commandline fix had no effect.
Your methodology is sound, but unfortunately it doesn't resolve the original question, primarily because the files themselves are unlikely to be Readonly (or, at least, only some of them are). Also the Readonly attribute on a folder only applies to the files below it IF they inherit the attribute -- i.e. put a non-readonly file into a readonly folder in Windows, and it normally remains NOT readonly. OTOH if Readonly occurs as a permissioning issue set in the ACL so the user only has "Read" and "Traverse" rights and the rights are inherited, the attrib command will fail.
I don't know the reason why, but I recently had an issue where opening files in AutoCAD warned that they were read-only, which previously had not been a problem. The files always opened, but AutoCAD would not let me save them under the same file name, even though I had not changed any attributes. Turning off the preview pane lets them open normally without the read-only warning, and I can make modifications and save them under the same file name again. I'm not sure why it matters, but the read-only problem went away once the preview pane was deselected.
You're on the right track for a common cause of this issue. Basically, AutoCAD has to run in order for you to preview files of its type, and AutoCAD locks the file while it has it open. If you look at your task manager while previewing AutoCAD files you'll see the AutoCAD executable running. Adobe Reader does a similar thing. Turning off Preview is an easy way to resolve some of these problems. HOWEVER, what you have posted is more of a comment than a solution. We're glad you found the answer, but please stick to answers in this section until you have the rep necessary to comment.
@Dave Filicicchia , please edit your answer to provide the steps for turning off Preview in Windows 7 (File Explorer, I assume), even if it seems obvious to you. Perhaps remove some of the personal commentary as well & focus on the solution.
Here is the solution:
The problem isn't that the folder is Read-only. Rather, because your folder was created on a different installation of Windows, you no longer have NTFS security permissions to access (read) the folder.
Correct this by following these steps to take ownership and then grant yourself full access to the folder.
1. Right-click the folder > Properties
2. Security tab > Advanced
3. Click Change to the right of Owner
4. Enter Users (the user name you used when installing Windows) into the box, click Check Names, then OK
5. Enable the checkbox Replace owner on sub-containers and objects, then click Apply
That should fix it by this point (step 5); however, if the problem continues, see below:
6. If prompted that you do not have permission to read..., click Yes
7. Completely close out of the Advanced Security Settings dialog
8. Right-click the folder > Properties
9. Security tab > Edit...
10. Click Add...
11. Enter Users into the box and click OK
12. Enable the Full Control checkbox, then click OK
Where does the OP mention that the disk or even the files are from another system?
I had the same issue, and while messing around I ended up figuring out what my problem was.
Go into the folder properties
Click on advanced on the general tab
Uncheck all options
Then take off the read only attribute and apply
Hope it helps.
This looks like essentially the same solution as jáquer's answer.
I found a solution: make sure the preview button is off in the file's folder. The preview button is in the top-right corner of the folder window.
Please read the question again carefully. Your answer does not answer the original question. Preview has nothing to do with ReadOnly.
C++ Delimiter for newline and again for :
Say I have a textfile "Employees.txt" with Employee name and ID.
Like so:
John:d4250
Sarah:s5355
Alan:r4350
If I have a very very basic Employee class which has a constructor for Name and ID
and I wish to read from this textfile and insert them into a vector
Am I best to use something like:
void GenericProgram::loadEmployees()
{
string line;
ifstream empFile("employees.txt");
if(empFile.fail())
{
cout << "input file opening failed\n" << endl;
exit(EXIT_FAILURE);
}
while (!empFile.eof())
{
string empName;
string empID;
while (getline(empFile, line, '\n'))
{
// this will give me the line on its own
// now how to delimit again using ':'
// then do something like
Employee e(empName, empID);
employeeVector.push_back(e);
}
empFile.close();
}
}
I apologise that this is so basic. Brain is failing on me. I was wondering if there are better ways to read from files to populate objects with streams.
Never write while (!f.eof()), see http://stackoverflow.com/q/5605125/981959, while(getline(...)) already handles the EOF case
Also "this has been answered but I didn't understand it" is a poor reason to ask the same thing again. Try harder.
Just add to your code
std::size_t pos = line.find(":", 0);
if (pos != std::string::npos)
{
std::string empName(line.begin(), line.begin() + pos);
std::string empID(line.begin() + pos + 1, line.end());
}
For multiple ":"
std::vector<std::string> str;
std::size_t start = 0;
std::size_t pos;
while ((pos = line.find(':', start)) != std::string::npos)
{
    str.push_back(line.substr(start, pos - start));
    start = pos + 1; // step past this ':' before searching again
}
str.push_back(line.substr(start)); // the last field, after the final ':'
But I haven't tested it.
I get this error from 'line' on the third line with empID:
std::string line
Error: no instance of constructor "std::basic_string<_Elem, _Traits, _Alloc>::basic_string[with _Elem=char, _Traits=std::char_traits,_Alloc=std::allocator]" matches the argument list
argument types are: (std::_String_iterator<std::_String_val<std::_Simple_types>>, const size_t)
I assume you meant to insert this inside :
while (getline(empFile, line, '\n'))
Oh, my bad. Sorry. It should be:
std::string empID(line.begin() + pos + 1, line.end());
Thanks a lot :) What should I search for to read up about this method?
@user3665874 careful: unless you can assume you're always parsing well-formed data (e.g., what about a line with no ':'?), this will fail.
And what happens if there is no ':' in the line? As a general rule, either use the standard functions of <algorithm> (whose std::find will return the end iterator if it doesn't find the character), or use std::string::substr() (which accepts a length of std::string::npos) with the results of the member functions of std::string. (Although in this case, I suspect that some sort of error handling would be appropriate, maybe doing nothing but outputting an error message if there is no ':'.)
If there is no ":" in the line, the function returns std::string::npos, a special "not found" value. You need to use something like this:
if (pos != std::string::npos)
{
//good
}
else
{
//there are no : in your line
}
Does anyone know what to search for to read up about this? I want to do the same thing for when there are more than one ":" in the line.
There are many ways to do this. You can do it by using only std::string. Or using CString. Or you can use stl algorithms. I saw in your code std::string and suggested this method.
You need to know find method in std::string. I read about methods in http://www.cplusplus.com.
For example std::string::find is here: http://www.cplusplus.com/reference/string/string/find/
Alright thanks a lot, will do :)
With a file with multiple ":", when I use "line.begin() + pos + 1, line.end()" for my second string, I start at the correct position but then get the entire remaining line (I assume because of line.end()). Do I need to use find(":") again to stop at the next ":"?
It would be better to find all the positions first:
std::vector<size_t> positions;
size_t pos = 0;
while ((pos = line.find(":", pos)) != std::string::npos)
{
    positions.push_back(pos);
    ++pos; // advance past this ':' so we don't find it again
}
and then create an array of strings
I married my wife without my parent/guardian's permission; is it valid?
My mother refuses my marriage with a girl (my cousin's sister-in-law). She says the girl's status is not the same as mine and that she lives in a village; that means my family is only concerned with social status.
However, we got married in front of a qazi, and on that occasion two of my wife's cousins were present, and there were witnesses also; but we still haven't announced our marriage to our family members because they would be hurt.
We thought our family issue would be managed and we would get married again with their permission, but the scenario is different from what we expected; my guardians are too strict and are not allowing me to do so.
If I want to listen to my family then I'll have to divorce (secretly) that girl.
If I don't then I'll have to announce the marriage to society.
Question: Is our marriage valid?
The marriage is already done on paper, although (in my understanding) it is not valid before Allah; so the papers/kabin are now just like an agreement, not a kabin.
This question is a continuation from Should I marry a Muslim girl without my parents' permission?
Possible duplicate of I want to marry a muslim girl without my parents permission , should i do this?
Well, questions asking for advice are hardly on-topic on SE. I wonder, if you were married by a qadhi, how he could have married you without the necessary conditions? If the conditions were not fulfilled, maybe you should nullify your marriage or do a marriage with all the necessary conditions, as you, as a man, are allowed to marry yourself off. But it is always best to have the agreement of your parents. I can't really see anything new in this question!
Yes, the qadhi was wrong; he did it only for money (the marriage fee). Anyway, the girl's parents now agree to a relationship with us, even though my family made so many comments about them. Please advise.
As nim quoted and with the new information you can get married again or at least legalize your marriage. Or is the major problem that you already have marriage papers? Please clarify and add the information to your post!
First, this site is not a support site, and detailed personal daily-life questions are not encouraged here. However, I will try to explain some points I think you misunderstood.
Don't get it wrong: Islam highly encourages marriage, which provides for the continuation of Islam. If you and your wife will be religious in your marriage, you should continue the marriage and announce it to everyone so it is valid in Islam. Having a problem with your family is not the first concern of Islam. Allah asks us to obey His rules and live our lives within those rules.
In your situation, it seems you are facing a conflict between two rules of Islam:
Getting married
Obeying your parents
In situations like this, you need to step back and think about the consequences of your actions:
Will your marriage lead you to an Islamic life?
Will obeying your parents lead you to an Islamic life?
There are 4 outcomes of these questions:
Yes, Yes:
If both outcomes lead you to an Islamic life, it's up to you. You can decide to make your parents happy, or you can decide to make your wife happy.
Yes, No:
In this case you should choose your marriage. If your parents' concern is not an Islamic life, you don't have to obey them. The primary concern of Islam is believing in Allah and living your life His way; if your parents suggest another way, you don't have to obey them. Don't get frustrated by the pain that comes to you in this life because of living Islam. Allah will reward you in many different ways, in this life or the hereafter.
No, Yes:
Obviously you should obey your parents and abandon your marriage. However, I must add that while divorce is halal in Islam, it is never encouraged.
No, No:
This is up to you as well. You should decide based on your own concerns. However, as I mentioned, Islam doesn't offer divorce as a first option.
IntelliJ - fix alt+shift+space
I'm using intellij and I have this issue:
every time I hit Ctrl+Shift+Space to autocomplete, a Windows contextual menu pops out at the top-left corner (and the autocomplete is also triggered).
Is there a way to change either the IntelliJ or the Windows menu shortcut?
The picture of the menu opened is attached:
Generally the keymaps can be changed under File - Settings - Keymap
The Windows O.S. has owned the Ctrl+Space shortcut for decades. Check the Keymap settings in IntelliJ to remap it there.
I was using the Eclipse keymap; the error was there. I've now mapped it to Alt+Numpad0 (twice).
Windows uses Alt+Shift+Space as a system shortcut. I found no way to remap it; even PowerToys cannot effectively do that.
You may want to select another keymap for IntelliJ IDEA. Here is the list with the comparison: https://www.jetbrains.com/help/rider/Keymaps_Comparison_Windows.html
Thank you, I found out my IDEA was set to "Eclipse" shortcuts; I remapped it to Alt+Numpad0 (twice). Thanks for helping.
Escaping Expressions: Backslashes
I have a code fragment that works for every escape except \.
The problem I have is that split removes all \ characters.
public static String esc = "\\";
public static String[] saveSplit(String str, String regex) {
String[] test = str.split(regex);
ArrayList<String> list = new ArrayList<>();
String storage = "";
for (String string : test) {
if (!endsWith(string, esc)) {
if (!storage.equals("")) {
list.add(storage + string);
storage = "";
continue;
}
list.add(string);
} else {
storage += string.replace(esc, regex);
}
}
if (!storage.equals("")) {
list.add(storage);
}
String[] retArray = new String[list.size()];
retArray = list.toArray(retArray);
return retArray;
}
I need this method to safely split a string that has \ without removing the \, so that when I escape the \\ with \\\\ they won't get lost.
What I expect is that String s = "\\\\element1\\element2\\\\\\element3\\\\element4"
turns into [{\\element1},{element2},{\\element3\\element4}],
but it removes all \.
Possible duplicate of Replacing single '\' with '\\' in Java
If you want to split without removing the elements you want to split at, you'll need a look-behind and look-ahead, e.g. a regex like (?<=[^\\])(?=\\) (meaning "any non-backslash before the match and a backslash after the match" which means the zero-width position right in between). Thus "\\\\element1\\element2\\\\\\element3\\\\element4".split("(?<=[^\\\\])(?=\\\\)") would result in ["\\\\element1", "\\element2", "\\\\\\element3", "\\\\element4"]. You should then be able to work from there to get your desired output.
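To make the look-around approach from the comment above concrete, here is a small self-contained sketch (the class and method names are mine, chosen for illustration):

```java
// Splits at the zero-width position between a non-backslash character and the
// next backslash, so the backslashes themselves are kept inside the tokens.
public class BackslashSplit {

    public static String[] splitKeepingBackslashes(String s) {
        // The regex (?<=[^\])(?=\) written as a Java string literal:
        return s.split("(?<=[^\\\\])(?=\\\\)");
    }

    public static void main(String[] args) {
        // The string \\element1\element2\\\element3\\element4 from the question:
        String s = "\\\\element1\\element2\\\\\\element3\\\\element4";
        String[] parts = splitKeepingBackslashes(s);
        if (parts.length != 4) throw new AssertionError("expected 4 parts");
        if (!parts[0].equals("\\\\element1")) throw new AssertionError(parts[0]);
        if (!parts[1].equals("\\element2")) throw new AssertionError(parts[1]);
        if (!parts[3].equals("\\\\element4")) throw new AssertionError(parts[3]);
    }
}
```

From these tokens you can then collapse or strip the escape sequences yourself, which sidesteps split eating the backslashes.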
Ruby code crashes with NoMethodError: undefined method for nil:NilClass
I know very little about Ruby, but inherited a Ruby app that has recently started to have an issue.
There's a Resque based job which runs periodically, and has been running fine for months and until now it suddenly stopped working. I verified the code hasn't changed, and thus thinking maybe it's a data related issue.
It runs for about a minute and then shows up as an error in Resque.
Retried just now
Remove
Class: Processor::ReportRunner
Arguments:
Exception: NoMethodError
Error: undefined method `nass_code' for nil:NilClass
/var/www/applications/usps/app/models/report.rb:58:in `build_node'
/var/www/applications/usps/lib/processor/processor/reporter.rb:9:in `block in prep'
/usr/local/rvm/gems/ruby-1.9.3-p448/gems/activerecord-3.2.8/lib/active_record/relation/delegation.rb:6:in `each'
/usr/local/rvm/gems/ruby-1.9.3-p448/gems/activerecord-3.2.8/lib/active_record/relation/delegation.rb:6:in `each'
/var/www/applications/usps/lib/processor/processor/reporter.rb:8:in `prep'
/var/www/applications/usps/lib/processor/processor/reporter.rb:15:in `execute'
/var/www/applications/usps/app/models/owner.rb:33:in `gone'
/var/www/applications/usps/lib/processor/processor/report_runner.rb:5:in `perform'
The line of code in question is:
<tran:NassCode>"+self.trip.master_destination.nass_code+"</tran:NassCode>
master_destination comes from app/models/trip.rb:
class Trip < ActiveRecord::Base
belongs_to :contract
belongs_to :origin, :class_name => 'Location'
belongs_to :destination, :class_name => 'Location'
belongs_to :frequency
...
def master_destination
return master_trip_end.destination
end
And that destination is a reference to Location.rb, which has the nass_code referenced in the error.
class Location < ActiveRecord::Base
belongs_to :owner
has_one :address, :as => :owner, :dependent => :destroy
attr_accessible :name, :nass_code, :radius, :address_attributes
I know that's not much to go on - but any clues or ideas on what causes that kind of error, etc...
Looks like a data issue. trip.master_destination is returning nil.
For at least one of your records, master_destination is returning nil.
I would add a debug statement in master_destination that outputs the trip currently being processed.
When it crashes, the line right before the crash will show you which trip has the issue.
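A sketch of that debugging guard (the Trip and Location classes below are plain stand-ins for the real ActiveRecord models, just to make the nil handling concrete):

```ruby
# Stand-in models carrying only the attributes involved in the error.
class Location
  attr_reader :nass_code
  def initialize(nass_code)
    @nass_code = nass_code
  end
end

class Trip
  attr_reader :id, :master_destination
  def initialize(id, master_destination)
    @id = id
    @master_destination = master_destination
  end
end

# Instead of calling trip.master_destination.nass_code directly, guard the
# nil case and log the offending trip so one bad record doesn't kill the job.
def nass_code_for(trip)
  dest = trip.master_destination
  if dest.nil?
    warn "Trip #{trip.id}: master_destination is nil, skipping"
    return nil
  end
  dest.nass_code
end
```

Running the job with a guard like this should log the id of the trip whose master_trip_end has no destination, which you can then fix in the data.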
Angular 2 - Cancel Service Call
I have a menu with category tabs and the following function on click.
this._customerservice.GetCustomersByFilter(this.FilterOptions) //Get Customers by Page Number and CategoryId
.subscribe(res => {
if (this.customersArray.paginatedCustomers.length != 0) {
res.paginatedCustomers.forEach(element => {
this.customersArray.paginatedCustomers.push(element);
});
}
else
this.customersArray = res;
if (infiniteScroll != null)
infiniteScroll.complete();
this.customersView = this.customersArray;
});
If the user clicks a category and, while the service call is still loading, clicks a second category, I want to stop the first call from returning a result, because it produces output for the wrong category.
Calling subscribe returns a Subscription. You can cancel the subscription by calling the unsubscribe method. Example:
this.customerSubscription = this._customerservice.GetCustomersByFilter(...).subscribe(...);
// cancel the subscription
this.customerSubscription.unsubscribe();
ok so when I put the unsubscribe above, it should unsubscribe all the previous requests? I will try now
I got "unsubscribe is not a function".
@MissakBoyajian Maybe your rxjs version is different. Try dispose() instead of unsubscribe().
You might want to take a look at switchMap. It's often used in conjunction with Debounce.
Also RangeIo has a nice article.
Why did I get a -1? I mean your category change is effectively calling for a request you want to override. This does the same. Quote: switchMap will automatically unsubscribe from any previous observable when a new event comes down the stream.
I did not -1. I am checking it now. It looks a bit complicated, but I'll keep you posted on whether it worked.
How to replace repetitive string value with values in a list?
I'm trying to create simple Python program for below example.
Sentence = "Person A has ? baloon, Person B has ? baloon Person C has ? baloon"
There can be n number of balloons in this sentence.
Color = ["Black", "Blue", "Red", "Green"]
There can be n number of colors.
FinalSentence = "Person A has Black baloon, Person B has Blue baloon Person C has Red baloon"
Trying to replace ? in Sentence with the colors in list Color sequentially.
I've applied FinalSentence = Sentence.replace("?", Color[0]) in a loop. Not sure if this is the correct way of thinking.
You could use string formatting with the built-in str.format() method:
sentence = "Person A has ? baloon, Person B has ? baloon Person C has ? baloon"
color = ["Black", "Blue", "Red", "Green"]
final = sentence.replace("?", "{}").format(*color)
print(final)
# >>> Person A has Black baloon, Person B has Blue baloon Person C has Red baloon
In the first step, we replace the given placeholder ? with a placeholder that the format() method understands, namely {}, using the str.replace() method. As a consequence, the sentence now looks as follows:
"Person A has {} baloon, Person B has {} baloon Person C has {} baloon"
In the second step, we replace the placeholders with the actual contents from the color list using the format() method. We have to "unpack" the list here, which we do with the preceding *. So, what we actually write here, is the following:
("Person A has {} baloon, Person B has {} baloon "
"Person C has {} baloon").format("Black", "Blue", "Red", "Green")
Update: As @SIGHUP pointed out, there is one more color in the given color list (4) than there are placeholders to be replaced (3). Yet still, the approach using format() works. To find out why this mismatch does not cause an error, we can look at the corresponding source code:
It seems that the string formatter simply iterates over the placeholders (line 203), determines the argument index for the empty ones (which is the type of placeholder that we are using; line 216 and 221), and then gets the argument at the determined index from the list of arguments (line 234). Because there is no check for superfluous arguments, this implies that an argument list that is too long (as is the case here) will not cause any problems. However, whether this should be considered expected behavior or an implementation detail of an edge case is not immediately clear to me – the documentation of str.format() does not handle this case.
If we wanted to make absolutely sure that the list of arguments is no longer than the list of placeholders, we could cut it off at the count of placeholders, e.g. like so:
final = sentence.replace("?", "{}").format(*color[:sentence.count("?")])
However, as we saw, this is not necessary; at least not with the "standard" Python implementation (i.e. CPython). Also, this still doesn't handle the case of having too few arguments, but then again, this is a different problem.
It's interesting to note that the number of parameters passed to format() must be at least the number of replacement fields in the string to which it's applied. Too few and you'll get an IndexError. Too many and the "extra" values are simply ignored. Thus, this is a perfectly reasonable solution:
Sentence = "Person A has ? baloon, Person B has ? baloon Person C has ? baloon"
Color = ["Black", "Blue", "Red", "Green"]
replaced_text = Sentence
for color_ in Color:
replaced_text = replaced_text.replace("?", color_, 1)
print(replaced_text)
Note that it assumes that the order in Color in the order that you want in your sentence.
This will iteratively replace "?" by the values within Color
Here's the stepwise approach to address your issue:
Given sentence with placeholders:
sentence = "Person A has ? balloon, Person B has ? balloon, Person C has ? balloon"
List of colors to replace the placeholders:
colors = ["Black", "Blue", "Red", "Green"]
Initialize a variable to keep track of which color to use:
color_index = 0
Iterate over the sentence and replace each '?' with the corresponding color:
while "?" in sentence:
if color_index < len(colors):
sentence = sentence.replace("?", colors[color_index], 1) # Replace only the first occurrence
color_index += 1 # Move to the next color in the list
else:
# If there are not enough colors to replace all '?', break to avoid infinite loop
break
print(sentence) # Output the final sentence
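One more variant, for completeness: re.sub accepts a function as the replacement, so you can pull colors from an iterator. Falling back to "?" when the list runs out is my choice here, not from the question; it leaves extra placeholders intact instead of raising.

```python
import re
from itertools import chain, repeat

sentence = "Person A has ? baloon, Person B has ? baloon Person C has ? baloon"
colors = ["Black", "Blue", "Red", "Green"]

# Each regex match consumes the next color; once colors run out, "?" stays as-is.
fill = chain(iter(colors), repeat("?"))
result = re.sub(r"\?", lambda m: next(fill), sentence)
print(result)
# Person A has Black baloon, Person B has Blue baloon Person C has Red baloon
```

This keeps the sequential replacement in a single pass over the string, and the surplus fourth color is simply never drawn from the iterator.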
Error: DateTimeZone::__construct: Unknown or bad timezone (7200)
I'm getting error "Error: DateTimeZone::__construct: Unknown or bad timezone (7200)" when opening CiviCRM on my Drupal6 site. Refreshing the page clears the error. The error occurred after I upgraded to D6.38 and Civi4.7.11.
My timezone is Africa/Johannesburg. Any ideas?
(This is not an answer; I don't have the reputation needed to comment.) @davy-ivins, are you also finding that the system status page is broken? I am also getting timezone errors after updating to 4.7.11 on Drupal 6 and am trying to determine if they are the cause of the issue with the system status. Ref: http://civicrm.stackexchange.com/questions/14848/system-status-page-empty-and-status-remains-critical-after-updating-from-4-7-8
I am also seeing the same error message after upgrading from CiviCRM 4.7.9 to 4.7.11, and still persists at 4.7.12 also on Drupal 6.x. Suspecting a defect got into CiviCRM between versions 4.7.9 and 4.7.11. Our error message is as follows: Sorry but we are not able to provide this at the moment. DateTimeZone::__construct(): Unknown or bad timezone (-18000) Also the following shows up in the civicrm log file at the time of the error: Nov 11 16:47:55 [info] $Fatal Error Details = array(3) { ["message"]=> string(61) "DateTimeZone::__construct(): Unknown or bad timezone (-14400)" ["code"]=> NULL [
This does not really answer the question. If you have a different question, you can ask it by clicking Ask Question. You can also add a bounty to draw more attention to this question. - From Review
I have a similar setup running Drupal 6 and CiviCRM 4.7.25. My system also showed System Status: Critical and no messages. The issue is that it first attempts to set the time zone, and if that fails it doesn't output any further messages for some reason.
In CiviCRM version 4.7.25 you can use the API console to run the System.check action by visiting http://sitename/civicrm/api#explorer and choosing Entity System and action check to repeat this test while you edit the files.
The fix is to fix the time zone problem which inherits from the Drupal 6 user and sets (in my case) an invalid time zone string which creates this issue.
For instances with Drupal 6, it uses the method CRM_Utils_System_Drupal6::getTimeZoneString() which retrieves the time zone from the user's timezone member in the database, which in my case is the offset in seconds, or a string -14400 for Eastern Time.
The problem is that the DateTimeZone code which converts this into a database time zone string supports offsets in hours, not seconds, so I added the following code to detect and adjust the values. Use at your own risk:
--- Drupal6.php (revision 4843)
+++ Drupal6.php (working copy)
@@ -754,6 +754,9 @@
// Note that 0 is a valid timezone (GMT) so we use strlen not empty to check.
if (variable_get('configurable_timezones', 1) && $user->uid && isset($user->timezone) && strlen($user->timezone)) {
$timezone = $user->timezone;
+ if (is_numeric($timezone) && $timezone % 3600 === 0 && abs($timezone) >= 3600) {
+ $timezone = intval($timezone / 3600);
+ }
}
else {
$timezone = variable_get('date_default_timezone', NULL);
Once I patched this, the check step works correctly and I can review the further messages we're getting. Another trick here, by the way, is to update CRM_Utils_Check::CHECK_TIMER in sites/all/modules/civicrm/CRM/Utils/Check.php to a small number while you're debugging, otherwise the checks update only once a day.
As always, use best practices when working on production systems and back up all source files before modifying them.
The other possible place to look is the Drupal variables table for the value named date_default_timezone, which is used if the user does not have a time zone configured.
Coplanar conducting sheets at different potentials
There is an exercise in Zangwill (7.23, Contact Potential) shown below. It is not very clear in the diagram, but $\rho$ is normal to $z$. The aim is to compute the potential in space; the suggested approach is to argue by symmetry that the potential is univariate in $\phi$.
Consider a cylindrical coordinate system with axis along the boundary between the sheets. The radial direction away from the boundary has no scale, so we expect no variation in that direction. There is also symmetry along the line of the boundary, so we expect no dependence there either. This leaves just the angular variable.
I have two results for the potential of this system, but they aren't the same. This violates uniqueness.
The textbook solution is that the problem is univariate in $\phi$ and the potential therefore linear in $\phi$. And if it's linear in $\phi$, then the $E$ field lines have power-law spacing.
However, when I did this, I thought of a solution via conformal map, where I take the solution of an infinite parallel plate capacitor and map the coordinates with a function so that the two plates become co-linear as in the problem statement. (Credit to the Desmos tool found here, made by the YouTube channel Partial Science; I had to make very few edits, but it was useful for visualizing the map continuously with a parameter.)
Starting with the infinite parallel plate capacitor, each sheet lies on $z=x\pm i\pi/2$, shown below.
Mapping the same curves with $w=\exp(z)$, we get the upper/lower plate on $w=\pm iv$, $v=\exp(x)$, where the gap near zero vanishes as the lower limit of $x$ approaches $-\infty$.
But under this map, the field lines start at uniform spacing and then get exponential spacing, disagreeing with the first solution. I looked up the exponential map and found that it is conformal. I've run across some other properties on a region of $\mathbb C$ near zero, but none of them makes it clear to me why this should fail.
What am I missing?
Turns out I messed up the method. I don't know if I should or shouldn't answer my own question if I just made a mistake.
Sorry, I don't understand what potential you are trying to calculate. I don't see why this problem is univariate in $\phi$, either. And I don't understand what you are trying to do with these maps.
As for the univariate in $\phi$. Suppose we use cylindrical coordinates, where the axis is the boundary between the plates.
There is no scale for the radial direction, so we expect independence along that axis; further, there is symmetry in the longitudinal direction. These two together remove the dependence on radial and longitudinal distance.
Also, more importantly: I have found that I simply misunderstood the technique. @Emmy, do you think I should answer my own question or should I remove the post?
I think you should try to answer it; it is always a good exercise to explain things, and it may help someone else someday
Thanks, I took a shot at it. Let me know if it reads well or if I should partition the solution from my misunderstanding.
I had some misunderstanding of the conformal map technique with the complex potential. So I'll be going through the solution, my misunderstanding, and clarifying the technique.
Before this, a known solution to the problem is that $\varphi$ is a linear function of the angle around the edge of contact, $\varphi(\theta)=V\theta/\pi$ and uniqueness implies any other solution must be equal to this one. Note that the $E$ field for such a solution would be inverse to the distance from the edge of contact.
The solution to the problem (the potential as a function of space for two half planes in contact) via using conformal maps is as follows.
Consider an infinite parallel plate capacitor with distance $\pi$ between the surfaces. One surface at $\varphi=V$ the other at $\varphi=0$. Each surface is a contour described by $z = x\pm i\pi/2$. If we want these two curves to lie parallel to one another and intersect at one point, then we can use the map,
$$w=\exp(z)=\exp(x)e^{\pm i\pi/2}=\pm i\exp(x)$$
and these two contours will both lie on the imaginary axis and share one point. It occurs as $x\rightarrow-\infty$, where $w\rightarrow0$.
My misunderstanding happened here. I had one solution containing $\exp(x)$, and I considered this to be related to the "spacing" of field lines and therefore to the strength of the $E$ field, but from the prior result I expected the field strength to be inverse to the distance, not exponential.
Turns out, I jumped the gun on a conclusion. The conformal map of the coordinates is not a complete description of the problem. The technique is to leverage a change of coordinates and the following fact,
$$
f(z) = f(z(w)) = \varphi + i\psi
$$
for a potential, $\varphi$, and a function, $\psi$ of field lines to describe the "same potential" in different coordinates.
In both cases the physical potential $\varphi$ is described by both representations of $f$, but each in a different coordinate system. So I describe $\varphi$ in the $z$ coordinate system.
The function, $f(z)=-(V/\pi)iz$ has uniform vertical field lines, just like a parallel plate capacitor with horizontal plates. But $z=\log w$, so
$$
f(z) = -(V/\pi)iz = f(z(w)) = -(V/\pi)i \log w = f(w)
$$
and if we polar parametrize $w=r\exp(i\theta)$, then
$$
f(w) = (V/\pi) (\theta-i\log r)
$$
which has real part $\varphi = {\mathrm{Re}}\,f =(V/\pi) \theta$, as expected. The imaginary part shows that the field lines are spaced logarithmically, which I did expect, but misinterpreted.
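As a quick numeric sanity check (my own sketch, not part of the original solution): two points on the same ray through the contact edge should share the same potential $\varphi$, while the stream function $\psi$ varies with $\log r$.

```javascript
// f(w) = -(V/pi) * i * log(w); with w = r*exp(i*theta) this gives
// Re f = (V/pi)*theta and Im f = -(V/pi)*ln r.
const V = 1.0;

function f(wx, wy) {
  const r = Math.hypot(wx, wy);
  const theta = Math.atan2(wy, wx);
  // -i * (ln r + i*theta) = theta - i*ln r, then scale by V/pi
  return { phi: (V / Math.PI) * theta, psi: -(V / Math.PI) * Math.log(r) };
}

const a = f(1, 1); // theta = pi/4, r = sqrt(2)
const b = f(2, 2); // same ray, twice the radius
// a.phi ≈ b.phi ≈ 0.25, while a.psi and b.psi differ by (V/pi)*ln 2
```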
So my misunderstanding was twofold, the first part was I didn't get to the end of the technique to find a potential, the other part was that the spacing of the field lines (related to the exponential) was not the form of the field strength (inverse power).
FFmpeg cannot write file to /dev/shm: Permission Denied
Issue:
I have an FFmpeg command that I've been running for months now to stream video into the /dev/shm directory. It had been working fine until relatively recently (e.g. within a week), now it throws a permission issue.
The command:
ffmpeg -threads 2 -video_size 640x480 -i /dev/video2 -c:v libx264 -f dash -streaming 1 /dev/shm/manifest.mpd
This is not the exact command (pared down for brevity); however, the outcome is the same:
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
X Error: GLXBadContext
Request Major code 151 (GLX)
Request Minor code 6 ()
Error Serial #57
Current Serial #56
ffmpeg version n4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-cuda --enable-cuda-sdk --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libnpp --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
libavutil 56. 51.100 / 56. 51.100
libavcodec 58. 91.100 / 58. 91.100
libavformat 58. 45.100 / 58. 45.100
libavdevice 58. 10.100 / 58. 10.100
libavfilter 7. 85.100 / 7. 85.100
libswscale 5. 7.100 / 5. 7.100
libswresample 3. 7.100 / 3. 7.100
libpostproc 55. 7.100 / 55. 7.100
Input #0, video4linux2,v4l2, from '/dev/video2':
Duration: N/A, start: 1900.558740, bitrate: 147456 kb/s
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 147456 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x55b15d8912c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x55b15d8912c0] profile High 4:2:2, level 3.0, 4:2:2 8-bit
[libx264 @ 0x55b15d8912c0] 264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[dash @ 0x55b15d88f600] No bit rate set for stream 0
[dash @ 0x55b15d88f600] Opening '/dev/shm/init-stream0.m4s' for writing
Could not write header for output file #0 (incorrect codec parameters ?): Permission denied
Error initializing output stream 0:0 --
Conversion failed!
(tl;dr: Could not write header for output file #0 (incorrect codec parameters ?): Permission denied)
For contrast, this version of the command (writing to the home directory) works fine (/tmp/ also works):
ffmpeg -threads 2 -video_size 640x480 -i /dev/video2 -c:v libx264 -f dash -streaming 1 ~/manifest.mpd
As mentioned above, the strange thing is that I have not (knowingly) changed permissions on anything or altered the application; it seemingly just stopped working (although, not ruling out that I caused it). The last time I remember it working was probably a week ago (~March 20th, 2021).
What I tried:
Running ffmpeg as sudo (sudo ffmpeg...)
Result: sudo: ffmpeg: command not found. This hasn't been necessary in the past, and it had the same output as before.
sudo sysctl fs.protected_regular=0
Result: No change.
Ran the ffmpeg ... command as su
Result: No change
chmod +777 /dev/shm
Result: No change (ls -tls reveals that the directory is indeed rwxrwxrwt)
chown'd both root:root and my username on /dev/shm
Result: No change.
touch /dev/shm/test.txt and sudo touch /dev/shm/test.txt
Result: The file is created without issue.
I've exhausted everything I could think of relating to permissions to get it to work.
The Question: What do I need to do to get FFmpeg to write files to /dev/shm? Ideally, I'd also like to figure out why this happened in the first place.
If anyone has any ideas for commands I should run to help diagnose this issue, feel free to add a comment.
System Info:
Kernel: 4.19.0-14-amd64
Distro: Debian
FFmpeg: version n4.3.1 (Was installed using Snapd, if it matters.)
== Solution ==
jsbilling's solution of using snap.<snapname>.* unfortunately did not work; however, in the linked forum thread there was a post which got around the issue of writing to /dev/shm by bind-mounting a directory in home (~/stmp) and writing the ffmpeg output there:
$ mkdir ~/stmp/
$ sudo mount --bind /dev/shm/streaming_front/ ~/stmp/
...
$ ffmpeg -threads 2 -video_size 640x480 -i /dev/video2 -c:v libx264 -f dash -streaming 1 ./stmp/manifest.mpd
Not an ideal solution, but a working one.
If you are using snaps, this forum post indicates there are specific patterns that are allowed for files in /dev/shm:
/dev/shm/snap.<snapname>.*
Another forum member suggested this hack, although it is basically a security bypass:
$ mkdir /dev/shm/shared
$ mkdir ~/shmdir
$ sudo mount --bind /dev/shm/shared ~/shmdir
$ touch ~/shmdir/foo
$ ls /dev/shm/shared/
foo
See my edit - I was able to get it working from one of the more 'hacky' solutions in the linked forum post. If you edit your answer to include that, I'd be willing to mark your answer as accepted (unless you'd rather not for security-moral reasons :) ).
And, while I know that this site discourages this sort of commenting: holy sh*t thank you.
@schil227 I added it here, since it does solve the immediate problem despite the security implications, the forum post might be removed/archived/etc.
Load a KML in Google Maps API with proper rendering of colored disc placemarks
I am using this piece of code to load a KML file in GoogleMapsAPI:
<script>
function googleMapInitialize() {
var myOptions = {
center: new google.maps.LatLng(-34.397, 150.644),
zoom: 8,
mapTypeId: google.maps.MapTypeId.ROADMAP
}
var myMap = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
var ctaLayer = new google.maps.KmlLayer('http://www.freemages.fr/test/session_k.kml');
ctaLayer.setMap(myMap);
}
</script>
While it loads the points properly, the style I have defined is not rendered properly: it uses a white disc in PNG format that is supposed to be re-colorized according to the color parameter and resized according to the scale parameter:
<Style id="sn_style_0">
<IconStyle>
<color>FF4ea24a</color>
<scale>0.4</scale>
<Icon>
<href>http://www.freemages.fr/test/disc.png</href>
</Icon>
</IconStyle>
<LabelStyle>
<scale>0.5</scale>
</LabelStyle>
</Style>
<Style id="sh_style_0">
<IconStyle>
<color>FF4ea24a</color>
<scale>0.5</scale>
<Icon>
<href>http://www.freemages.fr/test/disc.png</href>
</Icon>
</IconStyle>
<LabelStyle>
<scale>0.5</scale>
</LabelStyle>
</Style>
<StyleMap id="msn_style_0">
<Pair>
<key>normal</key>
<styleUrl>#sn_style_0</styleUrl>
</Pair>
<Pair>
<key>highlight</key>
<styleUrl>#sh_style_0</styleUrl>
</Pair>
</StyleMap>
Google Earth can render this kind of style properly, and it works very nicely.
In my project, I want to let the user choose exactly the color he wants, so I cannot use a collection of colored discs that I would use without recoloring, which otherwise would be a simple option.
I also tried dynamically creating a transparent PNG disc in PHP using the following code, but PHP does not manage the transparency very well and the disc does not display properly:
<?php
header('Content-type: image/png');
$r = $_GET["r"];
$fg = $_GET["fg"];
is_numeric($r) or $r = 16; // radius
strlen($fg)==6 or $fg = '22e822'; // fill color
$bg = 'ffffff';
function hex2rgb($im,$hex) {
return imagecolorallocate($im,
hexdec(substr($hex,0,2)),
hexdec(substr($hex,2,2)),
hexdec(substr($hex,4,2))
);
}
$d = $r*2;
$dm = $d-4;
$im1 = imagecreatetruecolor($d,$d);
imagecolortransparent($im1, hex2rgb($im1,$bg));
imagefill($im1,0,0,hex2rgb($im1,$bg));
imagefilledellipse($im1, $r, $r, $dm, $dm, hex2rgb($im1,$fg));
imagepng($im1);
?>
Ideally, if it were possible to create a colored disc by code directly in the KML file, that would be best, but I did not find any way to do so in the documentation.
Any suggestion how to workaround this issue?
Thanks!
Take a look at the 'styledmarker' in the google-maps-utility-library-v3.
http://code.google.com/p/google-maps-utility-library-v3/
It will let you control the icon styles programmatically.
http://google-maps-utility-library-v3.googlecode.com/svn/trunk/styledmarker/
For example, changing color
http://google-maps-utility-library-v3.googlecode.com/svn/trunk/styledmarker/examples/change_color.htm
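Building on that programmatic-styling idea: another workaround (my own sketch, not from the utility library) is to skip server-side PNG generation entirely and build the disc as an SVG data URI, which can then be used as a marker icon for markers you create yourself. Note this applies to markers added via the API, not to icons referenced from inside a KmlLayer:

```javascript
// Build a colored disc as an SVG data URI; any CSS color string works.
function discIcon(color, radius = 16) {
  const d = radius * 2;
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" width="${d}" height="${d}">` +
    `<circle cx="${radius}" cy="${radius}" r="${radius - 2}" fill="${color}"/>` +
    `</svg>`;
  return "data:image/svg+xml;charset=utf-8," + encodeURIComponent(svg);
}

// e.g. new google.maps.Marker({ position, map, icon: discIcon("#4ea24a") })
```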
wp pagenavi not working with category id
Can someone help me out? I am using the WP-PageNavi WordPress plugin on:
http://renegadehealth.com/blog/?cat=6708
When i click on the next button at the bottom the following link comes:-
http://renegadehealth.com/blog/blog/index.php?cat=6708&paged=2
an extra "blog/blog/" is being added
What's your permalink structure?
Permalink structure is:-
/%year%/%monthnum%/%day%/%postname%
Why are you using a category link built with a question mark?
If you use the correct permalink which is http://renegadehealth.com/blog/herbs
pagination is working fine.
For the herbs page i am using a page template for publishing all posts in a category.
If i don't use WCS hotfix for displaying category then these does not display.
If your permalink structure is /%year%/%monthnum%/%day%/%postname% your category link should look like http://renegadehealth.com/blog/category/herbs and should not contain any question marks. Try resaving your permalink structure.
If you are on a windows server there are some specific permalink recommendations such as adding a index.php to the permalink: /index.php/%year%/%monthnum%/%day%/%postname%/ http://codex.wordpress.org/Using_Permalinks#Permalinks_without_mod_rewrite
Actually Mirage, the question mark is there because i am using WCS hotfix. Earlier the categories were not displaying and i had to use this plugin.
For some strange reason, the URL below contains "blog" twice:
http://renegadehealth.com/blog/blog/index.php?cat=18&paged=2
instead of:
http://renegadehealth.com/blog/index.php?cat=18&paged=2
If there is some way to display the category details page without using the hotfix, that would also work.
Is there a way to add code to the .htaccess file so that "blog" does not appear twice?
Is there a reason why you are redirecting from renegadehealth.com to renegadehealth.com/blog ?
How is this redirection done?
wordpress is installed in the blog folder.
from what i can see there is some issue with .htaccess that is adding blog 2 times in the above url.
Can we somehow modify htaccess file for this to work?
ok, wordpress is in your blog folder. But what is in your root directory that it is redirecting to blog? Maybe this redirection is causing the error. You could install wordpress in the blog folder and put index.php in your root directory. This way you can get rid of the blog directory in your url
http://codex.wordpress.org/Giving_WordPress_Its_Own_Directory#Pointing_your_home_site.27s_URL_to_a_subdirectory
I tried the above suggestion as well, but the problem is the same. Is there no way to change:
http://renegadehealth.com/blog/blog/index.php?cat=2728&paged=2
to:
http://renegadehealth.com/blog/index.php?cat=2728&paged=2
As you can see, "blog" appears twice.
do you have an additional .htaccess in your root folder? How do you redirect renegadehealth.com to renegadehealth.com/blog/ ?
the site is being redirected to blog folder with simple javascript
In your permalink settings did you enter anything as category base? What is the content of your .htaccess file?
How to timeout ADB connect <ip>
I'm making automation bash script that will go over range of IP's and try to connect over ADB.
If I try:
adb connect <not android device>
the command will hang for 30 seconds until it says
"failed to connect to '<IP_ADDRESS>:5555': Operation timed out".
How can I make adb try to connect for less time, something like 3 seconds?
You can try:
timeout <seconds> adb connect <ip-address>
More can be found on: https://linuxize.com/post/timeout-command-in-linux/
how to bind piece of javascript after ajax update
Possible Duplicate:
How to bind a function to Element loaded via Ajax
In my application I want to populate an info panel on mouseover of a cell and highlight the cell so that the user can see which cell it is. I used the following JS and CSS, which worked well for cells that exist when the page is first loaded:
$(document).ready(function(){
$("table1 td").hover(function(){
$("table1 td").removeClass('highlight');
$(this).addClass('highlight');
});
});
and the highlight class:
.highlight{
border-style:outset;
border-width:5px;
border-color:#0000ff;
}
But in my application the table1 td elements are generated by a Wicket ListView and are updated via Ajax when the user does a search, and after an Ajax update that JS code has no effect. How can I make the code still work after an Ajax update? I really appreciate any help!
I think you're looking for live()
this is what your looking for http://stackoverflow.com/questions/3840149/jquery-live-event-for-added-dom-elements
Please always search before asking. And @Ben - quite right, though on() is a more up-to-date means of doing this, since they cleaned up the events API in 1.7.
Ah crap, I'm old school now..
Sorry - if you want to use the bind function, you should use this instead; jQuery's bind doesn't support 'hover' as an event type.
$(selector).bind('mouseenter', function () {alert('hover'); });
$(selector).bind('mouseleave', function () {alert('hover'); });
Ajax will update your table, so you should not write it in ready function.
here are two points:
1). Write it in Ajax success function.
$.ajax({
    ....,
    ....,
    success: function () {
        $("table1 td").hover(function(){
            $("table1 td").removeClass('highlight');
            $(this).addClass('highlight');
        });
    }
});
2). Use bind function.
$("table1 td").bind('hover', function(){
$("table1 td").removeClass('highlight');
$(this).addClass('highlight');
});
Why does code render in code block?
I want to display HTML in a webpage. I've wrapped it in a code block, but the last two lines still execute/render. What am I doing wrong?
<pre><code>
div {background: brown;}
div.bluebg {background: blue;}
<div>default brown background</div>
<div class="base">blue background</div>
</code></pre>
The last two lines were wrapped in div tags. I notice Stack Overflow strips them out. I don't want to strip them, but modify them, I guess with &lt; and &gt;. Is there a listing of tags that should be modified to render them in a webpage? Is there an online program that can convert these to the above syntax?
I can't parse your question. You need to provide somewhat more information.
I do not think [those tags] mean what you think they mean.
<pre> allows you to preserve white space and line feeds. <code> allows you to semantically indicate that code is being displayed on your page. Both have some default styles (such as applying a fixed-width font), but neither one does anything to escape <, >, &, or ", so any unescaped HTML code you put in between those tags is going to be processed as HTML. You'll have to use &lt;, &gt;, &amp;, and &quot;. Here's a page where you can paste in text and have it escaped: http://accessify.com/tools-and-wizards/developer-tools/quick-escape/
Thanks on the link. That's what I was looking for.
Just replacing the < with &lt; should be enough to get the HTML tag to display as text. You don't have to bother with the > characters.
You need to replace < with &lt;, > with &gt; and & with &amp;, and that's it.
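Those replacements are easy to script. A minimal sketch (my own, not from the thread); note the ampersand must be handled first, or it would re-escape the other entities:

```javascript
// Escape the characters that would otherwise be parsed as HTML.
function escapeHtml(s) {
  return s
    .replace(/&/g, "&amp;")  // must run first
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

escapeHtml('<div class="x">'); // '&lt;div class=&quot;x&quot;&gt;'
```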
Judging by the contents of your post I assume you're talking about showing code in Markdown on Stack Overflow.
Don't wrap your code in <pre> and <code>, just indent it by 4 spaces, like so:
div {background: brown;}
div.bluebg {background: blue;}
<div>default brown background</div>
<div class="base">blue background</div>
Markdown actually parses HTML (at least, these tags) as you can see. Outside of markdown you'd still have to change <div> to &lt;div&gt; for it to render properly anyway - using code/pre doesn't stop the code from rendering.
I only want it to display on Stack Overflow so others can see the question. I need to display HTML in a webpage and pre/code isn't working on some tags. Are you saying I will have to use the less-than/greater-than entities for most tags?
Wave file modification using java
I am currently trying to write a program that processes the data in a .wav file.
My main aim is to extract the data from an existing file, create a new file with appropriate wave headers, and then write the information/data into the new file.
I am expecting the program to be written in java.
Interesting project. Do you have a question?
Do you have a problem, or are you just looking for pointers on how to get started?
I have come across this in Google:
http://www.jstick.de/javaprojects/sampleeditor
This is a WAV editor and should provide you with the ideas for your implementation since it is open source.
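As a starting point for the header work, here is a sketch of reading fields from the canonical 44-byte PCM WAV header (JavaScript here just to show the byte layout; the offsets are the same when reading with Java's little-endian ByteBuffer). Real files can contain extra chunks before "data", so treat this as illustrative only:

```javascript
// Parse the fixed fields of a canonical RIFF/WAVE PCM header.
function readWavHeader(buf) {
  return {
    riff: buf.toString("ascii", 0, 4),   // chunk id, should be "RIFF"
    wave: buf.toString("ascii", 8, 12),  // format, should be "WAVE"
    audioFormat: buf.readUInt16LE(20),   // 1 = uncompressed PCM
    channels: buf.readUInt16LE(22),
    sampleRate: buf.readUInt32LE(24),
    bitsPerSample: buf.readUInt16LE(34),
  };
}
```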
Server running MS SQL getting login request from another server every minute
I have a Windows Server 2016 virtual machine running IIS 10 and MS SQL Server 2016. Every minute I'm getting a "Login failed for user" message in the event log. Sometimes 3 times per minute. The user that is specified is the name of another server. The servers are unrelated and shouldn't be talking to each other.
Event Log
Login failed for user 'Domain\OtherServer'. Reason: Could not find a login matching the name provided. [CLIENT: [ServerIP]]
Event Details
I ran a NetStat utility. It shows multiple connections from the IP address of the other server in a TIME_WAIT state. I ran the NetStat utility on the other server, and it shows no connection whatsoever to this server.
Net Stat Utility
I tried disabling the SQL Agent on this server, it made no difference. The other server does not have a SQL Agent running.
Any suggestions or troubleshooting steps would be appreciated.
Thanks
Netstat isn't going to show anything. You need to run a netmon packet capture on the other server for the "minute" and stop it after the logon fails. Netmon will show the process id of the process that is connecting. Should take about five minutes.
Greg beat me to it. Run a Netmon capture on the source server until you see new events in the event log on the destination server. Then stop the Netmon capture on the source server, filter the capture for the destination server ip address, and find out which process on the source server is responsible for the traffic.
I was able to capture the conversation. There isn't a Process ID or Name for the conversation. It's listed under
You need to go onto that other server and see which processes connect to your machine on port 1433. A harsher option would be to add a rule to Windows Firewall blocking traffic on port 1433 from that machine.
Combining form textfield values using php implode
I have 3 text fields and I want to pass the values after combining them using a hyphen.
<input type="text" name="val[]" />
<input type="text" name="val[]" />
<input type="text" name="val[]" />
Preferably, help me with the PHP implode option.
How do I retrieve the values after submit?
Thanks.
After sending the form, your values will be in $_POST['val'] or $_GET['val'] as an array, depending on the method of your form.
You can combine them simply by:
$hyphenated = implode("-", $_POST['val']); // or $_GET['val']
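(For comparison, the combine step itself is a one-liner in most languages; this is the JavaScript analogue of implode, with hypothetical field values:)

```javascript
const values = ["03", "555", "1234"]; // stand-ins for the three field values
const hyphenated = values.join("-");  // join is JavaScript's analogue of implode
// hyphenated === "03-555-1234"
```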
thanks. how do i change focus to next field once a field has max values
See if this works:
<input type="text" name="val[]" onkeyup='checkVals("field1", "field2");' id='field1'>
<input type="text" name="val[]" onkeyup='checkVals("field2", "field3");' id='field2'>
<input type="text" name="val[]" id='field3'>
<script>
function checkVals(this_field, next_field){
var fieldval = document.getElementById(this_field).value;
var fieldlen = fieldval.length;
if(fieldlen > 10){ // you can change 10 to something else
document.getElementById(next_field).focus();
document.getElementById(next_field).select();
}
}
</script>
Delete only incoming call in Call Logs in android
Is it possible to delete only incoming calls from the call log list?
If so how?
I am able to delete all of them easily but not sure how to delete only incoming calls?
Can somebody help me out with this?
Thanks!
In the CallLog provider database, the android.provider.CallLog.Calls.TYPE ("type") column has the value android.provider.CallLog.Calls.INCOMING_TYPE (1) for an incoming call record.
Use the method below with the condition type = 1:
public void delete(final String id, final String number) {
    Uri uri = Uri.parse("content://call_log/calls");
    ContentResolver cr = context.getContentResolver();
    // Select only incoming calls (type = 1) with the matching _id
    Cursor c = cr.query(uri, null, "type = 1 AND _id = ?", new String[] { id }, null);
    if (c != null && c.moveToFirst()) {
        do {
            String pid = c.getString(c.getColumnIndex("_id"));
            String pnumber = c.getString(c.getColumnIndex("number"));
            if (id.equals(pid) && number.equals(pnumber)) {
                context.getContentResolver().delete(CallLog.Calls.CONTENT_URI, CallLog.Calls._ID + " = ?", new String[] { id });
            }
        } while (c.moveToNext());
        c.close();
    }
}
I want to count the clicks on an item of a RecyclerView
I have a RecyclerView with 5 items [a, b, c, d, e]. Suppose 10 users clicked item [b]; how can I get count = 10? I also want to store this count in a Firebase database. (I am not using FirebaseRecyclerView.)
My other question is: I have 5 lists, and I want to show the items of the 5 lists in one RecyclerView randomly.
How can I do this? Thank you.
You have to implement a click listener on your RecyclerView.
For each click, check which element has been clicked.
You can temporarily save the count for each row somewhere and then save it to Firebase.
Take the 5 list items you want. (randomly or by another way)
Then create a recycler view and pass the 5 list items as a dataSource to your adapter.
Converting ArrayList to HashMap, however, selecting choice objects with different variable classes within ArrayList
This is a project I am working on for my intro to java class.
My professor has already laid out the base code, and the point of the project is to work with HashMaps and ArrayLists in combination with arithmetic.
Everything here is done by my professor so far except:
HashMap<String, Integer> typeAttack = new HashMap<String, Integer>();
I am also provided with a .csv file containing various statistics of a whole list of pokemon.
Out of the objects that my professor has already passed into the ArrayList "pokemonList", I only need to consider the "type" and "attack" variables, as I need to figure out which type of pokemon in the whole .csv file has the highest average attack level.
int attack = Integer.parseInt(split[1]);
String type = split[5];
My question is very simple: how can I convert only a portion of the ArrayList, specifically the "attack" and "type" variables, into my HashMap?
import java.util.*;
import java.io.*;
public class Project6 {
public static void main(String[] args) throws IOException {
ArrayList<Pokemon> pokemonList = collectPokemon(args[0]);
HashMap<String, Integer> typeAttack = new HashMap<String, Integer>();
}
// Don't modify this method. If you get errors here, don't forget to add the filename
// as a command line argument.
public static ArrayList<Pokemon> collectPokemon(String filename) throws IOException {
BufferedReader file = new BufferedReader(new FileReader(new File(filename)));
ArrayList<Pokemon> pokemonList = new ArrayList<Pokemon>();
file.readLine();
while(file.ready()) {
String line = file.readLine();
String[] split = line.split(",");
String name = split[0];
int attack = Integer.parseInt(split[1]);
int defense = Integer.parseInt(split[2]);
double height = Double.parseDouble(split[3]);
double weight = Double.parseDouble(split[6]);
String type = split[5];
Pokemon current = new Pokemon(name, attack, defense, height, weight, type);
pokemonList.add(current);
}
return pokemonList;
}
}
POKEMON CLASS
import java.util.*;
public class Pokemon {
private String name;
private int attack;
private int defense;
private double height;
private double weight;
private String type;
public Pokemon(String inName, int inAttack, int inDefense, double inHeight, double inWeight, String inType) {
name = inName;
attack = inAttack;
defense = inDefense;
height = inHeight;
weight = inWeight;
type = inType;
}
public String getName() {
return name;
}
public int getAttack() {
return attack;
}
public int getDefense() {
return defense;
}
public double getHeight() {
return height;
}
public double getWeight() {
return weight;
}
public String getType() {
return type;
}
public String toString() {
return "Pokemon: '" + name + "' Atk: " + attack + " Def: " + defense + " Ht: " + height + "m Wt: " + weight + "Kg Type: " + type;
}
}
You loop through the list. For each pokemon in the list, you get its attack and its type (by calling the methods of the Pokemon class that return this information). Then you do what you need to do (which you haven't said) to populate the map. Start with simpler problems: 1. Loop through the list and print each pokemon. 2. Loop through the list and print the attack and type of each pokemon.
Can you show what the Pokemon class looks like?
I don't understand, however... What is the method that he has created? It looks like it just sorts the elements of the .csv file into an ArrayList. How can I call this to obtain individual elements?
Also you don't need a map to figure out which pokemon has the highest attack level. You can do that with a simple loop
I am instructed to complete this by using HashMaps and ArrayLists instead of hardcoding with individual variables
How can I call this to obtain individual elements? By calling getAttack() and getType() on each pokemon of the list. Read my first comment again. Do each of the steps I recommend you to do.
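A minimal sketch of those two recommended intermediate steps. It uses a trimmed-down stand-in for the Pokemon class (and hard-coded sample data instead of the CSV) purely so the example compiles on its own:

```java
import java.util.ArrayList;

public class Steps {
    // Trimmed-down stand-in for the Pokemon class shown above, included
    // here only so this sketch is self-contained.
    static class Pokemon {
        private final String name;
        private final int attack;
        private final String type;
        Pokemon(String name, int attack, String type) {
            this.name = name;
            this.attack = attack;
            this.type = type;
        }
        int getAttack() { return attack; }
        String getType() { return type; }
        public String toString() {
            return "Pokemon: '" + name + "' Atk: " + attack + " Type: " + type;
        }
    }

    public static void main(String[] args) {
        ArrayList<Pokemon> pokemonList = new ArrayList<Pokemon>();
        pokemonList.add(new Pokemon("Bulbasaur", 49, "Grass"));
        pokemonList.add(new Pokemon("Charmander", 52, "Fire"));

        // Step 1: loop through the list and print each pokemon (uses toString()).
        for (Pokemon p : pokemonList) {
            System.out.println(p);
        }
        // Step 2: loop through the list and print the attack and type of each.
        for (Pokemon p : pokemonList) {
            System.out.println(p.getType() + " -> " + p.getAttack());
        }
    }
}
```

Note that the loop variable is a `Pokemon`, not a `String`, and the getters are called on each element, not on the list itself.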
HashMap<String, Integer> typeAttack = new HashMap<String, Integer>();
for (String pokemon : pokemonList) {
    typeAttack.put(pokemonList.getType(), pokemonList.getAttack());
}
This is what I've edited to my code so far.
I think it would be tough to print out each of the pokemon, as there are over 800 in the list provided, as well as 18 different types.
Now my issue is looping through the map and sorting each of the pokemon and their attack values according to their types.
Are you assuming that all the pokemon are of different types? If so then I really don't see the need for a hashmap. However, if your prof expects you to compute an average of the powers of all the pokemons of a given type and print this type then your map should be of type Map<String, List<Integer>> not Map<String, Integer>
Sorry for the sloppy code in my comments, btw. I'm new to the platform.
@all.k19 this code doesn't compile. The compiler will tell you why. Read its error messages very carefully; they have a meaning. The list is a list of Pokemon, so every element of the list is a Pokemon, not a String. And each pokemon has an attack and a type, but the list doesn't have an attack or a type. You're skipping the two steps I advised you to do before doing what you want to do. Don't. Those are intermediary steps that will help you get to the solution, and understand what you're doing.
You could try a simple for-each loop:
// Note the Map type change to Double!
HashMap<String, Double> typeToAvgAttack = new HashMap<String, Double>();
// Intermediate map to store the list of all attack values per type
HashMap<String, List<Integer>> typeToAttack = new HashMap<String, List<Integer>>();
for (Pokemon pokemon : pokemonList) {
    String type = pokemon.getType();
    int attack = pokemon.getAttack();
    // The map is empty at the start, so we need to check for the key's (pokemon type) existence
    List<Integer> attackValues = typeToAttack.get(type);
    if (attackValues == null) {
        typeToAttack.put(type, attackValues = new ArrayList<Integer>());
    }
    attackValues.add(attack);
}
// Iterate over the map keys to calculate the average
for (String type : typeToAttack.keySet()) {
    List<Integer> attackValues = typeToAttack.get(type);
    double average = calculateAverage(attackValues);
    typeToAvgAttack.put(type, average);
}
The helper function, copied from here:
public static double calculateAverage(List<Integer> values) {
    double sum = 0d;
    if (!values.isEmpty()) {
        for (Integer value : values) {
            sum += value;
        }
        return sum / values.size();
    }
    return sum;
}
Be aware though that this approach is neither optimal nor elegant. I'd prefer to use Stream API, but that may not be best suited for your current proficiency.
EDIT: I've adjusted the code not to use Java 8 methods.
This helps. My current knowledge doesn't allow me to extend past your first enhanced for loop, but the first two lines help. Thank you.
I've updated the answer, trying to be more explicit. Let me know if there is still something unclear.
There is no reason to declare sum as a boxed Integer instead of a straightforward int. And instead of if (typeToAttack.containsKey(type)) { List<Integer> attackValues = typeToAttack.get(type); attackValues.add(attack); } else { List<Integer> attackValues = new ArrayList(); attackValues.add(attack); typeToAttack.put(type, attackValues); } you can use List<Integer> attackValues = typeToAttack.get(type); if(attackValues == null) typeToAttack.put(type, attackValues = new ArrayList()); attackValues.add(attack);
@Holger the second part I agree with, editing the answer. The helper function I copied from the linked answer, so maybe you should put the comment there; it has been awarded 64 points so far. There might be more appropriate people in that discussion than here ;). Pasting the link once more: https://stackoverflow.com/questions/10791568/calculating-average-of-an-array-list
For the linked answer, the author changed int to Integer to fix the invocation of sum.doubleValue(). But you can replace the method invocation with a type cast, i.e. (double)sum instead; then it works smoothly with int sum, or just use double sum in the first place. I left a comment there too, but there's no reason to keep questionable code just because it's also in another answer.
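Holger's suggestion, spelled out as a self-contained sketch (not the linked answer's exact code):

```java
import java.util.Arrays;
import java.util.List;

public class Average {
    // An int accumulator is enough; casting to double (or using a double
    // accumulator) avoids boxing the sum as an Integer.
    public static double calculateAverage(List<Integer> values) {
        int sum = 0;
        for (int value : values) {
            sum += value;
        }
        return values.isEmpty() ? 0d : (double) sum / values.size();
    }

    public static void main(String[] args) {
        System.out.println(calculateAverage(Arrays.asList(50, 60, 70))); // prints 60.0
    }
}
```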
Why does the input function not work in Spyder (Python 3.9)?
# The code I typed
while True:
    s = input('Enter something : ')
    if s == 'quit':
        break
    print('Length of the string is', len(s))
print('Done')
# The console output
runfile('C:/Users/85251/Documents/Year3sem1/ISEM 2006 Python/indproj/untitled1.py', wdir='C:/Users/85251/Documents/Year3sem1/ISEM 2006 Python/indproj')
Traceback (most recent call last):
File ~\Documents\Year3sem1\ISEM 2006 Python\indproj\untitled1.py:8 in <module>
s = input('Enter something : ')
File ~\anaconda3\lib\site-packages\ipykernel\kernelbase.py:1075 in raw_input
return self._input_request(
File ~\anaconda3\lib\site-packages\ipykernel\kernelbase.py:1120 in _input_request
raise KeyboardInterrupt("Interrupted by user") from None
KeyboardInterrupt: Interrupted by user
I tried to trace it back, but that did not work.
KeyboardInterrupt is unrelated to your code. Please provide more information about the problem you're having. What does "not working" mean?
It works for me. There is something more going on that is not in the post...
What is a default, generic app in CentOS 7 to monitor a UPS battery over USB?
I installed a UPS battery backup that is not a well-known brand, but it has a USB port; other generic UPS monitoring apps on Windows can find it and read the battery level and other data even without manually installing a driver for it.
In CentOS 7, what is the most common / generic app to monitor a UPS battery over USB? I want to use the simplest method, ideally something included with CentOS, and only use branded software if needed. For example, I have seen that APC has apcupsd, but I assume that might only work with their UPSes; even if it works with this one, maybe there is a more generic default app for this in CentOS?
What I want to accomplish is to run a command and get the battery level from it, while installing as few extra dependencies as possible.
NUT (Network UPS Tools) is a common tool and supports lots of UPSes; https://networkupstools.org/stable-hcl.html
NUT is also in EPEL7 (as the nut package).
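As a sketch of what a minimal NUT setup looks like (the usbhid-ups driver covers many generic USB UPSes; the name myups and the driver choice here are assumptions to adapt to your hardware):

```ini
; /etc/ups/ups.conf
[myups]
    driver = usbhid-ups
    port = auto
```

Once the driver and upsd are running, `upsc myups battery.charge` prints the battery level as a single number.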
Thanks. I also came across that one, I will give it a try.
svn.remote.RemoteClient.info: FileNotFoundError: [WinError 2] The system cannot find the file specified
I am trying to connect SVN remote client and get the latest committed revision with following python codes in Windows:
r = svn.remote.RemoteClient(svnPath)
revNum = str(r.info().get("commit#revision"))
I am getting following error:
    revNum = str(r.info().get("commit#revision"))
  File "C:\Program Files\Python36\lib\site-packages\svn-0.3.45-py3.6.egg\svn\common.py", line 75, in info
    do_combine=True)
  File "C:\Program Files\Python36\lib\site-packages\svn-0.3.45-py3.6.egg\svn\common.py", line 54, in run_command
    return self.external_command(cmd, environment=self.env, **kwargs)
  File "C:\Program Files\Python36\lib\site-packages\svn-0.3.45-py3.6.egg\svn\common_base.py", line 25, in external_command
    env=env)
  File "C:\Program Files\Python36\lib\subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "C:\Program Files\Python36\lib\subprocess.py", line 997, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
I tried to print svnPath and r to make sure they were correct. As expected, I got the correct remote server path (let's say "remote_path") for svnPath and <SVN(REMOTE) remote_path> for r.
The remote SVN needs credentials (user ID & password) for access. However, the machine I use to run this script already has SVN set up with the correct credentials.
Do I still need to specify explicit credentials in the Python script to access it? If so, how? Or do I need an extra Python package for SVN?
Please help...
I had the same error and fixed it by installing an SVN command line client and adding its path to the PATH environment variable.
If you're on Windows, you can install the command line executable when installing Tortoise SVN, but by default the corresponding option is not checked (see ccpizza's answer).
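If the directory containing svn.exe is not already on PATH, another workaround is to extend PATH from the script itself before creating the client. The directory below is the typical TortoiseSVN install location and only an assumption; adjust it to your machine:

```python
import os

# Hypothetical install location of svn.exe -- adjust to your machine.
SVN_DIR = r"C:\Program Files\TortoiseSVN\bin"

# Prepend it so the svn library's subprocess call can find the executable.
os.environ["PATH"] = SVN_DIR + os.pathsep + os.environ.get("PATH", "")
```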
You probably solved your issue by then but looking at the code might help.
The class RemoteClient inherits from CommonClient which starts like this:
class CommonClient(svn.common_base.CommonBase):
    def __init__(self, url_or_path, type_, username=None, password=None,
                 svn_filepath='svn', trust_cert=None, env={}, *args, **kwargs):
        super(CommonClient, self).__init__(*args, **kwargs)
        ...
Therefore, the following should work:
import svn.remote
url = "http://server.com:8080/svn/SuperRepo/branches/tool-r20"
client = svn.remote.RemoteClient(url, username="toto", password="SuperPassword")
print(client.info())
Not working for me; error: "[WinError 2] The system cannot find the file specified".
Just to double check: you are sure that the path exists?
The path exists. I can check out through the Tortoise SVN client.
Is it required to mark class as final to make it immutable?
The concept of immutability in Java says that a class needs to be marked final to make it immutable.
My question is: if we do not mark a class as final, then it can be subclassed, but it is still only the object of the subclass that will be mutable, not the base class. The state/properties of the base class will remain immutable, as we will mark the properties of the base class as private.
My second question: are immutable classes and immutable objects two different concepts in Java?
Consider what happens when the sub-class overrides an accessor method of the base class. While the base fields are still immutable, every observer will see the state changing.
If we do not mark a class as final, then it can be subclassed, but it is still only the object of the subclass that will be mutable, not the base class. The state/properties of the base class will remain immutable, as we will mark the properties of the base class as private.
Suppose we have a class C1 that has only final fields but is not itself declared as final.
When someone creates a class C2 as a subclass of C1, they can include mutable fields in C2. Since instances of C2 are also instances of C1, we now have the situation that some instances of C1 may be changed. Therefore, we cannot consider C1 to be an immutable class.
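A minimal sketch of this, including the overridden-accessor case mentioned in the comments (class and field names are made up for illustration):

```java
public class ImmutabilityDemo {
    // C1 has only final state, but is not declared final.
    static class C1 {
        private final int x;
        C1(int x) { this.x = x; }
        int getX() { return x; }
    }

    // C2 is-a C1, yet it adds mutable state and overrides the accessor,
    // so "a C1" can now appear to change after construction.
    static class C2 extends C1 {
        private int y;
        C2(int x, int y) { super(x); this.y = y; }
        void setY(int y) { this.y = y; }
        @Override int getX() { return super.getX() + y; }
    }

    public static void main(String[] args) {
        C1 c = new C2(1, 0);
        System.out.println(c.getX()); // prints 1
        ((C2) c).setY(10);
        System.out.println(c.getX()); // prints 11 -- the observed state of "a C1" changed
    }
}
```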
Are immutable class and immutable objects two different concepts in java?
Yea. Kind of. It depends on who you talk to:
Some people would argue that an immutable object is any object that is an instance of an immutable class.
Other people would argue that an immutable object is any object that cannot be changed after it is constructed. For example here. And that would (by some stretch) include objects that cannot change purely for reasons of API (i.e. method) semantics.
I would avoid using the term "immutable objects" without clarifying which of those two definitions apply.
To illustrate the ambiguity, an instance created using new C1() could be viewed as an immutable object (since it cannot be changed) or not (since it is not an instance of an immutable class).
"The concept of immutability in java says's that class needs to be marked final to make it immutable. My question is if we do not mark a class as final then it can be subclassed but still it will be the object of sub-class that will be mutable and not the base class. The state/properties of base class will remain immutable as we will mark properties of the base class as private". Can some one please answer this?
1) Technically, it is a statement, not a question. 2) I did address it. What didn't you understand about my answer?
A_n in L^2(R) convergence study
I want to study in $\mathcal{H}=L^2(\mathbb{R})$ the convergence of
$A_nf(x)=\int_\mathbb{R}e^{-n(x-y)^2}f(y)dy$
My attempt:
It's easy to see that $A_n\to A=0$ pointwise; now I want to show strong convergence with
$||(A_n-A)f||^2\to 0$ using monotone convergence theorem
$||(A_n-A)f||^2=||A_nf||^2=\int_\mathbb{R}|e^{-n(x-y)^2}f(y)|^2dy$
by the Cauchy–Schwarz inequality $\le\int_\mathbb{R}|e^{-n(x-y)^2}|^2dy\int_\mathbb{R}|f(y)|^2dy$
now $f(x)\in L^2(\mathbb{R}) \Rightarrow f(x)^2\in L^1(\mathbb{R})$ so that I have
$\le C\int_\mathbb{R}|e^{-n(x-y)^2}|^2dy$....
How can I conclude this? And what about the monotone convergence theorem?
$A_nf=f*K_n$, where $$K_n(t)=e^{-nt^2}.$$So Young's Inequality shows that $$||A_nf||_2\le||f||_2||K_n||_1=n^{-1/2}||f||_2||K_1||_1\to0.$$
Edit So now the OP asks about uniform convergence. That's even simpler; the Cauchy-Schwarz inequality shows that $$||A_nf||_\infty\le||f||_2||K_n||_2=n^{-1/4}||f||_2||K_1||_2\to0.$$
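For completeness, the Gaussian norms used above come from the standard integral $\int_\mathbb{R}e^{-at^2}\,dt=\sqrt{\pi/a}$:

$$\|K_n\|_1=\int_\mathbb{R}e^{-nt^2}\,dt=\sqrt{\frac{\pi}{n}}=n^{-1/2}\|K_1\|_1,\qquad \|K_n\|_2=\left(\int_\mathbb{R}e^{-2nt^2}\,dt\right)^{1/2}=\left(\frac{\pi}{2n}\right)^{1/4}=n^{-1/4}\|K_1\|_2.$$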
You are right! And what about uniform convergence?
Maybe I can say I can't have it because there is $n^{-1/2}$?
Two versions of g++ (4.4 and 4.7) exist in CentOS 6.5; how to use g++ 4.7?
I want to learn C++11, so I installed g++ 4.7. When I run g++ --version, it shows version 4.4. But when I run find / -name g++, it shows
/opt/centos/devtoolset-1.1/root/usr/bin/g++
/usr/bin/g++
The first is version 4.7 and the second is version 4.4. How do I set g++ 4.7 as the default? Thanks!
I have solved it. Use ln -s /opt/centos/devtoolset-1.1/root/usr/bin/* /usr/local/bin/ and hash -r.
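Instead of symlinking individual binaries, another option is to put the devtoolset directory first on PATH (or, if the Software Collections tooling is installed, to run `scl enable devtoolset-1.1 bash`). A sketch:

```shell
# Prepend the devtoolset bin directory so its g++ 4.7 shadows /usr/bin/g++ (4.4).
# Add this line to ~/.bashrc to make it the default for every new shell.
export PATH=/opt/centos/devtoolset-1.1/root/usr/bin:$PATH
```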
Neither of these compilers actually supports C++11. The first version of gcc in which support is not experimental is g++ 4.9.
Really? Thank you very much!
How to determine whether site is secure from common user's point of view?
What should be done (or at least declared) so that end users and visitors will start trusting your website?
Some thoughts:
secured login (via https)
privacy policy (hidden email and other private information)
strong password check
custom error page with feedback
recaptcha
I really think it depends a great deal on who your end-users are, and what kind of site it is. E.g. security.stackexchange.com would have a tougher time than cooking.stackexchange.com... and of course a bank even tougher (though bank users are usually easier...)
recaptcha is an excellent example - it's impressive for a common user (looks like serious security), but in reality there is very little security benefit to using it (aka security theater).
https/SSL everywhere. Securing just the login is useless, as it's trivial to steal session cookies from unsecured connections (see http://codebutler.github.com/firesheep/).
Never send them their password. In a properly designed system, that is impossible anyway (only storing hashed, salted passwords, not the actual password). Yet way too many sites still do that.
Better yet, don't store the password/user identity at all. Make someone else do it -- openID/LiveID etc.
Security is probably the last thing a common user thinks about when trusting a web site.
Think about going to huge brick and mortar companies' websites. When people go to macys.com or bloomingdales.com, the last thing they are thinking about is security. Those stores have such a strong brand, that people feel safe despite what the security situation might be.
Think about how many people are tricked by phishing attempts. If the common user actually cared, they shouldn't be falling for common phishing tricks.
Very slightly away from topic, but you'll see my point:
Banks have started using security services as a value add in marketing, for example 2 factor authentication and smart cards - these used to be just for high net worth clients and company accounts, but are being sold to individuals on the basis of providing extra security to them.
Also Trusteer Rapport is an example of a 3rd party application which provides extra security and authentication over connections between user and website. It is being heavily marketed on the public's fear of phishing.
Basic security controls done well just aren't newsworthy enough, so to get users to trust your site due to IT controls, you need something a bit extra.
Traditional Open Source Java Application
I've been using Java for my "home projects" for a while and now there's a need to make some of my projects available through the repositories like github or Google Code.
As long as I'm the only developer for the project and as long as I always work in Eclipse, there are absolutely no questions about building and running it. Everything just work fine. The question is basically, what do I do if someone asks me to share? :-)
The first thing to do is make the project buildable with tools like Maven. In that case, whether someone has Eclipse installed or prefers IDEA, they can just download my project and build it using Maven, or import it into Eclipse/IDEA and build.
So, to clarify the question, here are 2 points:
What should my directory structure look like? Folders like src, bin, anything else?
What is the way my application should run? For instance, my application requires log4j in order to run. Maven resolves this dependency when building the package, but once the package is built, you must manually provide the path to log4j (with java -cp ...). What's the way to eliminate this requirement if I'd like to just provide .sh and .cmd scripts for the user's convenience?
Seriously though, if anyone wants to contribute to an OSS or Public project, I don't think they should have any issues getting their environment up and running as long as you provide a decent README.
Those wanting to close for "not constructive" consider that this question asks "If I want to adapt an Eclipse project for Maven, how should I do it".
This seems reasonably constructive to me. Maybe could use some focus.
From my personal experience, Maven is the best available tool for building and managing dependencies, so my answers to your questions are all about Maven. With Maven you need to provide only a pom.xml file containing the definition of dependencies and the project build configuration. The project you share does not need to contain any JARs, as was common with Ant-based projects.
What should my directory structure look like? Folders like src, bin, anything else?
As in a standard maven project :) (http://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html)
src/main/java: application sources (Java classes)
src/test/java: application tests (a unit test for class com.company.Foo should also be in the com.company package; this introduces some order and sometimes makes testing easier)
src/main/resources: application resources, for example a message.properties file serving internationalization
src/test/resources: resources for tests, for example dumps of websites if you would like to do integration tests with an HTTP server returning some content.
There are also other directories, such as src/main/webapp - all of them are described in the maven docs.
What is the way my application should run? For instance, my application requires log4j in order to run. Maven resolves this dependency, when building the package. But when the package is built, you should manually provide the path to your log4j (with java -cp ...) - what's the way to eliminate this requirement if I'd like to just provide .sh and .cmd scripts for user's convenience?
Maven takes care of all of these things, following the configuration defined in your pom.xml file. If your program requires some extra libraries, Maven will download them for you and place them in the package containing your application. If, for example, some dependencies will be provided by your application server (the log4j JAR), you can also mark that dependency as provided; it will then be available for running the application and for unit tests, but it will not be placed in the final package.
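A minimal sketch of such a pom.xml (the group/artifact IDs are placeholders; the log4j coordinates are the real Maven Central ones, with the version chosen for illustration):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Placeholder coordinates for your own project -->
  <groupId>com.example</groupId>
  <artifactId>my-home-project</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <!-- Maven downloads this for you; no JAR needs to live in the repository. -->
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>
  </dependencies>
</project>
```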
An additional great feature provided by Maven is build profiles. You can, for example, use some settings in day-to-day development, but when building the package for production, other settings will be used.
If you and your peers just want to keep using Eclipse, then share the projects including .project and other hidden files, and avoid referencing external jar files and other resources. If you need log4j.jar, then put it in your project and add it to the build path.
This works very well, even for multiple programmers and is the fastest way to get up and running.
No doubt someone will come along eventually and add a Maven or Ant project for you. :)
@David Harkness: well, just was not exactly sure if someone considers it "flame" :-)
Note: if you are going to share your Eclipse project files, make sure all the paths to jars, folders, etc. are saved in them as relative paths. You can check by opening these files in any text editor.
Sharing code is never bad. If you want to maximize the chances that others will contribute to it, make it as easy as possible to get started. If you can provide a Maven project, you'll increase your chances. But don't hold out until you make it there.
What should my directory structure look like? Folders like src, bin, anything else?
If you are on maven, it is taken care of. (src/main/java, src/test/java et al.)
What is the way my application should run?
I think maven exec plugin should help you here. Take a look at http://mojo.codehaus.org/exec-maven-plugin/java-mojo.html and see whether it fits the bill.
Buddy if anyone wants to work on a public or oss project they are probably capable enough in Java to get the environment setup on their own. That being said, your options are:
a) Maven (super simple; I have a GitHub project set up using this, just download with Git and import using your IDE's Maven option). I've tried this option with every major IDE and never had any issues. This is especially useful if you have a lot of dependencies.
b) Check in your Eclipse project files. I know this sounds stupid, but most IDEs let you import Eclipse projects.
c) Write a README file! (obvious I know).
d) Write an Ant file to do all the heavy lifting (initialize everything) if there are some processes needed to set up the project.
I would not worry about project structure. Again, if anyone wants to work with you, they are probably smart enough to get things up and running without bothering you much. I would not worry about compiling and running the project either, but if you really want, you can setup Ant or Maven to do it.
As Maven aficionados like to say, Maven is opinionated software. By this they mean that Maven will do more things for you with less effort if you adhere to the conventions it imposes. Directory structure is probably the most important one. I don't advise you to adopt Maven without reading about it first, you can start here.
Once you have sorted out building your project with Maven and adopted the m2eclipse plugin to get Maven to work with Eclipse, you may want to check out the Maven assembly plugin, with which you will be able to produce a zip file that packs together your project with all its dependencies and possibly a launch script that takes care of setting your CLASSPATH before invoking your project's startup class.
implementing transaction scope
While implementing transaction scope in an MVC application, is it better to create a transaction manager when calling the WCF service, or is it better to have the TransactionScope within the business object?
I'd personally go for separation of concerns. A business object has nothing to do with transaction scope, so I'd vote for a transaction manager (if it's a big application) or combine it with your DAL (if it's not that modular/big).
If you have a WCF service, would you wrap the call in a transaction while the client proxy is calling the service?
It depends on your particular problem. If there's the need to lock for data access, atomic execution and data is critical - then yes. If there are only simple select queries then there's no need to.
When should a task be executed asynchronously vs on the main thread?
I understand that all changes to the views should be on the main thread, but at what level of complexity should I start to consider using dispatch_async?
Should it be used for:
- number crunching (like computing a complex probability)
- saving to some form of persistent storage
- updating a model object
- initializing bulky objects
I'm comfortable with iOS until it comes to threading. I always do asynchronous network requests, but aside from that I just try to use minimal resources on the main thread and that still makes for a slightly laggy app.
I searched multiple sources for this so if there is a similar question, point it out and forgive my ignorance.
Most things that are not UI-related, but might adversely affect the responsiveness of your app, are all candidates for tasks to be moved to the background queue. Don't worry about not being comfortable yet, but some of these resources might help:
Concurrency Programming Guide
WWDC 2012 - Building Concurrent User Interfaces on iOS
WWDC 2011 - Mastering Grand Central Dispatch
WWDC 2011 - Blocks and Grand Central Dispatch in Practice
Synchronization section of the Threading Programming Guide (and the relevant Eliminating Lock-Based Code from the Concurrent Programming Guide referenced above, which briefly describes how to achieve synchronization without locks)
Thanks for all the helpful links. Let me just pose this question to get it out of my head- would it be anything other than insensitive design to make every non-UI task asynchronous?
@user I wouldn't be inclined to make every non-UI task asynchronous. I would counsel a more pragmatic approach, that anything that affects app responsiveness be considered a candidate for asynchronous design. Proper asynchronous design is not always trivial (e.g. the need to synchronize objects/variables being accessed from multiple threads), so I'd suggest you go through that effort when required rather than on principle.
A very simple answer: The device should be responsive, i.e. react to user interactions as far as possible immediately. Thus any operation that takes more than a fraction of a second should be executed as a separate thread, whenever possible.
Of course multithreading can be tricky, but if the threads are well separated, it is rather straightforward. The most important thing to keep in mind is that most classes of UIKit, and many other classes like arrays, are not thread-safe, and accessing them from multiple threads must be synchronized, e.g. by using a @synchronized block. So, read the docs, e.g. here, carefully.
Profile your app. Seriously, nothing else will tell you where exactly your app becomes laggy. I've seen unexpected bottlenecks like UILabel rendering and string processing in my own code. Without Instruments I would never even consider optimizing them.
How can I better clean headphones that sit inside the ear?
I have a pair of headphones that I use regularly, including while I'm working out. They're the kind of headphones that sit inside the ear (earbuds), so inevitably they'll get kind of gross after a couple uses. They have the removable rubber tips on them, but I don't think that helps keep everything out of the speaker part.
It's easy to wipe the occasional earwax out of the removable rubber tips, but I'm having trouble making sure the inside parts, where the speaker is, are clean. Normally I'm able to clean any earwax that makes it through the rubber tip using a toothpick or a folded piece of paper, but I would actually like them to get sanitized when cleaned. Since I use them while working out, they could harbor some bacteria, and I would rather not get swimmer's ear from them.
Is there a better way I can be cleaning these kinds of headphones?
Try a q-tip (or the like) with some ammonia. Ammonia dissolves wax so it should work pretty well for cleaning the ear-wax from your earbuds.
+1. Ammonia is also what you should be using to clean your ears from wax, since Q-tips can puncture the eardrums [follow that up with a shower, to take all the gunk out after]. ( from the local ENT )
Do not put that in your ears though
You can use isopropyl alcohol (known as rubbing alcohol) or the sprays for cleaning LCD displays.
Put very small amount of this liquid on a cotton pad (usually used for medical or cosmetic purposes).
Lay the cotton pad on flat surface and rub the opening of the headphones against it.
I think you should hold the headphones over the cotton piece, because if they are under it, some alcohol may leak when pressing the cotton and can go inside the earphones.
Instead of cotton pad you can use a towel the same way.
Since you have mentioned you have in-ear earphones, the way I'd do it is by removing the rubber buds first.
That gives you some space to work with and you can see where the earwax has accumulated.
Now you can use earbuds/cotton/toothpick, to clean the muck off it.
Put the buds back on after cleaning, and done!!
Why would they show the Hunger Games live?
It appears from the movie (never read the book) that the Hunger Games are shown "live". That there is no delay in the broadcast. It appears to be shown live as the audience is shown watching scenes that the Gamemakers and the Capitol would never want them to see. For example, after Rue is killed, Katniss throws up the sign for "respect" which causes the uprising of district 11. Why show that scene at all in a taped or delayed broadcast?
The riots did not take place in the book at that time. The producers did not want to make the other movies until after they were sure it was going to make money (there have been many YA S/F book to movie flops). As a result, they did not know if there would be another chance to portray the gravity of those actions to the audience.
We have live news, live sports, live Super Bowl half-time, etc. It would be a lot easier on everybody (videographers, commercials, tech crews, players, entertainers, announcers) if live TV didn't exist. But people like it. It gets them involved, which is the point. And so we have it. The capital could pretend it were live, but that secret would have to be known by a non-trivial number of people, and would be be hard to keep.
@PaulDraper Actually, post-Janet Jackson, we don't actually have truly live TV in many cases; they often show it on a short (5-10 second) delay to allow the producers or their censors to bleep profanities or stop the feed in the case of something they don't want to show. That certainly seems reasonable from a realism perspective here.
@Joe, that is true. Nowaday, 5-10 seconds is common. If you're on top of things, it's enough to fix it if something really bad happens, albeit in a very abrupt and conspicuous way. It's a good idea. On the other hand, the Hunger Games could have been "pre-Janet" -- they hadn't had a wardrobe malfunction (yet).
In the book at least, it appears the Capitol can and does cut out scenes they don't want shown:
The Hunger Games (Book 1), Part II "The Games", Ch. 18:
Slowly, one stem at a time, I decorate her body in the flowers.
Covering the ugly wound. Wreathing her face. Weaving her hair with
bright colors.
They’ll have to show it. Or, even if they choose to turn the cameras
elsewhere at this moment, they’ll have to bring them back when they
collect the bodies and everyone will see her then and know I did it. I
step back and take a last look at Rue. She could really be asleep in
that meadow after all.
“Bye, Rue,” I whisper. I press the three middle fingers of my left
hand against my lips and hold them out in her direction. Then I walk
away without looking back.
The Hunger Games (Book 1), Part III "The Victor", Ch. 27:
They play her death in full, the spearing, my failed rescue attempt,
my arrow through the boy from District 1’s throat, Rue drawing her
last breath in my arms. And the song. I get to sing every note of the
song. Something inside me shuts down and I’m too numb to feel
anything. It’s like watching complete strangers in another Hunger
Games. But I do notice they omit the part where I covered her in
flowers.
Right. Because even that smacks of rebellion.
The Capitol didn't show the act of Katniss covering Rue's body with flowers or her salute. But the floral burial must have been shown when the bodies were picked up. It's not covered in the book, but while the actions of the tributes are monitored and edited for possible seditious acts, the recovery of bodies might be watched less carefully.
But the floral burial must have been shown when the bodies were picked up - I wonder why Katniss thinks that? It seems like it would be easy enough to remove the flowers, then start the cameras and pick up the body. I don't recall much about the actual body pickups really getting covered much in the books.
@Zoredache, the bodies are picked up remotely by tractor beam, so there wouldn't be anybody available to remove the flowers.
Good answer. The main issue with filtering out shots or events is that they leave a hole, and that gets people wondering what was cut out.
Per the Hunger Games wikia, gambling is one of the primary reasons for the popularity of the show. This includes "in-play betting" such as on the outcome of individual encounters.
Introducing a broadcast delay would allow unfair opportunities for those with insider knowledge.
The Hunger Games is a major source of gambling and produce intense
betting, both in the Capitol and the districts. Katniss mentions
people from District 12 betting on the two tributes reaped, and that
"odds are given on their ages, whether they're Seam or merchant, and
if they will break down and weep." In the Capitol, betting takes
place throughout the Games, starting before training and increasing in
intensity until a victor is determined.
Even most "live" TV or Radio is aired with a delay of just a few seconds so that censors can block or "dump" a portion before it hits the air. A broadcast delay of 5 seconds wouldn't be enough to tip gambling odds any significant amount.
@phantom42 - I don't disagree with that. Note that the Super Bowl is broadcast instantly (to prevent the "delayed-wire" effect on online gambling) whereas the half-time show now has a 30-second delay or longer.
I can see why there wouldn't be a delay for viewers from the Capitol, but why not delay and edit for the Districts? The Districts (especially the poorer ones) would have no way of knowing, and considering the Capitol has forced them to provide tributes I don't see why the Capitol would care if gambling was unfair for the Districts.
Even "instant" or "live" broadcasts still have a few-second delay due to time traveled over the wire.
We also know that even live broadcasts with the 5-second or 30-second delay still miss cutting portions or editing language. While you don't see it, I'd put money on the Hunger Games having multiple channels, so you can choose whom you watch, or watch the main channel that shows the highlights/major events.
The delay is long enough to censor, but it may be of note that it is not an exact science. A recent example is the Monty Python Live show from a few weeks back: it had jumping camera angles and very abrupt beeps, and even let "fuck" through. That would make it quite dangerous if the Capitol were to try to censor certain things while showing them live. Just a note here if it helps anyone :)
The "Hunger Games" are shown live because, as @Richard said, there is gambling involved (not going to go further down this road).
But there are also the sponsors. If the Games weren't shown live, then how would the sponsors have any impact on them?
For example, in the first Hunger Games Katniss is sent a cream to help heal her fire wounds, and in the second she is sent a sort of tap that enables her to get water. If the Games were not shown live, then these gifts, which from what the books say cost a lot of money, wouldn't come in time.
I agree with the sentiment, but surely a thirty-second delay wouldn't make that much of a difference, would it?
I would say no, it wouldn't make a strong impact. However, the books also say that every gift gets more expensive as time passes. Now, I'm sure the prices are not incremented by the second (I don't remember reading about this). But the whole gifting in the movie (in the books the action is all focused on Katniss) is taken care of by Haymitch, so I can argue that 30 seconds can make a difference to him, but it's just an opinion.
My main point is, they are shown live due to the gambling aspect, as you referred to, and the possibility for outsiders to intervene in the outcome.
From book 1:
Taking the kids from our districts forcing them to kill one another while we watch- this is the Capitol's way of reminding us how totally we are at their mercy. How little chance we would stand of surviving another rebellion.
Whatever words they use, the real message is clear. "Look how we take your children and sacrifice them and there's nothing you can do. If you lift a finger, we will destroy every last one of you. Just as we did in District Thirteen."
Hubris.
Please consider deleting two of your (three, so far) answers and collating all the 'answer' material you feel is relevant into the single remaining answer. (To edit an existing answer, click the '[edit]' link beneath the text; to delete answers simply click on 'delete'.) To respond to comments, you can leave a comment (on your own questions/answers) and then the use of the @username syntax will notify the appropriate user (if that user has commented on, or edited, your question/answer).
The reason the poor districts were rioting was because
they were oppressed and hungry. Cutting scenes from a show will
not make them less oppressed or less hungry.
Censors assume they can control the thoughts of the audience.
There is no sure way to control the way your audience thinks.
For me, all of Katniss' choices are a slap in the face of the capitol and its policies
choosing to ally herself with a weak contestant (Rue)
choosing to help the weak
choosing to protect and take care of the young
choosing to share her food even though she was also hungry
The only probable way to censor her is to not show her at all,
and even if they don't show her I doubt it will make the
poor districts less hungry or less oppressed.
Welcome to SFF.SE. You make good points but this doesn't really answer the question, which is why the Hunger Games were shown live rather than edited and shown to the Districts with a delay (or not at all).
Engineering notation with Haskell
Is there an existing Haskell function which provides engineering notation formatting (as a String)?
If not, I read that printf can be extended by adding an instance of PrintfArg. Do you believe this is a good solution?
By engineering notation, I mean an exponent notation whose exponent is a multiple of 3.
After some research, I managed to get what I want.
The function to get the engineering format works in a few steps:
1. Dissociate the exponent from the mantissa
It's necessary to get the exponent apart from the mantissa.
The function decodeFloat (provided by base) decodes a floating point number and returns both the mantissa and the exponent in powers of 2 (mant2 * 2 ^ ex2).
2. Get the mantissa and exponent expressed in the correct base
A conversion to powers of 10 is required. This is the role of this function.
decompose :: Double -> (Double,Int)
decompose val = if mant2 > 0
then (mant10,ex10)
else (-mant10,ex10)
where
(mant2,ex2) = decodeFloat val
res = logBase 10 (fromIntegral (abs mant2)::Double) + logBase 10 (2 ** (fromIntegral ex2::Double))
ex10 = floor res
mant10 = 10**(res - (fromIntegral ex10::Double))
3. Set the exponent at a multiple of 3
The function ingen tests the result of the integer division of the exponent and performs the adjustment on the mantissa and exponent.
ingen :: Double -> (Double,Int)
ingen val
| mod ex 3 == 0 = (mant,ex)
| mod ex 3 == 1 = (mant*10,ex-1)
| mod ex 3 == 2 = (mant*100,ex-2)
where
(mant,ex) = decompose val
Here are some conversions :
Prelude> ingen 10e7
(99.99999999999979,6)
Prelude> ingen 10e-41
(100.0,-42)
Prelude> ingen (-72364e81)
(-72.36399999999853,84)
I performed some tests with QuickCheck on a wide range and a large number of values. The conversion seems to work well despite a very small difference in value (rounding during the calculations due to the precision?).
However, further verification should be made.
If you find a mistake or an improvement in these functions, please share.
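For readers more comfortable outside Haskell, the same decompose-and-adjust idea can be sketched in a few lines of Python. This is a cross-language illustration only; the function name and structure are mine, not part of the answer above.

```python
import math

def to_engineering(val):
    """Return (mantissa, exponent) with the exponent a multiple of 3."""
    if val == 0:
        return (0.0, 0)
    sign = -1.0 if val < 0 else 1.0
    ex10 = math.floor(math.log10(abs(val)))  # power-of-10 exponent
    shift = ex10 % 3                         # 0, 1 or 2 (non-negative in Python)
    ex_eng = ex10 - shift                    # nearest lower multiple of 3
    mant = abs(val) / 10.0 ** ex_eng         # mantissa in [1, 1000)
    return (sign * mant, ex_eng)
```

For example, to_engineering(10e7) gives (100.0, 6) up to rounding, matching the output of ingen 10e7 above; the same floating-point imprecision the answer mentions applies here too.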
You could add one additional step and give it the desired precision (number of digits after the decimal point) and then round accordingly. At least in my case, you only need engineering mode for display purposes...
In Data.Text.Format there's an expt function that will help format numbers in that way, though it's in a hugely obfuscated library, I'm afraid, and you'll have to convert from Text to String.
It seems that this is the only one available, but you could always make one yourself.
This does not seem to conform to the specification, "whose exponent is a multiple of 3". Specifically, expt 0 10 gives "1e2", and 2 is not a multiple of 3.
@DanielWagner that's odd; in the library, it says it's in engineering notation. That might be a bug.
I don't know of a standard function. Adding something to printf would be one way to go, but it would be a bit annoying to use (since you would need to add a new type for engineering-notation-formatted numbers, and convert numbers to this type before handing them off). Simply writing a standalone function with a type like
showEngineer :: Double -> String
may be a simpler and more readable solution in the long-term.
Error creating bean with name 'webMvcRequestHandlerProvider'
I'm new to Spring "things".
I use this version:
spring-boot-starter-parent
1.5.3-RELEASE
And I have these dependencies in the pom.xml:
<dependency>
  <groupId>io.springfox</groupId>
  <artifactId>springfox-swagger2</artifactId>
  <version>2.8.0</version>
</dependency>
<dependency>
  <groupId>io.springfox</groupId>
  <artifactId>springfox-swagger-ui</artifactId>
  <version>2.8.0</version>
</dependency>
And I think this generated the error below, but I don't know why.
Can you help me?
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'webMvcRequestHandlerProvider' defined in URL [jar:file:/C:/Users/tjoel/.m2/repository/io/springfox/springfox-spring-web/2.8.0/springfox-spring-web-2.8.0.jar!/springfox/documentation/spring/web/plugins/WebMvcRequestHandlerProvider.class]: Unsatisfied dependency expressed through constructor parameter 1; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'requestMappingHandlerMapping' defined in class path resource [org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration$EnableWebMvcConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping]: Factory method 'requestMappingHandlerMapping' threw exception; nested exception is java.lang.AbstractMethodError
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:749) ~[spring-beans-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:189) ~[spring-beans-4.3.14.RELEASE.jar:4.3.14.RELEASE]
Please add more detail of your code so that others can analyze it.
Hi, thanks for your help, but I resolved the problem. In the properties file, I changed the name of the application to the main package.
You need to add the spring-context dependency. It solved my problem.
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>4.3.14.RELEASE</version>
</dependency>
According to this issue:
Its failing because you have multiple spring contexts going. You may have to experiment with removing/adding @Configuration and @EnableWebMvc configurations
Examples of usage of JSF
Are there any good websites (characterized by high usage) that use JSF for their back-end? I have just started working with the basics of the framework. If I see some websites using JSF, maybe I will be able to better appreciate the use of the technology.
Also, could you mention the benefits of using JSF validation vis-à-vis browser-side validation of content using JavaScript?
I believe the first part of your question is a duplicate of http://stackoverflow.com/questions/5514/public-popular-websites-using-javaserver-faces
See here for a list of JSF sites.
In addition see the references for two famous JSF component frameworks:
Richfaces
Icefaces
As for the validation - it better be on both sides - on the client side (javascript) for better usability, and on the server side for better security.
I see very few common names there... It seems some pages of apple.com use JSF.
what do you mean by "common names" ?
I am sorry. I meant famous names (like SO, which I understand is built on ASP.NET MVC).
There are pages by apple.com and other websites serving small communities. Some famous websites are there, but when I visit those sites, the URL does not contain "faces". I believe that typically all facelets reside in the faces directory. Why does the URL not contain the word faces?
this is configurable. You can have pretty urls with JSF (with prettyfaces), so the url tells you nothing about the technology.
eBay, Volvo, BMW, Costco, TNT, Lufthansa and thousands more websites use JSF for complete websites or parts of their websites in production.
Check your answer at http://www.primefaces.org/whouses. (Who uses primefaces.) Primefaces is JQuery based UI component library for JSF and is one of the most popular UI Libraries in JSF.
Whoever runs PrimeFaces, runs on JSF.
You can also check presentation on ebay supplier portal about how ebay uses JSF to achieve scalability and performance. Search on youtube for "eBay, Connecting Buyers and Sellers Globally via JavaServer Faces" (Oct 2014)
The presentation in PDF format is here: https://oracleus.activeevents.com/2014/connect/fileDownload/session/DB08F809615ABF16F149FEC02B892C10/CON2892_Paulsen-J1eBaySelling.pdf
On the validation questions:
Server side Advantages:
Most common validation rules can be declaratively specified i.e. validation rules are specified in tag attributes. Since there is very little code written, this is highly maintainable
For the rest of the validation rules, one can write custom Validator implementations. These implementations (unlike custom components) are straightforward. Although they are more work than declarative validation, they are still more maintainable than the JavaScript approach.
Server side Dis-advantages:
Usability is the biggest issue here. Any validation failures can be reported only when the complete HTML form is submitted (not when the value is keyed in). In JSF 2.0 this downside can be overcome by making ajax calls to your validation logic and reporting failures as values are keyed in
JavaScript Advantages
Usability - as detailed above - can report failures as values are keyed in
JavaScript Disadvantages
Even with JS libraries like jQuery, it can be pretty difficult to implement and maintain js code that supports all browsers. Adding support to a new browser can be very expensive.
All data required to complete validation must be pre-loaded when the response is rendered. Whereas in the server side approach the validation code can lookup any data it needs.
The 'server side Dis-advantages' (well, the one...) is not valid anymore with ajax and client-side validation enhancements in e.g. PrimeFaces. The 'javascript disadvantages' as well with modern ui frameworks on top of jquery or not
pass parameters containing white space on command line
I would like to pass a parameter containing white space. For example:
C:\temp>exiftool.exe -"Region Person Display Name" temp.jpg
Can someone please give me direction here? The obvious does not appear to work.
Your parameter with spaces is getting passed fine for most applications. I suspect it is simply incorrect usage of EXIFTOOL.EXE.
Found out what was going on via trial and error. This is not contained in the exif documentation: when passing a tag to exiftool.exe, you need to drop the white space from the tag name. For example, if the tag name is "Region Person Display Name", the code will read:
C:\temp>exiftool.exe -"RegionPersonDisplayName" temp.jpg
From what I can tell, this 'feature' is not documented.
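The same rule matters when driving exiftool from a script. A small Python sketch for illustration (the tag and file names come from the question; the command is only constructed here, not executed, so nothing below assumes exiftool is installed):

```python
import subprocess

# exiftool tag names drop the spaces shown in human-readable output:
# "Region Person Display Name" -> RegionPersonDisplayName
cmd = ["exiftool", "-RegionPersonDisplayName", "temp.jpg"]

# Passing arguments as a list (rather than one shell string) also sidesteps
# shell quoting problems for any argument that genuinely contains whitespace.
# result = subprocess.run(cmd, capture_output=True, text=True)
```

The list form hands each argument to the program verbatim, so no quoting gymnastics are needed even when a value does contain spaces.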
According to the man page, on Linux you just remove the space:
$ exiftool -LightSource opencamera2.jpg
Light Source : Daylight
loopback cloudant connector connection pooling
I am using loopback's Cloudant connector through my Node.js code. https://docs.strongloop.com/display/public/LB/Cloudant+connector
Do I need to take care of connection pooling (programmatically or through configuration)? Or is it taken care of by default?
Can anybody please direct me to specific documentation which talks about this, so that I can take an informed design decision?
It looks like the loopback connector (https://github.com/strongloop/loopback-connector-cloudant) uses the cloudant library (https://github.com/cloudant/nodejs-cloudant) which uses the nano library (https://github.com/dscape/nano). The nano library talks about pooling as follows:
pool size and open sockets
a very important configuration parameter if you have a high traffic website and are using nano is setting up the pool.size. by default, the node.js http global agent (client) has a certain size of active connections that can run simultaneously, while others are kept in a queue. pooling can be disabled by setting the agent property in requestDefaults to false, or adjust the global pool size using:
http.globalAgent.maxSockets = 20;
you can also increase the size in your calling context using requestDefaults if this is problematic. refer to the request documentation and examples for further clarification.
...
The nodejs documentation says the default is infinity:
From https://nodejs.org/api/http.html#http_agent_maxsockets:
agent.maxSockets
By default set to Infinity. Determines how many concurrent sockets the agent can have open per origin. Origin is either a 'host:port' or 'host:port:localAddress' combination.
If I am following all this correctly, it looks like by default connection pooling should be taken care of. In extremely high load scenarios you may need to turn it down.
PageSpeed and CDN images
I currently have a site that services images located on AWS S3 via AWS Cloudfront.
I'm now looking to install PageSpeed, and I want to take advantage of the image optimization and lazy loading (out of page view) that PageSpeed offers.
My question is:
Do I need to move the images from S3 onto the server with PageSpeed to take advantage of the image optimization and lazy loading? E.g., do the images need to be local to where PageSpeed is installed, or can they be external on S3 in this case?
I can see how to direct PageSpeed to load files (images) from the file system as below.
pagespeed LoadFromFile http://static.example.com/ /var/www/static/;
Would the answer to this be that it would be faster to have the images local and use "LoadFromFile", but it's possible to use a remote repository?
Thank you
Adam
EDIT:
I can now see the following:
pagespeed LoadFromFileMatch "^https?://example.com/~([^/]*)/static/"
"/var/www/static/\\1";
It appears this might allow PageSpeed to check for local resources and then fetch from a remote HTTP location if required.
mod_pagespeed can optimize images from anywhere. By default, it optimizes images only from the same domain as the HTML, you can authorize mod_pagespeed to optimize images from any domain with:
pagespeed Domain www.example.com;
Note: This will just tell mod_pagespeed to rewrite the URLs for resources on that domain. If example.com does not have mod_pagespeed installed on it as well, this will fail! If that is the case, you can use:
pagespeed MapRewriteDomain modpagespeed.domain.com other.domain.com;
This will tell mod_pagespeed to change the domain the rewritten resources are served from so that you can actually serve the rewritten versions.
For more information see https://developers.google.com/speed/pagespeed/module/domains
How to completely uninstall and remove Ubuntu from PC
I tried to install Ubuntu using the download and instructions from:
https://ubuntu.com/download/desktop#how-to-install
I downloaded ubuntu-24.04-desktop-amd64.iso and then it created the folder:
Drive (F:) Ubuntu 24.04 LTS
I downloaded balenaEtcher to create a bootable flash drive as stated in the instructions but it did not work.
Now I want to delete completely the ISO image file/folder and uninstall and perhaps start again. I cannot seem to just delete the folder or find a way to uninstall or remove it.
The attached image shows the folder structure on my PC.
Additionally, I want to completely remove any Linux/Ubuntu traces from my PC. I wonder whether my more recent difficulties installing Ubuntu are related to files from older versions.
Thanks
It looks like you have double-clicked on the downloaded ISO file and mounted it twice. Right-click on the F: Drive and click Eject. Repeat for the H: Drive. Then delete the file from the Downloads folder.
@karel are you sure OP has actually installed Ubuntu? They may have inadvertently mounted the ISO file twice.
... the Linux "penguin" in the sidebar is likely an (unrelated) WSL installation. See for example Why do I have a "Linux" part in Windows 10 File Explorer?
This question is similar to: How to remove Ubuntu from dual boot system with Windows?. If you believe it’s different, please edit the question, make it clear how it’s different and/or how the answers on that question are not helpful for your problem.
@user68186, thanks i have removed it now.
@steeldriver, thank you. I had tried to completely uninstall wsl previously but this setting must have lingered on.
@DavidDE, thank you. I searched through previous questions to see if there was anything similar before posting, but could not quite find what I was looking for. The post you refer to looks like it is related, but I think the person has a different problem, and the main answers do not relate to the situation I had.
Right click on the "DVD Drive"s, and click on Eject/Unmount.
Then delete the ISO.
I took the screenshot for my answer, but you can have it.
Thank you for the edit. I did not have a windows machine around to take the screenshot.
Thank you. I had tried to look at "unmounting" the folder but could not find a way. I had also tried to "eject" one of the folders, but it did not quite work maybe because the actual dvd-drive does not work properly (my computer is quite old). I could not work out what was going wrong.
It has worked now and I am going to try again to install.
Dep confuses the projects in vendor to that in GOPATH
I have projectA which imports packages from projectB.
Both are existing in the GOPATH.
I use dep for dependency management, and projectB is added as a dependency in the Gopkg.toml of projectA.
If I clear this projectB from the vendor directory of projectA, or if I add it explicitly with ignored = ["projectB"], it compiles fine.
Otherwise I get errors like the following:
"gitlab.internal.com/client/vendor/gitlab.internal.com/runtime/protocol/client".Connector does not implement "gitlab.internal.com/runtime/protocol/client".Connector (wrong type for ApplicationContext method)
have ApplicationContext() *"gitlab.internal.com/client/vendor/gitlab.internal.com/runtime/core".ApplicationContext
want ApplicationContext() *"gitlab.internal.com/runtime/core".ApplicationContext
The only difference from the 'have' and 'want' packages above is the path from where it is coming from. (One from GOPATH, the other from vendor/ of the projectA, which has this compilation issue)
I have the following questions:
Why is GOPATH being searched at all, given the dependency is available in vendor/? (Why does it state it 'wants' the dependency only from this place!?)
Is there a way to get dep to pick the dependency explicitly from vendor/?
Deleting projectB from the GOPATH doesn't solve the problem either.
What is the issue here?
You should never have a package with a vendor/ directory vendored in your project (or more loosely, you can't depend on a package with a vendor/ directory -- i.e. "libraries" shouldn't vendor their dependencies). Whatever tools you are using should have a way to flatten the vendored dependencies into your top-level project. If you have the ability to change over now, go modules makes this much easier to do.
Control Order of SQL Scripts in InstallShield
How do you control the order in which SQL scripts run in InstallShield 2008? I moved the one I want to run first to the top of the script list, but it seems to not be running.
When using an InstallScript installer, they are processed in the order their respective components are processed. To override this, you must right-click on SQL Scripts and turn on batch mode.
Wordpress: "Warning: Illegal string offset" in multiple places
Hopefully, you guys can help me here. What's going on with these issues in my WordPress install? See the errors and referenced code block below.
Errors:
Warning: Illegal string offset 'domain' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 583
Warning: Illegal string offset 'context' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 587
Warning: Illegal string offset 'singular' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 588
Warning: Illegal string offset 'plural' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 588
Warning: Illegal string offset 'context' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 588
Warning: Illegal string offset 'domain' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 583
Warning: Illegal string offset 'context' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 587
Warning: Illegal string offset 'singular' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 588
Warning: Illegal string offset 'plural' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 588
Warning: Illegal string offset 'context' in /home/customer/www/xxxxxxxx.xxx/public_html/wp-includes/l10n.php on line 588
Code:
function translate_nooped_plural( $nooped_plural, $count, $domain = 'default' ) {
if ( $nooped_plural['domain'] ) {
$domain = $nooped_plural['domain'];
}
if ( $nooped_plural['context'] ) {
return _nx( $nooped_plural['singular'], $nooped_plural['plural'], $count, $nooped_plural['context'], $domain );
} else {
return _n( $nooped_plural['singular'], $nooped_plural['plural'], $count, $domain );
}
}
Change if ( $nooped_plural['context'] ) to if ( isset($nooped_plural['context']) )
Unfortunately, it is still producing the following errors:
Warning: Illegal string offset 'singular' in /home/customer/www/xxx.xxx/public_html/wp-includes/l10n.php on line 590
Warning: Illegal string offset 'plural' in /home/customer/www/xxx.xxx/public_html/wp-includes/l10n.php on line 590
Warning: Illegal string offset 'something' message means that you're trying to access the value of $someVar['something'] but it doesn't exist.
To avoid it, you should always use isset($someVar['something']) to check if the index exists (or not) in the given variable.
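For comparison, the same guard-before-access pattern in Python terms. This is a hedged sketch; the dictionary contents are invented for illustration and the variable names are mine.

```python
# Mirror of the PHP fix: check for the key instead of assuming it exists.
nooped = {"singular": "%s item", "plural": "%s items"}

# Direct indexing (nooped["context"]) would raise KeyError when the key
# is missing; .get() returns a default instead, much like guarding the
# PHP access with isset() before reading the offset.
context = nooped.get("context")           # None when the key is absent
domain = nooped.get("domain", "default")  # or supply an explicit fallback
```

Using .get() (or isset() in PHP) keeps the lookup from ever touching a key that was never set.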
Thanks for that, I went and changed all outstanding variables within that section without the 'isset' keyword to include that.
NodeJS exec can't find MachineGuid from registry using powershell cmdlet
I am trying to obtain the MachineGuid through NodeJS using child_process.exec with cmdlet Get-ItemPropertyValue
const { execSync } = require('child_process')
const command = "Get-ItemPropertyValue -Path 'HKLM:\\Software\\Microsoft\\Cryptography' -Name MachineGuid"
const options = { shell: 'powershell.exe' }
const id = execSync(command, options).toString()
This displays an error that the property MachineGuid is not part of the key.
The command works with different keys, and using Get-ItemProperty will return an empty string for Cryptography, but gives correct results for something like COM3.
Also, when running through a regular powershell and not inside a node shell, it will actually return the MachineGuid as expected.
Using Test-Path -Path 'HKLM:\\Software\\Microsoft\\Cryptography' in node returns True\r\n, so apparently the key can be found but not the values.
Alternatives considered:
disable/enable group policy for editing registry (no difference)
running as administrator (no difference)
using reg.exe, but this might be disabled by the administrator via group policy (which is why I try to use this approach in the first place)
node-machine-id package, but that also uses reg.exe under the hood
regedit package, but this won't display any values for cryptography as well
I'm pretty lost here and could not find anything useful through googling, so I'm wondering if someone has an idea why it doesn't work and how to work around it.
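One possibility worth ruling out, though it is an assumption and not confirmed anywhere in this thread: on 64-bit Windows, a 32-bit process that reads HKLM\Software is silently redirected to the WOW6432Node view, where MachineGuid does not exist. To illustrate what reading the 64-bit view explicitly looks like, here is a Python sketch using the standard-library winreg module (Windows-only; the function name is mine):

```python
def read_machine_guid():
    """Read MachineGuid from the 64-bit registry view (Windows only)."""
    import winreg  # stdlib on Windows; local import so the sketch parses elsewhere

    with winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Cryptography",
        0,
        winreg.KEY_READ | winreg.KEY_WOW64_64KEY,  # force the 64-bit view
    ) as key:
        value, _type = winreg.QueryValueEx(key, "MachineGuid")
        return value
```

If a 64-bit shell succeeds where the spawned one fails, checking the bitness of the node and powershell processes involved would be a reasonable next step.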
You can try this npm package written by me - @camol/winreg-rs:
const { JsRegistry } = require("@camol/winreg-rs");
const registry = new JsRegistry('HKLM');
const value = registry.getKeyValue("SOFTWARE\\Microsoft\\Cryptography", "MachineGuid")
console.log({value})
Llama_index issue behind HTTP request
I'm having an issue using Llama_Index to use an index previously generated for custom content ChatGPT queries.
I generated the index with the following code:
from llama_index import SimpleDirectoryReader, GPTListIndex, readers, GPTSimpleVectorIndex, LLMPredictor, PromptHelper, ServiceContext
from langchain import OpenAI
import sys
import os
def construct_index():
    # set maximum input size
    max_input_size = 40960
    # set number of output tokens
    num_outputs = 20000
    # set maximum chunk overlap
    max_chunk_overlap = 20
    # set chunk size limit
    chunk_size_limit = 600
    # define prompt helper
    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)
    # define LLM
    llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.5, model_name="text-davinci-003", max_tokens=num_outputs))
    documents = SimpleDirectoryReader("./data").load_data()
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)
    index.save_to_disk('./index.json')
    return index
To query something, the code I use is the following:
def ask_ai(query):
    index = GPTSimpleVectorIndex.load_from_disk('./index.json')
    response = index.query(query)
    return response.response
This is popular code that in fact works when I run it in a virtual environment (py ./index.py), adding a line to call the construct_index or ask_ai methods by default.
However, when I try to put it behind an HTTP request, like in an AWS Lambda or Flask API, the ask_ai fails in the load_from_disk method for both approaches with the same error:
ERROR:app:Exception on /api/query [POST]
Traceback (most recent call last):
File "C:\pmi\python-flask-restapi\flaskapi\lib\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "C:\pmi\python-flask-restapi\flaskapi\lib\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\pmi\python-flask-restapi\flaskapi\lib\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "C:\pmi\python-flask-restapi\flaskapi\lib\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "C:\pmi\python-flask-restapi\app.py", line 17, in qyery_chatgpt
return appService.ask_ai(query)
File "C:\pmi\python-flask-restapi\app_service.py", line 10, in ask_ai
index = GPTSimpleVectorIndex.load_from_disk('index.json')
File "C:\pmi\python-flask-restapi\flaskapi\lib\site-packages\llama_index\indices\base.py", line 352, in load_from_disk
return cls.load_from_string(file_contents, **kwargs)
File "C:\pmi\python-flask-restapi\flaskapi\lib\site-packages\llama_index\indices\base.py", line 328, in load_from_string
return cls.load_from_dict(result_dict, **kwargs)
File "C:\pmi\python-flask-restapi\flaskapi\lib\site-packages\llama_index\indices\vector_store\base.py", line 260, in load_from_dict
return super().load_from_dict(result_dict, **config_dict, **kwargs)
File "C:\pmi\python-flask-restapi\flaskapi\lib\site-packages\llama_index\indices\base.py", line 304, in load_from_dict
docstore = DocumentStore.load_from_dict(result_dict[DOCSTORE_KEY])
File "C:\pmi\python-flask-restapi\flaskapi\lib\site-packages\llama_index\docstore.py", line 72, in load_from_dict
for doc_id, doc_dict in docs_dict["docs"].items():
KeyError: 'docs'
For the implementation of Flask I took it from here:
https://github.com/bbachi/python-flask-restapi
I just kept a single method in the AppService (the ask_ai) and invoke it from a single route in the controller. The Dockerfile for the AWS container-based Lambda is:
FROM public.ecr.aws/lambda/python:3.10
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
COPY myfunction.py ./
COPY index.json ./
CMD ["myfunction.lambda_handler"]
In both cases index.json holds the index generated with the code provided above, and for the Lambda, myfunction.py contains the ask_ai method implementation behind the lambda_handler method.
Has someone faced such an issue? Any help is much appreciated in advance!
I used ChatGPT to examine this issue, and it led me to the cause. It was actually a big mistake on my side: I generated the index with a different version of Llama_Index than the one I was later using to load it, and I hadn't noticed that.
If someone's facing the same issue, try to stick to version 0.5.6.
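For anyone hitting this, one way to guard against the mismatch (a sketch; pin whatever version you actually indexed with) is to lock the dependency in the requirements.txt that both the Flask virtualenv and the Lambda image install from:

```text
# requirements.txt -- pin the same version that generated index.json
llama-index==0.5.6
```

That way `pip3 install -r requirements.txt` in the Dockerfile reproduces the indexing environment instead of silently pulling a newer release with a different on-disk index format.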
Cannot delete phantom table using `bq` or BigQuery UI
I'm trying to remove a table from a dataset using bq without success:
BigQuery error in rm operation: Not found: Table carbon-web-...:AS_....Orders_01Jun2014_31May2015_3704438_01
The table is listed whenever I run bq ls AS_....
I'm seeing similar behavior when I try to access the table from the BigQuery UI. When I click on the link to the table, I receive an error message:
Unable to find table: carbon-web-...:AS_....Orders_01May2017_31May2017
Is there a way to force a refresh on the metadata for this dataset?
These are tables in a transient state that shouldn't have been exposed. We found a bug in a table-listing feature we were rolling out, where in some rare scenarios tables in a transient state would show up in the list. We have reverted that now.
It's good to hear that you found the cause of your problem. You could make your answer more helpful for future readers by explaining how you diagnosed it, and in particular, which evidence will identify this problem when it's seen elsewhere.
@TobySpeight I believe this was a bug with BigQuery and required an update to the service itself.
How do I include extra bars in grouped bar charts in ChartJS?
So, currently, I have a bar chart in ChartJS, which uses the database to make price comparisons between different companies for differently-sized items (all items are the same, only the size is different).
The database looked like this:
id, company, price_for_small, price_for_medium, price_for_large, country
1, "abcd" , 20 , 30 , 40 , "USA"
... and so on.
I have 3 sizes, so until now the bar chart had 3 groups:
The user would select a country from a dropdown list and the chart would display a price comparison between the different companies for that particular country.
I was asked to do two things: Add a 4th size, which I did. And then add a new type of item to the first 3 sizes. So for example, instead of 1 item in 3 sizes, I have Item A and Item B, both of which come in small, medium and large, and A comes in extra large as well. I have successfully added them to the database and the pricing works right. I just need to add them to the charts.
Since too many charts would look cluttered, what I want to do is:
When the user is filling the form with country, item, size, etc.. I want the graph to show only the charts for the item the user selects. So if they select Item A, the charts show Item A in the 4 sizes for that particular country. If they select B, it shows the 3 sizes for B.
First, I tried just adding the new item as bars in the already existing charts, but it didn't work because of the dataset grouping. Now I'm trying to find a way to show different graphs depending on the item selected. Is this possible at all? Sorry, new to JavaScript.
HTML:
<canvas width="600" height="480" id="myChart"></canvas>
JS:
var ctx = document.getElementById("myChart").getContext('2d');
var company1_data, company2_data, company3_data, price_data, chartOptions;
//Function to plot the bar chart
function plotGraph(){
    company1_data = {
        label : 'Company 1',
        data: [req_comp1.small, req_comp1.medium, req_comp1.large,
               req_comp1.extra_large],
        borderWidth: 0,
        yAxisID: "y-axis-prices"
    };
    ---- Same with the other two companies -----
    price_data = {
        labels: ["Small", "Medium", "Large", "Extra Large"],
        datasets: [company1_data, company2_data, company3_data]
    };
}
plotGraph();
var myChart = new Chart(ctx, {
    type: 'bar',
    data: price_data,
    options: chartOptions
});
Hi Tom, I have a really hard time understanding the exact need. Can you refine the question to state what data you have and what the expected output is, in a concise way? It's not that you have posted the question with insufficient information; I think it's me who can't understand it properly.
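To answer the original question: yes, this is possible without separate charts. A sketch (the names priceDataByItem and showItem are hypothetical, not from the question's code): keep one labels-plus-datasets bundle per item, and when the user picks an item, replace the chart's data and call update() so Chart.js re-renders in place:

```javascript
// Hypothetical structure: one Chart.js data bundle per item type.
const priceDataByItem = {
  A: {
    labels: ["Small", "Medium", "Large", "Extra Large"],
    datasets: [] // company1_data, company2_data, company3_data for item A
  },
  B: {
    labels: ["Small", "Medium", "Large"],
    datasets: [] // the same companies' datasets for item B
  }
};

// Swap the displayed item, e.g. from the dropdown's change event.
function showItem(chart, item) {
  chart.data = priceDataByItem[item]; // replace labels and datasets together
  chart.update();                     // re-render the existing chart
}
```

Hooked up to the item dropdown (for example `itemSelect.addEventListener('change', e => showItem(myChart, e.target.value));`), selecting Item A shows four size groups and Item B shows three, without destroying and recreating the chart.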
What is the Scriptural evidence for the Protestant view that the AntiChrist will be Jewish?
In eschatology, there is a lot of speculation about the identity of the Antichrist, and whether he's active on the world stage now [in other words, a well-known leader who just hasn't revealed his true identity yet]. There is also a common view that he will be Jewish, and this is related to the idea that the Jews will think he is the Messiah.
**This is not to be seen in any way as anti-Semitic - I am actually part Jewish, and very proud of this part of my heritage.
I'm happy to have answers from Jews and Catholics, but I'm only looking for answers from the 66 Books of the Canon.
"There is also a common view that he will be Jewish". Could you give us examples of people who believe this? I don't think it's really common. And remembering that lots of internet hits doesn't mean lots of people believe it.
DJ, it's the predominant view taught in Church of God (Holiness), Southern Baptist, Bible Methodist, independent, Nazarene, Calvary Chapel and some Pentecostal denominations. Are you able to help?
@DJClayworth Early Church Fathers held this viewpoint!
I believe this is the Catholic/Church Fathers' view also.
There are a number of Protestant churches who actually believe, based on the fulfillment of biblical prophecy and historical evidence from the early church, dark ages and reformation era, that the actual beast is pagan Rome, and out of this the Papal Roman Catholic Church! It does not surprise me, therefore, that the Catholic Church would attempt to cover this up by pointing the finger at the Jews! A question to ask is this: who gave the pope the power to change God's statutes and laws, as talked about in prophecy? (Daniel 7.21, Revelation 13.5)
I agree there is a lot of speculation about the identity of the Antichrist. SOME Christians might think he will be a Jew who will claim to be the Christ, but others think he might be the second beast. Others think he might be a Muslim. Bottom line is the Bible does not say the Antichrist will be Jewish - or a Muslim, or anything else. It's all a matter of interpretation.
What is the Scriptural evidence for the Protestant view that the AntiChrist will be Jewish?
This is not a view that is shared uniquely by some Protestants. Many Catholics, Orthodox and other denominations hold this viewpoint.
The problem with prophecy is that it becomes understood only after the events foretold have run their course!
The Church Father, Irenaeus was the first to write that the Antichrist would be of Jewish origin from the Tribe of Dan.
As is well known, there is some lack of consistency among the Church Fathers in the way each tried to synthesize the variegated traditions that formed the Christian expectation of the Antichrist. This is no less true when it came to the question of the Antichrist's origins. The starting point for much of New Testament and later Christian thought, as well as much Jewish apocalyptic thought in this area, was the book of Daniel, particularly the portrayal of the final wicked ruler of Daniel's fourth kingdom in the visions of chapters 2 and 7, and filled in with details from chapters 8 and 11 which were thought to point beyond the past historical appearance of Antiochus Epiphanes. Paul (assuming it was indeed Paul) plainly draws upon this tradition in his description of the 'man of lawlessness' in 2 Thessalonians 2: 1-12. The book of Revelation too is indebted to this tradition in its depiction of the beast from the sea (13: 1-9), who embodies in some sense the four kingdoms of Daniel 2 and 7. This tradition of the fourth kingdom, wedded always in some way in the literature of our period to the Roman empire, lent itself to fears and speculation about the return of Nero as the head of the revived fourth kingdom, as may be seen in the Ascension of Isaiah and in Sibylline Oracles 3, 4, and 5. And Origen can refer to his three main sources for Antichrist teaching, Daniel, the writings of Paul (2 Thess. 2: 1-12), and the sayings of Jesus in the Gospels (Cels. 6. 45), without even mentioning the Johannine writings. But it is precisely in those writings (1 John 2: 18, 22; 4: 3; 2 John 7), written towards the end of the first century AD, that the term 'Antichrist' makes its first known appearance in literature, and there no hint survives of a Roman or even a chiefly political foe. The author shows no trace of the Danielic fourth kingdom tradition at all. Rather, the emphasis is on deception, error, and false teaching, specifically about Jesus.
This tradition is easily linked with words of Jesus in the Olivet Discourse about false prophets and false Christs working signs and wonders so as to deceive those who might still be looking for the Christ and, if such were possible, even the elect (Matt. 24: 5, 11, 23-24).
The idea of a Jewish Antichrist in Christian thought may be quite old. Bousset, with others, believed Paul had in view an Antichrist who would deceive the unbelieving Jews into thinking he was their Christ. Bousset goes on to intimate that Paul in 2 Cor. 6: 15 knows a name for the Antichrist, Belial, citing Test. Dan 5, the Sibyls, and Asc. Isa. But G. Jenks has recently shown this to be mistaken. At Qumran, Belial is a Satan figure, not an Antichrist. Jenks has established with some success the view that 'hellenistic Jewish literature was not familiar with an Antichrist figure such as occurs in the later Antichrist literature of early Christianity'. If one has in mind, as Jenks does, centrally important, human, messianic pretenders, and not merely evil
'Endtyrants' (such as show up in Daniel, and possibly in 4Q246 and the Testament of Moses) who may pose a military threat to the people of God, Jenks appears to be correct.
But our first explicit mention of a Jewish Antichrist comes in the writings of Irenaeus, where it occurs already in tandem with the opinion that he will also spring from the tribe of Dan (AH 5. 30. 2). In AH 5. 25 Irenaeus details the career of the Antichrist: from 2 Thess. 2, tying the notice of Antichrist's taking his seat in the temple with Christ's 'abomination of desolation' in Matt. 24: 15 (cf. Dan. 9: 27); from Dan. 7, the little horn from the fourth beast; and from Jesus' parable of the unjust judge, Luke 18: 2 ff., wherein the judge is the Antichrist and the widow is the earthly Jerusalem: 'he shall remove his kingdom into that [city], and shall sit in the temple of God, leading astray those who worship him, as if he were Christ'. - Antichrist from the Tribe of Dan
The Early Church believed that the Antichrist would have Jewish roots of some sort.
St. Methodius of Olympus 250 - 311
"When the Son of Perdition appears, he will be of the tribe of Dan, according to the prophecy of Jacob. This enemy of religion will use a diabolic art to produce many false miracles, such as causing the blind to see, the lame to walk, and the deaf to hear. Those possessed with demons will be exorcised. He will deceive many and, if he could, as our Lord has said, even the faithful elect.
"Even the Antichrist will enter Jerusalem, where he will enthrone himself in the temple as a god (even though he will be an ordinary man of the tribe of Dan to which Judas Iscariot also belonged).
St. John Chrysostom 347-407
"Antichrist will be possessed by Satan and be the illegitimate son of a Jewish woman from the East. This world will be faithless and degenerate after the birth of Anti-Christ”
The Church Fathers on the Antichrist
Many Christians believe that the Jewish nation as a whole will be converted to Christianity. However, they are still expecting the Messiah to come. Some have ventured to speculate that this will only happen after they have been deceived by the Antichrist.
In some sense this seems a logical step, since the Chosen People, true to their heritage, would not acknowledge an Antichrist (false messiah) of Gentile origins. It is simply a logical future situation in the minds of many, whether Protestant, Catholic or of other denominations.
The widespread conversion of the Jews to Christianity is a future event predicted by many Christians, often as an end time event. Some Christian groups consider the conversion of the Jews to be imperative and pressing and make it their mission to bring this about. However, since the Middle Ages, the Catholic Church has formally upheld Constitutio pro Judæis (Formal Statement on the Jews), which stated:
We decree that no Christian shall use violence to force them [the Jews] to be baptized, so long as they are unwilling and refuse. ...
Despite such papal declarations, personal, economic and cultural pressure on the Jews to convert persisted, often stirred by clerics. Persecution and forcible displacements of Jews occurred for many centuries, and were regarded as not inconsistent with the papal bull because there was no "violence to force baptism". There were occasional gestures to reconciliation. Pogroms and forcible conversions were common throughout Christian Europe, including organized violence, restrictive land ownership and professional lives, forcible relocation and ghettoization, mandatory dress codes, and at times humiliating actions and torture. The object often was for the Jews to choose between conversion, migration or dying. The Anglican Ministry Among Jewish People, founded in 1809, used non-coercive means in its outreach and missionary efforts.
A 2008 survey of American Christians by the Pew Forum on Religion and Public Life found that over 60% of most denominations believe that Jews will receive eternal life after death alongside Christians
The biblical basis for this expectation is found in Romans 11:25–26:
I do not want you to be ignorant of this mystery, brothers, so that you may not be conceited: Israel has experienced a hardening in part until the full number of the Gentiles has come in. And so all Israel will be saved... (NIV).
The meaning of Romans 11:25-26a has been disputed. Douglas J. Moo calls Romans 11:26 "the storm center in the interpretation of Romans 9–11 and of New Testament teaching about the Jews and their future." Moo himself interprets the passage as predicting a "large-scale conversion of Jewish people at the end of this age" through "faith in the gospel of Jesus their Messiah".
Pope Benedict XVI in his book Jesus of Nazareth: Holy Week (2011) has suggested that the church should not be targeting Jews for conversion efforts, since "Israel is in the hands of God, who will save it ‘as a whole’ at the proper time." - Conversion of the Jews
At the same time, there is the possibility that the two witnesses of the Apocalypse will come to the aid of the Jewish nation in the end times.
In the Book of Revelation, the two witnesses are two of God's prophets who are seen by John of Patmos, during the "Second woe" recorded in Revelation 11:1-14. They have been variously identified by theologians as two people, as two groups of people, or as two concepts. Dispensationalist Christians believe that the events described in the Book of Revelation will occur before and during the Second Coming. The two witnesses are never identified in the Christian Bible. Some believe they are Enoch and Elijah, as in the Gospel of Nicodemus, since they are the only two that did not see death as required by the Scriptures. Others believe them to be Moses and Elijah because they appeared during the transfiguration of Jesus, or because Enoch was not Abraham's descendant. Some also believe that they are Moses and Elijah due to the description of what they are to do. They have the power to shut the heavens (Elijah) and turn water into blood (Moses). - Two witnesses
What will be the result of the death of the Two Witnesses mentioned in the Book of Revelation? Perhaps nothing more than Israel’s true conversion to the Christian Faith.
If the Jews were in effect to be deceived by one of their own, God in his infinite mercy would send his Holy Prophets Elijah and Enoch to save the remnant of God’s Chosen People.
11 I was given a reed like a measuring rod and was told, “Go and measure the temple of God and the altar, with its worshipers. 2 But exclude the outer court; do not measure it, because it has been given to the Gentiles. They will trample on the holy city for 42 months. 3 And I will appoint my two witnesses, and they will prophesy for 1,260 days, clothed in sackcloth.” 4 They are “the two olive trees” and the two lampstands, and “they stand before the Lord of the earth.” 5 If anyone tries to harm them, fire comes from their mouths and devours their enemies. This is how anyone who wants to harm them must die. 6 They have power to shut up the heavens so that it will not rain during the time they are prophesying; and they have power to turn the waters into blood and to strike the earth with every kind of plague as often as they want.
7 Now when they have finished their testimony, the beast that comes up from the Abyss will attack them, and overpower and kill them. 8 Their bodies will lie in the public square of the great city—which is figuratively called Sodom and Egypt—where also their Lord was crucified. 9 For three and a half days some from every people, tribe, language and nation will gaze on their bodies and refuse them burial. 10 The inhabitants of the earth will gloat over them and will celebrate by sending each other gifts, because these two prophets had tormented those who live on the earth.
11 But after the three and a half days the breath of life from God entered them, and they stood on their feet, and terror struck those who saw them. 12 Then they heard a loud voice from heaven saying to them, “Come up here.” And they went up to heaven in a cloud, while their enemies looked on.
13 At that very hour there was a severe earthquake and a tenth of the city collapsed. Seven thousand people were killed in the earthquake, and the survivors were terrified and gave glory to the God of heaven. - Revelation 11: 1-13
Interesting suggestion that the Son of Perdition will come from the tribe of Dan. But how could that genealogy be proven? My understanding is that since the destruction of the temple in 70 A.D. there are no records to show which tribe a Jewish person comes from.
@Lesley Personally, I do not believe it, but early Christians saw things differently! Your question is obviously unanswerable.
While your knowledge appears extensive, your answer is very long and very tedious, and very tiring for an old man like me. The question requested Scriptural evidence, not your opinion, not the pope's, nor the early church leaders.
@AFL There are Scripture references throughout this post! Pax! Tennman7 stated he would be happy to hear answers from Catholics. For the record this is not my personal opinion, as I do not have one on this subject. It is simply too speculative.
Join multiple videos with moviepy. I.e. join the videos one after another, NOT merge those videos together
As suggested, concatenate_videoclips joins the videos, but I want a different way.
It looks like concatenate_videoclips merges 2 videos together so that they are played simultaneously.
What I want from moviepy is to join 3 videos A, B and C, as a single video, D.
In the joined video D, A is played first, followed by B, with C being the last video played.
So I would like to join the videos in order, instead of merging them into a single frame.
Thanks!
I tried concatenate_videoclips, which takes the video files loaded via VideoFileClip.
In the joined video, I want each video to be played one after another. However, they are played simultaneously.
Please provide enough code so others can better understand or reproduce the problem.
Here's how I do it (creating video slides A, B, C and grouping them one after another as a final clip D):
from moviepy.editor import ImageClip, concatenate_videoclips
def create_final_video(slides: list, slide_duration: int):
    videoclips = []
    for slide in slides:
        image_clip = ImageClip(img=slide["image"], duration=slide_duration)
        image_clip = image_clip.set_fps(30)
        videoclips.append(image_clip)
    video = concatenate_videoclips(videoclips, method="compose").set_position(("center", "center"))
    video.write_videofile("final_clip_D.mp4")
And then just run it like this:
my_image_slides = [
    {"image": "path/to/imageA.jpg"},
    {"image": "path/to/imageB.jpg"},
    {"image": "path/to/imageC.jpg"}
]
create_final_video(my_image_slides, 7)
final_clip_D.mp4 will be 21 seconds long.
AWS Terraform most_recent filtering on different ACM certificate statues is not supported
I have recently started getting Error: most_recent filtering on different ACM certificate statues is not supported while deploying AWS Terraform code that previously deployed fine.
No changes have been made and I can't find much on this error through google.
Code producing the error:
data "aws_acm_certificate" "primary" {
domain = local.primary_ssl_cert_domain
types = ["AMAZON_ISSUED"]
most_recent = true
}
What is the TF code producing the error?
By "Have recently started" you mean that it worked before, or you added most_recent = true now to your code?
It's always been there. Was working yesterday, not working today. Wondering if it's a bug in AWS?
The error message derives from this line in the provider code. The recently executed code checks the certificate details of the discovered filtered certificates at that point, and then compares among them for the "most recent". Note also that the error message contains a typo and actually should say:
Error: most_recent filtering on different ACM certificate statuses is not supported
This is easier to understand. The ACM certificate documentation displays the information about the statuses. The problem you have is that your filters matched multiple certificates, and AWS is unable to further filter by "most recent" because the discovered certificates have different statuses. It would make sense that this would "suddenly stop working" because you could easily have one or more certificates that changed status in a day.
You would need to either modify your discovered certificates to have the same status, or modify your filters to only discover certificates with the same status prior to the "most recent" filtering.
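For example (a sketch of the second option, assuming you only want issued certificates), the data source's statuses argument narrows the match before the "most recent" comparison:

```hcl
data "aws_acm_certificate" "primary" {
  domain      = local.primary_ssl_cert_domain
  types       = ["AMAZON_ISSUED"]
  statuses    = ["ISSUED"] # only compare certificates that share this status
  most_recent = true
}
```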
How to unfold a dictionary of dictionaries into a pandas DataFrame for larger dictionaries?
Consider the following dictionary of dictionaries in python3.x
dict1 = {4: {4:25, 5:39, 3:42}, 5:{24:94, 252:49, 25:4, 55:923}}
I would like to unfold this into a pandas DataFrame. There appear to be two options:
df1 = pd.DataFrame.from_dict(dict1, orient='columns')
print(df1)
4 5
3 42.0 NaN
4 25.0 NaN
5 39.0 NaN
24 NaN 94.0
25 NaN 4.0
55 NaN 923.0
252 NaN 49.0
whereby the columns are the main dictionary keys 4 and 5, the row indices are the subdictionary keys, and the values are the subdictionary values.
The other option is
df2 = pd.DataFrame.from_dict(dict1, orient='index')
print(df2)
4 5 3 24 252 25 55
4 25.0 39.0 42.0 NaN NaN NaN NaN
5 NaN NaN NaN 94.0 49.0 4.0 923.0
whereby the columns are the keys of the inner "sub-dictionary", the row indices are the keys of the main dictionary, and the values are the subdictionary values.
Is there a standard approach which allows us to unfold the python dictionary as follows?
key inner_key values
4 3 42
4 4 25
4 5 39
5 24 94
5 25 4
5 55 923
5 252 49
It would be best not to manipulate the DataFrame after using from_dict(), as for far larger python dictionaries, this could become quite memory intensive.
List comprehension
A list comprehension should be fairly efficient:
dict1 = {4: {4:25, 5:39, 3:42}, 5: {24:94, 252:49, 25:4, 55:923}}
cols = ['key', 'inner_key', 'values']
df = pd.DataFrame([[k1, k2, v2] for k1, v1 in dict1.items() for k2, v2 in v1.items()],
                  columns=cols).sort_values(cols)
print(df)
key inner_key values
2 4 3 42
0 4 4 25
1 4 5 39
3 5 24 94
5 5 25 4
6 5 55 923
4 5 252 49
pd.melt + dropna
If you don't mind working from df1, you can unpivot your dataframe via pd.melt and then drop rows with null value.
df1 = df1.reset_index()
res = pd.melt(df1, id_vars='index', value_vars=[4, 5])\
        .dropna(subset=['value']).astype(int)
print(res)
index variable value
0 3 4 42
1 4 4 25
2 5 4 39
10 24 5 94
11 25 5 4
12 55 5 923
13 252 5 49
Thanks for the explanation! Much appreciated
pd.DataFrame([[i, j, user_dict[i][j]] for i in user_dict.keys() for j in user_dict[i].keys()],
             columns=['key', 'inner_key', 'values'])
Output:
key inner_key values
0 4 4 25
1 4 5 39
2 4 3 42
3 5 24 94
4 5 252 49
5 5 25 4
6 5 55 923
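As a side note, another option (a sketch, not from the answers above) is to build the wide frame from the question and let stack() do the unfolding, since it drops the NaN cells and yields an (inner_key, key) MultiIndex:

```python
import pandas as pd

dict1 = {4: {4: 25, 5: 39, 3: 42}, 5: {24: 94, 252: 49, 25: 4, 55: 923}}

# stack() drops the NaN cells of the wide frame and returns a Series
# whose MultiIndex is (row label, column label) = (inner_key, key).
res = (pd.DataFrame(dict1)
         .stack()
         .astype(int)                     # NaN padding made the values float
         .rename_axis(['inner_key', 'key'])
         .reset_index(name='values')
         [['key', 'inner_key', 'values']]
         .sort_values(['key', 'inner_key']))
print(res)
```

Note that this materializes the NaN-padded wide frame first, so for very sparse dictionaries the list comprehension above stays more memory-friendly.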
use io_service::post (boost) when deadline_timer is waiting
I have a problem while using deadline_timer and io_service::post as below:
#include "boost/asio.hpp"
#include "boost/thread.hpp"
int main()
{
    boost::asio::io_service io_service;
    boost::asio::deadline_timer timer1(io_service);
    boost::asio::deadline_timer timer2(io_service);

    timer1.expires_from_now(boost::posix_time::seconds(1));
    timer1.async_wait([](const boost::system::error_code& error) {
        boost::this_thread::sleep(boost::posix_time::seconds(5));
        printf("1 ");
    });

    timer2.expires_from_now(boost::posix_time::seconds(2));
    timer2.async_wait([](const boost::system::error_code& error) {
        printf("2 ");
    });

    boost::thread t([&io_service]() {
        boost::this_thread::sleep(boost::posix_time::seconds(5));
        io_service.post([]() {
            printf("3 ");
        });
        io_service.post([]() {
            printf("4 ");
        });
    });

    io_service.run();
    t.join();
    getchar();
    return 0;
}
I thought the result would be "1 2 3 4", but the result is "1 3 4 2". Can anyone show me how to make the callback of timer2 (which prints "2") run earlier, so that the result is "1 2 3 4", using the boost library (and without changing the expire times of timer1 and timer2)?
Thanks very much!
The first timer's expiration blocks the io (main) thread from running; in the meantime the other thread posts a couple of items to asio's work queue. Once timer1's callback completes, the second timer's expiration is processed, which causes its callback to be queued but not executed. Since "3" & "4" were already queued (while "1" was blocking the main thread), they go ahead of "2".
The point of asio is to not block. By putting long-running work in the first timer's callback (the sleep) you have prevented the io thread from running in a timely manner. You should offload that work into a dedicated thread, and post its completion back to asio.
This is actually a pretty complicated example.
The io_service will run on the main thread. Here is the order of operations
Main Thread:
Request Timer at T0 + 1
Request Timer at T0 + 2
Spawn thread
Execute all pending io (io_service.run())
Secondary Thread:
Sleep 5 seconds
Request Timer
Request Timer
First of all, nothing will execute in the io_service until io_service.run() is called.
Once io_service.run() is called, a timer for 1 second in the future is scheduled. When that timer fires, it first sleeps for 5 seconds before printing 1.
While that thread is executing, the secondary thread also comes up, and sleeps for 5 seconds. This thread is set up and scheduled before the handler for timer1 has completed. Since both of these sleeps last 5 seconds, '3' and '4' are immediately posted to the io_service.
Now things get a bit tricky. It seems likely that the timeout for timer2 should have expired by now (its deadline being several seconds in the past), but there were two commands directly posted to the io_service while it was handling timer1.
It seems that in the implementation details, boost gives priority to directly posted actions over deadline timer actions.
The io_service makes no guarantees about the invocation order of handlers. In theory, the handlers could be invoked in any order, with some permutations being significantly unlikely.
If handlers need to be invoked in a very specific order, then consider restructuring the asynchronous call chains in a manner that enforces the desired handler chain. Additionally, one may find it necessary to use the guaranteed order of handler invocation that strand provides. Consider not trying to control complex handler invocations with brittle sleeps and timers.
⚠ Without reworking the call chains or modifying the timers, an extremely fragile solution that depends heavily upon implementation details is available here ⚠
Your first problem is that you're trying to block inside a handler:
timer1.expires_from_now(boost::posix_time::seconds(1));
timer1.async_wait([](const boost::system::error_code& error) {
    boost::this_thread::sleep(boost::posix_time::seconds(5)); // <--- HERE
    printf("1 ");
});
What happens in the above code is that after timer1 waits for one second, it posts the callback to the io_service. Inside the io_service::run function this callback is executed but this execution happens inside the main thread, so it halts for five seconds, preventing timer2 from posting its handler for execution into io_service. It does so until the sixth second of the program execution (6 = 5+1).
Meanwhile the thread t gets executed and at fifth second of program execution it posts those two printf("3") and printf("4") to io_service.
boost::thread t([&io_service]() {
boost::this_thread::sleep(boost::posix_time::seconds(5));
io_service.post([]() {
printf("3 ");
});
io_service.post([]() {
printf("4 ");
});
});
Once the handler from timer1 unblocks, it allows timer2's handler to be dispatched by the io_service. That again happens at the sixth second of program execution, that is, once printf("3") and printf("4") have already been posted!
All in all, I believe what you're looking for is this:
#include "boost/asio.hpp"
#include "boost/optional.hpp"
#include "boost/thread.hpp"
#include <cstdio> // printf
int main()
{
boost::asio::io_service io_service;
boost::optional<boost::asio::io_service::work> work(io_service);
boost::asio::deadline_timer timer1(io_service);
boost::asio::deadline_timer timer2(io_service);
timer1.expires_from_now(boost::posix_time::seconds(1));
timer1.async_wait([](const boost::system::error_code& error) {
printf("1 ");
});
timer2.expires_from_now(boost::posix_time::seconds(2));
timer2.async_wait([](const boost::system::error_code& error) {
printf("2 ");
});
boost::thread t([&io_service, &work]() {
boost::this_thread::sleep(boost::posix_time::seconds(5));
io_service.post([]() {
printf("3 ");
});
io_service.post([&work]() {
printf("4 ");
work = boost::none;
});
});
io_service.run();
t.join();
return 0;
}
How to get records from last Month (not last 30 days) in mysql query
I have a part of a MySQL query where I need to select the records for the last calendar month, not the last 30 days.
For example, if the current month is Jan, then I need the records of the last month, which will be Dec, but it should show the records of Dec 2017 only and not include previous years.
I am currently using following SQL part for the same.
$sql .= " AND MONTH(checkup_date) = MONTH(NOW() - INTERVAL 1 MONTH)";
and it is showing the records for all the Decembers in database i.e Dec 2017, Dec 2016 and so on.
I even tried the following
$sql .= " AND YEAR(checkup_date) = YEAR(NOW()) AND MONTH(checkup_date) = MONTH(NOW() - INTERVAL 1 MONTH)";
but it only works when the previous month falls in the current year.
I.e. it works if the current month is Feb, so last month is Jan and the year is 2018;
but in my case the current month is Jan, so last month is Dec, while YEAR(NOW()) is 2018, so it doesn't match and won't show any records.
So please help me fix this query so that it uses the last year when the current month is Jan, and the current year otherwise.
Appreciate your help in solving it.
This query gets all rows in a table belonging to last month in MySQL:
SELECT * FROM table
WHERE YEAR(date_created) = YEAR(CURRENT_DATE - INTERVAL 1 MONTH)
AND MONTH(date_created) = MONTH(CURRENT_DATE - INTERVAL 1 MONTH)
An alternative is:
SELECT * FROM table
WHERE EXTRACT(YEAR_MONTH FROM date_created) = EXTRACT(YEAR_MONTH FROM CURDATE() - INTERVAL 1 MONTH)
You can simply do:
where checkup_date < curdate() - interval (day(curdate()) - 1) day and
checkup_date >= (curdate() - interval (day(curdate()) - 1) day) - interval 1 month
By not applying functions to checkup_date, this method can take advantage of an appropriate index that has that column.
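The boundary arithmetic can also be sanity-checked outside SQL. A minimal Python sketch of the same "previous calendar month" range (the function name is illustrative):

```python
from datetime import date, timedelta

def prev_month_range(today):
    """Return (start, end) bounding the previous calendar month.

    `end` is exclusive: it is the first day of the current month,
    matching a half-open range comparison on checkup_date.
    """
    first_of_this_month = today - timedelta(days=today.day - 1)
    # Step back one day to land in the previous month, then snap to its 1st.
    last_of_prev_month = first_of_this_month - timedelta(days=1)
    first_of_prev_month = last_of_prev_month - timedelta(days=last_of_prev_month.day - 1)
    return first_of_prev_month, first_of_this_month

# Previous month of 2018-01-17 is Dec 2017: 2017-12-01 .. 2018-01-01 (exclusive)
print(prev_month_range(date(2018, 1, 17)))
```

A date d belongs to last month exactly when start <= d < end, which is the index-friendly comparison the range-based WHERE clause performs.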
To get the date one month before a given date:
select DATE_ADD("2018-01-10", INTERVAL -1 MONTH)
Disable move of cursor during auto-complete selection
I need to disable the movement of the cursor during auto-complete selection. I have a contentEditable div as the search bar for friends. When I type letters, auto-complete suggestions appear underneath the search bar. To pick one of the suggestions, I need to use the arrow keys, but this also moves the cursor in the search bar and clears any current selection there. Can I somehow disable the movement of the cursor for specific keys on the keyboard in JavaScript?
Note:
Setting the cursor at the last position is not a solution for me.
I want to disable left-, right-, up-, down arrow key and also carriage return key
You can call preventDefault() on keydown events that have a keycode corresponding to an arrow key or the return key. This will prevent the cursor from moving.
The event.keyCode property will be equal to the following during a keydown event:
37: Left
38: Up
39: Right
40: Down
13: Return
So, you can attach an event listener to your div and cancel these events if there is an auto-complete suggestion active:
var autoCompleteOn = false; //Set this flag in an auto-complete handler
yourDiv.addEventListener("keydown",function(e){
//v------all arrow keys------------v v---Return----v
if(autoCompleteOn && ((e.keyCode >= 37 && e.keyCode <= 40) || e.keyCode == 13)){
e.preventDefault();
return false;
}
}, false);
Note: I used addEventListener for simplicity, but for compatibility, you will have to use attachEvent or onkeydown also.
Example jsFiddle.
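The decision of which keydown events to swallow can also be factored into a pure helper, which is easy to unit-test outside the browser (the function name is illustrative):

```javascript
// Decide whether a keydown should be cancelled while suggestions are open.
function shouldBlockKey(autoCompleteOn, keyCode) {
  var isArrow = keyCode >= 37 && keyCode <= 40; // Left, Up, Right, Down
  var isReturn = keyCode === 13;
  return autoCompleteOn && (isArrow || isReturn);
}

// Only arrow/return keys are blocked, and only while the list is showing.
console.log(shouldBlockKey(true, 38));  // true  (Up, list open)
console.log(shouldBlockKey(false, 38)); // false (list closed)
```

Inside the keydown listener you would then call `e.preventDefault()` whenever this helper returns true.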
Do you use jquery?
if so you should try this
$("#whatever").bind("keydown", function(e) {
    // use keydown: arrow keys do not fire keypress in most browsers
    if ((e.keyCode > 36 && e.keyCode < 41) || e.keyCode == 13) return false;
});
Search for "javascript keycode" to find the codes for all keys.
Slick group by count
Let's say I have organizations, each organization has different groups, and users subscribe to groups.
case class OrganizationEntity(id: Option[Int], name: String)
case class GroupEntity(id: Option[Int], organizationId: Int, name: String)
case class GroupUserEntity(groupId: Int, userId: Int)
I need to get all groups of an organization, with the organizationName, and the quantity of users subscribed to that group.
In SQL, this can be easily done with this query:
SELECT g.*, o.organizationname, COUNT(DISTINCT gu.userid) FROM `group` g
LEFT JOIN organization o ON g.orgid = o.organizationid
LEFT JOIN group_user gu ON g.groupid = gu.groupid
WHERE g.orgid = 1234
GROUP BY g.groupid;
But I am struggling to replicate that in Slick. I've started writing this, but I am stuck now:
def findByOrganizationId(organizationId: Int) = {
(for {
g <- groups if g.organizationId === organizationId
o <- organizations if o.id === organizationId
gu <- groupUsers if g.id === gu.groupid
} yield (g, o.name, gu)).groupBy(_._3.groupid).map { case (_, values) => (values.map { case (g, orgname, users) => (g, orgname, users.) } }.result
}
.length should do the trick
You can just add .length to do the count in your code.
I think it should also work directly in the yield, so you don't need the groupBy:
def findByOrganizationId(organizationId: Int) = {
(for {
g <- groups if g.organizationId === organizationId
o <- organizations if o.id === organizationId
gu <- groupUsers if g.id === gu.groupid
} yield (g, o.name, gu.length)).result
}
How do I break down the math symbols in this equation
$$\frac{n}{\phi(n)}=\frac{n}{n\prod_{p|n}\left(1-\frac{1}{p}\right)}=\frac{1}{\prod_{p|n}\left(1-\frac{1}{p}\right)}$$
How do I learn to understand these equations by myself as I can't seem to find the mathematical notation descriptions online?
The $\phi(n)$ refers to Euler's totient function. As explained here, the $\prod_{p\mid n}$ refers to taking a product over all distinct primes $p$ that divide $n$.
For symbols you don't know, you can get help from https://en.wikipedia.org/wiki/List_of_mathematical_symbols
The big pi, $\prod$ denotes a product. The subscript on this tells you which numbers this product is over. In this example, the subscript says $p|n$ which means "$p$ divides $n$" i.e. the product is over all the prime numbers $p$ that divide $n$ (the prime factors of $n$). $\phi(n)$ denotes the Euler-Totient function. This counts the number of integers $m<n$ which are co-prime to $n$, i.e. have $\gcd(m,n)=1$.
As an example, say we have $n=105=3\times5\times7$. Then $$\prod_{p|n}\left(1-\frac1p\right)=\left(1-\frac13\right)\times\left(1-\frac15\right)\times\left(1-\frac17\right)=\frac{16}{35}$$
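As a quick numerical check of the worked example (a brute-force totient, fine for small $n$):

```python
from math import gcd, prod

def phi(n):
    """Euler's totient: count of 1 <= m <= n with gcd(m, n) == 1."""
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

n = 105  # distinct prime factors: 3, 5, 7
product = prod(1 - 1 / p for p in (3, 5, 7))      # (2/3)(4/5)(6/7) = 16/35
print(phi(n))                                     # 48, which equals n * 16/35
print(abs(n / phi(n) - 1 / product) < 1e-9)       # True: n/phi(n) = 1/product
```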
PySpark Issues streaming from Kafka
I was trying to connect to a Kafka (0.9.0) stream through PySpark for one of my applications and am facing the following issue.
Steps taken
Started Kafka using the following commands:
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
Using the kafka-python library, I started a Kafka producer. No issues with that; I was able to consume the messages back through Python.
Now I consume the same through PySpark (1.5.2) as shown in the following code:
import sys
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from pyspark import SparkContext
sc = SparkContext(appName="PythonStreamingKafka")
ssc = StreamingContext(sc, 3)
zkQuorum, topic = 'localhost:9092', 'test'
kvs = KafkaUtils.createStream(ssc, zkQuorum,"my_group", {topic: 3})
lines = kvs.map(lambda x: x.value)
counts = (lines.flatMap(lambda line: line.split(" "))
.map(lambda word: (word, 1))
.reduceByKey(lambda a, b: a+b)
)
counts.pprint()
ssc.start()
ssc.awaitTermination()
I executed the above code using the following command:
spark-submit --jars spark-streaming-kafka-assembly_2.10-1.5.2.jar test.py
I get the following error:
15/12/17 15:37:20 INFO ClientCnxn: Socket connection established to 0:0:0:0:0:0:0:1/0:0:0:0:0:0:0:1:9092, initiating session
15/12/17 15:37:20 INFO PythonRunner: Times: total = 157, boot = 156, init = 1, finish = 0
15/12/17 15:37:20 INFO Executor: Finished task 3.0 in stage 4.0 (TID 5). 1213 bytes result sent to driver
15/12/17 15:37:20 INFO TaskSetManager: Finished task 3.0 in stage 4.0 (TID 5) in 958 ms on localhost (1/4)
15/12/17 15:37:20 INFO PythonRunner: Times: total = 305, boot = 304, init = 1, finish = 0
15/12/17 15:37:20 INFO Executor: Finished task 0.0 in stage 4.0 (TID 2). 1213 bytes result sent to driver
15/12/17 15:37:20 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 2) in 1115 ms on localhost (2/4)
15/12/17 15:37:20 INFO PythonRunner: Times: total = 457, boot = 456, init = 1, finish = 0
15/12/17 15:37:20 INFO Executor: Finished task 1.0 in stage 4.0 (TID 3). 1213 bytes result sent to driver
15/12/17 15:37:20 INFO TaskSetManager: Finished task 1.0 in stage 4.0 (TID 3) in 1266 ms on localhost (3/4)
15/12/17 15:37:20 INFO PythonRunner: Times: total = 306, boot = 304, init = 2, finish = 0
15/12/17 15:37:20 INFO Executor: Finished task 2.0 in stage 4.0 (TID 4). 1213 bytes result sent to driver
15/12/17 15:37:20 INFO TaskSetManager: Finished task 2.0 in stage 4.0 (TID 4) in 1268 ms on localhost (4/4)
15/12/17 15:37:20 INFO DAGScheduler: ResultStage 4 (runJob at PythonRDD.scala:393) finished in 1.272 s
15/12/17 15:37:20 INFO TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
15/12/17 15:37:20 INFO DAGScheduler: Job 2 finished: runJob at PythonRDD.scala:393, took 1.297262 s
15/12/17 15:37:21 INFO JobScheduler: Added jobs for time<PHONE_NUMBER>000 ms
15/12/17 15:37:21 INFO SparkContext: Starting job: runJob at PythonRDD.scala:393
15/12/17 15:37:21 INFO DAGScheduler: Got job 3 (runJob at PythonRDD.scala:393) with 3 output partitions
15/12/17 15:37:21 INFO DAGScheduler: Final stage: ResultStage 6(runJob at PythonRDD.scala:393)
15/12/17 15:37:21 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 5)
15/12/17 15:37:21 INFO DAGScheduler: Missing parents: List()
15/12/17 15:37:21 INFO DAGScheduler: Submitting ResultStage 6 (PythonRDD[15] at RDD at PythonRDD.scala:43), which has no missing parents
15/12/17 15:37:21 INFO MemoryStore: ensureFreeSpace(5576) called with curMem=100677, maxMem=556038881
15/12/17 15:37:21 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 5.4 KB, free 530.2 MB)
15/12/17 15:37:21 INFO MemoryStore: ensureFreeSpace(3326) called with curMem=106253, maxMem=556038881
15/12/17 15:37:21 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 3.2 KB, free 530.2 MB)
15/12/17 15:37:21 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on localhost:61820 (size: 3.2 KB, free: 530.3 MB)
15/12/17 15:37:21 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:861
15/12/17 15:37:21 INFO DAGScheduler: Submitting 3 missing tasks from ResultStage 6 (PythonRDD[15] at RDD at PythonRDD.scala:43)
15/12/17 15:37:21 INFO TaskSchedulerImpl: Adding task set 6.0 with 3 tasks
15/12/17 15:37:21 INFO TaskSetManager: Starting task 0.0 in stage 6.0 (TID 6, localhost, PROCESS_LOCAL, 2024 bytes)
15/12/17 15:37:21 INFO TaskSetManager: Starting task 1.0 in stage 6.0 (TID 7, localhost, PROCESS_LOCAL, 2024 bytes)
15/12/17 15:37:21 INFO TaskSetManager: Starting task 2.0 in stage 6.0 (TID 8, localhost, PROCESS_LOCAL, 2024 bytes)
15/12/17 15:37:21 INFO Executor: Running task 0.0 in stage 6.0 (TID 6)
15/12/17 15:37:21 INFO Executor: Running task 2.0 in stage 6.0 (TID 8)
15/12/17 15:37:21 INFO Executor: Running task 1.0 in stage 6.0 (TID 7)
15/12/17 15:37:21 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
15/12/17 15:37:21 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
15/12/17 15:37:21 INFO ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 0 blocks
15/12/17 15:37:21 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 3 ms
15/12/17 15:37:21 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 2 ms
15/12/17 15:37:21 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 2 ms
C:\Spark\python\lib\pyspark.zip\pyspark\shuffle.py:58: UserWarning: Please install psutil to have better support with spilling
15/12/17 15:37:21 INFO PythonRunner: Times: total = 158, boot = 154, init = 1, finish = 3
C:\Spark\python\lib\pyspark.zip\pyspark\shuffle.py:58: UserWarning: Please install psutil to have better support with spilling
15/12/17 15:37:22 INFO PythonRunner: Times: total = 298, boot = 294, init = 1, finish = 3
C:\Spark\python\lib\pyspark.zip\pyspark\shuffle.py:58: UserWarning: Please install psutil to have better support with spilling
15/12/17 15:37:22 INFO PythonRunner: Times: total = 448, boot = 444, init = 1, finish = 3
15/12/17 15:37:22 INFO PythonRunner: Times: total = 152, boot = 151, init = 1, finish = 0
15/12/17 15:37:22 INFO Executor: Finished task 0.0 in stage 6.0 (TID 6). 1213 bytes result sent to driver
15/12/17 15:37:22 INFO TaskSetManager: Finished task 0.0 in stage 6.0 (TID 6) in 784 ms on localhost (1/3)
15/12/17 15:37:22 INFO PythonRunner: Times: total = 320, boot = 318, init = 2, finish = 0
15/12/17 15:37:22 INFO Executor: Finished task 2.0 in stage 6.0 (TID 8). 1213 bytes result sent to driver
15/12/17 15:37:22 INFO TaskSetManager: Finished task 2.0 in stage 6.0 (TID 8) in 952 ms on localhost (2/3)
15/12/17 15:37:22 INFO PythonRunner: Times: total = 172, boot = 171, init = 1, finish = 0
15/12/17 15:37:22 INFO Executor: Finished task 1.0 in stage 6.0 (TID 7). 1213 bytes result sent to driver
15/12/17 15:37:22 INFO TaskSetManager: Finished task 1.0 in stage 6.0 (TID 7) in 957 ms on localhost (3/3)
15/12/17 15:37:22 INFO DAGScheduler: ResultStage 6 (runJob at PythonRDD.scala:393) finished in 0.959 s
15/12/17 15:37:22 INFO TaskSchedulerImpl: Removed TaskSet 6.0, whose tasks have all completed, from pool
15/12/17 15:37:22 INFO DAGScheduler: Job 3 finished: runJob at PythonRDD.scala:393, took 0.987050 s
15/12/17 15:37:23 INFO ClientCnxn: Client session timed out, have not heard from server in 3000ms for sessionid 0x0, closing socket connection and attempting reconnect
-------------------------------------------
Time: 2015-12-17 15:37:18
-------------------------------------------
15/12/17 15:37:23 INFO JobScheduler: Finished job streaming job<PHONE_NUMBER>000 ms.0 from job set of time<PHONE_NUMBER>000 ms
15/12/17 15:37:23 INFO JobScheduler: Total delay: 5.780 s for time<PHONE_NUMBER>000 ms (execution: 5.725 s)
15/12/17 15:37:23 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
15/12/17 15:37:23 INFO JobScheduler: Starting job streaming job<PHONE_NUMBER>000 ms.0 from job set of time<PHONE_NUMBER>000 ms
The "Time" batch sections continue to appear, but nothing is printed for them.
PySpark and Kafka each work fine on their own. How can I resolve this issue?
I think the error is in this line:
zkQuorum, topic = 'localhost:9092', 'test'
The ZooKeeper port should be 2181:
zkQuorum, topic = 'localhost:2181', 'test'
Source: http://spark.apache.org/docs/latest/streaming-kafka-integration.html
Without iterating over the RDDs you can't process each record; use foreachRDD, since a DStream is a continuous sequence of RDDs.
from pyspark import SparkConf, SparkContext
from operator import add
import sys
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
import json
from kafka import SimpleProducer, KafkaClient
from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers='localhost:9092')
def handler(message):
    # `message` is an RDD of (key, value) records; process it directly
    # rather than collecting it to the driver first.
    counts = (message.map(lambda x: x[1])
                     .flatMap(lambda line: line.split(" "))
                     .map(lambda word: (word, 1))
                     .reduceByKey(lambda a, b: a + b))
    for word, count in counts.collect():
        print(word, count)
def main():
    sc = SparkContext(appName="PythonStreamingDirectKafkaWordCount")
    ssc = StreamingContext(sc, 10)
    brokers, topic = sys.argv[1:]
    kvs = KafkaUtils.createDirectStream(ssc, [topic], {"metadata.broker.list": brokers})
    kvs.foreachRDD(handler)
    ssc.start()
    ssc.awaitTermination()
TVöD rank for an engineer in Germany
I am a French citizen currently employed by an Institute part of the Helmholtz association. I had been looking forward to starting this position for a while and was a bit in a rush when signing my contract (not ideal I know). I overlooked an important point that I fear is too late to change now anyway.
The job posting only mentioned that an "engineering degree" was required, without specifying whether a master's degree was needed. I personally hold 2 master's degrees from the double-degree program I followed during my studies (might be irrelevant). I have ~2.5 years of experience prior to this position.
Without going into details, it's quite a demanding job, and de facto everyone I know of who held this position had at least a master's degree. But I am classified as TVöD 11, level 1 nonetheless. Is this normal? It is clearly stated in my contract that I would be paid as TVöD 11, but I was wondering if there is any law superseding contracts that forces public employers to pay you according to the degree you hold, or if it does not matter as long as it is clearly stated in the contract. I'm not too confident anyway, but I would like to clarify it with my employer, at least for those who'll follow.
Thanks for your help!
Edit: I talked about it with my boss. The justification for the pay group 11 is that my job does not encompass design activities. This is true, it is mostly a technician job even if they only employ engineers with a master's degree to do it. I think it's somewhat fair, even if the job is quite specialized. They would like to get it to a higher pay group but were blocked when they tried to in the past. They actually don't get to see my contract and were quite surprised that I was placed at level 1. They will discuss with the person responsible for this to get me to level 2. Will update when I get a definitive answer.
As far as I know, there must be a Tätigkeitsbeschreibung, i.e. a description of your tasks. You might not have seen it, but your employer probably had to write it. The salary is a result of this description.
no legal advice or expert here, but, if there is such a law superseding the contract you signed, you may have to check if your degree(s) are officially recognized in Germany. I remember something like "you have to have a degree from a German uni, to officially use the Ing. prefix" (I may be completely wrong, I am sure german people can correct me during their Mittagspause)
On level 11, a Master's degree cannot be required for this position. As already mentioned, you need to see the "Tätigkeitsbeschreibung" to check if pay scale category 11 is correct. Pay scale categories 9-12 are for those with a Bachelor's degree (or a degree from a University of Applied Sciences). Also check if in your ~2.5 years of experience, you did anything also mentioned in your "Tätigkeitsbeschreibung" and can prove that. The "Stufe 1" may simply be there because your previous job was one that would count as a lower pay scale in TV-L, in which case that experience would normally not count.
Note that having a Master's degree does not automatically mean you get a into a higher pay scale class. What is relevant is if the degree is needed for this position. For instance, technicians with a Master's degree are not paid pay scale level 13.
I have not heard about the Tätigkeitsbeschreibung before; I will check with my employer. This is a fixed-term position, and as I said, all the former employees had a master's degree or above. I wouldn't say it's absolutely necessary though, so I don't think I can argue about that. But the field I work in is extremely specific and the whole of my 2.5 years of experience is highly relevant to my current position, so as you guys said, I think it's worth discussing. Thanks!
Just to make things more complicated: you mean TVöD-Bund, yes? There are other TVöD agreements as well.
I think so, to be precise it says "TV EntgO Bund" in my contract.
The degree associated with a certain payment group only serves as orientation/minimal requirement. It is also usually coupled to the position within the funding, so even if they wanted to offer you a higher group there, they couldn't. Roughly speaking, if they hired you e.g. as an untrained lab assistant, then you get paid as an untrained lab assistant, even if you also happen to have a PhD in the same field.
What is different is the level (Stufe) within each group. This only depends on work experience relevant to the job. So if your work experience was in something similar to what you are hired for now, this may be worth discussing with your boss.
Thanks for the explanation. My previous experience is in the exact same field and pretty relevant to this position. I was not employed in Germany though; would that be an issue? I will discuss it during our next meeting anyway.
@qwetzal If you have not been employed in the "Öffentliche Dienst" before (basically in a position that pays according to TVöD), the worthiness of previous work experience (an with it the level into which you are sorted), even if done in Germany, is something that you need to negotiate with your future employer. Normally the levels work like this: after one year you switch to level 2, after 2 more to level 3 and so on.
@qwetzal Normally the country does not matter. Depending on whoever makes the decisions in the bureaucracy, the result may vary though. I've heard everything, from people getting their PhD-time (in a different country) counted as work experience for a post-doc position to "only work in the precisely same position in the same federal state can be counted"...
@mlk Thank you guys, I will have a chat with my employer, I have a good relation with them and they are honest people so hopefully they can push in my direction. Will edit this post with the answer I got.
Code First Migration on Production
I am working with EF Code First Migrations. Before pushing to production, I created the InitialCreate file from scratch and set the AutomaticMigrationsEnabled flag to 'False'. Then, inside my web.config file, I defined my DB initialization strategy as 'MigrateDatabaseToLatestVersion'. Then I simply deployed my website into production; so far things seem to work fine.
Now, let's say I add a new field to an entity class and push this change into production. It will try to migrate the DB to the latest version, and it fails with the following error:
Unable to update database to match the current model because there are pending changes and automatic migration is disabled. Either write the pending model changes to a code-based migration or enable automatic migration. Set DbMigrationsConfiguration.AutomaticMigrationsEnabled to true to enable automatic migration.
What would I need to do in this situation? Do I need to set the AutomaticMigrationsEnabled flag to True, or something else which would be suitable for production?
Thanks
Best Regards,
Ammar
Did you add the migration after you updated the Entity?
Do I need to explicitly update the migration by running the add-migration command? Wouldn't EF do it by itself?
Yes, you need to do that since you turned off Automatic Migrations. That's what "Either write the pending model changes to a code-based migration" this line is telling you to do. When you deploy the app, the migration gets applied on the first call to the database and you'll see a new entry in the migrations table. Try it in a lower environment first if you want to see how it works.
Do I need to delete the previously create InitialCreate.cs before running the Add-migration command or it can be update by any other command?
You don't need to delete it.
So each time my model changes and I would like to push to production, I will have to run the add-migration command to update the class, and it will keep adding files to the Migrations directory. Is there a way to have a single InitialCreate.cs file with all the new model changes, including the initial one?
No, keep each group of changes in their own migration. That way, you can revert them if needed. See this link http://msdn.microsoft.com/en-us/data/jj591621.aspx#specific
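For reference, the usual flow in the Package Manager Console looks roughly like this (the migration name here is illustrative):

```
PM> Add-Migration AddNewFieldToEntity   # scaffolds a migration for the pending model changes
PM> Update-Database                     # optional: apply it to a local/test database first
```

With AutomaticMigrationsEnabled set to False, the scaffolded migration is what MigrateDatabaseToLatestVersion applies on the next deployment.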
How to get this simple histogram?
I want to create this:
But I've come up with this one:
I've searched the internet first, but the histograms being plotted there were much more complicated.
\begin{tikzpicture}
\begin{axis}[
symbolic x coords={$0$, $1$, $2$},
xtick=data]
\addplot[ybar,fill=blue] coordinates {
($0$, 0.25)
($1$, 0.50)
($2$, 0.25)
};
\end{axis}
\end{tikzpicture}
You can easily draw it like normal rectangles using TikZ and fill them with north east lines from the patterns library.
\documentclass[border=5pt,tikz]{standalone}
\usetikzlibrary{patterns}
\begin{document}
\begin{tikzpicture}[yscale=4]
\draw (-1.5,0)--(3,0) (-1,0)--(-1,.75);
\draw[pattern=north east lines] (-.5,0) rectangle (.5,.25) (.5,0) rectangle (1.5,.5) (1.5,0) rectangle (2.5,.25);
\foreach \x in {0,1,2}
\draw (\x,0) -- (\x,-1pt) node[below]{\footnotesize\x};
\foreach \y in {0.25,0.50}
\draw (-1,\y) -- ({-1cm-4pt},\y) node[left]{\footnotesize\y};
\end{tikzpicture}
\end{document}
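If you prefer to stay within pgfplots, as in the original attempt, something along these lines should also work (the axis options here are a sketch, and it assumes \usetikzlibrary{patterns} is loaded):

```latex
\begin{tikzpicture}
\begin{axis}[
    ybar,
    bar width=1cm,
    ymin=0,
    xtick={0,1,2},
    ytick={0.25,0.50},
    axis lines=left,
    enlarge x limits=0.5]
\addplot[draw=black, pattern=north east lines]
    coordinates {(0,0.25) (1,0.50) (2,0.25)};
\end{axis}
\end{tikzpicture}
```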
The filling part wasn't actually necessary for me. So, it's much appreciated, thanks a lot!
You're welcome, good luck!
How to construct a Map with object properties as the key using Java streams and toMap or flatMap
I have a list of objects and need to create a Map whose key is a combination of two of the properties of each object. How can I achieve this in Java 8?
public class PersonDTOFun {
/**
* @param args
*/
public static void main(String[] args) {
List<PersonDTO> personDtoList = new ArrayList<>();
PersonDTO ob1 = new PersonDTO();
ob1.setStateCd("CT");
ob1.setStateNbr("8000");
personDtoList.add(ob1);
PersonDTO ob2 = new PersonDTO();
ob2.setStateCd("CT");
ob2.setStateNbr("8001");
personDtoList.add(ob2);
PersonDTO ob3 = new PersonDTO();
ob3.setStateCd("CT");
ob3.setStateNbr("8002");
personDtoList.add(ob3);
// personMap should map StateCd+StateNbr -> PersonDTO
Map<String, PersonDTO> personMap = personDtoList.stream().collect(Collectors.toMap(PersonDTO::getStateCd,
Function.identity(), (v1, v2) -> v2));
}
}
In the above code I want to construct personMap with the key as StateCd+StateNbr and the value as the PersonDTO. Since the existing toMap function only accepts a single function for the key, I wasn't able to create a key of StateCd+StateNbr.
Write a String getPersonKey() function that takes a PersonDTO and returns your desired key, then replace your PersonDTO::getStateCd argument with the name of this function.
Try it like this.
The key argument to toMap is a function producing the concatenation of the two values you described; the value is simply the object:
Map<String, PersonDTO> personMap =
personDtoList
.stream()
.collect(Collectors.toMap(p->p.getStateCd() + p.getStateNbr(), p->p));
If you believe you will have duplicate keys, then you have several choices; one is to include a merge function.
The one shown below preserves the value for the first (existing) key encountered:
Map<String, PersonDTO> personMap =
personDtoList
.stream()
.collect(Collectors.toMap(p->p.getStateCd() + p.getStateNbr(), p->p,
(existingValue, lastestValue)-> existingValue));
The next one keeps all instances of PersonDTO, putting values with the same key into a list:
Map<String, List<PersonDTO>> personMap =
personDtoList
.stream()
.collect(Collectors.groupingBy(p->p.getStateCd() + p.getStateNbr()));
As in the two answers above, you can use the toMap overload that takes two functions as input: the first extracts a unique key, the second extracts the value.
We prefer to hold the unique-key logic in the class itself and to call it directly via a Class::method reference as the key function; it makes the code much more readable than nested lambda functions.
So, using WJS's example:
Map<String, PersonDTO> personMap =
personDtoList
.stream()
.collect(Collectors.toMap(p->p.getStateCd() + p.getStateNbr(), p->p));
Map<String, PersonDTO> personMap =
personDtoList
.stream()
.collect(Collectors.toMap(PersonDTO::getUniqueKey, p->p));
Note that this is exactly the same as the above, logic-wise.
Imo, this is poor design. You're adding a method to a class that has nothing to do with the class itself. What if the OP had three fields to possibly combine taking two at a time for some future use in a different Map? Would you have methods in that class for all possible combinations? That would result in a bloated class.
I have fixed it with the following implementation:
Function<PersonDTO, String> keyFun = person -> person.getStateCd()+person.getStateNbr();
Map<String, PersonDTO> personMap = personDtoList.stream().collect(Collectors.toMap(keyFun,
Function.identity(),(v1,v2)-> v2));
The merge function makes this work even if we encounter a duplicate key while creating the dynamic key.
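Putting it together as a self-contained sketch (this Person class is a minimal stand-in for PersonDTO, and the merge function keeps the last value for a duplicate key):

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;

public class CompositeKeyDemo {
    // Minimal stand-in for PersonDTO, holding only the two key parts.
    static class Person {
        final String stateCd, stateNbr;
        Person(String stateCd, String stateNbr) {
            this.stateCd = stateCd;
            this.stateNbr = stateNbr;
        }
    }

    // Build a map keyed by the concatenation of the two properties.
    static Map<String, Person> byCompositeKey(List<Person> people) {
        return people.stream()
                .collect(Collectors.toMap(p -> p.stateCd + p.stateNbr,
                        Function.identity(), (v1, v2) -> v2));
    }

    public static void main(String[] args) {
        List<Person> people = Arrays.asList(
                new Person("CT", "8000"),
                new Person("CT", "8001"),
                new Person("CT", "8001")); // duplicate key resolved by the merge function
        // Two entries remain: CT8000 and CT8001 (iteration order unspecified)
        System.out.println(byCompositeKey(people).keySet());
    }
}
```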
Wrap automatically inserted TextBoxes with HTML in ASP.NET
I want to generate TextBoxes in my ASP.NET webpage, which works fine:
foreach (var Field in db.A_Settings)
{
TextBox t = new TextBox();
t.ID = Field.ID.ToString();
t.CssClass = "smallinput";
t.Text = Field.Text;
LabelPlaceHolder.Controls.Add(t);
}
And it nicely generates something like this:
<input name="ctl00$ContentPlaceHolder1$1" type="text" value="ValueA" id="ContentPlaceHolder1_1" class="smallinput">
<input name="ctl00$ContentPlaceHolder1$2" type="text" value="ValueB" id="ContentPlaceHolder1_4" class="smallinput">
<input name="ctl00$ContentPlaceHolder1$3" type="text" value="ValueC" id="ContentPlaceHolder1_5" class="smallinput">
It is correct, but in fact I want to wrap it with some HTML, like
<p>
<label>Label for the first TextBox obtained from database</label>
<span class="field">
<input name="ctl00$ContentPlaceHolder1$1" type="text" value="ValueA" id="ContentPlaceHolder1_1" class="smallinput">
</span>
</p>
I couldn't find out how to do it this way, so I was thinking about putting the boxes into a List<TextBox>, but I'm stuck there as well (the same problem: no idea how to wrap the object with HTML).
Is there any way to do this?
For any posts like "Why don't you add those TextBoxes manually?" I'll send a photo of me hitting my head on the keyboard, with a dump of the SQL with dozens of fields that need to be handled displayed on the screen :) Or a photo of a lemur. Lemurs are okay, too.
Not the cleanest solution, but should work...
foreach (var Field in db.A_Settings)
{
TextBox t = new TextBox();
t.ID = Field.ID.ToString();
t.CssClass = "smallinput";
t.Text = Field.Text;
//add literal control containing html that should appear before textbox
LabelPlaceHolder.Controls.Add(new LiteralControl("html before"));
LabelPlaceHolder.Controls.Add(t);
//add literal control containing html that should appear after textbox
LabelPlaceHolder.Controls.Add(new LiteralControl("html after"));
}
I would probably use a Repeater control:
<asp:Repeater ID="SettingsRepeater" runat="server">
<ItemTemplate>
<p>
<asp:Label ID="ItemLabel" runat="server"></asp:Label>
<span class="field">
<asp:TextBox ID="ItemTextbox" runat="server"></asp:TextBox>
</span>
</p>
</ItemTemplate>
</asp:Repeater>
And bind the list to the repeater:
SettingsRepeater.DataSource = db.A_Settings;
SettingsRepeater.DataBind();
Then write your ItemDataBound code to set the existing values.
It would be problematic to gather the fields' values and then save them back into the database - that is why my IDs are the same as in the database. And values may be set by simply doing <asp:TextBox ID="ItemTextbox" runat="server" Text='<%# Eval("Text") %>' />. Besides, the Repeater is sometimes buggy and problematic to use. Thanks for the answer anyway!
You could create a hidden field in the repeater item that contains the field id, or give the textbox an attribute: ItemTextbox.Attributes["id"] = Field.ID; and read that back as you iterate the repeater items. But to each his own.
You want HtmlGenericControl controls.
foreach (var Field in db.A_Settings)
{
TextBox t = new TextBox();
t.ID = Field.ID.ToString();
t.CssClass = "smallinput";
t.Text = Field.Text;
var label = new HtmlGenericControl("label");
label.Controls.Add(new LiteralControl("LABEL TEXT"));
var p = new HtmlGenericControl("p");
p.Controls.Add(label);
var span = new HtmlGenericControl("span");
span.Attributes.Add("class", "field");
span.Controls.Add(t);
p.Controls.Add(span);
LabelPlaceHolder.Controls.Add(p);
}
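The control tree built above maps directly onto nested HTML elements. As a language-neutral illustration (not ASP.NET code — a hypothetical Python sketch using the standard ElementTree module), the same <p><label>…</label><span class="field"><input/></span></p> nesting looks like this:

```python
# Sketch of the nesting produced by the HtmlGenericControl approach;
# the field id, label text, and value are hypothetical.
import xml.etree.ElementTree as ET

def wrap_input(field_id, label_text, value):
    p = ET.Element("p")
    label = ET.SubElement(p, "label")
    label.text = label_text
    span = ET.SubElement(p, "span", attrib={"class": "field"})
    ET.SubElement(span, "input", attrib={
        "type": "text", "id": str(field_id),
        "value": value, "class": "smallinput",
    })
    return ET.tostring(p, encoding="unicode")

html = wrap_input(1, "First field", "ValueA")
print(html)
```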
You can create custom ASP.NET Controls that can render any HTML you need. Have a look at this:
Developing a Simple ASP.NET Server Control. This way you can create a control called CustomTextBox which will render a TextBox inside a paragraph.
Macro : Generate dynamic check boxes for VBA form
Is there any way to create checkboxes in a VBA form from the cell headers?
That is, each column header name should become a checkbox.
For example ..
One Two Three Four
1 2 3 4
1 2 3 4
1 2 3 4
In the vba form it should be contain
One
Two
Three
Four
as check boxes ..
Am I expecting the right thing? If not, please explain any other way.
Thanks in advance..
Note: no external software or tools should be used.
Put the following code in your UserForm:
Option Explicit

Private Sub UserForm_Initialize()
    Dim LastColumn As Long
    Dim i As Long
    Dim chkBox As MSForms.CheckBox
    LastColumn = Worksheets("Sheet1").Cells(1, Columns.Count).End(xlToLeft).Column
    For i = 1 To LastColumn
        Set chkBox = Me.Controls.Add("Forms.CheckBox.1", "CheckBox_" & i)
        chkBox.Caption = Worksheets("Sheet1").Cells(1, i).Value
        chkBox.Left = 5
        chkBox.Top = 5 + ((i - 1) * 20)
    Next i
End Sub
You will need to modify the code to suit your specific needs, but that will get you started.
But it only adds the last column to the form. I expect all the columns to the form..
Something like this in the frmABC module (could be in Sub UserForm_Initialize() or anywhere else you would like this to run):
frmABC.chkX.Caption=worksheets("tab_name").range("A2")
What's the meaning of rocks in that context?
— Met him what? he asked.
— Here, she said. What does that mean?
He leaned downward and read near her polished thumbnail.
— Metempsychosis?
— Yes. Who’s he when he’s at home?
— Metempsychosis, he said, frowning. It’s Greek: from the Greek. That means the transmigration of souls.
— O, rocks! she said. Tell us in plain words.
Which is from Joyce's book,Ulysses.
Does it mean "something that threatens or causes disaster" (a sense often used in the plural)?
Does it mean difficulties, especially in contrast with "plain words"?
I found that the translations are weird, so I want some confirmation from native speakers...
Joyce's use of words is famously strange, but this looks to me like a random word used as an expletive (an exclamation of annoyance).
@KateBunting - often seen in Irish material, especially where Joyce is involved, and has been called "Molly Bloom's favourite expression"
@KateBunting I found that rocks means testis too.
Joyce's use of English is highly varied and sometimes quite strange, but as the comment by Kate Bunting indicates, this seems to be a simple expletive, in which an ordinary word is used as an expression of annoyance, rather than a profanity or blasphemy that many people might use in such a situation.
I remember that in several of the stories of Saki the word "Rats" is used in a similar way. In particular, in "Tobermory" a character thinks but does not say "Rats", and the narrator refers to them as "those rodents of disbelief".
As the comment by Michael Harvey points out, the expression "Oh Rocks" is frequently used by the character Molly Bloom. In "James Joyce's concept of the underthought: a reflection on some similarities with the work of Wittgenstein" by Mike Harding, the author writes:
I still regard Joyce as the greatest creative novelist, but after having logged on to various websites to see the current state of Joycean play, I logged off fairly swiftly with Molly Bloom's favourite expression in mind: Oh, Rocks!
I would mention that both older usages, intentional wordplay, and the assumptions that the reader knows classical, biblical, and other literature, make Joyce a sometimes confusing author for any modern reader, and particularly for a learner.
Thank you for citing the details. Does Mike Harding mean the novel Ulysses is not as great as people think? I find it a little boring too, but having read about 4 chapters, something compels me to finish it. I think people love the style of the story more.
@William I think he means the recent critical comments on the novel, but I am not sure. In any case that is very much a matter of opinion.
This is a "minced oath", but remember this chapter is known as "Calypso". And Molly is playing the role of the sea-witch that has captured Ulysses on her island...
I always think of a "minced oath" as something that suggests the original, like "darn" for "Damn" or "frick" or "fudge" for "fuck". But perhaps any substitute expletive counts.
I found that "rocks" can mean testicles too.
What escrow system does the official Bitcoin bounty progam use?
There is a list of bounties on the wiki here:
https://en.bitcoin.it/wiki/Active_Bounties
How are the BTC pledges collated and eventually paid? I assume there is some kind of escrow system but I can't find out anything about it.
Usually they're a multisig transaction between a couple of core developers. One recent one was a 2-of-3 between Greg Maxwell and a couple of others. It's mainly just to prevent hit-by-a-bus scenarios, we trust in these people absolutely.
Use firebase tokens to query an API in a client
I used firebase authentication to secure my ASP.NET CORE api.
I actually store the users in the database my API uses. Note that in my database the google identifiers are the uids generated by the firebase authentication and that the classic identifiers (login + password) are generated in my API.
When the user connects with Google, the token is created in the client (Angular), so I send it to my API (of course I don't store it); I just check that the token is valid and that the ID contained in the token corresponds to the identifier of one of the users stored in my database.
In my client, for google authentification:
async GoogleAuth() {
try {
return new Promise((resolve, reject) => {
signInWithPopup(this.auth, this.provider).then(() => {
this.auth.onAuthStateChanged((user) => {
if (user) {
user
.getIdToken()
.then((idToken) => {
this.sendTokenUserGoogleToAPI(user, idToken)
.then((data: any) => {
localStorage.setItem('token', data.token);
resolve(data);
})
.catch((error) => {
console.log("googleAuth : " + error)
reject(error);
});
})
.catch(() => { });
}
});
});
});
} catch (e) { }
}
For the classic connection (login + password), the data is sent directly to my API; I create a custom token containing the user ID in my backend and send it to the client, which then generates an ID token from that custom token.
In my API, for classic authentication :
[HttpPost]
[ActionName("signin")]
public async Task<ActionResult> SignIn([FromBody] UserLoginViewModel userModel)
{
var user = await _context.Users.FirstOrDefaultAsync(u =>
u.Login == userModel.Login && u.Password == userModel.Password);
if (user == null)
{
_logger.LogWarning("Connection attempt failed.");
return NotFound(new { message = "User or password invalid." });
}
if (user.IsLocked)
{
return new ObjectResult(new { message = "Your account has been blocked." }) { StatusCode = 403 };
}
var token = await FirebaseAuth.DefaultInstance.CreateCustomTokenAsync(user.UserId);
var login = user.Login;
return Ok(new
{
login,
token
});
}
In my client, when I receive the token from my API
signInWithCustomToken(getAuth(),token)
.then((userCredential) => {
const user = userCredential.user;
console.log(token)
user!.getIdToken(true).then((idToken) => {
localStorage.setItem('token', idToken)
}).catch((error) => {
console.log(error)
})
})
So I'm guessing I shouldn't generate the token in my backend, and should only return the ID so the client generates the token? I regenerate the token in my client so that the user can access the chat.
With my client, is it better that I get the token by querying firebase each time or is it better that I store this token locally to be able to use it in my requests?
For the moment, I store it locally, but I think that can be problematic if the token changes or if an attacker modifies his token: I verify with Firebase that the user is connected, so if the local token changes, Firebase will still say the user is logged in, but in my API the token will not be valid.
Answered below, based on an assumption that you're asking about a Firebase Authentication ID token, and a link to Android docs. In the future please provide more detailed information (such as the tag I added) and code of what you're doing (which also quite handily won't require translation).
The ID token you get from Firebase Authentication has an exp property/claim that shows until when it's valid. Firebase's own SDKs refresh the token about 5 minutes before it expires, so your code should probably do the same. In fact, if you listen for when the ID token changes (Android, web), you don't have to force a refresh yourself and can just piggyback on the work the SDK already does for you.
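The exp claim mentioned above can be inspected without any Firebase SDK, since an ID token is a JWT whose payload segment is base64url-encoded JSON. A minimal Python sketch (the token below is fabricated and unsigned, for illustration only — never skip signature verification when trusting real tokens):

```python
import base64, json, time

def jwt_payload(token):
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def expires_soon(token, margin_seconds=300):
    """True if the token expires within margin_seconds -- roughly the
    5-minute window in which Firebase's own SDKs refresh it."""
    return jwt_payload(token)["exp"] - time.time() < margin_seconds

# Build a fake token expiring in 10 minutes (header and signature are dummies).
payload = base64.urlsafe_b64encode(
    json.dumps({"exp": int(time.time()) + 600}).encode()).decode().rstrip("=")
fake_token = "e30." + payload + ".sig"
print(expires_soon(fake_token))  # False: more than 5 minutes left
```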
If the users are signing in client-side you should not have to generate a custom token for them. Can you edit your question to show more of the code? Code is better than words 99% of the time here, as we're all better at interpreting it (and it's much less ambiguous). Also see how to ask a good question and how to create a minimal, complete, verifiable example.
Don't post large updates in comments, and definitely not in an answer (it'll get deleted, as it's not an answer). Instead, edit your question to include the minimal-but-complete information needed. Instead of describing your implementation, I recommend showing us the actual code that any of us can run to reproduce the problem. That'll make it much more likely someone here can help you.
Do you understand better?
Thanks, that helps. The approach in my answer is still how to ensure you always have the right ID token, which is listening to onIdTokenChanged: https://firebase.google.com/docs/reference/js/v8/firebase.auth.Auth#onidtokenchanged
For the moment I do not use it, but I would like to, because indeed I have problems: it can happen that my API receives expired tokens while the user is still connected in my client.
What I was actually asking, is it better to use onIdTokenChanged and store it in localstorage or fetch the token for each request with user.getIdToken ?
There's no significant difference between the two, as getIdToken will also simply return the existing value from a local cache. Only if you force a refresh every time would there be a significant difference, but that's an anti-pattern (that we unfortunately see quite regularly).
ok thanks and i had one last question if you still have time
For users who log in in the classic way. What do you recommend for authenticating users to firebase ?
Add numbers without carrying
Lets say I have two numbers,
Number 1:
1646
Number 2:
2089
You see, adding them left to right without the carry adds up to 3625. How would I do this in Python? I need it to add 9+6, which is 15, but not to carry the 1, and likewise when it adds 8+4. Is there any way to add like this in Python? The end result I'm looking for is
3625
Since it doesn't carry on remaining numbers
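The rule described above — add the digits column by column and keep only each column's units digit — can be sketched directly (this assumes both numbers have the same number of digits):

```python
def add_no_carry(a, b):
    # Pair up digits positionally and keep only the units digit of each sum.
    return int("".join(str((int(x) + int(y)) % 10)
                       for x, y in zip(str(a), str(b))))

print(add_no_carry(1646, 2089))  # 3625
```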
You can try splitting the numbers by converting them to strings and adding the digits separately.
yes, but is there any more efficient way of doing this?
Does it matter if there is a more efficient way of doing it? Surely that would be the most legible, straightforward, and easy-to-understand way of doing it.
When you say efficient, do you mean fast or fewest lines of code? Because if you mean the first, is it really going to slow down your code?
Hm, a little of both, but I don't want a huge amount of code to do this. roipii's answer has that, but I'm not sure how the zip works or what exactly it's doing.
This will work with a varying number N of integers of varying lengths:
from itertools import izip_longest
nums = [1646,
2089,
345]
revs = [str(n)[-1::-1] for n in nums] # nums as reversed strings
izl = izip_longest(*revs, fillvalue = '0') # zip the digits together
xsum = lambda ds: str(sum(map(int, ds)))[-1] # digits -> last digit of their sum
rres = ''.join(xsum(ds) for ds in izl) # result, but as reversed string
res = int(rres[-1::-1]) # result: 3960
Similar idea, but using map rather than izip_longest. I like this one better.
revs = [str(n)[-1::-1] for n in nums] # nums as reversed strings
dsum = lambda *ds: sum(int(d or 0) for d in ds) # str-digits -> their sum
sums = map(dsum, *revs)[-1::-1] # digit sums, in order
ones = [str(s % 10) for s in sums] # last digits of the sums
res = int(''.join(ones)) # result: 3960
Clever to flip it around and use izip_longest. :)
Because I like to take poetic license with open-ended requests for 'more efficient':
In [49]: a,b = 1646,2089
In [50]: ''.join(str(sum(map(int,x)))[-1] for x in zip(str(a),str(b)))
Out[50]: '3625'
hm, that looks good but can you explain what the zip is doing?
@ruler you take 2 iterables and 'zip' them together (like a zipper) to make a number of 2-tuples equal to the length of the shortest iterable. But honestly this is more useful as code golf than anything else. Write your own function that does the above - albeit in more lines - that a human can actually read. Just because I managed to do it in one line does not make it 'pythonic'.
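To make the comment above concrete, here is what zip actually produces for the two digit strings (illustration only):

```python
# zip pairs items positionally, like a zipper.
pairs = list(zip("1646", "2089"))
print(pairs)  # [('1', '2'), ('6', '0'), ('4', '8'), ('6', '9')]

# With inputs of different lengths, zip stops at the shorter one --
# which is why izip_longest is needed for unequal-length numbers.
print(list(zip("164", "20893")))  # [('1', '2'), ('6', '0'), ('4', '8')]
```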
All respect to roppi, this is very clever (golfy ;)) code, but @ruler its not code you should consider for production for the very reason that you aren't sure what its doing. It will fall down if the strings are different lengths.
@roipi can enumerate be used for this?
I think it's a bit clearer without the map: int(''.join(str(int(a)+int(b))[-1] for a, b in izip(str(n1), str(n2))) )
This will work even if the numbers of digits is not the same in both the numbers.
num1, num2, i, temp = 1646, 2089, 10, 0
total = num1 + num2
while num1 or num2:
if (num1 % 10) + (num2 % 10) > 9: temp += i
num1, num2, i = num1 // 10, num2 // 10, i * 10
print total - temp
Output
3625
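The same arithmetic idea — subtract out each column's carry from the true sum rather than touching strings — can be written in Python 3 like this (a sketch; the function name is hypothetical):

```python
def add_without_carry(a, b):
    total, place = a + b, 1
    while a or b:
        if a % 10 + b % 10 > 9:   # this column would produce a carry
            total -= 10 * place    # remove that carry's contribution
        a, b, place = a // 10, b // 10, place * 10
    return total

print(add_without_carry(1646, 2089))  # 3625
```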
The following code demonstrates what you want, but please consider it only as a hint. The variable names and lack of exception handling would jeopardize your real code.
import operator
n1 = 1646
n2 = 2089
l1 = [int(x) for x in str(n1)]
l2 = [int(x) for x in str(n2)]
res1 = map(operator.add, l1, l2)
res2 = [str(x)[-1] for x in res1]
num = "".join(res2)
print int(num)
Might I recommend changing the last loop to num = "".join(res2)?
@SethMMorton Of course :) I'm new to Python too; I'll change it now.
This is kind of ugly because I do not know how to index a digit in an integer, only a string, but it works.
If the two strings are always the same size:
A = str(1646)
B = str(2089)
result = ""
for i in range(0,len(A)):
result += str((int(A[i]) + int(B[i]))%10)
result = int(result)
If the two strings are not always the same size, find which one is bigger (length wise). Say the length of the biggest is X and the others length is Y where X > Y. Append the first X-Y indexes of the bigger string onto result, then repeat the above with the remaining digits.
You also want to only take the units digit after adding int(A[i]) + int(B[i]), otherwise if the result is > 9...
The numbers might be different lengths, so convert both to strings and reverse them with an extended slice ([::-1], see Reverse a string in Python) to make indexing easier, then reverse back at the end:
result=""
A=str(1646)[::-1]
B=str(2089)[::-1]
for ndx in range(0,max(len(A),len(B))):
result += str(int(A[ndx])+int(B[ndx]))
result = int(result[::-1])
You can get carry pretty easily, and handle unequal length strings (explicitly),
#!/bin/env python
a=1646
b=20893
A=str(a)[::-1]
B=str(b)[::-1]
lenA = len(A)
lenB = len(B)
length = max(lenA,lenB)
print "length: "+str(length)
#string add,no carry
total=""
for ndx in range(0,length):
digit = 0
if(ndx<lenA):
digit += int(A[ndx])
if(ndx<lenB):
digit += int(B[ndx])
digit = digit%10
#print "digit: "+str(digit)+", carry: "+str(carry)
total += str(digit)
print "a: " +str(a)+"+b: " +str(b)
result = total[::-1]
result = int(result)
print "result: " +str(result)
#string add,with carry
total=""
carry=0
for ndx in range(0,length):
digit = carry
if(ndx<lenA):
digit += int(A[ndx])
if(ndx<lenB):
digit += int(B[ndx])
carry = digit/10
digit = digit%10
#print "digit: "+str(digit)+", carry: "+str(carry)
total += str(digit)
if(carry>0):
total += str(carry)
#print "digit: "+str(digit)+", carry: "+str(carry)
print "a: " +str(a)+"+b: " +str(b)
result = total[::-1]
result = int(result)
print "result: " +str(result)
First convert the numbers into strings so we can iterate over the digits:
>>> n1 = str(1646)
>>> n2 = str(2089)
We can then add the corresponding digits then do %10 to get the last digit. All numbers which result from adding two integers 0-9 will be <= 18, therefore %10 will always return the last digit.
>>> [(int(a)+int(b))%10 for a,b in zip(n1,n2)]
[3, 6, 2, 5]
We then convert each int into a string, join the digits and convert back to a int:
>>> int(''.join(map(str,((int(a)+int(b))%10 for a,b in zip(n1,n2)))))
3625
Or alternatively (converting the digits as you go):
>>> int(''.join(str((int(a)+int(b))%10) for a,b in zip(n1,n2)))
3625
Use izip_longest(...,fillvalue="0") to add numbers of different lengths.
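In Python 3, itertools.izip_longest became itertools.zip_longest; a sketch of the unequal-length case using it (function name hypothetical):

```python
from itertools import zip_longest

def add_digits_no_carry(a, b):
    # Reverse so positions align from the units digit, pad the shorter
    # number with '0', then keep only each column's units digit.
    cols = zip_longest(str(a)[::-1], str(b)[::-1], fillvalue="0")
    digits = [str((int(x) + int(y)) % 10) for x, y in cols]
    return int("".join(digits)[::-1])

print(add_digits_no_carry(1646, 20893))  # 21439
```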
does dbe.forum module exist in django
Is this module deprecated? What is the replacement for it, if so?
Django Version: 1.6
Python Version: 2.7.6
The following is stack trace:
Traceback:
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
101. resolver_match = resolver.resolve(request.path_info)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/urlresolvers.py" in resolve
320. sub_match = pattern.resolve(new_path)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/urlresolvers.py" in resolve
222. return ResolverMatch(self.callback, args, kwargs, self.name)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/urlresolvers.py" in callback
229. self._callback = get_callable(self._callback_str)
File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/functional.py" in wrapper
32. result = func(*args)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/urlresolvers.py" in get_callable
100. not module_has_submodule(import_module(parentmod), submod)):
File "/app/.heroku/python/lib/python2.7/site-packages/django/utils/importlib.py" in import_module
40. __import__(name)
Exception Type: ImportError at /
Exception Value: No module named dbe.forum
It looks like somewhere in your code you try to import dbe.forum. Make sure you have dbe listed under INSTALLED_APPS in your settings, or remove any reference to the dbe module in your models/views/forms code.
Seeking help to find the exact value of $ \int_{0}^{\frac{\pi}{2}} \frac{x}{\sin ^{2n} x+\cos ^{2n} x} d x $ using substitutions?
Latest Edit
The closed form for $$I_n=
\frac{\pi}{4} \int_{0}^{\frac{\pi}{2}} \frac{d x}{\sin ^{2n} x+\cos ^{2n} x}
$$
is
$$
\boxed{I_{n}=\frac{\pi^2}{8 n} \sum_{k=0}^{n-1}\left(\begin{array}{c}
n-1 \\
k
\end{array}\right) \csc \frac{(2 k+1) \pi}{2 n}}
$$
Proof:
Letting $t=\tan x$ yields $$
\begin{aligned}
\int_{0}^{\infty} \frac{\left(1+t^{2}\right)^{n-1}}{t^{2 n}+1}dt &=\sum_{k=0}^{n-1}\left(\begin{array}{c}
n-1 \\
k
\end{array}\right) \int_{0}^{\infty} \frac{t^{2 k}}{t^{2 n}+1} d t .
\end{aligned}
$$
By my post, $$\int_{0}^{\infty} \frac{x^{r}}{x^{m}+1} d x=\frac{\pi}{m} \csc \frac{(r+1) \pi}{m},$$
We can now get its closed form: $$
\boxed{I_{n}=\frac{\pi^2}{8 n} \sum_{k=0}^{n-1}\left(\begin{array}{c}
n-1 \\
k
\end{array}\right) \csc \frac{(2 k+1) \pi}{2 n}}
$$
Original version
As the integral is not so difficult for $n=1,2,3$, I just show how to find the exact value of the integral when $n=4.$
$$
I:=\int_{0}^{\frac{\pi}{2}} \frac{x}{\sin ^{8} x+\cos ^{8} x} d x
,$$
I changed the integral, as usual, by letting $x\mapsto \frac{\pi}{2} -x$, $$I=
\frac{\pi}{4} \int_{0}^{\frac{\pi}{2}} \frac{d x}{\sin ^{8} x+\cos ^{8} x}
$$
Then multiplying both numerator and denominator by $\sec^8x$ and letting $t=\tan x $ yields $$
I=\frac{\pi}{4} \int_{0}^{\infty} \frac{\left(1+t^{2}\right)^{3}}{t^{8}+1} d t
$$
I was then stuck by the powers and start to think how to reduce them. Thinking for couple of days, I found a way to solve it with partial fractions only. Now I am going to share it with you.
Observing that $$
\int_{0}^{\infty} \frac{d t}{t^{8}+1}\stackrel{t\mapsto\frac{1}{t}}{=}
\int_{0}^{\infty} \frac{t^{6}}{t^{8}+1} d t$$ and
$$
\int_{0}^{\infty} \frac{t^2d t}{t^{8}+1}\stackrel{t\mapsto\frac{1}{t}}{=}
\int_{0}^{\infty} \frac{t^{4}}{t^{8}+1} d t,$$
we can reduce the power of the numerator to 2 that $$
I=\frac{\pi}{2} \underbrace{\int_{0}^{\infty} \frac{1+3 t^{2}}{t^{8}+1} d t}_{J}
$$
To handle the power 8 in the denominator, we resolve the integrand into partial fractions. $$
J=\frac{1}{2 \sqrt{2}} \left[\underbrace{\int_{0}^{\infty} \frac{t^{2}-(3-\sqrt{2})}{t^{4}+\sqrt{2} t^{2}+1} d t}_{K}-\underbrace{\int_{0}^{\infty} \frac{t^{2}-(3+\sqrt{2})}{t^{4}-\sqrt{2} t^{2}+1} d t}_{L} \right]
$$
To deal with $K$ and $L$, we play a little trick. $$
\begin{aligned}
K &=\int_{0}^{\infty} \frac{1-\frac{3-\sqrt{2}}{t^{2}}}{t^{2}+\frac{1}{t^{2}}+\sqrt{2}} d t \\
&=\int_{0}^{\infty} \frac{\frac{\sqrt{2}-2}{2}\left(1+\frac{1}{t^{2}}\right)+\frac{4-\sqrt{2}}{2}\left(1-\frac{1}{t^{2}}\right)}{t^{2}+\frac{1}{t^{2}}+\sqrt{2}} d t \\
&=\frac{\sqrt{2}-2}{2} \int_{0}^{\infty} \frac{d\left(t-\frac{1}{t}\right)}{\left(t-\frac{1}{t}\right)^{2}+(2+\sqrt{2})}+\frac{4-\sqrt{2}}{2} \int_{0}^{\infty} \frac{d\left(t+\frac{1}{t}\right)}{\left(t+\frac{1}{t}\right)^{2}-(2 -\sqrt{2})}\\
&=\frac{\sqrt{2}-2}{2 \sqrt{\sqrt{2}+2}}\left[\tan ^{-1}\left(\frac{t-\frac{1}{t}}{\sqrt{\sqrt{2}+2}}\right)\right]_{0}^{\infty}+0 \\
&=\frac{(\sqrt{2}-2) \pi}{2 \sqrt{\sqrt{2}+2}}
\end{aligned}
$$
Similarly,
$$
\begin{aligned}
L &=-\frac{2+\sqrt{2}}{2} \int_{0}^{\infty} \frac{d\left(t-\frac{1}{t}\right)}{\left(t-\frac{1}{t}\right)^{2}+(2-\sqrt{2})} =-\frac{(2+\sqrt{2}) \pi}{2 \sqrt{2-\sqrt{2}}}
\end{aligned}
$$
$$
\therefore J=\frac{1}{\sqrt{2}}\left[\frac{(\sqrt{2}-2) \pi}{2 \sqrt{\sqrt{2}+2}}+\frac{(2+\sqrt{2}) \pi}{2 \sqrt{2-\sqrt{2}}}\right] =\frac{\pi}{4} \sqrt{10-\sqrt{2}}$$
Hence we can conclude that
$$\boxed{I=\frac{\pi^{2}}{8} \sqrt{10-\sqrt{2}}}$$
How about when $n\geq 5$, $$ \int_{0}^{\frac{\pi}{2}} \frac{x}{\sin ^{2n} x+\cos ^{2n} x} d x ?$$
Would you please help me?
You may want to make your solution an answer to this question.
Nice and clean (+1). The wonderful thing is that using beta function produces a bit different result.
Would you please show us?
Nice, clean, well done and exact result ! My only problem is that I cannot upvote more than once. Cheers :-)
@LaxmiNarayanBhandari. Yes, please : show the solution with the beta function. I cannot understand why the result could be different. It could be very interesting. Thanks & cheers :-)
Thank both of you for your appreciation.
$\int_0^{\infty} \frac { (t^2 + 1)^3}{t^8 + 1} \ dt$ screams to me to use complex analysis.
Opinions and alternatives are welcome
@Lai. If you do not want complex integration, you are the best !
I am so happy to hear that!
I’m voting to close this question because it is currently not a question. The found answer should be removed from the question and put in the answer section.
The closed form for $$I_n=
\frac{\pi}{4} \int_{0}^{\frac{\pi}{2}} \frac{d x}{\sin ^{2n} x+\cos ^{2n} x}
$$
is
$$
\boxed{I_{n}=\frac{\pi^2}{8 n} \sum_{k=0}^{n-1}\left(\begin{array}{c}
n-1 \\
k
\end{array}\right) \csc \frac{(2 k+1) \pi}{2 n}}
$$
Proof:
Letting $t=\tan x$ yields $$
\begin{aligned}
I_{n} &=\frac{\pi}{4} \int_{0}^{\infty} \frac{\left(1+t^{2}\right)^{n-1}}{t^{2 n}+1}\,dt \\
&= \frac{\pi}{4} \sum_{k=0}^{n-1}\left(\begin{array}{c}
n-1 \\
k
\end{array}\right) \int_{0}^{\infty} \frac{t^{2 k}}{t^{2 n}+1} d t .
\end{aligned}
$$
By my post, $$\int_{0}^{\infty} \frac{x^{r}}{x^{m}+1} d x=\frac{\pi}{m} \csc \frac{(r+1) \pi}{m},$$
We can now get its closed form: $$
\boxed{I_{n}=\frac{\pi^2}{8 n} \sum_{k=0}^{n-1}\left(\begin{array}{c}
n-1 \\
k
\end{array}\right) \csc \frac{(2 k+1) \pi}{2 n}}
$$
For examples:
\begin{aligned}
I_{2}&=\frac{\pi^2}{16}\left(\csc \frac{\pi}{4}+\csc \frac{3 \pi}{4}\right)=\frac{\sqrt{2} \pi^{2}}{8} \\
I_{3}&=\frac{\pi^2}{24}\left(\csc \frac{\pi}{6}+2 \csc \frac{3 \pi}{6}+\csc \frac{5 \pi}{6}\right)=\frac{\pi^{2}}{4} \\
I_{4}&=\frac{\pi^2}{32} \left[\left(\begin{array}{l}
3 \\
0
\end{array}\right) \csc \frac{\pi}{8}+\left(\begin{array}{l}
3 \\
1
\end{array}\right) \csc \frac{3 \pi}{8}+\left(\begin{array}{l}
3 \\
2
\end{array}\right) \csc \frac{5 \pi}{8}+\left(\begin{array}{l}
3 \\
3
\end{array}\right) \csc \frac{7\pi}{8}\right]= \frac{\pi^2}8\sqrt{10-\sqrt2}
\end{aligned}
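As a quick numerical sanity check of the boxed closed form (not part of the proof), one can evaluate the csc sum and compare it against the radical forms above:

```python
import math

def I_closed(n):
    # I_n = pi^2/(8n) * sum_{k=0}^{n-1} C(n-1, k) * csc((2k+1)pi/(2n))
    return (math.pi**2 / (8 * n)) * sum(
        math.comb(n - 1, k) / math.sin((2 * k + 1) * math.pi / (2 * n))
        for k in range(n))

print(math.isclose(I_closed(2), math.sqrt(2) * math.pi**2 / 8))               # True
print(math.isclose(I_closed(3), math.pi**2 / 4))                              # True
print(math.isclose(I_closed(4), (math.pi**2 / 8) * math.sqrt(10 - math.sqrt(2))))  # True
```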
Too long for a comment.
Very interested by the post, I played using the same approach and considered the more general case of
$$I_n=\int_{0}^{\frac{\pi}{2}} \frac{x}{\sin ^{n} (x)+\cos ^{n} (x)} \,dx=\frac \pi 4\int_{0}^{\frac{\pi}{2}} \frac{dx}{\sin ^{n} (x)+\cos ^{n} (x)} $$ Using, as you did, $x=\tan ^{-1}(t)$, then
$$I_n=\frac \pi 4\int_{0}^\infty \frac{\left(t^2+1\right)^{\frac{n}{2}-1}}{t^n+1}\,dt$$
The method does not work for odd values of $n$ (this is normal since the solution is given in terms of Meijer G-functions). But, for even values of $n$, we need one more integral each time and the result, is given by
$$J_m=I_{2m}=\frac \pi 4\int_{0}^\infty \frac{\left(t^2+1\right)^{m-1}}{t^{2m}+1}\,dt=\frac {\pi^2} 4 a_m$$ The very first $a_m$ are
$$\left\{\frac{1}{2},\frac{1}{\sqrt{2}},1,\frac{\sqrt{10-\sqrt{2}}}{2},\sqrt{5},\frac{1}{3} \sqrt{\frac{379}{2}-44 \sqrt{3}}\right\}$$
Now, using a CAS, what is interesting is that, when $m>6$, if $m$ is odd, the next ones are given by the solution of polynomial equations (cubic for $m=7,9$, quintic for $m=11$, sextic for $m=13$ and so on).
For example, to obtain $J_7$, we need to solve $y^3-10 y^2-32 y+328=0$ which makes
$$J_7=\frac{1}{6} \left(5+14 \sin \left(\frac{1}{3} \sin
^{-1}\left(\frac{71}{98}\right)\right)\right)$$
This is where I got with complex integration.
$\frac {\pi}{2}\int_0^{\infty} \frac {(t^2 + 1)^3}{t^8 + 1}\,dt = \frac {\pi}{4}\int_{-\infty}^{\infty} \frac {(t^2 + 1)^3}{t^8 + 1}\,dt$
Using the contour of the semicircle in the upper half plane....
$\lim_\limits{R\to \infty} \left|\frac {(R^2 + 1)^3}{R^8 + 1}R\right| = 0$
The integral along the circular part of the path equals 0.
There are 4 poles inside the contour. Evaluating the residues....
$\frac {\pi}{4}(2\pi i) \left(\frac{(e^{i\frac {\pi}{4}} + 1)^3}{8e^{\frac{7\pi}{8}i}} +\frac{(e^{i\frac {3\pi}{4}} + 1)^3}{8e^{\frac {21\pi}{8}i}}+\frac{(e^{i\frac {5\pi}{4}} + 1)^3}{8e^{\frac {35\pi}{8}i}} + \frac{(e^{i\frac {7\pi}{4}} + 1)^3}{8e^{\frac {49\pi}{8}i}}\right)$
$\frac {\pi}{32}(2\pi i) \left(-e^{\frac {\pi}{8}i}(e^{\frac {\pi}{4}i} + 1)^3 -e^{\frac {3\pi}{8}i}(e^{\frac {3\pi}{4}i} + 1)^3 - e^{\frac {5\pi}{8}i}(e^{\frac {5\pi}{4}i} + 1)^3 - e^{\frac {7\pi}{8}i}(e^{\frac {7\pi}{4}i} + 1)^3\right)$
$e^{ki}(e^{2ki} + 1)^3 = e^{ki}(e^{6ki} + 3e^{4ki} + 3e^{2ki} + 1) = e^{7ki} + 3e^{5ki} + 3e^{3ki} + e^{ki}$
$(-\frac {\pi^2}{16}i)(2e^{\frac{\pi}{8}i} + 8e^{\frac{3\pi}{8}i} + 8e^{\frac{5\pi}{8}i} + 2e^{\frac{7\pi}{8}i}+6e^{\frac{9\pi}{8}i} + 6e^{\frac{15\pi}{8}i})$
$e^{ix} + e^{(\pi - x)i} = 2i\sin x\\
2e^{\frac{\pi}{8}i} + 2e^{\frac{7\pi}{8}i} + 8e^{\frac{3\pi}{8}i} + 6e^{\frac{9\pi}{8}i} + 6e^{\frac{15\pi}{8}i} = 4i\sin\frac {\pi}{8} + 16i\sin \frac {3\pi}{8} - 12 i \sin \frac{\pi}{8}$
$(\frac {\pi^2}{16})(16\sin \frac {3\pi}{8} - 8 \sin \frac{\pi}{8})$
$(\frac {\pi^2}{16})(8\sqrt {2+\sqrt 2} - 4\sqrt {2-\sqrt 2})$
$(\frac {\pi^2}{4})(2\sqrt {2+\sqrt 2} - \sqrt {2-\sqrt 2})$
$\sqrt {10 - \sqrt 2} = (2\sqrt {2+\sqrt 2} - \sqrt {2-\sqrt 2})$
Per David's suggestion,
Suppose we take the contour from $0$ to $R$ along the real line, from $R$ to $Re^{\frac {\pi}{4}i}$ along the arc, and back to $0$ in a line.
$\int_0^R\frac {(t^2 + 1)^3}{t^8+1}\ dt = \int_0^R\frac {t^6}{t^8+1}\ dt + 3\int_0^R\frac {t^4}{t^8+1}\ dt+ 3\int_0^R\frac {t^2}{t^8+1}\ dt + \int_0^R\frac {1}{t^8+1}\ dt$
$\int_0^R \frac {t^n}{t^8 + 1}\ dt + \int_0^R \frac {(e^{\frac {\pi}{4}i} t)^n}{(e^{\frac {\pi}{4}i} t)^8 + 1}\ d(e^{\frac {\pi}{4}i} t)\\
(1-e^{\frac {(n+1)\pi}{4}i})\int_0^R \frac {t^n}{t^8 + 1}\ dt$
for $n\le 6$
$\lim_\limits{R\to \infty} \int_0^R \frac {t^n}{t^8 + 1}\ dt = \frac {2\pi i}{1-e^{\frac {(n+1)\pi}{4} i}} \text { Res}_{z=e^{\frac {\pi}{8}i}} \left(\frac {z^n}{z^8 + 1}\right)$
$\frac {2\pi i}{1-e^{\frac {(n+1)\pi}{4} i}} \left(\frac {e^{\frac {n\pi}{8}i}}{8e^{\frac {7\pi}{8}i}}\right)$
$\frac {2\pi i}{8\left(e^{\frac {(7-n)\pi}{8}i}-e^{\frac {(n+9)\pi}{8} i}\right)}$
$\frac {2\pi i}{8\left(2i\sin \frac {(7-n)\pi}{8}\right)}$
$\lim_\limits{R\to \infty} \int_0^R \frac {t^n}{t^8 + 1}\ dt = \frac {\pi}{8}\csc \frac {(7-n)\pi}{8}$
$\frac {\pi}{2}\int_0^{\infty} \frac {(t^2 + 1)^3}{t^8 + 1}\,dt = \frac {\pi^2}{16}(\csc \frac {\pi}{8} + 3 \csc \frac {3\pi}{8} + 3 \csc \frac {5\pi}{8} + \csc \frac {7\pi}{8}) $
$\frac {\pi^2}{4}\sqrt{10-\sqrt 2}$
I haven't had time to go into the details so I don't know if this works, but... with a term like $t^8+1$ in the denominator, frequently you can reduce the calculation to one single residue by using a different contour. Try integrating around the boundary of the region $|z|\le R$, $0\le{\rm Arg}(z)\le\pi/4$. In this region, $t^8+1$ has only one singularity. I repeat that I haven't checked whether it works in this case - the numerator might mess things up.
@David When I posted this, I considered a contour that would only surround one pole, but this was causing problems in the numerator. I couldn't figure out how to do that without expanding the numerator and solving 4 integrals. 4 integrals vs. 4 poles -- same amount of work either way. But, if I find the time, I may update this answer with that approach.
Didn't have time to check through the details, but I'm not surprised.
Utilize the known integral $\int_0^\infty \frac{t^{n-1}}{1+t^m}\,dt=\frac\pi m\csc\frac{\pi n}m$
\begin{align}
I=&\>\frac{\pi}{4} \int_{0}^{\infty} \frac{\left(1+t^{2}\right)^{3}}{t^{8}+1} d t\\
=&\>\frac{\pi^2}{32}\left( \csc\frac\pi8+3\csc\frac{3\pi}8 +3\csc\frac{5\pi}8+\csc\frac{7\pi}8 \right)\\
=&\>\frac{\pi^2}{16}\left( \csc\frac\pi8+3\csc\frac{3\pi}8 \right)
= \frac{\pi^2}8\sqrt{10-\sqrt2}
\end{align}
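As a numerical sanity check on the csc formula and the closed form above, here is a sketch using only the Python standard library: it maps $[0,\infty)$ to $(0,1)$ via the substitution $t = u/(1-u)$ and applies the midpoint rule (step count is arbitrary).

```python
import math

def integral_0_inf(f, steps=100_000):
    """Midpoint-rule estimate of the integral of f over [0, inf),
    via the substitution t = u/(1-u), dt = du/(1-u)^2."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        t = u / (1.0 - u)
        total += f(t) / (1.0 - u) ** 2 * h
    return total

# int_0^inf t^n/(1+t^8) dt = (pi/8) csc((n+1)pi/8), for n = 0..6
for n in range(7):
    numeric = integral_0_inf(lambda t: t**n / (1 + t**8))
    exact = (math.pi / 8) / math.sin((n + 1) * math.pi / 8)
    print(n, round(numeric, 6), round(exact, 6))

# (pi/4) int_0^inf (1+t^2)^3/(1+t^8) dt = (pi^2/8) sqrt(10 - sqrt(2))
closed_form = math.pi**2 / 8 * math.sqrt(10 - math.sqrt(2))
numeric = math.pi / 4 * integral_0_inf(lambda t: (1 + t**2) ** 3 / (1 + t**8))
print(numeric, closed_form)  # both approximately 3.615
```

Both printouts agree to several decimal places, matching the residue computation and the trigonometric simplification.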
An alternative approach:
We can use power-reducing formulas to get
\begin{align*}
\sin^8(x)&=\frac1{128}(35-56\cos(2x)+28\cos(4x)-8\cos(6x)+\cos(8x))\\
\cos^8(x)&=\frac1{128}(35+56\cos(2x)+28\cos(4x)+8\cos(6x)+\cos(8 x))
\end{align*}
Therefore,
\begin{align*}
\frac\pi4\int_0^{\frac\pi2}\frac{1}{\sin^8(x)+\cos^8(x)}\mathop{dx}&=16\pi\int_0^{\frac\pi2}\frac{1}{35+28\cos(4x)+\cos(8x)}\mathop{dx}\\
&=4\pi\int_0^{2\pi}\frac{1}{35+28\cos(x)+\cos(2x)}\mathop{dx}&(4x\mapsto x)\\
&=2\pi\int_0^{2\pi}\frac{1}{17+14\cos(x)+\cos^2(x)}\mathop{dx}\\
\end{align*}
Now, set $u=\tan\left(\frac x2\right)\implies du=\frac12\sec^2\left(\frac x2\right)\mathop{dx}$. From this, we can derive that $\cos(x)=\frac{1-u^2}{1+u^2}$ and $dx=\frac2{1+u^2}\mathop{du}$. So, after some simplifying, our integral becomes
\begin{align*}
2\pi\int_0^\infty\frac{1+u^2}{8+8u^2+u^4}\mathop{du}&=2\pi\int_0^\infty\frac{1+u^2}{(u^2+4+2\sqrt2)(u^2+4-2\sqrt2)}\mathop{du}\\
&=2\pi\int_0^\infty\frac{3+2\sqrt2}{4\sqrt2}\cdot\frac{1}{u^2+4+2\sqrt2}-\frac{3-2\sqrt2}{4\sqrt2}\cdot\frac{1}{u^2+4-2\sqrt2}\mathop{du}\\
&=\frac{\pi}{2\sqrt2}\left.\left[\frac{3+2\sqrt2}{\sqrt{4+2\sqrt2}}\arctan\left(\frac u{\sqrt{4+2\sqrt2}}\right)-\frac{3-2\sqrt2}{\sqrt{4-2\sqrt2}}\arctan\left(\frac u{\sqrt{4-2\sqrt2}}\right)\right]\right|_0^\infty\\
&=\frac{\pi^2}{4\sqrt2}\left[\frac{3+2\sqrt2}{\sqrt{4+2\sqrt2}}-\frac{3-2\sqrt2}{\sqrt{4-2\sqrt2}}\right]\\
&=\frac{\pi^2}{8}\sqrt{10-\sqrt2}\\
\end{align*}
| common-pile/stackexchange_filtered |
How to find the most frequently repeated number from the command line, in an array?
I need to take single digit integers from the command line and put them into an array, and then find the most frequent integer. Sometimes this program seems to work, and other times, it doesn't.
public class MostFrequent {
public static void main(String[] args){
int num=0;
int[] freq= new int[9];//intialize array for integers 0-9
for (int i=0; i<args.length; i++){
try {
num = Integer.parseInt(args[i]);
freq[num]++;//adds to array counter
}
catch (NumberFormatException nfe) {
}
}
int max=0,j;
for (j=0; j<10; j++){
while(freq[j]>max){
max=freq[j];
}
}
System.out.println("The digit that appears most frequently is " + freq[j]);
}
}
Thanks everyone for your help, this is what ended up working for me, and thanks to whoever mentioned making the array more dynamic, that helped as well. Here is the code I finished with:
public class MostFrequent {
public static void main(String[] args){
int num=0;
int[] freq= new int[10];//initialize counters for digits 0-9
for (int i=0; i<args.length; i++){
try {
num = Integer.parseInt(args[i]);
freq[num]++;//adds to array counter
}
catch (NumberFormatException nfe) {
}
}
int max=0,j;
for (j=1; j<10; j++){
if(freq[j]>freq[max]){//compares each count to the current best
max=j;
}
}
System.out.println("The digit that appears most frequently is " + max);
}
}
What happens when it doesn't work?
I was getting array bound errors, and I think it was printing the number of times a number was entered, rather than displaying which number that was. I believe I have it fixed now
The logic in your second loop is flawed. Also, you haven't allocated room for all digits in your array, you need an int[10] for this. One way to solve it is like this:
int[] freq = new int[10];//intialize array for integers 0-9
...
int maxIndex = 0;
for (int j = 1; j < 10; j++){
if (freq[j] > freq[maxIndex]) {
maxIndex = j;
}
}
System.out.println("The digit that appears most frequently is " + maxIndex + ", and it appears " + freq[maxIndex] + " times.");
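Combining the pieces of this answer into one complete, compilable program (the class and helper-method names below are illustrative, not from the original post):

```java
public class MostFrequentDigit {
    // Returns the digit 0-9 that occurs most often among the arguments,
    // skipping tokens that are not single-digit non-negative integers.
    static int mostFrequent(String[] args) {
        int[] freq = new int[10]; // one counter per digit 0-9
        for (String arg : args) {
            try {
                int num = Integer.parseInt(arg);
                if (num >= 0 && num <= 9) {
                    freq[num]++;
                }
            } catch (NumberFormatException nfe) {
                // ignore non-numeric arguments
            }
        }
        int maxIndex = 0;
        for (int j = 1; j < 10; j++) {
            if (freq[j] > freq[maxIndex]) {
                maxIndex = j;
            }
        }
        return maxIndex;
    }

    public static void main(String[] args) {
        int digit = mostFrequent(args);
        System.out.println("The digit that appears most frequently is " + digit);
    }
}
```

On a tie, this returns the smaller digit, since only a strictly greater count replaces the current best.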
I changed your first line to allow any length
int[] freq = new int[args.length];
Change your loop
int max=freq[0];
int maxIndex = 0;
for (j=1; j<10; j++){
if(freq[j]>max)
{
max=freq[j];
maxIndex = j;
}
}
Also, you have wrong output.
Instead of (that will give you last number)
System.out.println("The digit that appears most frequently is " + freq[j]);
Use (which prints max number and its number of occurence)
System.out.println("The digit that appears most frequently is " + maxIndex + " - " + max + "x");
Here is a C++ implementation to find the most frequent number.
#include<iostream>
using namespace std;
// Returns maximum repeating element in arr[0..n-1].
// The array elements are in range from 0 to k-1
int maxRepeating(int* arr, int n, int k)
{
// Iterate though input array, for every element
// arr[i], increment arr[arr[i]%k] by k
for (int i = 0; i< n; i++)
arr[arr[i]%k] += k;
// Find index of the maximum repeating element
int max = arr[0], result = 0;
for (int i = 1; i < n; i++)
{
if (arr[i] > max)
{
max = arr[i];
result = i;
}
}
/* Uncomment this code to get the original array back
for (int i = 0; i< n; i++)
arr[i] = arr[i]%k; */
// Return index of the maximum element
return result;
}
// Driver program to test above function
int main()
{
int arr[] = {2, 3, 3, 5, 3, 4, 1, 7};
int n = sizeof(arr)/sizeof(arr[0]);
int k = 8;
cout << "The maximum repeating number is " <<
maxRepeating(arr, n, k) << endl;
return 0;
}
IIS 8.5 URL ReWrite
I have an asp.net application hosted on two servers.
Server 1 : http(s)://server1.domain.com
Server 2 : http://server2.domain.com
Now i have to write URL Rewrite rule so that all the "HTTP" requests coming to server1 should be redirected to server2 but all the "HTTPS" requests coming to server1 should be handled by server1 it self.
Thanks in advance for the help...
Try using the following url rewrite rule:
<rule name="Redirect to HTTP" stopProcessing="true">
<match url="https://*.server1.domain.com/(*)" />
<action type="Redirect" url="http://server2.domain.com/{R:1}" redirectType="SeeOther" />
</rule>
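One caveat with the rule above: in IIS URL Rewrite, the `match url` pattern is tested against the URL path only, not the scheme or host, so a pattern beginning with `https://` will never match. The usual approach is a condition on the `{HTTPS}` server variable. A sketch under that assumption (rule name and redirect type are illustrative):

```xml
<rule name="HTTP to server2" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTPS}" pattern="^OFF$" />
  </conditions>
  <action type="Redirect" url="http://server2.domain.com/{R:1}" redirectType="Found" />
</rule>
```

HTTPS requests fail the `{HTTPS}` condition and fall through to be served by server1 itself, which is the behavior the question asks for.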
Thanks for the Reply.. but the following code worked for me..
Trying to put Array of URL's into a range of cells in Google Sheets
I can see I have 28 entries in an Array of URLs (resultURLs), and when I try to setValues in a range with the same number of entries, I get the message "Cannot convert Array to Object[][]". I am setting an array of 28 entries to 28 rows and 1 column in the Sheet. Why would I get this error? Thanks for your help!
function assignEditUrls() {
var form = FormApp.openById('1gCHxEc1pv3j-NBVc85wv4l4sXDadbLEzVW08iKWrLJg');
//enter form ID here
var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Form Responses');
//Change the sheet name as appropriate
var data = sheet.getDataRange().getValues();
var urlCol = 4; // column number where URL's should be populated; A = 1, B = 2 etc
var responses = form.getResponses();
var timestamps = [], urls = [], resultUrls = [];
for (var i = 0; i < responses.length; i++) {
timestamps.push(responses[i].getTimestamp().setMilliseconds(0));
urls.push(responses[i].getEditResponseUrl());
}
for (var j = 0; j < responses.length; j++) {
resultUrls.push(urls[j]);
}
sheet.getRange(2, urlCol, resultUrls.length).setValues(resultUrls);
}
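No accepted answer is recorded here, but the error message itself suggests the likely fix: `setValues` expects a two-dimensional array (an array of rows, each row an array of column values), while `resultUrls` is a flat array of strings. A sketch of the conversion (the helper name `toRows` is illustrative; the commented usage line follows the question's variable names):

```javascript
// setValues wants one inner array per row, e.g. [["url1"], ["url2"], ...]
function toRows(flat) {
  return flat.map(function (value) {
    return [value]; // one column per row
  });
}

// Usage in the question's script would look like:
// sheet.getRange(2, urlCol, resultUrls.length, 1).setValues(toRows(resultUrls));
var example = toRows(["https://a.example", "https://b.example"]);
console.log(JSON.stringify(example)); // [["https://a.example"],["https://b.example"]]
```

Note that passing the number of columns (1) to `getRange` as well keeps the range shape in sync with the 2D array.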
How to create a personal wiki+blog on github using org-mode?
I love org-mode! I tried to create my personal pages using org-export-html. Org-mode can export LaTeX math via MathJax very well, among many other features. I love that! I want a tidy, beautiful personal site integrating a wiki and blog, hosted on GitHub.
How to configure org-mode to build such a wiki+blog site with theme support?
I have the same pending project.
Sacha Chua, one of my favorite orgmode power-user, asked recently on her blog "Do you have an Emacs-based personal wiki? What do you use, and what do you think about it?": you might be interested in the comment on her post:
http://sachachua.com/blog/2011/11/planning-an-emacs-based-personal-wiki-org-muse-hmm/
Thanks! I read it before. I want an easy start example. I will try re-configure my org-mode export when at large.
Have you looked at http://orgmode.org/worg/org-blog-wiki.html and http://orgmode.org/worg/org-web.html ?
Most of them on worg are based on Ruby or are obsolete. I will try to study the org-publish method again. Anyway, I can publish HTML, though the style is not so beautiful. :-)
I think, this project fits my need:
http://renard.github.com/o-blog/
https://github.com/renard/o-blog
o-blog does not support generating from separate org files.
That's why I switched to plain org-publish now. :-)
I am not sure I understood your question correctly, because I think there is nothing to do!
The wiki sites on GitHub have their own git repository.
Do a git checkout and add files edited in org-mode.
When you commit and push them, they will be rendered correctly as HTML.
(That's what I did for the project repository on GitHub.)
Hope it helps,
Jeremy
Thanks. I know that. Maybe my question can be changed to: how to easily configure org-mode to build a wiki+blog site with better theme support.
Jenkins install plugins offline
Installed Jenkins on a Linux server and want to install some plugins manually.
I want to install Blue Ocean and Artifactory.
For both plugins I downloaded the hpi file and tried to install it, but I get a lot of dependency errors.
Do I now have to install those dependencies manually as well? Or is there a better way to do this?
I tried to install one of those dependencies, and it also had some dependency errors of its own :-(
The Linux server is not able to access the internet.
Thanks!
Robert
Same case here. Our Jenkins is set up in OpenShift, which is not allowed to connect to the Internet. Downloading plugins one by one is tiresome. Not to mention that each plugin has its own dependencies that need to be downloaded as well.
Here's what you would do...
Run a Jenkins locally in a machine that can download plugins.
Download and update all the plugins you want using the Update Center.
Go to the %JENKINS_HOME%/plugins directory. Inside this folder you will see *.jpi files. These are your plugins; their dependencies will have been downloaded as well.
Rename them to *.hpi, then keep them in some directory.
To test...
In your local Jenkins delete everything in %JENKINS_HOME%/plugins directory then put all *.hpi in this directory.
Restart your local Jenkins.
Verify if the plugins you require are installed and updated.
Can you explain what the rename step is for?
@Ya Jenkins renames installed plugins to the extension .jpi to know which plugins were already installed. If you rename them to .hpi, it will try to install them after the restart.
You are correct that BlueOcean has a lot of dependencies.
Given you are not able to connect to the internet you will need to download all 21 BlueOcean related hpi files and upload them from the Manage Jenkins > Manage Plugins > Advanced tab.
Alternatively, if you have access to the server that your Jenkins instance is running on you can copy the hpi files into the %JENKINS_HOME%/plugins folder. The corresponding directories (exploded from the hpi, which is just a zip file) will be created on Jenkins restart.
The easiest way to acquire all 21 plugin files is to open the Jenkins plugin page and search for blueocean. Download the same version number for all 21 and upload them one by one. Order shouldn't be an issue; as long as they are all present on restart, the dependencies will resolve.
Same process goes for any other plugin. If you're able to get the machine connected to the internet it will make the process a lot simpler as you will be able to use the update center, which manages dependencies and update notifications.
download jenkins-plugin-manager. (https://github.com/jenkinsci/plugin-installation-manager-tool/blob/master/README.md)
download the plugin with dependencies.
java -jar jenkins-plugin-manager-2.12.11.jar -d jenkins_plugins --plugins git:5.0.0
copy .jpi files to %JENKINS_HOME%/plugins folder.
reboot jenkins.
Getting unexpected results while using ode45
I am trying to solve a system of differential equations by writing code in Matlab. I am posting on this forum, hoping that someone might be able to help me in some way.
I have a system of 10 coupled differential equations. It is a vector-host epidemic model, which captures the transmission of a disease between human population and insect population. Since it is a simple system of differential equations, I am using solvers (ode45) for non-stiff problem type.
There are 10 differential equations, each representing 10 different state variables. There are two functions which have the same system of 10 coupled ODEs. One is called NoEffects_derivative_6_15_2012.m which contains the original system of ODEs. The other function is called OnlyLethal_derivative_6_15_2012.m which contains the same system of ODEs with an increased withdrawal rate starting at time, gamma=32 %days and that withdrawal rate decays exponentially with time.
I use ode45 to solve both the systems, using the same initial conditions. Time vector is also the same for both systems, going from t0 to tfinal. The vector tspan contains the time values going from t0 to tfinal, each with a increment of 0.25 days, making a total of 157 time values.
The solution values are stored in matrices ye0 and yeL. Both these matrices contain 157 rows and 10 columns (for the 10 state variable values). When I compare the value of the 10th state variable, for the time=tfinal, in the matrix ye0 and yeL by plotting the difference, I find it to be becoming negative for some time values. (using the command: plot(te0,ye0(:,10)-yeL(:,10))). This is not expected. For all time values from t0 till tfinal, the value of the 10 state variable, should be greater, as it is the solution obtained from a system of ODEs which did not have an increased withdrawal rate applied to it.
I am told that there is a bug in my Matlab code. I am not sure how to find that bug. Or maybe the solver I am using in Matlab (ode45) is not well suited and gives this kind of problem. Can anyone help?
I have tried ode23 and ode113 as well, and I still get the same problem. Figure (2) shows a curve which becomes negative for time values 32 and 34, which is not expected. This curve should have a positive value throughout, for all time values. Is there any other forum anyone can suggest?
Here is the main script file:
clear memory; clear all;
global Nc capitalambda muh lambdah del1 del2 p eta alpha1 alpha2 muv lambdav
global dims Q t0 tfinal gamma Ct0 b1 b2 Ct0r b3 H C m_tilda betaHV bitesPERlanding IC
global tspan Hs Cs betaVH k landingARRAY muARRAY
Nhh=33898857; Nvv=2*Nhh; Nc=21571585; g=354; % number of public health centers in Bihar state %Fix human parameters capitalambda= 1547.02; muh=0.000046142; lambdah= 0.07; del1=0.001331871263014; del2=0.000288658; p=0.24; eta=0.0083; alpha1=0.044; alpha2=0.0217; %Fix vector parameters muv=0.071428; % UNIT:2.13 SANDFLIES DEAD/SAND FLY/MONTH, SOURCE: MUBAYI ET AL., 2010 lambdav=0.05; % UNIT:1.5 TRANSMISSIONS/MONTH, SOURCE: MUBAYI ET AL., 2010
Ct0=0.054;b1=0.0260;b2=0.0610; Ct0r=0.63;b3=0.0130;
dimsH=6; % AS THERE ARE FIVE HUMAN COMPARTMENTS dimsV=3; % AS THERE ARE TWO VECTOR COMPARTMENTS dims=dimsH+dimsV; % THE TOTAL NUMBER OF COMPARTMENTS OR DIFFERENTIAL EQUATIONS
gamma=32; % spraying is done of 1st feb of the year
Q=0.2554; H=7933615; C=5392890;
m_tilda=100000; % assumed value 6.5, later I will have to get it for sand flies or mosquitoes betaHV=66.67/1000000; % estimated value from the short technical report sent by Anuj bitesPERlanding=lambdah/(m_tilda*betaHV); betaVH=lambdav/bitesPERlanding; IC=zeros(dims+1,1); % CREATES A MATRIX WITH DIMS+1 ROWS AND 1 COLUMN WITH ALL ELEMENTS AS ZEROES
t0=1; tfinal=40;
for j=t0:1:(tfinal*4-4)
    tspan(1)= t0;
    tspan(j+1)= tspan(j)+0.25;
end
clear j;
% INITIAL CONDITION OF HUMAN COMPARTMENTS q1=0.8; q2=0.02; q3=0.0005; q4=0.0015; IC(1,1) = q1*Nhh; IC(2,1) = q2*Nhh; IC(3,1) = q3*Nhh; IC(4,1) = q4*Nhh; IC(5,1) = (1-q1-q2-q3-q4)*Nhh; IC(6,1) = Nhh; % INTIAL CONDITIONS OF THE VECTOR COMPARTMENTS IC(7,1) = 0.95*Nvv; %80 PERCENT OF TOTAL ARE ASSUMED AS SUSCEPTIBLE VECTORS IC(8,1) = 0.05*Nvv; %20 PRECENT OF TOTAL ARE ASSUMED AS INFECTED VECTORS IC(9,1) = Nvv; IC(10,1)=0;
Hs=2000000; Cs=3000000; k=1; landingARRAY=zeros(tfinal*50,2); muARRAY=zeros(tfinal*50,2);
[te0 ye0]=ode45(@NoEffects_derivative_6_15_2012,tspan,IC);
[teL yeL]=ode45(@OnlyLethal_derivative_6_15_2012,tspan,IC);
figure(1) subplot(4,3,1); plot(te0,ye0(:,1),'b-',teL,yeL(:,1),'r-'); xlabel('time'); ylabel('S'); legend('susceptible humans'); subplot(4,3,2); plot(te0,ye0(:,2),'b-',teL,yeL(:,2),'r-'); xlabel('time'); ylabel('I'); legend('Infectious Cases'); subplot(4,3,3); plot(te0,ye0(:,3),'b-',teL,yeL(:,3),'r-'); xlabel('time'); ylabel('G'); legend('Cases in Govt. Clinics'); subplot(4,3,4); plot(te0,ye0(:,4),'b-',teL,yeL(:,4),'r-'); xlabel('time'); ylabel('T'); legend('Cases in Private Clinics'); subplot(4,3,5); plot(te0,ye0(:,5),'b-',teL,yeL(:,5),'r-'); xlabel('time'); ylabel('R'); legend('Recovered Cases');
subplot(4,3,6);plot(te0,ye0(:,6),'b-',teL,yeL(:,6),'r-'); hold on; plot(teL,capitalambda/muh); xlabel('time'); ylabel('Nh'); legend('Nh versus time');hold off;
subplot(4,3,7); plot(te0,ye0(:,7),'b-',teL,yeL(:,7),'r-'); xlabel('time'); ylabel('X'); legend('Susceptible Vectors');
subplot(4,3,8); plot(te0,ye0(:,8),'b-',teL,yeL(:,8),'r-'); xlabel('time'); ylabel('Z'); legend('Infected Vectors');
subplot(4,3,9); plot(te0,ye0(:,9),'b-',teL,yeL(:,9),'r-'); xlabel('time'); ylabel('Nv'); legend('Nv versus time');
subplot(4,3,10);plot(te0,ye0(:,10),'b-',teL,yeL(:,10),'r-'); xlabel('time'); ylabel('FS'); legend('Total number of human infections');
figure(2) plot(te0,ye0(:,10)-yeL(:,10)); xlabel('time'); ylabel('FS(without intervention)-FS(with lethal effect)'); legend('Diff. bet. VL cases with and w/o intervention:ode45');
The function file: NoEffects_derivative_6_15_2012
function dx = NoEffects_derivative_6_15_2012( t , x )
global Nc capitalambda muh del1 del2 p eta alpha1 alpha2 muv
global dims m_tilda betaHV bitesPERlanding betaVH
dx = zeros(dims+1,1); % t % dx
dx(1,1) = capitalambda-(m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/(x(7,1)+x(8,1))-muh*x(1,1);
dx(2,1) = (m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/(x(7,1)+x(8,1))-(del1+eta+muh)*x(2,1);
dx(3,1) = p*eta*x(2,1)-(del2+alpha1+muh)*x(3,1);
dx(4,1) = (1-p)*eta*x(2,1)-(del2+alpha2+muh)*x(4,1);
dx(5,1) = alpha1*x(3,1)+alpha2*x(4,1)-muh*x(5,1);
dx(6,1) = capitalambda -del1*x(2,1)-del2*x(3,1)-del2*x(4,1)-muh*x(6,1);
dx(7,1) = muv*(x(7,1)+x(8,1))-bitesPERlanding*betaVH*x(7,1)*x(2,1)/(x(6,1)+Nc)-muv*x(7,1);
%dx(8,1) = lambdav*x(7,1)*x(2,1)/(x(6,1)+Nc)-muvIOFt(t)*x(8,1);
dx(8,1) = bitesPERlanding*betaVH*x(7,1)*x(2,1)/(x(6,1)+Nc)-muv*x(8,1);
dx(9,1) = (muv-muv)*x(9,1);
dx(10,1) = (m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/x(9,1);
The function file: OnlyLethal_derivative_6_15_2012
function dx=OnlyLethal_derivative_6_15_2012(t,x)
global Nc capitalambda muh del1 del2 p eta alpha1 alpha2 muv
global dims m_tilda betaHV bitesPERlanding betaVH k muARRAY
dx=zeros(dims+1,1);
% the below code saves some values into the second column of the two arrays
muARRAY(k,1)=t; muARRAY(k,2)=artificialdeathrate1(t); k=k+1;
dx(1,1)= capitalambda-(m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/(x(7,1)+x(8,1))-muh*x(1,1);
dx(2,1)= (m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/(x(7,1)+x(8,1))-(del1+eta+muh)*x(2,1);
dx(3,1)=p*eta*x(2,1)-(del2+alpha1+muh)*x(3,1);
dx(4,1)=(1-p)*eta*x(2,1)-(del2+alpha2+muh)*x(4,1);
dx(5,1)=alpha1*x(3,1)+alpha2*x(4,1)-muh*x(5,1);
dx(6,1)=capitalambda -del1*x(2,1)-del2*( x(3,1)+x(4,1) ) - muh*x(6,1);
dx(7,1)=muv*( x(7,1)+x(8,1) )- bitesPERlanding*betaVH*x(7,1)*x(2,1)/(x(6,1)+Nc) - (artificialdeathrate1(t) + muv)*x(7,1);
dx(8,1)= bitesPERlanding*betaVH*x(7,1)*x(2,1)/(x(6,1)+Nc)-(artificialdeathrate1(t) + muv)*x(8,1);
dx(9,1)= -artificialdeathrate1(t) * x(9,1);
dx(10,1)= (m_tilda)*bitesPERlanding*betaHV*x(1,1)*x(8,1)/x(9,1);
The function file: artificialdeathrate1
function art1=artificialdeathrate1(t)
global Q Hs H Cs C
art1= Q*Hs*iOFt(t)/H + (1-Q)*Cs*oOFt(t)/C ;
The function file: iOFt
function i = iOFt(t)
global gamma tfinal Ct0 b1
if t>=gamma && t<=tfinal
i = Ct0*exp(-b1*(t-gamma));
else
i =0;
end
The function file: oOFt
function o = oOFt(t)
global gamma Ct0 b2 tfinal
if (t>=gamma && t<=tfinal)
o = Ct0*exp(-b2*(t-gamma));
else
o = 0;
end
I give up. I tried fixing the code indentation, but it's a real mess! Please post your code properly.
I did my best at formatting your code, but obviously there are some errors left since it appears you have commented out some code parts and you regularly have multiple statements on the same line, it is impossible for us to examine your code.
If your working code is even remotely as messy as the code you posted, then that should, IMHO, be the first thing to address.
I cleaned up iOFt and oOFt a bit for you, since those were quite easy to handle. I tried my best at NoEffects_derivative_6_15_2012. What I'd personally change about your code is to use decent indexes. You have 10 variables; if you let your code rest for a few weeks or months, there is no way you will remember what, for example, state 7 is. So instead of using (7,1), you might want to rewrite your ODE either using verbose names, retrieving/storing them from the x and dx vectors, or using indexes that make it clear what is happening.
E.g.
function ODE(t,x)
insectsInfected = x(1);
humansInfected = x(2);
%etc
dInsectsInfected = %some function of the rest
dHumansInfected = %some function of the rest
% etc
dx = [dInsectsInfected; dHumansInfected; ...];
or
function ODE(t,x)
iInsectsInfected = 1;
iHumansInfected = 2;
%etc
dx(iInsectsInfected) = %some function of x(i...)
dx(iHumansInfected) = %some function of x(i...)
%etc
When you don't do such things, you might end up using x(6,1) instead of e.g. x(3,1) in some formulas and it might take you hours to spot such a thing. If you use verbose names, it takes a bit longer to type, but it makes debugging a lot easier and if you understand your equations, it should be more obvious when such an error happens.
Also, don't hesitate to put spaces inside your formulas, it makes reading much easier. If you have some sub-expressions that are meaningful (e.g. if (1-p)*eta*x(2,1) is the number of insects that are dying of the disease, just put it in a variable dyingInsects and use that everywhere it occurs). If you align your assignments (as I've done above), this might add to code that is easier to read and understand.
With regard to the ODE solver, if you are sure your implementation is correct, I'd also try a solver for stiff problems (unless you are absolutely sure you don't have a stiff system).
Is the static gauge pressure of a free jet always atmospheric?
Let's say I have a free jet of air leaving a pipe into the atmosphere. I know that the static gauge pressure at the pipe exit is equal to the atmospheric. But what about the static gauge pressure 10 meters away if the air is still traveling as a free jet? Is it still atmospheric?
I'm confused. Bernoulli's eqn. says the static pressure inside the jet should be less than atmospheric. As you go further out and the jet slows down, then it should approach atmospheric pressure. The pressure gradient between the atmosphere outside and the low pressure inside the jet leads to air getting sucked into the jet (entrainment).
I'm not saying 10 meters radially across the jet. I'm asking if we put another point that is 10 meters away from the opening of the pipe (where the pressure is atmospheric) along the streamline.
10 meters away, the jet should have slowed considerably, so it should be close to atmospheric pressure according to Bernoulli.
Bernoulli's equation says no such thing. Why would the static pressure be less? The ambient air and jet have radically different stagnation pressures, so it would be improper to use Bernoulli's Equation across the streamlines of the jet and those of the ambient atmosphere. Secondly, as long as the jet is subsonic it will exit at zero gauge pressure due to expansion/compression waves propagating into the reservoir and throttling the flow. Look into pressure-thrust in gas turbine vs. rocket engines for more on this score...
UWP: when many GridViews (one row enabled) are nested in a ListView, the ListView's performance is very bad
I'm designing a page like the Netflix home page. It has many (20+) GridViews (one row enabled) nested in an outer ListView. The items are not known in advance, so they must be generated at run time.
So I designed like belows:
Xaml:
<Page.Resources>
<DataTemplate x:Key="FirstLevelListViewItemTemplate" x:DataType="model:CategoriesItem">
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition/>
</Grid.RowDefinitions>
<TextBlock Text="{x:Bind CategoryName}" FontSize="28"/>
<GridView
Grid.Row="1"
Height="200"
ScrollViewer.HorizontalScrollBarVisibility="Hidden"
ScrollViewer.HorizontalScrollMode="Enabled"
ScrollViewer.VerticalScrollMode="Disabled"
ItemsSource="{x:Bind categories, Mode=OneWay}">
<GridView.ItemsPanel>
<ItemsPanelTemplate>
<ItemsWrapGrid Orientation="Vertical"/>
</ItemsPanelTemplate>
</GridView.ItemsPanel>
<GridView.ItemTemplate>
<DataTemplate x:DataType="model:CategoryItem">
<controls:ImageEx IsCacheEnabled="True" Width="250" Source="{x:Bind cover_image_url}"/>
</DataTemplate>
</GridView.ItemTemplate>
</GridView>
</Grid>
</DataTemplate>
</Page.Resources>
<!--<ScrollViewer>-->
<ListView x:Name="list"
SelectionMode="None"
ItemTemplate="{StaticResource FirstLevelListViewItemTemplate}">
<!--Disable ListView UI virtualization-->
<ListView.ItemsPanel>
<ItemsPanelTemplate>
<StackPanel/>
</ItemsPanelTemplate>
</ListView.ItemsPanel>
</ListView>
<!--</ScrollViewer>-->
Model file:
public class CategoriesItem
{
public string CategoryName { get; set; }
public List<CategoryItem> categories { get; set; }
}
public class CategoryItem
{
public string cover_image_url { get; set; }
}
.cs file
used to load data from local:
list.ItemsSource = await GetData();
So
Scenario 1:
Run the app with the ListView's UI virtualization enabled: the bind process is very quick and memory usage is about 300 MB. But when scrolling the ListView up/down quickly with the mouse, it scrolls very slowly and shows data only after a long delay and a blank screen.
Scenario 2:
Run the app with the ListView's UI virtualization disabled: this time the bind process takes a very long time and memory may rise to 1 GB, but scrolling the ListView up/down quickly works fine.
My needs:
Scrolling the ListView up/down quickly performs fast.
The bind process is quick and memory usage stays low.
How to fix?
How can I fix this? Many thanks.
Wrapping the ListView in a ScrollViewer disables the ListView's UI virtualization.
The same design on iOS and Android performs excellently: fast speed, low memory, and smooth scrolling up/down.
I want the live visual tree to show all the ListViewItems, because I will use all of them to do some extra things.
To explain this behavior, we can refer to UI virtualization. UI virtualization is the most important improvement you can make.
This means that UI elements representing the items are created on demand. For an items control bound to a 1000-item collection, it would be a waste of resources to create the UI for all the items at the same time, because they can't all be displayed at the same time. ListView and GridView (and other standard ItemsControl-derived controls) perform UI virtualization for you. When items are close to being scrolled into view (a few pages away), the framework generates the UI for the items and caches them. When it's unlikely that the items will be shown again, the framework re-claims the memory.
You could turn it off by changing the ItemsPanelTemplate to a StackPanel:
<ListView>
<ListView.ItemsPanel>
<ItemsPanelTemplate>
<StackPanel />
</ItemsPanelTemplate>
</ListView.ItemsPanel>
</ListView>
Bind process quick and memory low.
You could implement data virtualization: cut the JSON data into slices, then use the ListView's LoadMoreItemsAsync to load more.
Maybe what you described above is a workaround, but situations like a ListView/GridView nested in a ListView are still common, and the performance in UWP is not satisfactory compared to Android/iOS.
Hmm, if you could post this on UserVoice, that would be much appreciated!
Hi, a user voice has been created, see https://wpdev.uservoice.com/forums/110705-universal-windows-platform/suggestions/38013196-gridview-listview-nested-in-listview-performsreall
Finally, I cut the JSON file into slices and load them when scrolling to the footer. Thanks.
'AND' vs '&&' as operator
I have a codebase where developers decided to use AND and OR instead of && and ||.
I know that there is a difference in operators' precedence (&& goes before and), but with the given framework (PrestaShop to be precise) it is clearly not a reason.
Which variant do you use? Is and more readable than &&, or is there no difference?
Note that ~ is the bit-wise NOT operator and not the logical. ;-)
Yes, i know. Bad habits :) . It is a little bit strange that in PHP there are 'and', 'or' and 'xor', but there is no 'not', isn't it?
@ts: the correct answer here is the one provided by R. Bemrose
http://stackoverflow.com/questions/2803321/and-vs-as-operator/2803576#2803576
! is the logical not operator
@doublejosh your comment does not help anyone because who knows what answer you are referring to, and if the asker possibly already changed the answer? which answer is (or was) misleading?
@chiliNUT quite right. At the time it must have made sense. Looks like the lurking incorrect answer has been punished at this point :)
@doublejosh way to write back so quickly on a response to a comment from over a year ago, thanks!
Looks like a bug in PHP. PHP should use natural language where possible and avoid random doodles. Let's leave the days of assembly behind, please.
If you use AND and OR, you'll eventually get tripped up by something like this:
$this_one = true;
$that = false;
$truthiness = $this_one and $that;
Want to guess what $truthiness equals?
If you said false... bzzzt, sorry, wrong!
$truthiness above has the value true. Why? = has a higher precedence than and. The addition of parentheses to show the implicit order makes this clearer:
($truthiness = $this_one) and $that
If you used && instead of and in the first code example, it would work as expected and be false.
As discussed in the comments below, this also works to get the correct value, as parentheses have higher precedence than =:
$truthiness = ($this_one and $that)
+1: this should be made loud and clear in the PHP documentation, or PHP should change and give the same precedence to these operators, or deprecate and/or once and for all. I saw too many people thinking they are exactly the same thing, and the answers here are testimonials to that.
Actually, other languages (for example, Perl and Ruby) also have these variants with the same precedence distinction so it wouldn't be sensible to deviate from this standard (however puzzling it might be for beginners) by making precedence equal in PHP. Not to mention the backward compatibility of tons of PHP applications.
People's inability to read the documentation for a language does not make the language's decisions wrong. As Mladen notes, Perl and Ruby also use these extra operators, and with the same precedences. It allows for constructs such as $foo and bar(), which are nice shortcuts for if statements. If unexpected behaviour (from bad documentation, or not reading it) were a reason not to use something we wouldn't be talking about using PHP at all.
I spent 3 minutes finding the wrong line: $this = true :( And what about $truthiness = ($this and $that);? It looks better to me :)
I agree with Dmitriy - wrapping the boolean evaluation in parentheses helps to clarify the intent of the code.
I think the operator and it's function as they now exist are valuable and consistent with other languages, it's the job of the programmer to understand the language.
What about $truthiness = $this or $that; ?
@DmitriyKozmenko That would also be true. $truthiness = $this or $that; would be true, as would $truthiness = $this || $that; but I suspect you knew that already. $truthiness = $that or $this; would have $truthiness be equal to false, though.
I used to avoid && and || because I wrongly thought they were legacy and because they did not read as "English" but they are almost always what I want.
@zzapper Just remember that && and || are also short-circuiting logic operators. Basically, if the first argument to && is false, the second argument is never evaluated. Likewise for ||, if the first argument is true, the second argument is never evaluated.
So it sets truthiness to true, but returns false.
That's even scarier than truthiness being true.
Using the same $this and $that, would an if ($this and $that) statement execute its block?
I said false and... it isn't? really?? I guess you wanted to mean $this=false and $that=true..
Whoever doesn't use parentheses to show intent in their code is simply asking for it. Did they never learn how to do arithmetic? Hiding behind lazy coding practices and relying on innate precedence to never change is blind arrogance. I strongly disagree with Marco's comment that and and or should be deprecated. All those upvoters should be ashamed of themselves ;)
I really like the readability of $is_fruit = ($is_apple or $is_banana);
When I run this example in the CLI I get false, both using Psy Shell and TinkerWell, but if I run the same example through a web server the answer is true.
What would happen if someone executed the code ($truthiness = $this_one) and $that, though? Would this do anything?
Depending on how it's being used, it might be necessary and even handy.
http://php.net/manual/en/language.operators.logical.php
// "||" has a greater precedence than "or"
// The result of the expression (false || true) is assigned to $e
// Acts like: ($e = (false || true))
$e = false || true;
// The constant false is assigned to $f and then true is ignored
// Acts like: (($f = false) or true)
$f = false or true;
But in most cases it seems like more of a developer taste thing, like every occurrence of this that I've seen in the CodeIgniter framework, as @Sarfraz has mentioned.
It's worth noting that the "true" isn't ignored if that expression is part of a larger statement. Consider the case if ($f = false or true) $f = true; - the result would be that $f does become true in the end, because the expression evaluates as true overall.
no, you simply overwrote the variable later. the expression still evaluated to false, then you overwrote it with true in the next line.
Actually, he was right. First, $f gets assigned false - but the condition evaluates to true, so then $f is overwritten. If the condition evaluated to false, $f would never been overwritten either way.
It's ridiculous to suggest the developers should follow their own taste. Forget the nightmare of another developer trying to maintain the same code, the developer who wrote the code itself would be making semantic mistakes in any code written because s/he preferred and over &&, where and works as expected in only some situations and && works as expected in all situations.
Since and has lower precedence than = you can use it in condition assignment:
if ($var = true && false) // Compare true with false and assign to $var
if ($var = true and false) // Assign true to $var and compare $var to false
For safety, I always parenthesise my comparisons and space them out. That way, I don't have to rely on operator precedence:
if(
((i==0) && (b==2))
||
((c==3) && !(f==5))
)
Personally I think adding extra unnecessary parentheses makes it more confusing to read than having just what you need. For example, I think this is much easier to read: if (($i == 0 && $b == 2) || ($c == 3 && $f != 5))
I think this is the prettiest piece of code I've looked at all day. Good job.
As PHP is an interpreted language, it will run faster if you don't use unnecessary whitespace or newlines in your code. If you do the same in a compiled language, it will only take more time to compile, but it will have no effect at runtime. I don't mean doing it once will make a difference, but on an entire application using PHP + JavaScript both written like the example... load times will be larger for sure.
Explanation: whitespace and newlines are ignored, but to ignore them, they have to be checked. This happens at runtime in interpreted languages and at compile time in compiled ones.
@JoelBonetR if you're using php opcache or similar your concern about load times is irrelevant. I'd hope nobody is running a production php site without it...
@PeloNZ So you can write dirty code because it will be cached anyway and the entire project will take only a second more to load on refresh, huh? Coding clean is for yourself and your teammates; the concern about timings was only a point that most people ignore or simply didn't know about.
Whitespace is only for readability and has practically no effect on loading times. Do you write all your JS in minified form so as to save time running a JS compiler?
Precedence differs between && and and (&& has higher precedence than and), something that causes confusion when combined with a ternary operator. For instance,
$predA && $predB ? "foo" : "bar"
will return a string whereas
$predA and $predB ? "foo" : "bar"
will return a boolean.
Let me explain the difference between "and", "&&", and "&".
"&&" and "and" both are logical AND operations and they do the same thing, but the operator precedence is different.
The precedence (priority) of an operator specifies how "tightly" it binds two expressions together. For example, in the expression 1 + 5 * 3, the answer is 16 and not 18 because the multiplication ("*") operator has a higher precedence than the addition ("+") operator.
Mixing them together in a single operation could give you unexpected results in some cases.
I recommend always using &&, but that's your choice.
On the other hand "&" is a bitwise AND operation. It's used for the evaluation and manipulation of specific bits within the integer value.
For example, if you do (14 & 7) the result would be 6.
7 = 0111
14 = 1110
------------
= 0110 == 6
Another nice example using if statements without = assignment operations.
if (true || true && false); // is the same as:
if (true || (true && false)); // TRUE
and
if (true || true AND false); // is the same as:
if ((true || true) && false); // FALSE
because AND has a lower precedence and thus || a higher precedence.
These are different for the input triples (true, false, false) and (true, true, false).
See https://ideone.com/lsqovs for an elaborate example.
which version are you using?
If the coding standards for the particular codebase I am writing code for specifies which operator should be used, I'll definitely use that. If not, and the code dictates which should be used (not often, can be easily worked around) then I'll use that. Otherwise, probably &&.
Is 'and' more readable than '&&'?
Is it more readable to you? The answer is yes and no, depending on many factors, including the code around the operator and indeed the person reading it!
|| there is ~ difference?
Yes. See logical operators for || and bitwise operators for ~.
I guess it's a matter of taste, although (mistakenly) mixing them up might cause some undesired behaviors:
true && false || false; // returns false
true and false || false; // returns true
Hence, using && and || is safer for they have the highest precedence. In what regards to readability, I'd say these operators are universal enough.
UPDATE: About the comments saying that both operations return false ... well, in fact the code above does not return anything, I'm sorry for the ambiguity. To clarify: the behavior in the second case depends on how the result of the operation is used. Observe how the precedence of operators comes into play here:
var_dump(true and false || false); // bool(false)
$a = true and false || false; var_dump($a); // bool(true)
The reason why $a === true is because the assignment operator has precedence over any logical operator, as already very well explained in other answers.
This isn't true, they all return false.
Here's a little counter example:
$a = true;
$b = true;
$c = $a & $b;
var_dump(true === $c);
output:
bool(false)
I'd say this kind of typo is far more likely to cause insidious problems (in much the same way as = vs ==) and is far less likely to be noticed than adn/ro typos which will flag as syntax errors. I also find and/or is much easier to read. FWIW, most PHP frameworks that express a preference (most don't) specify and/or. I've also never run into a real, non-contrived case where it would have mattered.
Generate match lists from two dimensional map
I have a two-dimensional map that describes the links between two types of variables, which looks something like this:
df = data.frame(matrix(vector(), 4, 4))
rownames(df) <- c("x1", "x2", "x3", "x4")
colnames(df) <- c("y1", "y2", "y3", "y4")
df["x1","y3"] <- 1
df["x2","y2"] <- 1
df["x4","y4"] <- 1
df["x2","y3"] <- 1
The actual data frame is roughly 1000x100 in size. For each row, I need to generate a list of all the columns with a 1 in that row, so I'm looking for the most efficient process possible.
I tried using nested for loops, but this process is quite inefficient. Is there a faster way?
Thanks!!
You can use apply and iterate over the rows, using which to identify which columns have a 1. The names function just gives the column name rather than the index.
apply(df, 1, function(x) names(which(x == 1)))
# $x1
# [1] "y3"
#
# $x2
# [1] "y2" "y3"
#
# $x3
# character(0)
#
# $x4
# [1] "y4"
invalidate() not resulting in a call to onDraw()
When I call invalidate() rapidly (such as on response to a touch event), at some point onDraw() stops getting called. Here's some simple code that reproduces the problem:
public class TestView extends View {
public TestView(Context context) {
super(context);
}
public TestView(Context context, AttributeSet as) {
super(context, as);
}
public void onDraw(Canvas c) {
Log.d("tag", "drawing");
}
public boolean onTouchEvent(MotionEvent event) {
Log.d("tag", "touchevent");
invalidate();
return true;
}
}
When you click and drag, initially you get both "touchevent" and "drawing" alternating in the log. After a while of rapidly moving across the screen, "drawing" disappears and the function onDraw is never called again.
I'm running things in the Android emulator targeting android 4.4.2
does onTouchEvent get called when you start seeing this problem?
Yeah; onTouchEvent is still called as it should be.
The only thing I would think of is to try to disable View's drawing optimizations by calling setWillNotDraw(false); in constructor.
Thanks, I actually tried that and forgot to include it in the code above. It didn't fix things.
Seems like this is a bug with the emulator, I've tried the exact same code with the Genymotion emulator and everything works perfectly. Haven't figured out how to fix it in the regular emulator except by not calling invalidate() more than once every 50 milliseconds or so.
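The workaround mentioned above — not calling invalidate() more than once every 50 milliseconds or so — can be sketched as a small throttle helper. This is a hypothetical class, not part of the Android API:

```java
// Hypothetical helper: rate-limits redraw requests so invalidate()
// is issued at most once per interval.
class InvalidateThrottle {
    private final long intervalMs;
    private long lastMs;

    InvalidateThrottle(long intervalMs) {
        this.intervalMs = intervalMs;
        this.lastMs = -intervalMs; // so the very first request is allowed
    }

    // Returns true when enough time has passed to allow another invalidate().
    boolean allow(long nowMs) {
        if (nowMs - lastMs >= intervalMs) {
            lastMs = nowMs;
            return true;
        }
        return false;
    }
}
```

In onTouchEvent one would then call invalidate() only when throttle.allow(...) returns true, passing a monotonic clock reading such as SystemClock.uptimeMillis(); 50 ms matches the figure quoted above.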
I'm having the same issue with the emulator. Real devices work fine. Is there any bug report about this somewhere?
Mysqldump without password in crontab
I'm trying to back up my database with mysqldump and cron jobs.
Well, I added the following command to the crontab of user root:
*/30 * * * * mysqldump -u root -pVERYSECUREPASSWORD --all-databases > /var/www/cloud/dump_komplett.sql &> /dev/null
This works fine so far, but the problem is that the password is set in this command.
So I want to include a .database.cnf file that look like this
[mysqldump]
user=root
password=VERYSECUREPASSWORD
and changed the mysqldump command to
mysqldump --defaults-extra-file="/var/crons/mysql/.database.cnf" --all-databases -u root > /var/www/cloud/dump_komplett.sql
to solve this problem.
But this command fails with the error:
mysqldump: Got error: 1045: Access denied for user 'root'@'localhost' (using password: YES) when trying to connect
I don't know what's wrong.
Here are some commands I also tried:
mysqldump --defaults-extra-file="/var/crons/mysql/.database.cnf" --all-databases > /var/www/cloud/dump_komplett.sql
mysqldump --defaults-file="/var/crons/mysql/.database.cnf" --all-databases > /var/www/cloud/dump_komplett.sql
mysqldump --defaults-file="/var/crons/mysql/.database.cnf" --all-databases -u root > /var/www/cloud/dump_komplett.sql
and .database.cnf contents I also tried:
[client]
user=root
password=VERYSECUREPASSWORD
[mysqldump]
host=localhost
user=root
password=VERYSECUREPASSWORD
[client]
host=localhost
user=root
password=VERYSECUREPASSWORD
Do any of those commands work from your command line?
I tried all of these commands from the command line, too (not only in the crontab).
Check this out http://stackoverflow.com/a/6861458/1860929
Thanks for the Link, the user has to be specified in the command and not in the file with the "u" parameter.
Cool then, have added that as the answer.
The user has to be specified in the command with the -u parameter, and not in the file.
For more details on scheduling cron jobs using mysqldump, check this answer
I found out that the password should be between quotes:
[client]
user=root
password="VERYSECUREPASSWORD"
Took me a while to figure out why it didn't work with passwords containing lots of non-alphanumeric symbols.
Use quotes for long passwords with @#%^! characters in them.
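Putting both answers together — user given on the command line with -u, password quoted inside the options file — a hedged sketch; the path and password are placeholders:

```shell
#!/bin/sh
# Placeholder path and password; adjust to your setup.
CNF=/tmp/.database.cnf
cat > "$CNF" <<'EOF'
[mysqldump]
host=localhost
password="VERYSECUREPASSWORD"
EOF
chmod 600 "$CNF"   # keep the password unreadable to other users
# The crontab entry would then look like this (no -p and no user= in the file;
# --defaults-extra-file must come first on the mysqldump command line):
echo "*/30 * * * * mysqldump --defaults-extra-file=$CNF -u root --all-databases > /var/www/cloud/dump_komplett.sql"
```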
Selenium IJavaScriptExecutor isn't working
For whatever reason IJavaScriptExecutor doesn't seem to be working in Selenium, for me at least.
Perhaps changes have been made to the library and I'm referencing it incorrectly? But I have no syntax errors.
code is -
IJavaScriptExecutor js = (IJavaScriptExecutor)driver;
js.ExecuteScript("alert('Hello');");
have you launched the home page before executing the above code?
Maybe you're using a RemoteWebDriver instance which does not support JavaScript like HtmlUnit?
It should normally be possible to execute JavaScript from the major modern web browsers.
Make sure to use the most recent version of the Selenium.WebDriver package from NuGet and the appropriate package for the browser you're trying to automate, i.e. Selenium.WebDriver.ChromeDriver.
Check out Selenium with C Sharp to see the project setup instructions, you might also be interested in a sample project you can use as a basis.
Bitwise operations in Java give different value with similar code?
Short question. I am quite new to doing bit/bytewise operations in Java, but I noticed something strange.
The code below gives as output:
2
0
code:
String[] nodes = new String[3];
boolean in;
int index = 0;
int hash = "hi".hashCode();
in = (nodes[(index = nodes.length - 1) & hash]) != null;
System.out.println(index);
index = (nodes.length - 1) & hash;
System.out.println(index);
Why is it, that index has a different value even though the operation to assign a value to index is identical in the 5th and 7th line of code?
Like I said, I'm new to bitwise/bytewise operations so I'm probably missing some background information.
I have no idea what that code is supposed to do or demonstrate. You're wrong about the assignments being identical though, so...
First time you do index = nodes.length - 1, second time you do index = (nodes.length - 1) & hash, so why do you expect them to be the same?
On the first print, index = nodes.length - 1. On the second, index = (nodes.length - 1) & hash;. Note that you are ignoring in.
(index = nodes.length - 1) & hash
assigns nodes.length - 1 to index, then bitwise-ands it with hash.
index = (nodes.length - 1) & hash
bitwise-ands nodes.length - 1 with hash, then assigns the result to index.
Not the same thing.
Your first assigment to index happens in the parenthesis here:
in = (nodes[(index = nodes.length - 1) & hash]) != null;
Which is
index = nodes.length - 1
and has the value 2. Your second assignment (index = (nodes.length - 1) & hash;) differs; you added a & hash which presumably has the 2 bit as 0, thus when you perform the bitwise & it becomes 0.
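With concrete numbers the two parses are easy to check: "hi".hashCode() is 3329 (fully specified by the String.hashCode contract), and bit 1 of 3329 is 0, so 2 & 3329 == 0 — which reproduces the question's output of 2 then 0. An annotated sketch:

```java
class BitwiseParseDemo {
    static void run() {
        String[] nodes = new String[3];
        int hash = "hi".hashCode(); // 3329, specified by String.hashCode
        int index;
        // Parse 1: the inner parentheses run first, so index is assigned 2;
        // the '& hash' only affects the array subscript, not index.
        boolean in = (nodes[(index = nodes.length - 1) & hash]) != null;
        System.out.println(index); // 2
        // Parse 2: the & runs before the assignment,
        // so index receives 2 & 3329 == 0.
        index = (nodes.length - 1) & hash;
        System.out.println(index); // 0
    }
}
```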
Compute $\int_{\gamma} z\, dz$ for any smooth path $\gamma$ which begins at $z_0$ and ends at $w_0$
Compute the complex line integral$$\int\limits_{\gamma} z\, \mathrm{d}z$$ for any smooth path $\gamma$ which begins at $z_0$ and ends at $w_0$.
Confused as to how I am supposed to go about parametrizing any smooth path $\gamma$. I know how to solve this for a line from $z_0$ to $w_0$ but I am stumped by "any" smooth path.
"Any" means you can pick whatever path you like. A straight line from $z_0$ to $w_0$. One that circles around a whole lot. One that starts off in the opposite direction, turns around, and then goes straight to $w_0$. This implies that no matter what path you choose, you'll get the same answer.
Hint: $z$ is the derivative of $\frac {z^{2}} 2$.
@ndhanson3 What about $f(z)=z$ makes it so that the integral is independent of path?
@logitechmouse The Fundamental Theorem of Calculus that you learned in calc 1 actually applies to complex functions as well, under some restrictions. Since $f(z)=z$ is defined for any $z\in \mathbb{C}$, the FTC applies. Things get more complicated when you encounter functions like $f(z)=\frac{1}{z}$ and you have paths around 0.
Allow $\gamma$ to be an arbitrary continuously differentiable curve $\gamma(t) : [a,b] \to \Bbb C : t\mapsto u(t) + iv(t)$. Go to the definition of the path integral to convert $\int_\gamma z\,dz$ to a sum of integrals $\int_a^b \phi_1(t)\,dt + i\int_a^b \phi_2(t)\,dt$ where $\phi_1, \phi_2$ are the functions (in terms of $u, v$) you get from the definition. Discover that by the Fundamental Theorem of Calculus, the values of these integrals will depend only on $\gamma(a)$ and $\gamma(b)$, independent of how $\gamma$ gets from one to the other.
Lastly, note that this only worked because $z$ is the derivative of another function, as Kavi Rama Murthy pointed out. "Defined everywhere" is not enough to make it work.
$f(z)=z$ is an entire function in the complex plane, so $\int_\gamma z\,dz$ only depends on the endpoints of $\gamma$ and not the "interior" of it (like a conservative vector field). The result for any path from $z_0$ to $w_0$ is the same as the result for the straight line from $z_0$ to $w_0$: $\frac12(w_0^2-z_0^2)$.
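The computation behind the stated result can be made explicit via the Fundamental Theorem of Calculus applied to $\frac{z^2}{2}$, a primitive of $z$ (as Kavi Rama Murthy's hint suggests), for a smooth $\gamma:[a,b]\to\Bbb C$ with $\gamma(a)=z_0$ and $\gamma(b)=w_0$:

```latex
\int_\gamma z\,dz
  = \int_a^b \gamma(t)\,\gamma'(t)\,dt
  = \int_a^b \frac{d}{dt}\!\left(\frac{\gamma(t)^2}{2}\right) dt
  = \frac{\gamma(b)^2 - \gamma(a)^2}{2}
  = \frac{w_0^2 - z_0^2}{2}.
```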
Behaviour of Laplace transform as $s \to \infty$
I have shown that if $f$ is a continuous function with scaling symmetry given by $f(mt)=m^kf(t)$ for any $m>0$, then $(\mathcal{Lf})(s) = m^{-k-1}(\mathcal{Lf})(\frac{s}{m})$.
For this $f$, what can be said about $\int_{0}^{\infty}f(t)\,dt$, and what is the behaviour of $\mathcal{L}f$ as $s \to \infty$?
My initial thought is that the integral $\int_{0}^{\infty}f(t)\,dt$ diverges, because $f$ is defined as a function with scaling symmetry - it increases exponentially and therefore since you integrate on $\mathbb{R_+}$ it will diverge - I'm not sure if that argument is actually correct though.
As for the other part of the question - I think it converges to $0$, because of the definition: $\mathcal{L}(f) = \int_{0}^{\infty}e^{-st}f(t)\,dt $
If your condition is true for all $m>0$ and all $t\geq 0$, then actually $f(m) = f(m\cdot 1) = m^k f(1)$, so the functions you are looking at are just $f(t) = c t^k$.
There's an argument missing from the identity $(\mathcal{Lf})(s) = m^{-k-1}\frac{s}{m}(\mathcal{Lf})$. The Laplace transform on the RHS is evaluated where?
You're indeed right, sorry about that typo... It should be evaluated at $s/m$ and not multiplied by $s/m$ ...
Can you elaborate on how you get $f(t)=ct^k$ Fabian? Does this mean that that the integral diverges?
edit: I think I realized why... if $f(t) = ct^k$ we see that $f(mt) = c(mt)^k$, which is the same as $f(mt) = m^k f(t) = m^k c t^k = c(mt)^k$...
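Granting Fabian's point that the scaling hypothesis forces $f(t)=c\,t^k$ with $c=f(1)$, both questions can be answered directly. A sketch, assuming $k>-1$ so that the transform converges (substitute $u=st$ for the second integral):

```latex
\int_0^\infty c\,t^k\,dt = \infty \quad (c \neq 0),
\qquad
(\mathcal{L}f)(s) = c\int_0^\infty t^k e^{-st}\,dt
  = \frac{c\,\Gamma(k+1)}{s^{k+1}} \xrightarrow[\;s\to\infty\;]{} 0.
```

So the integral over $\mathbb{R}_+$ diverges (by polynomial growth, not exponential), and $\mathcal{L}f$ does indeed tend to $0$ as $s\to\infty$.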
Ansible returns 'AnsibleUnicode' instead of JSON, unable to filter
I'm unable to filter the following output with json_query, even though it works on a jmespath validator website.
Let me explain in detail with the code.
type_debug returns AnsibleUnicode for members value, so when I tried to use json_query to filter output it returned null. I need to get value from key ip, but I don't know how to do it in a good way.
---
- hosts: localhost
gather_facts: false
vars:
- RPN:
stdout: >-
"members": [
{
"id": 0,
"ip": "x.x.x.x",
"owner": "buzut",
"private_ip": "<IP_ADDRESS>",
"speed": 100,
"status": "active"
}
]
tasks:
- debug:
msg: "{{ RPN.stdout | type_debug }}"
I don't see your playbook using JMESPath. And your JSON is not valid.
From your question I understand that you would like to gather all members with a json_query. To do so I've corrected the JSON slightly.
---
- hosts: localhost
become: false
gather_facts: false
vars:
RPN:
stdout: >-
{
"members": [{
"id": 0,
"ip": "x.x.x.x",
"owner": "buzut",
"private_ip": "<IP_ADDRESS>",
"speed": 100,
"status": "active"
}]
}
tasks:
- name: Show type(s)
debug:
msg:
- "{{ RPN.stdout | type_debug }}"
- "{{ RPN.stdout | from_json | type_debug }}"
- "{{ RPN.stdout | to_json | type_debug }}"
- name: Show members
debug:
msg: "{{ RPN.stdout | from_json | json_query('members') }}"
resulting into an output of
TASK [Show type(s)] *********
ok: [localhost] =>
msg:
- AnsibleUnicode
- dict
- str
TASK [Show members] *********
ok: [localhost] =>
msg:
- id: 0
ip: x.x.x.x
owner: buzut
private_ip: <IP_ADDRESS>
speed: 100
status: active
I need to get value from key ip
- name: Show value of key 'ip'
debug:
msg: "{{ RPN.stdout | from_json | json_query('members[*].ip') }}"
resulting into an output of
TASK [Show value of key 'ip'] *********
ok: [localhost] =>
msg:
- x.x.x.x
Further Documentation
Selecting JSON data: JSON queries
Some confusion about spectral sequences and how to calculate with them
We have the Leray's theorem:
Let $\pi:E\longrightarrow B$ be a fiber bundle with fiber $F$ over a simply connected base space $B$. Assume that in every dimension $n$, $H^{\ast}(F)$ is of finite rank and free. Then there exist a spectral sequence $\left\{(E_r,d_r)\right\}$ with
$$
E_2^{p,q}=H^{p}(B)\otimes H^{q}(F)
$$
and a filtration $\left\{D_i\right\}$ on $H^{\ast}(E)$ s.t.
$$
E_{\infty}=\bigoplus_{p,q}{E_{\infty}^{p,q}}\simeq GH^{\ast}\left( E \right)
$$
i.e. there is a filtration $\left\{D_i\cap H^n\right\}$ on $H^{n}:=H^{n}(E)$ s.t.
$$
H^n = D_0\cap H^n \supset D_1\cap H^n \supset D_2\cap H^n \supset \cdots
$$
What confuses me is why the quotient $\frac{D_p\cap H^n}{D_{p+1}\cap H^n}=E_{\infty}^{p,n-p}$ holds.
Another thing that puzzles me is that when we use Leray's theorem to calculate the cohomology of $\mathbb{C}\mathbb{P}^2$, the stable page $E_{\infty}=E_3$ looks like
When $p\geqslant 5$, the cohomology groups $H^{\ast}(\mathbb{C}\mathbb{P}^2)$ are all trivial; I don't know how to attack this. The concept of spectral sequences feels like a cloud of fog to me. Do we have any good ways to understand it in geometric topology?
I would be grateful if you could help me!
Have you seen the details in the construction of the spectral sequence? The identification of the $E_\infty$ page with the associated graded is not difficult, but requires some bookkeeping. Also, be clear about the fibration in the second half of your question. Assuming it's $S^1 \to S^5 \to \mathbb{C}P^2$, note that $H^5(S^5) \cong \mathbb{Z}$, so there must be something in the $(p,q) = (4,1)$ box since that's the only nonzero contribution to degree 5. Multiplicativity implies there is something in the $(4,0)$ box, which must be killed by something in the $(2,1)$ box, etc.
How to set the default style to a React Component
The React component is great because it lets us define our own tags, which can be more powerful than the original ones.
Currently I am only using the React component to define the HTML structure, and setting the className of each tag to let the style sheet modify its appearance. Is there a way to give a default style to components without using inline CSS?
One idea I have is to add one CSS file associated with each component, or add a style tag directly in the jsx file for each component, then compile the CSS or jsx to put all the CSS in one file and add it in the head part.
I do not think there is a clear solution to that atm. BUT at React Conf in January they talked about their new "inline"-styling scheme which came with React Native. With that you can create JavaScript objects which hold the styles. StyleSheet.create({ 'myComponent' : { margin: 5, padding: 5 }});
This lets you keep the component CSS apart and you could then add the css to your component structure.
Is this the inline style? I think it's better to let the user have the privilege to modify the style of the component via the style sheet. This will let us build components with a beautiful UI as well as scalability.
It's inline style, but you separate it into a .js file instead of .css. This means it serves almost the same purpose as a stylesheet. The user would only need to alter one file. Here is an example from a React Native project I'm working on: https://jsfiddle.net/0aqgxr5v/ PS: it works with React as well.
It is easily expressed with inline style approach in React:
var compStyle = {
color: 'white',
backgroundImage: 'url(' + imgUrl + ')',
margin: 5
};
React.render(<div style={compStyle}>Component</div>, mountNode);
Nah, it's better not to use inline styles, because I want others to have the ability to modify the style via CSS.
Consider that A has written a component with a beautiful UI, and B wants to use this component. Somehow B wants to change some part of the style via CSS, but if A used inline CSS, there's no chance for B to modify that via CSS.
That is unrelated to the original question, but it is easy to pass new styles object into component via props and merge them with the default styles. So you can modify default styles with 'inline approach'.
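The prop-merge approach mentioned in the last comment can be sketched like this; the names are illustrative, not from any particular library:

```javascript
// Default styles live with the component; callers may override them via props.
const defaultStyle = {
  color: 'white',
  margin: 5,
};

// Shallow-merge caller-supplied overrides over the defaults.
function mergeStyles(defaults, overrides) {
  return Object.assign({}, defaults, overrides || {});
}

// A component would then render with style={mergeStyles(defaultStyle, props.style)}.
const merged = mergeStyles(defaultStyle, { margin: 10 });
// margin is now 10, color is still 'white', and defaultStyle is untouched.
```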
How to say 一〇年 in a sentence
I see this structure a lot:
まるで一〇年以上.......
What does the 一〇 mean and how do I pronounce it? Is it いちまる?
Sometimes you find 〇 being used together with kanji numerals (一, 二, 三, ...) to signify "zero".
In horizontal writing it is more common to express numbers using Arabic numerals (0, 1, 2, 3, ...) — and Arabic numerals are often also used in vertical writing.
Here
一〇年以上 = 十年以上 = 10年以上
jū nen ijō
more than ten years
Converting an int to String in Python
my_url = "https://swgoh.gg/db/missions/lightside/?stage=M0{}"
for i in range(1,10):
my_url = my_url.format(i)
print(my_url)
print(i)
The variable i increments properly, but my_url always ends in stage=M01. Any help on fixing this to increment with i?
typo: use for i in range(1,10): url = my_url.format(i) print(url) print(i)
You are overwriting the variable my_url, therefore the {} is gone and the format does not apply; what you must do is create a temporary variable.
for i in range(1,10): print('https://swgoh.gg/db/missions/lightside/?stage=M0'+str(i))
Yeah, I see that now. I feel pretty dumb about it. I just thought I had the python formatting off, but it was a lot simpler than that.
It is required not to overwrite my_url because, in your case, in the 2nd iteration format() looks for {} inside the new string and doesn't find it; that's why it prints the same string obtained in the first iteration.
Please change your code like this.
for i in range(1,10):
my_url = "https://swgoh.gg/db/missions/lightside/?stage=M0{}".format(i)
print(my_url)
print(i)
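As an aside, the hard-coded M0 prefix only works for single-digit stages; a format spec does the zero-padding for you (assuming two-digit stage codes, which is a guess about the site's URL scheme):

```python
base = "https://swgoh.gg/db/missions/lightside/?stage=M{:02d}"

# Build a fresh URL each iteration instead of overwriting the template.
urls = [base.format(i) for i in range(1, 10)]
print(urls[0])   # https://swgoh.gg/db/missions/lightside/?stage=M01
print(urls[-1])  # https://swgoh.gg/db/missions/lightside/?stage=M09
```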
Ionic Swipe Left does not work with item-icon-right
Plunker showing the behavior
I'm running into some confusing behavior. With an ion-list and ion-items, I'm setting attributes to enable swiping.
<ion-list can-swipe="true">
<ion-item ng-repeat="element in ctrl.data" class="item-remove-animate item-icon-right">
<span>{{ element }}</span>
<span class="item-note">Date</span>
<i class="icon ion-chatbubble-working"></i>
<ion-option-button class="button-assertive icon ion-trash-a" ng-click="ctrl.delete($index) ">
</ion-option-button>
</ion-item>
</ion-list>
However, there are two issues that I'm running into:
The remove animation does not ever seem to work
If I set "item-icon-right" on the ion-list, then the swipe messes up completely. If I add an "i" tag with the class, swipe works.
Does anyone know why this is happening? I'm new to CSS and ion, so it's a bit difficult for me to know what to look for while debugging.
If you could tell me your thought process or point to related articles for debugging these unexpected behaviors, I'd really appreciate it
For deletion animation, it seems that collection-repeat does not add "ng-leave" class due to a performance hit. (ref)
For the swipe left bug, I had to remove "icon" from the ion-option class.
Here's a sample, hope this helps you:
<ion-list class="animate-ripple" show-delete="false" can-swipe="true" swipe-direction="both">
<ion-item ng-show="orders_list.length==0">
<h2>No orders, Please go to Menu and Select Create a Order</h2>
</ion-item>
<ion-item item="order"
ng-repeat="order in orders_list | custom:filterOrder"
class="item-text-wrap" >
<a class="item item-icon-right" style=" border:none; text-align:left"
ng-click="showOrdersDetail(order.order_id)">
<i class="icon ion-chevron-right"></i>
<h2>Order No. {{order.order_id}}</h2>
<h5>TOTAL:{{order.total_pagar}}</h5>
<h5>ITEMS:{{order.total_items}}</h5>
</a>
<div ng-if="filterOrder.tipo_estado === 'ACT'">
<ion-option-button class="button-assertive" ng-click="del_order(order,{{$index}});">
Del
</ion-option-button>
<ion-option-button class="button-balanced" ng-click="pay_order(order);">
Pay
</ion-option-button>
</div>
<div ng-if="filterOrder.tipo_estado === 'PAG'">
<ion-option-button class="button-balanced" ng-click="showLinceseList(order.order_id);">
Apply a<br> License
</ion-option-button>
</div>
</ion-item>
</ion-list>
Is there a way to condition-define the code for Delphi 11.2 specifically instead of 11.x?
I know I can use {$IFDEF VER350} to detect Delphi 11.x.
What can I use to detect 11.2 specifically? For instance, if 11.1 or 11.3 versions do not have an issue, but 11.2 does.
The duplicate question shows the additional constants defined for each update version. You will have to check whether there is a constant for 11.2 and not for 11.3 to know you are dealing with the 11.2 update.
@DalijaPrasnikar good catch, I forgot about those extra constants
VERxxx values are not updated during point releases.
There are CompilerVersion, RTLVersion, and RTLVersionXXX constants that you can use with the {$IF} directive.
CompilerVersion and RTLVersion are not guaranteed to always be updated on point releases. Sometimes RTLVersion is, depending on the extent of updates made to the RTL in a point release, and if Embarcadero remembers to increment the value.
For your particular case, there are RTLVersion11x constants defined for each point release of 11.x:
RTLVersion: Comp = 35;
RTLVersion111: Boolean = True;
RTLVersion112: Boolean = True;
RTLVersion113: Boolean = True;
So, you can first check if RTLVersion (or CompilerVersion) is 35.x for an 11.x release, and then check if RTLVersion112 exists and RTLVersion113 does not exist, eg:
{$IF ((RTLVersion >= 35) AND (RTLVersion < 36))
  AND Declared(RTLVersion112)
  AND (NOT Declared(RTLVersion113))}
Starting with 12.0, there is a new GetRTLVersion() function, which is documented as including a minor version for update releases:
Returns the RTL version number of the system unit when it is compiled. The RTLVersion constant can be used for the expression in the conditional compilation. GetRTLVersion includes two bytes:
Upper byte: It holds the RTL major version.
Lower byte: It holds the RTL minor version. Usually, the minor version is the number of the update release. For example, for RAD Studio 12 - Update 1, GetRTLVersion would return $2401.
But, this would be a runtime check, not a compile-time check, and it doesn't help you for 11.x and earlier versions.
If a relation is euclidean, is it necessarily asymmetric?
$R$ is relation on set $A$, that is $R\subseteq A \times A $. $R$ is euclidean if $(\forall x,y,z\in A)(xRy\land xRz \Rightarrow yRz)$. $R$ is asymmetric if $(\forall x,y\in A)(xRy\Rightarrow \lnot(yRx)).$
For example, if $R$ is a euclidean relation on $A$ and $(1,2)\in R$, then because $R$ is euclidean and $(1,2), (1,2)\in R \Rightarrow (2,2)\in R$, which means it isn't asymmetric (because every asymmetric relation is necessarily irreflexive).
But if $R$ is an empty relation, then it's both asymmetric and euclidean, which means an euclidean relation is not necessarily asymmetric. Or am I thinking too much into it?
What is the euclidean relation?
@Wuestenfux: https://en.wikipedia.org/wiki/Euclidean_relation
@Wuestenfux I edited the post so that it contains the definitions of euclidean and asymmetric relations now. Thanks for pointing it out :)
No. From $xRy ∧ xRz$, by commutativity of $\land$ both $yRz$ and $zRy$ follow.
@MauroALLEGRANZA Yes, but isn't an empty relation a relation that is both euclidean and asymmetric?
What I mean is: a euclidean relation is not necessarily asymmetric. This means that the definition of euclidean does not imply asymmetry. We can have a "degenerate" case...
Let $R$ be a euclidean relation on $A$, and let $(x,y) \in R$.
Then $xRy \land xRy \Rightarrow yRy$, which means a euclidean relation can't be asymmetric if there exists an $(x,y) \in R$. In the case of the empty relation, there are no such elements, so the implication holds vacuously.
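Both observations can be checked by brute force on small finite relations. A minimal sketch (the helper names are illustrative, not from the discussion above):

```python
def is_euclidean(rel):
    # euclidean: xRy and xRz must imply yRz
    return all((y, z) in rel
               for (x, y) in rel for (x2, z) in rel if x == x2)

def is_asymmetric(rel):
    # asymmetric: xRy must imply not yRx
    return all((y, x) not in rel for (x, y) in rel)

empty = set()
assert is_euclidean(empty) and is_asymmetric(empty)  # both hold vacuously

R = {(1, 2), (2, 2)}  # euclidean closure of {(1, 2)}
assert is_euclidean(R) and not is_asymmetric(R)      # euclidean, not asymmetric
```

The nonempty example confirms the argument above: once $(1,2)\in R$, euclideanness forces $(2,2)\in R$, which kills asymmetry; only the empty relation escapes.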
Group by unique value with data.table
I have a data.table with more than 130 000 rows.
I wanted to group two columns, dates and progress, by a variable id and put the values in a vector, so I used aggregate().
df_agr <- aggregate(cbind(progress, dates) ~ id, data = df_test, FUN = c)
However it takes around 52 seconds to aggregate the data + I lose the date format from the col dates.
An example of the dataframe :
id dates progress
1: 3505H6856 2003-07-10 yes
2: 3505H6856 2003-08-21 yes
3: 3505H6856 2003-09-04 yes
4: 3505H6856 2003-10-16 yes
5: 3505H67158 2003-01-14 yes
6: 3505H67158 2003-02-18 yes
7: 3505H67862 2003-03-06 yes
8: 3505H62168 2003-04-24 no
9: 3505H62168 2003-05-15 yes
10: 3505H65277 2003-02-11 yes
The result I get :
id progress dates
1 3505H62168 1, 2 5, 6
2 3505H65277 2 2
3 3505H67158 2, 2 1, 3
4 3505H67862 2 4
5 3505H6856 2, 2, 2, 2 7, 8, 9, 10
I was quite surprised to see that everything is converted into integers, and that rows which seem to contain "independent" vectors are, in fact, vectors from a list:
'data.frame': 5 obs. of 3 variables:
$ id : chr "3505H62168" "3505H65277" "3505H67158" "3505H67862" ...
$ progress:List of 5
..$ 1: int 1 2
..$ 2: int 2
..$ 3: int 2 2
..$ 4: int 2
..$ 5: int 2 2 2 2
$ dates :List of 5
..$ 1: int 5 6
..$ 2: int 2
..$ 3: int 1 3
..$ 4: int 4
..$ 5: int 7 8 9 10
I tried to convert back the dates in the right format with :
lapply(df_agr$dates, function(x) as.Date(x, origin="1970-01-01"))
but I got :
$`1`
[1] "1970-01-06" "1970-01-07"
$`2`
[1] "1970-01-03"
$`3`
[1] "1970-01-02" "1970-01-04"
$`4`
[1] "1970-01-05"
$`5`
[1] "1970-01-08" "1970-01-09" "1970-01-10" "1970-01-11"
So it seems the origin is not "1970-01-01" as written in the documentation; maybe it is the lowest date from the data?
So my question is : how to get the same result I got with aggregate() with data.table while keeping the date format ?
So it means how to group by unique id with data.table. I tried :
setDT(df)[,list(col1 = c(progress), col2 = c(dates)), by = .(unique(id))]
But of course I got the followed error :
Error in [.data.table(df, , list(col1 = c(progress), col2 =
c(dates)), : The items in the 'by' or 'keyby' list are length (5).
Each must be same length as rows in x or number of rows returned by i
(10).
Data :
structure(list(id = c("3505H6856", "3505H6856", "3505H6856",
"3505H6856", "3505H67158", "3505H67158", "3505H67862", "3505H62168",
"3505H62168", "3505H65277"), dates = structure(c(12243, 12285,
12299, 12341, 12066, 12101, 12117, 12166, 12187, 12094), class = "Date"),
progress = c("yes", "yes", "yes", "yes", "yes", "yes", "yes",
"no", "yes", "yes")), .Names = c("id", "dates", "progress"
), class = c("data.frame"), row.names = c(NA, -10L
))
by = .(id) instead of by = .(unique(id))
@ErdemAkkas Yeah but I want to group by unique id.
You can use paste0, I think, as below. You need to change the date to character so that it isn't converted to its numeric counterpart; running the query below without converting the dates will give you values like 12166 and 12187. In your query you are also using c to combine the objects, whereas we should use paste to combine. Also, in data.table, using .(id) in by gives you unique values on the by items, unless your query has something that keeps things from being unique; for example, in this case, if you leave out the collapse argument you won't get unique keys on id. I hope this is helpful. Thanks:
df_agr <- aggregate(cbind(progress, as.character(dates)) ~ id, data = df, FUN = paste0)
> df_agr
id progress V2
1 3505H62168 no, yes 2003-04-24, 2003-05-15
2 3505H65277 yes 2003-02-11
3 3505H67158 yes, yes 2003-01-14, 2003-02-18
4 3505H67862 yes 2003-03-06
5 3505H6856 yes, yes, yes, yes 2003-07-10, 2003-08-21, 2003-09-04, 2003-10-16
>
Using data.table:
setDT(df)[,.(paste0(progress,collapse=","), paste0(as.character(dates),collapse=",")), by = .(id)]
id V1 V2
1: 3505H6856 yes,yes,yes,yes 2003-07-10,2003-08-21,2003-09-04,2003-10-16
2: 3505H67158 yes,yes 2003-01-14,2003-02-18
3: 3505H67862 yes 2003-03-06
4: 3505H62168 no,yes 2003-04-24,2003-05-15
5: 3505H65277 yes 2003-02-11
OR just pointed out by David Arenberg, much easier way in data.table is, Thanks for valuable comments:
setDT(df)[, lapply(.SD, toString), by = id]
Thank you so much. It works perfectly; it only takes around 2-3 seconds with data.table instead of 52 seconds with aggregate.
@DavidArenburg Thanks for your feedback and comments , added in the solution.
A dplyr version.
library(dplyr)
df %>%
group_by(id) %>%
summarize (progress = paste(progress, collapse=","),
dates = paste(dates, collapse=",") )
# id progress dates
# <chr> <chr> <chr>
# 1 3505H62168 no,yes 2003-04-24,2003-05-15
# 2 3505H65277 yes 2003-02-11
# 3 3505H67158 yes,yes 2003-01-14,2003-02-18
# 4 3505H67862 yes 2003-03-06
# 5 3505H6856 yes,yes,yes,yes 2003-07-10,2003-08-21,2003-09-04,2003-10-16
How to recursively delete self referencing class C++
I have been scouring forums trying to understand how to recursively free all my memory from heap but the process is not completing.
The Game class is like a tree object, map[2][8][8] is the data on it. Game class represents a chess board and *moves[64] is an array of children Games.
My move generation function adds moves and increments roots by the number of moves generated.
When I do game.clear() to delete all children, it deletes the lowest layer of children, but fails after. What is going wrong with my code? My thought process is that delete is called on Game game, which calls the destructor, in the destructor, it recursively loops through all moves until you are at the bottom layer, then just works its way back up.
class Game {
public:
char data[2][8][8];
int roots = 0;
Game *moves[64];
~Game() {
for (int i = 0; i < roots; i++) {
delete moves[i];
moves[i] = NULL;
}
}
void clear() {
for (int i = 0; i < roots; i++) {
delete moves[i];
moves[i] = NULL;
}
}
for (int i = 0; i < 2; i++) {
for (int j = 0; j < 8; j++) {
delete[] map[i][j];
}
delete[] map[i];
}
delete[] map;
private:
....
void make_move(int x, int y, int dx, int dy) {
Game *move = new Game;
for (int i = 0; i < 2; i++)
for (int x = 0; x < 8; x++)
for (int y = 0; y < 8; y++)
(*move).map[i][x][y] = map[i][x][y];
(*move).t = t + 1;
if(map[0][x][y] == -map[0][x + dx][y + dy])
(*move).h = h + map[1][x + dx][y + dy];
(*move).map[0][x + dx][y + dy] = map[0][x][y];
(*move).map[0][x][y] = 0;
(*move).map[1][x + dx][y + dy] = map[1][x][y];
(*move).map[1][x][y] = 0;
moves[roots++] = move;
}
}
int num = 0;
int paths = 0;
void benchmark(Game &game, int depth) {
if (depth == 0)
return;
game.add_moves();
for (int i = 0; i < game.roots; i++)
benchmark(*game.moves[i], depth - 1);
num += game.roots;
paths++;
}
void print_benchmark(Game &game, int depth) {
int t1 = clock();
benchmark(game, 1);
int t2 = clock();
printf("num=%d, t=%.3f, mps=%.2f, paths=%d\n", num, (t2 - t1) / (double)CLOCKS_PER_SEC, num / ((t2 - t1) / (double)CLOCKS_PER_SEC), paths);
}
int main() {
Game game;
Handler io;
io.set(game, "reset");
print_benchmark(game, 5);
game.clear();
cin.ignore();
return 0;
}
delete &obj_gametree looks very suspect, can you please create a Minimal, Complete, and Verifiable Example and show us? You really need to show us some actual working code, your GameTree class doesn't make sense either, a Game destructor in a GameTree class?
Also, you do have a constructor which initializes GameTree::moves?
Shouldn't "delete[] moves" delete the array and therefore call all destructors of the array?
Keep your data in STL containers (e.g. like vector) and you do not need to care about destruction. Explicit new and delete is in many cases not the best way.
@Kellerspeicher I tried using lists and strings since they delete when they are out of reference, but it massively slowed my program.
@Anedar Yes thank you, I have just added that.
@JoachimPileborg more code has been added, and delete has been moved to delete a pointer, sorry about that. Also, I didn't know a 3D char array would have to be initialized; I'm not too great at programming, but isn't that allocated on the stack or something?
If STL is making things slow it is usually a matter of wrong usage. For example filling a vector by use of push_back() will reallocate frequently which is time consuming. Prior usage of reserve() is preventing it. In general STL containers are not slower than managing your memory yourself.
Every new has to have an associated delete and vice versa. I don't see the new associated to map you delete in clear(). What is map by the way? Are map and data the same?
data[][][] is not on the heap, so delete on it is wrong
@Kellerspeicher I am trying to use vectors now. thank you for the suggestion. if you have time would you explain what you mean by push_back()? what function should I use to avoid constant reallocation?
@LucaPizzamiglio sorry for the confusion, data and map are the same, I just changed them. new is located at function make_move, and so does that mean map[][][] does not have to be deleted at all?
If you want to append a new entry at the end of a vector you can do that using push_back. The vector is getting longer and needs more memory. If the reserved memory no longer is enough the container will reallocate to get more memory. This will slow down things. So if you know how much you will need, you should use reserve before. That is the difference between capacity and size.
@MichaelChoi map[][][] has not to be deleted, the Game object has to be deleted. In this case, map[][][] is part of the object itself.
@Kellerspeicher Hi, sorry to keep bothering you. I ran some benchmark tests to compare vectors with arrays, and arrays seem to be outperforming, meaning I am doing something wrong, right? I am going to take your word that vectors and STL containers are better, but I am not sure why. I copied and pasted my class and changed the vector to an array, so they are algorithmically identical. In the constructor of the vector version I reserved the same 64 slots, and push_back is used. Here are the results (mps is moves per second):
moves=594365
vectors:
t=7.520, mps=79037.90
arrays:
t=1.775, mps=334853.52
Please update your question to use a consistent set of variable names, and so that we have something that compiles.
There is a little trade-off for the comfort of containers. But it shall not be more than factor 4 as in your case. This is a clear evidence that some thing is wrong. But it is hard to evaluate without a Minimal, Complete, and Verifiable example.
There is a compromise between vector and array the boost::array. May be that better fits your needs. But I never used it.
So just a tip for getting your "code smell" sensors calibrated, deleting something you just took the address of is almost never correct. The only time I could think it'd be valid is to take the address of a bound reference that was dereferenced from a heap allocated pointer. But that's rare and itself a bit smelly, too. If I were to ever see a delete &smth without a comment that absolutely justifies its use, I'd really question the quality of the code. Just to give you an idea.
Bibtex php preg_match_all
I have a text file with a Bibtex export.
The text file has a number of entries following the pattern below.
@article{ls_leimeister,
added-at = {2013-01-18T11:14:11.000+0100},
author = {Wegener, R. and Leimeister, J. M.},
biburl = {http://www.bibsonomy.org/bibtex/27bb26b4b4858439f81aa0ec777944ac5/ls_leimeister},
journal = {International Journal of Technology Enhanced Learning (to appear)},
keywords = {Challenges Communities: Factors Learning Success VirtualCommunity and itegpub pub_jml pub_rwe},
note = {JML_390},
title = {Virtual Learning Communities: Success Factors and Challenges},
year = 2013
}
I want to use php and considered preg_match_all
The following didn't get me anywhere:
<EMAIL_ADDRESS>file_get_contents($file_path),$results);
I wanted to start simple, but that didn't really work.
I am kinda new to PHP regex.
The perfect final output would be:
Array
(
[0] => Array
(
['type'] => article
['unique_name'] => ls_leimeister
['added-at'] => 2013-01-18T11:14:11.000+0100
['author'] => Wegener, R. and Leimeister, J. M.
['biburl'] => http://www.bibsonomy.org/bibtex/27bb26b4b4858439f81aa0ec777944ac5/ls_leimeister
['journal'] => International Journal of Technology Enhanced Learning (to appear)
['keywords'] => Challenges Communities: Factors Learning Success VirtualCommunity and itegpub pub_jml pub_rwe
['note'] => JML_390
['title'] => Virtual Learning Communities: Success Factors and Challenges
['year'] => 2013
)
[1] => Array
(
[...] => …
)
)
@renanbr recommends renanbr/bibtex-parser: https://github.com/renanbr/bibtex-parser (which I assume is his own invention).
All BibTex documentation that I have seen will have the year value wrapped in curly brackets. Is this a typo while posting?
Try this : Here I have fetched only type and unique_name, by looking at it, you can fetch all others.
$str = '@article{ls_leimeister,
added-at = {2013-01-18T11:14:11.000+0100},
author = {Wegener, R. and Leimeister, J. M.},
biburl = {http://www.bibsonomy.org/bibtex/27bb26b4b4858439f81aa0ec777944ac5/ls_leimeister},
journal = {International Journal of Technology Enhanced Learning (to appear)},
keywords = {Challenges Communities: Factors Learning Success VirtualCommunity and itegpub pub_jml pub_rwe},
note = {JML_390},
title = {Virtual Learning Communities: Success Factors and Challenges},
year = 2013
}';
preg_match_all('/@(?P<type>\w+){(?P<unique_name>\w+),(.*)/',$str,$matches);
echo $matches['type'][0];
echo "<br>";
echo $matches['unique_name'][0];
echo "<br>";
echo "<pre>";
print_r($matches);
Output array format will be little different from yours, but you can change this format to yours.
Thanks this works, but the other lines are the more diffcult ones.
The number of lines is variable, and some lines have '{ ... },' while others don't.
Yes, I know it is difficult, but try to do it.
preg_match_all('/@(\w+){(.+),\s+(\S+)\s+=\s+{(.)},(.)/',$file_content,$results);
This yields the first line as well. How can I tell RegEx to retrieve an indefinite number of lines with the same format?
I would need to read out the matches for the entries and then do another preg_match for the different matches.
I need to preg_match for all entries first and then do the preg_match I previously posted.
preg_match_all('/@\w+{.+,\s+$([\s\S]+)}$/',$file_content,$results);
preg_match_all('/@\w+{.+,\s+$(.*)}$/',$file_content,$results);
Neither of them work, any help?
@PrasanthBendra Thanks, with your code it is easy to match the keywords such as title, author and so on by: preg_match_all('/(?P<unique_name>\w+)\s=\s\{(.*?)\},/s',$str,$matches);. Note that the /s modifier makes the dot also match newlines (PCRE_DOTALL), see http://php.net/manual/en/reference.pcre.pattern.modifiers.php
Pattern: /^@([^{]+)\{([^,]+),\s*$|^\s*([^\R@=]+) = \{(.*?)}/ms (Demo)
This pattern has two alternatives; each containing two capture groups.
type and unique_name are captured and stored in elements [1] and [2].
all other key-value pairs are stored in elements [3] and [4].
This separated array storage allows reliable processing when constructing the desired output array structure.
Input:
$bibtex='@BOOK{ko,
title = {Wissenschaftlich schreiben leicht gemacht},
publisher = {Haupt},
year = {2011},
author = {Kornmeier, M.},
number = {3154},
series = {UTB},
address = {Bern},
edition = {4},
subtitle = {für Bachelor, Master und Dissertation}
}
@BOOK{nial,
title = {Wissenschaftliche Arbeiten schreiben mit Word 2010},
publisher = {Addison Wesley},
year = {2011},
author = {Nicol, N. and Albrecht, R.},
address = {München},
edition = {7}
}
@ARTICLE{shome,
author = {Scholz, S. and Menzl, S.},
title = {Alle Wege führen nach Rom},
journal = {Medizin Produkte Journal},
year = {2011},
volume = {18},
pages = {243-254},
subtitle = {ein Vergleich der regulatorischen Anforderungen und Medizinprodukte
in Europa und den USA},
issue = {4}
}
@INBOOK{shu,
author = {Schulz, C.},
title = {Corporate Finance für den Mittelstand},
booktitle = {Praxishandbuch Firmenkundengeschäft},
year = {2010},
editor = {Hilse, J. and Netzel, W and Simmert, D.B.},
booksubtitle = {Geschäftsfelder Risikomanagement Marketing},
publisher = {Gabler},
pages = {97-107},
location = {Wiesbaden}
}';
Method: (Demo)
$pattern='/^@([^{]+)\{([^,]+),\s*$|^\s*([^\R@=]+) = \{(.*?)}/ms';
if(preg_match_all($pattern,$bibtex,$out,PREG_SET_ORDER)){
foreach($out as $line){
if(isset($line[1])){
if(!isset($line[3])){ // this is the starting line of a new set
if(isset($temp)){
$result[]=$temp; // send $temp data to permanent storage
}
$temp=['type'=>$line[1],'unique_name'=>$line[2]]; // declare fresh new $temp
}else{
$temp[$line[3]]=$line[4]; // continue to store the $temp data
}
}
}
$result[]=$temp; // store the final $temp data
}
var_export($result);
Output:
array (
0 =>
array (
'type' => 'BOOK',
'unique_name' => 'ko',
'title' => 'Wissenschaftlich schreiben leicht gemacht',
'publisher' => 'Haupt',
'year' => '2011',
'author' => 'Kornmeier, M.',
'number' => '3154',
'series' => 'UTB',
'address' => 'Bern',
'edition' => '4',
'subtitle' => 'für Bachelor, Master und Dissertation',
),
1 =>
array (
'type' => 'BOOK',
'unique_name' => 'nial',
'title' => 'Wissenschaftliche Arbeiten schreiben mit Word 2010',
'publisher' => 'Addison Wesley',
'year' => '2011',
'author' => 'Nicol, N. and Albrecht, R.',
'address' => 'München',
'edition' => '7',
),
2 =>
array (
'type' => 'ARTICLE',
'unique_name' => 'shome',
'author' => 'Scholz, S. and Menzl, S.',
'title' => 'Alle Wege führen nach Rom',
'journal' => 'Medizin Produkte Journal',
'year' => '2011',
'volume' => '18',
'pages' => '243-254',
'subtitle' => 'ein Vergleich der regulatorischen Anforderungen und Medizinprodukte
in Europa und den USA',
'issue' => '4',
),
3 =>
array (
'type' => 'INBOOK',
'unique_name' => 'shu',
'author' => 'Schulz, C.',
'title' => 'Corporate Finance für den Mittelstand',
'booktitle' => 'Praxishandbuch Firmenkundengeschäft',
'year' => '2010',
'editor' => 'Hilse, J. and Netzel, W and Simmert, D.B.',
'booksubtitle' => 'Geschäftsfelder Risikomanagement Marketing',
'publisher' => 'Gabler',
'pages' => '97-107',
'location' => 'Wiesbaden',
),
)
Here is the site that I extracted new sample input strings from.
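The two-alternative pattern technique above is PCRE, not PHP-specific. As an illustrative cross-check of the same idea (header groups vs. key-value groups, then a small state machine to assemble entries), here is a simplified Python equivalent; the sample data, helper name, and the slightly reduced pattern (no \R handling) are mine, not from the answer:

```python
import re

BIBTEX = """@BOOK{ko,
  title = {Wissenschaftlich schreiben leicht gemacht},
  year = {2011}
}
@ARTICLE{shome,
  author = {Scholz, S. and Menzl, S.},
  pages = {243-254}
}"""

# Two alternatives, mirroring the PHP pattern:
# entry headers land in groups 1-2, key-value pairs in groups 3-4.
PATTERN = re.compile(
    r'^@([^{]+)\{([^,]+),\s*$'       # @TYPE{unique_name,
    r'|^\s*([^@=\n]+) = \{(.*?)\}',  # key = {value}
    re.M | re.S)

def parse(text):
    entries, current = [], None
    for m in PATTERN.finditer(text):
        if m.group(1):  # header line: start a new entry
            current = {'type': m.group(1), 'unique_name': m.group(2)}
            entries.append(current)
        else:           # key-value line: extend the current entry
            current[m.group(3).strip()] = m.group(4)
    return entries

result = parse(BIBTEX)
```

The separated group storage plays the same role as PREG_SET_ORDER in the PHP version: a populated group 1 signals a fresh entry, otherwise the match extends the entry in progress.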
Ensure javascript badge / widget html is not changed
One of my clients wants to distribute a javascript widget that people can put on their websites. However he wants to ensure that the backlink is left intact (for SEO purposes and part of the price of using the widget). So the javascript he's going to distribute might look like this:
<script id="my-script" src="http://example.com/widget-script.js"></script>
<div style='font-size:10px'><a href='http://www.example.com/backlinkpage.html'>
Visit Example.com</a></div>
widget-script.js would display some HTML on the page. But what we want to ensure is that some wily webmaster doesn't strip out the backlink. If they do, we might display a message like "widget installed incorrectly" or something. Any ideas / thoughts?
Some code taken from this question.
There's no 100% way of preventing this, I'm afraid.
You could insert the link yourself with Javascript, but then it'd be for naught as far as PageRank goes.
You could give them the HTML with the link having an ID like mycompanybacklink and check with Javascript whether the element exists or not. If it doesn't, don't display the badge or whatever. If it does, you can verify that the link's href is your website and its text is what you want. You would have to edit the HTML you posted as sample so that the link comes before the script, not after. The element could still exist, however, but be blocked by some other element or simply hidden with CSS. You could then also do something akin to what jQuery does now with its :hidden selector: Instead of looking at the CSS property by itself (which is what a webmaster is most likely to try) you can just see whether the element itself or its parents take up any space in the document. I think this is done with offsetWidth and offsetHeight but I am not sure. Worth looking into, though....
I'm not trying to prevent somebody with a lot of javascript knowledge to not be able to remove the link. Just something simple that most webmasters if they delete it will get a message and put it back in.
Then just do the first technique. Check with the Javascript whether the link exists or not and whether the href of it is still pointing where you want.
If you wanted to ensure that the link is always there with the widget, you could just have it printed via JavaScript. However, I don't think search engines would pick it up as a backlink.
I think you're just going to have to trust that your users will act in good faith and show you the courtesy of not modifying/removing the link. You also need to accept that no matter what you do, a determined webmaster will be able to use your widget without displaying the link, and some inevitably won't, but they are likely to be in the minority (unless your backlink is just really intrusive or obnoxiously distracting).
Any JavaScript/HTML solution could simply be edited out by the webmaster. You'd have to make your widget in flash if you really want to prevent tampering.
How to compile an Android app with aapt2 without using any build tools?
I'm trying to compile an Android application I just created in Kotlin on Android Studio, without using Gradle or any other build tools. My intention is to speed up compiling applications without having to use Android Studio or install build tools like Gradle, ant or buck. I'm running into an issue linking files with aapt2.
I'm compiling all files in res folder into a zip file with aapt2. But when I try to link them aapt2 spits out errors.
The errors seem to be due to missing AppCompat libraries, and my question is how to successfully link all these and the Kotlin files to create a deployable APK file.
The following compiles all files in res folder and creates resources.zip file.
$AAPT compile --dir $APP_DIR/src/main/res -o $APP_DIR/build/intermediate/resources.zip
This however fails.
$AAPT link -o $APP_DIR/build/intermediate/ -I $PLATFORM $APP_DIR/build/intermediate/resources.zip --manifest $APP_DIR/src/main/AndroidManifest.xml
with following errors
error: resource style/Theme.AppCompat.Light.DarkActionBar (aka com.example.myapplication:style/Theme.AppCompat.Light.DarkActionBar) not found.
./projects/MyApplication/app/src/main/res/values/styles.xml:6: error: style attribute 'attr/colorPrimary (aka com.example.myapplication:attr/colorPrimary)' not found.
./projects/MyApplication/app/src/main/res/values/styles.xml:7: error: style attribute 'attr/colorPrimaryDark (aka com.example.myapplication:attr/colorPrimaryDark)' not found.
./projects/MyApplication/app/src/main/res/values/styles.xml:8: error: style attribute 'attr/colorAccent (aka com.example.myapplication:attr/colorAccent)' not found.
error: resource style/ThemeOverlay.AppCompat.Dark.ActionBar (aka com.example.myapplication:style/ThemeOverlay.AppCompat.Dark.ActionBar) not found.
./projects/MyApplication/app/src/main/res/values/styles.xml:11: error: style attribute 'attr/windowActionBar (aka com.example.myapplication:attr/windowActionBar)' not found.
./projects/MyApplication/app/src/main/res/values/styles.xml:12:
error: style attribute 'attr/windowNoTitle (aka com.example.myapplication:attr/windowNoTitle)' not found.
error: resource style/ThemeOverlay.AppCompat.Light (aka com.example.myapplication:style/ThemeOverlay.AppCompat.Light) not found.
error: failed linking references.
This appears to be due to missing AppCompat libraries. I've tried manually downloading the appcompat-v7-27.1.1.aar file and linking it, but this too fails.
If anyone has come across a solution to this please enlighten me. Thanks.
I want to replicate what's available here with aapt2
https://github.com/authmane512/android-project-template/blob/master/build.sh
I doubt, this would "speed up compiling applications". Gradle does a good job avoiding unnecessary compilations. In which part do you think you could do it faster?
Speeding up part is questionable I agree. I just wanted to see if I could do it without using Gradle for a project I'm working on. My plan is to pre-download dependencies gradle pulls in, into a folder and add them during aapt2 linking stage to create an intermediate apk file which I can then merge with kotlin binary files.
Actually, it is very difficult and messy to use extra support libraries. You are getting those errors because you are using the support library's theme, which is not part of android.jar. Instead you can use android.jar's default theme.
Just put
public class MainActivity extends Activity {...}
in place of
public class MainActivity extends AppCompatActivity {...}
and change your manifest's app theme to point into 'styles.xml'. So basically you have to change the styles XML file to this one:
<resources>
<!--
Base application theme, dependent on API level. This theme is replaced
by AppBaseTheme from res/values-vXX/styles.xml on newer devices.
-->
<style name="AppBaseTheme" parent="android:Theme.Light">
<!--
Theme customizations available in newer API levels can go in
res/values-vXX/styles.xml, while customizations related to
backward-compatibility can go here.
-->
</style>
<!-- Application theme. -->
<style name="AppTheme" parent="AppBaseTheme">
<!-- All customizations that are NOT specific to a particular API-level can go here. -->
</style>
</resources>
Also, you should use the code below in your AndroidManifest.xml file to point the theme of your project or Activity at the style defined in styles.xml under values:
android:theme="@style/AppTheme"
This reference article is very helpful for what exactly you are trying to do: https://geosoft.no/development/android.html. Also, sign your APK in order to run your app on a device, otherwise it can show an error while installing.
Thanks.
Use the link below as a reference for using aapt/aapt2 to compile a Java app, so that you can make one for compiling Kotlin on your own.
https://github.com/HemanthJabalpuri/AndroidExplorer
Inequality involving three functions
I have the following inequality, which I am not sure is correct or not.
$$\int_{0}^{h} \int_{0}^{h} \max(u,v) f(u) f(v) du dv \geq \int_{0}^{h} \int_{0}^{h} \min(u,v) du dv \int_{0}^{h}\int_{0}^{h} f(u)f(v) du dv, $$ where $f$ is an $L^2$ function, $f$ is the a.e. derivative of $F$, $F$ is also in $L^2$, and $0 \leq u, v \leq h$.
The final thing that I am trying to prove is the following:
$$\int_{0}^{h} \int_{0}^{h} \max(u,v) f(u) f(v) du dv \geq \int_{0}^{h} \int_{0}^{h} \min(u,v) du dv \int_{0}^{h} f^2 du .$$
I guess you mean $0\le u,v\le h$
right, you are correct here
You certainly are in trouble if $f$ is concentrated near $0$, say $f=1$ on $[0,\delta]$ and $0$ on $(\delta,h]$ with $\delta\ll h$.
The question is unclearly formulated. In particular, could $h$ be $\infty$?
No, h is finite. I am pasting a link to the place the basic problem has arisen from: https://statistics.stanford.edu/sites/default/files/EFS%20NSF%20159.pdf. On pg 9 the authors are trying to prove that $\frac{1}{h^2}\int_0^h (f - f_h)^2 - \frac{1}{12} \int_{0}^{h} (f')^{2}$ can be small. The $1/12$ fraction only comes if we are able to get some kind of inequality as above. Since the author has not explicitly mentioned it, I was trying to derive it. The $1/12$ fraction appears to be coming from the minimum value that the expression can have.
Thank you. Don't you miss condition (1.4) from the manuscript in your question?
Your inequalities are false in general, in view of homogeneity considerations. Indeed, note first that
\begin{equation}
\int_0^h\int_0^h \min(u,v) du dv=\int_0^h dv\int_0^v du\, u+\int_0^h du\int_0^u dv\, v=\frac{h^3}3.
\end{equation}
Let now $f=1$ on $[0,h]$. Then the left side of your displayed inequalities is
\begin{equation}
\int_0^h\int_0^h \max(u,v) f(u) f(v)\, du\, dv
\le\int_0^h\int_0^h h \, du\, dv=h^3,
\end{equation}
whereas their right sides are, respectively, $\frac{h^3}3\,h^2$ and $\frac{h^3}3\,h$, which are greater than $h^3$ if $h>3$.
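This counterexample is easy to confirm numerically. A quick sanity check in plain Python (midpoint Riemann sums; the choice h = 4 is mine, any h > 3 works):

```python
# Check the counterexample above: f = 1 on [0, h] with h = 4.
# Approximate the double integrals by midpoint Riemann sums on an n x n grid.
h, n = 4.0, 400
du = h / n
pts = [(i + 0.5) * du for i in range(n)]

lhs = sum(max(u, v) for u in pts for v in pts) * du * du      # ~ 2h^3/3
min_int = sum(min(u, v) for u in pts for v in pts) * du * du  # ~ h^3/3
rhs1 = min_int * h * h   # right side of the first displayed inequality (f = 1)
rhs2 = min_int * h       # right side of the second displayed inequality

assert lhs < rhs2 < rhs1  # both claimed inequalities fail for h = 4
```

With h = 4 the left side is about 42.7 while the right sides are about 341.3 and 85.3 respectively, matching the homogeneity argument: the sides scale as $h^3$ versus $h^5$ and $h^4$.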
Can you kindly elaborate your answer? In particular, why "Then the left side of your displayed inequalities is $\leq h^3$"?
I have provided details.
I woud state this simpler: if $f$ is dimensionless, and $u,v,h$ are measured in feet, then the left hand side has dimension $ft^3$ while the RHS has $ft^5$ and $ft^4$. And we have the same inconsistency whatever the dimension of $f$ is.
@AlexandreEremenko : This is how I was actually thinking (except that it was centimeters rather than feet :-); I certainly prefer the simple and natural Gauss metric unit CGS system (https://en.wikipedia.org/wiki/Gaussian_units) to any other), and this is what I meant by the "homogeneity considerations".
@IosifPinelis: when I studied in high school (in mid 1970s) we were taught that CGS is out of date, and replaced with SI system https://en.wikipedia.org/wiki/International_System_of_Units.
@AlexandreEremenko : I know that, and I think it is a great pity. Instead of the simple and natural formula $F=q_1q_2/r^2$ in CGS for the Coulomb law, in SI they have this terrible extra factor $1/(4\pi\epsilon_0)$. Instead of only three basic CGS units, in SI they have 7 basic units (if my count is correct), including (yes) the basic unit candela; and then they have about 3 million derived SI units, mostly named after persons. Looks quite crazy to me! When I was in high school (late 60s), we only had to deal with CGS, thankfully.
Both your inequalities are wrong, because the LHS can be negative.
Indeed, take $h=3,\; f(x)=\delta(x-1)-\delta(x-2)$ then the left hand side will
be $-1$. Now approximate the deltas with smooth functions.
On the other hand, in your second inequalty (the final thing) the right hand side
is always non-negative, while in the first inequality it is zero for the example above.
Thanks a lot for pointing this out.
Can you kindly elaborate your answer, giving us details?