Mock assert function called before exception is thrown
I am trying to unit test some code using Mock. I would like to raise an exception and test that the exception is caught and another function is called before it is re-raised:
except RegistrationError as e:
    car.create_log(car_details)
    raise e
The unit test:
car = Car()
car.registrations.update = Mock()
car.registrations.update.side_effect = RegistrationError()
car.create_log = Mock()
car.register_car('123123')
car.create_log.assert_called_once()
self.assertRaises(RegistrationError)
I can confirm the method throws an error but cannot test that the method create_log is called before the error is re-raised.
This is how you should use assertRaises:
with self.assertRaises(RegistrationError):
    car.register_car('123123')
car.create_log.assert_called_once()
or you can pass it a callable and arguments:
self.assertRaises(RegistrationError, car.register_car, '123123')
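Putting the pieces together, a self-contained sketch of the whole test might look like this. The Car and RegistrationError definitions are hypothetical stand-ins inferred from the code in the question:

```python
import unittest
from unittest.mock import Mock


class RegistrationError(Exception):
    pass


class Car:
    """Hypothetical stand-in for the Car class in the question."""

    def __init__(self):
        self.registrations = Mock()

    def create_log(self, car_details):
        pass

    def register_car(self, car_details):
        try:
            self.registrations.update(car_details)
        except RegistrationError:
            self.create_log(car_details)  # log first...
            raise                         # ...then re-raise


class CarTest(unittest.TestCase):
    def test_logs_before_reraising(self):
        car = Car()
        car.registrations.update = Mock(side_effect=RegistrationError())
        car.create_log = Mock()
        # The context manager swallows the re-raised exception,
        # so the assertion on create_log still runs afterwards.
        with self.assertRaises(RegistrationError):
            car.register_car('123123')
        car.create_log.assert_called_once()
```

The key point is that `assert_called_once()` runs after the `with` block, once `assertRaises` has verified the exception propagated.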
---
Expo creates build with wrong version code
I'm trying to deploy a new version of my app on Google Store, but expo creates the build with the same version code.
I incremented the versionCode from 1 to 2.
"android": {
"package": "some-package",
"versionCode": 2,
"adaptiveIcon": {
"foregroundImage": "./assets/app-icon.png",
"backgroundColor": "#FFFFFFFF"
}
}
I also incremented the version property.
"expo": {
"name": "app-name",
"slug": "app-name",
"version": "2.0.0",
...
}
When I run eas build --platform android, the build is successful, but the version and version code are the following:
Profile      SDK Version   Version   Version Code
production   48.0.0        1.0.1     1
The Version should be 2.0.0 and the Version code should be 2.
What should I do to fix this?
I had this problem as well, but try uploading your build to Google and it should display the correct versions. If not, insert the following into your
android/app/build.gradle file:
import groovy.json.JsonSlurper

def version = new File("$rootDir/../package.json")
def packageJson = new JsonSlurper().parseText(version.text)
def actualVersion = packageJson.version
def actualVersionCode = packageJson.versionCode

defaultConfig {
    versionCode actualVersionCode
    versionName actualVersion
}
Note that this code gets its values from package.json, so both version and versionCode need to be present there.
The build summary may still show the old values locally, but once you upload your build to the Google Play Store, the correct version and version code are present!
If you are using Expo, you may find that adding the version numbers in app.json often doesn't seem to work, as in my case. To solve the issue, I let Expo handle that automatically for me. To do that, simply add:
"autoIncrement": true
to the build section inside the eas.json file. For instance, in my own case, that part of the file looked like this:
"build": {
  "development": {
    "developmentClient": true,
    "distribution": "internal"
  },
  "preview": {
    "distribution": "internal",
    "android": {
      "buildType": "app-bundle"
    },
    "autoIncrement": true
  },
  "production": {
    "autoIncrement": true
  }
}
The one under production is automatically added. I added the one under preview to solve the issue. Hope this helps someone.
---
How can I get string as input to Bigrams in nltk
I am new to nltk and I am using python.
I am passing a string as input to bigrams. When I print the items, I get each character as a word.
import nltk
string = "Batman Superman"
bigram = nltk.bigrams(string)
print list(bigram)
[('B','a'),('a','t'),('t','m'),('m','a'),('a','n'),('n',' '),(' ','S'),
('S','u'),('u','p'),('p','e'),('e','r'),('r','m'),('m','a'),('a','n')]
But I want output as [('Batman','Superman')]
Please tell me how to get this output while passing a string (not a list) to the bigrams function.
You have to tokenize your string first. Refer to here
OK, so what is happening here is that the bigrams function is expecting a tokenized version of your corpus, that is, a list of words in order.
When you pass it a string, nltk is doing its best and converts that string into a list of chars, and then produces the bigrams of that list, which happens to be pairs of chars.
If you want to get word-chunk bigrams, you will need to tokenize your input sentence like so:
>>> string = "Batman Superman"
>>> tokenized = string.split(" ")
>>> list(nltk.bigrams(tokenized))
[('Batman', 'Superman')]
But I want [('Batman','Superman')] as output even when the input is a string. Is this possible?
No, because the bigrams function expects a list of tokens to work with. If you want it to be shorter, you could do nltk.bigrams(string.split(" ")).
A tokenized version of the string will represent the same data, just in the proper format for nltk to work with.
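The pairing logic itself is tiny; if nltk isn't handy, the same behavior can be sketched in plain Python to see why a tokenized list is needed (bigrams of a string pair characters, bigrams of a token list pair words):

```python
def bigrams(seq):
    # Pair each element with its successor, like nltk.bigrams does.
    items = list(seq)
    return list(zip(items, items[1:]))


# A token list gives word pairs; a bare string gives character pairs.
print(bigrams("Batman Superman".split(" ")))  # [('Batman', 'Superman')]
print(bigrams("Batman")[:2])                  # [('B', 'a'), ('a', 't')]
```

Since a string is itself a sequence of characters, nltk has no way to know you meant words unless you tokenize first.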
If I convert the string into tokens and pass them to the bigrams function, it will pair up every consecutive pair of words (tokens). If I want to find only the meaningful bigrams, how do I get those?
Please define "meaningful bigram." If you want to ask about semantic parsing, that should be a separate question.
No problem! If this answer has been helpful, please upvote/accept it
---
php , simple_html_dom.php, get selected option
I have a html block like this :
$localurl = '
<select name="cCountry" id="cCountry" style="width:200" tabindex="5">
<option value="251">Ascension Island</option>
<option selected="selected" value="14">Australia</option>
<option value="13">Austria</option>
';
I'm trying to extract the selected value, in this case "Australia", using simple_html_dom ( http://simplehtmldom.sourceforge.net/ ). So far I have built a function, but it is not working:
//extract the selected value
function getValue_selected($value, $localurl)
{
    $html = file_get_html($localurl);
    $i = 0;
    foreach ($html->find('select[option selected="selected"]') as $k => $v) {
        if ($v->name == $value) {
            $shows[$i]['Location'] = $v->value;
        }
    }
    $value = $shows[$i]['Location'];
    $html->clear();
    unset($html);
    return $value;
}
$selected_value = getValue_selected('cCountry', $localurl);
An alternative such as QueryPath would be accepted too.
Suggested third party alternatives to SimpleHtmlDom that actually use DOM instead of String Parsing: phpQuery, Zend_Dom, QueryPath and FluentDom.
@Gordon How can I get it with QueryPath ?
the right answer is:
$html->find('#cCountry',0)->find('option[selected=selected]',0);
My guess is that you're trying to access $shows when it is defined outside of the function. If this is the problem, you either need to put global $shows; at the top of the func, or, better still, modify the signature to pass it in. Something like:
function getValue_selected($value, $localurl, &$shows)
{ /* your function here */ }

getValue_selected($val1, $val2, $shows);
Originally I made the function to get hidden inputs: foreach ($html->find('input[type="hidden"]') as $k => $v) {. I could set the input name as $value (e.g. calling getValue_selected('postingID', $localurl) retrieves the value "98765" from the HTML content). Now I've tried to modify it to work on "select" input types, as you can see above, but it doesn't seem to work.
I still say you're trying to access a variable which is undefined in scope. Have you tried adding a variable $select = array(); to the inside of the function?
---
Android ImageView should be larger than screen
I have an Image that is 1500x2048
I want this to display at its native resolution, I will be handling zooming/scaling and transformation via android's animation functions.
Currently, just using the ImageView by default, it forces my image to fit within the screen size. I have tried expanding my XML elements to layout_width="2048dip", but with no success.
How would this be done properly?
What would count as success? You can use scaleType or android:adjustViewBounds="true".
Try using the scaleType attribute.
---
Opening files within archives from vim
I know that vim can open files within zip or tar{,.gz} archives interactively: open the archive first, then navigate to the correct entry and press enter.
lesspipe (https://github.com/wofr06/lesspipe) (which is a preprocessor for less) provides the additional convenience of allowing one to directly input both the archive name and the name of the file within the archive at the same time (less foo.tgz:file-within-foo) (yes, I know, such a scheme leads, in theory, to issues with files containing colons in their names; in practice this is rare...). I was wondering if a similar ability was available (perhaps as a plugin) for vim.
Clarification: Fundamentally, what I'm asking for should be "relatively" simple (and mostly focused on a usability POV), because most of the archive-handling ability is actually already present in vim: I am just looking for a plugin that will transform, say, vim foo.tgz:file-within to vim foo.tgz followed by selecting file-within in the listing offered by vim.
You want to see the content of a .rar file from vim, which is not possible. But vim can show the content of zip or tar{,.gz} archives.
nope, .tgz is just .tar.gz.
Most Unix variants support FUSE, which allows programs to define new filesystem types. There are several FUSE filesystems that expose archive contents as a directory tree. This way any application can see archive contents as ordinary files transparently (see e.g. here).
For example, with avfs, you get read-only access:
mountavfs
cd ~/.avfs$PWD
vim foo.tgz\#/file-within-foo
With archivemount, you need to mount each archive explicitly, but you get read-write access.
mkdir foo.d
archivemount foo.tgz foo.d
vim foo.d/file-within-foo
fusermount -u foo.d; rmdir foo.d
---
Do I need two replicating NAS devices for a trustworthy storage?
I want to implement a NAS solution for home usage, to keep data I care about. NAS offers RAID against disk failure, but what if the NAS box itself fails? Do I need two replicating NAS devices for trustworthy storage? The amount of data is a few TB and, while that is not much for a hard drive these days, I think that much might be expensive to keep just in the cloud.
An idea that comes to mind is to buy two small single-drive NAS boxes replicating each other. Would you recommend a better solution?
You have to define trustworthy, and what you are willing to pay to get that level of trust. It is entirely possible that two fail (power surge, fire, etc).
Looks like prices are in the range of about $20-30 per month for a terabyte of storage. I can buy a brand new 10 TB NAS at least twice a year for the money it costs to keep the same data in the cloud, and one of the "unrecoverable disasters" is running out of money one day to support your cloud storage (unemployment, retirement, etc.). No, the cloud seems not there yet.
Assuming you're using RAID-1 on your NAS (and if you're not the first answer to this question is instantly obvious) that means you're picking up 10TB HDDs for about $90 each - and there's another answer. Stop buying consumer-grade HDDs for your NAS, and failures will be substantially reduced. After doing RAID-1 with professional-grade HDDs, backups are the answer, as joeqwerty says - whether to the cloud or to some local alternative storage.
NAS offers RAID against disk failure, but what if the NAS box itself
fails? Do I need two replicating NAS devices for a trustworthy
storage?
RAID is not a backup. You should back up the data on your NAS to a trusted storage device if you want to be able to restore your data in the event that the NAS fails outright.
---
Can't parse a String into Date with a device while everywhere else it's ok
Here is my method:
func dateFromString(dateString: String) -> NSDate? {
let dateFormatter = NSDateFormatter()
dateFormatter.timeZone = NSTimeZone(abbreviation: "GMT+01")
dateFormatter.dateFormat = "M/d/yyyy hh:mma"
return dateFormatter.dateFromString(dateString)
}
When I try to parse this String : let date = dateFromString("11/16/2015 5:04am") I get date == nil ONLY on my iPhone 5 8.4. Everywhere else (iPhone 6 9.1 and any simulator version) it's ok.
Any idea how this might be possible?
Seems weird. Can you confirm that this happens on a device with those specs, or just on the Xcode simulator. I'm thinking we could narrow it down to a simulator issue at least..
Use dateFormatter.dateFormat = "MM/dd/yyyy hh:mma"
Yes it doesn't work ONLY on the iphone 5 8.4 while it's fine on the iPhone 6 9.1 and any simulator version/device. Changing the dateFormat like "MM/dd/yyyy hh:mma" doesn't change the problem.
Try to set the locale: dateFormatter.locale = NSLocale(localeIdentifier: "en_US_POSIX")
It fixed it indeed, thanks. But the problem may have been somewhere else. I tried specifying a locale, and it was OK. Then I tried without, and it wasn't OK again. Then I thought, OK, let's try changing the locale on the phone. It was set to the Australian locale; I changed it to a random locale, and then it worked without specifying a locale in the formatter. I put it back to the Australian locale and it was still working. As it's an old iOS version, maybe it was a bug that has been fixed since. Just in case, I'll keep the locale in the formatter.
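The same pitfall exists outside iOS: am/pm parsing is locale-sensitive in most date libraries, which is why pinning the formatter (en_US_POSIX above) is the usual advice. As a rough illustration, here is format-driven parsing of the same string in Python, where %p is the locale-sensitive piece:

```python
from datetime import datetime

s = "11/16/2015 5:04am"
# %I is the 12-hour clock and %p the am/pm marker; %p is the part that
# depends on the active locale, mirroring the NSDateFormatter issue above.
dt = datetime.strptime(s, "%m/%d/%Y %I:%M%p")
print(dt)  # 2015-11-16 05:04:00
```

Under a locale whose am/pm symbols differ, the same parse can fail, which is exactly the "works on one device, nil on another" symptom described here.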
---
Running multiple Applescripts in order in Automator
I am trying to automate this process.
step 1: change system date to a specific date.
step 2: open an application.
step 3: change system date back to normal.
Now on Automator, I have three apple scripts placed like this.
on run {input, parameters}
    tell application "Terminal"
        do script with command "sudo date<PHONE_NUMBER>18"
        activate
    end tell
    delay 1
    tell application "System Events"
        keystroke "mypassword" & return
        delay 3
    end tell
end run
on run {input, parameters}
    tell application "Terminal"
        do script with command "open -a applicationName"
        activate
    end tell
end run
on run {input, parameters}
    tell application "Terminal"
        do script with command "sudo ntpdate -u time.apple.com"
        activate
    end tell
    delay 1
    tell application "System Events"
        keystroke "mypassword" & return
        delay 3
    end tell
end run
The problem is that Automator only runs the first code. I'm not sure how to make it run all the codes in order.
Sorry if this is a stupid question. I am completely new to automator and applescript.
Thank you
I'm not quite sure why you chose to use three separate AppleScripts. You can combine them all into one AppleScript as I have done in this following example. I'm not quite sure why you used the “activate” commands. I don't think they are necessary so I removed those lines of the code. Anyway, this following code should work for you…
tell application "Terminal"
    do script with command "sudo date<PHONE_NUMBER>18"
end tell
delay 1
tell application "System Events"
    keystroke "mypassword" & return
    delay 3
end tell
tell application "Terminal"
    do script with command "open -a applicationName"
    delay 1
    do script with command "sudo ntpdate -u time.apple.com"
end tell
delay 1
tell application "System Events"
    keystroke "mypassword" & return
    delay 3
end tell
Alternately, launching Terminal app to run shell scripts is not necessary all the time as you can run shell scripts in AppleScript by using the “do shell script” command. This following applescript code is your code using only eight lines of code.
do shell script "sudo date<PHONE_NUMBER>18"
tell application "System Events" to keystroke "mypassword" & return
delay 3
do shell script "open -a applicationName"
delay 1
do shell script "sudo ntpdate -u time.apple.com"
delay 1
tell application "System Events" to keystroke "mypassword" & return
If my versions of your code throw errors, it may be necessary to adjust the delay commands or re-insert the activate commands.
If you are hell-bent on using your version of the code and three separate AppleScripts, just remove the on run {input, parameters} and end run lines from each AppleScript, and that should eliminate your problem.
Did you test your code? I haven't, but I can already see that your last code block will not work. do shell script launches a shell process without a terminal, so it can't receive user input. Therefore, keystroking the password will be pointless.
---
Unable to create new session id when we run 2 or more than project
When we run 2-3 projects in the same browser, we get the same session id. I need to know how to create a new session id for each project (including its tabs), so that old projects and new projects have different session ids, without closing the tab or browser.
Is it the same site/application in the same domain?
I am using a js file I created in both projects.
The applications are different, but I have attached the same js file in both.
Can you share your session code?
Sorry, I can't, but I am using sessionStorage and localStorage.
// Save data to sessionStorage
sessionStorage.setItem('key', 'value');
// Get saved data from sessionStorage
var data = sessionStorage.getItem('key');
// Remove saved data from sessionStorage
sessionStorage.removeItem('key');
// Remove all saved data from sessionStorage
sessionStorage.clear();
OK, how do you get the session id from the server? When you get the session id, write it to storage with your app's prefix.
// Get the text field that we're going to track
var field = document.getElementById("field");
// See if we have an autosave value
// (this will only happen if the page is accidentally refreshed)
if (sessionStorage.getItem("autosave")) {
// Restore the contents of the text field
field.value = sessionStorage.getItem("autosave");
}
// Listen for changes in the text field
field.addEventListener("change", function() {
// And save the results into the session storage object
sessionStorage.setItem("autosave", field.value);
});
Hi you can use below code for your aim,
var params = null;
var appName = 'oldApp';
var savingData = {
    param1: 'data1',
    param2: 'data2',
    param3: 'data3'
};
sessionStorage.setItem(appName, JSON.stringify(savingData));

var appName = 'newApp';
var savingData = {
    param1: 'data1-for new',
    param2: 'data2-for new',
    param3: 'data3-for new'
};
sessionStorage.setItem(appName, JSON.stringify(savingData));
params = JSON.parse(sessionStorage.getItem('oldApp'));
console.log(params);
sessionStorage.getItem('newApp');
params = JSON.parse(sessionStorage.getItem('newApp'));
console.log(params);
---
Stopping a class variable from re-initializing when called again?
So here is some example code
class hi(object):
    def __init__(self):
        self.answer = 'hi'

    def change(self):
        self.answer = 'bye'
        print self.answer

    def printer(self):
        print self.answer

class goodbye(object):
    def bye(self):
        hi().change()

goodbye().bye()
hi().printer()
When I run this code I get the output
'bye'
'hi'
I want it to output
'bye'
'bye'
Is there a way to do this while still initializing self.answer to 'hi', or will it always re-initialize to 'hi' no matter what I do afterwards?
goodbye.bye is working on a completely different hi instance to the one that you're calling printer on! And they're instance attributes, not class attributes.
Look up the Singleton pattern
possible duplicate of Is there a simple, elegant way to define Singletons in Python?
You posted your expected output, but it's unclear why you want that behaviour.
The short answer is to make answer a class attribute and change a class method, but it's not clear why you want to do this at all - could you provide a less abstract example?
You are using instance attributes not class attributes as jonrsharpe pointed out.
Try something like this:
class hi(object):
    answer = 'hi'

    def __init__(self):
        pass

    @classmethod
    def change(cls):
        cls.answer = 'bye'
        print cls.answer

    def printer(self):
        print self.answer

class goodbye(object):
    def bye(self):
        hi.change()

goodbye().bye()
hi().printer()
Here answer is now a class attribute and change() is a classmethod
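For Python 3 readers, the same fix can be sketched with print as a function; the behavior is otherwise identical, since change mutates the class attribute that every instance reads:

```python
class Hi(object):
    answer = 'hi'  # class attribute, shared by all instances

    @classmethod
    def change(cls):
        # Rebinding on the class (not on an instance) is what makes
        # the change visible to every Hi() created afterwards.
        cls.answer = 'bye'


class Goodbye(object):
    def bye(self):
        Hi.change()


Goodbye().bye()
print(Hi().answer)  # bye
```

Creating a fresh Hi() no longer resets the value, because __init__ never touches answer; only the class attribute holds it.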
---
Search table using textbox and drop-down list to filter database (Real-Time Search)
I have a problem filtering the database. I have this code, but it doesn't show the filtered results after I click the Search submit button.
<form method="POST" action="client.php">
<div id="Search" style="display:none">
<h4>Search Client</h4>
<table>
<tr>
<td>
<input type="text" name="text" placeholder="Keyword" />
</td>
<td>
   
</td>
<td>
<select id="search_by" name="search_by">
<option value="Reference">Reference</option>
<option value="Lastname">Lastname</option>
<option value="Firstname">Firstname</option>
<option value="Province">Province</option>
<option value="Request">Request</option>
<option value="Status">Status</option>
</select>
</td>
<td>
   
</td>
<td>
<input type="submit" name="btn_search" value="Search">
</td>
</tr>
</table>
<br>
<?php
$res = mysqli_query($con, "SELECT * FROM client_info");
echo "<table style='font-size:12px;border-spacing:5px; background-color:white; width:100%;'>";
echo "<tr>";
echo "<th> Reference No </th>";
echo "<th> Lastname </th>";
echo "<th> Firstname </th>";
echo "<th> Middlename </th>";
echo "<th> Street </th>";
echo "<th> Brgy </th>";
echo "<th> Town </th>";
echo "<th> Prov </th>";
echo "<th> Mobile </th>";
echo "<th> Email </th>";
echo "<th> Event </th>";
echo "<th> Venue </th>";
echo "<th> No. of Attendants </th>";
echo "<th> Request </th>";
echo "<th> Payment Amount </th>";
echo "<th> Payment Status </th>";
echo "</tr>";
while ($row=mysqli_fetch_array($res)) {
echo "<tr>";
echo "<td>". $row["ref_no"] . "</td>";
echo "<td>". $row["last_name"] . "</td>";
echo "<td>". $row["first_name"] . "</td>";
echo "<td>". $row["middle_name"] . "</td>";
echo "<td><center>". $row["street"] . "</center></td>";
echo "<td><center>". $row["brgy"] . "</center></td>";
echo "<td><center>". $row["town"] . "</center></td>";
echo "<td><center>". $row["prov"] . "</center></td>";
echo "<td><center>". $row["mobile"] . "</center></td>";
echo "<td><center>". $row["email_add"] . "</center></td>";
echo "<td><center>". $row["event"] . "</center></td>";
echo "<td><center>". $row["venue"] . "</center></td>";
echo "<td><center>". $row["number_attendants"] . "</center></td>";
echo "<td><center>". $row["request_res"] . "</center></td>";
echo "<td><center>". $row["payment_amount"] . "</center></td>";
echo "<td><center>". $row["payment_res"] . "</center></td>";
echo "</tr>";
}
echo "</table>";
?>
</form>
<?php
if (isset($_POST['btn_search'])) {
if ($_POST['search_by'] == 'Reference') {
$res = mysqli_query($con, "SELECT * FROM client_info WHERE ref_no LIKE '%".$_POST['text']."%'");
echo "<table style='font-size:12px;border-spacing:5px; background-color:white; width:100%;'>";
echo "<tr>";
echo "<th> Reference No </th>";
echo "<th> Lastname </th>";
echo "<th> Firstname </th>";
echo "<th> Middlename </th>";
echo "<th> Street </th>";
echo "<th> Brgy </th>";
echo "<th> Town </th>";
echo "<th> Prov </th>";
echo "<th> Mobile </th>";
echo "<th> Email </th>";
echo "<th> Event </th>";
echo "<th> Venue </th>";
echo "<th> No. of Attendants </th>";
echo "<th> Request </th>";
echo "<th> Payment Amount </th>";
echo "<th> Payment Status </th>";
echo "</tr>";
while ($row=mysqli_fetch_array($res)) {
echo "<tr>";
echo "<td>". $row["ref_no"] . "</td>";
echo "<td>". $row["last_name"] . "</td>";
echo "<td>". $row["first_name"] . "</td>";
echo "<td>". $row["middle_name"] . "</td>";
echo "<td><center>". $row["street"] . "</center></td>";
echo "<td><center>". $row["brgy"] . "</center></td>";
echo "<td><center>". $row["town"] . "</center></td>";
echo "<td><center>". $row["prov"] . "</center></td>";
echo "<td><center>". $row["mobile"] . "</center></td>";
echo "<td><center>". $row["email_add"] . "</center></td>";
echo "<td><center>". $row["event"] . "</center></td>";
echo "<td><center>". $row["venue"] . "</center></td>";
echo "<td><center>". $row["number_attendants"] . "</center></td>";
echo "<td><center>". $row["request_res"] . "</center></td>";
echo "<td><center>". $row["payment_amount"] . "</center></td>";
echo "<td><center>". $row["payment_res"] . "</center></td>";
echo "</tr>";
}
echo "</table>";
}
}
?>
</div>
Why did you repeat your sentence 3x?
triple X effect ? :p
Xander Cage is here!
@NathanRobb probably wouldn't let them post that much code without some more text to explain the problem.
@Don'tPanic I imagine there's a reason for that though lol
@NathanRobb Yeah, definitely. In this case, it looks like the problem is simple enough, though, even though there's more code than necessary.
@NathanRobb just added an answer. (At least it's what I think the problem is.)
I think it does show the filtered results. It just looks like it doesn't because you're outputting the non-filtered results every time, then outputting the filtered results if the search form has been submitted. You just need to run a different query depending on whether or not the search form was submitted. Something like this.
// search form
if (isset($_POST['btn_search'])) {
if ($_POST['search_by'] == 'Reference') {
$res = mysqli_query($con, "SELECT * FROM client_info WHERE ref_no LIKE '%".$_POST['text']."%'");
}
} else {
$res = mysqli_query($con, "SELECT * FROM client_info");
}
// display your query results
Also, your query is vulnerable to SQL injection. That's beside the point of the problem here, but you should look into using prepared statements instead of concatenating post values into your SQL.
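In PHP that means mysqli prepared statements (prepare/bind_param). Since those need a live MySQL server, here is the same placeholder idea sketched with Python's sqlite3 and a hypothetical two-column version of client_info:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client_info (ref_no TEXT, last_name TEXT)")
conn.executemany(
    "INSERT INTO client_info VALUES (?, ?)",
    [("REF-001", "Smith"), ("REF-002", "Jones")],
)

keyword = "001"  # stands in for $_POST['text']
# The ? placeholder keeps user input out of the SQL string entirely,
# which is what blocks the injection noted above.
rows = conn.execute(
    "SELECT ref_no, last_name FROM client_info WHERE ref_no LIKE ?",
    ("%" + keyword + "%",),
).fetchall()
print(rows)  # [('REF-001', 'Smith')]
```

The pattern is the same in any driver: the query text is fixed, and the user-supplied value travels separately as a bound parameter.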
---
Git failed to check out font file
I was trying to clone a web application project to my Windows 7 machine.
However I got some error messages:
error: unable to create file assets/cashback/fonts/Helvetica Neue LT Std\HelveticaNeueLTStd-Bd.otf (No such file or directory)
error: unable to create file assets/cashback/fonts/Helvetica Neue LT Std\HelveticaNeueLTStd-Blk.otf (No such file or directory)
error: unable to create file assets/cashback/fonts/Helvetica Neue LT Std\HelveticaNeueLTStd-Hv.otf (No such file or directory)
error: unable to create file assets/cashback/fonts/Helvetica Neue LT Std\HelveticaNeueLTStd-Lt.otf (No such file or directory)
error: unable to create file assets/cashback/fonts/Helvetica Neue LT Std\HelveticaNeueLTStd-Md.otf (No such file or directory)
error: unable to create file assets/cashback/fonts/Helvetica Neue LT Std\HelveticaNeueLTStd-Th.otf (No such file or directory)
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry the checkout with 'git checkout -f HEAD'
This works fine when I clone it on a Linux machine. Even after I downloaded and installed the font, this still happens. I am using TortoiseGit 1.8.12.
Has anyone seen anything similar?
Is there any special character in the file name originally? Is case-sensitive naming expected? Linux and Windows take a similar approach to the file system, but with a few differences.
The mixture of slashes and backslashes is suspicious. It may be trying to create Helvetica Neue LT Std\HelveticaNeueLTStd-Bd.otf as a filename, which unix can do but Windows can't.
Definitely the backslashes. I just tried that in a test repo on a Linux box, and Windows git produces the same message that it cannot create "a\b.txt" for me.
Ahh, yep. I did not see that and thought it was related to the font somehow... thanks for pointing that out.
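The diagnosis is easy to reproduce: on POSIX systems a backslash is an ordinary filename character, so the repository really contains single files with a backslash in the name, which Windows cannot create. A quick Python check (run on Linux/macOS):

```python
import os
import tempfile

d = tempfile.mkdtemp()
# One single file whose name contains a literal backslash -- legal on
# POSIX, impossible on Windows (where \ is the path separator).
name = "Helvetica Neue LT Std\\HelveticaNeueLTStd-Bd.otf"
open(os.path.join(d, name), "w").close()
print(os.listdir(d))  # ['Helvetica Neue LT Std\\HelveticaNeueLTStd-Bd.otf']
```

That is why the clone succeeds on Linux but checkout fails on Windows: git faithfully tries to create the backslash-named file and the Windows filesystem refuses.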
---
listing of method names and signature defined in a nodejs module
Let's say in my module X1, I have the following functions. Similarly, in other modules, other functions are defined. I want to list all of them.
module.exports.getX1Details
module.exports.getX1Availability
module.exports.getX1Price
What is the best way to list all the functions defined in a module ?
For example
(X1.getX1Details(prod_id), X1.getX1Availability(prod_id) and X1.getX1Price(prod_id)
(X2.XYZ1(id), X2.XYZ2(id) and X2.XYZ3(id) etc ...)
Why can't you just simply list down all the keys like this? var b = require('./b'); for (var key in b) { console.log(key); } If this is what you mean.
When you don't know the module name, how will you do it? There can be many modules sitting inside a folder. Second, how do I get the parameters that are passed to the functions? I don't know if it is possible.
---
Do I need an International Driving Permit as German tourist in New Jersey
According to both the Hertz Rental Car agency at Newark Airport and a call to the NJ MVC I do need an international driving permit in addition to my German license when driving in New Jersey. However, according to two local tourist agencies in Germany and most importantly the website of the NJ MVC, http://www.state.nj.us/mvc/Licenses/Visitors.htm , I do not need but am only advised to have one.
Does anyone know where to get reliable information on this? Maybe even a listing for all states, so when I'm driving around on trips I don't have to go through this for each of the 20 or so east coast states again?
Surely if you're going to be driving in several states it's less burdensome just to get the IDP than to do all that research. Also, the police not requiring the IDP won't be very helpful if the rental agency does require it.
@phoog I am getting married in the states soon and some of my guests won't be visiting their home countries before arriving in the US. Also, it would only be me doing the research once which could save a lot of my guests that could still apply for one the burden to do so. Finally, some of those that already did rent a car inquired with their rental car agencies and they did say the IDP was not required.
You don't NEED it. You are advised to have it; therefore, it is basically the prerogative of the rental agency whether to rent to you without it or not. As a matter of law, an ignorant police officer can cite you for not having it; however, it will be thrown out in court if you contest it.
USA.gov is your online guide to government information and services.
Foreign Nationals Driving in the United States
People who drive in the U.S. must have a valid driver's license. Some states require an International Driving Permit from foreign nationals, in addition to a valid license from your own country. Check with the motor vehicle department of each state you will drive in for its requirements.
This is reliable information. It varies on a state by state basis.
@Sheik I found that quote myself. But then I couldn't find out how or where to "check with the motor vehicle department of each state". That is, for New Jersey I did get information, but contradicting information, as described in the question.
The answer is no.
My family, friends and I have rented cars from Hertz, Alamo, National, and Enterprise and never needed the international driving permit.
By policy of rental car companies, you're required to have it.
Practically, you don't need it ever. All major car rental companies in the US are used to handling German driver's licenses, even the old gray paper ones.
Which part? What about elaborating a bit?
---
How to build a project under the src directory of Qt?
I found a .pro file, 5.12.1\Src\qtbase\src\plugins\platforms\windows\windows.pro, and opened it in Qt Creator. After configuring a MinGW kit, I tried to build the project. But I got this error:
error: vulkan/vulkan.h: No such file or directory
#include <vulkan/vulkan.h>
Do I need to configure something before building the project? I do not want Vulkan support; I just want it to compile successfully. I remember that to build the whole Qt source, I need to run a command like "configure xxxxx" in the src directory. Now I do not want to build the whole of Qt, just this project. There is a line in windows.pro that seems related to this problem.
qtConfig(vulkan): QT += vulkan_support-private
What should I do to remove the need for the Vulkan stuff in order to build it successfully?
Yes, you will need to configure things. I suggest you read up on Building Qt 5 from source to understand what is involved.
| common-pile/stackexchange_filtered |
Query Lookup Relationship With SOQL Not Working
I have a Salesforce custom object named Loan__c which has a Lookup field called LoanLead__c, and I'm trying to query that relationship with SOQL, but I'm getting an error when I run the query SELECT Id, (SELECT Id FROM LoanLeads__r) FROM Loan__c on the object in the developer console query editor.
I've confirmed the Object API name of the related LoanLeads table is indeed LoanLead__c and I have tried modifying the query to select from the LoanLead__r table and a few others, but I get the same error.
The error I'm getting is:
SELECT Id, (SELECT Id FROM LoanLeads__r) FROM Loan__c
^
ERROR at Row:1:Column:28
Didn't understand relationship 'LoanLeads__r' in FROM part of query call. If you are attempting to use a custom relationship, be sure to append the '__r' after the custom relationship name. Please reference your WSDL or the describe call for the appropriate names.
This is what the field settings are for the lookup field:
I can't figure out what I'm doing wrong - everything I've read says this should work and it does work with other lookup fields on this same table. Any help would be greatly appreciated.
If Loan__c has a lookup to LoanLead__c, then LoanLead__c is the parent, meaning the query would be Select Id, (select Id from Loans__r) from LoanLead__c. Be sure the relationship name is Loans for this to work.
Yeah, that is what ended up working. SELECT Id, (SELECT Id FROM Loans__r) FROM LoanLead__c was what I needed.
The Child Relationship Name is Loans, so it should be Loans__r, not LoanLeads__r.
Well, not only that but I didn't realize the relationship was from LoanLead__c to Loans__c instead of the other way around.
What ended up working is SELECT Id, (SELECT Id FROM Loans__r) FROM LoanLead__c instead.
As Peter Noges suggested, the parent table was actually the LoanLead__c table and not Loan__c so what ended up working for me was SELECT Id, (SELECT Id FROM Loans__r) FROM LoanLead__c and also SELECT Id, LoanLead__r.Id FROM Loan__c.
When we query a child object in a sub-query, we use the Child Relationship Name. If the relationship is custom, we need to append the suffix __r.
Custom relationship:
Select Id, Name, (Select Id, Name From Contacts__r) from Account
Standard relationship:
Select Id, Name, (Select Id, Name From Contacts) from Account
When we query a parent object, we use the field name with the suffix __r instead of __c:
Select Id, Name, Account.Name, Custom_Account__r.Name From Contact
I hope this will help you.
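As the error message hints, the describe call can be used to discover the valid child relationship names. A sketch in anonymous Apex (only the object name is taken from the question):

```
// Prints every child relationship name available on LoanLead__c;
// one of these (with __r appended if custom) is the token usable
// in the sub-query's FROM clause.
for (Schema.ChildRelationship cr :
        LoanLead__c.SObjectType.getDescribe().getChildRelationships()) {
    System.debug(cr.getRelationshipName());
}
```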
| common-pile/stackexchange_filtered |
Insert hero image into a section
I would like to append a hero image into an HTML section. But the problem is this image doesn't take the whole page as intended when it is wrapped into a section.
For example:
<div id="home">
<div class="text-vcenter">
<h1>Reunion Qualité</h1>
<h2>Meeting Manager</h2>
<svg height="10" width="605">
<line x1="0" y1="0" x2="611" y2="0" style="stroke:rgb(255,255,255);stroke-width:3" />
</svg>
<h3>Namur-Belgium</h3>
</div>
</div>
Will work, but
<div>
<div id="home">
<div class="text-vcenter">
<h1>Reunion Qualité</h1>
<h2>Meeting Manager</h2>
<svg height="10" width="605">
<line x1="0" y1="0" x2="611" y2="0" style="stroke:rgb(255,255,255);stroke-width:3" />
</svg>
<h3>Namur-Belgium</h3>
</div>
</div>
</div>
Won't. Why? How can I fix this?
Here is the linked CSS for the hero image:
html, body {
height: 100%;
width: 100%;
font-family: 'Quicksand', sans-serif;
}
#home {
background: url(../img/qualite_01.jpg) no-repeat center center fixed;
color: #f6f8f8;
display: table;
height: 100%;
position: relative;
width: 100%;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
}
The reason the second example isn't working is that you haven't specified the height of the first div (the div that wraps the #home div). So just add the following rule to your CSS:
body div {
height: 100%;
}
A percentage height defines the height as a percentage of containing block's height (see MDN article about height and MDN article about percentage). #home's height was set to 100%, but its containing block didn't have a height specified. In the first example, on the other hand, #home's containing element is body, which has a height of 100% specified.
You are right, this is the reason why. Now I'm having another problem. The reason why I wanted to wrap it into another div was to be able to append it with jQuery when I was on the wanted page. If I set the height of the parent div (the target of the jQuery function) to 100%, then I'll still have a 100% white blank space on every page :/.
If you post a codepen with your JavaScript, HTML, and CSS I could help you more, but without that I wouldn't be able to provide any recommendations on what you could do.
Yes, probably because of the image he can't find due to the relative path.
I think it's because of the outer <div> tag you are wrapping it all in. Naturally, a <div> won't take up the entire width of the body. An example snippet is below. With the border on it, we can see that it does not take up the full width of the body. Why is it needed to wrap the working solution in an extra <div>?
body {
background-color: red;
}
div {
border: 5px solid blue;
}
<html>
<body>
<div>
Some Text
</div>
</body>
</html>
Second snippet to show changes plus adding margin and padding equal to 0, which removes whitespace around the page
html, body {
height: 100%;
width: 100%;
font-family: 'Quicksand', sans-serif;
margin: 0;
padding: 0;
}
#outerDiv {
height: 100%;
width: 100%;
border: 5px solid blue;
}
#home {
background: url(../img/qualite_01.jpg) no-repeat center center fixed;
color: #f6f8f8;
display: table;
height: 100%;
position: relative;
width: 100%;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
}
<div id="outerDiv">
<div id="home">
<div class="text-vcenter">
<h1>Reunion Qualité</h1>
<h2>Meeting Manager</h2>
<svg height="10" width="605">
<line x1="0" y1="0" x2="611" y2="0" style="stroke:rgb(255,255,255);stroke-width:3" />
</svg>
<h3>Namur-Belgium</h3>
</div>
</div>
</div>
I want to wrap it into an extra div because I need to append it with jQuery only when I'm on the home page.
I would suggest giving that div a class or an ID and adding CSS like .outerDiv {height:100%; width:100%}, just to match what you have declared for home and body.
Yes, it works by doing that, but then I have a 100% white space on every page :/.
Add this - {margin:0; padding: 0;} to your html, body
It doesn't change anything :/
I added a second snippet that shows the white space around the page is gone now. Is that what you mean by 100% white space?
In fact, the reason why I want to wrap the hero image HTML code into another div is to be able to append it with jQuery only when I'm on the home page. When I'm on another page, nothing will be appended into that div, and so nothing will show on screen for that section. If I set the height of that other div to 100%, then I'll have a 100% white blank space on every page.
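One way out of that, sketched here under the assumption that a class can be toggled from the same jQuery code that appends the hero (the has-hero class name is hypothetical): keep the wrapper collapsed by default and give it its height only when the hero is actually present.

```css
/* wrapper stays collapsed on pages without the hero */
#outerDiv { height: 0; }
/* the script that appends the hero also adds this class */
#outerDiv.has-hero { height: 100%; width: 100%; }
```

That way the other pages see a zero-height div instead of a 100% blank space.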
| common-pile/stackexchange_filtered |
plugin:vite:import-analysis - Failed to parse source for import analysis because the content contains invalid JS syntax. - Vue 3
I've updated my project from Vite 2.x to Vite 3.0.2 and suddenly I got this error:
[plugin:vite:import-analysis] Failed to parse source for import
analysis because the content contains invalid JS syntax. If you are
using JSX, make sure to name the file with the .jsx or .tsx extension.
/Volumes/Disk/Web/wce-system/src/i18n.js:51:20
There's nothing wrong in the i18n.js file, as it was working fine with Vite 2.x, but I'm putting the code here just in case you need it:
import { nextTick } from "vue"
import { createI18n } from "vue-i18n"
import axios from "axios"
import tr from<EMAIL_ADDRESS>
import en from<EMAIL_ADDRESS>
export const SUPPORT_LOCALES = ["tr", "en"]
export function setupI18n(options = { locale: "tr" }) {
const i18n = createI18n(options)
setI18nLanguage(i18n, options.locale)
return i18n
}
export function setI18nLanguage(i18n, locale, url) {
if (i18n.mode === "legacy") {
i18n.global.locale = locale
} else {
i18n.global.locale.value = locale
}
axios.defaults.headers.common["Accept-Language"] = locale
document.querySelector("html").setAttribute("lang", locale)
}
export async function loadLocaleMessages(i18n, locale) {
const messages = await import(
/* webpackChunkName: "locale-[request]" */ `./locales/${locale}.json`
)
i18n.global.setLocaleMessage(locale, messages.default)
return nextTick()
}
const i18n = createI18n({
legacy: false,
locale: "tr",
fallbackLocale: "tr",
globalInjection: true,
messages: {
tr,
en,
},
})
export default i18n
I got this error because I moved my index.html file into a subfolder. It HAS to be in the root, even though the documentation says you can build a subdirectory. I have found that to be false.
So I figured it out. This line:
const messages = await import(
/* webpackChunkName: "locale-[request]" */ `./locales/${locale}.json`
)
should have been:
const messages = await import(`./locales/${locale}.json`)
I can't explain why it is so, but I'm leaving links below about the issue.
Vite dynamic imports
This may help for further investigation
I had the same error and fixed it like this:
Change the file extension from .js to .jsx.
During migration to Vue3, my issue was not including the Vite @vite-js/plugin-vue plugin in vite.config.js. This is needed for Vite to support Vue Single-File Components (SFCs).
// vite.config.js
import vue from '@vitejs/plugin-vue'
export default {
plugins: [
vue()
],
}
@vitejs/plugin-vue repo
Had the same; my vite.config.js was in another location. I found out using vite build --debug, which says something like "no configuration".
This one solved it for me. I'm using a browser extension framework that itself uses vite config, but Storybook wouldn't be able to use it. So I added this example here.
I found this misleading error was caused by a simple syntax error in a JavaScript file that is part of a TypeScript React app:
import { Foo } from 'baz';
export class MyClass{
static myFunc() {
return ...;
//} <- missing curly brace here
}
Adding the missing brace fixed the issue.
Thank you for mentioning this. Sometimes it is overlooking a bracket, and here I was thinking it was a serious, in depth error. Cheers to you!
I got this particular error after migrating from the Vue CLI to Vite:
"Internal server error: Failed to parse source for import analysis because the content contains invalid JS syntax. Install @vitejs/plugin-vue to handle .vue files."
I googled and found a lot of remarks and some solutions, but none applied in my case.
Then, after a long time, I looked in my IDE and it displayed a hint, "remove the unnecessary parentheses", and after removing the unnecessary parentheses all was fine.
In my code I had written
export default ({ .. some code });
After removing the (), the error disappeared.
export default { .. some code };
Have a nice day
Make sure all of the following files are in your project root:
svelte.config.js
tsconfig.json
tsconfig.node.json
vite.config.ts
For anyone getting this exact same error with SvelteKit (+server.page for example), the issue can be a syntax problem elsewhere in the code, so do as above and strip the code back, building it up until you can identify it.
So the problem is that there is a problem somewhere? Not sure how this is helpful
Is this 2023? If your framework requires you to do this in order to spot bugs then there is an issue with the framework design.
I got the same when I used .tsx instead of .jsx lol. So I fixed the extension
Install vue with npm install
Then run your build using npm run build
| common-pile/stackexchange_filtered |
Plotting probability on horizontal greyscale graph from single variable
I want to create a horizontal stacked graph like the one below.
Arr1, Arr2, Arr3, and Arr4 are in the range 0-1, and each has a length of about 50000.
Short Example of length 10:
Array  1    2    3    4    5    6    7    8    9    10
Arr1   0.1  0.1  0.1  0.2  0.2  0.2  0.7  0.7  0.4  0.7
Arr2   0.6  0.6  0.6  0.1  0.1  0.1  0.1  0.1  0.5  0.1
Arr3   0.3  0.3  0.3  0.7  0.7  0.7  0.2  0.2  0.1  0.2
Arr4   0.4  0.6  0.7  0.2  0.1  0.3  0.4  0.5  0.3  0.9
B      a    a    a    b    b    b    a    a    a    b
How can I plot each of Arr1, Arr2, Arr3, and Arr4 from light to dark based on their values in a horizontal graph (or in greyscale, with each value a different shade of grey)?
The values closer to 0 will be light in color and values closer to 1 will be dark. 0 will be white and 1 will be black when using greyscale.
In the picture, for Arr1: low values will be light yellow and high values will be dark yellow (all values are between 0 and 1).
The last bar is different. Array B is a list of a and b; how can I plot a different color for each occurrence of a and b?
In the picture, for B: occurrences of a will be colored green and occurrences of b will be colored pink.
I would not worry much about the layout of the outer box; I would just use ggarrange to put all graphs in a single plot.
What I tried for Array B
B=c(rep("a",1000),rep("b",1000),rep("a",1000))
df=data.frame(B)
df$ID <- seq.int(nrow(df))
df$char=as.numeric(factor(B))
ggplot(df, aes(x=ID,y = char, color = cut(char, breaks = c(0, 1, 2))))+geom_col()+coord_cartesian(ylim=c(0,1))+
scale_colour_manual(values = c('red','green'),limits=c('(0,1]', '(1,2]'))+
guides(color = guide_legend(title = 'letter'))
But legends are not perfect
The array B is important; I cannot get the x-axis right or a proper scale with
melt(id.vars = "Array") %>%
mutate(variable = str_extract(variable, "[0-9]+")) %>%
mutate(value = case_when(
value == "a" ~ -1,
value == "b" ~ 2,
TRUE ~ as.numeric(value)
)) %>%
mutate(zz = 1) %>%
ggplot(aes(x = Array, y = variable, group = Array, fill = value)) +
geom_col() + coord_flip()
I guess the variable is not set correctly.
First, with your sample data including Array etc., you may try
df %>%
melt(id.vars = "Array") %>%
mutate(variable = str_extract(variable, "[0-9]+")) %>%
mutate(value = case_when(
value == "a" ~ -1,
value == "b" ~ 2,
TRUE ~ as.numeric(value)
)) %>%
mutate(zz = 1) %>%
ggplot(aes(x = Array, y = variable, group = Array, fill = value)) +
geom_col() + coord_flip()
Second, you would do better to use fill instead of color; that's why your legend looks weird. Also, as you use fill, you should use scale_fill_manual this time. For example,
ggplot(df, aes(x=ID,y = char, fill = cut(char, breaks = c(0, 1, 2))))+geom_col()+
coord_cartesian(ylim=c(0,1))+
scale_fill_manual(values = c('red','green'),limits=c('(0,1]', '(1,2]'))+
guides(color = guide_legend(title = 'letter'))
You may try:
library(RColorBrewer)
df2 <- df %>%
melt(id.vars = "Array") %>%
mutate(variable = str_extract(variable, "[0-9]+")) %>%
filter(value %in% c("a","b")) %>%
mutate(col = value)
df3 <- df %>%
melt(id.vars = "Array") %>%
mutate(variable = str_extract(variable, "[0-9]+")) %>%
filter(!value %in% c("a","b")) %>%
mutate(col = cut(as.numeric(value), 6))
plot.df <- rbind(df2, df3)
yourScale <- c(brewer.pal(6, "Greys"), "green", "pink")
plot.df %>%
mutate(variable = as.numeric(variable)) %>%
ggplot(aes(x = Array, y = variable, group = Array, fill = col)) +
geom_col() + coord_flip() +
scale_colour_manual("col", values = yourScale)
How can I get a different legend for Array B, with the corresponding letter for each color? It's fine if it is a different graph. Also, the x axis looks different.
@RahilVora I forgot to change variable to numeric. It will make the x axis look the same. I'll reply again after I give you the legend for B.
@RahilVora I added new code, but I think it's not quite what you wanted. It was pretty hard to build a color scale mixing continuous and discrete values.
Hi @Park, the x axis is still messed up even after converting variable to numeric. As you can see for the B variable in blue shades, it does not have the same size for the same number of a and b (3 'a' and 3 'b'). I am okay working with the first code with a single df.
| common-pile/stackexchange_filtered |
search and delete a shortcut
The project I am working on has to run on Windows startup, and I found the code for that. But what if the user doesn't want it to start on startup? Do I have to tell the user to search for the shortcut and delete it, or can I simply make a button for that option? I know how to delete a file like this:
if(File.Exists(@"C:\Users\Maged\Desktop\remainder\WindowsFormsApplication1\bin\Debug\WindowsFormsApplication1.exe"))
{
File.Delete(@"C:\Users\Maged\Desktop\remainder\WindowsFormsApplication1\bin\Debug\WindowsFormsApplication1.exe");
}
I need to make the program search for MyStartupShortcut.lnk and delete it if it exists.
Here is my code for creating the shortcut:
void CreateStartupShortcut()
{
string startupFolder = Environment.GetFolderPath(Environment.SpecialFolder.Startup);
WshShell shell = new WshShell();
string shortcutAddress = startupFolder + @"\MyStartupShortcut.lnk";
IWshShortcut shortcut = (IWshShortcut)shell.CreateShortcut(shortcutAddress);
shortcut.Description = "A startup shortcut. If you delete this shortcut from your computer, LaunchOnStartup.exe will not launch on Windows Startup";
shortcut.WorkingDirectory = Application.StartupPath;
shortcut.TargetPath = Application.ExecutablePath;
shortcut.Save();
}
I need to delete this file no matter what the directory is. Any help?
As you know how the shortcut was created, and you show code for that, you know where the shortcut is located - see the shortcutAddress in the code. Hence deleting it should be easy. The code shown at the top of the question deletes an executable, not a shortcut, and so seems irrelevant to the problem of deleting the shortcut.
I know where the shortcut is located, but when the user runs it, the location will change based on their machine, so I need to search for and delete that shortcut for them.
I think I get your point about the shortcutAddress location, so how do I perform the delete using that location?
Will if(File.Exists(shortcutAddress)) { File.Delete(shortcutAddress); } work? That is based on your own code from the top of the question.
Using shortcutAddress computed as in the CreateStartupShortcut method, the deletion of the shortcut should be achieved with the following code. Rebuilding the path from the Startup special folder means it works no matter where that folder is on the user's machine:
string startupFolder = Environment.GetFolderPath(Environment.SpecialFolder.Startup);
string shortcutAddress = startupFolder + @"\MyStartupShortcut.lnk";
if ( File.Exists(shortcutAddress) )
{
    File.Delete(shortcutAddress);
}
| common-pile/stackexchange_filtered |
Unnecessary extra margins on a multiple traces plotly chart
I am building a Plotly chart with two traces, and for some reason, by default, I am getting some extra space/margin between my data bars and the Y axis. Here:
How to get rid of that extra space?
If I keep only one trace, it is just fine, the bars start right from the edge. Like this
The script to reproduce the chart with two traces
import random
import plotly.graph_objects as go
N = 45
my_x = ['#' + str(random.randint(50, 999)) for _ in range(N)]
my_y1 = [random.randint(5, 20) for _ in range(N)]
my_y2 = [random.randint(10, 30) for _ in range(N)]
trace1 = go.Bar(name='Food',
x=my_x,
y=my_y1)
trace2 = go.Scatter(name='Water',
x=my_x,
y=my_y2,
mode='markers',
marker_size=12)
fig = go.Figure()
fig.add_trace(trace1)
fig.add_trace(trace2)
fig.show()
I will appreciate any thoughts!
There may be other settings, but you can configure it as below. The last value is set to -2, which is a value I found manually. In the case of scatter plots, the first and last margins are set automatically, which may have an effect: fig.update_layout(xaxis_range=[-1, len(my_x)-2])
The scatter on its own will add the space.
Make the x axis continuous, define the range and ticks, and the space is removed:
import random
import plotly.graph_objects as go
import numpy as np
N = 45
my_x = ['#' + str(random.randint(50, 999)) for _ in range(N)]
my_y1 = [random.randint(5, 20) for _ in range(N)]
my_y2 = [random.randint(10, 30) for _ in range(N)]
trace1 = go.Bar(name='Food',
x=np.linspace(0,N,N),
y=my_y1)
trace2 = go.Scatter(name='Water',
x=np.linspace(0,N,N),
y=my_y2,
mode='markers',
marker_size=12)
fig = go.Figure()
fig.add_trace(trace1)
fig.add_trace(trace2)
fig.update_layout(xaxis={"range":[-.5,N+.5]})
fig.update_layout(xaxis={"tickmode":"array","tickvals":np.linspace(0,N,N), "ticktext":my_x})
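The idea behind this fix can be sketched without Plotly (the label values here are illustrative, not the question's data): give each bar an explicit numeric position and clamp the axis range to half a slot on each side, so the automatic categorical padding disappears.

```python
# Explicit numeric positions for categorical labels, plus a range that
# leaves exactly half a slot of padding on each side of the data.
labels = ['#101', '#202', '#303', '#404']
positions = list(range(len(labels)))    # used as the x values of the traces
axis_range = (-0.5, len(labels) - 0.5)  # passed as the xaxis "range"
tick_text = labels                      # shown via tickmode="array"
```

positions and tick_text then play the roles that np.linspace(...) and my_x play in the answer above.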
Thank you, Rob! That works perfect!
| common-pile/stackexchange_filtered |
Order of placing clauses in a two-clause sentence
Which of the following is correct:
If you so desire, you may download the content.
You may download the content, if you so desire.
Apparently, they appear to be the same, but on critically examining the sentences, I find an issue with the first one. The issue is that the first clause refers to a desire with the words If you so desire, though the desire has not been expressed by then; it is only expressed in the subsequent clause. So, logically, the second clause (you may download the content) should come first, and only after the desire it expresses has been mentioned should the first clause (If you so desire) follow.
The second sentence follows this order, so it is only the second sentence which appears correct to me. Am I correct?
As a native speaker who has little idea what Kettle just said: yeah, both work. "So" doesn't imply that which just happened or was stated, it just refers to a proximate thing, in past or future, before or after.
You're attempting to apply logic to English and it simply doesn't work that way.
Yes, this "so" is an anaphor and it demands an antecedent. Normally, an antecedent precedes the anaphor -- which is the reason we call it an antecedent. However, there is a case when the so-called antecedent follows the anaphor in sentence structure. Your first example is a good example of that case.
The "so" appears in a subordinate clause. Its antecedent appears in the main clause.
The subordinate nature of "if you so desire" sets up the expectation that a main clause -- a clause that comes before it in importance if not in word order -- will contain the antecedent. In the main clause of your examples, "[to] download the content" is the nature of the desire. That is the antecedent of "so".
We could paraphrase the example by including the antecedent in the subordinate clause and leaving the anaphor in the main clause, if the subordinate clause occurs first in the word order:
If you desire to download the content, you may [do so].
The anaphor must come second. It can be second in word order or second in clause importance, but it must be at least one of those. The anaphor fails if it comes first in both those considerations:
You may [do so] if you desire to download the content.
In this sentence, the "[to] do so" (or even just the bare "may") does not refer to "to download the content". It has to refer to something else, most likely to something from the prior sentence.
Both of your original examples are correct. The first one is correct because the clause containing the anaphor is second in importance, and we know that a more important clause must follow and must contain the antecedent. The second one is correct because the clause containing the anaphor is second in word order, and we know the antecedent of "so" before we reach the anaphor.
The second example is easier to understand and easier to explain, but it is not more correct than the first. Both examples are correct, both examples can be understood, and both examples can be explained.
Both sentences are equivalent and correct. Maybe the nuance can be shown by an example which has more than one choice:
"If you so desire, you may download, or print out, or email the content."
To me, a native speaker, when used this way, it indicates there is an anticipation of the different actions by the person spoken to (as you point out the desires are enumerated later). It is softer and more polite. The phrase itself carries a feeling of deference. For example, speaking to a friend, the equivalent might be:
"You could download, or print out, or email the content."
Putting if you so desire at the end, seems to say well, if that's what you want to do, more of a hedge by the speaker than anticipating and trying to fill a need of the person spoken to.
Both work. "So" doesn't imply that which just happened or was stated, it just refers to a proximate thing, in past or future, before or after. It's entirely equivalent in meaning to "if you would like", which also doesn't mind whether the paired phrase comes before or after.
| common-pile/stackexchange_filtered |
Why configure bridge in hostapd.conf
In a lot of hostapd documentation (is there a single comprehensive document that covers all options?) I read that it is necessary to configure the bridge to which the wireless interface belongs using the hostapd bridge directive, instead of adding the interface the way you would usually configure it as part of a bridge. Why is this required? Is it possible to configure multiple wireless networks on different bridges that way?
I recently had a problem where the bridge directive was preventing the AP from working; removing the bridge fixed the problem, so I'm not sure it is required, or maybe it just isn't required any longer.
From what I understand, you can't add the WiFi device to the bridge until it is up and running in AP mode. So you can specify a bridge here, and hostapd will add it to the bridge once it becomes possible.
It's only needed if you want to bridge LAN and WiFi traffic, so the LAN and wireless clients become one big network on the same IP subnet. If you are configuring your machine as a router (e.g. with iptables, and the wireless clients are in a different IP subnet to the wired clients) then you don't need the bridge device.
Most commercial WiFi routers bridge the traffic between the WiFi and wired interfaces so using the bridge will replicate this behaviour and is much simpler to set up.
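Putting the pieces together, a minimal hostapd.conf for the bridged case might look like the sketch below. The interface and bridge names (wlan0, br0) and the SSID are assumptions for illustration; hostapd attaches the wireless interface to the named bridge once the AP is up.

```
interface=wlan0
bridge=br0
driver=nl80211
ssid=ExampleAP
hw_mode=g
channel=6
```

In the routed (non-bridged) case you would simply omit the bridge line and give the wireless interface its own subnet.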
| common-pile/stackexchange_filtered |
MapStruct using default constructor over parameterised constructor of Kotlin data class(error: constructor in class cannot be applied to given types)
In my Micronaut Kotlin project I have two repositories. I am using MapStruct in both of them.
Source Class
@JsonInclude
data class Source(
var id: String,
var no: String,
var value: String,
)
Destination Class
@JsonInclude
data class Destination(
var id: String,
var no: String,
var value: String,
)
Mapper
@Mapper(unmappedTargetPolicy = ReportingPolicy.IGNORE)
interface SourceMapper {
fun convertToDestination(doc: Source): Destination
}
One of them first extracts the values of the source class attributes and then passes them to the parameterised constructor generated by the Kotlin data class:
public Destination convertToDestination(Source doc){
if ( doc == null ) {
return null;
}
String id = null;
String no = null;
String value = null;
id = doc.getId();
no = doc.getNo();
value = doc.getValue();
Destination d = new Destination(id, no, value);
return d;
}
The other one first creates the object and then uses setters to assign values, which fails at compile time as a default constructor is not present for the data class:
public Destination convertToDestination(Source doc)
{
if ( doc == null ) {
return null;
}
Destination d = new Destination();
d.setId(doc.getId());
d.setNo(doc.getNo());
d.setValue(doc.getValue());
return d;
}
I have compared both Gradle files and I am using the same version of MapStruct:
kapt("org.mapstruct:mapstruct-processor:1.5.0.RC1") and implementation("org.mapstruct:mapstruct:1.5.0.RC1")
Please help me understand the behaviour and how we can control it.
One solution for the build issue is to add default values to every attribute in the Kotlin data class, but the behaviour still varies between the two microservices, and I am unsure how to control that.
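That default-values workaround can be sketched as below (a guess at the intended shape, not an explanation of why the two generated mappers differ): with a default for every property, Kotlin also emits a zero-argument constructor, so the setter-based generated code can compile.

```kotlin
@JsonInclude
data class Destination(
    // defaults give the class a zero-arg constructor in addition to
    // the full parameterised one
    var id: String = "",
    var no: String = "",
    var value: String = "",
)
```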
| common-pile/stackexchange_filtered |
Is mocking an essential part of unit testing?
I have a web service which calls a third party web service.
Now I want to unit-test my web service. For this, should I mock the third party web service or is it fine to call it during the tests?
Is there any standard document on unit testing?
If you are unit testing your component, then you should mock the others. A unit test on a single component should not depend on other components, I believe.
There are books on unit testing. There is no absolute truth or standard when it comes to unit testing. There are best practices and there are times when it's convenient to do things differently.
Yes, you should mock the third party web service for unit testing. Here are some advantages:
Your unit tests are independent of other components and really only test the unit and not also the web service. If you include the service in your unit tests and they fail, you won't know whether the problem is in your code or in the foreign web service.
Your tests are independent of the environment. If the Internet connection is ever down or you want to test on a machine that doesn't have Internet access at all, your test will still work.
Your tests will be much faster without actually connecting to the web service. This might not seem like a big thing but if you have many many tests (which you should), it can really get annoying.
You can also test the robustness of your unit by having the mock send unexpected things. This can always happen in the real world and if it does, your web service should react nicely and not just crash. There is no way to test that when using the real web service.
However, there is also integration testing. There, you test the complete setup, with all components put together (integrated). This is where you would use the real web service instead of the mock.
Both kinds of testing are important, but unit testing is typically done earlier and more frequently. It can (and should) be done before all components of the system are created. Integration testing requires a minimal amount of progress in all or most of the components.
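A minimal sketch of the mocking idea in Python's unittest.mock (the service and function names here are illustrative, not from the question): the third-party client is replaced by a mock, so the test is fast, deterministic, and exercises only the unit's own logic.

```python
from unittest import mock

def describe_weather(client, city):
    # Unit under test: formats data fetched from an external service.
    data = client.fetch(city)
    return f"{city}: {data['temp']}C"

# In the unit test, the real service client is mocked out entirely.
fake_client = mock.Mock()
fake_client.fetch.return_value = {"temp": 21}

result = describe_weather(fake_client, "Oslo")
# The mock also lets us verify how the unit called its dependency.
fake_client.fetch.assert_called_once_with("Oslo")
```

The same mock could be told to raise an exception instead, to test the robustness point above.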
Could you please suggest a tutorial on creating mock web service clients? Thanks.
I'm sorry, but I don't know any tutorial specifically for web service mocking. But if your code is well designed, you should have a (Java) interface for the third party service so that you can easily replace the actual implementation with a mock.
Yeah, it would be real fun if there were some predefined guidelines... I came across this article about unit testing web services and it says that it is OK to go over the wire... http://www.drdobbs.com/web-development/unit-testing-web-services/211201695... Also, Microsoft recommends going to the server... http://msdn.microsoft.com/en-us/library/ms243399%28v=vs.80%29.aspx
This depend on what you're unit-testing.
So if you're testing out whether you can successfully communicate with the third-party webservice, you wouldn't obviously try to mock this. However if you're unit-testing some business use-cases that are part of your web-service offering (unrelated to what the other modules/web services are doing), then you might want to mock the third-party web service.
You should test both, but the two test cases do not both fall under unit testing.
Unit testing is primarily used for testing individual smaller pieces i.e. classes and methods. Unit testing is something that should preferably happen during the build process without any human intervention. So as part of Unit testing you should mock out the third party webservice. The advantage of mocking is that you can make the mocked object behave in many possible ways and test your classes/methods to make sure they handle all possible cases.
When multiple systems are involved, the test case falls under System/Integration/Functional testing. So as part of your System/Integration/Functional testing, you should call the methods in other webservice from your webservice and make sure everything works as expected.
Mocking is usually essential in unit testing components which have dependent components. This is so because in unit testing, you are limited to testing that your code works correctly (does what its contract says it will do). If this method, in trying to fulfil its contract, depends upon other components doing their part correctly, it is perfectly fine to mock that part out (assuming that they work correctly).
If you don't mock the other dependent parts, you soon run into problems. First, you cannot be certain what behavior that component exhibits. Second, you cannot predict the results of your own test (because you don't know what inputs were supplied to it).
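To make the distinction concrete, here is the same idea sketched in Python with unittest.mock (the names OrderProcessor and charge are hypothetical illustrations, not anything from the thread): the code under test depends on an injected service interface, and the unit test swaps in a mock so nothing goes over the wire.

```python
from unittest.mock import Mock

class OrderProcessor:
    """Code under test; in production payment_service calls a third-party web service."""
    def __init__(self, payment_service):
        self.payment_service = payment_service  # injected dependency

    def process(self, order_id, amount):
        # Our contract: charge via the payment service, report success or failure.
        result = self.payment_service.charge(order_id, amount)
        return "ok" if result else "failed"

# Unit test: mock out the third-party service entirely.
payment = Mock()
payment.charge.return_value = True           # simulate the happy path
processor = OrderProcessor(payment)
assert processor.process("o-1", 9.99) == "ok"

payment.charge.return_value = False          # simulate a declined charge
assert processor.process("o-1", 9.99) == "failed"
payment.charge.assert_called_with("o-1", 9.99)
```

The same pattern applies in any language: depend on an interface, inject it, and substitute a mock in the unit test while the real wire-level communication is covered by separate integration tests.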
How do I get the values of a nipple.js joystick
I have two joysticks and want to get their position and size properties with JS. The official documentation is hard for me to understand, because it is very theoretical and has no practical examples.
I tried using the console to get the properties by checking the value of the joysticks, but I miserably failed getting any information out of that.
All I need is to get both joystick centers and stick positions.
Here is my js:
// touchdevice is set to true the moment a touch action happens
if(touchdevice) {
// mstick and astick are predefined
mstick = document.querySelector("#mstick");
astick = document.querySelector("#astick");
window.mstick = nipplejs.create({
color: "#000000",
shape: "square",
zone: mstick,
threshold: 0.5,
fadeTime: 300
});
window.astick = nipplejs.create({
color: "#000000",
shape: "circle",
zone: astick,
threshold: 0.5,
fadeTime: 300
});
}
perhaps some examples? https://snyk.io/advisor/npm-package/nipplejs/example (was going to make a joke about being abreast of those examples ....)
I tried looking at the example, but it only confused me even more
My friend and fellow developer colleague Wef324 (on Discord) helped me with this question. Here is the end result:
if (touchdevice) {
//mstick and astick are predefined
const mstick = document.querySelector("#mstick");
const astick = document.querySelector("#astick");
window.mstick = {
position: {
x:0, y:0
},
distance: 0,
};
window.astick = {
position: {
x:0, y:0
},
distance: 0,
};
window.mstickInstance = nipplejs.create({
color: "#000000",
shape: "square",
zone: mstick,
threshold: 0.5,
fadeTime: 300,
});
window.astickInstance = nipplejs.create({
color: "#000000",
shape: "circle",
zone: astick,
threshold: 0.5,
fadeTime: 300,
});
window.mstickInstance.on("move", (event, nipple) => {
window.mstick.position = nipple.position;
window.mstick.distance = nipple.distance;
window.mstick.direction = nipple.angle.radian;
console.log(window.mstick);
});
window.astickInstance.on("move", (event, nipple) => {
window.astick.position = nipple.position;
window.astick.distance = nipple.distance;
window.astick.direction = nipple.angle.radian;
console.log(window.astick);
});
}
This code saves the values that the "move" event delivers into the corresponding `window.stickname` variable.
Before you say "Why don't you mark this answer as correct?": I have to wait 20 hours to do so. Not my fault, but Stack Overflow's fault.
That's a "safeguard" to force you to wait and see if someone else has a better solution, so that you don't go with the first thing you find that happens to work and that's that.
Thanks for informing me about this
Using std::stringstream as the text that gets thrown as a runtime error
So basically, I am checking if a window is created in SDL and if not a runtime error is thrown. The following was suggested to me by someone in code review.
if (!windowCreated())
{
std::stringstream buffer;
buffer << "Could not intialize SDL properly: " << SDL_GetError();
}
But in line std::stringstream buffer, I get an error saying incomplete type is not allowed.
I was playing around with it and noticed that this error disappears if I do std::stringstream buffer();, but get a new error saying expression must have integral or unscoped enum type in the next line.
The following function shows what I am trying to achieve.
void throwError()
{
bool windowCreated = false;
if (!windowCreated)
{
const char* SDL_ERROR = "someerrror";
std::stringstream buffer;
buffer << "error: " << SDL_ERROR;
throw std::runtime_error(buffer.str());
}
}
If you open your C++ textbook to the chapter that explains how to use std::stringstream, it'll tell you which header needs to be #included in order to use it. That should solve this compilation error. P.S. this is a compilation error, not a runtime error.
bool windowCreated = false; if (!windowCreated) - what is the purpose of this?
@SamVarshavchik Thank you. It worked
fwiw, std::stringstream buffer(); declares buffer as a function that returns a std::stringstream.
Make sure you have #include <sstream> in your code in order to use std::stringstream (in your case, prefer std::ostringstream instead). You are likely missing that header file include, but the class has been forward-declared in another header file that you are including. That would explain why the compiler thinks the class is an incomplete type.
The reason why std::stringstream buffer(); doesn’t work is because that is not a declaration for a std::stringstream variable named buffer, but rather is a declaration for a function named buffer that takes no arguments and returns a std::stringstream.
ListView highlight item isn't shown
I'm trying to highlight the currently selected item in a ListView. Below is the code I'm using; for some reason, while similar code works perfectly in another ListView of this application, here the SelectedRectangle item is never displayed, although the selected item changes when it should.
Rectangle {
id: deviceTree
width: (window.width * 2) / 3
height: 400
border {
width: 2
color: "black"
}
ListView {
id: deviceTreeView
model: deviceTreeModel
delegate: deviceTreeDelegate
highlight: SelectionRectangle {}
anchors.fill: parent
anchors.margins: 6
}
Component {
id: deviceTreeDelegate
Rectangle {
border.color: "#CCCCCC"
width: deviceTree.width
height: 30
smooth: true
radius: 2
MouseArea {
anchors.fill: parent
onClicked: { deviceTreeView.currentIndex = index; window.selectedDeviceChanged(deviceName) }
}
}
}
}
SelectedRectangle.qml
Rectangle
{
id: selectionRectangle
color: "lightsteelblue"
smooth: true
radius: 5
}
SOLUTION: The rectangle in deviceTreeDelegate was white by default and overlapped the selection rectangle. Setting its color property to "transparent" lets the selection be seen.
This is due to the default Rectangle color being white and the highlight being stacked under the delegate. Setting the Rectangle color to "transparent" will allow the highlight to be visible through the delegate.
Your code gets two mistakes :
The component for the highlight property: the name of a component's type is the same as the name of the QML file where it is defined. You defined it in a file named SelectedRectangle.qml, so you have to write highlight: SelectedRectangle {} in your main QML file.
The type of the highlight member is Component, so the component you assign to it should have a type that inherits Component. Here you use a QML component that inherits Rectangle, and Rectangle does not inherit Component. You should enclose your SelectedRectangle in a Component object, just as you did for the delegate.
Finally, you should write something like this for your highlight component :
highlight: Component {
SelectedRectangle {}
}
Sorry, the naming is right in my code, I just copied it wrong. Plus, even enclosing the rectangle in a Component doesn't work.
Creating an object for a form in React / Typescript
I'm wanting to create a form that will be used for both editing and creating a new object.
So if let's say I have,
interface IDog {
name: string
type: string
}
interface IDogForm {
dog: IDog
}
const DogForm = (props: IDogForm) => {
const { dog } = props
const [value, setValue] = useState<IDog>(dog)
return (
<>
<input value={dog?.name} />
<input value={dog?.type} />
</>
)
}
Since I'm using the same form for editing and creating a new object, sometimes dog can be undefined. In which case, I've added dog?.name as a check to see if dog exists. My main question is, would it be a good idea to do something like this in addition to checking for existence:
const [value, setValue] = useState<IDog>({
name: '',
type: ''
})
Or in the parent component, I could do something like this:
const dog = {} as IModal
Although, from what I've read it's not the greatest idea to do it that way because I'll lose type safety.
Why not add the ? in the IDog interface after name and type, instead of adding it to the value?
Because they are required fields that need to be completed in the form
If you make dog optional in IDogForm:
interface IDogForm {
dog?: IDog
}
Then you can do something like this:
const newDog: IDog = { name: '', type: '' }
const DogForm = ({ dog }: IDogForm) => {
const [value, setValue] = useState(dog ?? newDog)
return <>
<input value={value.name} />
<input value={value.type} />
</>
}
Now dog is optional, but if its present then all fields on that dog are required. By initializing the state with dog ?? newDog you provide a blank dog to use when one is not passed in.
Now you can use this component with or without a dog.
const testA = <DogForm dog={{ name: 'Fido', type: 'good boy' }} />
const testB = <DogForm />
See playground
How to set the name attribute for a dynamically created div in jQuery
I create a div dynamically whenever I click the "Add More" button. This is how I create my div, shown below:
<div id="row-template">
<div class="cell" >
<input type="text" value="" class="part-number"/>
</div>
</div>
Now I want to set the name attribute for this div:
<div class="cell" ><input type="text" value="" class="part-number"/></div>
I will create multiple divs and I want to get all the values in my Java using request.getParameter("The Part Number Text Box Name");
My issue is that my Java variable is null (String partNumber = request.getParameter("The Part Number Text Box Name"))
It is giving me null values because I am unable to set the name attribute for this text box. I have to set a unique name with an index; only then is it easy to get in Java by index.
The issues I want to resolve are:
I want to get all my text box values into my Java through request.getParameter(...) with the help of an index
I want to get the row count, i.e. how many divs I created.
Please help me out from these issues...Thanks in advance
See, I want to get the values of multiple created divs, so I chose the name parameter to get all the values into Java with unique names, built with the help of index values.
Have you tried adding the name attribute to the input element instead of the div element?
To borrow a bit from the previous answer:
var cnt = 0;
$("#mybutton").on("click", function () {
var html = "";
html += "<div id='row-template'>";
html += "<div class='cell'>";
html += "<input name='myName" + cnt +"' type='text' value='' class='part-number'/>";
html += "</div>";
html += "</div>";
console.log(html);
$("#WhereToAppend").append(html);
cnt++;
});
The code above is jquery code that will add a new input (and surrounding div) element to the DOM. It will give each input element a unique name by appending the value of "cnt" to the string "myName", in effect giving the input element a name of "myName0", "myName1", "myName2" for each time you trigger this specific click event.
From the W3Schools definition of the Input element:
The name attribute specifies the name of an element.
The name attribute is used to reference elements in a JavaScript, or to reference form data after a form is submitted.
Note: Only form elements with a name attribute will have their values passed when submitting a form.
So can you give me a solution to get the text box values of the multiple created divs into Java using request.getParameter...?
If every input element has a name attribute, they will show up in the post parameters. Whether you are using Java or .NET is immaterial. You need each INPUT element to have a name attribute, not the DIV elements.
I don't know how you append the div dynamically, but in jQuery it will look like this:
var count = 0;
$("#MyAddButton").click(function () {
var html = "";
html += '<div id="row-template">';
//put the name you want here
html += '<div class="cell" name="myName"><input type="text" value="" class="part-number"/></div>';
html += '</div>';
$("#WhereToAppend").append(html);
count++;
});
Right @Mathieu Labrie Parent, I set the name attribute for that div, but I could not get it in Java; it's giving a null value when using request.getParameter, so how can I overcome this?
I'm guessing that getParameter is looking in the query string. Divs don't get passed in the query string when you submit a form, only inputs. You could put that value into a hidden field.
No @Mathieu Labrie Parent, if putting it directly like ( ) the value comes through in request.getParameter, but when setting the name attribute with the current index value it is not taking it and I get "null"..
I may not have a full answer for you but hopefully this will lead you in the right direction.
1)
If your page is invoked like this:
http://<your_server>:<your_port>/mytest.jsp?crazyParamName=crazyParamAnswer
You can use:
String answer = request.getParameter( "crazyParamName" );
and the variable answer will be a String with the value: crazyParamAnswer
Use it to create your new inputs by inserting it into the page with a scriptlet like this: <%= answer %>
2) You can count how many divs you create when you click on the 'add more' button (using jQuery, I am assuming)
var numberOfCells = $('#row-template').find('.cell').length;
So, when you create a new button, you simply add one to the existing numberOfCells and use it as your index. Every time the button is clicked, the index will go up by one.
Multiple $sum in $group in MongoDB
Given this MongoDB collection:
[
{ client: 'client1', type: 'Defect', time: 5 },
{ client: 'client1', type: 'Test', time: 5 },
{ client: 'client2', type: 'Management', time: 3 },
{ client: 'client2', type: 'Defect', time: 3 },
{ client: 'client3', type: 'Test', time: 4 }
]
I would like to get a sum of times from each issue_type like this:
{
client1: { 'Defect': 5, 'Test': 5 },
client2: { 'Management': 3, 'Defect': 3 },
client3: { 'Test': 4 }
}
And I've been trying to do this using the aggregation framework (to replace an existing map/reduce) but have only been able to get as far as getting counts for the combinations like this:
{ '_id': { client: 'Client1', class: 'Defect' }, count: 5 }
{ '_id': { client: 'Client1', class: 'Test' }, count: 5 }
{ '_id': { client: 'Client2', class: 'Management' }, count: 3 }
{ '_id': { client: 'Client2', class: 'Defect' }, count: 3 }
{ '_id': { client: 'Client3', class: 'Test' }, count: 4 }
Which is simple enough to reduce programmatically to the desired result but I was hoping to be able to leave that to MongoDB.
For any help that might be rendered many thanks in advance!
Edited
I'm adding this aggregation group
db.getCollection('issues').aggregate(
[
{
$group:
{
_id: {component:"$client"},
totalTime:{$sum: "$time" }
}
}
]
)
Your values are strings, not numbers. What if you have two documents with the same key/value pairs, like { client: 'client1', type: 'Defect', time: '5' }? What are you trying to do here?
What version of MongoDB are you running?
Please post your aggregation code so we can help you based on what you have so far.
@user3100115 I want to $sum all hours in client-type. I'm reading a csv with a list of issues time imputation and I want to get some analysis information.
@Saleem mongod version: 3.0.9
I don't like your suggested output format. What you are essentially asking for
is taking your "data" and turning that into the "keys" of the result produced. To me this is the antithesis of clean object oriented design, since every "object" is completely different and you basically need to cycle the keys to determine what type of thing it is.
A better approach is to keep the keys as they are, roll-up with $group on the combination of "client" and "type", and then $group again to $push the data per "type" into an array for each grouped "client":
db.getCollection('issues').aggregate([
{ "$group": {
"_id": {
"client": "$client",
"type": "$type"
},
"totalTime": { "$sum": "$time" }
}},
{ "$group": {
"_id": "$_id.client",
"data": {
"$push": {
"type": "$_id.type",
"totalTime": "$totalTime"
}
}
}}
])
This gives you a result like this:
{
"_id" : "client1",
"data" : [
{
"type" : "Test",
"totalTime" : 5
},
{
"type" : "Defect",
"totalTime" : 5
}
]
}
{
"_id" : "client2",
"data" : [
{
"type" : "Defect",
"totalTime" : 3
},
{
"type" : "Management",
"totalTime" : 3
}
]
}
{
"_id" : "client3",
"data" : [
{
"type" : "Test",
"totalTime" : 4
}
]
}
Which to me is a perfectly natural and structured form of the result, with each "client" as a document and a naturally iterable list as its content.
If you are really insistent on the single object output format with named keys, then this source is easy to transform. And to my mind the simple code shows again how much better the previous result is:
var output = {};
db.getCollection('issues').aggregate([
{ "$group": {
"_id": {
"client": "$client",
"type": "$type"
},
"totalTime": { "$sum": "$time" }
}},
{ "$group": {
"_id": "$_id.client",
"data": {
"$push": {
"type": "$_id.type",
"totalTime": "$totalTime"
}
}
}}
]).forEach(function(doc) {
output[doc._id] = {};
doc.data.forEach(function(data) {
output[doc._id][data.type] = data.totalTime;
});
});
printjson(output);
Then you get your object as you like:
{
"client1" : {
"Test" : 5,
"Defect" : 5
},
"client2" : {
"Defect" : 3,
"Management" : 3
},
"client3" : {
"Test" : 4
}
}
But if you are really insistent on the server crunching all of the work and not even off-loading the reshaping of the result, then you can always fire this as mapReduce:
db.getCollection('issues').mapReduce(
function() {
var output = { },
data = {};
data[this.type] = this.time;
output[this.client] = data;
emit(null,output)
},
function(key,values) {
var result = {};
values.forEach(function(value) {
Object.keys(value).forEach(function(key) {
if ( !result.hasOwnProperty(key) )
result[key] = {};
Object.keys(value[key]).forEach(function(dkey) {
if ( !result[key].hasOwnProperty(dkey) )
result[key][dkey] = 0;
result[key][dkey] += value[key][dkey];
})
})
});
return result;
},
{ "out": { "inline": 1 } }
)
Which has the same kind of output:
{
"_id" : null,
"value" : {
"client1" : {
"Defect" : 5,
"Test" : 5
},
"client2" : {
"Management" : 3,
"Defect" : 3
},
"client3" : {
"Test" : 4
}
}
}
But since it is mapReduce, the interpreted JavaScript is going to run
much more slowly than the aggregation pipeline's native code, and of course would never scale to a result that produced a document larger than the 16MB BSON limit, because all the result is mashed into one document.
Plus, just look at the complexity of traversing Object keys, checking for keys, creating and adding. It's really just a mess, and an indicator of any further code working with such a structure.
So for my money, stay away from transforming perfectly well-formed data into something where actual "values" are represented as "keys". It really makes no sense from a clean design perspective, nor does it make sense to replace natural "array" lists with traversing the keys of an object.
Wooooww, fantastic explanation, and it resolves my question perfectly. I didn't know about the $push operator.
Foreign key constraint @ManyToMany relationship preventing deletion
I've three associated records (Conference, SubmissionRecord, SubmissionAuthorRecord). Every SubmissionRecord has a Conference object and has a List<SubmissionAuthorRecord>.
When I delete a Conference record if the SubmissionRecord is associated with that Conference, it should cascade and delete as well. However, I keep getting a java.sql.SQLIntegrityConstraintViolationException: Cannot delete or update a parent row: a foreign key constraint fails (`viz`.`submission_record_author_set`, CONSTRAINT `FKgnqq52l26bitkmojk1oiuaki1`
FOREIGN KEY (`submission_record_s_id`) REFERENCES `submission_record` (`s_id`)) error message.
The table submission_record_author_set is created automatically and I have no entity that maps to it.
I understand the issue lies in the fact that the submission_record_author_set rows are preventing the SubmissionRecord from being deleted and have tried the @PreRemove method described here (How to remove entity with ManyToMany relationship in JPA (and corresponding join table rows)?) but to no avail. Maybe there's an issue with the ManyToMany annotation? Cause I do not see the equivalent annotation in the SubmissionAuthorRecord either.
@Entity
public class SubmissionRecord {
@Id
@GenericGenerator(name = "UseExistingIdOtherwiseGenerateUsingIdentity", strategy = "xyz")
@GeneratedValue(generator = "UseExistingIdOtherwiseGenerateUsingIdentity")
@JsonSerialize(using = ToStringSerializer.class)
@Column(name = "s_id")
private Long id;
@Exportable(name = "Submission Id", nameInDB = "s_submission_id")
@Column(name = "s_submission_id")
private String submissionId;
// internal set of authors of the associated
@ManyToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
@JsonIgnore
private List<SubmissionAuthorRecord> authorSet;
@JoinColumn(name="conference_id")
@ManyToOne(fetch = FetchType.EAGER)
@OnDelete(action = OnDeleteAction.CASCADE)
private Conference conference;
//...
}
@Entity
public class Conference {
@Id
@GenericGenerator(name = "UseExistingIdOtherwiseGenerateUsingIdentity", strategy = "xyz")
@GeneratedValue(generator = "UseExistingIdOtherwiseGenerateUsingIdentity")
@JsonSerialize(using = ToStringSerializer.class)
private Long id;
private String creatorIdentifier;
private String conferenceName;
private String conferenceYear;
}
@Entity
public class SubmissionAuthorRecord {
@Id
@GenericGenerator(name = "UseExistingIdOtherwiseGenerateUsingIdentity", strategy = "xyz")
@GeneratedValue(generator = "UseExistingIdOtherwiseGenerateUsingIdentity")
@JsonSerialize(using = ToStringSerializer.class)
@Column(name = "s_author_id")
private Long id;
private String dataSet;
@Column(name = "s_author_name")
private String name;
}
The submission_author_record_set table looks like the following:
did you try to add "cascade=true" ?
@rmalchow you mean the annotation? Yep I have!
async / await best way on REST API Ruby on Rails
I have a question because I'm a noob in Ruby, but I need to develop an API.
I'm asking to understand better how to implement async/await. For example here:
members_controller.rb
#GET /members
# Get all the members
def index
begin
render json: Member.get_all_members
rescue => error
render json: {message: "An error occurs during the get all members", status: 404, error: error.message}
end
end
And the model member.rb
# Get all members
#
def self.get_all_members
begin
return self.all
rescue => error
raise "Exception thrown #{error.message}"
end
end
I want to manage the await/async response well.
Don't hesitate to send me good documentation or whatever which can help me.
I'm so lost because I come from the .NET framework env ! ahah.
Thanks a lot !
The Fetch or Axios call with async and await is on the client side, not in the API. So you have to write it in JavaScript (for example) in the application that will consume your API. As such, your code seems fine. As a side comment: Ruby on Rails is overkill for an API. Please check Sinatra or Roda with Sequel.
Thanks for your comment @thiebo. It's strange not to have async/await to call a database. Very, very strange. And can you explain why a Rails API is overkill? It's the API framework, so... I'm interested in your suggestion!
The question is: why would you do an async function to call a database? Concerning RoR versus Sinatra versus other frameworks: it's just an opinion. Don't bother.
=> Scalability / performance.
When you make I/O calls (database queries, file reading, reading from HTTP, etc.), the thread that is handling the current HTTP request is just waiting.
It's just waiting for a result to come back from the operating system.
Performing a database query, for example, ultimately asks the operating system to connect to the database, send a message and get a message in return, and the thread sits idle for that whole round trip.
While I understand your POV, async here would generate many more things for you to handle. If you want a simpler and excellent option, look at the Phoenix framework; it's based on Elixir. It's not OOP, but functional programming.
?? What? ahah. All the languages do async/await in the backend: Java, C#, Node.js etc., so I think Ruby does too ahah. I'll search.
The Ruby language and Rails framework do not support async/await. Meanwhile you might take a look at https://github.com/ruby-concurrency/concurrent-ruby
You can try to explore async-ruby & load_async
Clustering with cluster size/mass constraint
I'm rather new to this field and maybe even using a bit naive terminology.
So I need to classify (cluster) some objects, but limiting each cluster in size, so for each cluster $C_i$,
$$\sum_{o \in C_i} m(o) \leq N$$
where $m(\cdot)$ is a some norm-like function.
In case the objects are on a Euclidean plane, it would be nice if also $\forall i \neq j,\ h(C_i) \cap h(C_j) = \emptyset$, where $h(\cdot)$ is the convex hull (though this is not strictly required).
Lots of thanks in advance for correcting my terminology, and links to both algorithm implementations and theory.
P.S. Without this limitation, we are more or less happy with DBSCAN results.
// Best regards, Sergei
Would it be reasonable to use k-means, increasing $k$ until the above constraint holds? I have no idea if this would give decent results, especially for your specific case (you likely won't get results comparable to those from DBSCAN); it's just the first idea that came to mind. Would also be very low-effort.
I guess this idea applies more generally to any algorithm with a limited enough set of parameters that you can easily adjust and check your results. Of course, this would be less efficient than an algorithm designed with a maximum "cluster mass" in mind, and you'd need to develop some heuristics. I'm just not aware of any existing algorithms that do what you need (though my knowledge is likely as limited as yours, if not more so).
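The increase-k idea from the comment above can be sketched as follows. This is a toy, pure-Python Lloyd's k-means (all function names are hypothetical); a real implementation would substitute a library clusterer, since the search loop over k is the point:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on 2-D points; returns one cluster label per point."""
    rng = random.Random(seed)
    centers = list(rng.sample(points, k))      # initialize centers at k random points
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centers[c]))
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return labels

def cluster_with_mass_limit(points, masses, limit, k_max=None):
    """Increase k until every cluster's total mass m(C_i) is <= limit."""
    k_max = k_max or len(points)
    for k in range(1, k_max + 1):
        labels = kmeans(points, k)
        totals = {}
        for lab, m in zip(labels, masses):
            totals[lab] = totals.get(lab, 0) + m
        if max(totals.values()) <= limit:
            return k, labels
    raise ValueError("no feasible k: a single object already exceeds the limit")
```

Note this only enforces the mass constraint by brute force; it inherits k-means's behavior otherwise, so the resulting clusters need not resemble DBSCAN's.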
Look at these cesium measurements again - the ratio of hyperfine constants between the two isotopes doesn't match what we'd expect from just their nuclear magnetic moments.
The deviation is only about 0.03%, but that's way beyond our measurement uncertainty. Something's affecting the s-electrons differently in each isotope.
Right, because those are the only electrons that actually penetrate the nucleus. The p1/2 electrons too, but mainly the s-states. What if the nuclear magnetization isn't point-like?
Exactly. If the magnetic moment is distributed throughout the nuclear volume, then electrons sampling different parts of that distribution will experience slightly different fields.
But both isotopes have the same electronic structure, so why would there be any difference?
The nuclear charge distributions are different. Cesium-133 and cesium-134 have different numbers of neutrons, which changes both the size and shape of the nuclear magnetic distribution.
That makes sense for the Bohr-Weisskopf effect, but there's also the charge distribution correction, isn't there?
The Breit-Rosenthal correction, yes. When the nuclear charge is extended rather than point-like, it creates additional electric field gradients that the electron experiences.
So we're seeing both electromagnetic multipole interactions being modified by nuclear structure. Can we separate these effects experimentally?
We'd need hyperfine measurements from multiple electronic states. The s1/2 and p1/2 states probe the nucleus differently, so comparing their anomalies can reveal the relative contributions.
But the differential anomaly between neighboring isotopes is typically only 0.02%. How do we get enough precision to extract meaningful nuclear structure information?
That's the challenge. We need frequency measurements accurate to parts per million, and we have to account for all the systematic effects in the atomic structure calculations.
What about using this backwards - if we can calculate the electronic contributions precisely enough, could we extract nuclear magnetic moment distributions?
In principle, yes. The contact interaction depends on the electron wavefunction at the nuclear origin, but the anomaly tells us how that interaction changes when we account for the extended nuclear structure.
The single-particle nuclear model gives reasonable predictions for simple cases, but complex nuclei with mixed spin and orbital contributions to their moments are much harder to interpret.
Still, it's remarkable that we can probe nuclear structure at this level just by measuring atomic
How do I get Post Object data from a WordPress ACF Flexible Content field
Using Advanced custom field plugin I have a custom post type that has various custom fields assigned to it.
I'm trying to output all of the data that is contained in the flexible content field "content"
I have it outputting "text_ad" ok but for some reason i can't figure our the "newsletter_article" which is a post object - Any direction to getting this working would be amazing.
Read this https://www.advancedcustomfields.com/resources/flexible-content/ and this https://www.advancedcustomfields.com/resources/post-object/
<?php
// check if the flexible content field has rows of data
if( have_rows('content') ):
// loop through the rows of data
while ( have_rows('content') ) : the_row();
if( get_row_layout() == 'text_ad' ):
the_sub_field('text_ad_title');
the_sub_field('text_ad_url');
the_sub_field('text_ad_description');
elseif( get_row_layout() == 'newsletter_article' ):
$post_object = get_sub_field('the_newsletter_article');
if( $post_object ):
$post = $post_object;
setup_postdata( $post );?>
<strong><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></strong>
<?php
wp_reset_postdata();
endif;
endif;
endwhile;
else :
// no layouts found
endif;
?>
Please provide a screenshot of your ACF setup. Row layouts should be the name of the flexible content type.
Because I'm just getting one object, I can access the data like this.
<?php
elseif( get_row_layout() == 'newsletter_article' ):
$post_object = get_sub_field('the_newsletter_article');
$newstitle = $post_object->post_title;
$newsdescription = $post_object->post_content;
?>
<div style="background-color: #cccccc; margin:0 0 20px 0;">
<p><strong><?php echo $newstitle; ?></strong></p>
<p><?php echo $newsdescription; ?></p>
</div>
<?php endif; ?>
How do I ignore a System.Data.SqlClient.SqlException: String or binary data would be truncated. exception
I know that my data is too large and that is why I am getting the exception. However, this seems more like a warning to me. Is there a property in the SQLCommand that I can set that would have this just be a warning and insert the truncated data anyhow? Trying to eliminate all possible questions that would be quickly answered. I cannot change the size of the column in the SQLDatabase. I do not have permission to do this. I would like to avoid going to each string and saying if length > x take substring. I just want this to be ignored at the command level.
You can execute SET ANSI_WARNINGS OFF to suppress this message. Or you can use the LEFT/RIGHT functions to truncate the data, passing the maximum number of characters for the column.
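If SET ANSI_WARNINGS OFF is not acceptable (it suppresses other warnings too), one alternative is to truncate generically on the client once, before building the INSERT, rather than editing every string at every call site. A sketch in Python; the column-width map here is hypothetical, and in practice the widths could be read once from INFORMATION_SCHEMA.COLUMNS (CHARACTER_MAXIMUM_LENGTH) for the target table:

```python
# Hypothetical column-width map for the target table.
COLUMN_WIDTHS = {"name": 50, "city": 30, "note": 10}

def truncate_row(row, widths=COLUMN_WIDTHS):
    """Return a copy of a row dict with every string cut to its column's width.

    Non-string values and columns without a known width pass through unchanged.
    """
    out = {}
    for col, value in row.items():
        limit = widths.get(col)
        if limit is not None and isinstance(value, str):
            value = value[:limit]   # client-side equivalent of SQL LEFT(value, limit)
        out[col] = value
    return out
```

Passing every row through this one helper before executing the command achieves the same result as wrapping each value in LEFT(value, n) in the SQL, without touching individual strings by hand.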
You can turn ANSI warnings off and get around this, but that means either chaining commands to turn it off and back on, or relying on stored procedures to handle it. I have not examined all database options, but I believe this can only be done at the batch level. Even if it can be done for the entire database, I don't think it can be set per table. And even if it can be done for the entire database, it is a bad idea. It's best to only use the setting where you don't care about truncation.
In my opinion, truncate and ignore is a bad option, unless you really do not care about the data. And, if you don't care about it, why are you saving it anyway?
backbone/underscore template in mustache format causing error on # pound/hash symbol?
I'm using backbone's underscore templating engine with the mustache formatting patterns.
I have already been successfully using it elsewhere in the project but now for the first time I'm using the looping list patterns from mustache to populate the template which is throwing an error which I'm a bit baffled by.
The error in chrome is:
"Uncaught SyntaxError: Unexpected token ILLEGAL"
and points to underscore's template function in the backtrace, which is pretty useless but in firebug i get a more helpful error like this:
Suggesting that the hash symbol '#' is the issue, which would make sense, as I know that mustache is otherwise working OK since many other parts of the project use it well; also, this is the first time I'm using the hash symbol in my templates. It looks like a problem either with the looping feature or with the interpolation/template settings for underscore.
Here's the relevant piece of my template:
<div class="thumblist thumblistleft" id="currentprojectslist">
<div class="thumb-list-header">
<h2>current projects</h2>
</div>
<div class="thumb-list-area">
<ol>
{{#worklist}} <!-- LOOK HERE -->
{{#current}}
<li><a>{{title}}</a></li>
{{/current}}
{{/worklist}}
</ol>
</div>
</div>
and here's a sample of the JSON (which all validates fine)
{blah blah blah lot in here before,"worklist":[{"thumb":"img/project-s.jpg","id":"340","title":"Test Project One","desc":"big load of content here","current":true}], and so on....}
I was initially following this example here for reference:
http://mustache.github.com/#demo
NOW HERES WHERE I THINK THE PROBLEM MIGHT BE:
underscore.js suggests using this before rendering a mustache template:
_.templateSettings = {
evaluate : /\{\[([\s\S]+?)\]\}/g,
interpolate : /\{\{([\s\S]+?)\}\}/g
};
also:
interpolate : /\{\{(.+?)\}\}/g
Also just the interpolate statement; I've tried both. However, my regex knowledge is really poor and I have a feeling it might not accommodate the hash. At any rate, I'm totally stumped.
Can someone help me out here?
Is it even possible to loop like this? Looking at the underscore source, I'm not sure:
http://documentcloud.github.com/underscore/docs/underscore.html#section-120
Thanks very much
The pattern /\{\{([\s\S]+?)\}\}/g will match {{#foo}} and capture #foo. You could avoid capturing the hash by using /\{\{#?([\s\S]+?)\}\}/g, although I think it will cause other problems.
did you ever manage to loop like this? I would really like to, it's very annoying in underscore to write out a simple if across multiple evaluate lines to check if a flag is present in my JSON.
hey, sadly underscore won't do it natively, you have to include mustache.js and do mustache.render, as below.. It's really annoying i agree
I'm posting for the sake of anyone else facing this issue.
After a lot of googling to no avail, I went through the underscore.js source with a fine-toothed comb. Basically, you have to either use underscore's template syntax, write ugly function processors into your JSON, or include mustache.js into your source and call:
Mustache.render(mytemplate,mymodel)
and forsake underscore's
_.template(..) function
Annoying but whatever, I hope that helps someone
Came up against this problem today. The issue seems to be the order that Underscore does the templating: escape, interpolate, then evaluate. So you need to explicitly ignore any matches for {{# in your interpolation regex:
_.templateSettings = {
evaluate: /\{\{#([\s\S]+?)\}\}/g, // {{# console.log("blah") }}
interpolate: /\{\{[^#\{]([\s\S]+?)[^\}]\}\}/g, // {{ title }}
escape: /\{\{\{([\s\S]+?)\}\}\}/g, // {{{ title }}}
}
It doesn't actually work the same way as Mustache: there aren't proper blocks in Underscore's templating, so there is no need for a closing block {{/}}. You just need to match your statements up like you would with standard Underscore templates.
Note that with the above syntax you should make sure that there is a space between your expression and the {{ }}. Example: {{ test }} will work, {{test}} will not.
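The behaviour described above, including the space-before-braces caveat, can be checked directly in Node. This is a small sketch using the interpolate pattern from the answer; the helper name `firstCapture` is just for illustration:

```javascript
// Sketch: exercise the interpolate pattern from the answer above.
const interpolate = /\{\{[^#\{]([\s\S]+?)[^\}]\}\}/g;

function firstCapture(re, s) {
  re.lastIndex = 0; // reset, since the regex has the global flag
  const m = re.exec(s);
  return m ? m[1] : null;
}

console.log(firstCapture(interpolate, "{{ title }}"));    // "title"
console.log(firstCapture(interpolate, "{{title}}"));      // "itl" (a character is eaten on each side)
console.log(firstCapture(interpolate, "{{# evaluate }}")); // null: the # form is skipped
```

This makes the two observations from the comments concrete: `{{ title }}` captures the variable name cleanly, while `{{title}}` (no spaces) loses a character from each end.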
I am not using the # symbol but I experienced a similar error once I tried to use the triple mustache {{{name}}} for unescaped values with the setting:
interpolate : /\{\{\{(.+?)\}\}\}/g,
escape : /\{\{(.+?)\}\}/g,
If that's the reason you came here, the following setting works
interpolate : /\{\{\{(\s*\w+?\s*)\}\}\}/g,
escape : /\{\{(\s*\w+?\s*)\}\}(?!\})/g,
I tried Max's format but that didn't work for me because I have a mix of double and triple mustache expressions and while the triple expression worked fine it was stripping a character from each end of the variable name in the double mustache i.e. {{title}} led to underscore looking for a variable named itl
I came here for the original question, followed the answers and this regex was what worked for me, thank you!
If anyone comes across this: {{title}} led to underscore looking for a variable named itl, which is actually what Niyaz pointed out below Max's question: Add spaces between expression and braces and it'll work. So instead of {{title}} you need to write {{ title }}.
How to upload a large file (> 1GB) directly to Amazon S3 from a given URL using NodeJS without actually saving the file in local?
I tried uploading a large file to S3 in many ways in NodeJS using aws-sdk, but eventually ended up uploading only 1 MB of a file that is actually 1.2 GB.
So, I tried using streaming-s3 in node js, the code is shown below.
I referred https://www.npmjs.com/package/streaming-s3 for streaming-s3 package.
var streamingS3 = require('streaming-s3');
var request = require('request');
var url = 'XXXXXXXXX';
var rStream = request.get(url);
var uploader = new streamingS3(
rStream,
{accessKeyId: 'XXXXXXXXXXXXX',
secretAccessKey: 'XXXXXXXXX'
},
{
Bucket: 'XXXXXXXXX',
Key: 'XXXXXXX',
ContentType: 'text/html'
},
{
concurrentParts: 2,
waitTime: 10000,
retries: 1,
maxPartSize: 10 * 1024 * 1024
}
);
uploader.begin();
When I run this code, the chunks from the file are not actually getting uploaded to S3 in streams.
Only 1mb is getting uploaded, but not the entire file.
Is there any other way to upload a file to S3 from a URL ?
Try using s3-upload-stream. It uses multipart uploads, which are recommended for objects larger than 100 MB.
I want to upload a file from URL, not from local. Seems like s3-upload-stream is for local file streaming.
s3-upload-stream pipes a data stream to an s3 object. So you can pipe your rStream directly to s3 by rStream.pipe(compress).pipe(upload). Check the first example on the npm docs of s3-upload-stream(given in answer)
It's not working, I tried it. Only 610 bytes got uploaded out of an 800 MB file.
AWS has "managed upload" to take care of large file uploads to S3. I tried a few MB of data and it gets uploaded, albeit taking a few seconds. s3-upload-stream does a great job of optimization, and with larger parts (chunks) the optimization is great. I hope that you aren't using HTTP to do the uploads, since 1 GB files take a large amount of time to upload and HTTP eventually times out, hence you couldn't see a response. The better way is to delegate the upload functionality to a worker and, once done, send a notification through SSE (since you use Node.js, implementing SSE should be easy, as it's a JavaScript paradigm anyway). I can give a sample implementation with a few-MB file here, but I urge you to follow a different design in the case of uploading 1 GB files.
Here is an example that I use:
This is with managedUpload - 10 MB takes around 20 s, a 50 MB file takes around 32 s, 80 MB takes (6 MB chunks - 16 parts)
@VideoFileValidator
async managedUploadFileToFolder(bucket: BucketModel, fileObj: any) {
return new Promise(async (resolve, reject) => {
// first make sure that the bucket exists
try {
const bucketList = await this.listBuckets();
if (bucketList.filter(item => item.Name.toUpperCase() === bucket.name.toUpperCase()).length >= 1) {
// means there is a bucket and obj can be created
// create obj params
const metadata: AWS.S3.Metadata = JSON.parse(JSON.stringify(bucket.fileData.metadata));
const keyFolder = bucket && bucket.folder ? APP_CONSTANTS.COMMON.APP_NAME + '/' + bucket.folder + '/'+bucket.fileData.name : 'tf-default-video';
const putObjParam: AWS.S3.PutObjectRequest = {
Body: Readable.from(this.utilService.bufferToStream(fileObj['buffer'])).pipe(zlib.createGzip()),
Bucket: bucket.name,
Key: keyFolder,
ContentType: 'multipart/form-data',
Metadata: metadata,
StorageClass: 'STANDARD'
}
this.s3MangedUpload = new AWS.S3.ManagedUpload({
leavePartsOnError: false,
partSize:1024*1024*6,
queueSize: 16,
params: putObjParam,
tags: this.utilService.createTags(false,true)
});
this.s3MangedUpload.send((err, uploadData) => {
console.log('aws err obtained:', err);
if(err) {
reject(AppConfigService.getCustomError('FID-S3-CUSTOM', `Error uploading the data:${err.message} - code: ${err.code} - status: ${err.statusCode}`));
}
resolve(plainToClass(CommonOutputResponse, {timestamp: AppUtilService.defaultISOTime(),
status: 'Success', message: `${fileObj['originalname']} Uploaded successfully with alias ${bucket.fileData.name}`,
data: {
uploadedToHttps: uploadData.Location,
uploadedToFolder: `s3://${bucket.name}//${bucket.folder}`
}
}));
});
this.s3MangedUpload.on('httpUploadProgress', (progress) => {
console.log('progress:', progress);
});
} else {
reject(AppConfigService.getCustomError('FID-S3-CUSTOM', `Object cannot be created since ${bucket.name} doesnt exist`));
}
} catch (err) {
console.log('err:', err);
reject(AppConfigService.getCustomError('FID-S3-CUSTOM', 'Object cannot be uploaded -' + err.message));
}
});
}
Code explained:
The decorator
@VideoFileValidator
validates if the input from http is a valid video file of types - mp4, mkv, avi (validates based on the file extension, so no way to read that large buffer)
The method itself uses AWS.S3.ManagedUpload: it defines the queue size as part of the params and uses the Body param to define the stream.
Note: the stream here is the one coming from the HTTP input; it arrives as a buffer and hence is converted to a stream.
.send() takes care of the actual streaming of data. I am unsure if piping happens in the backend, but I guess that might be the case. We can check upload progress using the 'httpUploadProgress' event.
The s3-upload-stream package, on the other hand, performs better with larger chunk sizes. I tried with the same 10 MB and 50 MB files with the below configuration and was able to save a few seconds, so hopefully it might interest you. Again, uploading a 1 GB file might need a better architecture design.
@VideoFileValidator
async streamFileUpload(bucket: BucketModel, fileObj: any){
const metadata: AWS.S3.Metadata = JSON.parse(JSON.stringify(bucket.fileData.metadata));
const keyFolder = bucket && bucket.folder ? APP_CONSTANTS.COMMON.APP_NAME + '/' + bucket.folder + '/'+bucket.fileData.name : 'tf-default-video';
const putObjParam: AWS.S3.PutObjectRequest = {
Bucket: bucket.name,
Key: keyFolder,
ContentType: 'multipart/form-data',
Metadata: metadata,
StorageClass: 'STANDARD'
}
return new Promise(async (resolve, reject) => {
const read = this.utilService.bufferToStream(fileObj['buffer']);
const compress = zlib.createGzip();
const upload = this.s3Upload.upload(putObjParam);
upload.maxPartSize(1024*1024*6); // 6 MB
upload.concurrentParts(15);
upload.on('part', function (details) {
console.log('still uploading ....',details);
});
upload.on('uploaded', function (details) {
console.log('upload completed',details);
upload.end();
resolve(plainToClass(CommonOutputResponse, {timestamp: AppUtilService.defaultISOTime(),
status: '200-OK', message: 'Uploaded succesfully',
data: {
location: details['Location'],
uploadedTo: details['Bucket'],
withName: details['Key']+''+bucket.fileData.name
}
}));
});
upload.on('error', (error) => {
console.log('error occured upploading the data:', error);
upload.end();
reject(AppConfigService.getCustomError('FID-S3-CUSTOM', 'Error occured:'+error['message']));
})
read.pipe(compress).pipe(upload);
})
}
"Part" - event to check on progress, "uploaded" - event that is emitted when file is uploaded completely. I more or less used the same configuration
Passing MongoTemplate to Custom Repository implementation
Project is configured to used multiple MongoTemplate s
Mongo Ref is passed as
@EnableMongoRepositories(basePackages={"com.mypackage.one"}, mongoTemplateRef="mongoTemplateOne")
for repositories in the package com.mypackage.one
and
@EnableMongoRepositories(basePackages={"com.mypackage.two"}, mongoTemplateRef="mongoTemplateTwo")
for repositories in the package com.mypackage.two
For standard repositories it works fine. But for the scenarios, where I need custom behavior, I define say myRepoCustomImpl with my custom behavior needs.
Problem: I need access to the same MongoTemplate that the analogous standard repository uses.
e.g.
If MyRepo is extending MyRepoCustom interface as
@Repository
interface MyRepo extends MongoRepository<MyEntity, String>, MyRepoCustom{}
MyRepoCustomImpl
@Service
public class MyRepoCustomImpl implements MyRepoCustom{
@Autowired
@Qualifier("mongoTemplateOne")
MongoTemplate mongoTmpl;
@Override
MyEntity myCustomNeedFunc(String arg){
// MyImplemenation goes here
}
}
If MyRepo is in package com.mypackage.one, mongoTemplateOne will be used by MyRepo, so there should be some way for MyRepoCustomImpl to know that it should also use mongoTemplateOne. Whenever I change the mongoTemplateRef for MyRepo, say to
@EnableMongoRepositories(basePackages={"com.mypackage.one"}, mongoTemplateRef="mongoTemplateThree")
I also need to change the @Qualifier in MyRepoCustomImpl!
There are lots of repos with custom behaviour, so it's becoming a tedious task.
Question: Instead isn't there any way that the MongoTemplate to be used should get automatically injected or resolved according to the Repo it is extending to?
The only additional thing you can do is to have a base class, initialize the MongoTemplate there with the respective qualifier, and make that variable protected.
MongoTemplate isn't exposed by the MongoRepository interface. They could potentially expose the name of the MongoTemplate @Bean and that could provide a solution to your question. However, given the fact that they don't, I will provide an example below that may suit your needs.
First off mongoTemplateRef refers to the name of the @Bean to use, it doesn't specify the name of the MongoTemplate.
You will need to provide each MongoTemplate @Bean and then refer to it within your @EnableMongoRepositories annotation.
Since you are using spring-boot you can take advantage of the MongoDataAutoConfiguration class. Please take a look at what it does here https://github.com/spring-projects/spring-boot/blob/master/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/data/mongo/MongoDataAutoConfiguration.java.
The simplest example is this.
package: com.xyz.repo (this implementation relies on the configuration provided by MongoDataAutoConfiguration)
@Configuration
@EnableMongoRepositories(basePackages={"com.xyz.repo"}) //mongoTemplateRef defaults to mongoTemplate
public class XyzRepoConfiguration {
}
public abstract class BaseRepo {
@Autowired
MongoTemplate mongoTemplate;
}
@Service
public class MyRepoCustomImpl extends BaseRepo implements MyRepoCustom {
@Override
MyEntity myCustomNeedFunc(String arg){
// access to this.mongoTemplate is present
}
}
package: com.abc.repo
@Configuration
@EnableMongoRepositories(basePackages={"com.abc.repo"}, mongoTemplateRef=AbcRepConfiguration.TEMPLATE_NAME)
public class AbcRepoConfiguration {
public static final String TEMPLATE_NAME = "mongoTemplateTwo";
@Bean(name="mongoPropertiesTwo")
@ConfigurationProperties(prefix="spring.data.mongodb2")
public MongoProperties mongoProperties() {
return new MongoProperties();
}
@Bean(name="mongoDbFactoryTwo")
public SimpleMongoDbFactory mongoDbFactory(MongoClient mongo, @Qualifier("mongoPropertiesTwo") MongoProperties mongoProperties) throws Exception {
String database = this.mongoProperties.getMongoClientDatabase();
return new SimpleMongoDbFactory(mongo, database);
}
@Bean(name=AbcRepoConfiguration.TEMPLATE_NAME)
public MongoTemplate mongoTemplate(@Qualifier("mongoDbFactoryTwo") MongoDbFactory mongoDbFactory, MongoConverter converter) throws UnknownHostException {
return new MongoTemplate(mongoDbFactory, converter);
}
}
public abstract class BaseRepo {
@Autowired
@Qualifier(AbcRepoConfiguration.TEMPLATE_NAME)
MongoTemplate mongoTemplate;
}
@Service
public class MyRepoCustomImpl extends BaseRepo implements MyRepoCustom {
@Override
MyEntity myCustomNeedFunc(String arg){
// access to this.mongoTemplate is present
}
}
com.xyz.repo will rely on spring.data.mongodb properties within application.properties
com.abc.repo will rely on spring.data.mongodb2 properties within application.properties
I haven't used the AbcRepoConfiguration.TEMPLATE_NAME approach before, but it was compiling within my IDE.
Please let me know if you need any clarification.
MongoTemplate is not injected into your repository class but deeper inside spring-data-mongodb, so you cannot get it from your repository. Look at the code; you will learn a lot.
So no, you can't inject a bean based on the repository it extends, unless you disable spring-boot auto-configuration and component discovery and configure it yourself, but that would take much longer than just changing the @Qualifier name. Your IDE can help you do that easily, and you might regret disabling auto-configuration.
Sorry for the disappointment.
You can use following example.
1)
package com.johnathanmarksmith.mongodb.example;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
/**
* Date: 6/28/13 / 10:40 AM
* Author: Johnathan Mark Smith
* Email: <EMAIL_ADDRESS>
* <p/>
* Comments:
* This main really does not have to be here but I just wanted to add something for the demo..
*
*/
public class MongoDBApp {
static final Logger logger = LoggerFactory.getLogger(MongoDBApp.class);
public static void main(String[] args) {
logger.info("Fongo Demo application");
ApplicationContext context = new AnnotationConfigApplicationContext(MongoConfiguration.class);
logger.info("Fongo Demo application");
}
}
2)
package com.johnathanmarksmith.mongodb.example;
import com.mongodb.Mongo;
import com.mongodb.ServerAddress;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.data.mongodb.config.AbstractMongoConfiguration;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;
import java.util.ArrayList;
/**
* Date: 5/24/13 / 8:05 AM
* Author: Johnathan Mark Smith
* Email: <EMAIL_ADDRESS>
* <p/>
* Comments:
* <p/>
* This is a example on how to setup a database with Spring's Java Configuration (JavaConfig) style.
* <p/>
* As you can see from the code below this is easy and a lot better then using the old style of XML files.
* <p/>
* T
*/
@Configuration
@EnableMongoRepositories
@ComponentScan(basePackageClasses = {MongoDBApp.class})
@PropertySource("classpath:application.properties")
public class MongoConfiguration extends AbstractMongoConfiguration {
@Override
protected String getDatabaseName() {
return "demo";
}
@Override
public Mongo mongo() throws Exception {
/**
*
* this is for a single db
*/
// return new Mongo();
/**
*
* This is for a relset of db's
*/
return new Mongo(new ArrayList<ServerAddress>() {{ add(new ServerAddress("<IP_ADDRESS>", 27017)); add(new ServerAddress("<IP_ADDRESS>", 27027)); add(new ServerAddress("<IP_ADDRESS>", 27037)); }});
}
@Override
protected String getMappingBasePackage() {
return "com.johnathanmarksmith.mongodb.example.domain";
}
}
3)
package com.johnathanmarksmith.mongodb.example.repository;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Repository;
import com.johnathanmarksmith.mongodb.example.domain.Person;
/**
* Date: 6/26/13 / 1:22 PM
* Author: Johnathan Mark Smith
* Email: <EMAIL_ADDRESS>
* <p/>
* Comments:
* <p/>
* This is my Person Repository
*/
@Repository
public class PersonRepository {
static final Logger logger = LoggerFactory.getLogger(PersonRepository.class);
@Autowired
MongoTemplate mongoTemplate;
public long countUnderAge() {
List<Person> results = null;
Query query = new Query();
Criteria criteria = new Criteria();
criteria = criteria.and("age").lte(21);
query.addCriteria(criteria);
//results = mongoTemplate.find(query, Person.class);
long count = this.mongoTemplate.count(query, Person.class);
logger.info("Total number of under age in database: {}", count);
return count;
}
/**
* This will count how many Person Objects I have
*/
public long countAllPersons() {
// findAll().size() approach is very inefficient, since it returns the whole documents
// List<Person> results = mongoTemplate.findAll(Person.class);
long total = this.mongoTemplate.count(null, Person.class);
logger.info("Total number in database: {}", total);
return total;
}
/**
* This will install a new Person object with my
* name and random age
*/
public void insertPersonWithNameJohnathan(double age) {
Person p = new Person("Johnathan", (int) age);
mongoTemplate.insert(p);
}
/**
* this will create a {@link Person} collection if the collection does not already exists
*/
public void createPersonCollection() {
if (!mongoTemplate.collectionExists(Person.class)) {
mongoTemplate.createCollection(Person.class);
}
}
/**
* this will drop the {@link Person} collection if the collection does already exists
*/
public void dropPersonCollection() {
if (mongoTemplate.collectionExists(Person.class)) {
mongoTemplate.dropCollection(Person.class);
}
}
}
4)
package com.johnathanmarksmith.mongodb.example.domain;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
/**
* Date: 6/26/13 / 1:21 PM
* Author: Johnathan Mark Smith
* Email: <EMAIL_ADDRESS>
* <p/>
* Comments:
* <p/>
* This is a Person object that I am going to be using for my demo
*/
@Document
public class Person {
@Id
private String personId;
private String name;
private int age;
public Person(String name, int age) {
this.name = name;
this.age = age;
}
public String getPersonId() {
return personId;
}
public void setPersonId(final String personId) {
this.personId = personId;
}
public String getName() {
return name;
}
public void setName(final String name) {
this.name = name;
}
public int getAge() {
return age;
}
public void setAge(final int age) {
this.age = age;
}
@Override
public String toString() {
return "Person [id=" + personId + ", name=" + name
+ ", age=" + age + "]";
}
}
https://github.com/JohnathanMarkSmith/spring-fongo-demo
You can directly inject MongoTemplate and MongoOperations in your service classes.
Try to auto-wire them and then you should be good.
Update:
Without autowiring with the proper qualifier (since you have two repositories), this is not possible. The custom class is altogether different from the repository. If you had only one repository, then autowiring the MongoTemplate would have been sufficient; otherwise you have to provide the qualifier in the impl, as two beans of MongoTemplate are created.
You say you need access to the MongoTemplate of the standard repository; do you have a problem in the custom or the standard one?
Can you please provide more information, like what you have tried? What I understand from your question is that you can access the custom method using the repo and you want to access it using mongoTemplate? Or are you not able to access mongoTemplate after the custom implementation? Please clear up my doubts so that we can assist.
@Mr.Arjun did you able to find the answer you are looking for?
Not yet, that's why put a bounty on that!
Basically, you are saying that MyRepoCustomImpl doesn't uses mongoTemplateOne what you have defined during EnableMongoRepository? and you will have to inject mongoTemplateOne using Qualifier inorder to use that template. right?
db2 set multiple varchar variables
How do you set multiple varchar variables to later use in multiple SELECT statements?
The documentation has an integer example
SET (NEW_VAR.SALARY, NEW_VAR.COMM) = (50000, 8000);
I can't figure out how to do the same for a varchar
SET (NEW_VAR.var, NEW_VAR.var1) = ('%hello%','%world%');
SELECT
*
FROM
fooSchema.barTable AS b
WHERE
b.first_name like new_var.var and
b.last_name like new_var.var1;
Reading the documentation page you have linked to: "This statement can only be used as an SQL statement in a dynamic compound statement, trigger, SQL function, SQL method, or SQL procedure. It is not an executable statement and cannot be dynamically prepared."
As Mustaccio said, you could use variables between a compound statement, a trigger or a routine. For your example (a trigger):
CREATE TRIGGER trigger1
AFTER insert ON mytable
REFERENCING NEW AS NEW_VAR
FOR EACH ROW
BEGIN
DECLARE val INT;
SET (NEW_VAR.var, NEW_VAR.var1) = ('%hello%','%world%');
SELECT col INTO val
FROM fooSchema.barTable AS b
WHERE b.first_name like new_var.var
AND b.last_name like new_var.var1;
END @
Or in a compound statement
BEGIN
DECLARE var varchar(32);
DECLARE var1 varchar(32);
DECLARE val INT;
SET (NEW_VAR.var, NEW_VAR.var1) = ('%hello%','%world%');
SELECT col INTO val
FROM fooSchema.barTable AS b
WHERE b.first_name like new_var.var
AND b.last_name like new_var.var1;
END @
Note that I used an extra variable for the output of the select. A query like this should return just one row. Otherwise, you should use a cursor.
Remember that you should execute these scripts with the following option:
db2 -td@ -f yourFilename.sql
SELECT statements in compound SQL must be INTO something.
Is it possible to use the same coding for an iOS app and Android app?
Will I have to edit my code specifically for each operating system, or will any code run universally? (remote database query app)
You have to pay a fee of 25$ to register an account.
Source: http://www.androidauthority.com/publishing-first-app-play-store-need-know-383572/
So there's no way to publish an android app for free?
No there isn't. The reason is to make it costly to spam the app store with bad quality apps or simply spam.
SQL Server : output only one Postage Cost Against an Order with Multiple Line Items
I am querying orders;
Order# Customer Item Postage
11111 Customer1 Item1 £3.99
11112 Customer2 Item1 £3.99
11112 Customer2 Item2 £3.99
11112 Customer2 Item3 £3.99
My problem is for each order I only want to output one postage cost. In the above example, 11112 is outputting three postage costs.
Desired result;
Order# Customer Item Postage
11111 Customer1 Item1 £3.99
11112 Customer2 Item1
11112 Customer2 Item2
11112 Customer2 Item3 £3.99
Can anyone direct me to some documentation or provide from sample SELECT queries for dealing with this problem?
I have simplified the question and output. For those that would like to look at the actual database structure, I have attached an image of the order table (Linnworks); ignore the Postal Services table, as it has no relevance to the question.
For each order I only want to output one postage cost ... which one?
can you show your query ?
The query is rather large and would just over complicate the question. I will update the question to show a desired result?!
Edit your question and provide sample data.
I think you can do what you want with row_number():
with q as (
<your query here>
)
select OrderNum, Customer, Item,
(case when seqnum = 1 then postage end) as Postage
from (select q.*,
row_number() over (partition by ordernum order by (select null)) as seqnum
from q
) q;
This looks interesting and might just be what I am looking for. I will update.
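The logic of the row_number() answer can be sketched outside SQL for clarity: keep the postage for only one row per order number and blank it for the rest. This is just an illustration of the idea, not part of any answer's code; note that which row keeps the value is arbitrary here, just as it is with the ORDER BY (SELECT NULL) in the query:

```javascript
// Illustration: show postage only once per order number.
function onePostagePerOrder(rows) {
  const seen = new Set();
  return rows.map((r) => {
    const first = !seen.has(r.order); // true only for the first row of each order
    seen.add(r.order);
    return { ...r, postage: first ? r.postage : null };
  });
}

const rows = [
  { order: 11111, item: "Item1", postage: "£3.99" },
  { order: 11112, item: "Item1", postage: "£3.99" },
  { order: 11112, item: "Item2", postage: "£3.99" },
  { order: 11112, item: "Item3", postage: "£3.99" },
];
console.log(onePostagePerOrder(rows).map((r) => r.postage)); // [ '£3.99', '£3.99', null, null ]
```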
Adding feature layer to ArcGIS Online from REST API?
I need to add a feature layer to ArcGIS Online account from code.
I have created a developer account where, if I want to create a feature layer, I have to upload a CSV file and it makes the feature layer.
The problem is: I need to create the same feature layer from my code.
Is there any REST API, or some other API, that ArcGIS provides for this?
I've checked requests which were sent in ArcGIS Online to add Feature Service with Chrome dev tools and found that it uses 4 requests:
Create Service - create feature service with REST API without layers, see http://resources.arcgis.com/en/help/arcgis-rest-api/#/Create_Service/02r30000027r000000/
update Service item - fix service name, see http://resources.arcgis.com/en/help/arcgis-rest-api/#/Update_Item/02r30000009s000000/
addToDefinition request for admin part, see http://resources.arcgis.com/en/help/arcgis-rest-api/index.html#/Add_To_Definition_Feature_Service/02r300000230000000/ - this is the request to add layer definition into empty service
Last request to updateDefinition of the service (http://resources.arcgis.com/en/help/arcgis-rest-api/#/Update_Definition_Feature_Service/02r30000021z000000/).
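For the first of those requests, the form parameters look roughly like the sketch below. The field names follow the Create Service REST documentation linked above, but treat the concrete values (service name, capabilities, record count) as placeholders rather than a definitive recipe:

```javascript
// Sketch of the form body for the ArcGIS Online createService request
// (POST to /sharing/rest/content/users/<username>/createService).
// Field names follow the REST API docs; values here are placeholders.
function createServiceParams(serviceName, token) {
  return {
    f: "json",
    token,
    outputType: "featureService",
    // createParameters is itself a JSON-encoded string in the form body.
    createParameters: JSON.stringify({
      name: serviceName,
      serviceDescription: "",
      hasStaticData: false,
      maxRecordCount: 1000,
      capabilities: "Query,Editing",
    }),
  };
}

const params = createServiceParams("myLayer", "<TOKEN>");
console.log(JSON.parse(params.createParameters).name); // "myLayer"
```

The object returned here would then be sent as an application/x-www-form-urlencoded POST body; the later addToDefinition request carries the layer schema in the same style.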
There is definitely a REST API for working with ArcGIS Online and Portal.
This is the specific operation for publishing a hosted feature service from .csv
http://resources.arcgis.com/en/help/arcgis-rest-api/index.html#/Publish_Item/02r300000080000000/
Jhon, thanks for the reply, but I'm looking to create a feature layer from code.
Is it okay to post parodies of songs and poems in Stack Overflow?
For fun, I have a couple of times produced something a bit more creative than technical with a Stack Overflow answer. Just recently, I did a parody of "Love Shack" by the B-52's (link in my signature).
I am just wondering if there is anything more I need to do than to just mention the work and artist I am parodying, or if it is better I should avoid full-on parodies.
I'd be far more worried about this policy: "I don't down vote (except by accident, in which case I will undo it.)"
The parody is in an answer? It is appropriate for your personal blog, not SO.
New policy: downvote but don't inhale it.
Stacks are overplayed. I'm holding out for some Reverend Horton Heap.
Haiku is maybe OK... I don't see a long wall of text, which most visitors would not be able to appreciate (e.g. due to the language barrier) or associate with anything (a 15-year-old song is not necessarily well known to a whole generation of twenty-somethings), as a good answer.
@AlexeiLevenkov: Perhaps a link would make it easier to relate to?
If you can actually answer the question clearly and accurately with the parody, I'd say it's entirely acceptable. I'll even upvote it if I think it's really clever. If you do try your hand at one, you should give credit to the original artist to be safe, and besides, there really isn't any reason not to. Even Weird Al gives credit to the artists he parodies, as well as the ones whose songs appear in his polkas.1
If the parody doesn't address the question in any way, or it's not essential to the answer, then it's probably worth keeping out of the answer if readers find it distracting.
And if it takes up more space in the answer section than the actual answer...
1 Al hasn't made a direct parody of Love Shack, but it does appear in one of my favorite polka medleys, Polka Your Eyes Out.
I think in Weird Al's case, it is strictly necessary that he gives credit. He might not be copying the lyrics, but he is copying everything else. What he doesn't need is permission from the original artist, although he generally seeks this anyway.
@Cody Gray: Oh, yeah, looks like I got those two mixed up.
Hmm... interesting point about taking up space. I compressed the format a bit to make it more equitable, the song had many stanzas. I'll pick shorter songs in the future.
Is the mapping $f: \mathbb{R} \rightarrow [0,1], \ x \mapsto \sum_{n=1}^\infty \frac{\lfloor x^n \rfloor \mod 2}{2^n}$ surjective?
Is the mapping
$$
f: \mathbb{R} \rightarrow [0,1], \ x \mapsto \sum_{n=1}^\infty \frac{\lfloor x^n \rfloor \mod 2}{2^n}
$$
surjective?
If not, what is its image?
If yes, what can be said about images of intervals, besides the obvious $f([-1,1]) = \{0,\frac{2}{3},1\}$?
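As a quick numerical sanity check (a Python sketch using truncated sums, not part of the question; note that Python's `%` operator already returns a nonnegative remainder, matching the convention used for $\lfloor x^n \rfloor \bmod 2$):

```python
import math

def f(x, terms=40):
    """Truncated sum of f(x) = sum_{n>=1} (floor(x^n) mod 2) / 2^n."""
    return sum((math.floor(x ** n) % 2) / 2 ** n for n in range(1, terms + 1))

print(f(0.5))   # 0.0, since floor(x^n) = 0 for x in (0, 1)
print(f(-0.5))  # ~0.6667: odd powers lie in (-1, 0), so floor = -1, which is odd
```

For $x \in (-1,0)$ every odd power contributes $2^{-n}$, which is exactly the $\frac{2}{3}$ in $f([-1,1]) = \{0,\frac{2}{3},1\}$.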
I guess yes. Given a sequence of required parities, you should be able to place $x$ "firmly" between two large integers of the correct parity, and then make quickly decreasing adjustments to get the correct parity of $\lfloor x^n\rfloor$ for every $n$. I recall seeing a long time ago the claim that there is an $x$ such that $\lfloor x^n\rfloor$ is prime for every $n$. I think it was proved this way.
How do you get say $1/2$ or $1/4 + 1/32$? $x$ will not be integer, floor() must be odd very few times and even all the rest (or even very few times and odd all the rest).
I agree with Johan - it should be surjective. Here is a proof. I claim that $f([4,6])=[0,1]$. Let $[x,y]\subset [4,6]$ be such that $x^k=m$, and $y^k=m+1$. Then $y^{k+1}-x^{k+1}>x(y^k-x^k)=x\geq 4$, which means $[x,y]$ contains inside itself two intervals $[x',y']$ and $[y',z']$ such that $(x')^{k+1}=m'$, $(y')^{k+1}=m'+1$, $(z')^{k+1}=m'+2$. Then one proceeds recursively.
Thank you! -- If I'm not overlooking some subtlety, this proves surjectivity. What remains is the part of the question asking for images of intervals. Here in particular short intervals lying a little above 1 or a little below -1 are of interest, as for long enough intervals of big enough numbers one can just use your method to show that the image is all of $[0,1]$.
At least one can say that the image of any interval going beyond [-1,1] is uncountable by a similar argument. One will still have freedom for every $\ell$-th term of the sum where $\ell$ is sufficiently big.
Hm... I, on the contrary, have started to believe that the image of $[1,1+\epsilon]$ is nowhere dense...
Could not set unknown property 'jar' for extension 'launch4j' of type edu.sc.seis.launch4j.Launch4jPluginExtension
Hi, I am getting the above error in app.gradle while building my Gradle project. I am using Gradle 8.8; I recently upgraded from 5.2, and after upgrading I got the issue below. My app.gradle file follows. Can someone please help with this?
buildscript {
repositories {
maven {
url "https://plugins.gradle.org/m2/"
}
}
dependencies {
classpath "org.springframework.boot:spring-boot-gradle-plugin:3.1.5"
classpath "edu.sc.seis.launch4j:launch4j:3.0.6"
classpath "com.gorylenko.gradle-git-properties:gradle-git-properties:2.4.2"
}
}
apply from: "${buildscript.sourceFile.parent}/java.gradle"
apply plugin: org.springframework.boot.gradle.plugin.SpringBootPlugin
apply plugin: edu.sc.seis.launch4j.Launch4jPlugin
apply plugin: com.gorylenko.GitPropertiesPlugin
// the spring boot plugin uses the bootJar task instead of jar
bootJar.manifest.attributes = jar.manifest.attributes
bootJar {
// archiveBaseName = "${project.name}-exec.jar"
archiveFileName.set("${project.name}-exec.jar")
}
launch4j {
mainClassName = "org.springframework.boot.loader.JarLauncher"
jar = "${buildDir}/libs/${project.name}-exec.jar"
outputDir = "libs"
headerType = "console"
// icon = "${projectDir}/src/main/resources/favicon.ico"
copyright = "test"
companyName = "test"
internalName = "test"
bundledJrePath = "./jre"
}
build.finalizedBy createExe
It looks like you updated the Launch4j plugin version, not only the Gradle version? There is no such property jar on the launch4j extension anymore; please refer to https://github.com/TheBoegl/gradle-launch4j?tab=readme-ov-file#how-to-configure
debug a debug dll linked to a release executable
In visual studio how can I debug/step into calls made by an exe to a library ?
The exe is available only in release mode and calls the library which is built in debug mode
I made a simple VS solution with just the exe and started it. Then I opened a source file from the binary and added breakpoints, but VS does not activate the breakpoints, saying "no symbols loaded for this file". Obviously I am missing something here (if I remember correctly, I used to be able to debug the calls before).
You can debug binaries that are built in release mode with the following caveats:
You'll need the pdbs which were built against the release library.
Breakpoints will not be possible in any code which has been inlined/optimized away.
Depending on the architecture certain variables values will be hidden/garbage, you have to take things with a pinch of salt when debugging release binaries.
To add PDB files for the release binary, go to:
Debug -> Options and Settings -> Symbols
Thanks, but in this case I don't even have PDB files for the exe. I have the PDB files for the DLL, and the DLL is debug... Also, I just want to add breakpoints in the source code of the DLL, to see how the exe is exercising the DLL functionality.
Hmm, that should work just fine, are you sure that the exe is loading the version of the library that you've just built? If it was built on your machine the pdbs should be found automatically.
Hi @mhk, did you solve this problem? What is the solution for achieving this?
converting a char* to BSTR* which contains special characters
I'm trying to convert a char* to a BSTR*, and my char* has special characters in it from being encrypted. I have tried several approaches found on the web, but back in the calling vb code, I always end up with something different. I'm pretty sure this has to do with the special characters, because if I don't have them in, it seems to be ok....
my code is something along these lines...
_export myFunction(BSTR *VBtextin, BSTR *VBpassword, BSTR *VBtextout, FPINT encrypt) {
BSTR password = SysAllocString (*VBpassword);
char* myChar;
myChar = (char*) password; //is this ok to cast? it seems to remain the same when i print out.
//then I encrypt the myChar in some function...and want to convert back to BSTR
//i've tried a few ways like below, and some other ways i've seen online...to no avail.
_bstr_t temp(myChar);
SysReAllocString(VBtextout, myChar);
any help would be greatly greatly appreciated!!!
Thanks!!!!
What exactly are you trying to do? Are you trying to interface VB with C code?
Yes...exactly....the VB code is calling the C code...then back to the VB code.
did you get the solution for this problem?
If you're manipulating the buffer, you probably don't want manipulate the char * directly. First make a copy:
_export myFunction(BSTR *VBtextin, BSTR *VBpassword, BSTR *VBtextout, FPINT encrypt) {
UINT length = SysStringLen(*VBpassword) + 1;
char* my_char = new char[length];
HRESULT hr = StringCchCopy(my_char, length, *VBpassword);
If that all succeeds, perform your transformation. Make sure to handle failure as well, as appropriate for you.
if (SUCCEEDED(hr)) {
// Perform transformations...
}
Then make a copy back:
*VBtextout = SysAllocString(my_char);
delete [] my_char;
}
Also, have a read of Eric's Complete Guide to BSTR Semantics.
trigger blockui from link within datagrid
I have the below datagrid which displays a list of student names as link.
<h:form id="gsform">
<p:dataGrid var="stuvar" rendered="#{gradeSheetController.listStudent != null}"
value="#{gradeSheetController.listStudent}" columns="5" layout="grid">
<p:commandLink actionListener="#{gradeSheetController.readStudentGradeSheet}"
update=":gsform:gscont, :gsform:buttoncont">
<h:outputText id="stname" style="font-size:16px" value="#{stuvar.studentFirstName}" />
<f:param name="selstudent" value="#{stuvar.studentSeq}" />
</p:commandLink>
</p:dataGrid>
I also have the below blockUI to freeze the screen until backend processing is done, currently used for a Save button.
<p:blockUI block=":entirePageBody" trigger="savebutton">
<h:panelGrid id="blockContent" columns="2">
<h:graphicImage library="images" name="loading.gif" style="margin-right:12px; vertical-align:middle;" />
<h:outputText value="Please wait, data is being processed..." style="white-space:nowrap;" />
</h:panelGrid>
</p:blockUI>
Now, I would also like to trigger the blockUI when the Student name link is clicked. Obviously, since the number of students will be dynamic and being within the datagrid, the code generated includes other aspects to the id like id="gsform:j_idt168:1:stname", id="gsform:j_idt168:2:stname" and so on.
Have no clue how to trigger the blockUI on click of the Student name link within the datagrid, please suggest.
Look at the 'client-side api' example: http://www.primefaces.org/showcase/ui/misc/blockUI.xhtml
Thank you. It worked. I was thinking blockUI can be invoked only using trigger!!
The documentation and showcase are your friend
Can you please create an answer yourself so others can see how it should be done?
Triggering/Hiding the blockUI from within dataGrid using onclick/oncomplete
<p:dataGrid var="stuvar" rendered="#{gsExamController.listStudent != null}"
value="#{gsExamController.listStudent}" columns="5" layout="grid">
<p:commandLink actionListener="#{gsExamController.readStudentGradeSheet}"
onclick="PF('bui').show()"
oncomplete="PF('bui').hide()"
update=":gsform:gscont, :gsform:remarkcont, :gsform:buttoncont">
<h:outputText style="font-size:16px" value="#{stuvar.studentFirstName}" />
<f:param name="selstudent" value="#{stuvar.studentSeq}" />
</p:commandLink>
</p:dataGrid>
<p:blockUI block=":entirePageBody" trigger="savebutton" widgetVar="bui">
<h:panelGrid id="blockContent" columns="2">
<h:graphicImage library="images" name="loading.gif" style="margin-right:12px; vertical-align:middle;" />
<h:outputText value="Please wait, data is being processed..." style="white-space:nowrap;" />
</h:panelGrid>
</p:blockUI>
A simple solution to your problem is to use Primefaces' selectors.
For example:
<p:commandLink value="Click me." styleClass="blockMeWhenClicked" />
<p:blockUI block=":ElementToBeBlocked" trigger="@(.blockMeWhenClicked)" />
Python scraping Error MissingSchema: Invalid URL
Does anybody know why a get this error ?
MissingSchema: Invalid URL '/type/gymnasien/': No schema supplied. Perhaps you meant http:///type/gymnasien/?
This is my code:
import requests
from bs4 import BeautifulSoup as soup
def get_emails(_links:list, _r = [0, 10]):
for i in range(*_r):
new_d = soup(requests.get(_links[i]).text, 'html.parser').find_all('a', {'class':'my_modal_open'})
if new_d:
yield new_d[-1]['title']
d = soup(requests.get('http://www.schulliste.eu/type/gymnasien/').text, 'html.parser')
results = [i['href'] for i in d.find_all('a')][52:-9]
print(list(get_emails(results)))
The error seems pretty self-explanatory. What links do you have contained in _links? Most likely one of them is /type/gymnasien/, but what you probably meant to do is append /type/gymnasien/ to the base URL of http://www.schulliste.eu. I suspect if you wrote requests.get("http://www.schulliste.eu" + _links[i]) it would resolve that exception.
@Mihai Chelaru: "_links is not defined" is the error I get if I try your suggestion. Did you mean this:
d = soup(requests.get('http://www.schulliste.eu' + _links[i]).text, 'html.parser')
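For what it's worth, instead of concatenating strings you can let urllib.parse.urljoin do the resolution: it prepends the base for relative hrefs and leaves absolute URLs untouched (a small stdlib sketch; example.com is just an illustrative absolute link):

```python
from urllib.parse import urljoin

base = 'http://www.schulliste.eu/type/gymnasien/'

print(urljoin(base, '/type/gymnasien/'))      # http://www.schulliste.eu/type/gymnasien/
print(urljoin(base, 'http://example.com/x'))  # http://example.com/x
```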
So from what I understand from your code, you're looking to scrape links to a bunch of schools, then use the get_emails() function to follow those links and scrape the school contact emails. If you look inside the results list that you pass to get_emails(), you will see that it contains some relative links internal to the site that requests does not know how to handle:
>>> print(results[1])
/type/gymnasien/
These links might not be the ones you want to even follow, so what you can do is try and remove them from the list of scraped links before you pass them to your get_emails() function:
results_filtered = [link for link in results if link.startswith('http://')]
Then you can use those results downstream and get_emails() should no longer complain about MissingSchema. The final code would look like this:
import requests
from bs4 import BeautifulSoup as soup
def get_emails(_links:list, _r = [0, 10]):
for i in range(*_r):
new_d = soup(requests.get(_links[i]).text, 'html.parser').find_all('a', {'class':'my_modal_open'})
if new_d:
yield new_d[-1]['title']
d = soup(requests.get('http://www.schulliste.eu/type/gymnasien/').text, 'html.parser')
results = [i['href'] for i in d.find_all('a')][52:-9]
results = [link for link in results if link.startswith('http://')]
print(list(get_emails(results)))
Which prints the following output:
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
Oh yes, now I understand, and it works as well. Thanks so much. Do I have to remove _r=[0,10] to get all email addresses?
No, it doesn't :( NameError: name '_r' is not defined. Any idea why?
Oh sorry, if you remove that argument, then you can't use it in for i in range(*_r):, so you will have to instead iterate over the _links list, either by going for i in range(len(_links)): or iterating over links itself in the for loop with for link in _links: and then replacing requests.get(_links[i]) with requests.get(link).
Many thanks, now it works :) But this code just scrapes the email addresses from this site and doesn't step to the next site, right? (You have to press a continue button to get to the next site; each site shows just 20 results, and there are 2400 results in all.) Is there a way to fix this with Python, or do I have to scrape email addresses site by site?
Generally it's frowned upon to ask too many questions in a single post. It's best to create a separate question when you have something new to think about. On this particular site you'll notice there are next and back buttons, and if you click to get to the next page there's a parameter passed to the URL that gives the next 20 results ?bundesland=&start=20. What you can do is have your code go through these pages from start to end, i.e. add an outer loop that loops through and adds the links from these pages to results. I'll leave it up to you to figure out how to do that.
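Picking up the hint in the last comment: with 2400 results shown 20 per page, the page URLs differ only in the start query parameter, so the full list of pages can be generated up front (a sketch; the exact URL shape is taken from the comment above and may need adjusting):

```python
base = 'http://www.schulliste.eu/type/gymnasien/'

# One URL per page of 20 results, covering all 2400 results.
page_urls = [f'{base}?bundesland=&start={offset}' for offset in range(0, 2400, 20)]

print(len(page_urls))  # 120
print(page_urls[1])    # http://www.schulliste.eu/type/gymnasien/?bundesland=&start=20
```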
max-age, no-Cache,must-revalidate on Cache-Control Header, Which takes predence here?
Cache-Control : max-age=86400, no-store, must-revalidate, no-cache
This is a response header set by the server for a JS file.
Does it mean the response is cached for 86400 seconds before revalidating?
Which of the above takes precedence, and what is the result?
Looks like no-cache is given precedence over all.
HTTP1.1 specification says "If the no-cache directive does not specify a field-name, then a cache MUST NOT use the response to satisfy a subsequent request without successful revalidation with the origin server."
Refer http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1
It also says "The max-age directive on a response implies that the response is cacheable (i.e., "public") unless some other, more restrictive cache directive is also present. "
Refer http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.3
All the above are for HTTP/1.1.
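To make the precedence question concrete: the header is just a comma-separated list of directives, all four arrive together, and under the quoted HTTP/1.1 rules the presence of no-cache (and no-store) means a cache must revalidate despite max-age=86400. A quick Python sketch of how a cache might tokenize the header (an illustration, not any particular implementation):

```python
header = "max-age=86400, no-store, must-revalidate, no-cache"

directives = {}
for part in header.split(","):
    name, _, value = part.strip().partition("=")
    directives[name] = value or None

print(directives)
# {'max-age': '86400', 'no-store': None, 'must-revalidate': None, 'no-cache': None}
```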
Is there any version control tool workable for NetSuite?
So far, we update the script files by navigating to "Documents > Files > SuiteScripts" in NetSuite.
There seems no version control mechanism in it.
We use Git to control version on localhost and wonder if Git or something else can be used to control the versions of the files in NetSuite.
Is it possible to use RESTlet or Suitelet to control versions?
What you are looking for is there; it just sounds like you are not using it yet. I definitely suggest reading up on, and using, the SDF (SuiteCloud Development Framework). It basically moves all of the development into the IDE, allowing deployment/testing/etc. from the IDE.
Check out the help article here: SuiteCloud Development Framework
OAuth 2.0 - Does the authorization server directly send the auth code to the redirect URI that the user specified?
Does the authorization server directly send the auth code to the redirect URI that the client specified, or is there an intermediary to whom the auth code is sent first? If the auth code is sent to the redirect URL, is that redirect URL an endpoint of the client's backend server?
Yes, the authorization code is sent from the authorization server to the web-backend-server via the browser redirect URL
Why via the browser:
Because it's the application the user used to consent/login
Why auth code not token:
Because URLs are visible in browsers and network appliances
The step after that is that the web-backend-server will exchange the auth code for a token from the auth server
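Concretely, the redirect back to the client looks something like this (a hypothetical example; the host, path, and parameter values are illustrative only):

```
HTTP/1.1 302 Found
Location: https://client.example.com/callback?code=SplxlOBeZQQYbYS6WxSbIA&state=xyz
```

The browser follows that Location header, so whatever endpoint the client registered at /callback (typically on its backend server) receives the code and can then exchange it at the token endpoint.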
Unable to Assign Primary Key Value to Another table column of same data type
I am unable to store the "ID" value of Student into "StudentID" of the StudentCourses table. It returns no error, and the data types are also matching.
ViewModel: StudentVM
public class StudentVM
{
public Student student { get; set; }
public SelectList Courses { set; get; }
public int SelectedCourseID { get; set; }
public StudentCourses studentCourse { get; set; }
}
The "st.Id" is giving me a value; however, its value is not getting transferred to the "StudentID" field of the StudentCourses table, without any error message.
[HttpPost]
public ActionResult Create(FormCollection Form,StudentVM Vm) {
// string strDDLValue = Vm.SelectedCourseID;
Student st = new Student() {Name=Vm.student.Name };
var stname=st.Name;
int stid = st.Id; //No transfer of value takes place.stid is null
StudentCourses stc = new StudentCourses() { CourseID = Vm.SelectedCourseID, StudentID = st.Id };
db.Students.Add(st);
db.StudentCourses.Add(stc);
db.SaveChanges();
return View();
}
The object st is indeed returning the input values. Please have a look.
All you have done is initialize a new instance of Student and set its Name property. Nowhere have you set its Id property, so int stid = st.Id is going to return null or 0, depending on whether the property is int? or int (and a view model should not contain a data model when editing!)
st.Id does return a value; please look at the pic I added.
Then your Student must have a default constructor setting the value, which means that stid will be 2004 (not null as you're claiming!), or you're reading that value in the debugger after the db.SaveChanges(); line.
My issue is that the value in st.Id is not being transferred to stid. The transfer of the value is not occurring; that is the main issue, whereas the transfer of st.Name to stname occurs without any issues. Moreover, db.SaveChanges() happens last; my st.Id value is generated after instantiation of the Student class, so I am assuming this has nothing to do with db.SaveChanges().
Of course it is only assigned after you call SaveChanges() - that is when the database assigns it! What are you expecting to happen?
You should not assign the IDs, but the entire entities, to each other.
StudentCourses stc =
new StudentCourses() { CourseID = Vm.SelectedCourseID, Student = st };
This way, upon SaveChanges() EF will know the ID and assign the appropriate relations within a single transaction.
Decode four-hex-digits in sublime text
Some lines of a file:
2e72 7372 6300 0000 8c04 0000 00e0 1a00
0010 0000 00d0 1a00 0000 0000 0000 0000
0000 0000 4000 0040 2e72 656c 6f63 0000
0c00 0000 0000 1b00 0010 0000 00e0 1a00
Is there a way to transform it into readable characters?
Yes, it is possible. You could try a plugin like this: https://www.sublimetext.com/forum/viewtopic.php?f=5&t=3728
I'm sure there are others out there, but this is what I found with a quick Google search. Just look for "hex to ascii plugin sublime" or something like that. Also, check out this related question.
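Outside of Sublime, a few lines of Python can also do the decoding. For the sample above, the printable runs turn out to be PE section names (.rsrc and .reloc); the rest is binary header data, shown here as dots:

```python
hexdump = """
2e72 7372 6300 0000 8c04 0000 00e0 1a00
0010 0000 00d0 1a00 0000 0000 0000 0000
0000 0000 4000 0040 2e72 656c 6f63 0000
0c00 0000 0000 1b00 0010 0000 00e0 1a00
"""

# Strip all whitespace, parse the hex pairs, then map printable ASCII bytes to characters.
data = bytes.fromhex("".join(hexdump.split()))
text = "".join(chr(b) if 32 <= b < 127 else "." for b in data)
print(text)  # .rsrc and .reloc show up among the dots
```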
Android Room database transactions
With the new Room Database in Android, I have a requirement where there are two sequential operations that needs to be made:
removeRows(ids);
insertRows(ids);
If I run this, I see (on examining the db) that there are some rows missing - I assume they are being deleted after inserting. viz. the first operation is running in parallel to the second.
If I use a transaction block, such as this, then it's all fine - the first operation seems to complete before doing the second:
roomDb.beginTransaction();
removeRows(ids);
roomDb.endTransaction();
insertRows(ids);
It's also fine if I give a sleep in-between instead:
removeRows(ids);
Thread.sleep(500);
insertRows(ids);
There doesn't seem to be much documentation for Room, and was wondering if I should use the transaction block like the above when I have sequential operations to be done, or is there any better way of doing it.
EDIT: After @CommonsWare pointed out, @Query are asynchronous, while @Insert and @Delete are synchronous. In view of this, how would I get a query which deletes rows to be async:
@Query("DELETE from table WHERE id IN(:ids)")
int removeRows(List<Long> ids);
According to the build output I get Deletion methods must either return void or return int (the number of deleted rows), if I try to wrap the return type in a Flowable.
What exactly are the implementations of removeRows() and insertRows()? If they are plain @Delete and @Insert DAO methods, then they should be serialized naturally, as those methods are executed synchronously. The only place where Room does asynchronous stuff is on @Query with a reactive return value (LiveData, Flowable, etc.).
@CommonsWare, yes, while insertRows() are simple @Insert, the removeRows() have @Query calls. I guess that explains it. So, the I guess the answer to my question is to subscribe to the reactive response of the Queries.
@CommonsWare, Thanks for your help. I have edited the question with a follow-up based on your comment. How can I write a @Query that does a DELETE so that it I can observe it until completion?
A @Query that returns an int is supposed to be synchronous. As I wrote, the only place where Room does asynchronous stuff is on @Query with a reactive return value (LiveData, Flowable, etc.). It could be that this is a bug in Room somewhere. Is there a particular reason you are using @Query rather than @Delete? @Delete already offers IN support for a list of IDs.
Well, I just simplified the code in the question. In actuality, removeRows is only one of two deletion operations that I do. The other one has a more involved query which, if you think it'll help, I can add to the question. Can you also tell me or point me to where @Delete offers IN support for a list of IDs?
@Insert, @Update, and @Delete each accept a single ID, a collection of IDs, or a varargs of IDs. https://developer.android.com/topic/libraries/architecture/room.html#daos-convenience
But in https://developer.android.com...Delete.html, it says "All of the parameters of the Delete method must either be classes annotated with Entity or collections/array of it.". Can you show me the syntax used for passing a collection of IDs?
My apologies -- it's early here... :-) Yes, these methods need entities, not IDs, so the methods know what table to delete from. I had considered filing a feature request to allow us to specify the entity class in the annotation and accept IDs as parameters, though I don't think I wound up filing that one. Again, sorry for my confusion.
That's alright. So, it does seem like there's no straightforward solution (unless I @Query all the entities and pass them to @Delete, but that'll be a hit on the performance. Maybe that feature request is a good idea :-)
Well, again, @Query returning int should be synchronous, as that is not a reactive return value. Try putting a breakpoint where you are calling removeRows(), then step through the generated code and see if the query is being executed synchronously or asynchronously. You could do the same with your insertRows(). If one or the other is asynchronous (which they shouldn't be), then that would help explain your symptoms.
If both are being executed synchronously, then something is seriously messed up somewhere to explain your prior results, and we'd need a reproducible test case to get to the bottom of it.
Sure. I'll check this and get back.
@CommonsWare, you're right - I had a @Query call embedded in my code, based on which I had to do some computation and delete some of the rows. This former call was async, and was causing issues. The @Delete and @Insert calls themselves ARE synchronous. Thanks a bunch for you help.
@Rajath: So is the '@Query' call running asynchronously by default, or are you running it explicitly inside a background thread? AFAIK, since '@Query' is not returning any observable, it will not run on a background thread by default.
@rajath I think @Delete (and also @Update) searches rows based on primary key. So if you set your ID as @PrimaryKey, then you create a dummy object with desired ID as key and then pass it to delete function.
Yes, it works https://developer.android.com/training/data-storage/room/accessing-data#convenience-delete
As pointed out on documentation for Transaction, you can do following:
@Dao
public abstract class ProductDao {
@Insert
public abstract void insert(Product product);
@Delete
public abstract void delete(Product product);
@Transaction
public void insertAndDeleteInTransaction(Product newProduct, Product oldProduct) {
// Anything inside this method runs in a single transaction.
insert(newProduct);
delete(oldProduct);
}
}
what if i am using interfaces?
@johnny_crq I was using interfaces and it was not hard to switch to abstract classes. alternatively you might try this ugly trick on interface @Transaction @Query("DELETE FROM products; INSERT INTO products VALUES(x,y,z)") over a method.
How would you test this in Espresso, if you are calling @Query right after Insert?
@guness why would that trick be ugly? That is plain SQL syntax, I think it's readable and maintainable, absolutely fine.
What about the situation when there is a relation between two objects and second object is in other DAO?
@LukaszTaraszka I think for this situation, there is a method on db itself.
It should be like db.beginTransaction(); aDao.do(); bDao.do(); db.endTransaction(), or some Kotlin-style blocks, but you have to do it outside of a DAO, and you have to be careful about async queries.
Insert, update and delete annotations already run inside a transaction; wrapping these in another transaction doesn't add any value to it.
But they are in different transactions; wrapping them makes them run in a single transaction, no?
As @CommonsWare pointed out, @Query is asynchronous (when it returns a reactive type), while @Insert, @Delete, and @Update are synchronous.
If you want to execute multiple queries in a single transaction, Room also provides a method for that, as mentioned below.
roomDB.runInTransaction(new Runnable() {
@Override
public void run() {
removeRows(ids);
insertRows(ids);
}
});
I hope this will solve your problem.
PCMIIW (please correct me if I'm wrong): @Query is not always asynchronous. @Query is asynchronous only when you are returning an observable, e.g. Flowable or LiveData. Since in the question @Query is used for removing elements, the return value is int and hence it will run synchronously.
For anyone wondering: runInTransaction: "[...]The transaction will be marked as successful unless an exception is thrown in the Runnable."
For Room transactions in Kotlin you can use:
Interface with implemented method, like:
@Dao
interface Dao {
@Insert
fun insert(item: Item)
@Delete
fun delete(item: Item)
@Transaction
fun replace(oldItem: Item, newItem: Item){
delete(oldItem)
insert(newItem)
}
}
Or use open function, like:
@Dao
abstract class Dao {
@Insert
abstract fun insert(item: Item)
@Delete
abstract fun delete(item: Item)
@Transaction
open fun replace(oldItem: Item, newItem: Item){
delete(oldItem)
insert(newItem)
}
}
You'll get the error "Method annotated with @Transaction must not be private, final, or abstract." without the open modifier.
I believe that when we are using DAO interfaces, we can still perform a transaction using default interface methods. We need to add the annotations @JvmDefault and @Transaction, and then we can perform any operations inside that which belong to a single transaction.
@Dao
interface TestDao {
@Insert
fun insert(dataObj: DataType)
@Update
fun update(dataObj: DataType): Completable
@Delete
fun delete(dataObj: DataType): Completable
@Query("DELETE FROM $TABLE_NAME")
fun deleteAllData()
@Query("SELECT * FROM $TABLE_NAME ORDER BY id DESC")
fun getAllData(): Single<List<DataType>>
@JvmDefault
@Transaction
fun singleTransaction(dataList: List<DataType>) {
deleteAllData()
dataList.forEach {
insert(it)
}
}
}
interface method with body!!
It gives me an error when I add @JvmDefault as you did in your example.
Can you please elaborate what error you are getting?
Here's the solution to this problem:
@Query("SELECT * FROM friend WHERE id = :id")
Friend getFriendByID(int id);
@Delete
void delete(Friend friend);
Friend friendToBeDeleted = friendDAO.getFriendByID(id);
friendDAO.delete(friendToBeDeleted);
You have to go through two steps!
Those are two independent transactions.
Verify length of each string of an array
I want to verify whether one of these strings has a length greater than 40 and, if so, echo "this array contains a string with a length greater than 40". In this case the index [11] contains a string whose length is greater than 40. How can I do that?
array(5) {
[0]=>
string(19) "PEDRO MOACIR LANDIM"
[1]=>
string(19) "ADIR JOAO GASTALDON"
[2]=>
string(18) "ABEL PEDRO MARQUES"
[10]=>
string(28) "ADRIANO CESAR GARCIA JOAQUIM"
[11]=>
string(44) "AUTO VIAÇÃO CATARINENSE LTDA - FLORIANÓPOLIS"
}
Use http://php.net/manual/en/function.strlen.php to get the length of a string.
I would use array_filter(). This will return you the list of array elements which are longer than 40.
"strlen() returns the number of bytes rather than the number of characters in a string. " in this case strlen() is the wrong function @JasonK because this is UTF8 data. mb_strlen() needs to be used.
@cyadvert But he only needs to know if it contains a string that has more than 40 characters
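To make the byte-versus-character point from the comments concrete: PHP's strlen counts bytes while mb_strlen counts characters, and for multibyte data they disagree exactly on entries like index [11]. The same distinction in Python terms (an illustration only, assuming the data is UTF-8 encoded):

```python
s = "AUTO VIAÇÃO CATARINENSE LTDA - FLORIANÓPOLIS"

print(len(s))                  # 44 characters (what mb_strlen counts)
print(len(s.encode("utf-8")))  # 47 bytes (what strlen counts): Ç, Ã, Ó take 2 bytes each
```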
foreach ($array as $key => $value) {
if(mb_strlen($value) > 40){
echo "this array contains a string whose length is greater than 40: ".$key;
}
}
You can do this:
<?php
$myStrings = array(
0 => "PEDRO MOACIR LANDIM",
1 => "ADIR JOAO GASTALDON",
2 => "ABEL PEDRO MARQUES",
10 => "ADRIANO CESAR GARCIA JOAQUIM",
11 => "AUTO VIAÇÃO CATARINENSE LTDA - FLORIANÓPOLIS"
);
foreach($myStrings as $key => $string){
//Get length of string
$len = strlen( $string );
if( $len > 40 ){
echo
'this array contains a string whose length is greater than 40. Array key:
' .$key. ' | array string: ' .$string;
break;
}
}
?>
Since only one of the strings needs to be detected as greater than 40, it's better to break the loop once such a string is encountered. This can reduce the number of iterations executed by your foreach loop if the long string isn't located at the last index of your array.
foreach ($array as $arr) {
if(mb_strlen($arr) > 40){
echo "this array contains a string whose length is greater than 40";
break;
}
}
In Mark 5:42 to whom did Christ give the order not to publish about the resurrected girl?
After raising the little girl from the dead, Christ goes on to give a charge that no one should know about this incident
Mark 5:42 NASB
42 Immediately the girl got up and began to walk, for she was twelve years old. And immediately they were completely astounded. 43 And He gave them strict orders that no one should know about this, and He said that something should be given her to eat.
But it's not clear to whom Christ gave this charge: the parents, his disciples, or the crowd.
To whom did Christ give orders that no one should know about it?
The same story is also found in Luke 8:56 and Matthew 9:25; however, Luke's Gospel may provide a clue, as it says in verse 56, after she arose:
"And her parents were astonished: but he charged them that they should tell no man what was done." (KJV)
This follows a similar pattern where Jesus would heal people and tell them not to spread the news that he had done the miracle. Some examples:
Man healed of leprosy - Matthew 8:2-4, Mark 1:40-44, Luke 5:12-14
Blindness cured - Matthew 9:28-30
Deafness and muteness cured - Mark 7:32-36
Blindness cured - Mark 8:22-26
Jesus told Peter, James and John not to tell what they had seen on the mountain:
Matthew 17:1-9, Mark 9:2-9, Luke 9:28-36.
So since the crowd was put out, and only Peter, James and John were permitted in with the girl's parents, I would say the 'them' in Mark 5:42 is the parents and the girl, and quite possibly Peter, James and John.
@Mac'sMusings - Thank you for your comment sir. Yes, I agree. It doesn't seem like the news would be kept secret. This has brought a question to mind, but as far as I'm aware, I think I need to ask an actual question, but not in the comments.
I agree. Peter, James and John from verse 37 and the father and mother from verse 40. No dispute. +1. He spoke to those who were 'astonished' KJV in the vicinity of the actual event, verse 42. What that means - is a matter of interpretation. But interpretation should not compromise the actual textual evidence.
In Mark 5:42 to whom did Christ give the order not to publish about the resurrected girl?
From the parallel account of the event in Luke 8:51-56, Jesus instructed the parents of the girl not to tell anyone what had happened. But why?
Luke 8:51-56 (NASB)
51 "When He came to the house, He did not allow anyone to enter with
Him, except Peter and John and James, and the girl’s father and
mother. 52 Now they were all weeping and lamenting for her; but He
said, “Stop weeping, for she has not died, but is asleep.” 53 And they
began laughing at Him, knowing that she had died."
54 "He, however, took her by the hand and called, saying, “Child,
arise!” 55 And her spirit returned, and she got up immediately; and He
gave orders for something to be given her to eat. 56 Her parents were
amazed; but He instructed them to tell no one what had happened."
Why did Jesus instruct them not to tell anyone what had happened?
His humble approach fulfilled the prophetic words of Isaiah.
Isaiah 42:1-2 (NASB)
God’s Promise concerning His Servant.
42 “Behold, My Servant, whom I uphold; My chosen one in whom My soul
delights. I have put My Spirit upon Him; He will bring forth justice
to the nations. 2 “He will not cry out or raise His voice, Nor make
His voice heard in the street.
Obviously Jesus did not want to magnify his own name, or create sensational reports about his miracles that would draw attention away from God and his preaching of the good news of the Kingdom of God. Further, apparently Jesus wanted people to become convinced that he was the Christ, the Messiah, for themselves, and not from rumors based on his miracles.
Luke 4:43 (NRSV)
43" But he said to them, “I must proclaim the good news of the kingdom
of God to the other cities also; for I was sent for this purpose.”
Box2D AndEngine: app hangs when creating joints during ContactListener?
I am using Box2D in AndEngine (for Android).
My purpose is to create a Force joint whenever 2 objects collide with each other.
When I try to create a mouse joint between two objects (bodies) inside the ContactListener, the application hangs for some time and then exits, without any error, just a notification of threads ending.
Creating the joint works fine when I call mEnvironment.CreateForceJoint(..) outside the ContactListener, for example while the app is running some physics.UpdateHandler().
Please help me to solve the problems, or find out the reason. Thanks for any help!
This is my code:
public class MyActivity extends SimpleBaseGameActivity {
private final String DEBUG_TAG = "MyActivity";
private GEnvironment mEnvironment;
private PhysicsWorld mPhysicsWorld;
private MyFixture FIXTURE_PLANET = GlobalSettings.FIXTURE_PLANET;
private MyFixture FIXTURE_VACUUM = GlobalSettings.FIXTURE_VACUUM;
// CODE TO CREATE RESOURCES and ENGINE OPTIONS....
@Override
protected Scene onCreateScene() {
Scene scene = new Scene();
scene.setBackground(new Background(0.8627f, 0.9020f, 1.0f));
//CODE: creating physic world
//.....
//creating game environment
mEnvironment = new GEnvironment(mPhysicsWorld, scene, mEngine);
//CODE: creating objects, register and attach them into scene
GMediaPlanet sunZone = mEnvironment.CreatePlanet(x1, y1, sunTextureRegion, FIXTURE_PLANET);
GMediaPlanet earthZone = mEnvironment.CreatePlanet(x2, y2, earthTextureRegion, FIXTURE_PLANET);
// enable contact listener, detect collision between bodies
mPhysicsWorld.setContactListener(new PlanetContactHandler());
return scene;
}
// ----------------------------------------------------
// Handling collision between letter cubes
// ----------------------------------------------------
/**
* Handling the collision of GMediaPlanets
*/
public class PlanetContactHandler implements ContactListener {
private final String DEBUG_TAG = "CONTACT";
// if there's one collision, do not handle others or re-handle it
private boolean mIsColliding = false;
@Override
public void beginContact(Contact contact) {
if (mIsColliding)
return;
//-----------------------------------------------
//suppose colliding objects to be sunZone and earthZone
//-----------------------------------------------
Object aTag = contact.getFixtureA().getBody().getUserData();
Object bTag = contact.getFixtureB().getBody().getUserData();
if (aTag != null && bTag != null) {
GMediaPlanet box = null;
GMediaPlanet cube = null;
if (aTag instanceof GMediaPlanet
&& bTag instanceof GMediaPlanet) {
box = (GMediaPlanet) aTag;
cube = (GMediaPlanet) bTag;
}
if (box != null && cube != null)
{
//!!!!!!!-----------------------------------------------------
//This code will HANG the app when called inside the contact listener:
MouseJoint mTestJoint = mEnvironment.CreateForceJoint(box, cube);
//!!!!!!!-----------------------------------------------------
Vector2 target = Vector2Pool.obtain(box.GetLocation());
mTestJoint.setTarget(target);
Vector2Pool.recycle(target);
}
}
mIsColliding = true;
}
@Override
public void endContact(Contact contact) {
Log.d(DEBUG_TAG, "end colliding!");
mIsColliding = false;
}
@Override
public void preSolve(Contact contact, Manifold oldManifold) {
}
@Override
public void postSolve(Contact contact, ContactImpulse impulse) {
}
}
}
public class GMediaPlanet
{
protected IAreaShape mSprite = null;
protected Body mBody = null;
public GMediaPlanet()
{ }
public Vector2 GetLocation()
{
return mBody.getPosition();
}
}//end
public class GEnvironment
{
private PhysicsWorld mWorld;
private Scene mScene;
private org.andengine.engine.Engine mEngine;
public GEnvironment(PhysicsWorld pWorld, Scene pScene, org.andengine.engine.Engine pEngine)
{
mWorld = pWorld;
mScene = pScene;
mEngine = pEngine;
}
/** the constructor is hidden, available within the Appearances package only */
public GMediaPlanet CreatePlanet(float pX, float pY, ITextureRegion pTextureRegion, MyFixture pFixture)
{
GMediaPlanet entity = new GMediaPlanet();
entity.mSprite = new Sprite(pX, pY, pTextureRegion, mEngine.getVertexBufferObjectManager());
entity.mBody = PhysicsFactory.createCircleBody(mWorld, entity.mSprite, BodyType.DynamicBody,
pFixture.GetDef(), GlobalSettings.PIXEL_2_METER);
mScene.attachChild(entity.mSprite);
entity.mSprite.setUserData(entity.mBody);
entity.mBody.setUserData(entity);
mWorld.registerPhysicsConnector(new PhysicsConnector(entity.mSprite, entity.mBody, true, true));
return entity;
}
// -----------------------------
// Creating JOINTS
// -----------------------------
/**
* Creating a force joint based on type of MouseJointDef
*
* @param anchorObj
* the static object in the mTestJoint (anchor base)
* @param movingObj
* object to move toward the target
*/
public MouseJoint CreateForceJoint(GMediaPlanet anchorObj, GMediaPlanet movingObj)
{
ChangeFixture(movingObj, GlobalSettings.FIXTURE_VACUUM);
MouseJointDef jointDef = new MouseJointDef();
jointDef.dampingRatio = GlobalSettings.MOUSE_JOINT_DAMP;
jointDef.frequencyHz = GlobalSettings.MOUSE_JOINT_FREQ;
jointDef.collideConnected = true;
Vector2 initPoint = Vector2Pool.obtain(movingObj.mBody.getPosition());
jointDef.bodyA = anchorObj.mBody;
jointDef.bodyB = movingObj.mBody;
jointDef.maxForce = (GlobalSettings.MOUSE_JOINT_ACCELERATOR * movingObj.mBody.getMass());
// very important!!!, the initial target must be the position of the satellite
jointDef.target.set(initPoint);
MouseJoint joint = (MouseJoint) mWorld.createJoint(jointDef);
Vector2Pool.recycle(initPoint);
// very important!!!, the target of the joint then changed to target anchor object
Vector2 targetPoint = Vector2Pool.obtain(anchorObj.mBody.getWorldCenter());
joint.setTarget(targetPoint);
Vector2Pool.recycle(targetPoint);
return joint;
}
public void ChangeFixture(GMediaPlanet entity, MyFixture pFixture)
{
Filter fil = new Filter();
fil.categoryBits = pFixture.categoryBit;
fil.maskBits = pFixture.maskBits;
if(entity.mBody != null)
{
entity.mBody.getFixtureList().get(0).setFilterData(fil);
}
}
}
You cannot modify the world during Box2D's Step() call because the world is locked; you should get an exception. You have to remember which objects are colliding and do the work after beginContact, for example in an update function.
Your explanation was too general; I couldn't see where the 'modification' call is. I still don't catch your idea. Where did I modify the world during Box2D's Step() call? What does 'the world is locked' mean?
Ah, now I get your idea: in the ContactListener, I should only store pointers to the colliding objects, then in another update we process any changes to the world raised by those objects. Things are made clearer here: http://www.neilson.co.za/?p=168
And can see more detail of ContactListener here: http://blog.sethladd.com/2011/09/box2d-collision-damage-for-javascript.html
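The deferred-mutation pattern described in the answer (record colliding pairs in beginContact, then create the joints from an update handler once the world is unlocked) can be sketched language-agnostically. A hypothetical Python illustration of the control flow, not AndEngine/Box2D API code:

```python
class DeferredJointCreator:
    """Records collision pairs during the locked physics step and
    creates joints only afterwards, from the update handler."""

    def __init__(self):
        self.pending = []   # pairs recorded inside begin_contact
        self.joints = []    # joints created outside the step

    def begin_contact(self, body_a, body_b):
        # Called while the world is locked: only record, never mutate.
        self.pending.append((body_a, body_b))

    def on_update(self):
        # Called after the world step: now it is safe to create joints.
        for a, b in self.pending:
            self.joints.append(("mouse_joint", a, b))
        self.pending.clear()
```

The same shape carries over to Java: keep a list of pending pairs in the ContactListener, and drain it from a registered update handler.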
How to fetch only certain rows from an MySql database table
I have a php array of "primary key" values, which correspond to just a few rows of a very long table of data, saved on a MySql database.
How can I fetch only those rows with just one query? Is it possible, or will I need to make one query per primary key? e.g.: SELECT * FROM table1 WHERE key = 8573
Thanks,
Patrick
Don't forget to add an index for your table. Running a query on a "long table of data" is expensive. Consider adding this: alter table table1 add index (key)
Select * from table WHERE primary_key IN (8573,8574,8578)
for your php array you could use implode
$key_array = implode(",", $array);
Select * from table WHERE primary_key IN ($key_array)
You can use MySQL's IN() operator.
... WHERE x in (1,4,9,16)
IN is a bit of a SQL silver bullet that too many coders don't use either because they don't know about it or forget it exists.
On most databases, "key IN (set)" works faster than "key=a or key=b or...".
Specifically for PHP, you may use implode to generate your SQL:
$SQL = "select * from table where key in (".implode(',', $KeyArray)).")"
Assuming integer key. With a string key, you need to quote them:
$SQL = "select * from table where key in ('".implode("','", $KeyArray))."')"
Select * from table1 WHERE key IN ([values separated by commas])
Two options:
select * from table1 where key in (8573, 5244, 39211);
select * from table1 where key = 8573 or key = 5244 or key = 39211;
Use the OR statement.
SELECT * FROM table1 WHERE key=8573 OR key=9999;
Of course this can get really long, so you'll probably want to use a loop and concatenate the keys to the query string.
$query = "SELECT * FROM tableName WHERE primaryKey IN ('" . implode("','", $keys) . "')";
Use php's implode function:
$keys = array(1,2,3,4);
$sql = "SELECT * FROM table WHERE key IN ( " . implode($keys, ",") . ")";
Just use this form of select
SELECT * FROM table1 WHERE key IN (1234, 1235, 6789, 9876)
Seeking assistance with the circuit design for Force balance geophone
I am attempting to build this patent's circuit myself, configuring it based on the method provided (Fig. 1); my version is shown in Figs. 2 and 3 and the video. In my experiment, once I connect the output of amplifier 240 to the circuit, I can indeed sense the feedback force with the detector, but the oscilloscope shows no signal, even though there should be a force-balance signal. When I remove the output of amplifier 240, I can see waveforms, but obviously there is no feedback force. Is this a mistake in my design, or is the transconductance (V-to-I) amplifier 240 in the figure an IC rather than an op-amp?
Fig. 1
Fig. 2
Fig. 3
No supply capacitors?
Yes, I didn't use supply capacitors. The situation is the same when I power it with batteries, so it shouldn't be related to power supply noise
What are the signals (ouputs of the opamps) on the scope?
The oscilloscope is connected to the output terminals of nodes 570 and 565 in Figure 2. https://youtu.be/ApWgVYdkejg
In the video, when I shake the seismic detector, the oscilloscope reacts when the 240 amplifier is disconnected and does not react when it's connected.
Unity launcher: how can I make it show one icon per window?
I have a lot of chrome windows open on my machine (and multiple chrome profiles, even), but they all get merged down into one icon in the unity launcher.
Is there a way to make them each show up as one icon? Or at least separate by chrome profile?
IMHO, the "simpliest" way is to use an alternative desktop environment rather than Unity.
Look at Xfce or KDE.
Stopwatch/countdown (00:00:00) format with Javascript
I have this code:
function startStopwatch() {
vm.lastTickTime = new Date();
$interval.cancel(vm.timerPromise);
vm.timerPromise = $interval(function() {
var tickTime = new Date();
var dateDiff = vm.lastTickTime.getTime() - tickTime.getTime();
var secondsDiff = Math.abs(dateDiff / 1000);
vm.secondsElapsed += Math.round(secondsDiff);
vm.formattedSecondsElapsed = moment.duration(vm.secondsElapsed, "seconds").format("HH:mm:ss");
vm.lastTickTime = tickTime;
}, 1000);
}
It ticks seconds (int) since the 'play' button was hit on a stopwatch.
But the format is 01, 02.... up to 60 and then 01:01, 01:02, etc.
I'd like to format it as 00:00:00, incrementing the int. I've found answers for other programming languages but not JavaScript. Any help appreciated!
woah downvoter: this is a well thought out question with supporting code...
Why don't you build your own conversion function - it shouldn't be too hard? Not sure if something out-of-the-box exists for this purpose. Whenever I needed something like that, I was building my own in a few minutes... But it's possible there's a way to do it with moment, let's wait for an answer.
yeah I'm hoping somebody knows a handy answer off the top of their head. I'm really confused about the downvote. I'll roll my own function and post it if I need to.
If you get no answers in, say, 10 minutes, I'll help you write your own function. Don't worry about the downvoters - there are a couple of them that randomly go and downvote everything... :/
I didn't downvote, but I would guess that maybe the person who did did so because your question doesn't show any attempt to fix the problem: it basically says "I wish this code did something else". You said in a comment you'd roll your own function if you need to - why not do that first? Best way to learn. Read the Moment documentation. If you get stuck, then ask about it in this forum.
There are tons of answers, did you see them @RJB? This one looks good: http://stackoverflow.com/questions/6312993/javascript-seconds-to-time-string-with-format-hhmmss
I was searching for a while but I didn't see that post. Thanks, solid answer.
Haha, it was the first I found. You're welcome.
Borrowing from the answer provided by RJB, convert the algorithm to a filter:
Filter
app.filter('time', function() {
function isInt(n){
return Number(n) === n && n % 1 === 0;
}
return function(val) {
if (isInt(val))
return new Date(null, null, null, null, null, val).toTimeString().replace(/.*(\d{2}:\d{2}:\d{2}).*/, "$1");
return val;
}
});
Usage
app.controller('ctrl', function($scope) {
$scope.seconds = 25251;
});
HTML
<div ng-controller="ctrl">
{{ seconds | time }}
</div>
Demo Plunker
Thanks for taking the question seriously. This is great; I had originally hoped to use filters.
I'm going to undelete this because I think this is the best answer and I think it has academic value.
vm.durationElapsed = new Date(null, null, null, null, null, secondsElapsed).toTimeString().replace(/.*(\d{2}:\d{2}:\d{2}).*/, "$1");
Answer found on: https://stackoverflow.com/a/12612778/1507899
I think you should close it, and upvote the original answer there.
It's a modification based on one of the comments of the answer. His answer modifies a Date; this formats seconds elapsed -- more useful for a stopwatch.
Ah, I see. Fair enough!
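For reference, the seconds-to-HH:MM:SS conversion discussed in this thread is just integer division and zero-padding; a sketch in Python, as an analogous illustration rather than the JavaScript above:

```python
def format_hms(total_seconds):
    """Format an elapsed-seconds count as zero-padded HH:MM:SS."""
    hours, rem = divmod(int(total_seconds), 3600)   # whole hours, leftover seconds
    minutes, seconds = divmod(rem, 60)              # whole minutes, leftover seconds
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"
```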
Angular redundant databinding?
I'm new at my current company and refactoring code from my predecessor.
<th ng-repeat="(key,headline) in model.headlines" ng-class="headline.classes" data-ng-bind="headline.name">
{{headline}}
</th>
I discovered that removing {{headline}} changes nothing (likewise, adding anything, e.g. plain text, to the element's content does nothing).
Removing the data-ng-bind="headline.name" and replacing {{headline}} with a {{headline.name}} enables me to add further content just like in any other html object.
As I'm new to angular and I deem my predecessor quite competent,
does the data-ng-bind="headline.name" have any benefits compared to the "inline-version" {{headline.name}}?
I want to add additional style to the element, and putting another div into it would be great, but I can't do that with data-ng-bind because it eats the content and just replaces it.
Could this change mess anything up ?
I'm quite confused why the more experienced predecessor would have written the code in a way that made it less flexible in the first place, so I want to double-check that I'm not overlooking anything.
<th ng-repeat="(key,headline) in model.headlines" ng-class="headline.classes"">
<div ng-class="additional styling">
{{headline.name}}
</div>
</th>
{{...}} has a bit of performance overhead on initialisation, but in practice it's negligible. In practice the difference is that ng-bind will always replace the contents of the element, whereas with {{...}} you're only setting a part of it, so you can still append or prepend to the element.
In the end it's more preference than anything, I tend to use ng-bind if I know that's all I want as the content of the element.
The decreased readability of ng-bind in comparison to expression in element content IMO more than offsets the minuscule performance gain.
Using ng-bind instead of the inline {{...}} has one major benefit: it prevents the page from looking like a mess of {{}} everywhere before Angular kicks in and replaces everything (this can be very visible on slow devices: tablets, phones etc).
You can however get around that by adding the ng-cloak class to the elements you don't want shown until Angular has finished processing the template; it will then remove ng-cloak and everything becomes visible.
(It happened to us when we switched to a page with some heavy charts and grids; it showed some really ugly {{}} all over the place on tablets.)
Other than that, the other answer about partial rendering and performance is also correct.
I think you are trying to bind the data to table header can you try to use this:
<th ng-repeat="(key,headline) in model.headlines" ng-class="headline.classes"">
<div ng-class="additional styling" data-ng-bind="headline.name">
</div>
</th>
yeah, but is the data-ng-bind="headline.name" better practice than the {{headline.name}} ? I've read somewhere that {{headline.name}} is the thing that you commonly use to bind data and seeing a bind written out is rather rare, but thanks for the suggestion, I haven't thought about moving the data-bind tag to the inner object
I found that you can use the redundant way to provide debugging information. In the first example in the question, if the object has no headline.name property but does have a headline object, then the JSON object information is printed instead. This can help to track down whether the JSON object already exists and which properties have been set, which sometimes helps to locate where something has gone wrong.
When does AEM 6 workflow purge run?
CQ 5.6 had a 'cron' expression to control the timing for the workflow purge [1], but this is gone from AEM 6.2 [2].
What's the default, and can it be changed?
The workflow purge task is now triggered via maintenance window section - http://localhost:4502/libs/granite/operations/content/maintenance.html. You can schedule maintenance windows in daily, weekly and monthly intervals and add maintenance tasks to it. The workflow purge task is part of the weekly maintenance window by default.
Thanks. It's surprising that I couldn't find this info anywhere in the documentation or elsewhere online.
pyodbc - ImportError: DLL load failed:
I am running 64-bit Windows 7 and the Python 2.6 installation (64-bit version).
In addition I am using Aptana Studio 3 to run python.
I just downloaded and installed the pyodbc-3.0.2.win32-py2.6 package.
When I run python and try
import pyodbc
I receive the following error: 'ImportError: DLL load failed: The specified procedure could not be found.'
Any idea how I can make it work?
Anything useful with python -v or python -vv?
I fixed the problem by installing pyodbc-2.1.11.win32-py2.6 from here
Did you ever get pyodbc-3.0.2.win32-py2.6 to work? I'm facing the same issue now with 3.0.7. pyodbc-2.1.11.win32-py2.6 is working for me but I'd like to use the latest version.
Sorry, but I did not use any version other than pyodbc-2.1.11.win32-py2.6
Try the installer for the 64bit version of pyodbc:
http://code.google.com/p/pyodbc/downloads/detail?name=pyodbc-3.0.2.win-amd64-py2.6.exe&can=2&q=
gosimple/oauth2 and Google OAuth2
I'm using the gosimple/oauth2 package to handle OAuth2 logins for users. I have zero problems with the GitHub example code, which works perfectly and returns what I want.
I do, however, have issues getting it to work with Google. I've defined my scope correctly (as far as I can see), and I'm getting a token response, but once I enter it in, the application panics at ~line 68.
package main
import (
"bitbucket.org/gosimple/oauth2"
"flag"
"fmt"
"io"
"log"
"os"
)
var (
// Register new app at https://github.com/settings/applications and provide
// clientId (-id), clientSecret (-secret) and redirectURL (-redirect)
// as input arguments.
clientId = flag.String("id", "", "Client ID")
clientSecret = flag.String("secret", "", "Client Secret")
redirectURL = flag.String("redirect", "http://httpbin.org/get", "Redirect URL")
scopeURL = flag.String("scope", "https://www.googleapis.com/auth/userinfo.profile", "Scope URL")
authURL = "https://accounts.google.com/o/oauth2/auth"
tokenURL = "https://www.googleapis.com/oauth2/v1/userinfo"
apiBaseURL = "https://www.googleapis.com/"
)
const startInfo = `
Register new app at https://github.com/settings/applications and provide
-id, -secret and -redirect as input arguments.
`
func main() {
flag.Parse()
if *clientId == "" || *clientSecret == "" {
fmt.Println(startInfo)
flag.Usage()
os.Exit(2)
}
// Initialize service.
service := oauth2.Service(
*clientId, *clientSecret, authURL, tokenURL)
service.RedirectURL = *redirectURL
service.Scope = *scopeURL
// Get authorization url.
aUrl := service.GetAuthorizeURL("")
fmt.Println("\n", aUrl)
// Open authorization url in default system browser.
//webbrowser.Open(url)
fmt.Printf("\nVisit URL and provide code: ")
code := ""
// Read access code from cmd.
fmt.Scanf("%s", &code)
// Get access token.
token, _ := service.GetAccessToken(code)
fmt.Println()
// Prepare resource request.
google := oauth2.Request(apiBaseURL, token.AccessToken)
google.AccessTokenInHeader = false
google.AccessTokenInHeaderScheme = "token"
//google.AccessTokenInURL = true
// Make the request.
apiEndPoint := "user"
googleUserData, err := google.Get(apiEndPoint)
if err != nil {
log.Fatal("Get:", err)
}
defer googleUserData.Body.Close()
fmt.Println("User info response:")
// Write the response to standard output.
io.Copy(os.Stdout, googleUserData.Body)
fmt.Println()
}
And the panic:
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x2341]
This is the offending line:
google := oauth2.Request(apiBaseURL, token.AccessToken)
I'd appreciate help to get this working. Note that the code is just modified example code from the gosimple/oauth2 repo. I'm trying to debug this stage before wrapping it with the controller methods in my application.
Full stack trace as below:
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x2341]
goroutine 1 [running]:
main.main()
/Users/matt/Desktop/tests/google_demo.go:68 +0x341
goroutine 2 [syscall]:
goroutine 5 [runnable]:
net/http.(*persistConn).readLoop(0xc2000bd100)
/usr/local/Cellar/go/1.1/src/pkg/net/http/transport.go:761 +0x64b
created by net/http.(*Transport).dialConn
/usr/local/Cellar/go/1.1/src/pkg/net/http/transport.go:511 +0x574
goroutine 6 [select]:
net/http.(*persistConn).writeLoop(0xc2000bd100)
/usr/local/Cellar/go/1.1/src/pkg/net/http/transport.go:774 +0x26f
created by net/http.(*Transport).dialConn
/usr/local/Cellar/go/1.1/src/pkg/net/http/transport.go:512 +0x58b
... and forcing a panic on a token error (the byte slice decodes to "Not Found"):
panic: No access token found, response: [78 111 116 32 70 111 117 110 100]
Some object in the code must be nil. Could you tell us the exact line where the panic happens and post a copy of that line so we can find it in the code?
@DavidGrayson Line 68 (it's in the post, just buried; forgot SO doesn't have line numbers)
You wrote a tilde before it, so I thought it might be an approximate line number. So based on your edit I would guess that token is nil. I wonder why.
Actually you should give the whole stack trace so we can see whether the panic happened inside the oauth library or your code.
What is the second return value from service.GetAccessToken(code)? You are ignoring it by assigning it to _. If it is an error, you should definitely assign it to err and check if it is nil. Use this code: if err != nil { log.Fatal(err); } That will probably tell you why the code is failing.
@DavidGrayson Have added the full trace, which seems to indicate the problem is created when the program calls net/http. I've also caught the error (noting that this is demo code; not mine!) and it's definitely a token issue. My feeling is that Google's tokens, which start with [1-9] and a forward-slash, are breaking the parser in the library.
OK, then we have been looking in the wrong place. You should figure out why GetAccessToken is returning that error. You might need to add a bunch of fmt.Println statements to the library, or somehow use a debugger.
@DavidGrayson Thanks, I appreciate the help. I've just done a run with the other OAuth2 library (https://code.google.com/p/goauth2/) and it worked as expected, so it's definitely a library bug.
@elithrar - It looks like you've found the answer ("it's defintely a library bug"). Could you add that as an answer below and accept it, so it's easier for others to see when they find this page? (Your comment was hidden when I first viewed the page)
This was due to a bug in the library's token handling, which I logged with the author and which was fixed within the day: https://bitbucket.org/gosimple/oauth2/commits/e8e925ffb2424645cceaa9534e0ed9a28e564a20
Spring Boot deploy to Heroku: unable to access jarfile target/springApp-0.0.1-SNAPSHOT.jar
I have a Spring Boot app deploying to Heroku.
Procfile
web: java $JAVA_OPTS -jar target/dependency/webapp-runner.jar --port $PORT target/*.war
pom
<artifactId>springApp</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>war</packaging>
When I deploy I can access my app's pages, but I get an error when I run heroku logs
Error: unable to access jarfile target/springApp-0.0.1-SNAPSHOT.jar
From what I can tell, when I deploy the application Heroku uses java -Dserver.port=26325 $JAVA_OPTS -jar target/bigfans-0.0.1-SNAPSHOT.jar to run the process. It does not use the command specified in the Procfile.
I have tried to find what the cause could be but I have not found anything online.
Did you commit your Procfile? Are you on the correct branch? If you created/edited your Procfile on a branch but deployed with git push heroku master, the changes in your branch will not be deployed.
I committed the Procfile onto master and pushed the code to github. I am using heroku's github deployment method.
The dynos that I have configured are visible on my Heroku dashboard, but my logs indicate that they are not being used when deploying.
Sorting rows of two matrices using same ordering
Possible Duplicate:
Sort a matrix with another matrix
Given two matrices A and B of the same size, I want to sort A across the second dimension (rows) and apply the same ordering to the matrix B. Is it possible to vectorize this current code?
r = 10; c = 4;
A = rand(r,c);
B = reshape(1:r*c,c,r)'; % can be any random matrix
[A,order] = sort(A,2);
for i=1:r
B(i,:) = B(i,order(i,:));
end
This is a duplicate. Please have a look at the solutions and discussion in this question: Sort a matrix with another matrix
@Jonas: thanks, I guess vectorization isn't worth it in this case :)
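For comparison, the same row-wise reordering is fully vectorized in Python/NumPy via `argsort` plus `take_along_axis`; an analogous illustration, not MATLAB:

```python
import numpy as np

A = np.array([[3, 1, 2],
              [2, 3, 1]])
B = np.array([[30, 10, 20],
              [20, 30, 10]])

order = np.argsort(A, axis=1)                        # per-row sort order of A
A_sorted = np.take_along_axis(A, order, axis=1)      # rows of A, sorted
B_reordered = np.take_along_axis(B, order, axis=1)   # same ordering applied to B
```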
Stuck at showing $U = V$ in SVD of Hermitian, positive definite matrices
Show $U = V$ in the SVD of a Hermitian, positive definite matrix $A_{n \times n}$.
Try
Since the eigenvalues and singular values coincide for Hermitian positive definite matrices,
$$
AV = U \Sigma = U \Lambda
$$
where $\Sigma$ is the singular value diagonal matrix with decreasing diagonal, and $\Lambda$ is the eigenvalue diagonal matrix, i.e. $\Sigma = \Lambda$.
This implies that
$$
Av_i = \lambda_i u_i \ \ (\ast)
$$
where $i=1,\cdots, n$, which does not necessarily imply that the $v_i$ are eigenvectors, nor that $v_i = u_i$ for all $i$.
Let
$$
v_i = c_{i1}\xi_1 + \cdots + c_{in}\xi_n \\
u_i = d_{i1}\xi_1 + \cdots + d_{in}\xi_n \\
$$
where $\xi_j$ is the eigenvector corresponding to $\lambda_j$.
The $(\ast)$ implies that, $\forall i$,
$$
c_{i1}\lambda_1\xi_1 + \cdots + c_{in}\lambda_n\xi_n = d_{i1}\lambda_i\xi_1 + \cdots + d_{in}\lambda_i\xi_n
$$
thus, since $[\xi_1 | \cdots | \xi_n]$ : invertible by spectral theorem,
$$
\begin{bmatrix}
c_{11} & c_{12} & \cdots & c_{1n} \\
c_{21} & c_{22} & \cdots & c_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
c_{n1} & c_{n2} & \cdots & c_{nn} \\
\end{bmatrix}
\begin{bmatrix}
\lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n
\end{bmatrix} =
\begin{bmatrix}
d_{11} & d_{12} & \cdots & d_{1n} \\
d_{21} & d_{22} & \cdots & d_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
d_{n1} & d_{n2} & \cdots & d_{nn} \\
\end{bmatrix}^T
\begin{bmatrix}
\lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n
\end{bmatrix}
$$
I notice that the matrices of $c_{ij}$ and $d_{ij}$ are invertible by the invertibility of $U$ and $V$, but I'm stuck at showing the identity.
Any help will be appreciated.
$U$ contains the eigenvectors of $AA^H$ and $V$ has those of $A^HA$. Since $A$ is hermitian, $A^HA = AA^H$ implying that the matrices $U$ and $V$ are identical.
@sudeep5221 But when there exists $\sigma_i = \sigma_j$, $i \neq j$?
@sudeep5221 I don't see how your argument works. The equality $A^HA=AA^H$ is also satisfied by every normal matrix $A$, but unless $A$ is positive semidefinite, we cannot possibly have $U=V$ in the SVD of a normal matrix $A$.
@user1551 Oh I see what you are trying to say. I had in mind what you wrote in your answer and took it for granted forgetting to point that out clearly. Sorry about that and thanks for pointing that out. :)
Here is one way to prove it. Note that every positive semidefinite matrix $P$ has a unique positive semidefinite square root $P^{1/2}$. In fact, $P^{1/2}=f(P)$ where $f$ is the Lagrange interpolation polynomial that maps each eigenvalue of $P$ to its square root.
Now, in your case, since $A^2=AA^\ast=US^2U^\ast$, by the uniqueness of positive definite square root, we have $A=USU^\ast$. Hence $U=V$.
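To make the last step fully explicit (this is where positive definiteness is used, since it makes $S$ invertible): comparing the SVD $A = USV^\ast$ with the factorization just obtained,
$$
USV^\ast = USU^\ast \implies SV^\ast = SU^\ast \implies V^\ast = U^\ast \implies V = U,
$$
cancelling first the unitary $U$ on the left and then $S$, whose diagonal entries are the singular values and hence strictly positive.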
Original method still getting called in Moq even after CallBase = true/false
Here's my code:
public class Bar { }
public class Foo { public string Name { get; set; } public Bar TheBar { get; set; } }
public class Dependency
{
public Foo DoSomething(Expression<Func<Foo, bool>> exp1) { return new Foo(); }
}
public class Base
{
public Dependency Dependency { get; set; }
public virtual Foo MethodA(Expression<Func<Foo, bool>> exp1,
params Expression<Func<Foo, object>>[] exp2)
{
return Dependency.DoSomething(exp1);
}
}
public class Derived : Base
{
public Foo DerviedMethod(string str)
{
return base.MethodA(e1 => e1.Name.Equals(str), e2 => e2.TheBar);
}
}
And my Unit Test code:
var mock = new Mock<Derived> { CallBase = true }; // Same result with false
mock
.Setup(m => m.MethodA(
It.IsAny<Expression<Func<Foo, bool>>>(),
It.IsAny<Expression<Func<Foo, object>>>()
))
.Returns(new Foo());
// Act
var result = mock.Object.DerviedMethod("test");
// Assert
Assert.IsNotNull(result);
But it still calls the original method and not the mocked one. Both classes exist in same assembly.
I have searched around, and almost everyone got it working with CallBase = true or false.
Any ideas what is wrong with above code?
Shouldn't your Derived method be marked as virtual? Otherwise I think you can't mock concrete classes properly. Not sure tho.
I have marked it as virtual but now it returns null and not the bar instance
The problem is probably coming from your Setup, then. You can try to find the problem by removing more fluff, like the various Expression parameters and generics, to get to the core. I wasn't able to get your example to compile on my side.
Thanks. I have updated to code to make it compile-able. I am trying to slim the code on my side as well.
@Pierre-LucPineault I've simplified the example but still having no success :S
Me neither; it looks like it might be something Moq can't do. Similar answers/comments like this one seem to draw the same conclusions. Changing your architecture from inheritance to composition (so that Derived has an instance of Base instead of inheriting from it) could probably solve your problem, though. That way you can mock the Base class directly and pass your mock to your Derived class.
You haven't included what DoTest does; however, you can't mock the code as it's currently written. You're calling the method you're trying to mock via a base. specifier, which tells the implementation to run a specific version of the function. Any virtual declaration is ignored, so it can't be intercepted by Moq. It's basically the same issue I discussed in this answer (the limitations of Moq and NSubstitute are the same here): http://stackoverflow.com/a/31062287/592182
Thanks, it helps. In my scenario (the original example), I had dependency extracted so I will go for mocking the dependency instead of mocking base class' method.
As has been suggested by @Pierre-Luc in the comments, extracting the base class and injecting it as a dependency is probably the better approach (I always think mocking the class you're actually trying to test feels wrong).
That said, for you to be able to mock a call of a class, it needs to be made via the VTable. Essentially, the mocking framework creates a new implementation of the virtual method. When you call it normally, this version of the method is run and can then intercept calls. The problematic line is this one:
return base.MethodA(e1 => e1.Name.Equals(str), e2 => e2.TheBar);
Because you're explicitly calling MethodA, via the base keyword, it tells the compiler to call a particular version of the method. It's always going to call the base implementation. This means that the mock can't intercept the call.
Changing the method to:
public Foo DerviedMethod(string str) {
return MethodA(e1 => e1.Name.Equals(str), e2 => e2.TheBar);
}
Allows the MethodA method to be mocked. Whether or not this is the right thing from a design perspective is up to you.
VU meter: data label does not show in the correct position
I am using Highcharts to display a VU meter with a data label. The chart displays and works correctly with live data from the database, but I have a problem getting the data label to show in the correct position.
I have tried the crop and overflow options, as suggested here, but they didn't work for me.
Here is the code I captured from the debugger:
<g class="highcharts-data-labels" visibility="visible" zIndex="2" transform="translate(10,40) scale(1 1)">
<g zIndex="1" style="cursor:default;" transform="translate(0,-999)">
<rect rx="3" ry="3" fill="url(#highcharts-3)" x="0.5" y="0.5" width="55" height="21" stroke="silver" stroke-width="1"></rect>
<text x="3" y="15" style="font-family:"Lucida Grande", "Lucida Sans Unicode", Verdana, Arial, Helvetica, sans-serif;font-size:11px;font-weight:bold;color:#666;line-height:14px;fill:#666;" zIndex="1">
<tspan style="fill:#339" x="3">0.96 ^H</tspan>
</text>
</g>
</g>
Then, when I manually change -999 to 0 in the second line, the data label shows, as below:
<g zIndex="1" style="cursor:default;" transform="translate(0,0)">
However, when the next live data arrives, it changes back to transform="translate(0, -999)".
Is there any way to fix the offset for the data label?
I'd also appreciate any other solution.
Recreate the issue on jsFiddle, please. Also make sure you have the latest version of Highcharts (3.0.8). Here is a list of bugs with gauge dataLabel position.
@PawełFus I created the fiddle: http://jsfiddle.net/G5UUn/ and it works nicely, so my problem was using Highcharts version 3.0.7.
As Paweł Fus said, the problem was solved by using the latest version of Highcharts from here.
Prevent zsh from trying to expand everything
Having recently switched from bash, I noticed that zsh tries to expand every command or argument that looks like it contains wildcards. So the following lines no longer work:
git diff master{,^^}
zsh: no matches found: master^^
scp remote:~/*.txt .
zsh: no matches found: remote:~/*.txt
The only way to make the above commands work is to quote the arguments, which is quite annoying.
Q: How do I configure zsh to still try to expand wildcards, but if there are no matches, just pass on the argument as-is?
EDIT: Possibly related: scp with zsh : no matches found
It is an intended feature of zsh. When using any shell, it is considered best practice to quote any character that the shell treats as a metacharacter. ^ is a pattern used to negate a string when the option extendedglob is set, and * is a pattern that matches zero or more characters.
You can stop it by disabling the option nomatch. But by doing so, your unquoted patterns make your statements volatile, depending on what files happen to be present in the current working directory. You shouldn't do that.
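For completeness, here is a sketch of the relevant ~/.zshrc lines (as noted, relying on unquoted patterns is fragile, so the per-command aliases are the less drastic choice):

```zsh
# Pass unmatched patterns through as-is instead of raising
# "zsh: no matches found" (bash-like behaviour):
unsetopt nomatch

# Alternatively, disable glob expansion only for specific commands:
alias scp='noglob scp'
alias git='noglob git'
```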
Maybe I shouldn't, but it is quite tempting to do so. I used to think that I shouldn't have lots of aliases either. But in the end, all that matters is how long it takes for me to type in a command on my machine.
For scp, see https://superuser.com/a/431568
For git, use ~ instead of ^.
Does Apple notify you when a certificate is going to expire?
Does Apple notify the developer by email when a push notification certificate is going to expire?
Yes: at 30 days, 10 days, and 1 day prior to the expiration date, you will receive the following email:
Is there any setup for this? I have an Enterprise Apple account, which is created and handled by my client. What do I need to do to get this expiration email? Do I have to add my email as an admin in the client's Apple account?
@iKT I did nothing in particular to receive these emails; I just get them!
OK. Which type of Apple account do you have: agent, admin, or member? Any idea?
@iKT This isn't through iTunes Connect; this is the Apple Push Certificates Portal.
OK. Is there any way to get an email notification for provisioning profile expiration 30 days in advance?
Hi, our push notification certificate expired without any email at all. Is this still working?
this vs this@MainActivity as the context parameter for a function in Kotlin
When I create a Toast in Android Studio, I get an error when writing the code like this.
However, I searched online and found that when I replace "this" with "this@MainActivity" (which refers to the current activity), my code works and compiles.
So what is the difference between "this" and "this@MainActivity"?
class MainActivity : AppCompatActivity() {
val playbackListener = object : YouTubePlayer.PlaybackEventListener{
Toast.makeText(this, "Good, video is playing ok", Toast.LENGTH_SHORT).show()
}
}
https://kotlinlang.org/docs/reference/this-expressions.html#qualified -- you use the @ qualifier when there are multiple definitions of this that you could access.
Your Toast.makeText() call is inside of an object:
object : YouTubePlayer.PlaybackEventListener {
Toast.makeText(this, "Good, video is playing ok", Toast.LENGTH_SHORT).show()
}
Therefore, the value of this is the object (the PlaybackEventListener).
In order to refer to the instance of the Activity that your object lives inside, you can qualify the this keyword: this@MainActivity
Find if x is divisible by a number in the sequence from 2 to p
What's the best way (in terms of complexity) to find whether $x$ is divisible by any number between $2$ and $p$ (inclusive)? The obvious solution is to iterate over all numbers $i$ between $2$ and $p$ and check whether $x \% i == 0$; that works in $O(p)$.
My solution is to find all divisors of $x$ in $O(\sqrt{x})$ and check if one of these divisors is between $2$ and $p$. Is there a faster approach?
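As a minimal sketch of the $O(\min(p,\sqrt{x}))$ approach (the function name is mine): if $x$ has any divisor in $[2,p]$, then either its smallest prime factor is at most $\sqrt{x}$, or $x$ is prime and the only candidate divisor is $x$ itself.

```python
import math

def has_divisor_up_to(x: int, p: int) -> bool:
    """True iff some d with 2 <= d <= p divides x."""
    # Any composite x has a prime factor <= sqrt(x), so trial-dividing
    # up to min(p, isqrt(x)) finds a witness whenever one exists.
    for i in range(2, min(p, math.isqrt(x)) + 1):
        if x % i == 0:
            return True
    # No small factor found: x is 1 or prime, so the only possible
    # divisor of x in [2, p] is x itself.
    return 2 <= x <= p

print(has_divisor_up_to(77, 6))   # False: 77 = 7 * 11
print(has_divisor_up_to(77, 7))   # True
print(has_divisor_up_to(13, 12))  # False: 13 is prime
print(has_divisor_up_to(13, 13))  # True
```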
For divisors that are not too large, let's say up to $30$ digits, ECM is quite efficient. In general, the problem is as hard as integer factorization, whose complexity is unknown. Strictly speaking, ECM never guarantees that there is no prime factor below $10^{30}$, but it can make it very likely that there is none. If the number is small enough, it can be completely factored with the number field sieve.
@Peter Excuse me, but what do you mean by ECM? By the way, x and p are up to 10^9.
The elliptic curve method (ECM) is an efficient method for finding small prime factors. If you download GMP-ECM or yafu, you can easily find factors up to, let's say, $20$ digits, even of very large numbers. But if your numbers are that small, this would be overkill. If p exceeds $\sqrt{x}$, the problem boils down to a primality test, which can be done efficiently, but for such small numbers trial division will not be much slower. So, in your case, trial division is efficient enough.
Angular checkbox checked and unchecked event
I have created a checkbox bound with [(ngModel)]="ishighlylevaragedmeasure",
which should display the value "true" or "false" when checked or unchecked by a click. However, the Boolean value does not change to "true" when the checkbox is checked programmatically.
How do I make the Boolean value change inside the .ts file when the checkbox is checked or unchecked programmatically?
I have created this change event:
(Change)="getishighlylevaragedmeasure($event)"
I have created an if/else statement for that.
change should have 'c' in lowercase!
Instead of [(ngModel)]="ishighlylevaragedmeasure", use [checked]="ishighlylevaragedmeasure".
Example:
<input type="checkbox" [checked]="ishighlylevaragedmeasure" (change)="ishighlylevaragedmeasure = ! ishighlylevaragedmeasure" />
WPF TabItem lost focus event
I have TabItems with a TextBox in their headers. I use the LostFocus and MouseDoubleClick events to set the text of the TextBox.
<TabControl>
<TabItem Width="50">
<TabItem.Header>
<TextBox Text="text" IsReadOnly="True" LostFocus="TextBox_LostFocus" MouseDoubleClick="TextBox_MouseDoubleClick"/>
</TabItem.Header>
</TabItem>
</TabControl>
private void TextBox_MouseDoubleClick(object sender, MouseButtonEventArgs e)
{
TextBox text_box = sender as TextBox;
if (text_box == null) { return; }
text_box.IsReadOnly = false;
text_box.SelectAll();
}
private void TextBox_LostFocus(object sender, RoutedEventArgs e)
{
TextBox text_box = sender as TextBox;
if (text_box == null) { return; }
text_box.IsReadOnly = true;
}
The LostFocus event fires only if you click on the TabItem header area outside the TextBox, or on another TabItem.
Clicking the tab item content area doesn't fire the LostFocus event.
How do I make the TextBox lose focus when the user clicks any area outside the TextBox?
To make the TextBox lose focus (in other words, to move focus to the tab content, the target):
1. Set Focusable of the target to true.
2. Make the target hit-testable: its Background should not be null.
3. Add an event handler to the PreviewMouseDown event (NOTE: NOT MouseDown) to react to mouse clicks.
If you omit step 3, your application will react only to the TAB key.
<TabControl>
<TabItem Width="50">
<TabItem.Header>
<TextBox
Text="text" IsReadOnly="True"
LostFocus="TextBox_LostFocus"
MouseDoubleClick="TextBox_MouseDoubleClick"/>
</TabItem.Header>
<Border Focusable="True" Background="Transparent" PreviewMouseDown="Border_PreviewMouseDown"/>
</TabItem>
</TabControl>
private void Border_PreviewMouseDown(object sender, MouseButtonEventArgs e)
{
var uiElement = sender as UIElement;
if (uiElement != null) uiElement.Focus();
}
To lose focus, an element must first have focus. Perhaps an alternative could be to give your element focus in an appropriate place when your elements are initialized, for example:
Change
<TextBox Text="text" IsReadOnly="True" LostFocus="TextBox_LostFocus" MouseDoubleClick="TextBox_MouseDoubleClick"/>
To
<TextBox x:Name="MyTextBox" Text="text" IsReadOnly="True" LostFocus="TextBox_LostFocus" MouseDoubleClick="TextBox_MouseDoubleClick"/>
And in your constructor use the FocusManager to set the focused element:
...
FocusManager.SetFocusedElement(MyTextBox.Parent, MyTextBox);
...
Focus Overview on MSDN is a good resource, it is also important to distinguish between keyboard focus and logical focus!
Horizontal scroll appears at lower resolutions - responsive issue
I have a strange problem with website responsiveness.
At desktop resolution, no horizontal scroll bar appears in Chrome.
When I resize to lower resolutions (400px width and less), a horizontal scroll bar appears.
I think some element is forcing a width bigger than the actual screen size, but I can't find it!
Please help.
Here is the website link
I checked your code.
You have to get rid of this code in your footer styles; your margin-right is making your content overflow.
Try using padding, or something similar, instead.
It appears you are using Bootstrap for that, so the best way to do this would be to override it with a rule like:
#footer > div.row {
margin-right: 0 !important;
}
Or, if you have Bootstrap locally, you can probably delete it from there. But I just override it using `!important`:
.row {
/* margin-right: -15px; */
margin-left: -15px;
}
Change NSTextField value from another view controller in Swift
So I was programming my app today when I ran into this issue.
When I call a variable NSString "hello" from my second view controller, I can easily println it from my first view controller. But when I try to access the NSTextField variable, I get errors: either something is nil, or other errors. How can I achieve this?
Second View Controller :
class ViewController: NSViewController {
@IBOutlet weak var txtDetail: NSTextField?
@IBOutlet weak var txtTitle: NSTextField!
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
}
override var representedObject: AnyObject? {
didSet {
// Update the view, if already loaded.
}
}
}
First View Controller
let shared = ViewController()
class AddUrlViewController: NSViewController {
@IBAction func readIt(sender: NSButton) {
println(shared.txtTitle.stringValue)
}
}
@DharmeshKheni I get this error in my debug log --> fatal error: unexpectedly found nil while unwrapping an Optional value
(lldb)
Yes --> println(shared.txtTitle.stringValue)
it asks me to delete the ? (error)
Then remove the ? and use ! instead.
Your question is not specific to Swift. I haven't tackled learning much Swift yet, but the answer applies to both Objective-C and Swift.
The short answer: Don't. You should treat another view controller's views as private. This is an example of an important principle of OOD called encapsulation.
Instead, pass a string property to the other view controller. Then, in the other view controller's viewWillAppear method (assuming it's a full-screen VC), copy the string into the field.
If the view might be on-screen when the string value is set, implement a custom setter and set the contents of the text field in the setter.
How to store each key or element from an array to the database in Rails
I'm stuck on trying to store keys or elements from a returned array into the database. It looks something like:
{"id"=>"28898790358_10152709083080359",
"from"=>{"category"=>"Tv channel",
"category_list"=>[{"id"=>"169056916473899",
"name"=>"Broadcasting & Media Production"}],
"name"=>"WGRZ - Channel 2, Buffalo",
"id"=>"28898790358"}}
I'm trying to grab stuff like 'id', 'category_list', all of their values and store it into a column of a database table.
I've gotten it working before, but this time only certain values get in, and usually I get an error:
undefined method `[]`
You can try hstore if you are using Rails 4 and you want to store all of this in one column [https://github.com/heroku/hstore_example]. Otherwise, if you want to pick id and category_list and put them in respective fields in the DB, post your code showing how you are trying to do it.
You're getting the undefined method [] error when you try to use the [] method on a class that doesn't have that method defined. My guess is that you are running into this problem while parsing the above hash (e.g. calling [] on NilClass and getting the error undefined method [] for nil:NilClass).
Long Story Short: You need to parse the JSON and save the items you need to the appropriate spots in the database
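As a plain-Ruby illustration of the error (no Rails needed; the data is trimmed from the question): Hash#dig returns nil instead of raising when an intermediate key is missing, which avoids "undefined method [] for nil:NilClass" while you are still exploring the structure.

```ruby
response = {
  "id" => "28898790358_10152709083080359",
  "from" => {
    "category" => "Tv channel",
    "category_list" => [
      { "id" => "169056916473899", "name" => "Broadcasting & Media Production" }
    ]
  }
}

# Safe navigation: dig returns nil instead of raising NoMethodError
# when any intermediate key (or array index) is missing.
first_category = response.dig("from", "category_list", 0, "name")
missing        = response.dig("from", "nonexistent", 0, "name")

puts first_category.inspect  # "Broadcasting & Media Production"
puts missing.inspect         # nil
```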
I'm a little unclear about the example you gave, so I'll try to use a simplified one.
Suppose you are querying for a TvShow, which belongs_to a TvChannel. TvChannel also has_and_belongs_to_many Categories. So perhaps this query returns the TvShow, along with the TvChannel and Categories to which the TvShow belongs. Suppose you get back something like the JSON below:
json_response = { 'name': 'Desperate Housewives',
'id': '12345-67890',
'tvChannel': {
'name': 'ABC',
'id': '123-456-789',
'categories': [
{ 'name': 'Comedy',
'id': '000-000' },
{ 'name': 'Drama',
'id': '111-111' }
]
}
}
So now you want to save the TvShow Desperate Housewives to the TvChannel ABC, and save ABC to the Categories Comedy and Drama. This is how I would go about parsing:
new_show = TvShow.new
new_show.name = json_response["name"]
new_show.external_id = json_response["id"] # assuming you want to save their identifier
channel = TvChannel.find_or_create_by(name: json_response["tvChannel"]["name"]) # you might want to do this by the id, but it's up to you.
new_show.tv_channel = channel # this saves the TvChannel's ID to the new_show's tv_channel_id foreign key field
new_show.save # save your new TV Show to the database, with the name, id, and tv_channel_id.
# Next we parse the "categories" part of the JSON, finding each Category by name (creating a new Category if one doesn't exist by that name),
# And then adding our TvChannel to that Category
json_response["tvChannel"]["categories"].each do |category|
c = Category.find_or_create_by(name: category["name"])
channel.categories << c
channel.save
end
I hope this helped show you how to parse JSON responses, and how to use ActiveRecord to create or find records in your database. It is useful to store the JSON response in a variable (json_response in my example above) for easier parsing.
Let me know if I can clarify anything!
How to use FastButtons so they work on mobile devices?
I have read this (and other related things also):
https://developers.google.com/mobile/articles/fast_buttons?hl=fi#conclusion
but I don't understand how to use this in Google Apps Script. I made the UI with the GUI Builder and now I want the buttons to also work on mobile devices.
Could someone please explain, with a code example, how to change my buttons so they also work on mobile devices?
Code that works OK on a PC but not on a mobile device:
var handler109 = app.createServerClickHandler('func109');
var but109 = app.getElementById('Button109');
but109.addClickHandler(handler109);
How do I use a ClientHandler? app.createClientHandler('func109') generates an error, and app.createClientClickHandler('func109') generates an error... How do I specify that the function func109 should be called?
var handler109 = app.createClientHandler();
var but109 = app.getElementById('Button109');
but109.addClickHandler(handler109);
A ClientHandler cannot call a function; only a ServerHandler can do that, since the script is executed on the server. See the docs for more details.
First, the article you've shared is applicable to mobile web development generally, not so much to Google Apps Script.
The buttons you make in Google Apps Script WILL work on mobile devices too. However, if you only have a server handler set up on the button, it will take a noticeable amount of time before the action of the button is seen by the user.
Google Apps Script also has client handlers, which give a much faster response than server handlers.
Issue 1086 might be relevant in your case
You're right, this is what I need! :-) But... I have problems using this. How do I tell the client handler which function to call when the button is pressed? See the code above.
As Serge pointed out, you cannot call functions within client handlers. Client handlers are not for elaborate processing, only for indicating progress to the user or similar simple tasks. All the heavy lifting must be done in a server handler.
The send button in the built-in contact form also works on my mobile device, and Srik wrote "However, the buttons you make in Google Apps Script WILL work on mobile devices too." So is there a workaround to get my button working on mobile devices?
Srik: Issue 1086 might be relevant in your case: "Buttons generated from code DO NOT require the above process to work. As soon as I touch the button, the event will fire." I tried creating the button from code, but the behaviour was the same: pushing the button from a PC fires the event, but from a mobile device it does not... Is there some way to get a button to work on mobile devices too?
Yes, I use Samsung GIO. "Fast" double click zooms the screen out. A little bit slower double click does nothing.
Error trying to apply a function for each item (List) in TransformedDStream
I have an issue with my code. I'm working with Spark Streaming, and I listen to a stream of data (tweets). Each tweet is represented by a list in which each item is a single word. I'd like to apply a function to each word with the map function, but it doesn't work. In fact, I get this error:
Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
This is the portion of the code that causes the issue:
model = Word2VecModel.load(sc, "/home/davide/word2vec_models")
features_training = training.map(lambda tweet: filtering(tweet[0].split(" ")))
features_training = features_training.map(lambda tweet:[model.transform(word) for word in tweet])
The "filtering" function returns another list with a word for each item.
Do you have any suggestions to solve this issue?
Thanks!
Can you add the code which is causing this?
@QuickSilver I've updated the question
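For what it's worth, the usual fix for SPARK-5063 is to avoid capturing the model (which holds a reference to the SparkContext) in the closure: extract plain data from it on the driver and broadcast that instead. Below is a sketch, with the Spark-specific lines shown as comments and a Spark-free demonstration of the same closure (the helper names are mine; `model.getVectors()` is assumed to return a word-to-vector mapping):

```python
def build_lookup(word_vectors):
    """Driver side: copy the model's vectors into a plain, picklable dict."""
    return dict(word_vectors)

def transform_tweet(tweet, lookup):
    """Worker side: a pure function of plain data, safe to ship in a closure."""
    return [lookup[w] for w in tweet if w in lookup]

# Spark-free demonstration of the pattern:
lookup = build_lookup({"car": [0.1, 0.2], "fast": [0.3, 0.4]})
print(transform_tweet(["car", "fast", "unknown"], lookup))  # [[0.1, 0.2], [0.3, 0.4]]

# With Spark (not run here) it would look roughly like:
#   vectors = sc.broadcast(dict(model.getVectors()))
#   features_training = training.map(
#       lambda tweet: transform_tweet(filtering(tweet[0].split(" ")), vectors.value))
```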
How to prove a lattice is distributive
Do we need to apply the distributive law to all elements to prove that a lattice is distributive?
Since every chain is distributive, we don't need to check chains (right?), and since the two distributive laws are equivalent, we don't need to check both (right?). But for larger lattices with many elements (excluding chains), there are many triples of elements to which the law must be applied, and the work becomes lengthy.
Is there any other way to prove that a lattice is distributive?
I tried to find one myself, but found nothing.
What I want to ask is: can we prove a lattice to be distributive without checking the distributive law?
How does the DA 40 NG automatically control the mixture, pitch and power by just providing one throttle?
I'm a student pilot, and here at my school we have the older-generation DA40s, which have individual levers for mixture, pitch, and power.
I was looking at this video of the DA40 NG and got quite curious as to how everything from prop to mixture is automatically controlled; on top of that, you don't need to do any run-up checks!
I tried using Google but nothing really came up, so I thought this would be the right platform to ask and satisfy my curiosity.
Don't worry about Google. Soon, the first result you'll get will be this question.
According to Diamonds website the NG engine is a:
Austro Engine AE 300 turbocharged common-rail injected 2.0 liter diesel engine with 168 hp and EECU single lever control system
The prop is a:
3 blade MT hydraulic constant speed propeller features advanced blade geometry for efficient performance, low vibration and noise. It is automatically controlled by the engine’s digital engine control through a conventional hydraulic governor.
From the sounds of it, it is as close to a full FADEC (Full Authority Digital Engine Control) unit as you can get
It is controlled by a computer module just like your modern fuel injected car.
The question isn't how does it work, it's more why it isn't on all airplanes. It's really the same engine technology that has been in widespread use in cars for 40 years.
@GdD when an "engine controller" goes on a car, one is basically helpless. The aviation version is redundant, but there are still risks of this happening in mid-air. Engine controllers (and turbos) have done wonders for engine performance and reduction of emissions, and adding prop pitch control to fuel/air control just makes it better.
The Austro engines have dual engine controllers @RobertDiGiovanni, you test them both as part of the run-up procedure. Even if both controllers fail the engine will go into an emergency mode and still run. The caveat to that is electrical power, a DA-42 went into a field after the battery connection got loose, after that they added a second battery.
@RobertDiGiovanni: Modern cars have a limp-home mode; I already encountered that in my old Ford Focus diesel from 2000. It probably would break emission norms if tested at that point, but for airplane use it's sufficient to get you back to the ground under power.
@MSalters If an aero engine ran at the same power levels as your Focus (for limp-home mode) the pilot would be likely to have a very bad day.
It is probably a nonissue. A couple of things that worry me about this engine are that this engine appears to have a reduction drive gearbox and only one generator. Even with the dual backup batteries, these two items seem to be points of failure. I wonder if there is a way to add a second generator.
@MikeBrockington: It lost ~50% of its power. I think larger two-engined planes should even be able to continue a take-off when that happens? But in general it gives you far more flexibility in selecting a landing site. The idea is to turn a very bad day into a rather annoying day.
@MikeBrockington It'd be like losing an engine on a twin, except without the asymmetrical thrust problems. Anyway, you can have computer control without computer total control; for the first 10 years of computers, all they did was fine-tune: they did PWM to a mix-control solenoid on the carb, which tweaked the mix to keep it stoichiometric based on what the O2 sensor was saying, but its limits of travel were sane, driveable values. You could remove the computer and drive almost normally, maybe get 1 MPG worse (so 6 MPG instead of 7, lol).
@GdD, for various reasons, the light-aircraft industry is pretty much stuck in the 1960s.
Aside from the fact that the engine is computer controlled...
the reason you don't have a mixture control is that diesels don't use one.
Gas engines need to breathe a stoichiometric mix of air and fuel, i.e. proportioned correctly so there's just enough fuel for the oxygen admitted. Too fuel-lean and it won't burn. Power is controlled by partially blocking the air intake with throttle plates (hence, vacuum). Getting the mix stoichiometric under all conditions is a hard job, and that's what the carburetor does. The mixture control helps you fine-tune what the carb cannot.
In a diesel, you gulp in a full shot of air (throttle plates: Gone), and compress to a high compression ratio and high heat. This means when fuel is injected, combustion is both certain and spontaneous. So they don't need spark plugs, and can happily run lean (mixture control: Gone). Spontaneous means the engine intake drawing in a fuel-air mixture is out of the question; it would ignite too soon. The engines are direct injection.
Diesel power is controlled solely by how much fuel is injected per cylinder. That is done at the fuel injection pumps: one pump per cylinder, which pumps at the appropriate time in the stroke, driven by a cam. Each pump has an adjustment that decides how much fuel is injected, and this is controlled by a "fuel rack" which sets all pumps the same; the fuel rack is the "throttle".
I always thought diesel's simplicity would make it ideal for aircraft, but the high compression ratio requires stronger cylinders and makes diesel engines heavier, and that's an issue.
Of course nowadays, in their mad pursuit of EPA Tier 4 emissions compliance, they use computers. One common trick is to eliminate the mechanical cam and rack, using sensors and EFI to inject at the right time. But they don't have to do that. It's a reliability vs. emissions-numbers call.
As stated in the answer above, it's turbodiesel powered. Like the DA-42 and DA-62, the engine is basically power-by-wire, using primary and secondary Engine Control Unit (ECU) computers, each with a dedicated battery backup, for both engine control and control of the constant-speed propeller.
Other answers concentrated more on the 'mixture' and FADEC side of things. I'll show what it does with the prop speed. This is actually simpler and is explained right in the flight manual (p. 7-25).
The prop speed is pre-programmed to follow the engine lever:
So it only uses the highest RPM at takeoff power, and then reduces it at cruise settings. At very low settings it 'unloads' the prop by commanding higher RPM (which it may not reach). This may increase windmilling drag, though, which is a common complaint about automatic prop control.
It is not necessary to have FADEC for such programming; Cirrus aircraft use mechanical linkages to achieve similar control.
| common-pile/stackexchange_filtered |
ggplot - How to present the mean of a third variable?
Let's say I have this data frame:
The data frame
I want to make a graph which presents, for each SES (socioeconomic status), the mean income for females and the mean income for males.
I have so far this code:
ggplot(incomeSorted, aes(GENDER)) +
scale_y_continuous("Mean")+
geom_bar(position = "dodge")+
facet_wrap("SES")
and this is the output:
How do I make the graph present the mean income instead of counting the number of females and males in each category?
Thanks ahead!
Please add data using dput and not as images. Read the info about how to ask a good question and how to give a reproducible example.
If you want to display mean income, you have to compute it. You can use dplyr with group_by() and summarise() to obtain the key variable, and then plot. Here is code for the task:
library(ggplot2)
library(dplyr)
#Data
df <- data.frame(id=1:8,Gender=c(rep('Female',4),rep(c('Male','Female'),2)),
income=c(73,150,220.18,234,314.16,983.1,1001,1012),
SES=c('Bottom','Bottom','Middle','Middle','Middle',
'Upper','Upper','Upper'),
stringsAsFactors = F)
#Compute and plot
df %>% group_by(SES,Gender) %>%
summarise(MeanIncome=mean(income,na.rm=T)) %>%
ggplot(aes(x=Gender,y=MeanIncome)) +
scale_y_continuous("Mean")+
geom_bar(stat = 'identity')+
facet_wrap(~SES)
Output:
Or you can avoid facets and display the plot with a fill variable like this:
#Code 2
df %>% group_by(SES,Gender) %>%
summarise(MeanIncome=mean(income,na.rm=T)) %>%
ggplot(aes(x=Gender,y=MeanIncome,fill=SES)) +
scale_y_continuous("Mean")+
geom_bar(stat = 'identity',position = position_dodge2(0.9,preserve = 'single'))
Output:
Hi! Thanks for your fast comment!
When I tried the first code, I got this error:
Error: At least one layer must contain all faceting variables: SES.
Plot is missing SES
Layer 1 is missing SES
What should I do?
@pdoron Hi! Did you first try with the df data I included? I tried to create data similar to what you showed in the image. Your error looks like something is missing. Check the names of your dataframe, and that all the packages mentioned here are loaded. Let me know how it goes!
How to select alternate rows from two tables
Having two tables, I want to display the result from alternate rows from both tables, just like UNION ALL.
Can you help me to find out the solution in a MS SQL Server query?
Records of Table1:
id - value
-------------
1 - abc
4 - dce
9 - fgh
16 - ijk
25 - lmn
Records of Table2:
id - value
-------------
5 - opq
10 - rst
15 - uvw
20 - xyz
25 - zab
The result I want:
Id - value
-----------
1 - abc
5 - opq
4 - dce
10 - rst
9 - fgh
15 - uvw
16 - ijk
20 - xyz
25 - lmn
25 - zab
----------------
Tables have no inherent order. If you want results to appear in a particular order you have to tell us (and SQL Server) what the rules are for what order the rows should be sorted into. You're unlikely to get an answer at the moment since we have no idea what the tables even look like, let alone what the correct rule is.
I think this will do it for you, but you have to change the query and add your table names and your column names in the ORDER BY statement of the OVER clause.
Also, note that both of your tables must have the same number of columns, with compatible datatypes, for the UNION to work.
SELECT
ROW_NUMBER() OVER (ORDER BY column),
1 AS 'rowOrder',
*
FROM TABLE1
UNION ALL
SELECT
ROW_NUMBER() OVER (ORDER BY column),
2 AS 'rowOrder',
*
FROM TABLE2
ORDER BY 1, 2
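To make the interleaving concrete, here is a hedged sketch of the same idea run from Python against SQLite (table and column names invented for the demo; SQLite also supports ROW_NUMBER(), so it stays close to the T-SQL above):

```python
import sqlite3

# In-memory database with the sample data from the question
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t1 (id INTEGER, value TEXT);
    CREATE TABLE t2 (id INTEGER, value TEXT);
    INSERT INTO t1 VALUES (1,'abc'),(4,'dce'),(9,'fgh'),(16,'ijk'),(25,'lmn');
    INSERT INTO t2 VALUES (5,'opq'),(10,'rst'),(15,'uvw'),(20,'xyz'),(25,'zab');
""")

# Number the rows of each table, tag the source, then sort by
# (row number, source) to alternate between the two tables.
rows = con.execute("""
    SELECT id, value FROM (
        SELECT ROW_NUMBER() OVER (ORDER BY id) AS rn, 1 AS src, id, value FROM t1
        UNION ALL
        SELECT ROW_NUMBER() OVER (ORDER BY id) AS rn, 2 AS src, id, value FROM t2
    )
    ORDER BY rn, src
""").fetchall()

for r in rows:
    print(r)  # alternates: (1,'abc'), (5,'opq'), (4,'dce'), (10,'rst'), ...
```

The inner ROW_NUMBER() gives each table its own 1..n numbering, and the outer ORDER BY zips the two numberings together.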
using just ROWNUM? (if tables are actually populated in the same way...)
@FrancescoGabbrielli typically you would expect OP to post a sample dataset which is similar to his real one. So, the problem that I am solving is quite specific and there is no room for interpretation of the scenario.
Two positive integers and a prime number
Let $a, b$ be distinct positive integers. Prove that there exists a prime $p$ such that when dividing both $a$ and $b$ by $p$, the remainder of $a$ is less than the remainder of $b$.
How can I solve this?
is it true that there are primes in the interval $[b+1-a,b]$ that are not in the interval $[1,a]$?
I think what I'm saying is true
Just pick $p$ such that $a<b<p$
What if b>a, what then?
Just take a prime dividing $a$ but not $b$.
but that may not exist.
$a,b$ are distinct integers.
So are, for example, $4,2$.
If we assume $a < b$, then if there does not exist a prime $q$ with $q \mid a$, $q \nmid b$, one could simply take a prime $p > \max\{a,b\}$. It's not entirely obvious how one could remove the assumption $a < b$...
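Pulling the comments together, here is a partial sketch (not a complete proof) of the two easy observations:

```latex
% Case a < b: choose any prime p > b (there are infinitely many primes);
% then both numbers are their own remainders:
a \bmod p \;=\; a \;<\; b \;=\; b \bmod p .

% Case where some prime q divides a but not b:
a \bmod q \;=\; 0 \;<\; b \bmod q .

% The remaining hard case is a > b with every prime factor of a also
% dividing b (e.g. a = 4, b = 2), which needs a different argument.
```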
Karaf: Bundles Instances and classloaders
Is each Karaf bundle loaded by a separate classloader?
Are all bundles of a feature loaded by the SAME classloader?
If I include a dependent feature in a new feature I am trying to develop, would the bundles of that feature use the SAME classloader as the feature I am trying to develop?
In OSGi in general, each bundle has its own classloader. This classloader serves the classes of the bundle and delegates to the classloaders of the bundles this bundle imports packages from. So each class is normally loaded by the classloader of the bundle it resides in. The Import-Package and Export-Package statements in the Manifest, together with the OSGi resolver, make sure bundles can also see the classes of other bundles.
Karaf features are completely unrelated to classloaders. They simply define which bundles are loaded.
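As a minimal illustration (bundle and package names invented), the wiring is declared in each bundle's MANIFEST.MF; the two manifests are shown together here, separated by a blank line. The resolver makes the consumer's classloader delegate loads of com.example.api to the provider's classloader:

```
Bundle-SymbolicName: com.example.provider
Export-Package: com.example.api;version="1.0.0"

Bundle-SymbolicName: com.example.consumer
Import-Package: com.example.api;version="[1.0,2.0)"
```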
CSS, Trouble aligning divs in chrome
I'm having a mysterious problem with some positioning in Chrome.
It's about one of my pages: http://www.iphonedeksler.no/
The div to the right, with class="content", is too far down in Chrome, but perfect in Firefox and IE.
What's adding to the mystery is that when you inspect and toggle the "float: right;" CSS rule off, and then on again, it aligns like it should.
Is this a Chrome bug, and should I use a CSS hack for Chrome only?
Looks good to me, but you should add a clear after that div.
It seems that .main { clear: both; } is messing up the .content div alignment.
If you remove clear: both you will solve the alignment problem.
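Spelled out as a sketch (class names as described in the question; verify against the actual stylesheet):

```css
.content {
  float: right;   /* unchanged */
}

.main {
  /* clear: both;   removed: 'both' also clears right floats, which is
     what pushed content below the floated .content div in Chrome */
}
```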
How can I display the result of a bitwise operation in binary form?
Possible Duplicate:
Converting decimal to binary in Java
I have found this code that takes as input a number x, an array p, and another number pmax, which indicates how many bits of x are considered:
public static int permute(int x, int p[], int pmax) {
int y = 0;
for (int i = 0; i < p.length; ++i) {
y <<= 1;
y |= (x >> (pmax - p[i])) & 1;
}
return y;
}
This code should permute the bits of the first byte stored in the number x
according to the rules defined in the array p.
so for example if the first pmax bits of x are 101 and the array p is the following:
p = {2, 1, 3}
then the result of this function (y) will be equal to 011
Question
I have never worked with bitwise operators and I would like to see all the intermediate results of the above code. For example, what does y <<= 1 do, and why is it used? To do this I need to display the y variable in binary form; otherwise I just get decimal values, which don't help me understand anything.
So how can I display after each bitwise operation the result in binary format?
If this is not a good method to learn exactly what this code does, what method would you suggest?
Integer.toBinaryString(int x) will do this for you.
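For instance, here is a self-contained trace of the permute function from the question; the helper bits() zero-pads, since Integer.toBinaryString drops leading zeros:

```java
// Traces each bitwise step of the question's permute() in binary.
public class BitTrace {

    // Binary string of v, zero-padded to the given width
    static String bits(int v, int width) {
        String s = Integer.toBinaryString(v);
        while (s.length() < width) s = "0" + s;
        return s;
    }

    static int permute(int x, int[] p, int pmax) {
        int y = 0;
        for (int i = 0; i < p.length; ++i) {
            y <<= 1;                        // shift left: make room for the next bit
            y |= (x >> (pmax - p[i])) & 1;  // copy bit p[i] of x into that slot
            System.out.println("step " + i + ": y = " + bits(y, pmax));
        }
        return y;
    }

    public static void main(String[] args) {
        // x = 101, p = {2, 1, 3}: expected result 011, per the question
        int y = permute(0b101, new int[] {2, 1, 3}, 3);
        System.out.println("result = " + bits(y, 3)); // prints "result = 011"
    }
}
```

With x = 101 and p = {2, 1, 3} this prints y as 000, 001, 011 step by step, matching the question's expected result.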
I'd rather close this question as an easy duplicate than award the fastest gun in the west and his deputies 10 XP each.
@JanDvorak The linked duplicate is related to writing a handmade method that converts the number to generate the binary string. I even voted this question to be closed but this doesn't imply I can't spend 10 seconds of my time answering in the meanwhile. If it's going to be closed then it will happen anyway.
are you looking for Integer.toBinaryString(int i) ?
Controlling WS2812b LED strip produces inconsistent and uncontrollable output
Background
I'm trying to control a single WS2812b LED strip from a Raspberry Pi. I've followed this tutorial from raspberrypi.com (with the exception of Prep & Installation step 4, because I couldn't find snd-blacklist.conf).
That didn't work, so I also tried this tutorial (which involved adding a Logic Level converter), and when that didn't improve things, a few other tutorials/videos, all to no avail.
Code & Output
The first tutorial got the closest. It had me use this github library, where I eventually ran strandtest.py. It appeared to partially work, but not entirely. I wasn't sure (and I'm still not) if this is a hardware or software issue, so I reduced the code to the minimum possible where it still turns on at least one LED.
import time
from rpi_ws281x import *
LED_COUNT = 14 # Number of LED pixels.
LED_PIN = 18 # GPIO pin connected to the pixels (18 uses PWM!).
LED_FREQ_HZ = 800000 # LED signal frequency in hertz (usually 800khz)
LED_DMA = 10 # DMA channel to use for generating signal (try 10)
LED_BRIGHTNESS = 255 # Set to 0 for darkest and 255 for brightest
LED_INVERT = False # True to invert the signal (when using NPN transistor level shift)
LED_CHANNEL = 0 # set to '1' for GPIOs 13, 19, 41, 45 or 53
strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT, LED_BRIGHTNESS, LED_CHANNEL)
strip.begin()
while True:
print ('Running display')
for j in range(100):
for i in range(strip.numPixels()):
strip.setPixelColor(i, Color(50, 50, 50))
strip.show()
time.sleep(.05)
Yes, those nested loops are necessary. Simplifying to something like this without loops doesn't turn on any LEDs at all:
strip.setPixelColor(0, Color(100, 100, 100))
strip.show()
time.sleep(10)
Demo Video
When I run that code (the first block), this is what happens:
https://youtu.be/nzMCeyBNY50
Every time I run the code, I wait a few seconds, and eventually some LEDs turn on. It's different every time: sometimes a couple turn on, sometimes all of them, sometimes they turn off again for a little bit as it keeps running, etc. This was true even before I switched to using the Logic Level converter from the second tutorial, and happens the same way even if I run the original strandtest.py.
Parts
I've followed this schematic from the second tutorial linked above. Note that I'm powering 14 LEDs currently (but would like to do the full strip in the future). Also, this schematic shows the power for the LEDs running through the breadboard, but I've soldered the red power+ directly instead.
Here's a list of the parts I used:
Raspberry Pi 3 B+ (already owned)
Breadboard, miscellaneous wires, and solder (already owned)
Logic Level converter
WS2812b RGB LED strip (5 meters, 30/m)
5V 10A Power Supply
Welcome! I am sorry to say you are light on the 50 watt (5 * 10 = 50) power supply, and here is why (volts * amps = watts): Your Pi needs about 3 amps, or 15 watts. You now have 7 amps, or 35 watts, remaining. Next, you have 5 meters of LEDs at 18 watts per meter (per your link), so that is (18 * 5) 75 watts. You had 35 watts left after the Pi, so you are at negative 40 watts, or -8 amps. Best solution: get another power supply, and if you use both, be sure to connect the grounds of each together and to the Pi ground.
Sorry, my explanation of the wiring must have been unclear: the pi is powered through the USB port by a standard AC to USB converter, completely separate from the power to the LEDs. I measured the led circuit and it was hitting about 4.8 V (what I'd expect), but it was hardly drawing 1 amp, because I'm only using 14 LEDs and they aren't even fully lit. (Note that I will use more LEDs later if I get this working, and may have to change the power supply).
One thing I did notice was that the data line was barely at .1 volts (with or without the logic level converter). I thought the pi was supposed to be sending 3.3 V which I would increase to 5 V, but unless I'm measuring poorly (which is definitely a possibility) then the voltage is very low.
Schematics generally tell more than pages of words!
@kviLL please edit your question and explain how the LEDs and Pi are being powered.
I had included a link to a schematic from one of the tutorials I followed. I've edited the post now to embed the image directly instead, so hopefully that's more clear.
Fritzing diagrams do not count as schematics. Your Pi will not work; it does not have any power. Show exactly how you have it wired, the test points you used to measure voltage, and what you used to measure it.
The pi is powered by a USB cable.
I had exactly this same problem.
If /etc/modprobe.d/snd-blacklist.conf does not exist, you need to create it and add the following line
blacklist snd_bcm2835
You may also need to add/uncomment the following from /boot/config.txt
hdmi_force_hotplug=1
hdmi_force_edid_audio=1
Don't forget to reboot and all should work nicely.
Issue with JQuery pinch zoom
I am working with JQuery and I want to pinch zoom the image.
My code is:
$(document).bind('mobileinit', function(){
$.mobile.metaViewportContent = 'width=device-width';
});
I need help, Please Hurry up
I'm not clear on what you want to achieve.
I have an image and want to get pinch and zoom effect on it
Then what is the relation with width=device-width?
Well, I have searched and got that code, but it is still not working :(
use jquery touchy plugin
https://github.com/HotStudio/touchy
I don't want to use a plugin; it is the last option,
because I have had issues with plugins while running the app on Android.
Well, I checked your link, but it is also not working: it is for swipe, not pinch zoom.
use this code :
http://www.caincode.com/mobile-pinch-zoom-gesture-detection-javascript-jquery/
searched this one too :(
Just increase the scale value on mobileinit; it will work.
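If a plugin really is off the table, the core of pinch detection is small enough to hand-roll. A hedged sketch (the '#zoom-image' selector is invented); the two helpers are plain functions, and the jQuery wiring is guarded so it only runs in a browser:

```javascript
// Distance between two touch points (pure helper)
function touchDistance(t1, t2) {
  var dx = t2.pageX - t1.pageX;
  var dy = t2.pageY - t1.pageY;
  return Math.sqrt(dx * dx + dy * dy);
}

// Scale factor relative to the distance when the gesture started
function pinchScale(startDist, currentDist) {
  return currentDist / startDist;
}

// Browser-only wiring; guarded so the helpers above stay testable headlessly
if (typeof $ !== 'undefined') {
  var startDist = null;
  $('#zoom-image').on('touchstart touchmove', function (e) {
    var t = e.originalEvent.touches;
    if (t.length !== 2) { startDist = null; return; }      // not a pinch
    var d = touchDistance(t[0], t[1]);
    if (startDist === null) { startDist = d; return; }     // gesture begins
    $(this).css('transform', 'scale(' + pinchScale(startDist, d) + ')');
  });
}
```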
How does 'screen' impact environment variables?
I noticed a difference in 'env' before and after a 'screen' call, is there any additional clarity on what gets called (for setting environment variables)?
I couldn't see any clear explanation from a quick search on 'man screen'
Some googled queries on the topic:
http://alan.lamielle.net/2009/03/09/environment-variables-and-gnu-screen
https://superuser.com/questions/105954/updating-screen-session-environment-variables-to-reflect-new-graphical-login
I guess a more specific sub-question would be, what is not instantiated in a screen session vs. that of a normal log-in?
A process inherits the environment variables from the parent, this means the first time you call screen (create a new one) it has a copy of all the environment variables of the parent process. Now screen adjusts/creates some variables like COLUMNS, LINES, TERM, TERMCAP, WINDOW and STY. You can also adjust or delete environment variables in your screenrc with setenv/unsetenv.
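For example, a couple of lines in ~/.screenrc (values invented) that shape the environment of every window screen creates:

```
# in ~/.screenrc
setenv EDITOR vim        # added to each new window's environment
unsetenv SSH_AUTH_SOCK   # removed before windows are created
```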
On some systems, screen is setuid or setgid in order to update utmp and wtmp; then a few more variables are removed from the environment when screen starts, such as LD_LIBRARY_PATH.
If you attach to an existing screen session your environment variables won't be copied as the screen process already exists and has it own environment variables (from the time when you started the process before). This means your changed environment variables won't be visible in the processes started by screen as they are copied from the parent process which has the old environment variables.
pyplot plt.text custom font
I am trying to make a picture in Python in a custom font for some name tags. The end goal is to have a loop that saves an image with the text styled how I want it.
AFAIK this should work, since I changed the rc, but the font of plt.text is still the default. Does anyone know how to make this work?
import matplotlib.pyplot as plt
from matplotlib import font_manager, rc
f_name = font_manager.FontProperties(fname='/Users/me/Library/Fonts/customfont.ttf').get_name()
rc('font', family=f_name)
plt.text(0, 0.6, r"$%s$" % 'test', fontsize = 50)
Here is an example of what I get. The ticks change, so rc is set correctly. But the text does not.
The problem seems to come from the $ ... $ notation, which renders with matplotlib's mathtext and overrides the font properties.
Try:
import matplotlib.pyplot as plt
from matplotlib import font_manager
# Path
path = '/path/to/custom/font.ttf'
# Create FontProperty object
font_prop = font_manager.FontProperties(fname=path)
# Apply font_prop to the text
plt.text(0, 0.6, 'Custom font !', font_properties=font_prop, fontsize = 50)
plt.show()
Output:
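If you'd rather set the font globally (as the question attempts via rc), registering the font file first also works. A runnable sketch; the custom path is a stand-in, so the demo reuses a font that ships with matplotlib:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
from matplotlib import font_manager

# Stand-in for '/Users/me/Library/Fonts/customfont.ttf': a bundled font
path = font_manager.findfont("DejaVu Sans")

font_manager.fontManager.addfont(path)                 # register the file
name = font_manager.FontProperties(fname=path).get_name()
plt.rcParams["font.family"] = name                     # new global default

t = plt.text(0, 0.6, "Custom font!", fontsize=50)      # note: no $...$
print(t.get_fontname())
```

Avoiding the `$...$` wrapper matters either way, since mathtext ignores the family set here.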
Glide: How to preload Images and get confirmation that it is stored in cache?
I have a RecyclerView, and inside each row I have an ImageView where I'm using Glide to set an image from a URL. The problem is that the first time I open and scroll the RecyclerView, images are only downloaded as the rows appear. Obviously they are saved to the cache, so next time they won't be downloaded again.
I'd like the images to be preloaded the first time, so that when a user scrolls they won't have to wait.
Please see the code:
@Override
public void onBindViewHolder(PostAdapter.MyViewHolder holder, int position) {
try {
Post post = postArrayList.get(position);
if (checkNull(post.getDesc())) {
setText(holder.postDesc, postArrayList.get(position).getDesc());
} else if (checkNull(post.getUrl())) {
setImage(holder.postImage, postArrayList.get(position).getUrl());
}
} catch (Exception e) {
e.printStackTrace();
}
}
private void setImage(ImageView image1, String str) {
image1.setVisibility(View.VISIBLE);
Glide.with(context).load(str).
diskCacheStrategy(DiskCacheStrategy.ALL)
.into(image1);
}
Please clarify the purpose of your question: a) To load images without cache b) To preload images into cache before rendering list c) Your choice? :)
@ror b) To preload images into cache before rendering list
As I understand, the question is: how to preload images with Glide? It wasn't clear before all the conversation happened.
This is actually quite simple and almost identical to loading an image into an ImageView. Glide has a preload() function available that will preload the image from a given URL. Select the DiskCacheStrategy that best fits your situation.
Glide.with(context)
.load(imageUrl)
.diskCacheStrategy(DiskCacheStrategy.SOURCE)
.preload();
Use preload(int width, int height) if you want to change the size of the resulting image.
Glide.with(context)
.load(imageUrl)
.diskCacheStrategy(DiskCacheStrategy.SOURCE)
.preload(width, height);
If your cached images do not actually cache follow this solution to add custom LruCache map.
A little test conducted
A test involved three different sizes ImageViews 100x100, 200x200 and 300x300 DP respectively. Glide was tasked to load an 8K image into 200x200dp ImageView. Then after a short delay load the same image into the 100x100dp ImageView and after another delay into 300x300dp ImageView.
The test shows that the original image was cached due to instant loading speed into 300x300dp ImageView.
Note: Toast messages pop-up right before the image loading starts.
Video proof:
(If the video link is broken try this link).
Update (a bit out of question scope): how to wait until all images are preloaded?
... rest of YourActivity class
private int imagesLoaded = 0;
private int totalImagesCount = 0;
private void preloadAllImages(ArrayList<String> imagesUrls) {
totalImagesCount = imagesUrls.size();
for (String url : imagesUrls) {
preloadImage(url);
}
}
private void preloadImage(String url) {
Glide.with(this)
.load(url)
.diskCacheStrategy(DiskCacheStrategy.ALL)
.listener(new RequestListener<Drawable>() {
@Override
public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<Drawable> target, boolean isFirstResource) {
// Handle exceptions differently if you want
imagesLoaded++;
if (imagesLoaded == totalImagesCount) {
startMainActivity();
}
return true;
}
@Override
public boolean onResourceReady(Drawable resource, Object model, Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
imagesLoaded++;
if (imagesLoaded == totalImagesCount) {
startMainActivity();
}
return true;
}
})
.preload();
}
Thanks for the reply. The thing is, inside onBindViewHolder I wrote Glide.with(context)... So it downloads and stores when I scroll, but I want it on the Splash Screen itself, so that when the user scrolls it won't download.
The code I presented above you can use wherever you like. Copy it to your SplashScreen, get postArrayList, loop through postArrayList and invoke for each image the code I presented above. You can do that even from your Application class.
So if I loop through the images inside Splash, then what will I write for the images in onBindViewHolder? The same thing? Can you post an answer please?
Within onBindViewHolder you won't need to change anything. Your images, if they already loaded and cached, will be displayed instantly. If some image are still loading - they will load faster as the downloading is already in process.
I heard one thing about Glide: it resizes the image to the dimensions of the ImageView. See this [ https://medium.com/@multidots/glide-vs-picasso-930eed42b81d ]. Then, in our case, will it download and resize again according to the adapter row's ImageView size? Because on the Splash screen it'll download the actual image, right?
It is true that Glide resizes images when loading into an ImageView. It should not load the same image twice if the size required for the next load is equal to or smaller than the size of the cached image. It looks like Glide actually caches the image at its original size and then resizes it when required. I'll post an update.
Above is image. Can you send video link?
Let me inplement in Spalsh Screen and test. I'll come back to you and if it's working then I'll accept and upvote your answer. Thanks. Please be online as well.
One doubt: how will I get to know that Glide has downloaded and cached the image from the URL? Because I'll close only after all images are cached. I'm pausing for 2 seconds using a handler.
Let us continue this discussion in chat.
Okay check my reply please
Thank you very very much dear!
Can you please check: https://stackoverflow.com/questions/62619895/how-to-get-image-path-which-is-stored-and-downloaded-by-glide-using-url-in-andro
@PriyankaSingh, ok.
Please see the chat, I had in that question.
I cannot. It is private.
I meant in Question's comments. [Not in personal discussion]
Can you help? https://stackoverflow.com/questions/62783444/why-does-multipart-pdf-is-not-able-to-upload-in-api-using-retrofit-2-in-android
| common-pile/stackexchange_filtered |
How to set default SQL options for all SQL queries in OpenLink Virtuoso
Currently I actively exploit a very attractive ability of Virtuoso: defining SQL views using SPARQL. When I use these views in a composite SQL query without any Virtuoso-specific options,
the execution time is enormous, so additional options have to be included at the end of each query, like
SELECT COUNT(*) from V1 JOIN V2 ON V1.s = V2.d ..... option (loop, order)
Is there a way to configure Virtuoso by a such manner, that the options clause will be added automatically and the SQL query itself will not include it ?
The query options like loop and order (as well as index) are fine-tuning hints for Virtuoso's built-in SQL optimizer; as such, they cannot be set globally (because they should not be).
Some queries may indeed work better with these set, but this certainly will not apply for every SQL query; in fact, in most cases, their use would produce worse execution plans. If you compose a SQL query and find Virtuoso does not make an optimal plan, this is when such on-demand fine tuning can be appropriate.
That said, it would be very helpful if any SQL queries that are producing worse execution plans were reported to us (OpenLink Software produces Virtuoso, and employs me) via the OpenLink Community Forum for any edition of Virtuoso and/or a GitHub issue for the Open Source Edition (a/k/a VOS), so we can look into them and possibly improve or fix the query optimizer.
| common-pile/stackexchange_filtered |
#include confusion and classes
I have been making several games with the Allegro API and C++. I have also been putting all my classes in one big main.cpp file. I have tried many times to make .h and .cpp files, but my big problem is that I have trouble #including things in the right place. For example, I want all my classes to access the Allegro library without #including allegro.h everywhere. Could someone please explain how to correctly #include things? In .NET, everything seems to come together, but in C++ a thing cannot be used before it is included. Is there also a way to globally include something throughout my entire program?
Thanks
I want all my classes to access the allegro library without #including allegro.h everywhere.
Why? That is how you do it in C++ land.
Could someone please explain how to correctly #include things. In .Net, everything seems to come together, but in c++ one thing cannot be used before it is included
Conceptually, in .NET, it is not much different at all. You still have to place "using <Namespace>;" at the top. The difference there is that, in .NET, you could also write this every time if you wanted to:
void Foo( System.Drawing.Drawing2D.BitmapData bData ) { }
A common way to do this is to have a master include file that includes all of the others in the correct order. This works especially well if you use precompiled headers:
in precomp.h
#include <stdio.h>
#include <allegro.h>
.. etc.
in myfile.cpp
#include "precomp.h"
in myfile2.cpp
#include "precomp.h"
and so on.