Module parse failed: Unexpected token (7:8) when using an external JSX component in React
I have an integrated project. It contains some pages (each page folder is a React app). Outside these folders (at their root), I want to create some custom components and use them in some of the pages.
The structure is as follow:
node_modules
AppPage1
AppPage2
AppPage3
SharedComponents
MyComponent.jsx
package.json
Inside each AppPage's package.json, I reference SharedComponents in the dependencies:
"app-components": "file:./SharedComponents",
Each AppPage can import the component using this:
import {MyComponent} from 'app-components/MyComponent'
But the following error appears:
ERROR in ../SharedComponents/MyComponent.jsx 7:8
Module parse failed: Unexpected token (7:8)
File was processed with these loaders:
<EMAIL_ADDRESS>../node_modules/source-map-loader/dist/cjs.js
You may need an additional loader to handle the result of these loaders.
What should I do to solve this problem?
Jade: each val in times returns "no such function: val"
I have the following Jade:
template(name='hello')
button(class="ui grey basic button", id="clickme")
i(class="sun icon")
| Times
div(class="ui list")
each val in times
div.item= val
Where times is a session variable being called from a JS helper method.
I'm running a Meteor server, using Semantic-UI as my design framework.
When I try to use this Jade, the page inspector console (in Chrome) returns
Uncaught Error: No such function: val
I'm not sure what to fix, as I'm following the Jade (and Meteor-Jade) documentation to the letter.
Thanks!
can you provide your js code?
The error seems to suggest that the val function isn't scoped so that it can be read. The JS code would certainly be useful in seeing where/what the issue is.
each doesn't work with in. Try:
Jade:
div(class="ui list")
each times
div.item= val
Js:
if (Meteor.isClient) {
Session.set('times', [{val: 'value1'}, {val: 'value2'}]);
Template.hello.helpers({
times: function () {
return Session.get('times');
}
});
}
Or if you have an array instead of object in your Session variable, you can use this:
Jade:
div(class="ui list")
each times
div.item= this
Js:
if (Meteor.isClient) {
Session.set('times', ['value1', 'value2']);
...
}
Having a problem passing data with curl into established MLflow models
I am following the MLflow documentation on saving and serving models (https://www.mlflow.org/docs/latest/quickstart.html) and came across a problem while trying to pass data into an established model.
command:
python sklearn_logistic_regression/train.py
mlflow models serve -m runs:/<RUN_ID>/model
curl -d '{"columns":["x"], "data":[[1], [-1]]}' -H 'Content-Type: application/json; format=pandas-split' -X POST localhost:5000/invocations
error: This predictor only supports the following content types, ['text/csv', 'application/json', 'application/json; format=pandas-records', 'application/json; format=pandas-split', 'application/json-numpy-split']. Got 'application/x-www-form-urlencoded'.
it's interesting. if I execute your command against nc -l localhost 5000, I see all headers set correctly
what version of curl do you have? execute curl --version
and what version of MLflow?
curl 7.55.1 (Windows) libcurl/7.55.1 WinSSL
mlflow, version 1.16.0
Changing the quotation marks from ' to " solved it for me:
curl -d "{\"columns\":[0],\"index\":[0,1],\"data\":[[1],[-1]]}" -H "Content-Type: application/json" localhost:5000/invocations
Adding the escape characters for double quotes was part of the fix, thank you very much! For mlflow >2.0, other fixes are needed - please see my answer.
Update for mlflow > 2.0 (shoutout to MMLovelace's answer with nested escaped double quotes):
curl -d "{\"columns\":[0],\"index\":[0,1],\"data\":[[1],[-1]]}" -H "Content-Type: application/json" localhost:5000/invocations
{"error_code": "BAD_REQUEST", "message": "The input must be a JSON
dictionary with exactly one of the input fields {'dataframe_split',
'dataframe_records', 'instances', 'inputs'}. Received dictionary with
input fields: []. IMPORTANT: The MLflow Model scoring protocol has
changed in MLflow version 2.0. If you are seeing this error, you are
likely using an outdated scoring request format. To resolve the error,
either update your request format or adjust your MLflow Model's
requirements file to specify an older version of MLflow (for example,
change the 'mlflow' requirement specifier to 'mlflow==1.30.0'). If you
are making a request using the MLflow client (e.g. via
mlflow.pyfunc.spark_udf()), upgrade your MLflow client to a version
>= 2.0 in order to use the new request format. For more information about the updated MLflow Model scoring protocol in MLflow 2.0, see
https://mlflow.org/docs/latest/models.html#deploy-mlflow-models."}
You must use the exact fields specified. In my case, with a toy example that takes in a list, I was able to use the following:
curl -d "{\"inputs\":[[0.09178,0.0,4.05,0.0,0.51,6.416,84.1,2.6463,5.0,296.0,16.6,395.5,9.04]]}" -H "Content-Type: application/json" http://<IP_ADDRESS>:5000/invocations
In a more complicated case that expects a dataframe I was able to do it by:
curl -d "{\"inputs\":{\"dayofyear\":[1,2,3], \"hour\":[1,2,3], \"dayofweek\":[1,2,3], \"quarter\":[1,2,3], \"month\":[1,2,3], \"year\":[1,2,3]}}" -H "Content-Type: application/json" http://<IP_ADDRESS>:5000/invocations
{"predictions": [12695.537109375, 12193.8173828125, 12634.658203125]}
Hopefully, this saves others a couple of hours!
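For MLflow >= 2.0, the payload can also be built programmatically. A minimal Python sketch (the endpoint and field names follow the error message quoted above; the actual POST is shown only as a comment, since it requires a running server and the `requests` package):

```python
import json

def dataframe_split_payload(columns, rows):
    """Build the MLflow >= 2.0 scoring payload in 'dataframe_split' form."""
    return json.dumps({"dataframe_split": {"columns": columns, "data": rows}})

payload = dataframe_split_payload(["x"], [[1], [-1]])
print(payload)

# Posting it (endpoint taken from the thread):
# import requests
# requests.post("http://localhost:5000/invocations", data=payload,
#               headers={"Content-Type": "application/json"})
```

Because the JSON is built and serialized in code, there is no quoting or escaping problem on Windows shells at all.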
Defaulting a column to the 1st of next month
I'm trying to find the most elegant means of defaulting a column's value to the 1st of next month. The best I've been able to come up with is this:
ALTER TABLE Foo ADD
Bar datetimeoffset(0) NOT NULL DEFAULT(DATEFROMPARTS(DATEPART(year, DATEADD(month, 1, GETDATE())), DATEPART(month, DATEADD(month, 1, GETDATE())), 1))
Whilst this works, it feels really clunky because I need to calculate DATEADD(month, 1, GETDATE()) twice and because I need to do the DATEFROMPARTS dance.
Is there a simpler way to achieve my goal?
FinalDate = CURRENT_DATE - DAY(CURRENT_DATE) + 1 DAY + 1 MONTH
@Akina Sure, but when you convert that to a compilable expression it still ends up very unwieldy...
What's the point of saving bytes of query text? Well, use DATEFROMPARTS(YEAR(GETDATE())+(MONTH(GETDATE())/12), 1+(MONTH(GETDATE())%12), 1)
The point is readability/not wasting developer time, not "saving bytes"
I think this would be the simplest way to do it.
DATEADD(DD,1,EOMONTH (GETDATE()))
So you query would be like
ALTER TABLE Foo ADD
Bar datetimeoffset(0) NOT NULL DEFAULT DATEADD(DD, 1, EOMONTH(GETDATE()));
Thanks!
Interesting - thanks! I'll try this out ASAP.
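The EOMONTH-plus-one-day idea is easy to sanity-check outside SQL Server; here is the same computation sketched in Python (an illustration only, not T-SQL):

```python
from datetime import date, timedelta

def first_of_next_month(d):
    """Mirror DATEADD(DD, 1, EOMONTH(d)): the day after d's month-end."""
    # Day 28 exists in every month, so adding 4 days is guaranteed to land
    # in the next month; snapping to day 1 then gives its first day.
    in_next_month = d.replace(day=28) + timedelta(days=4)
    return in_next_month.replace(day=1)

print(first_of_next_month(date(2023, 1, 31)))  # 2023-02-01
print(first_of_next_month(date(2023, 12, 5)))  # 2024-01-01
```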
Why does my Perl unit test fail in EPIC but work in the debugger?
Has anyone ever experienced a unit test that fails and when they tried to debug it to find out where the failure was occurring, the unit test succeeds when running the code in the debugger?
I'm using Eclipse 3.5.1 with EPIC 0.6.35 and ActiveState ActivePerl 5.10.0. I wrote module A and module B, both with multiple routines. A routine in module B calls a bunch of routines from module A. I'm adding mock objects to my module B unit test file to try to get more complete code coverage on module B, where the code in module B checks whether the calls to module A's routines fail or succeed. So I added some mock objects to my unit test to force some of the module A routines to return failures, but I was not getting the failures as expected. When I debugged my unit test file, the calls to the module A routine did fail as expected (and my unit test succeeds). When I run the unit test file normally without debugging, the call to the mocked module A routine does not fail as expected (and my unit test fails).
What could be going on here? I'll try to post a working example of my problem if I can get it to fail using a small set of simple code.
ADDENDUM: I got my code whittled down to a bare minimum set that demonstrates my problem. Details and a working example of the problem follows:
My Eclipse project contains a "lib" directory with two modules ... MainModule.pm and UtilityModule.pm. My Eclipse project also contains at the top level a unit test file named MainModuleTest.t and a text file called input_file.txt which just contains some garbage text.
EclipseProject/
MainModuleTest.t
input_file.txt
lib/
MainModule.pm
UtilityModule.pm
Contents of the MainModuleTest.t file:
use Test::More qw(no_plan);
use Test::MockModule;
use MainModule qw( mainModuleRoutine );
$testName = "force the Utility Module call to fail";
# set up mock utility routine that fails
my $mocked = new Test::MockModule('UtilityModule');
$mocked->mock( 'slurpFile', undef );
# call the routine under test
my $return_value = mainModuleRoutine( 'input_file.txt' );
if ( defined($return_value) ) {
# failure; actually expected undefined return value
fail($testName);
}
else {
# this is what we expect to occur
pass($testName);
}
Contents of the MainModule.pm file:
package MainModule;
use strict;
use warnings;
use Exporter;
use base qw(Exporter);
use UtilityModule qw( slurpFile );
our @EXPORT_OK = qw( mainModuleRoutine );
sub mainModuleRoutine {
my ( $file_name ) = @_;
my $file_contents = slurpFile($file_name);
if( !defined($file_contents) ) {
# failure
print STDERR "slurpFile() encountered a problem!\n";
return;
}
print "slurpFile() was successful!\n";
return $file_contents;
}
1;
Contents of the UtilityModule.pm file:
package UtilityModule;
use strict;
use warnings;
use Exporter;
use base qw(Exporter);
our @EXPORT_OK = qw( slurpFile );
sub slurpFile {
my ( $file_name ) = @_;
my $filehandle;
my $file_contents = "";
if ( open( $filehandle, '<', $file_name ) ) {
local $/=undef;
$file_contents = <$filehandle>;
local $/='\n';
close( $filehandle );
}
else {
print STDERR "Unable to open $file_name for read: $!";
return;
}
return $file_contents;
}
1;
When I right-click on MainModuleTest.t in Eclipse and select Run As | Perl Local, it gives me the following output:
slurpFile() was successful!
not ok 1 - force the Utility Module call to fail
1..1
# Failed test 'force the Utility Module call to fail'
# at D:/Documents and Settings/[SNIP]/MainModuleTest.t line 13.
# Looks like you failed 1 test of 1.
When I right click on the same unit test file and select Debug As | Perl Local, it gives me the following output:
slurpFile() encountered a problem!
ok 1 - force the Utility Module call to fail
1..1
So, this is obviously a problem. Run As and Debug As should give the same results, right?!?!?
Both Exporter and Test::MockModule work by manipulating the symbol table. Things that do that don't always play nicely together. In this case, Test::MockModule is installing the mocked version of slurpFile into UtilityModule after Exporter has already exported it to MainModule. The alias that MainModule is using still points to the original version.
To fix it, change MainModule to use the fully qualified subroutine name:
my $file_contents = UtilityModule::slurpFile($file_name);
The reason this works in the debugger is that the debugger also uses symbol table manipulation to install hooks. Those hooks must be getting installed in the right way and at the right time to avoid the mismatch that occurs normally.
It's arguable that it's a bug (in the debugger) any time the code behaves differently there than it does when run outside the debugger, but when you have three modules all mucking with the symbol table it's not surprising that things might behave oddly.
Perfect! Thanks! Now my statement coverage is well within the acceptable range!
Does your mocking manipulate the symbol table? I've seen a bug in the debugger that interferes with symbol table munging. Although in my case the problem was reversed; the code broke under the debugger but worked when run normally.
No, my mocked routine just returns an undef so that my error handling code will run. I added a working example to my question so hopefully some smart Perl guru will be able to witness my problem first hand and tell me how to fix it.
Actually, Test::MockModule does manipulate the symbol table. I've added another answer that addresses the revised question, but am leaving this one for reference.
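The same pitfall exists outside Perl. As an illustrative analogy (not from this thread), Python's import aliasing behaves identically: mocking the defining module does not affect an alias that was copied at import time, so you must patch where the name is looked up:

```python
import types
from unittest import mock

# Stand-ins for UtilityModule and MainModule, built as real module objects.
utility = types.ModuleType("utility")
utility.slurp_file = lambda name: "contents"

main = types.ModuleType("main")
main.slurp_file = utility.slurp_file          # like `use UtilityModule qw(slurpFile)`
main.routine = lambda: main.slurp_file("x")   # calls the imported alias

# Mocking the defining module does NOT affect the copied alias:
with mock.patch.object(utility, "slurp_file", return_value=None):
    print(main.routine())   # prints: contents

# Patching where the name is looked up works:
with mock.patch.object(main, "slurp_file", return_value=None):
    print(main.routine())   # prints: None
```

This is exactly the fully-qualified-name fix from the accepted answer, seen from the other direction: either call through the defining module, or patch the alias itself.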
How to open Location services screen from setting screen?
I want to open location service screen programmatically to turn on service.
Step 1: Click on the project name >> Target >> Info >> URL Types
Step 2:
-(IBAction)openSettingViewToEnableLocationService:(id)sender
{
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"prefs:root=LOCATION_SERVICES"]];
}
Apple prohibits the use of this API now, you might not want to use it anymore since it may lead to rejection by app review: https://stackoverflow.com/questions/49059668/appstore-rejection-performance-software-requirements-prefsroot-graphicsser
App Rejected by Apple.
I have tried all the above answers; they don't work on iOS 11, they just open the Settings page and not the app settings. Finally, this works:
UIApplication.shared.open(URL(string:UIApplicationOpenSettingsURLString)!)
Swift 4.2:
UIApplication.shared.open(URL(string:UIApplication.openSettingsURLString)!)
Refer: https://developer.apple.com/documentation/uikit/uiapplicationopensettingsurlstring?language=swift
Yes. Avoid use of "prefs:root" or "App-Prefs:root" in your app, otherwise the app will be rejected from the App Store. Just open the Settings page.
This will open your app setting in Settings app, not the 'Location Services' .
It went to the 'Location Services' for me!
I just saw that tinder takes you directly to the screen location services. How do they do that? Been searching through all the internet.
Swift 4.2
Go straight to YOUR app's settings like this:
if let bundleId = Bundle.main.bundleIdentifier,
let url = URL(string: "\(UIApplication.openSettingsURLString)&path=LOCATION/\(bundleId)")
{
UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
Easier:
UIApplication.shared.openURL(URL.init(string: UIApplicationOpenSettingsURLString)!)
@TàTruhoada it's useless if Location Services are disabled; what you wrote here is for the app's location permission, not for enabling/disabling Location Services itself. You can't change location permissions for your app if Location Services itself is disabled.
As of May 25, 2018 our app got rejected because of using prefs:root under Guideline 2.5.1 - Performance - Software Requirements
question asks about going to the location services page not the app settings page
I just saw that tinder takes you directly to the screen location services. How do they do that? Been searching through all the internet.
Although it's not the precise answer (Monika asked about system services) but it's exactly the answer that I was looking for!
You can open it directly like using below code,
But first set URL Schemes in Info.plist's URL Type Like:
Then write below line at specific event:
In Objective - C :
[[UIApplication sharedApplication] openURL:
[NSURL URLWithString:@"prefs:root=LOCATION_SERVICES"]];
In Swift :
UIApplication.sharedApplication().openURL(NSURL(string: "prefs:root=LOCATION_SERVICES")!)
Hope this will help you.
After implementing it will it pass App Store review guidelines?
In iOS 10 I needed to use the URL App-Prefs:root=Privacy&path=LOCATION.
@MMSousa in IOS 11 URL(string: "App-prefs:root=LOCATION_SERVICES") still works without problems...
As of May 25, 2018 our app got rejected because of using prefs:root under Guideline 2.5.1 - Performance - Software Requirements
did anyone got rejected for using that in 2024?
Do you want to be safe? use UIApplicationOpenSettingsURLString, which will open the app settings, without deep-link.
Using App-prefs your app will be rejected, as many sub comments said.
https://github.com/mauron85/cordova-plugin-background-geolocation/issues/394
Swift 5+
An easy and direct way; your application's settings page will open:
if let BUNDLE_IDENTIFIER = Bundle.main.bundleIdentifier,
let url = URL(string: "\(UIApplication.openSettingsURLString)&path=LOCATION/\(BUNDLE_IDENTIFIER)") {
UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
Please share the error message and why it was rejected; my app is still live with this code.
Guideline 2.5.1 Performance - Software Requirements
Your app uses the "prefs:root=" non-public URL scheme, which is a private entity. The use of non-public APIs is not permitted on the App Store because it can lead to a poor user experience should these APIs change.
Specifically, your app uses the following non-public URL scheme
app-prefs:root=privacy&path=location
Continuing to use or conceal non-public APIs in future submissions of this app may result in the termination of your Apple Developer account, as well as removal of all associated apps from the App Store
https://stackoverflow.com/questions/50040558/ios-app-store-rejection-your-app-uses-the-prefsroot-non-public-url-scheme
First:
Add URL
Go to Project settings --> Info --> URL Types --> Add New URL Schemes
See image below:
Second:
Use below code to open Location settings:
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"prefs:root=LOCATION_SERVICES"]];
referred from: https://stackoverflow.com/a/35987082/5575752
@Joe Susnick Do you have a solution for iOS 10? Thanks for any help
@EricBrunner yes, I posted above but the url: App-Prefs:root=Privacy&path=LOCATION worked for me.
@Joe Susnick Great. Do I have to distinquish between iOS 8,9 and 10.x or does that work in all versions? Thanks again for your support!
@EricBrunner I've only tested it on 10 but I'm pretty confident it'll work on 9. As far as 8 goes, not sure.
As of May 25, 2018 our app got rejected for using prefs:root under Guideline 2.5.1 - Performance - Software Requirements
If you call locationManager.startUpdatingLocation() and location is disabled on your iPhone, it automatically shows an alert view with the option to open and activate location.
This worked exactly one time, the first time I tried it. Does iOS remember in any way which option I chose? I know startUpdatingLocation() showed me a standard dialog to navigate to the Location Services settings, and it navigated into the system-wide Location Services settings. But it did that only the first time I called it. Any idea on this?
To reply to the previous comment: the Apple documentation clearly states that this popup can only be shown ONCE per app lifetime. If you want it to re-appear, you need to restart the app. There is no way to work around this :(
This solution works for me, but I didn't know App Store will reject my app or not
if let bundleId = Bundle.main.bundleIdentifier,
let url = URL(string: "\(UIApplication.openSettingsURLString)&root=Privacy&path=LOCATION/\(bundleId)") {
UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
Did the app get rejected or not by using the above option?
Actually there's a much simpler solution. It'll show your app settings with location services/camera access, etc.:
func showUserSettings() {
guard let urlGeneral = URL(string: UIApplicationOpenSettingsURLString) else {
return
}
UIApplication.shared.open(urlGeneral)
}
You can't enable location and change permissions for your app if Location Services itself is disabled in the system, so first you have to enable Location Services (see the screenshot in the question).
Eureka: Forget about URLs. Use the requestWhenInUseAuthorization method of CLLocationManager to open the native prompt. This is what Tinder and other apps do. Took me a while to figure it out.
This will open a prompt to take you EXACTLY to the Location Services screen:
let locationManager = CLLocationManager()
locationManager.requestWhenInUseAuthorization()
This doesn't seem to do anything if the user has previously disallowed location permissions for the app. From what I can find, this is only supposed to work until the user has made a choice about the location services permissions for the app.
If he explicitly did so, yes, I think you're right. But if it's the first time he's been asked, I think this should work, even before making a choice, but I could be wrong. In general, after the user decides he does not want to give permission, the only thing left for the app to do is let him know he has to manually enable permissions again. Tinder and other famous apps do that that way too.
The point was that this doesn't always work as described. Once the user has made a permission choice, this no longer does anything visible in the app. Even uninstalling the app and reinstalling it won't always change the behavior since the system caches the user permission settings for the app.
I see. You have a point. Do you think it's nonetheless useful to leave this answer here? Otherwise I maybe should delete it.
The answer is useful. It just needed the extra little detail about not always showing the question to the user.
After adding prefs as a url type, use the following code to go directly to the location settings of an application.
if let url = URL(string: "App-prefs:root=LOCATION_SERVICES") {
UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
As of May 25, 2018 our app got rejected because of this under Guideline 2.5.1 - Performance - Software Requirements
@Ted even our app got rejected due to this. Do you know alternative to this? or workaround to get this working ? help would be appreciated
Displaying Content with laravel template without blade
I have this controller method
class Account_Controller extends Base_Controller
{
public $layout = 'layouts.default';
public function action_index($a,$b)
{
$data['a'] = $a;
$data['b'] = $b;
$this->layout->nest('content', 'test',$data);
}
}
And this is my layout
<div id = "content">
<?php echo Section::yield('content'); ?>
</div>
And this is my test.php
echo $a;
echo '<br>';
echo $b;
echo 'this is content';
When i access this
http://localhost/myproject/public/account/index/name/email
I get my layout loaded but test.php is not loaded. How can I load the content in my template? I don't want to use Blade.
When you nest a view within another, its content is defined as a simple variable. So, simply output it:
<?php echo $content ?>
Section is used when you need to change something on your layout (or any parent view really) from within the child view. For instance:
// on layout.php
<title><?php echo Section::yield('title') ?></title>
// on test.php
<?php Section::start('title'); ?>
My Incredible Test Page
<?php Section::stop(); ?>
<div class="test_page">
...
</div>
Great! It is so simple; I never realized it could be this simple. Thanks!
I think you need render() for it; not sure, maybe partial loading:
<div class="content">
<?php echo render('content.test'); ?>
</div>
Look this sample for nesting views: http://laravel.com/docs/views#nesting-views
public function action_dostuff()
{
$view = View::make('controller.account');
// dump the view object to inspect its variables
var_dump($view);
$view->test = 'some value';
return $view;
}
Or use instead blade: Templating in Laravel
Perform a multi-test run in IntelliJ IDEA 2022.2.2 by calling single tests (as if the tests were variables)
I'm an automation tester and I write automated tests in Gherkin + Selenium and Java.
I have different scenarios to test, and I have lots of tests related to the same feature. What I want to do is create a single feature file which recalls the automated tests related to the same feature.
Let's give an example
Sprint X: there are 3 automated tests that analyze the Login feature.
- login with wrong credential
- login with expired credential
- login with correct credential
Sprint X: there are 2 automated tests that analyze the Login feature.
- login with blabla
- login with xyz
The expectation would be to create a grouped one, called LOGIN.feature, by calling the single tests like we usually do with variables.
Currently I create a new feature file (which I'll call the "grouped one") and copy and paste the Gherkin steps of the automated tests from these sprints into it. The real problem is: if something changes in a single test, I need to fix both that test and the grouped one.
I'm asking your help.
Thank you in advance,
Best Regards
Instead of using "grouped" scenarios, you can tag individual Features or Scenario elements with their group functionality.
For example:
@login
Feature: Login with LDAP
Scenario: ...
...
Scenario: ...
...
Feature: Password authentication
@login
Scenario: Login with username and password
...
Scenario: Reset password
...
Then you use your test tooling to run a specific group of scenarios matching a tag. How you do this depends on your tooling.
If you are using Maven with cucumber-junit-platform-engine you can use:
mvn verify -Dgroups="login"
This will run the 3 scenarios from the example above. The value of groups is a JUnit 5 tag expression so more complicated selections are possible. It would be prudent to read the documentation.
If you are using Maven with cucumber-junit or cucumber-testng then you can use:
mvn verify -Dcucumber.filter.tags=@login
This too will run the 3 scenarios from the example above. The value of cucumber.filter.tags is a Cucumber tag expression so more complicated selections are possible.
The previous two examples use Maven. If you are using the cucumber-junit-platform-engine you can also use the IncludeTags annotation with the JUnit Suite to select tests declaratively. For example:
@Suite
@IncludeEngines("cucumber")
@SelectClasspathResource("com/example/features")
@IncludeTags("login")
public class RunCucumberTest {
}
Thanks for the answer, but I don't get how to do it. I've shared a screenshot to explain my problem better.
How can I shorten page references from specific to a range (eg. pg 1-3,6-7,9 to pg 1-9)
I have a list of page references that need to be converted from specific references to min-max references. How can I accomplish this?
Examples and desired results:
Current Desired Result
pg 1-3,6-7,9 pg 1-9
pg 6,7 pg 6-7
pg 1-3 pg 1-3
What do you expect to happen with something like pg 1-3,5,9 ?
You can use the following to match:
(pg\s+)(\d+).*(\d+)
And replace with:
$1$2-$3
See DEMO
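The same substitution sketched in Python. Note one adjustment as an assumption on top of the answer above: the final group is anchored at the end of the string so that multi-digit endpoints (e.g. "pg 1-3,12") are captured whole, which a bare greedy `.*(\d+)` would not do:

```python
import re

def collapse_pages(ref):
    """Collapse 'pg 1-3,6-7,9' to 'pg 1-9': keep the first and last numbers."""
    # (\d+)$ anchors the last number at the end, so '12' stays '12', not '2'.
    return re.sub(r"(pg\s+)(\d+).*?(\d+)$", r"\1\2-\3", ref)

print(collapse_pages("pg 1-3,6-7,9"))  # pg 1-9
print(collapse_pages("pg 6,7"))        # pg 6-7
print(collapse_pages("pg 1-3"))        # pg 1-3
```

A single-number reference like "pg 9" contains no second number, so the pattern does not match and the string is left unchanged.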
Malformed xml after regex replacement
I'm trying to parse an XML file with Java.
Before I start parsing, I need to replace (encode) some text between the <code> and </code> tags.
Therefore I read the contents of the file into a String:
File xml = new File(this.xmlFileName);
final BufferedReader reader = new BufferedReader(new FileReader(xml));
final StringBuilder contents = new StringBuilder();
while (reader.ready()) {
contents.append(reader.readLine());
}
reader.close();
final String stringContents = contents.toString();
After reading the XML into the string, I encode the values using Pattern and Matcher:
StringBuffer sb = new StringBuffer();
Pattern p = Pattern.compile("<code>(.*?)</code>", Pattern.DOTALL);
Matcher m = p.matcher(stringContents);
while (m.find()) {
//Encode text between <code> and </code> tags
String valueFromTags = m.group(1);
byte[] decodedBytes = valueFromTags.getBytes();
new Base64();
String encodedBytes = Base64.encodeBase64String(decodedBytes);
m.appendReplacement(sb, "<code>" + encodedBytes + "</code>");
}
m.appendTail(sb);
String result = sb.toString();
After the replacements are done, I try to read this String into the XML parser:
DocumentBuilderFactory dbFactory = DocumentBuilderFactory
.newInstance();
DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
Document doc = dBuilder.parse(result);
doc.getDocumentElement().normalize();
But then I get this error: java.net.MalformedURLException: no protocol: <root> <application> <interface>...
As you can see, after I read the File into a String, for some reason a lot of spaces are added where there were newlines or tabs in the original file. So I think that's the reason why I get this error. Is there any way I can solve this?
Obligatory link. I think this is a prime example of why you should never do this.
What would be the right way to encode text between the and tags then? Because I can't parse it before encoding it, it contains special characters like < and > and the parser will give errors because of that.
But note that the problem that the parser can't parse the xml in my example has something to do with the way how I read it into a String using BufferedReader. The spaces are already there before the regex changement.
Well, you don't have valid XML then. Find some valid XML.
If I let the parser read the File object instead of the String read in by BufferedReader, the parser works and there aren't any errors. So the XML is valid. But in order to do the replacements I have to read it into a String first, and that's where it goes wrong.
I think you still need to check that readLine has not returned null (and re-append the newline that readLine strips):
String line;
while ((line = reader.readLine()) != null) {
    contents.append(line).append('\n');
}
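Echoing the comments above, parsing the document and transforming element text is safer than a regex replacement. For illustration only (in Python rather than the question's Java, and assuming the input really is well-formed XML), the same base64 encoding of `<code>` contents done with a real XML parser:

```python
import base64
import xml.etree.ElementTree as ET

xml_text = "<root><code>print('hi')</code><p>untouched</p></root>"
root = ET.fromstring(xml_text)

# Base64-encode the text of every <code> element in place.
for code in root.iter("code"):
    code.text = base64.b64encode(code.text.encode()).decode()

result = ET.tostring(root, encoding="unicode")
print(result)  # <root><code>cHJpbnQoJ2hpJyk=</code><p>untouched</p></root>
```

The parser handles newlines, tabs, and nesting itself, so nothing in the whitespace survives or breaks the round trip. If the `<code>` sections contain raw `<` and `>` characters, the file is not valid XML in the first place and would need escaping (or CDATA) at the source.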
What in-depth spatial database systems tutorials exist?
Is there a good tutorial that explains in depth the internals of a GIS and a spatial database system such as PostGIS with examples and without any background assumption? I'm specifically looking for answers to questions such as:
What is a geometry in GIS?
How is it represented?
Given a lat/long coordinate, what are the operations I need to perform on it to bring it to a state where I can call something like ST_contain PostGIS to check if the polygon contains the lat/long coordinate?
I have some experience with general relational database systems but no background with GIS or the spatial database paradigm.
You could try http://www.bostongis.com/; they have a number of guides for PostGIS, and the links provided by the other posters are also excellent tutorials. For general GIS information, ESRI's website (www.esri.com) has some excellent tutorials and also provides training videos for their software.
Other links to try
http://geospatial.ucdenver.edu/foss4g/resources/webinars
While the PostGIS in Action book isn't a step by step tutorial, it is a very handy guide, and I recommend it to anyone who wants to truly understand PostGIS and get beyond just the basics of PostGIS.
Perhaps the Intro to PostGIS would fit the bill, http://postgis.net/workshops/postgis-intro/
just crisscrossed this post.
Try the SpatiaLite Tutorial, it is very well done
Definitely you should take a look at the very good documentation from PostGIS. Start with the 1.5 version for vector stuff and general GIS questions.
http://postgis.net/docs/manual-1.5/ch04.html#PostGIS_Geography AND http://postgis.net/docs/manual-1.5/reference.html#PostGIS_Types
http://postgis.net/docs/manual-1.5/ch04.html#OpenGISWKBWKT AND http://postgis.net/docs/manual-1.5/ch04.html#EWKB_EWKT
http://postgis.net/docs/manual-1.5/PostGIS_FAQ.html#id420854
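On the ST_Contains part of the question: once the point and the polygon share an SRID (in PostGIS, typically something like ST_Contains(poly, ST_SetSRID(ST_MakePoint(lon, lat), 4326))), containment reduces to a point-in-polygon test. A minimal even-odd ray-casting sketch of that idea (an illustration of the geometric predicate, not PostGIS's actual implementation):

```python
def point_in_polygon(lon, lat, ring):
    """Even-odd ray casting: is (lon, lat) inside the simple polygon ring?
    ring is a list of (lon, lat) vertices in one common coordinate system."""
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        # Count edges crossing the horizontal ray to the right of the point.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```

An odd number of crossings means the point is inside; PostGIS additionally handles geodetic coordinates, holes, and spatial indexes, which this sketch ignores.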
How to turn off "<em class="placeholder"> </em>" surrounding vars in the output of t()?
In my module I display a menu inside a block using drupal_render(menu_tree('my-menu')).
In the output the variables printed with t() are surrounded by <em class="placeholder"> </em>.
Drafts <em class="placeholder">(4)</em>
Inbox <em class="placeholder">(2)</em>
How do I turn this off?
There is actually an excellent comment in the documentation for this. Pasting here for completeness
There are three styles of placeholders:
!variable, which indicates that the text should be inserted as-is. This is useful for inserting variables into things like e-mail.
$message = t("If you don't want to receive such e-mails, you can change your settings at !url.", array('!url' => l(t('My account'), "user/$account->uid")));
@variable, which indicates that the text should be run through check_plain, to escape HTML characters. Use this for any output that's displayed within a Drupal page.
$title = t("@name's blog", array('@name' => $account->name));
%variable, which indicates that the string should be HTML escaped and highlighted with theme_placeholder() which shows up by default as emphasized.
$message = t('%name-from sent %name-to an e-mail.', array('%name-from' => $user->name, '%name-to' => $account->name));
Sorry, I should have checked that doc :$ I searched for the HTML, not for the function :$ Thanks!
Do I need to modify the app if I connected CloudFront with S3?
So, I have an app with people who can post videos, make a profile etc.
I don't have so much experience with AWS, but I'm confused. I followed a tutorial about "How to connect S3 with CloudFront" and I don't know if I need to do more.
So, on Origin domain I selected my S3 Bucket and on "Allowed HTTP" only GET, HEAD. And the other settings like all edge locations.
My question is, my app still has performance issues. Do I need to modify anything else? Like, when I go to my app and tap on the share button, the share link is s3bucket......; I didn't change it to my website yet.
You want to serve your content through cloudfront with your website url. Is that correct?
If by website you mean my domain and not that address (s3bucket...), then no. I want to know if I need to modify something else in my app for CloudFront to work. If I connected CloudFront with S3, it means that I have a CDN, right? And my videos from the app should work with the CDN.
Yes, you can access any content in your bucket using the CDN URL if the bucket allows public read.
Yes, the bucket is public. Thank you! I thought I need to modify something in my app for CloudFront to work properly!
Java: Sorting a 2D array by first (int filled) column
I want to sort an Object[][] by the first column which consists of ints.
Take, for example, this array:
Object[][] array = {{1, false, false}, {2, true, false}, {0, true, true}};
Now, I want to create a new array whose rows are sorted in a way that the first element of each row ascends:
Object[][] newArray = ...;
System.out.println(Arrays.deepToString(newArray));
What it should print: {{0, true, true}, {1, false, false}, {2, true, false}}
Thank you.
Arrays.sort(array, new Comparator<Object[]>() {
@Override
public int compare(Object[] o1, Object[] o2) {
Integer i1 = (Integer) (o1[0]);
Integer i2 = (Integer) (o2[0]);
return i1.compareTo(i2);
}
});
The Arrays class in Java has a sort method that takes a Comparator as its second parameter.
You can pass in the key to be considered for sorting. In your case, we need to sort based on the first element of each row of the input 2D array; the solution would look like:
Arrays.sort(array, Comparator.comparingInt(a -> a[0]));
https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html
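Putting the Comparator.comparingInt approach into a complete, runnable sketch (the class and method names here are just for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;

public class SortByFirstColumn {

    // Sorts the rows in place, ordering by the Integer in column 0.
    static void sortRows(Object[][] rows) {
        Arrays.sort(rows, Comparator.comparingInt(row -> (Integer) row[0]));
    }

    public static void main(String[] args) {
        Object[][] array = {{1, false, false}, {2, true, false}, {0, true, true}};
        sortRows(array);
        // prints [[0, true, true], [1, false, false], [2, true, false]]
        System.out.println(Arrays.deepToString(array));
    }
}
```

Note that Comparator.comparingInt compares the unboxed int values directly, which avoids the overflow risk of subtraction-based comparators.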
You can use Array.sort method using a custom comparator as follows
Arrays.sort(array, new Comparator<Object[]>() {
@Override
public int compare(Object []o1, Object []o2) {
return (Integer)o1[0] - (Integer)o2[0];
}
});
The functionality for comparing is implemented in the Integer class, so you should not implement your own solution (subtraction can overflow). Use the compareTo method.
Trigger UIApplicationWillTerminateNotification for "stop" Xcode action
I noticed that UIApplicationWillTerminateNotification is not getting sent if I stop the executable from Xcode. Is there a clean way to get it to call?
Use this code below. It will work great!
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(applicationWillTerminate:) name:UIApplicationWillTerminateNotification object:nil];
lambda/function pointer with std::function causes error
Update:
These errors appear to be from CODAN, Eclipse's code analyzer, that appear in the Problems view and are also underlined in red on the line the error refers to. Oddly enough building the project successfully creates the executable, and the build does not report these errors in the Console. I can run that executable and get the expected output.
So now the question becomes: how to have Eclipse's CODAN recognize these uses of lambdas and function pointers with std::function are not errors?
Original Question:
In the following C++ code, it compiles fine directly with g++, but causes errors in Eclipse CDT. How can I get my Eclipse project to recognize and allow the use of lambdas (or function pointers) with std::function?
Note: see Edit at bottom of question for update
#include <functional>
#include <iostream>
void foo(int i) {
std::cout << i << std::endl;
}
void function_ptr(void (*bar)(int)) {
bar(1);
}
void function_std(std::function<void(int)> baz) {
baz(2);
}
int main() {
// these function calls are ok
function_ptr(&foo);
function_ptr([](int i) -> void {
foo(i);
});
// these function calls cause errors
function_std(&foo);
function_std([](int i) -> void {
foo(i);
});
// these assignments cause errors
std::function<void(int)> f1 = foo;
std::function<void(int)> f2 = [](int i) -> void {
foo(i);
};
f1(3);
f2(3);
return 0;
}
Compiling this code on the command line works as expected and produces the following output:
$ g++ src/LambdaFunctionParameterExample.cpp -o example
$ ./example
1
1
2
2
3
3
However in Eclipse, the function calls accepting a std::function as a parameter produce this error:
Invalid arguments '
Candidates are:
void function_std(std::function<void (int)>)
'
And the std::function variable assignments produce this error:
Invalid arguments '
Candidates are:
function()
function(std::nullptr_t)
function(const std::function<void (int)> &)
function(std::function<void (int)> &&)
function(#10000)
'
I have set the language standard to ISO C++17 (-std=c++17) in Project -> Properties -> C/C++ Build -> GCC C++ Compiler -> Dialect. (I had assumed setting this property was necessary to access the <functional> header, according to this documentation. Oddly enough, specifying the language level (or not) does not affect the above errors, and specifying the language level is not necessary when building directly with g++.)
I'm using macOS Catalina 10.15.5, gcc 10.2.0 from homebrew, and Eclipse CDT version 2020-09 (4.17.0).
IDEs like Eclipse CDT generally aren't as good as compilers about correctly working through declarations, templates, etc.
is this an intellisense (or however is called in eclipse) or an actual compile error?
@bolov CODAN. Fortunately there is no CODAN armada from which you must defend the frontier.
@bolov Thank you for asking for a clarification. These appear to be code analysis errors that appear in the Problems view and also underlined in red on the line the error refers to. Oddly enough the executable is still built, and building the project does not report errors in the Console. I can actually run the executable and get the expected output. (I didn't realize this was the case before, so I'll add this info to the question.)
This means it's just the IntelliSense. Some IDEs use a different engine for real-time autocompletion and error hinting than the actual compiler that is used to build the program. This IntelliSense can lag behind in language features, can be much more buggy, can use an old cache instead of the updated code, etc. It is a good help when it works, but when it doesn't, don't fret about it; if you can't quickly make it work, then ignore it, disable it, or choose another IDE that can use the same compiler to build and provide IntelliSense.
Does this answer answer your question?
@UnslanderMonica I followed the instructions suggested by both answers to that question, but neither led to changes in the errors being reported.
I found an Eclipse bug report matching my issue exactly.
Should I add this as an answer to my question? My intuition says no, as this doesn't answer the question. But, it seems there will be no other answer. Perhaps the question should be closed?
Following advice found here, I decided to answer my own question.
This behavior is a known bug with Eclipse CDT version 10.0.
Update:
This issue is fixed by upgrading to Eclipse CDT version 10.1:
https://download.eclipse.org/tools/cdt/builds/10.1/cdt-10.1.0-rc2/
Steps to fix:
Enter the above URL in Help --> Install New Software... --> Work with: and install all the options.
Restart Eclipse.
Select Project --> C/C++ Index --> Rebuild.
Read the csv file store data in nested dictionary
I have a csv file like this (image: https://i.sstatic.net/kaBMF.png), which has 2 columns. I have read this csv file and stored the data in a dictionary; below is my code:
import csv
with open('p1.csv', mode='r') as infile:
reader = csv.reader(infile)
mydict = {rows[0]:rows[1] for rows in reader}
pprint(mydict)
This above lines of code gives me output as follows:
{'00:00-00:15': '266.78',
'00:15-00:30': '266.78',
'00:30-00:45': '266.78',
'00:45-01:00': '266.78',
'01:00-01:15': '266.78',}
But i want this output in below form:
{{'00:00-00:15': '266.78'},
{'00:15-00:30': '266.78'},
{'00:30-00:45': '266.78'},
{'00:45-01:00': '266.78'},
{'01:00-01:15': '266.78'}}
This should work. Put extra brackets around rows[0]:rows[1] to create multiple dictionaries. Keep in mind, the output you want (and this gives) is a set of dictionaries ({1, 2, 3} is set literal notation). You might want to use a list, by replacing the outer {} with []
import csv
with open('p1.csv', mode='r') as infile:
reader = csv.reader(infile)
mydict = [{rows[0]:rows[1]} for rows in reader]
pprint(mydict)
When I tried the code, at the line mydict = {{rows[0]:rows[1]} for rows in reader} I got the error TypeError: unhashable type: 'dict'
As @Evan said, replace the outer {} with [].
So it will be mydict = [{rows[0]:rows[1]} for rows in reader]
I also get a TypeError: unhashable type: 'dict'. This is because you cannot have a "set of dictionaries" — they're unhashable.
Prathamesh: Yes, but @Evan's answer also claims it's possible to create something impossible, something they should correct, not you. I think it's obvious they didn't try running the code in their answer before posting it, and this situation is why that's often a bad idea.
Evan: While fixing your code to get rid of the TypeError was an improvement, what's still in the rest of it is still incorrect.
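For completeness, here is a minimal runnable version of the list-of-dicts fix discussed in the comments above; the CSV content is inlined via io.StringIO so the sketch is self-contained:

```python
import csv
import io

# Stand-in for open('p1.csv'): the same two-column data as in the question.
csv_text = "00:00-00:15,266.78\n00:15-00:30,266.78\n00:30-00:45,266.78\n"

reader = csv.reader(io.StringIO(csv_text))
mydicts = [{row[0]: row[1]} for row in reader]  # a list of dicts, not a set
# mydicts == [{'00:00-00:15': '266.78'}, {'00:15-00:30': '266.78'}, {'00:30-00:45': '266.78'}]
```

A list avoids the hashability problem entirely, since list elements (unlike set elements) don't need a __hash__.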
What you want appears to be a set of dictionaries, however that's impossible because dictionaries aren't hashable. You can workaround that restriction by defining your own dict subclass which is.
import csv
import hashlib
from pprint import pprint
class HashableDict(dict):
""" Hashable dict subclass. """
def __hash__(self):
m = hashlib.sha256(b''.join(bytes(repr(key_value), encoding='utf8')
for key_value in self.items()))
return int.from_bytes(m.digest(), byteorder='big')
with open('p1.csv', mode='r', newline='') as infile:
mydict = {HashableDict({row[0]:row[1]}) for row in csv.reader(infile)}
pprint(mydict, sort_dicts=False)
Output:
{{'00:00-00:15': '266.78'},
{'00:15-00:30': '266.78'},
{'00:30-00:45': '266.78'},
{'00:45-01:00': '266.78'},
{'01:00-01:15': '266.78'}}
Incorrect Android documentation about layout aliases?
I'd like to figure out how to reuse or "alias" layouts with the least boilerplate code.
It seems that the Android documentation about layout aliases is incorrect, and certainly appears inconsistent. This section of documentation shows the following layout file as an example:
<resources>
<item name="main" type="layout">@layout/main_twopanes</item>
</resources>
If I try to compile this, I get an Attribute is missing the Android namespace prefix error. Even after adding the namespace to the resources element, I get error: Error: String types not allowed (at 'type' with value 'layout').
Elsewhere in the Android documentation, they show a different and seemingly inverted and incorrect way to alias layouts:
To create an alias to an existing layout, use the <include> element,
wrapped in a <merge>. For example:
<?xml version="1.0" encoding="utf-8"?>
<merge>
<include layout="@layout/main_ltr"/>
</merge>
Running this results in the following error in LogCat E/AndroidRuntime(1558): android.view.InflateException: <merge /> can be used only with a valid ViewGroup root and attachToRoot=true. So this error seems to reinforce the fact that this <include> <merge> pair must be a mistake, because it requires an unnecessary parent View.
Lastly there's the <merge> documentation, which seems to contradict the former direction, making no mention of the inverted form of a top-level <merge><include/></merge>.
To avoid including such a redundant view group, you can instead use
the <merge> element as the root view for the re-usable layout. For
example:
<merge xmlns:android="http://schemas.android.com/apk/res/android">
<Button
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/add"/>
<Button
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="@string/delete"/>
</merge>
The first method seems to compile and work fine for me. Do you have the latest SDK Tools installed?
@Joe thanks for verifying this! It ends up I didn't read the instructions carefully. I did what I thought I should do--put the layout.xml inside the layout-large rather than values-large folder.
The first technique works, you just have to put your <resources> file in the correct folder. It should be in the values folders not the layout folders as you might when reusing layouts via <include>.
For instance, suppose you have a layout named editor.xml that lives in the layout folder. Suppose you want to use a specialized layout on small and normal screen sizes. If you didn't care about repeating yourself, you would just copy and paste this layout into the layout-small and layout-normal folders and name it editor.xml in each folder. So you'd have three files named editor.xml.
If you don't want to repeat yourself, you would place the specialized layout in the main layout folder and name it, say, compact_editor.xml. Then you'd create a file named layout.xml in the values-small and values-normal folders. Each file would read:
<?xml version="1.0" encoding="utf-8"?>
<resources>
<item name="editor" type="layout">@layout/compact_editor</item>
</resources>
I've filed a documentation issue about the other two problems.
"this inverted use of merge seems to be undocumented otherwise" -- the reason the <merge> is needed is that <include> is not supported as a root element, for some reason. Otherwise, this use of <merge> is normal. <merge> says "here's a bunch of widgets and containers that need to go in some indeterminate parent", with <merge> serving as the root XML element (since XML needs one). In this case, the "bunch of widgets and containers" happen to be stored in another layout resource, hence the <include>.
@CommonsWare Sounds reasonable, but this is your interpretation based on your understanding of <merge> not its actual documentation.
Well, the actual documentation isn't exactly going to win a prize, but there is nothing in there that I see that contradicts what I wrote.
@CommonsWare Okay, note this new issue: Support <include> as a top-level element so we don't need to surround it with <merge>
@CommonsWare You're probably tired of discussing this, but after actually testing the second option in a ListView, the inflater fails. See my question edit: Running this results in the following error in LogCat E/AndroidRuntime(1558): <merge /> can be used only with a valid ViewGroup root and attachToRoot=true. So this error seems to reinforce the fact that this <include>/<merge> pair must be a mistake, because it requires an unnecessary parent View.
The <merge><include/></merge> definitely works with setContentView(), as I tested that before commenting. I can see where it might not work with inflate(R.layout.alias, parent, false).
@CommonsWare So first, this approach can't work from a ListView row. But in defense of the documentation, they didn't advertise that it would. More importantly though, doesn't this then create a possibly unnecessary layer of ViewGroup even when it does work? There's the parent's ViewGroup as well as the one in the included file. I was pleased to learn that the first resources strategy actually does work--it was my error.
"More importantly though, doesn't this then create a possibly unnecessary layer of ViewGroup even when it does work? There's the parent's ViewGroup as well as the one in the included file." -- it should not add any more layers than you would have in referring to the original resource. All that being said, any chance I can con you into adding an update to your question with the proper syntax for what you got working? I haven't seen that approach before. Thanks!
Thanks! Though in the particular scenario you outlined, the simpler solution would be to have res/layout/editor.xml (for small/normal) and res/layout-large/editor.xml (for large/xlarge). Then, you do not need the alias. That being said, layout aliases will be very useful when mixing and matching other resource set qualifiers, like the -swNNNdp stuff.
@JeffAxelrod did you mean to say values-small and values-normal? Otherwise that is identical to the original documentation, no? I will do an edit to this answer to use their example with what you propose is the solution in the first paragraph (and seems to work for my project)
@DandreAllison yes, thanks, it should have read values, not layout. I've corrected it. If you'd like to add anything else, please do so in a new answer or comments. Thanks!
partclone via clonezilla - where are the bad blocks logged?
I've just backed up a partition using clonezilla, which in turn employed partclone. During the backup, several blocks were bad and were skipped (it was in rescue mode). Is there a log of these?
There's a log in /var/log/partclone.log, which lists the bad sectors:
WARNING: Can't read sector at<PHONE_NUMBER>20, lost data.
WARNING: Can't read sector at<PHONE_NUMBER>32, lost data.
WARNING: Can't read sector at<PHONE_NUMBER>44, lost data.
WARNING: Can't read sector at<PHONE_NUMBER>56, lost data.
but it seems the correlation with actual filenames is not logged (it's also not displayed during the partition cloning).
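If you just want the bad sector numbers as a plain list (for example, to feed into a filesystem tool later), a small shell pipeline over the log works. This assumes the warning lines look like the ones above, with a decimal sector number after "at"; adjust the pattern if your log's exact format differs. The sample here-doc stands in for /var/log/partclone.log so the sketch is self-contained:

```shell
#!/bin/sh
# Extract just the bad sector numbers from a partclone log.
extract_bad_sectors() {
    grep -o "Can't read sector at [0-9]*" "$1" | grep -o '[0-9]*$'
}

# Self-contained demo; point the function at /var/log/partclone.log instead.
cat > /tmp/partclone_sample.log <<'EOF'
WARNING: Can't read sector at 12345, lost data.
WARNING: Can't read sector at 67890, lost data.
EOF
extract_bad_sectors /tmp/partclone_sample.log
# prints:
# 12345
# 67890
```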
I suspect that correlating bad sector numbers with actual file data is involved enough to deserve its own, separate question.
@TwistyImpersonator: Ok, fair enough.
Are toolbars considered views?
I was working through one of the Material Design tutorials when I noticed one of their Fragments was using a FrameLayout with both a Toolbar and a RecyclerView. This struck me as odd considering that the documentation for FrameLayout has:
FrameLayout is designed to block out an area on the screen to display
a single item. Generally, FrameLayout should be used to hold a single
child view ...
Why would the tutorial be adding both a Toolbar and a RecyclerView to the FrameLayout if it was designed to only contain one view?
Are toolbars themselves not considered views?
The RecyclerView is the only object here explicitly with view in the name, but what is a Toolbar object if not also a view?
The FrameLayout documentation also has:
You can, however, add multiple children to a FrameLayout and control
their position within the FrameLayout by assigning gravity to each
child, using the android:layout_gravity attribute. Child views are drawn
in a stack, with the most recently added child on top.
Why does the documentation contradict itself? It first (first quote) says that the FrameLayout is designed to hold a single view, then it later (second quote) says that multiple children can be added. Am I misunderstanding what a 'child' is?
Are the Toolbar and the RecyclerView then effectively stacked on top of each other?
I noticed that the RecyclerView's parent layout (a NestedScrollView) used android:layout_marginTop="56dp", presumably to move its start below the end of the Toolbar (indeed, if this margin was set to 0dp, the Toolbar would obscure some of the RecyclerView). Thus, given the 'stacking' nature of FrameLayout children, I expected the RecyclerView to be on top of the Toolbar if I removed this marginTop attribute (because the RecyclerView is added after the Toolbar in the XML design). This did not work, however; the Toolbar seemed to keep its prominent position at the 'top of the stack', so to speak.
Is the FrameLayout in this example used more as a generic container where the children views are self managing their positions in the viewable area (considering the hard-coded use of android:layout_marginTop="56dp" in the NestedScrollView)?
If the Toolbar is a view, is it implicitly given a weight such that it always remains at the top of the viewable stack of inflated views? One could argue this makes sense for the purposes of a Toolbar.
FrameLayout can be used to add more than one child views..
Are toolbars themselves not considered views?
All the UI objects, e.g. Toolbars, Buttons, etc., ultimately extend the View class. Some, like Toolbar, extend ViewGroup in the hierarchy, but at the bottom even ViewGroup extends View.
Why does the documentation contradict itself? It first (first quote) says that the FrameLayout is designed to hold a single view, then it later (second quote) says that multiple children can be added. Am I misunderstanding what a 'child' is?
It doesn't. It doesn't say that the FrameLayout can only have one child; it says it generally should have one.
Therefore, a FrameLayout can have multiple children.
Also, child views are those views that are added to a ViewGroup, i.e., as the name suggests, a View which can hold a group of other Views.
If the Toolbar is a view, is it implicitly given a weight such that it always remains at the top of the viewable stack of inflated views? One could argue this makes sense for the purposes of a Toolbar.
Of course Toolbar is a View, or you could say a ViewGroup.
That doesn't mean it stays on top of the viewable stack of inflated views.
You can always control where you want to show the Toolbar.
The reason it is usually shown at the top is that it is general practice to show the title and other option menus at the top of the view.
Inheriting from both ABC and django.db.models.Model raises metaclass exception
I am trying to implement a Django data model class, which is also an interface class, using Python 3. My reason for doing so is, I'm writing a base class for my colleague, and need him to implement three methods in all of the classes he derives from mine. I am trying to give him a simplified way to use the functionality of a system I've designed. But, he must override a few methods to supply the system with enough information to execute the code in his inherited classes.
I know this is wrong, because it's throwing exceptions, but I'd like to have a class like the following example:
from django.db import models
from abc import ABC, abstractmethod
class AlgorithmTemplate(ABC, models.Model):
name = models.CharField(max_length=32)
@abstractmethod
def data_subscriptions(self):
"""
This method returns a list of topics this class will subscribe to using websockets
NOTE: This method MUST be overriden!
:rtype: list
"""
I understand I could avoid inheriting from the ABC class, but I'd like to use it for reasons I won't bore you with here.
The Problem
After including a class like the one above in my project and running python manage.py makemigrations I get the error: TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases. I have searched Stack Overflow, but have only found solutions like the following one:
class M_A(type): pass
class M_B(type): pass
class A(metaclass=M_A): pass
class B(metaclass=M_B): pass
class M_C(M_A, M_B): pass
class C(A, B, metaclass=M_C): pass
I've read the following posts:
Using ABC, PolymorphicModel, django-models gives metaclass conflict
Resolving metaclass conflicts
And I've tried many variations of those solutions, but I still get the dreaded metaclass exception. Help me Obi-Wan Kenobi, you're my only hope. :-)
I re-wrote my question, because after receiving feedback that my question was poorly formed, I re-read it. Even I had trouble understanding my own question! Must have been written when I was bleary eyed from coding too much. lol
I had the same need and found this. I've altered the code for clarity and completeness. Basically you need an extra class which you can use for all your model interfaces.
import abc
from django.db import models
class AbstractModelMeta(abc.ABCMeta, type(models.Model)):
pass
class AbstractModel(models.Model, metaclass=AbstractModelMeta):
# You may have common fields here.
class Meta:
abstract = True
@abc.abstractmethod
def must_implement(self):
pass
class MyModel(AbstractModel):
code = models.CharField("code", max_length=10, unique=True)
class Meta:
app_label = 'my_app'
test = MyModel(code='test')
> TypeError: Can't instantiate abstract class MyModel with abstract methods must_implement
Now you have the best of both worlds.
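The same metaclass-combination idea works outside Django too. Here is a dependency-free sketch using only abc, with illustrative class names (FrameworkMeta stands in for Django's ModelBase), showing how a combined metaclass resolves the conflict:

```python
import abc

# A framework-style metaclass, standing in for Django's ModelBase.
class FrameworkMeta(type):
    pass

class FrameworkModel(metaclass=FrameworkMeta):
    pass

# Combine both metaclasses so derived classes have a single, compatible one.
class AbstractFrameworkMeta(abc.ABCMeta, FrameworkMeta):
    pass

class AbstractModel(FrameworkModel, metaclass=AbstractFrameworkMeta):
    @abc.abstractmethod
    def must_implement(self):
        ...

class Concrete(AbstractModel):
    def must_implement(self):
        return "implemented"

Concrete().must_implement()  # works, returns "implemented"
# AbstractModel() would raise TypeError: Can't instantiate abstract class ...
```

Because AbstractFrameworkMeta subclasses both ABCMeta and FrameworkMeta, the "metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases" rule is satisfied.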
I found a solution that worked for me, so thought I would post it here in case it helps someone else. I decided to not inherit from the ABC class, and instead just raise an exception in the "abstract" methods (the ones the derived class must implement). I did find helpful information in the Django docs, describing using Django data models as an Abstract base class and also Multi-table inheritance.
Django Data Model as an Abstract Base Class
Quoted from the docs:
Abstract base classes are useful when you want to put some common information into a number of other models. You write your base class and put abstract=True in the Meta class. This model will then not be used to create any database table. Instead, when it is used as a base class for other models, its fields will be added to those of the child class.
An example:
from django.db import models
class CommonInfo(models.Model):
name = models.CharField(max_length=100)
age = models.PositiveIntegerField()
class Meta:
abstract = True
class Student(CommonInfo):
home_group = models.CharField(max_length=5)
The Student model will have three fields: name, age and home_group.
The CommonInfo model cannot be used as a normal Django model, since it
is an abstract base class. It does not generate a database table or
have a manager, and cannot be instantiated or saved directly.
Fields inherited from abstract base classes can be overridden with
another field or value, or be removed with None.
Multi-table Inheritance with a Django Data Model
My understanding of "multi-table inheritance" is, you can define a data model and then also use it as a base class for a second data model. The second data model will inherit all the fields from the 1st model, plus its own fields.
Quoted from the docs:
The second type of model inheritance supported by Django is when each
model in the hierarchy is a model all by itself. Each model
corresponds to its own database table and can be queried and created
individually. The inheritance relationship introduces links between
the child model and each of its parents (via an automatically-created
OneToOneField). For example:
from django.db import models
class Place(models.Model):
name = models.CharField(max_length=50)
address = models.CharField(max_length=80)
class Restaurant(Place):
serves_hot_dogs = models.BooleanField(default=False)
serves_pizza = models.BooleanField(default=False)
All of the fields of Place will also be available in Restaurant,
although the data will reside in a different database table. So these
are both possible:
>>> Place.objects.filter(name="Bob's Cafe")
>>> Restaurant.objects.filter(name="Bob's Cafe")
The question was about combining ABC and Django models.
ABC makes sure that you can't instantiate the abstract base until all abstract methods have been implemented.
This is a little different from Django's abstract Model.
Is the galaxy made of a nebula or the solar system?
My book says on the topic 'Formation of stars' that a galaxy starts forming by the accumulation of hydrogen gas in the form of very large clouds called nebulae. Then local lumps of gas are formed. These lumps start getting even denser and eventually form gaseous bodies called stars.
That seems a fair explanation but in the topic 'Our solar system' my book says "The nebula from which our Solar system is supposed to have been formed, started its collapse and core formation some time 5-5.6 billion years ago and the planets were formed about 4.6 billion years ago."
So, is the nebula being talked about in the topic 'Formation of stars' same as the nebula being talked about in the topic 'Our solar system'?
This question came to my mind because the nebula discussed in the 'Formation of stars' topic might be very large, but the nebula discussed for our solar system eventually formed the Sun. So how can they talk about a nebula when describing our solar system, since the solar system is something within our galaxy?
Any picture helping to visualize will be very nice.
No, those are two different nebulae. There are stars in our galaxy much older than 5.6 billion years. The problem is that "nebula" is a very generic term, so it can be a whole galaxy or even the remains of a single star. It's not even clear what is meant by the nebula from which our Sun formed, it could be the giant molecular cloud if our Sun formed in a huge cluster, or it could be something called a "proplyd" (which is much smaller) if the Sun formed in relative isolation. There are even proplyds that break off from giant molecular clouds in a kind of combined scenario where the GMC forms the proplyd and then the proplyd forms the star. In any event, it is important to realize that the galaxy formed from a huge nebula some 10 billion years ago, and the Sun formed from a much smaller nebula much later.
backbone.js: What should be a view?
I'm working on an app that has 1 main list of videos, and a playlist that you can add videos to from the main list (http://gosukpop.com).
I'm in the process of "transferring" my project over to Backbone.js. I only have views for the main video list table and the playlist table. And I just noticed that you can only capture events that are within the el element (right?). So I would have to create a view for everything on the page... like the search, sort options, and playlist controls; is this correct? Or should I just have 2 views, for the main list (including search and sort options) and the playlist?
The great thing about Backbone is the flexibility it provides. It doesn't require everything on the page to be Backbonified.
In your case, I would recommend a Backbone view for the main list and one for the play list. (You might also consider an item view for each row in the table in these lists)
The search, sort and playlist controls do not have to be contained within a view.
Just subscribe to their respective events and update the Backbone collections and models as needed.
If you need some help starting out, feel free to send an email to the Backbone mailing list. There's a lot of people willing to help out.
Thanks for the answer, and the mailing list!
APPLICATION ERROR: transaction still active in request with status 0
It looks like a big bug in jBPM 6.5. If the loop is very large, the process will encounter the error "APPLICATION ERROR: transaction still active in request with status 0".
I am not sure what happened. Anyone can explain it? Thanks very much. You can reproduce the error by following steps:
download the workflow file into your project.
https://github.com/kylinsoong/jBPM-Drools-Example/blob/master/jbpm/quickstarts/src/main/resources/quickstarts/looping.bpmn
Change the variable "count" to a big number in the "Init" step, e.g. 50000 (see screenshot).
deploy your project.
run the workflow "looping", the error will come up.
ERROR in Error encountered resolving symbol values statically
I just migrated my project under angular-cli and I'm getting this error when I start it:
ERROR in Error encountered resolving symbol values statically.
Function calls are not supported. Consider replacing the function or
lambda with a reference to an exported function (position 63:45 in the
original .ts file), resolving symbol AppModule in
C:/Data/Private/Innovation/EV/ev-dashboard/src/app/app.module.ts
Which corresponds to the APP_INITIALIZER below in app.module.ts:
...
providers: [ // expose our Services and Providers into Angular's dependency injection
APP_PROVIDERS,
ConfigService,
{ provide: APP_INITIALIZER, useFactory: (config: ConfigService) => () => config.load(), deps: [ConfigService], multi: true }
...
What is funny is that when I comment this line out, it starts fine; and when I uncomment it afterward, that triggers a recompilation which succeeds without error this time!
Do you have an idea?
Thanks,
Serge.
You need to extract the factory into an exported function, like:
export function configFactory(config: ConfigService) {
return () => config.load()
}
...
providers: [{
provide: APP_INITIALIZER,
useFactory: configFactory,
deps: [ConfigService],
multi: true
}
See also
https://github.com/angular/angular/issues/11262
Angular4 APP_INITIALIZER won't delay initialization
Disposing/Cleaning up web service proxies
What is the best practise for Disposing/Cleaning up a web service proxy instance after synchronous usage?
How does the answer differ if the proxy class is derived from SoapHttpClientProtocol versus ClientBase<T>?
Background
I'm trying to figure out why one of my WCF web services sometimes seems to get into a state where it no longer responds to service calls. Basically it seems like it hangs, and for now I don't really have any hard data to figure out what's going on when this occurs.
One thing that I suspect might be an issue is the fact that this WCF service is itself doing web service calls to a few other services. These other services are called (synchronously) using proxies that are derived from SoapHttpClientProtocol (made using wsdl.exe) and at this time these proxy instances are left to be cleaned up by the finalizer:
...
var testProxy = new TestServiceProxy();
var response = testProxy.CallTest("foo");
// process the response
...
So should I simply wrap these up in a using(...) { ... } block?
...
using(var testProxy = new TestServiceProxy())
{
var response = testProxy.CallTest("foo");
// process the response
}
...
What if I were to change these proxy classes to be based on ClientBase<T> by recreating them using svcutil.exe? Based on my research so far, it seems the Dispose() method of classes derived from ClientBase<T> will internally call the Close() method of the class, and this method might in turn throw exceptions. So wrapping a proxy based on ClientBase<T> in a using() is not always safe.
So to reiterate the question(s):
How should I clean up my web service proxy after using it when the proxy is based on SoapHttpClientProtocol?
How should I clean up my web service proxy after using it when the proxy is based on ClientBase<T>?
Based on my best efforts to find the answer to this question, I'd say that for SoapHttpClientProtocol based proxies (regular .asmx web service proxies) the correct way is to simply wrap it in using():
using(var testProxy = new TestAsmxServiceProxy())
{
var response = testProxy.CallTest("foo");
// process the response
}
For proxies based on ClientBase<T> (WCF proxies) the answer is that it should not be wrapped in a using() statement. Instead the following pattern should be used (msdn reference):
var client = new TestWcfServiceProxy();
try
{
var response = client.CallTest("foo");
client.Close();
// process the response
}
catch (CommunicationException e)
{
...
client.Abort();
}
catch (TimeoutException e)
{
...
client.Abort();
}
catch (Exception e)
{
...
client.Abort();
throw;
}
I'm not sure about the first part of this. Dispose is implemented by Component, but Abort is implemented by WebClientProtocol. Looking at the code it appears that Dispose couldn't know about the request. https://msdn.microsoft.com/en-us/library/ff647786.aspx seems to back that up (under "Abort Connections for ASP.NET Pages That Timeout Before a Web Services Call Completes").
For WCF, see https://stackoverflow.com/questions/573872/what-is-the-best-workaround-for-the-wcf-client-using-block-issue
An enigmatic partition of the chemical elements
I have partitioned the chemical elements into two groups based on a certain
property that is shared by some of them, but not by the remaining ones.
The following elements have my enigmatic property:
Bismuth, Chlorine, Gold, Helium, Iodine, Mendelevium, Nitrogen, Silver
The following elements do not have my enigmatic property:
Dubnium, Erbium, Nobelium, Terbium, Ytterbium, Yttrium, Zinc, Zirconium
Which of the following elements do have this property?
Aluminium, Fluorine, Francium, Gallium, Lutetium, Neon, Oxygen, Polonium.
The property is
having a prime atomic number.
The first group:
Bismuth 83
Chlorine 17
Gold 79
Helium 2
Iodine 53
Mendelevium 101
Nitrogen 7
Silver 47
All of these atomic numbers are prime.
The second group:
Dubnium 105 (divisible by 5)
Erbium 68 (divisible by 2)
Nobelium 102 (divisible by 2)
Terbium 65 (divisible by 5)
Ytterbium 70 (divisible by 5)
Yttrium 39 (divisible by 3)
Zinc 30 (divisible by 5)
Zirconium 40 (divisible by 5)
None of these atomic numbers are prime.
Therefore, of the final group:
Aluminum 13 (prime)
Fluorine 9 (divisible by 3)
Francium 87 (divisible by 3)
Gallium 31 (prime)
Lutetium 71 (prime)
Neon 10 (divisible by 5)
Oxygen 8 (divisible by 2)
Polonium 84 (divisible by 2)
Aluminum, Gallium, and Lutetium have the property.
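The classification above can be verified mechanically; the atomic numbers below are taken straight from the lists in this answer:

```python
def is_prime(n):
    # trial division is fine for numbers this small
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

has_property = {"Bismuth": 83, "Chlorine": 17, "Gold": 79, "Helium": 2,
                "Iodine": 53, "Mendelevium": 101, "Nitrogen": 7, "Silver": 47}
lacks_property = {"Dubnium": 105, "Erbium": 68, "Nobelium": 102, "Terbium": 65,
                  "Ytterbium": 70, "Yttrium": 39, "Zinc": 30, "Zirconium": 40}
candidates = {"Aluminium": 13, "Fluorine": 9, "Francium": 87, "Gallium": 31,
              "Lutetium": 71, "Neon": 10, "Oxygen": 8, "Polonium": 84}

# sanity-check the two given groups, then classify the final one
assert all(is_prime(z) for z in has_property.values())
assert not any(is_prime(z) for z in lacks_property.values())
print(sorted(name for name, z in candidates.items() if is_prime(z)))
# → ['Aluminium', 'Gallium', 'Lutetium']
```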
Alkaline batteries used in Ni-Cd battery rechargable power outage LED emergency light
I have a Vector Rechargeable Power Outage LED Emergency Light that plugs into the wall. When the electricity goes off, it comes on. The instructions state to use three AAA Ni-Cd batteries and I only have alkaline batteries. Hurricane Irma is headed my way and I was unable to find Ni-Cd batteries anywhere.
Is is safe for me to use alkaline batteries instead?
It's obviously unsafe to put alkaline cells in it while plugged in, as it would try to charge them. Putting them in when not plugged in has two risks: first only the designer knows how it will react, and second you could forget that you have done so, and plug it back in with them still inside after the storm.
Cadmium is obsolete due to its toxicity at disposal.
NiCd is 1.2 V/cell while alkaline is 1.5 V/cell, thus 4.5 V vs 3.6 V for three cells. Meanwhile, white LEDs drop around 3.1 V, so these lights have internal resistors selected to limit the current (and brightness) and extend the run time.
Since the luminaire is effectively obsolete due to NiCd availability, you have nothing to lose by trying it in the dark with no power. It is safe, but the internal resistors will see a bigger power loss and get warm inside. This is still very low power, though it may fail if poorly designed.
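A rough back-of-the-envelope check of the extra resistor dissipation described above. The ~3.1 V LED forward drop, 20 mA design current, and resulting 25 Ω resistor are illustrative assumptions, not specs of the actual lamp:

```python
# Assumed values: white-LED forward drop and a resistor sized for the NiCd pack
v_led = 3.1
v_nicd = 3 * 1.2   # 3.6 V pack
v_alk = 3 * 1.5    # 4.5 V pack

# Pick R so the LED runs at ~20 mA on NiCd
r = (v_nicd - v_led) / 0.020          # ~25 ohms
p_nicd = (v_nicd - v_led) ** 2 / r    # resistor dissipation on NiCd

# Same resistor on alkaline cells (ignoring the LED's current-dependent drop)
i_alk = (v_alk - v_led) / r           # LED current on alkalines
p_alk = (v_alk - v_led) ** 2 / r      # resistor dissipation on alkalines

print(f"R = {r:.0f} ohm, NiCd: {p_nicd*1000:.0f} mW, alkaline: {p_alk*1000:.0f} mW at {i_alk*1000:.0f} mA")
```

Under these assumptions the resistor dissipates roughly eight times more power on alkalines and the LED runs at nearly triple its design current, which is why the light would be brighter but hotter inside.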
Portable flashlights are cheap now; some come with a LiPo cell and charger, and some use alkaline cells. These are much brighter than what you have.
My preference is a 5 W LiPo hand torch.
Ni-Cd batteries are rechargeable, and your emergency light has a charger built in. You cannot put non-rechargeable batteries in it; it is not safe. The charger (when plugged into the wall) will eventually try to charge those batteries, and it is a fire hazard to charge non-rechargeable alkaline batteries.
Not a hazard, since the charge voltage never exceeds the alkaline cell's 1.5 V, and below that, at 1.2 V/cell, it does not conduct much power, as a dead battery has high ESR. It just won't function well or be reliable.
@TonyStewart.EEsince'75 It depends on the type of charger. If it provides constant current, you have a problem. Or if there's a blackout, the battery discharges, then power comes back up and it tries to charge the partially discharged battery. It is not safe.
As long as you don't use the charger, the light will have no clue that the batteries are of a different type. In fact, you will most likely get a longer runtime with alkalines, as they have higher mAh capacity than NiCd.
A fully charged NiCd is 1.4V, a fully charged Alkaline is 1.5V... not much of a difference.
Don't use the charger with alkaline batteries though, they could leak and ruin the lamp. They are not designed for charging.
If you still have NiCd batteries though, you should bring them to a recycling or battery disposal place, because cadmium is extremely toxic. Make sure they don't end up in a landfill. Well, you have other worries at the moment, but keep it in mind.
Stay safe ;)
Doing fine? From the other side of the atlantic, the news footage looks quite daunting...
Thank you, yes, my dog and I are fine. We just got our power on after 2 days but that pales in comparison to many! Thanks for asking!
Glad to know you're OK from the other side of the internet. Have a nice day ;)
How can Wan's history be reconciled with legend?
In The Legend of Korra S02E07-08 Beginnings, we see how the first Avatar, Wan, came to be. During these episodes, we see him gain firebending, airbending, waterbending, and lastly earthbending from lion-turtles that carry human cities on their backs. However, in Avatar: The Last Airbender, it was established that humans learned bending from various animals. Firebenders learned from dragons, earthbenders from badger-moles, and so on.
In Beginnings, we see humans being granted bending by the lion-turtle when they leave a city. For people on short hunting excursions, this meant they were granted bending while hunting outside the city, then they were non-benders after they returned and the lion-turtle took back the bending ability.
This seems to be at odds with what had previously been established as the origins of element (and energy) bending. Can these be reconciled, or did the writers just retcon the origin of bending?
I don't see this as a retcon. Before humans could bend, someone had to teach them it was possible. Since the capacity appeared to be able to be bestowed if one is enlightened enough, it may have started as a lending program and eventually turned into a capacity that could be learned if you had the right spiritual capacity (whatever the unique element that allows for bending) to take place.
Maybe the lion turtles just copied what they saw the first benders do?
How can radio-carbon dating be reconciled with the age of the Earth estimated from the Bible? Just because people within the world believe such and such a thing about their ancient past (and ten thousand years is well beyond the point where history becomes prehistory in the real world, so we really are talking about the far, far past) doesn't mean that it is necessarily correct. ATLA established what people in the world believed; this latest episode established the actual facts (and implied that there was at least some truth to the traditional beliefs).
It's also explicitly stated that there are "dozens" of lion turtle cities. Maybe THESE firebenders learned one way, and THOSE firebenders learned a different way, etc. This, for example, is the likely explanation for the differences between the Fire Nation (i.e. Japanese) and Sun Warrior (i.e. Aztec) cultures: two different lion turtle cities.
I think humans had the ability to bend BEFORE they met their respective animal/spirit mentors.
The way I see it, humans had the capacity for bending, but not the skill to wield it properly. After Wan befriends the spirits, you can see him learning proper firebending technique from a dragon, after which the other proto-Fire Nation humans from his city comment how they had never seen anyone able to use fire like that. (I don't think they use the word "bend" even once throughout the two episodes.)
This tells me that after the humans left their lion turtle cities, they each eventually found their own "teachers", who helped them perfect their rudimentary powers. This theory is supported by the tattoos on the Air Nomads' foreheads: They aren't arrows. Air bison haven't become part of their culture yet.
(Also, I assume that the Sun Warriors came from a different lion turtle and learned from the dragons first, after which the Fire Nation "stole" their technique.)
Nice! I'd noticed that the Airbender tattoos were different in Wan's day, but I hadn't made the connection that the Sky Bison weren't there yet.
I'm going to assume that both are correct, and show a timeline that they can fit together in:
The creation of the Avatar equivalent of Earth. /Mentioned in a season 1 episode/
Humanity comes into existence.
Energy bending is discovered and used. /Mentioned in season 3/
Other bending formats are discovered, and energy bending falls out of style /Whole TOS timeline/
More spirits start moving in, and humanity moves in with the lion turtles for protection. /I forget where this was mentioned. Probably a book./
For the sake of practicality, people chose to store the bending abilities with the lion turtles, rather than have a divided society where only certain people could do certain jobs, or the lion turtles made this decision. /This isn't explicitly stated, but given how hazardous their world was at this time, it made sense./
People spent so much time in these isolated cities, they forgot there were other cities. /Mentioned, but I forgot where. And this is horrifying from a genetic perspective./
Eventually, the spirits left, and humanity left the lion turtles. /Shown, and I'm skipping details because lazy./
For some reason, humanity remembered how the various bending skills were discovered, but isn't explicitly shown to remember the lion turtle era. There are several possible reasons for this. It's possible both are remembered, but only the earlier was mentioned in-series; this is plausible. It's also possible that humanity isn't proud of this genetic disaster of an era, and people chose not to carry the information onward, or mention it, etc.
I'm not saying this is the official explanation; I'm just showing how the two are compatible.
I think that at first, humans got bending from the lion turtles because, as they said, the lion turtles protected the people. Then they were said to no longer be needed and didn't grant the power of bending anymore, so there were no benders. Then, as time went on, more and more people became benders by remembering their ancestors could bend and trying to find ways to do it, with people eventually settling on the air bison, dragons, badgermoles, and the moon. They did learn it from the animals/celestial object, but they did it to regain the power they once had. Either that, or it was just legend, because if you think about it, that means there could be multi-elemental benders that just studied a combination of those sources. Also, to my recollection, every time they claimed a source, they said "legend has it" or "they say", like when Princess Yue said "The legends say the moon was the first waterbender. Our ancestors saw how it pushed and pulled the tides and learned how to do it themselves." But I don't know, still got ma hopes high for the second one XD that's cooler.
In Korra season 3 (I think it was Original Airbenders), Tenzin again cites the legend that airbending was originally learned from the flying bison. (Curiously, Korra didn't contradict him, even though her vision of Wan had shown a lion turtle granting airbending.)
My current theory is that the "humans learned bending from animals/the Moon" story is actually more of a legend, and the lion turtle origin is the true one.
When Aang encountered a lion turtle, the implication seemed to be that few people were even aware of its existence, so it's not surprising that its connection with the origin of bending was also forgotten.
During Wan's story he receives the power of fire, but there is a scene later where he is learning to bend the fire to his will, and wield it. In this scene the hunt master narrates that he's never seen anyone wield fire like Wan, while visually we see Wan doing the dancing dragon form with a white dragon (a nice nod to TLA).
Can you please add detail to your answer, e.g. in S1E13 (if possible even the minute:second the scene begins) we see that Wan does X, in S1E14 it's shown that... and so on.
I'm not sure how this answers the question of how the story of bending matches from TLoK and TLA. Could you [edit] to clarify?
No. The lion turtles were the origin of the bending, but the humans wanted to be independent, so the lion turtles gave an element to whoever lived on them. (IMPORTANT POINT) But the lion turtles gave the element power to the animals too, so if one day one of the element nations were wiped out, the non-benders could get that nation's bending to recreate it, so that the avatar cycle does not stop. Or maybe an animal gets to become the avatar. (Just guessing on the last sentence.)
How to remove box and whiskers from plot() function in R?
I'm trying to do a very simple plot using the plot() function in R where I plot out the weights and diets of chicks in a stratified plot. For other simple plots like this I've been able to just use the plot() function and it's gone fine. But for some reason, R insists on plotting this as a box-and-whiskers chart instead of plotting all the values themselves. I've tried many different solutions I've found on the Web, from box=FALSE to box=0 to bty="n" to type="p" but nothing works. R always plots it as a box-and-whiskers chart. I can use whisklty=0 to get rid of the whiskers, but nothing I've tried (including all possible combinations of the above solutions) will replace the boxes with the actual values I want.
If the x-axis data is categorical, plot will return a boxplot by default. You could run plot.default() instead of plot() and that will give you a plot of points.
Compare, for example:
plot(iris$Species, iris$Petal.Width)
plot.default(iris$Species, iris$Petal.Width)
If you type methods(plot) in the console, you'll see all of the different kinds of plots the plot function returns, depending on what type of object you give it. plot.default is the "method" that gets dispatched when you provide plot with two columns of numbers. plot.factor gets dispatched when the y-values are numeric and the x-values are categorical (run ?plot.factor for details). If you do plot(table(mtcars$vs, mtcars$cyl)) the plot.table method gets dispatched. And so on.
Wow, this worked perfectly! Thank you! I will accept your answer as soon as the question is old enough for me to. I'm surprised I couldn't find this answer on here somewhere already; seems like such an obvious problem to run into
Differentiate between questions and answers in the suggested edit queue
Possible Duplicate:
Suggested edits: Better visual indication of whether edit is to a question or an answer
I've now accidentally rejected a few edits because they edited the code, and code in questions shouldn't be edited by others since it might remove the whole problem that the question is about. BUT on answers, it's fine to edit the code a little for various reasons.
The problem I've found is that there doesn't seem to be a way to differentiate whether you're dealing with a question or an answer, when looking at the suggested edit page, so if I don't go into the post, I can make a bad decision on which one it is, and find out later I'm wrong (when it's too late to change my vote).
Could we please have some differentiation between questions and answers on the suggested edit page?
Sorry, I realized this isn't strictly a duplicate, although the fact that you didn't know where to look is probably a sign that it should be made much more clear.
@tim, I agree that it's a dupe, and voted to close.
There is a way to differentiate, it's just fairly subtle. The user box at the bottom of the suggestion says "answered {date stamp}" if the post is an answer, and "asked {date stamp}" if the post is a question:
Thanks. Now I know where to look.
Spring Boot & Couchbase Stack Overflow
Hi I have 2 simple classes in a bidirectional relationship which are causing infinite recursion resulting in an eventual stack overflow when serialising into JSON. I am using the Spring Boot 2.2.2 starter project for Couchbase and Spring Web dependencies. My maven pom.xml file is as follows:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.2.2.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>org.stevie</groupId>
<artifactId>stack-overflow-error</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>stack-overflow-error</name>
<description>Reproduce Couchbase API Failure</description>
<properties>
<java.version>13</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-couchbase</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
My domain classes are as follows:
Household.java
import java.util.ArrayList;
import java.util.List;
import org.springframework.data.couchbase.core.mapping.Document;
import com.couchbase.client.java.repository.annotation.Field;
import com.couchbase.client.java.repository.annotation.Id;
import com.fasterxml.jackson.annotation.JsonIdentityInfo;
import com.fasterxml.jackson.annotation.JsonManagedReference;
import com.fasterxml.jackson.annotation.ObjectIdGenerators;
@Document
@JsonIdentityInfo(
generator = ObjectIdGenerators.PropertyGenerator.class,
property = "@id")
public final class Household {
@Id
private final String key;
@Field
private String type = "household";
@Field
private final String headSurname;
@Field
private final String headForename;
@Field
private final List<Member> members;
public static Household of(String key, String headSurname, String headForename) {
return new Household(key, headSurname, headForename);
}
Household(String key, String headSurname, String headForename) {
super();
this.key = key;
this.headSurname = headSurname;
this.headForename = headForename;
this.members = new ArrayList<>();
}
public String getKey() {
return key;
}
public String getHeadSurname() {
return headSurname;
}
public String getHeadForename() {
return headForename;
}
@JsonManagedReference
public List<Member> getMembers() {
return members;
}
@Override
public String toString() {
return String.format("Household [key=%s, headSurname=%s, headForename=%s]", key,
headSurname, headForename);
}
}
Member.java
import org.springframework.data.couchbase.core.mapping.Document;
import com.couchbase.client.java.repository.annotation.Field;
import com.couchbase.client.java.repository.annotation.Id;
import com.fasterxml.jackson.annotation.JsonBackReference;
import com.fasterxml.jackson.annotation.JsonIdentityInfo;
import com.fasterxml.jackson.annotation.ObjectIdGenerators;
@Document
@JsonIdentityInfo(
generator = ObjectIdGenerators.PropertyGenerator.class,
property = "@id")
public final class Member {
@Id
private final String key;
@Field
private final String name;
@Field
private final Household household;
public static Member of(String key, String name, Household household) {
return new Member(key, name, household);
}
Member(String key, String name, Household household) {
super();
this.key = key;
this.name = name;
this.household = household;
}
public String getKey() {
return key;
}
public String getName() {
return name;
}
@JsonBackReference
public Household getHousehold() {
return household;
}
@Override
public String toString() {
return String.format(
"Member [key=%s, name=%s]", key, name);
}
}
My unit test (JUnit 5) is
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.couchbase.core.CouchbaseTemplate;
import org.stevie.overflow.StackOverflowErrorApplication;
import org.stevie.overflow.domain.Household;
import org.stevie.overflow.domain.Member;
import com.couchbase.client.java.Bucket;
@SpringBootTest(classes = StackOverflowErrorApplication.class)
class HouseholdRepositoryTest {
@Autowired
private HouseholdRepository repo;
@Autowired
private CouchbaseTemplate template;
@BeforeAll
static void setUpBeforeClass() throws Exception {
}
@AfterAll
static void tearDownAfterClass() throws Exception {
}
@BeforeEach
void setUp() throws Exception {
}
@AfterEach
void tearDown() throws Exception {
}
@Test
void throwsStackOverflowExceptionTest() {
String key = manuallyGenerateKey("household");
Household house = Household.of(key, "BLOGGS", "Joe");
key = manuallyGenerateKey("member");
Member m1 = Member.of(key, "Joe Bloggs", house);
house.getMembers().add(m1);
/** Following code throws Stack overflow exception!!!!! **/
house = repo.save(house);
}
private String manuallyGenerateKey(String prefix) {
Bucket bucket = template.getCouchbaseBucket();
Long nextId = bucket.counter("counter::" + prefix, 1, 100).content();
String key = prefix + "::" + nextId;
return key;
}
}
I want the JSON for the household to contain its member objects and the member to contain its household object. I have tried Jackson annotations such as @JsonManagedReference and @JsonIdentityInfo but Spring appears to ignore them. Thanks.
Not sure why you want to do that; it's like sending a lot of data twice.... I suggest you use @JsonIgnore on your member object so it does not include the household, and just iterate through the returned data on your client. Now, if there are different flows in which you need e.g. the household and its members, and in another one you need a member and its household, I suggest you create custom responses for those scenarios. Try to avoid over-fetching too much data.
Hi, I have resolved this issue by fully normalising: Household now has a list of member keys, and Member has a single household key. I would still like to know why Spring doesn't detect the @JsonIgnore, @JsonManagedReference, and @JsonIdentityInfo annotations.
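For reference, a minimal sketch of the normalized shape described in the comment above: each side stores only the other side's document keys, so Jackson never has a cycle to follow. The class and field names here are illustrative, and the Couchbase annotations are omitted:

```java
import java.util.ArrayList;
import java.util.List;

// Household no longer embeds Member objects, only their document keys.
final class Household {
    private final String key;
    private final List<String> memberKeys = new ArrayList<>();

    Household(String key) { this.key = key; }

    String getKey() { return key; }
    List<String> getMemberKeys() { return memberKeys; }
}

// Member likewise points back with a key, not an object reference.
final class Member {
    private final String key;
    private final String householdKey;

    Member(String key, String householdKey) {
        this.key = key;
        this.householdKey = householdKey;
    }

    String getKey() { return key; }
    String getHouseholdKey() { return householdKey; }
}
```

The trade-off is that loading a full household now takes two repository calls (fetch the household, then fetch its members by key), which is the usual cost of normalizing a document model.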
Are foreign consulates located in the United States considered foreign territory when it comes to abortion laws?
My friend and I got into a debate whether a woman could step into a foreign consulate, take an abortion pill, then leave without fear of prosecution if abortion is illegal in the State where she resides. (More of a hypothetical question considering a possible scenario if Roe v. Wade is overturned.)
Women on Waves has a similar idea - ships are under the laws of their home country, when in international waters (12 miles out).
A U.S. consulate or embassy outside of the United States is not within the jurisdiction of any U.S. state, even though it is subject to the authority of the leading diplomatic person on site, who must generally comply with U.S. law.
The whole issue of consulates is irrelevant, states can absolutely pass laws that apply even if you are not in the US.
Another possible "loophole" in evading State laws is conduct an illegal activity on a Federal Indian reservation where said activity is legal. Although I believe Congress still has the power to change reservation laws.
In this hypothetical where it is illegal for her to take the pill, it probably would also be illegal for someone to give it to her outside the embassy to bring it inside and take it, or for the embassy itself to have the pills shipped in to give to her?
@Davislor The Vienna Conventions on Consular Relations state that a package in the hands of the consulates designated courier "shall be inviolable" and that "The consular bag shall be neither opened nor detained." What constitutes a consular bag can be defined pretty broadly (it can be multiple packages). It can be transported by a carrier to an entry port and passed directly to the official courier, but the carrier may not act as the courier in the receiving country. The courier "shall enjoy personal inviolability and shall not be liable to any form of arrest or detention".
IANAL, but the actual problem isn't swallowing the pill but getting it somewhere, isn't it? Like with alcohol and being under age: at least in Europe, when you are under age, you can technically drink beer, but no one can sell or give it to you.
@user8675309 In that case, it might be hard to keep the consulate from giving the pills to their own staff. If word got out that any woman could go in and ask for the pill, though, that would at minimum strain diplomatic relations and and possibly lead to the consulate being shut down. There’s a reason foreign consulates don’t sell illegal narcotics.
A woman can be persecuted for committing abortion regardless of the law. You probably mean "prosecuted".
@Jan'splite'K. In the UK it is legal to give alcohol to anyone over the age of 5.
@Davislor I bet it would be a lot harder to justify a shutdown if it were only violating a local or state law; to my understanding, the state has no authority over foreign relations, and would have to escalate to the Feds to get results anyway. See, for instance, the eternal complaints over cars with diplomatic plates in Washington and New York City, where local governments can only shake their fists at parking scofflaws.
There are certainly many possible loopholes, but they all tend to require travel. The bottom line is that poor women will have less access, and therefore there is no equal protection.
@DrSheldon Unequal ability to circumvent criminal law isn't normally considered to be a a violation of equal protection for constitutional purposes, as far as I know.
@Acccumulation Corrected
The idea that a diplomatic mission is "foreign soil" is an exaggeration of the legal situation. Certain individuals (diplomats) are immune from arrest, so if the woman is the ambassador, she can't be arrested, period. The limits on legal actions w.r.t. entering a mission are spelled out in the Vienna Convention on Diplomatic Relations 1961, Art. 21-25. See Art. 22: "1. The premises of the mission shall be inviolable. The agents of the receiving State may not enter them, except with the consent of the head of the mission". Otherwise, criminal actions carried out within the premises of a diplomatic mission can be prosecuted under the laws of the receiving jurisdiction just as though the action had taken place outside of the mission; being inside a mission does not confer immunity from prosecution.
As a general rule, when you enter an foreign embassy you do not have to go through passport formalities upon entering and exiting, and your single-entry visa is not "used up" by visiting your home country embassy (or any other embassy). That is because an embassy is not foreign soil.
Note: for Consulates the Vienna Convention Consular Relations of 1963 apples. See What is the legal basis of diplomatic immunity? - Law Stack Exchange
Do lower ranking diplomats not also have immunity? I never thought that it was just the ambassador.
@JosephP. Originally it was only the Ambassador. Until the 1961/3 Conventions everything else was based on "Gentlemen's agreements".
And do the gentlemen's agreements remain now? Surely they are enshrined in writing somewhere? What is the true written basis?
The existing "Gentlemen's agreements" became, for the most part, the core of the 1961/3 Conventions.
The term used in the Diplomatic/Consular conventions is not immunity but inviolable (both for the person and premises). For a Consulate, that part of the consular premises which is used exclusively for the purpose of the work of the consular post is 'inviolable'. So a visitor could be arrested in any room not covered in Article 5 (Consular functions), just as such rooms could searched through by the host country authorities. This does not apply to an Embassy.
@MarkJohnson both terms are used. They denote different things.
Inviolability refers to search and seizure. An inviolable official cannot be arrested or detained (Art 29 CDR). Immunity refers to legal liability (whether criminal or civil). An immune official cannot be subjected to the authority of the courts (Art 31 CDR). Under the CDR, a "diplomatic agent" is both inviolable and immune (with limited exceptions to immunity). "Diplomatic agent" includes any staff member "with diplomatic rank."
@JosephP. The two Vienna conventions (one each for diplomatic and consular relations) confer varying degrees of immunity on all diplomatic and consular staff. The ambassador is by no means the only officer to enjoy "full" diplomatic immunity; this is extended to the diplomatic staff, including deputy ambassadors and other lower-level diplomats whose titles may not include the word "ambassador." Consular officials and "technical" staff enjoy "official acts" immunity, so they can be prosecuted for crimes committed in a personal capacity.
@phoog The Vienna Convention Diplomatic Relations (CDR), 1961 does not apply to a Consulate. Consular officers (who are not diplomatic agents) can be arrested or detained for a grave crime (Article 41 Vienna Convention Consular Relations (CCR), 1963).
Let us continue this discussion in chat.
Thanks - although I suppose if a U.S. citizen is temporarily appointed ambassador for the foreign consulate, they are outside U.S. jurisdiction.
@RobertF: No, this would only happen when the US accepts that person as an ambassador. It's unlikely that the US would accept such appointments of random US citizens.
@MSalters Understood, thank you
The main difference between "foreign soil" and "inviolable" would be that if I commit what is a crime according to US law inside a diplomatic mission, first, the ambassador could allow US police to make an arrest if they wish, and second, it would legally be a crime that happened in the USA, so I could lose protection as soon as I leave the embassy.
@MarkJohnson "Consulates ... Relations of 1963 apples." --> had to look twice at that old fruit. ;-)
Are foreign consulates located in the United States considered foreign territory when it comes to abortion laws?
No, a consular premises is not considered foreign territory (for any law).
The main term used in the Diplomatic/Consular conventions is not immunity but inviolable (both for the person and premises).
For a Consulate (where the Vienna Convention Consular Relations (CCR), 1963 applies), that part of the consular premises which is used exclusively for the purpose of the work of the consular post is 'inviolable'. (Article 31)
So a visitor could be arrested in any room not covered in Article 5 (Consular functions), just as such rooms could, in theory, be searched through by the host country authorities.
This does not apply to an Embassy (where the Vienna Convention Diplomatic Relations (CDR), 1961 applies). The whole premises of an Embassy is 'inviolable' (Article 22). It is also not considered a foreign territory.
Both conventions contain no special provisions about visitors to a Consulate or an Embassy premises. It is therefore up to the host country to decide how to deal with any violations of their law committed inside such premises.
See also: What is the legal basis of diplomatic immunity? - Law Stack Exchange
You don’t specify what jurisdiction this is in or what this hypothetical law would say. Generally, those countries that restrict drugs such as misoprostol make it illegal for someone to distribute or receive it without a prescription, not merely to swallow it. If a woman could obtain the pill in the first place without being caught, finding a safe place to take it is probably not the problem.
It is plausible that an embassy might get away with importing the pills (through its diplomatic pouch that the police are not allowed to search, to the premises they are not allowed to enter), and giving it out through its own staff, who have diplomatic immunity—if they were discreet about it. It would be very hard to prove a case in court without the host country revealing that it violated a treaty itself. If the woman who uses it has diplomatic immunity, she cannot be prosecuted either, only sent home, even if the hosts did find out and actually care what foreigners do inside their embassy.
Since you ask about consulates: the privileges of consulates are weaker than those of embassies. Under section II, article 35 of the Vienna Convention on Consular Relations, “if the competent authorities of the receiving State have serious reason to believe that the bag contains something other than [official correspondence and documents or articles intended exclusively for official use], they may request that the bag be opened in their presence by an authorized representative of the sending State,” and if the request is refused, the bag must be returned undelivered. This could be used to prevent the delivery of drugs and medical equipment to a consulate. Also, the police may search a consulate, except for rooms used “exclusively for the purpose of the work of the consular post.” (Section I, Article 31, part 2) Finally, consular officials do not have as total a diplomatic immunity as ambassadors or heads of state, and may be arrested for “grave crimes.” (Section II, article 41). (Thanks to Mark Johnson for the reference.) They also have a much narrower exemption from civil liability, which some jurisdictions use as a means to shut down abortion providers. To answer the question you asked literally, consulates are not considered the territory of the “sending nation.”
There are reasons, however, that embassies don’t sell heroin over the counter. If the host country thinks that a diplomatic mission is abusing its privileges to make trouble, they can respond in several ways, from having their own ambassador talk to the consul’s boss and ask them to stop, to expelling the ambassador and asking the country to send someone else, to shutting down the consulate entirely. The police might also investigate people who go in for no apparent reason.
In the real world, though, it never reaches that point in countries that ban abortion, because diplomats mind their own business.
The selling of heroin would be considered a grave crime, and a consular official can be arrested or detained for such crimes (Article 41, Vienna Convention Consular Relations (CCR), 1963). Any room inside a Consulate where heroin was sold could be raided by the host country, since that room is not being used exclusively for the purpose of the work of the consular post (Article 31 CCR). The rules for a Consulate and Embassy (and their staff) are different. It's like applying the car traffic rules to a ship - they are different and should not be mixed.
@MarkJohnson For now, I'll remove consulate but leave embassy. Thank you for the link. Are there other errors I should correct?
@MarkJohnson if the sending country claims that a room is being used only for consular business and the receiving country believes that heroin is being sold there, a factual dispute exists. The receiving country can't just search the room based on the disputed claim; if they could, inviolability would be meaningless.
@phoog Possible ways out of the conundrum: A. an informant tells them heroin is being sold there, B. if the consulate were to happen to catch on fire, the consent of the sending nation to enter the premises could be “assumed,” or C: the receiving state could declare the consular official persona non grata and tell the other country to send someone else who’ll cut that out, or even shut down the consulate and send everyone home, without giving a reason.
@phoog In the case of heroin, D: search/test people leaving the consulate. E: or if junkies are all shooting up inside the premises, someone will OD. Although that was a silly reductio for the original example, and I can’t think of any country that would stop women on the street and make them take a blood test for misoprostol.
A claim by an informant is evidence weighing on the factual dispute, but it does not resolve the factual dispute. It's just not feasible to search a consulate without consent unless you're prepared to cause a diplomatic incident.
@phoog I really believe that any situation where one country said, “We searched your consulate and found heroin. Maybe one of your diplomats went rogue and started dealing without your knowledge,” and the sending nation said, “No there wasn’t any heroin! You’re lying! In fact, you broke international law because the room you searched was being used exclusively for official business,” what would happen is termination of consular relations between those two countries.
"They would still, however, be immune to most civil lawsuits": consular officers are only immune if the civil liability arises from an official act. That would not be the case here.
Let us continue this discussion in chat.
google-services.json file for each flavor in android studio
I have different productFlavors specified in my build.gradle file
dev {
applicationId ""
versionCode 83
versionName "2.2.1"
}
staging {
applicationId ""
versionCode 119
versionName "2.8.1"
}
Both flavors have different project ids etc. for Google services. I have placed the respective google-services.json file in the assets folder of each flavor like this:
src>dev>assets>google-services.json
and
src>staging>assets>google-services.json
The google-services.json file must be present at the root level at compilation.
What should I specify in my build.gradle file so that the respective file or its content is copied to the root folder when I build a particular flavor?
You could change the package name for each flavor. google-services.json must stay in the root folder.
@NguyễnHoàiNam can you elaborate a bit more.
I'm pretty sure that you must set up google-services.json from your developer console. It manages setup by app, with the ability to add multiple package names. That's how I use one json for dev/prod flavors. Take a second look at how you get the json file and try to add some juice.
Yes, you are right in that sense, and I am using multiple package names for the flavors. The problem is that 2 different Google console projects are being used, so basically the project id and name are different for both, and the google-services.json files respectively.
@Nguyễn Hoài Nam I'm not clear on what you're saying. The issue seems to be that the developer has multiple application ids and wants to know how to link a google-services.json to each one. I'm also wondering the same thing. Forgive me if I'm wrong, can you enlighten me? But I think the answer can be found here: http://stackoverflow.com/questions/30772201/google-services-json-for-different-productflavors
@j2emanue I'm pretty sure my comment is old as the question. The point here maybe: The problem is that there are 2 different google console projects are being used --> different than the usual case when you setup your app from one Google account. Latest Android Studio and GCM plugin (3.0.0) help you to solve the problem I think.
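A side note, not stated as fact in this thread but consistent with the last comment about plugin 3.0.0: newer versions of the Google Services Gradle plugin look for google-services.json inside each flavor's source-set folder (e.g. app/src/dev/google-services.json), so no copying is needed. If you are on an older setup, the copy-before-build idea can be sketched in plain shell — the paths, flavor names, and JSON contents below are purely illustrative:

```shell
# Sketch: keep one google-services.json per flavor under src/<flavor>/
# and copy the matching one to the module root before building.
# The demo layout created here is an illustrative assumption.
set -e
mkdir -p demo/src/dev demo/src/staging
echo '{"project_id":"dev-project"}'     > demo/src/dev/google-services.json
echo '{"project_id":"staging-project"}' > demo/src/staging/google-services.json

FLAVOR=staging   # would come from the build invocation in practice
cp "demo/src/$FLAVOR/google-services.json" demo/google-services.json
cat demo/google-services.json
```

In a real build this logic would more idiomatically live in a Gradle Copy task wired to run before the google-services processing task for the variant, but the file-shuffling idea is the same.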
C#: how assign a value to a string var without using switch
First of all I must say that I am pretty new to C#. I have written this code block to assign a value to a string var depending on the value of another var, using the switch statement:
switch (_reader.GetString(0))
{
case "G":
permiso.Area = "General";
break;
case "SIS":
permiso.Area = "Sistems";
break;
case "SOP":
permiso.Area = "Development";
break;
case "HLP":
permiso.Area = "Support";
break;
}
Can I make this in an easier way in C#?
Thanks!
Can you define "easier" ? I mean.. that's already done, so it's pretty hard to do something easier than "already done"..
I mean, does there exist in C# some statement that does something like this: my_string=my_string.decode(old_value0,new_value0, old_value1,new_value1, ...)
Is var d = "G SIS SOP HLP".Split().Zip("General Sistems Development Support".Split()).ToDictionary(t => t.Item1, t => t.Item2); or var d = new Dictionary<string, string>(){ ["G"] = "General", ["SIS"] = "Sistems", ["SOP"] = "Development", ["HLP"] = "Support" } followed by permiso.Area = d[_reader.GetString(0)] "easier"?
Readable Code Matters...support RCM to stop the madness
@ŇɏssaPøngjǣrdenlarp soo... no code in comments? :D
@CaiusJard I think (hope) what Ñyssa meant is that the OP's original code was very readable. Attempting to reduce code length in the vain hope that that makes it "easier" is something to be discouraged. I always tell my team members, when writing code, to think of the unfortunate programmer, who in three years' time needs to amend your code. If it takes them twice as long to understand it, you have written it badly in the first place.
It is also worth pointing out, that not only is the switch statement highly readable, it is also very fast at runtime.
You can use a Dictionary<string, string>(), which can store your "switch case" string as the key and the "switch case" value as the value.
Example:
var dict = new Dictionary<string, string>()
{
{"G", "General"},
{"SIS", "Sistems"},
...
}
So your code in order to access will be:
var key = _reader.GetString(0);
if(dict.TryGetValue(key, out var value))
{
permiso.Area = value;
}
else
{
// handle not exists key situation
}
Modern C# has a pattern-matching switch expression:
permiso.Area = _reader.GetString(0) switch {
"G" => "General",
"SIS" => "Sistems",
"SOP" => "Development",
"HLP" => "Support",
_ => throw new InvalidOperationException($"The value {_reader.GetString(0)} is not handled")
};
C# will complain at you if you don't include the "else" at the end _ =>
I mean, does there exist in C# some statement that does something like this: my_string=my_string.decode(old_value0,new_value0, old_value1,new_value1, ...)
If you're after something like Oracle's DECODE, you can write it:
string Decode(string expr, params string[] arr){
for(int i = 0; i + 1 < arr.Length; i += 2)
if(arr[i] == expr)
return arr[i+1];
return arr.Length % 2 == 0 ? null : arr[arr.Length-1];
}
You'd use it like:
permiso.Area = Decode(reader.GetString(0), "G", "General", "SIS", "Sistems", "SOP", "Development", "HLP", "Support");
If you want an ELSE, pass an odd length array (something after the "Support")
If you want to be able to call it on a string, such as reader.GetString(0).Decode("G" ...) you can declare it in a static class and precede the first argument with this :
static string Decode(this string expr, ....)
That will make it an extension method, so it can be called "on a string"
If you either have the same mappings occurring in multiple locations or generally use the same strings repeatedly (in conditions, e.g.), a more sophisticated and cleaner approach would be to use enums and their descriptions in the first place. This makes the code more readable, as you can (and should) use the unique enum in the code and fetch its description when needed.
You need this enum extension method to read enum descriptions:
using System;
using System.ComponentModel;
// Extension methods must live in a static class:
public static class EnumExtensions {
    public static string Description(this Enum source) {
        DescriptionAttribute[] attributes = (DescriptionAttribute[])source
            .GetType()
            .GetField(source.ToString())
            .GetCustomAttributes(typeof(DescriptionAttribute), false);
        return attributes.Length > 0 ? attributes[0].Description : string.Empty;
    }
}
Prepare the enum and its corresponding dictionary mappers. If possible, those enums should only have one unique description but you can define additional mapper dictionaries to your likings, e.g. when you need a simple short to long text mapper as in your example.
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
// enum and its description
public enum PermissionArea {
[Description("Development")]
Development = 1,
[Description("General")]
General,
[Description("Sistems")]
Sistems,
[Description("Support")]
Support
}
public static class MyEnumDicts {
// default mapping of enum to its description
public static readonly Dictionary<PermissionArea, string> PermissionAreaToText = new Dictionary<PermissionArea, string>() {
{ PermissionArea.Development, PermissionArea.Development.Description() },
{ PermissionArea.General, PermissionArea.General.Description() },
{ PermissionArea.Sistems, PermissionArea.Sistems.Description() },
{ PermissionArea.Support, PermissionArea.Support.Description() }
};
// mapping of enum to short text
// (only if needed, as it is better to use only one unique
// value, which is already set as the description in the enum itself)
public static readonly Dictionary<PermissionArea, string> PermissionAreaToShortText = new Dictionary<PermissionArea, string>() {
{ PermissionArea.Development, "SOP" },
{ PermissionArea.General, "G" },
{ PermissionArea.Sistems, "SIS" },
{ PermissionArea.Support, "HLP" }
};
// add reverse mappers via linq
public static readonly Dictionary<string, PermissionArea> TextToPermissionArea = PermissionAreaToText.ToDictionary(m => m.Value, m => m.Key);
public static readonly Dictionary<string, PermissionArea> ShortTextToPermissionArea = PermissionAreaToShortText.ToDictionary(m => m.Value, m => m.Key);
}
The usage could be as follows:
public void MyMethod(string permissionAreaShortText) {
try {
// map to enum (no switch or ifs etc. needed here)
PermissionArea permissionArea = MyEnumDicts.ShortTextToPermissionArea[permissionAreaShortText];
// now you can work via enums and do not
// have to hassle with any strings anymore:
switch (permissionArea) {
case PermissionArea.Development: ...; break;
case PermissionArea.General: ...; break;
case PermissionArea.Sistems: ...; break;
case PermissionArea.Support: ...; break;
}
// output/use its description when needed:
string permissionAreaText = permissionArea.Description();
// ...
} catch (Exception ex) {
// error handling: the short text is not a permission area
// ...
}
}
VoiceOver is not able to read mobile web
I have been away from accessibility\programming for a while and I am helping a friend with his responsive web site. We have fixed the code parts for accessibility on the desktop, and Android\TalkBack is usable. The issue is that VoiceOver, in iOS 12.14.5, doesn't recognize any content on the page. I remember running into an issue like this with earlier versions of iOS but can't remember the solution. I can't share any code because they're peculiar that way. Any suggestions? Sorry for the lack of code.
stick a link to the page if you can't provide the code here, without seeing what you have done there could be 200 different things that could cause this. This question will get closed otherwise as it 'needs details or clarity'. I haven't marked it for closing yet to give you chance to respond so we can help you :-)
To be clear, it is not my code; it was something I noticed on their website while evaluating it. It does work with Android\TalkBack without any issues.
Sorry, forgot link, https://www.medtronic.com/us-en/index.html
Why is the enol form unstable?
Alkynes undergo acid-catalyzed addition of water across the triple bond in the presence of a mercuric ion catalyst. The products are the enol and keto forms. I want to know why the keto form is more stable than the enol form.
Try a Google search... "keto-enol tautomerism"
http://chemistry.stackexchange.com/questions/18904/which-is-the-more-stable-enol-form
It's basically because the C=O bond in the keto form is more stable than the C=C bond in the enol form. And the difference is probably close to 12 kcal/mol.
Fiddlercap not capturing in IE8
I'm debugging a JS problem on a client's app. So I gave him Fiddlercap to record him reproducing the problem.
Except FiddlerCap isn't capturing anything (it only captures the FiddlerCap welcome URL). He's a developer, so I am sure he understood the instructions.
Any idea what could cause this, all I know is the app is running remotely and is not HTTPS.
Thanks
Unfortunately, without more details, it's unlikely that anyone will be able to help. What browser is he using? If it's Firefox, he needs to configure it to use the System Proxy setting.
@EricLaw thanks - IE8. Unfortunately it's one of those situations where you're already debugging something, and the last thing you want to do is debug the debugger with someone who just wants the problem gone... Anyway, in this case there was a happy ending via a different route.
When removing old carpet in preparation for the installers should I also remove the old tack strip?
I've got carpet installers booked in to replace the carpet in 2 rooms of my apartment. In preparation for them I started removing the old carpet and then old carpet tack strips so they'd have a clear concrete floor to work on.
I got partway through removing the tack strips in the first room when I realised that the strips were probably in decent enough condition to be usable again (the nails were a little rusty though).
Should I be removing the tack strips at all?
Ask the installers. On wood floors I would lean toward removing them, but on concrete they may want them left in place. (Last time we had carpet replaced the neighbors must have thought we'd gone around the bend. After pulling up the old carpet we spent evenings bouncing around trying to find all of the squeaky subfloor spots and screwing them down. Also took the opportunity to cut holes in the floors to fix lighting fixtures in the ceiling below. No concrete was injured.)
If the tack strips are in good condition, most installers would prefer that they remain. It saves them time and material and they will probably thank you for it.
Plot explicit cdf instead of ecdf in R
I have adjusted the parameters (lambda, mu, sigma) for a mixture of two normals fitted to my data. Now I would like to plot the cdf of this model using the explicit function instead of the ecdf. Is there any way to do this, or do I have to simulate data so that I can then use ecdf again?
The explicit function is something like:
ipc_values_EM$lambda[1] * dnorm(x, ipc_values_EM$mu[1], ipc_values_EM$sigma[1])
+
ipc_values_EM$lambda[2] * dnorm(x, ipc_values_EM$mu[2], ipc_values_EM$sigma[2])
(as you can note, is the mixture of two normals different mus and different sigmas)
As the name of the function ecdf() says, it is empirical and only runs on samples.
If you want the exact cdf of a Gaussian, the function you are looking for is pnorm(). Here is a demonstration.
x <- seq(from=-5, to=5, by=.1)
y <- pnorm(x)
plot(x, y, type='l')
If you replace dnorm() by pnorm() in your code, and x by the range of values you want to take the cdf over you should get the result you are looking for.
I tried a little different approach but with the same spirit in mind, just used the curve() function and the code is as follows:
curve(ipc_values_EM$lambda[1] * pnorm(x, ipc_values_EM$mu[1], ipc_values_EM$sigma[1])
+
ipc_values_EM$lambda[2] * pnorm(x, ipc_values_EM$mu[2], ipc_values_EM$sigma[2]), from=-0.10, to=0.07, add=TRUE, col="blue")
You might be interested in using the distr package for plotting the theoretical distribution functions for mixture distributions. Here is a quick example:
library(distr)
tmp <- UnivarMixingDistribution( Norm(10,2), Norm(15,1), mixCoeff=c(1,2)/3)
plot(tmp)
I think this approach works for plotting a cdf of a standard normal. It seems to give almost the same answer as pnorm(x, 0, 1) with the standard normal and also allows the function to be modified.
sigma <- 1
mu <- 0
integrand <- function(x) {(1/(sigma*sqrt(2*pi))) * exp(-((x-mu)^2)/(2*(sigma^2)))}
my.cdf <- matrix(0, ncol=2, nrow=length(seq(-5,5,0.01)))
m <- 1
for(i in seq(-5,5,by=0.01)){
my.cdf[m,1] <- i
my.cdf[m,2] <- as.numeric(integrate(integrand, lower = -5, upper = i)[1])
m <- m+1
}
plot(my.cdf[,1], my.cdf[,2])
x <- seq(-5,5,0.01)
my.cdf2 <- pnorm(x, 0, 1)
round(my.cdf[,2],5) - round(my.cdf2,5)
Here I modify the function to approximate what I suspect you want. I am not sure whether this gives the solution you are after:
myconstant1 <- 0.4
sigma1 <- 1
mu1 <- 0
myconstant2 <- 0.2
sigma2 <- 2
mu2 <- 3
integrand <- function(x) {myconstant1 * ((1/(sigma1*sqrt(2*pi))) * exp(-((x-mu1)^2)/(2*(sigma1^2)))) +
myconstant2 * ((1/(sigma2*sqrt(2*pi))) * exp(-((x-mu2)^2)/(2*(sigma2^2)))) }
my.cdf <- matrix(0, ncol=2, nrow=length(seq(-5,10,0.01)))
m <- 1
for(i in seq(-5,10,by=0.01)){
my.cdf[m,1] <- i
my.cdf[m,2] <- as.numeric(integrate(integrand, lower = -5, upper = i)[1])
m <- m+1
}
jpeg(file="myplot.jpeg")
plot(my.cdf[,1], my.cdf[,2])
dev.off()
The type or namespace 'name' could not be found
I am trying to build the solution 'NopCommerce' (the solution name) of my project.
But it shows the following type of error:
The type or namespace name 'UasParser' could not be found (are you missing a using directive or an assembly reference?)
I know this question has been asked quite a few times, and most of the time I came across a solution which says that it is a client profile issue and that I should
set ".Net Framework 4 Client Profile" to ".Net Framework 4"
this would solve my problem and give me a successful build.
But the thing is, I am new to .NET and I don't know where I can do that. Please tell me the steps to change the client profile.
Thanks a lot in advance.. :)
Which version of NopCommerce do you have? As far as I know, NC doesn't have the client profile in any of its projects by default.
How to show that simple random sample sensitivity is unbiased for population sensitivity
In diagnostic testing, sensitivity $S$ is the probability that the test gives a positive result given that you have the condition being tested. From a simple random sample of people who take the test, an estimate of sensitivity is $s=n_{++}/n_{+}$, where $n_{++}$ is the number of people in the sample who test positive and have the condition and $n_{+}$ is the number of people who have the condition.
I want to get $E(s)$ and show that this is unbiased for sensitivity. That is,
$E(s)=E(n_{++}/n_{+})=S$
This follows easily if I can do $E(n_{++}/n_{+})=E(n_{++})/E(n_{+})$ but I know that in general, $E(X/Y)\neq E(X)/E(Y)$. Is there a reason why I might be able to do this for this setting or is there a different approach?
Following the arguments of Duris, F., Gazdarica, J., Gazdaricova, I. et al. Mean and variance of ratios of proportions from categories of a multinomial distribution. J Stat Distrib App 5, 2 (2018). https://doi.org/10.1186/s40488-018-0083-x
Consider a sample of $n$ observations from a multinomial distribution with $k=4$, representing the number of True Positives, False Negatives, False Positives, and True Negatives in the sample. Let $X_1$ be the number of True Positives and $X_2$ be the number of False Negatives. Then the sample sensitivity is $X_1/(X_1 + X_2)$.
Define a function $f(X_1,X_2) = X_1/(X_1 + X_2)$ and consider a Taylor expansion of degree 2 around the point $u = (\mu_{X_1}, \mu_{X_2})$.
$$
f(X_1,X_2) \approx f(u) + (X_1 - \mu_{X_1})\frac{\partial f(u)}{\partial X_1} + (X_2 - \mu_{X_2})\frac{\partial f(u)}{\partial X_2} +\\
\frac{1}{2}(X_1 - \mu_{X_1})^2\frac{\partial^2 f(u)}{\partial X_1^2} +
\frac{1}{2}(X_2 - \mu_{X_2})^2\frac{\partial^2 f(u)}{\partial X_2^2} + \\
(X_1-\mu_{X_1})(X_2 - \mu_{X_2})\frac{\partial^2 f(u)}{\partial X_1 X_2}
$$
Taking expected values leads to
$$
E[f(X_1,X_2)] \approx f(u) + \frac{1}{2}\frac{\partial^2 f(u)}{\partial X_1^2}\sigma_{X_1}^2 + \frac{1}{2}\frac{\partial^2 f(u)}{\partial X_2^2}\sigma_{X_2}^2 + \frac{\partial^2 f(u)}{\partial X_1 X_2}\sigma_{X_1X_2}
$$
Calculating the derivatives and substituting the values $\mu_{X_i} = np_i$, $\sigma_{X_i}^2 = np_i(1-p_i)$, $\sigma_{X_1X_2} = -np_1p_2$ leads to
$$
E[f(X_1,X_2)] \approx \frac{np_1}{np_1+np_2} - \frac{n^2p_1(1-p_1)p_2}{(np_1 + np_2)^3} + \frac{n^2p_1p_2(1-p_2)}{(np_1 + np_2)^3} + \frac{(np_1 - np_2)(-np_1p_2)}{(np_1 + np_2)^3}\\
= \frac{p_1}{p_1+p_2}
$$
So the sample sensitivity is (up to a 2nd order approximation) unbiased for the true sensitivity.
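As a quick numerical sanity check of this expansion (the cell probabilities below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary multinomial cell probabilities: TP, FN, FP, TN
p = np.array([0.18, 0.02, 0.05, 0.75])
true_sensitivity = p[0] / (p[0] + p[1])  # p1/(p1+p2) = 0.9

n, reps = 500, 20000
counts = rng.multinomial(n, p, size=reps)
x1, x2 = counts[:, 0], counts[:, 1]
keep = (x1 + x2) > 0                     # guard against an empty denominator
s = x1[keep] / (x1[keep] + x2[keep])     # sample sensitivities

print(f"true: {true_sensitivity:.4f}, mean estimate: {s.mean():.4f}")
```

The mean of the estimates agrees with $p_1/(p_1+p_2)$ to well within Monte Carlo error, as the second-order expansion predicts.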
Thank you. I also saw a paper by Storey (2003) where in the context of false discovery rate and under some assumptions, it is shown that $E(V/R)=E(V)/E(R)$. Where $V$ are the false discoveries and $R$ are all discoveries. I think this can be easily repurposed for sensitivity. https://www.jstor.org/stable/3448445
merging a multi-dimensional array into another multi-dimensional array
this relates to my previous post.
how to create a collection of multi dimensional arrays and not overwrite origional values when new ones are pushed
I am thinking my problem has to do with how I am creating the array. What I'm trying to do is turn an array that looks like this
Array
(
[10] => Array
(
[0] => 29
[1] => 36
)
)
into something like this
Array
(
[10] => Array
(
[0] => 29
[1] => 36
)
[20] => Array
(
[0] => 29
[1] => 36
)
[25] => Array
(
[0] => 29
[1] => 36
)
)
The 10, 20, and 25 are the product ids, and the numbers within those are the selections that were made on that page (in the link I gave above), so each product would have its own collection of selections.
When I use array_push, instead of doing what I want it to do, the first collection in the array (as in the first example) keeps resetting. So if I make my selections on, say, flyers and add to cart, then go to business cards, make my selections, and add to cart, the array resets and it becomes like the first example. Whatever I try, I can't get it to merge a new collection in like the second example I have. I have tried array_merge() and array_push(), but those don't really work.
If you were to var_dump the array prior to trying to do the insertion, what displays? I have a feeling you're submitting multiple pages but not carrying the array over from page to page (for example, by setting the array to session).
Solution:
If you want to append array elements from the second array to the first array while not overwriting the elements from the first array and not re-indexing, use the + array union operator:
$a = array(10 => array(25,26));
$b = array(22 => array(45,66));
$c = $a + $b;
print_r($c);
Output:
Array
(
[10] => Array
(
[0] => 25
[1] => 26
)
[22] => Array
(
[0] => 45
[1] => 66
)
)
Hope this helps.
What is the value of the following expressions?
My C++ programming professor gave the students some exercises about boolean algebra.
Two of them were:
1) true || true
2) true && false
The possible answers for the two are the following:
a) true
b) false
c) 1
d) -1
What is the correct way to evaluate these exercises, and possibly, other exercises of the same type?
True || True isn't valid code AFAIK, because boolean literals need to be lowercase. The value +1 may be treated as true in certain situations.
What is True? That's not a valid C++ identifier.
What exactly is True here? Is it an object? If you meant the boolean value, it must be true in small case.
My C++ professor asked me this question, just like that. The possible answers were 1, -1, True, or False.
I think he refers to some kind of boolean algebra, but he was not very clear about it.
I just modifed the question. Is it more clear now?
True is not a valid keyword in C++. But true & false are keywords in C++.
Boolean variables are variables that can have only two possible values: true (1) and false (0). So true || true will evaluate to 1.
See live demo here.
Isn't bool a proper type in C++, just like int? true is not the same as 1, but if you do true == 1, the true will be cast to int, which gives 1. So true || true will not evaluate to 1; it will evaluate to the boolean value true. The demo works because C++ streams have their boolalpha flag unset by default, meaning that writing true to a stream will output "1" and not "true".
Actually, your link mentions boolalpha, and doesn't say anywhere that true || true will evaluate to 1, or even that true by itself will evaluate to 1, for that matter. The link does not support your answer.
| common-pile/stackexchange_filtered |
How to compute $P(\sum\limits_{i=1}^n \frac{1}{x_{i}^3}>L)$
Let $L >0$ be a constant and consider the random variables $\left(x_i\right)_{i=1}^n$ that follow a binomial distribution, $x_{i} \sim \operatorname{Bin}(n,\frac{k}{n})$, but are not necessarily independent from each other. The variables $k$ and $n$ are integers.
I would like to know if there is a way to compute the following probability:
$$\mathbb{P}\left(\sum_{i=1}^n x_i^{-3}>L\right)$$
I can think of Azuma's inequality, but it is stated for $\mathbb{P}\left(\sum_{i=1}^n x_{i}>L\right)$.
Thank you.
Surely you noticed that $\frac1{x_i^3}$ is not a well-defined random variable, since $P(x_i=0)\ne0$.
$x_{i}$ represents a number of nodes in my case, which cannot be zero, so I can use that form.
But you wrote that all of the $x_i$ are binomially distributed, which implies that they can be zero?
Can I say that they are binomially distributed, under the assumption that they cannot take the value 0?
@LinaHam not really. Do you mean that you rescale the pmf so all weights add up to 1 and all other values are binomial, i.e. $$\mathbb{P}[x_i=k] = \frac{\binom{n}{k} p^k (1-p)^{n-k}}{\sum_{j=1}^n \binom{n}{j} p^j (1-p)^{n-j}}?$$
That's not exactly binomial distribution...
In fact, $x_{i}$ initially represents the number of nodes that are linked to node $i$. Assuming that the expected number of nodes linked to $i$ is fixed at $k$, it was clear to me that the sum of all the 1's could be represented as a random variable following a binomial distribution $Bin(n,\frac{k}{n})$. And in my case, since the graph is connected, node $i$ is never isolated, so $x_{i}$ will never take the value 0.
@LinaHam Wishful thinking $\ne$ Rigorous mathematical setting.
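One route that sidesteps the dependence issue entirely (assuming each $x_i \ge 1$ almost surely, so the sum is well defined and non-negative) is Markov's inequality, which needs only linearity of expectation:

```latex
% Markov's inequality needs only non-negativity of the sum and
% linearity of expectation, so dependence among the x_i is irrelevant:
\mathbb{P}\!\left(\sum_{i=1}^{n} x_i^{-3} > L\right)
  \le \frac{1}{L}\,\mathbb{E}\!\left[\sum_{i=1}^{n} x_i^{-3}\right]
  = \frac{1}{L}\sum_{i=1}^{n} \mathbb{E}\!\left[x_i^{-3}\right]
```

This gives only an upper bound, and it is informative only when the right-hand side is below 1, but it requires nothing beyond the marginal distributions of the $x_i$.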
How to make an alias active in both current session and in .bashrc at same time?
I've found that I frequently decide to add an alias for long-term use and, at the same time, want to use it right away. So I have to type the same thing twice: first in the current bash session, then in .bashrc.
First, put it in the .bashrc, then source ~/.bashrc.
I have this at the top of my ~/.bash_aliases: alias realias='source ~/.bash_aliases'
Define this function (say, in your .bashrc):
function permAlias {
alias "$@" # set the alias(es) in this session
printf 'alias %q\n' "$@" >> ~/.bash_aliases # set it for all sessions
}
Then use it the same way you would make a normal alias:
% permAlias foo='/path/to/command -some --options=here'
Note: This isn't the most robust solution in the universe. It will probably break under all sorts of different use cases. But it will work for simple things.
Your .bash_aliases will keep growing if you tweak or otherwise shadow existing aliases, but other than that, this is going to work well in typical situations (after a few quoting tweaks).
Works wonders most of the time!
I learned "the realias trick" from a talk by Damian Conway. It is documented in Modern Perl Programming, and it follows the basic steps explained at https://www.cyberciti.biz/faq/create-permanent-bash-alias-linux-unix/, automating the final source step.
Here is how I do it:
In .bashrc, ensure there is a line such as:
source ~/.bash_aliases
or
. ~/.bash_aliases
Ensure ~/.bash_aliases has a line like this:
alias realias='$EDITOR ~/.bash_aliases; source ~/.bash_aliases'
Start a new shell to pick up the changes to .bashrc and .bash_aliases.
Run realias, which will start your editor on ~/.bash_aliases
Add your new alias or shell function. Eg:
alias psme='ps -fu ${USER}'
Save and exit. On exit, the shell sources ~/.bash_aliases
Use your brand new alias.
$ psme
Because it is so convenient to add and use a new alias or shell function, my ~/.bash_aliases file has become a repository of anything slightly complicated that I expect to use again.
Visualisation with PCM wave data
I have an API that gives me PCM wave data:
http://msdn.microsoft.com/en-us/library/ff966424.aspx
The byte[] buffer format used as a parameter for the SoundEffect
constructor, Microphone.GetData method, and
DynamicSoundEffectInstance.SubmitBuffer method is PCM wave data.
Additionally, the PCM format is interleaved and in little-endian.
The audio format has the following constraints:
- The audio channels can be mono (1) or stereo (2).
- The PCM wave file must have 16 bits per sample.
- The sample rate must be between 8,000 Hz and 48,000 Hz.
- The interleaving for stereo data is left channel to right channel.
I would like to do a visualisation based on this data.
I want to split the sound pitch levels into thirds, and get the volume/level of each.
So, if I speak in a low voice, I'll get a high value, then two low values; if I speak normally, I'd get a low value, a high value, and a low value; and if I speak in a high voice, I get two low values and a high value.
How can I achieve this? I've never tried anything dealing with sound, so I'm at level 1 and don't know where to start.
A full answer would probably be too complex to give here, but you need to take the time-domain PCM sample data and derive the frequency-domain representation from it, so that you can then assess the level of the signal in the different frequency ranges. The technique for doing this is known as the Fast Fourier Transform (FFT). Implementing this yourself requires a significant amount of DSP knowledge, so perhaps your best approach would be to find a library that offers an FFT implementation out of the box.
Also, when you implement the FFT and group your frequencies together, make sure you use a logarithmic scale to display their values - if you use a linear scale the differences between values will be huge (I actually thought my code wasn't working properly because the graph was all over the place when I did a similar visualization using a linear scale)
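A minimal sketch of that pipeline in Python with NumPy (not the XNA API from the question; the buffer here is simulated 16-bit mono PCM, and the three band edges are arbitrary choices for the low/mid/high split):

```python
import numpy as np

def band_levels_db(pcm_bytes, sample_rate,
                   bands=((0, 300), (300, 2000), (2000, 8000))):
    """Split little-endian 16-bit mono PCM into frequency bands and
    return each band's level in decibels (the log scale mentioned above)."""
    samples = np.frombuffer(pcm_bytes, dtype="<i2").astype(np.float64) / 32768.0
    spectrum = np.abs(np.fft.rfft(samples))                 # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    levels = []
    for lo, hi in bands:
        band_energy = np.sum(spectrum[(freqs >= lo) & (freqs < hi)] ** 2)
        levels.append(10.0 * np.log10(band_energy + 1e-12))  # +1e-12 avoids log(0)
    return levels

# Simulated buffer: one second of a 440 Hz tone at 8000 Hz,
# so the middle (300-2000 Hz) band should dominate.
t = np.arange(8000) / 8000.0
tone = (0.5 * np.sin(2 * np.pi * 440.0 * t) * 32767).astype("<i2").tobytes()
low, mid, high = band_levels_db(tone, 8000)
```

With a real Microphone.GetData buffer you would run this on short windows (e.g. 1024 samples at a time) and update the visualization per window.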
Adobe Acrobat is interfering with PDF files shown with WebView
I am using the WebView class from WebKit to display a PDF document.
I am creating an instance of NSURLRequest and passing it the URL to the PDF.
The PDF document does not display when Adobe Acrobat is installed on the system. Uninstalling Acrobat fixes the issue.
When Acrobat is installed, a white screen with a loading bar is shown:
When Acrobat is not installed, the PDF is shown:
How can I fix this and get it to work with Acrobat installed?
Access the preferences property of the WebView, and disable plugins.
[webView.preferences setPlugInsEnabled:NO];
This will disable the Acrobat plugin.
this.context.history undefined while using <Link> on child component
How do I populate context so that history is not undefined when using <Link> tags in react-router?
The error I am getting is :
Uncaught TypeError: Cannot read property 'pushState' of undefined
on function Link.handleClick at line:
if (allowTransition) this.context.history.pushState(this.props.state, this.props.to, this.props.query);
I'm using react-router 1.0.0-rc1.
The top-level routes work fine. I get to my main component, which displays a link to "signup". This link should bring me to the child component.
My sample app looks like this:
Routes.jsx
============
var React = require('react');
var ReactRouter = require('react-router');
var createBrowserHistory = require('history/lib/createBrowserHistory');
var Router = ReactRouter.Router;
var Route = ReactRouter.Route;
var Main = require('./main/main');
var Signup = require('./signup/signup');
module.exports = (
  <Router history={createBrowserHistory()}>
    <Route path="/app" component={Main}>
      <Route path="/signup" component={Signup}></Route>
    </Route>
  </Router>
);
Main.jsx
===============
var React = require('react');
var ReactRouter = require('react-router');
var Link = ReactRouter.Link;

module.exports = React.createClass({
  render: function() {
    return <div><Link to="/signup">signup</Link></div>;
  }
});
I had the same error, but I solved it with this.props.history.push('/', null). If you inspect this.props.history, you should see both the push and replace functions.
I was having this same problem. Ensure that your <Router> definition is at your top level and not embedded within another Component.
My version that is working:
var React = require('react');
var ReactDOM = require('react-dom');
var {useBasename, createHistory} = require('history');
var routes = require('./routes/routes');
var {Router} = require('react-router');
const history = useBasename(createHistory)({
  basename: '/'
});

ReactDOM.render(
  <Router history={history}>
    {routes}
  </Router>,
  document.getElementById('app'));
My routes.js file:
var React = require('react');
var { IndexRoute, Router } = require('react-router');
var Route = require('react-router').Route;
var AppContainer = require('../components/AppContainer');
var Index = require('../components/Index');
var Login = require('../components/Login');
var Register = require('../components/Register');
module.exports = (
  <Route path="/" component={AppContainer}>
    <IndexRoute component={Index} />
    <Route path="register" component={Register} />
  </Route>
);
I am doing vanilla Flux and I want to call history.pushState(null, '/users') from my action, so I am passing history as a parameter to the action call from the main ES6 class. Is this the right approach?
DPM 2016 System State Protection Windows Server Backup Event 19
I am using Data Protection Manager 2016 and have configured "System State Backup" on a server. The server has the agent installed, Windows Server Backup feature enabled, and the configured dpm user is in the backup operator group.
The backup keeps failing with meaningless messages in the DPM admin console. "Recommended action" leads to the same result.
I tracked it down in the Event Viewer of the server to be backed up (see also https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc734395(v=ws.10)?redirectedfrom=MSDN):
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Backup" Guid="{1DB28F2E-8F80-4027-8C5A-A11F7F10F62D}" />
    <EventID>19</EventID>
    <Version>3</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x4000000000000000</Keywords>
    <TimeCreated SystemTime="2021-05-19T08:27:32.395701700Z" />
    <EventRecordID>161</EventRecordID>
    <Correlation />
    <Execution ProcessID="2760" ThreadID="856" />
    <Channel>Microsoft-Windows-Backup</Channel>
    <Computer>vm</Computer>
    <Security UserID="S-1-5-11" />
  </System>
  <EventData>
    <Data Name="HRESULT">0x8079005f</Data>
    <Data Name="BackupTime">2021-05-19T08:27:32.394029700Z</Data>
    <Data Name="BackupTarget">J:</Data>
    <Data Name="NumOfVolumes">0</Data>
    <Data Name="VolumeNames" />
    <Data Name="ErrorMessage">%%2155413599</Data>
  </EventData>
</Event>
I do not know where the faulty "BackupTarget" is coming from; there is no J: drive. From descriptions found on the net, DPM is supposed to back up the system state to the C: drive (which has a reasonable amount of free space).
Searching some more, I found a config file, which is conveniently (following an age-old standard) located in the program folder at
C:\Program Files\Microsoft Data Protection Manager\DPM\Datasources\PSDataSourceConfig.xml
This file contained a reference to the J: drive (in one of its XML nodes). Editing it and changing it to C: resolved the issue; the next "synchronization job" succeeded, and the state of the system state backup is green now.
On another system, there just wasn't enough space on the C: drive for the backup to succeed.
On yet another system, I had to uninstall and reinstall Windows Server Backup. Afterwards, it tried to back up to the H: drive, which also needed to be changed to C: (as above).
Can OpenRefine easily do One Hot Encoding?
I have a dataset like a multiple choice quiz result. One of the fields is semi-colon delimited. I would like to break these in to true/false columns.
Input

| Student | Answers |
| --- | --- |
| Alice | B;C |
| Bob | A;B;D |
| Carol | A;D |
Desired Output

| Student | A | B | C | D |
| --- | --- | --- | --- | --- |
| Alice | False | True | True | False |
| Bob | True | True | False | True |
| Carol | True | False | False | True |
I've already tried "Split multi-valued cells" and "Split in to several columns", but these don't give me what I would like.
I'm aware that I could do a custom grel/python/jython along the lines of "if value in string: return true" for each value, but I was hoping there would be a more elegant solution.
Can anyone suggest a starting point?
GREL in OpenRefine has a somewhat limited set of data structures, but you can still build simple algorithms with it.
For your encoding you need two datastructures:
a list (technical array) of all available categories.
a list of the categories in the current cell.
With this you can check for each category, whether it is present in the current cell or not.
Assuming that the set of all available categories is known in advance,
I will use a hard-coded list ["A", "B", "C", "D"].
The list of categories in the current cell we get via value.split(/\s*;\s*/).
Note that I am using an array instead of string matching
and use splitting with a regular expression considering whitespace.
This is mainly defensive programming and hopefully the algorithm will still be understandable.
So let's wrap this all together into a GREL expression and create a new column (or transform the current one):
with(
  value.split(/\s*;\s*/),
  cell_categories,
  forEach(
    ["A", "B", "C", "D"],
    category,
    if(cell_categories.inArray(category), 1, 0)))
.join(";")
You can then split the new column into several columns using ; as separator.
The new column names you have to assign manually (sry ;).
Update: here is a more elaborate version to automatically extract the categories.
The idea is to create a single record for the whole dataset to be able to access all the entries in the column "Answers" and then extract all available categories from it.
Create a new column "Record" with content "Record".
Move the column "Record" to the beginning.
Blank down the column "Record".
Add a new column "Categories" based on the column "Answers" with the following GREL expression:
if(row.index > 0, "",
  row.record.cells["Answers"].value
    .join(";")
    .split(/\s*;\s*/)
    .uniques()
    .sort()
    .join(";"))
Fill down the column "Categories".
Add a new column "Encoding" based on the column "Answers with the following GREL expression:
with(
  value.split(/\s*;\s*/),
  cell_categories,
  forEach(
    cells["Categories"].value.split(";"),
    category,
    if(cell_categories.inArray(category), 1, 0)))
.join(";")
Split the column "Encoding" on the character ;.
Delete the columns "Record" and "Categories".
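For comparison, the same encoding outside OpenRefine takes only a few lines of plain Python (the student/answer data and the semicolon separator here mirror the example tables above):

```python
rows = {"Alice": "B;C", "Bob": "A;B;D", "Carol": "A;D"}

# Collect every category that appears anywhere (like the record trick above)
categories = sorted({c.strip() for answers in rows.values()
                     for c in answers.split(";")})

# One list of booleans per student: True where that category was selected
encoded = {student: [c in [a.strip() for a in answers.split(";")]
                     for c in categories]
           for student, answers in rows.items()}
```

encoded["Alice"] comes out as [False, True, True, False], matching the desired output row.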
Command Prompt freezing after pressing enter, invalid loop
I am currently making a program that allows you to choose 1 of 4 options, and in order to execute this properly, I have to use a loop. The point of this simple program is to allow the user to select 1 of 4 choices:
1 - Set percent to avg by
2 - Enter grade (and take the avg of those grades entered)
3 - Get average (I'm assuming the grade avg and the percent avg entered)
4 - Quit
At the moment, I get no compile errors and am able to run the program. Whenever I type 1 and press enter, for some reason the number enters but then it's just a blank space, and I have to press CTRL-C to unfreeze it. I also do not know how to get the " while ( choice != 1); " line to execute properly.
I need to get the program to loop, allowing the user to do every option as many times as they want until they press 4 to quit, so I am using a sentinel-controlled loop. Here is my code, and I am a beginner so I may not have the whole "Loop" process down yet. Thanks!
import java.util.Scanner;

public class ExerciseThree
{
    public static void main ( String[] argsv )
    {
        float percent = 0;
        double grade = 0;
        double totalAvg = 0;
        double total = 0;
        double gradeAvg = 0;
        int gradeCounter = 0;
        int quit;
        int choice;
        int choiceOne;

        Scanner input = new Scanner (System.in);

        System.out.println( "Please choose one of the following: \n 1 - Set percentage of total for new grades \n 2 - Enter new grades \n 3 - Get average \n 4 - Quit ");
        choice = input.nextInt();

        while ( choice != 4 );

        switch ( choice )
        {
            case 1:
                if( choice == 1 ) {
                    System.out.println( "Enter a percentage to multiply by" );
                    percent = input.nextFloat();
                    break;
                }
            case 2:
                if ( choice == 2 ) {
                    System.out.println( "Enter grades" );
                    grade = input.nextDouble();
                    total = total + grade;
                    gradeCounter = gradeCounter + 1;
                    gradeAvg = (double) total / gradeCounter;
                    break;
                }
            case 3:
                if ( choice == 3 ) {
                    System.out.println( "You have chosen to get the average" );
                    totalAvg = totalAvg + percent * grade;
                    totalAvg = input.nextDouble();
                    break;
                }
            default:
                if ( choice == 4 ){
                    System.out.println( "You have chosen to quit" );
                    quit = input.nextInt();
                    break;
                }
        }
if( choice != 4 || choice == 4 ) - This is always true :/
Oh I see! Because any choice != 4 would also make 1, 2, or 3 count as != 4
Furthermore, when you switch(choice), there is no need for the if after that, since switch(choice) has already made that check.
I don't understand why whenever I enter let's say a 1, it doesn't execute the code for Case 1
I don't even know where to start... there are multiple errors in your code:
while ( choice != 4 );
remove the semi-colon and wrap your switch inside { }
if ( choice == 1 )
there is no need to place an if statement inside a case, it's redundant
if ( choice != 4 || choice == 4 ){
this is always true and, as already said, the if is redundant in a case code block
case 1:
if ( choice == 1 ) {
System.out.println( "Enter a percentage to multiply by" );
percent = input.nextFloat();
}
at the end of each case code block you need to place a break; statement
Thank you! I also have another question, whenever you press 1 and you enter the percentage, how would I go back to the menu with the 4 options? Right now it just loops "Enter a percentage to multiply by"
That happens because the value of your choice variable never changes, so you enter an infinite loop where the choice variable is always equal to 1... I think you should rework your code's logic a little.
I'm thinking I can do a counter-controlled loop so It'll know once they enter a percentage, then it can go back to the menu after entering it once since that's all they will be doing.
After each case you should put a "break" statement.
case 1:
if ( choice == 1 ) {
System.out.println( "Enter a percentage to multiply by" );
percent = input.nextFloat();
So instead put:
case 1:
if ( choice == 1 ) {
System.out.println( "Enter a percentage to multiply by" );
percent = input.nextFloat();
break;
That is the only problem I see. If you have any other questions comment this answer.
Remove that if please, it's redundant.
I changed it accordingly, although I previously had break statements and deleted them to see if it'd fix the problem (Idk XD)
But the problem still exists. Even when I deleted the if statements, when I run the program and enter a number, it shows the number, then shows a blank space, and I have to press Ctrl-C to exit the program.
I fixed it by putting a { before the switch!
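Pulling the answers' fixes together, here is one hypothetical corrected version (the class name GradeMenu, the helper average method, and the interpretation that the percentage scales the grade average are my assumptions, not part of the original exercise):

```java
import java.util.Scanner;

// Hypothetical corrected version: no stray semicolon after the loop condition,
// the menu is re-read on every pass, and the redundant `if (choice == n)`
// checks inside each case are gone.
public class GradeMenu {

    public static double run(Scanner input) {
        double percent = 0;
        double total = 0;
        int gradeCounter = 0;
        int choice;
        do {
            System.out.println("1 - Set percentage  2 - Enter grade  3 - Get average  4 - Quit");
            choice = input.nextInt();
            switch (choice) {
                case 1:
                    System.out.println("Enter a percentage to multiply by");
                    percent = input.nextDouble();
                    break;
                case 2:
                    System.out.println("Enter grade");
                    total += input.nextDouble();
                    gradeCounter++;
                    break;
                case 3:
                    System.out.println("Average: " + average(total, gradeCounter, percent));
                    break;
            }
        } while (choice != 4);  // loop until the user chooses to quit
        return average(total, gradeCounter, percent);
    }

    // Assumption: "percent" scales the plain grade average once it has been set
    static double average(double total, int count, double percent) {
        double avg = (count == 0) ? 0 : total / count;
        return (percent > 0) ? avg * percent : avg;
    }

    public static void main(String[] args) {
        // Demo with canned input so the program runs unattended
        System.out.println("Final average: " + run(new Scanner("2 80 2 90 4")));  // prints Final average: 85.0
    }
}
```

With a real user you would pass new Scanner(System.in) to run instead of the canned string.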
More settings for the facebook php API
Maybe I searched completely in the wrong way, and the facebook documentation pretty much sucks in my opinion.
I was wondering: I'm connecting to Facebook with the settings below and that works (I'm able to retrieve the profile information of the logged-in user who allows my application to access his profile). Are there other options I can set, like the callback URL and the expiration (which I want to set to never, so I can save the token in my db and re-use it)?
This is the config now:
$facebook = new Facebook(array(
'appId' => 'xxxxxxxxxxxx',
'secret' => 'xxxxxxxxxxxx',
'cookie' => true
));
I'm also connecting to Twitter, and there I'm able to do this:
$this->configTwitter = array(
'callbackUrl' => 'xxxxxxxxxxxx',
'siteUrl' => 'http://twitter.com/oauth',
'consumerKey' => 'xxxxxxxxxxxx',
'consumerSecret' => 'xxxxxxxxxxxx'
);
Thanks in advance!
Facebook should take a look at Twitter
After trying for a few days, sifting through the Graph OAuth 2.0 documentation (which doesn't work as well as expected) and searching the internet, I found a solution which I used to create the following script:
// Config
$this->redirectUrl = 'http://www.mywebsite.com/facebook';
$this->clientId = 'APPLICATION_ID';
$this->permissions = 'publish_stream,offline_access';

// Check if a session is present
if (!$_GET['session'])
{
    // Authorize application
    header('Location: http://www.facebook.com/connect/uiserver.php?app_id='.$this->clientId.'&next='.$this->redirectUrl.'&perms='.$this->permissions.'&return_session=1&session_version=3&fbconnect=0&canvas=1&legacy_return=1&method=permissions.request');
}
else
{
    $token = json_decode($_GET['session']);
    echo $token->access_token; // This is the access_token you'll need!
    // Insert the token in the database, or whatever you want
}
This piece of code works great and performs the following actions:
Log in if the user isn't logged in to facebook
Asks if the user wants to add the application and grants the requested permissions (in my case: publish_stream and offline_access) in one dialog
returns to the page you specified under redirectUrl with an access_token (and because we requested for offline access this one doesn't expire)
I then store the token in the database so I won't have to request it again
Now you can, if you wish, use the Facebook code or your own code to get the user's data or other information from Facebook (be sure you requested the right permissions).
Don't know if this is Facebook-valid code, but it works great and I can easily define the 'config' like I wanted...
Long Form writing for CS Educators. Is a Blog possible?
I have just written a long meta post that I'm told doesn't really belong here. I agree, but I think something like it belongs somewhere if we are really to serve educators.
What doesn't work.
Neither a Question/Answer format nor a Chat format works for such writing. If I pose it as a question and then answer it, it seems self-serving. It is also longer than most would be looking for in a question.
Chat doesn't work, even in a separate room, since commentators will interrupt the thread as it is being written and a reader later will need to evaluate all of the inline commentary along with the author's key ideas. The classroom is no good, of course, since we often have multiple simultaneous threads going on.
Separate chats also disappear automatically after a while, though this might be changed, I suppose.
What I propose.
I propose two things. I will discuss them separately. First is a Blog platform similar to our Meta where a user can post something long and (hopefully) unified, that will be maintained. Second is, for each Blog post a separate, dedicated chat room for that post.
A proposal for the Blog
I would think that users over a certain rep, neither too small, nor prohibitive, ought to be able to open a new Post in the Blog. It would be known as a "Post", not a question. There would be, in this idea, no "Answer" post possible, however, the ordinary comments could be left as they can be left for questions and answers elsewhere. Only the OP of a post will have edit capabilities (other than mods... for maintenance). But it wouldn't be like a community post as the intent is to capture the thoughts of an individual.
I'll just pick a number, 500, for the min rep to post. Just a guess. Just a minor suggestion.
Other users, who are allowed to post, will, of course, be able to give their own personal views that may be contrary to a given post. But lacking "answers" inline, it is less likely, I think to result in edit-wars.
I would also suggest that the Blog not affect rep. An interesting idea, however, is that a member would need to "pay" some rep for a ticket to post. Not too much, but enough so you think about it.
A proposal for the associated chat
This proposal suggests that each such blog post have automatically opened a chat room dedicated to that post. Some rep is needed to comment in these chats. I'd suggest enough rep that a user has made some commitment to the site. I'd think actual membership should be required as well as, say 25 in-site rep. This is to prevent, or slow, off-hand comments by people who have given little thought to the teaching process.
The chat room would be free form and an author could have BS called there if needed, but such comments would leave the original post intact for the future. If a chat dies out, then the room might be eliminated in the usual way, but leaving the original in place.
Conclusion
Such a process would allow for long-form writing, perhaps somewhat philosophical. My reason for suggesting it, and for the long piece I already wrote, is that some teachers, especially younger ones, may have seen only a limited subset of what it is possible to do in teaching and might be inspired to try different things, if they only knew about them.
Without something like this, there are things that should be said, that are important to say, and that deserve a hearing, but will have no home here. I think that would be a loss.
Issues
It seems that SE won't want to resuscitate blogs, so if we want to do it, we need to do it ourselves. I think the following are barriers. Please correct me or add additional if you like.
Domain name: Someone needs to own it and pay for it. That someone needs to be around for the foreseeable future.
Hosting: Someone needs to own it and pay for it. That someone needs to be around for the foreseeable future.
Selecting a platform: Probably not a big issue unless the following interfere.
Specific design of the site: ditto.
Integrating it with CSEducators: If membership and rep is a requirement to post then we need to authenticate users here. There is an api: OAuth.
Setting membership rules, etc to be compatible: Negotiable.
Desirable features of a Blog
Editing similar to this site. Markdown, MathJax, etc so that users can use their skills in both without change.
Permalinks. Each Post should have a listed permalink to make external references easy and permanent.
Search. Normal search for text and also search by author.
Archive. Posts should be archived by date. There should be an easy way to bring back a range of posts by date and date-range. Some blogs just have a list of archive "issues" at the side. Other things are possible.
Tags/Categories. Posts should be able to be tagged with multiple tags. These could be taken from the main site, separate for blog-space, or merely textual markers not collected. It should be possible to bring up all posts for a given tag as is done on the main site.
Images. If not too expensive, it would be a nice feature if images could be uploaded and saved. Otherwise they depend on external links, which could expire.
The blog is certainly possible, but the chat room rep would have to be 20.
Would every blog post need its own chat room? I think it's possible that one dedicated chat room could be used for all the posts.
I would think one per post. If it is popular (several per day or week) it would let people focus the way the classroom does not. The unused ones would disappear as usual.
A blog hosted by someone of this community is certainly possible, but I don't know if it's possible to associate it with the Stack Exchange Network (e.g. to authenticate users or check reputation). In fact, Stack Exchange once had community blogs but dropped the concept so we'd have to implement everything on our own (e.g. using the Stack Exchange API).
Several SE sites have associated blogs, run by users on their own site. Most of the sites that started a blog posted a few articles and then interest waned (which is why SE stopped putting effort into blogs and discontinued them when maintenance started to require a non-negligible amount of work). [worldbuilding.se] has an active blog (check their meta). If you're serious about this, you should ask the people who maintain it for advice.
I don't see why participating on the blog should require a reputation threshold. Approving an article for publication should be restricted, but writing an article should be open to anyone.
For issue #6 I don't think there's likely to be a reason we couldn't simply incorporate the TOS from SE, in toto, by reference or by cite.
In keeping with features #2, and SE practice, we should probably have all images on imgur as well. As near a permanent link as is practical on the Web.
Real quick here - Worldbuilding has a blog on Medium (titled Universe Factory) - that might be the best option for a blog. You might want to look at the way Worldbuilding does it.
@Gilles, I didn't envision an approval process, but that would be (to me) an acceptable substitute. The intention of requiring rep was only to restrict publication to those who have made some commitment to the CSEducators site. If completely unrestricted, I think it can be misused.
I am not disagreeing; however, with reference to answering your own question: according to the guidelines, it is OK to write a question and immediately answer it; it is even encouraged. You can also write both the question and the answer and post them simultaneously, as well as use this to document something that you want to share. You also end up with alternative answers.
This is a great idea and I'm very disappointed in me for not thinking of it myself weeks ago. That's a bad Pops! No cookie.
As others have pointed out already, Stack Exchange no longer has integrated blogging, but the Worldbuilding community has a successful blog going on over at Universe Factory. They use Medium, so there's no money involved. The TeX and SFF SE communities also have blogs: TeX Talk and the surprisingly boringly named (for them) SFF community blog.
The main lesson we learned from our time hosting blogs was that monetary cost is, relatively speaking, a non-factor. The real challenge is getting people to commit to contributing, especially over the long haul instead of just once or twice at the start of the project. Very often, people start off thinking "ooh a blog sounds like a great idea for our group", but then everyone loses interest once the novelty wears off, or expects to be able to just read cool stuff that someone else comes up with. (At the same time, you can't just hand out quotas to authors; blogging succeeds only if people intrinsically want to participate, not if it feels like work or some horrible obligation (YouTube link, relevant from about 0:35 to 1:25).)
And that's why I think this site will be one of the exceptions that does well with a blog: you may not be huge like Stack Overflow (yet), but you do have a fantastically dedicated core group of participants with a lot of knowledge to share. You also have a "squishier" topic that invites commenting and kibitzing. Once you get off the ground, some high-quality, and/or controversial-but-not-insane, posts that spread around the wider CS teaching community could be extremely effective for site promotion. Who knows, maybe the blog will end up being phenomenally successful and the Q&A site will end up being the junior partner in the relationship!
I don't have much in the way of concrete advice. Since the blog won't be integrated with our system, you'll be free to invent whatever setup you like for choosing admins and authors and such, but you'll be on your own for implementation. Managing blogs isn't exactly something we CMs have tons of experience with, but let me know if there's anything you'd like help with. At a minimum we can try to get you in touch with the people who run the other community blogs, if you're interested and don't already have those connections.
Thanks for your encouraging words. My idea here is to have a few "interested" people try to invite people to write for the blog when a question or answer indicates they have an interesting viewpoint. Then, if possible, to shepherd them (in the patterns community sense) in making a really good post. Note that there is also an effort underway to develop a blogging platform (resurrect really) in the acm SIGCSE community.
This is to supplement what I proposed in the "question". I'm just collecting ideas here. I note that SIGCSE is also talking about a blogging system and it is not inconceivable that these can be coordinated. Here are some ideas. They may be appropriate for one or the other system or both or neither:
Gatekeeper. A post is submitted to a board of editors who can approve for publication or work with the proposer to improve the post before publication.
Shepherds. A group of editors who work with authors to improve posts. This is done for all patterns submitted to patterns conferences. Shepherding would require either off-site communication or a sandbox for revisions.
Cheerleaders. A group of people who look around for interesting topics and propose articles to the initiators of the ideas. For example, an interesting answer on this site might be expanded into a blog post with more information. A cheerleader can contact the author through the normal channels here (say, in comments) or directly if not anonymous.
Trusted Authors. A group of authors who have been proven to be reliable and can post directly without gatekeeper intervention. I think that one of the ACM blogs has this feature.
More to come.
The short answer would be no. I just read about how Stack Overflow stopped their huge Documentation project. One of the many things they mentioned as a reason for the failure of that project was the costs.
You have already outlined it
Domain name: Someone needs to own it and pay for it. That someone needs to be around for the foreseeable future.
Hosting: Someone needs to own it and pay for it. That someone needs to be around for the foreseeable future.
I don't mind paying for hosting and the domain name up to a point. I work on Azure, so I could set up a proof-of-concept blog in a day or two, test it and have it running for the public in a few weeks. The whole thing can run in the cloud, so the chance of it not running properly is almost zero.
However, this introduces complications when (say a few years later) I no longer wish to admin. The domain name would be owned by me (or whoever buys it) and then I will need to transfer it. Or, it has to be linked to a independent entity. That brings in a whole lot of issues.
Then, there is the hosting cost, if the blog ends up becoming popular. That could be huge depending on the traffic.
How do I loop through and access all sections of returned entries?
On my search results page I want to add a section filter depending on the different sections that are included in all of the returned entries.
I am getting all the sections like this:
{% set entries = craft.entries.section(['inspirationArticles', 'pages', 'pressArticles', 'products']).search(searchQuery).order('score') %}
{% set entrySections = entries.getAllSections() %}
And then loop through them:
{% for entrySection in entrySections %}
...
{% endfor %}
But when trying to access the sections' names or handles via {{ entrySection.name }} or {{ entrySection.handle }}, I get a CException error:
Craft\EntryModel and its behaviors do not have a method or closure named "handle".
So it looks like it isn't actually accessing the SectionModel. Am I missing a step?
Entries do not have a getAllSections method. You can use the group filter to group your entries by their sections.
{% set entries = craft.entries.section(['inspirationArticles', 'pages', 'pressArticles', 'products']).search(searchQuery).order('score') %}
{% set grouped = entries|group('section') %}
{% for section, entriesInGroup in grouped %}
<h2>Section: {{ section }}</h2>
<ul>
{% for entry in entriesInGroup %}
<li>{{entry}}</li>
{% endfor %}
</ul>
{% endfor %}
Here's some documentation to help!
Thanks Aaron, that works like a charm! Though I do kinda wish there were a way with one less for loop to get to the sections' details, but that's just me being picky ;)
Trying to screen-grab an entire Spritekit scene
I'm trying to capture an entire Spritekit scene for the purpose of later magnify particular areas. To capture the scene I tried doing:
if let texture = view.texture(from: node) {
let sprite = SKSpriteNode(texture:texture)
// ... do whatever with sprite
}
This works fine for anything besides the scene itself. For example, if I try this code inside the didMove() method, passing self as "node", I get an empty/black texture as a result.
Do you call it before the node is added as child to the scene? This would return a clear color background.
I'm calling it on the scene itself, after I've added a background and some sprites.
Maybe didMove is a little early. Did you try to place a button to take the screen?
I've tried it in different methods. Same result. I've also tried a different approach, as suggested here (last answer): https://stackoverflow.com/questions/19571357/ios-sprite-kit-screengrab. This works but it's very slow and generates an extreme drop in fps. To be honest, I'm thinking of changing a mechanic so as to avoid the need for screen-capturing altogether. Thanks anyway :)
Is the intersection of $\sin(\mathbb{N})$ and $\cos(\mathbb{N})$ empty?
My guess is that the intersection is empty and this is as far as I got in an attempt to prove this by contradiction:
$\exists n,m \in \mathbb{N}, \cos(n)=\sin(m) \land n \neq m \quad (1)$
$\cos^2(n)=1-\cos^2(m) \iff \cos^2(n)+\cos^2(m)=1 \quad (2)$
I'm almost certain that the last equation can't be satisfied but I'm not sure how to proceed.
$$\cos x=\sin y\iff \cos x=\cos\left(\frac\pi2-y\right)\iff \frac\pi2-y=\pm x+2k\pi,\ k\in\Bbb Z$$
In particular, if $\cos x =\sin y$, then either $\dfrac{x+y}\pi$ or $\dfrac{y-x}\pi$ is in the form $\dfrac{1+4k}{2}$ for some $k\in\Bbb Z$.
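To finish the argument: with x and y natural numbers, (x+y)/π or (y−x)/π being equal to (1+4k)/2 would make π rational (since 1+4k is a nonzero integer), which is impossible. So the intersection is indeed empty. A finite-range numeric sanity check of this (my addition, not a proof):

```python
import math

# Finite-range sanity check (not a proof) that cos(n) never equals
# sin(m) for small natural numbers n, m.
closest = min(
    abs(math.cos(n) - math.sin(m))
    for n in range(1, 100)
    for m in range(1, 100)
)
print(closest)  # smallest gap found; strictly positive
```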
Multiple instances of same Entity
As far as I know, JPA manages all the entities in the persistence context when entities get created. Will there be multiple instances of the same entity? Suppose two methods are called at the same time and each creates the same entity - will there be two instances of that entity in the persistence context, and will changes made to each be kept in its respective instance?
please define "the same entity"
No, actually my question is: can the persistence context have multiple instances of the same entity? If yes, then how is it uniquely identified? I mean in relation to a transaction.
If you add multiple entities with the same id to the context, the changes made to any entity will reflect to all of them.
However, hibernate will optimize the process in such a manner that as few as possible database calls are made.
If there are multiple entities in the context that share the same fields but not the id, changes made to an instance of an entity will not reflect to other entities.
Thank you so much!! Also, I wanted to know: what is the difference between a persistence context and a persistence unit?
The persistence context manages persistence units.
I suggest you watch Laur Spilca's hibernate fundamentals playlist
Can't to place categories module on product page
I need to place "Jshopping Categories" module at bottom of the product page.
On the main page this module is placed at the position 'position-22'.
I tried using this code:
$modules =JModuleHelper::getModules('position-22');
foreach ($modules as $module){
echo JModuleHelper::renderModule($module);
}
but it has no effect.
And, if I use this code :
$document = JFactory::getDocument();
$renderer = $document->loadRenderer('module');
$options = array('style' => 'raw');
$dbo = JFactory::getDBO();
$dbo->setQuery("SELECT * FROM #__modules WHERE id='91' ");
$module = $dbo->loadObject();
$module->params = "heading=2\nlimit=10";
echo $renderer->render($module, $options);
I get only the categories title but no images.
I'm using JoomShopping component in Joomla 3.4.
Also I'm thinking of using the "Load module" plugin. But how can I place my custom HTML with "{loadposition position-22}" in my page template?
If it is in position-22 why are you getting position-10? You will probably be better off at [joomla.se].
Excuse me, I am using 'position-22'. I edited my question.
It transpired that the module was enabled only for selected pages. If it is enabled for all pages, the code from the first example works. But in that case the module is displayed on every page (and on the page with the code above, it is displayed twice).
How and where you are going to use {loadposition} method?
jQuery is() or has() on multiple elements
I'm trying to detect if array of elements $a contains the element $c where:
var $a=$('#b1,#b2,#b3,#b4');
var $c=$('#b3');
but neither $c.is($a) nor $a.has($c) works.
Looking for a jQuery answer.
Did you try .filter ? $a.filter($c).length > 0 (edit: changed to .filter rather than .find as it looks like #b3 is not a child item (implied by the use of "contains" in the question)).
Possible duplicate: https://stackoverflow.com/a/1473737/2181514
Possible duplicate of How do I check if an array includes an object in JavaScript?
might not be a full solution you're after but you can do $.inArray(needle, haystack)
@freedomn-m publish it as answer since is the correct one :)
You could loop through your first selector and check for all its elements individually.
Here I use jQuery .filter() method to search for the element, combined with .length to check if the element was found.
let $a=$('#b1,#b2,#b3,#b4');
let $c=$('#b3');
let $d=$('#b5');
let result = !!$a.filter((_,e) => $(e).is($c)).length;
console.log('$c in $a ? '+result);
let result2 = !!$a.filter((_,e) => $(e).is($d)).length;
console.log('$d in $a ? '+result2);
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="b1"></div><div id="b2"></div><div id="b3"></div><div id="b4"></div>
You can use
$a.findIndex(x => x === $c) > -1
or you can use find in array
a simple example is this
You can use selector property of jQuery object to check that:
var $a=$('#b1,#b2,#b3,#b4');
var $c=$('#b3');
if($a.selector.indexOf($c.selector) !== -1){
console.log('match');
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
Use inArray()
$.inArray($c,$a)
You can use this $a.findIndex(x => x === $c) > -1
Try
$a.findIndex(x => x === $c) > -1
to get a boolean.
If you want the elements that match use map
$a.map(x => { return x === $c })
You can use index for doing that:
var $a=$('#b1,#b2,#b3,#b4');
var $c=$('#b3');
if($a.index($c) > -1){
console.log('match');
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="b1"></div><div id="b2"></div><div id="b3"></div><div id="b4"></div>
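As an aside, the identity comparison that $a.index($c) relies on can be reproduced in plain JavaScript. A sketch using plain objects in place of DOM elements (the names here are illustrative only):

```javascript
// Plain-JS analogue of jQuery's $a.index($c) > -1: membership by identity.
const b3 = { id: 'b3' };
const a = [{ id: 'b1' }, { id: 'b2' }, b3, { id: 'b4' }];

const contains = a.indexOf(b3) > -1;               // same object reference
const containsCopy = a.indexOf({ id: 'b3' }) > -1; // a lookalike object is NOT found

console.log(contains, containsCopy); // → true false
```

This is why comparing jQuery collections element-by-element (rather than by selector string) is the reliable approach: jQuery stores raw DOM references, and identity is what matters.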
CSS float div not working as supposed
I have a container div called main and then two divs floated left. The problem is that I need the main div's background color to be visible (I assumed the blue background should be visible on the right side - the 300px that remain - and at the 4th row of the middle div, since it ends up shorter than the left div). I also need both the left and middle divs to automatically increase their heights when words wrap, and as you can see it does not work in the grey (middle) div.
See the http://jsfiddle.net/djqfo3we/2/
.main {
width: 500px;
background-color: blue;
}
.left {
width: 100px;
float: left;
background-color: red;
}
.middle {
width: 100px;
float: left;
background-color: gray;
}
<div class="main">
<div class="left"> dsfslfs sfsf slfjks flsdf slf s fs sdf ssdfegrerterte</div>
<div class="middle">wfwefwef jjjjjjjjjjjjjjjjjj ddddddddddddddddddddddddd</div>
</div>
You have to clear the floats otherwise the margins of the parent collapse and it appears that the parent has no height.
There are various techniques for clearing floats and you can find out more with a simple search
As for the text wrapping, as you have discovered long text strings won't break by themselves.
You can force a word break using word-wrap:break-word and leave your original text unchanged.
.main {
width: 500px;
background-color: blue;
overflow: hidden; /* quick clearfix */
}
.left {
width: 100px;
float: left;
background-color: red;
}
.middle {
width: 100px;
float: left;
background-color: gray;
word-wrap:break-word;
}
<div class="main">
<div class="left"> dsfslfs sfsf slfjks flsdf slf s fs sdf ssdfegrerterte</div>
<div class="middle">wfwefwef jjjjjjjjjjjjjjjjjj ddddddddddddddddddddddddd</div>
</div>
Add a div inside the main div but at the bottom called clear:
<div class="main">
<div class="left"> dsfslfs sfsf slfjks flsdf slf s fs sdf ssdfegrerterte</div>
<div class="middle">wfwefwef jjjjjjjjjjjjjjjjjj ddddddddddddddddddddddddd</div>
<div class="clear"></div>
</div>
Then give the class clear a style:
.clear {
clear: both;
}
and you get this: http://jsfiddle.net/djqfo3we/4/
EDIT:
As others have pointed out, in order to apply a wrap so that they stay within the set width dimensions, add the style word-wrap: break-word; to the content you want to have wrapped.
I've applied the word-wrap to both the middle and left div within the main div.
updated jsfiddle: http://jsfiddle.net/djqfo3we/10/
.main {
width: 500px;
background-color: blue;
}
.left {
width: 100px;
float: left;
background-color: red;
}
.middle {
width: 100px;
float: left;
background-color: gray;
}
.clear {
clear: both;
}
.middle, .left {
word-wrap:break-word;
}
<div class="main">
<div class="left"> dsfslfs sfsf slfjks flsdf slf s fs sdf ssdfegrerterte</div>
<div class="middle">wfwefwef jjjjjjjjjjjjjjjjjj ddddddddddddddddddddddddd</div>
<div class="clear"></div>
</div>
wow, thank you very much, that is exactly how I wanted it to work
well, I haven't noticed that one issue remains and that is the overflow of the text in the grey div (middle). How to force it to increase the height instead of overflowing to the right?
Edited the answer and provided a new jsfiddle link.
Author+title of story of planet inhabited by intelligence which makes mouse and man evolve on ship
In my youth, I’ve read countless science fiction and fantasy books. One of these, when I finished it, I felt like a part of my life ended with the end of the book, and surely there must be a sequel… except there was none and I felt emptiness… It was that good! Or it was simply what I wanted to read at that age :-)
Anyway, now I’d like to find this book again. I remember very little about it now, and I’m not even sure of what I do remember :-/ Here we go:
A (scientific?) crew in a spaceship lands on an unknown planet somewhere.
The planet is eerily empty: not desert, but seemingly lacking in intelligent life… or maybe there are only plants and animals, or just plants…
But there is one intelligence there, a powerful one, and if I remember correctly, it is egg-like in shape.
One crew member finds the egg, and takes it aboard without realizing (or maybe he is influenced by the egg, I don’t remember) and puts it not far from a laboratory rat/mouse (I guess they are scientists).
Somehow, someone realizes that the planet is not as uninhabited as it appears to be, and the ship leaves the planet as fast as possible… I think…
And then we discover that proximity to the egg makes the body evolve at an accelerated rate. While piloting the ship, the pilot loses his nails, and his teeth, and his hair, and gets clumsier (sort of like a human from Wall-E), but much more intelligent.
But… Ta daa!! The mouse was close to the egg much sooner than the pilot, and proves to have superior intelligence at the end.
That’s all I remember, except for one small “fact”: I’m almost sure that the author is Edmond Hamilton, which is not a ridiculous possibility, given that “The Man Who Evolved”, which I discovered today (thank you Wikipedia), seems to describe a very similar accelerated evolution…
Any idea?
Sounds a bit like this old question:
@user14111: well… in France, we use different words so I’m not sure, but it must have been about 1cm thick in pocket-book format, maybe 200 pages…
@DraganMilosevic: This P.K.Dick story is indeed very close to the one I remember, but a lot shorter. And I’m positive Dick is not the author I’m looking for (although I do like his stories).
Go Modules does not recognize files under GOPATH
I was trying to set up Go modules in IntelliJ and was trying to import a package under GOPATH. When I use Go modules, it doesn't seem to 'import' the packages from GOPATH. Any ideas on what I could be doing wrong?
Below is a screenshot. Left pic: Go modules, which doesn't recognize the package. Right pic: simple Go project, which recognizes the packages.
I tried doing sync package, with no luck.
Go version - 1.12.3
The two supported modes ("GOPATH mode" and "module-aware mode") are mutually exclusive modes. This means you can't have both, you can't mix modules and GOPATH.
Quoting from Command go: GOPATH and Modules:
When using modules, GOPATH is no longer used for resolving imports. However, it is still used to store downloaded source code (in GOPATH/pkg/mod) and compiled commands (in GOPATH/bin).
And also Command go: Preliminary module support:
For more fine-grained control, the module support in Go 1.11 respects a temporary environment variable, GO111MODULE, which can be set to one of three string values: off, on, or auto (the default). If GO111MODULE=off, then the go command never uses the new module support. Instead it looks in vendor directories and GOPATH to find dependencies; we now refer to this as "GOPATH mode." If GO111MODULE=on, then the go command requires the use of modules, never consulting GOPATH. We refer to this as the command being module-aware or running in "module-aware mode". If GO111MODULE=auto or is unset, then the go command enables or disables module support based on the current directory. Module support is enabled only when the current directory is outside GOPATH/src and itself contains a go.mod file or is below a directory containing a go.mod file.
In module-aware mode, GOPATH no longer defines the meaning of imports during a build, but it still stores downloaded dependencies (in GOPATH/pkg/mod) and installed commands (in GOPATH/bin, unless GOBIN is set).
If you wish to use packages located on your disk, see How to use a module that is outside of "GOPATH" in another module?
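For that last case, the usual mechanism is a replace directive in go.mod pointing at the module's on-disk path; the module paths below are hypothetical:

```
module example.com/myapp

go 1.12

require example.com/mylib v0.0.0

replace example.com/mylib => ../mylib
```

With this in place, imports of example.com/mylib resolve against the local directory instead of a downloaded copy, without GOPATH being involved.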
Thank you, that makes sense. I am new to go-modules and I was trying to figure out how it worked.
I faced this problem, and I use this setting for each project, and it solved my problem.
But I'm still looking for a global GO module configuration.
Downgrade from SQL 2005 Enterprise to 2005 Standard on a clustered server
Can anybody help with downgrading Enterprise to Standard on a cluster? We have no place to test, so we want to make sure we get everything right.
I don't believe you can downgrade to a lesser edition. You have to do a new installation of SQL Server 2005 Standard. If you are OK with named instances, this shouldn't be an issue. You can install a new named instance on the same server alongside Enterprise, assuming that Enterprise is the default instance. Then there are a number of methods you can use to get your databases, jobs, SSIS packages, etc. over to the Standard instance, assuming you're not using any Enterprise-specific features that are not supported by Standard.
SQL Enterprise edition is different by design from SQL Standard edition, but here, in 10 steps, is a straightforward way to avoid expensive work and lengthy error fixing in case one comes up:
Request downtime of 3-5 hours for this exercise, depending on how fast your server is, then go ahead and perform the following tasks.
1. Take users off your application & ensure that all transactions are complete.
2. If you have many jobs that run frequently, simply stop SQL server agent so that nothing starts up automatically while you're trying to make major changes.
3. Do a full database backup of your production databases, verify backups after. This is a good practice. Checking database integrity (data & index linkage) before backing up will show you the state of your database, hence reducing the risk of losing your data.
4. If you want, script out your jobs and logins (use a procedure sp_help_revlogin from Microsoft for logins).
5. Note the path of your production database files first, then detach your production databases.
6. User database files are not deleted by uninstalling. Therefore uninstall the SQL Enterprise instance, restart the operating system.
7. Install SQL Standard. I recommend you to apply the latest service pack at this point. A restart might be required.
8. Note that user databases from an Enterprise SQL instance can be managed by one running SQL Standard. So, go ahead and attach the production databases.
9. Voila! Now you have an SQL Standard instance running your production databases.
10. If you scripted your jobs and logins, run your scripts to re-create them on the newly installed instance otherwise manually create logins and write your agent jobs.
-- As answered on SSC. This should be the same for a cluster too, since you are not restoring system databases (you are going to script those settings).
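Steps 5 and 8 (detach before the uninstall, attach after the reinstall) look roughly like this in T-SQL - the database name and file paths are placeholders:

```sql
-- Before the uninstall (step 5):
EXEC sp_detach_db @dbname = 'MyProdDb';

-- After installing Standard (step 8):
CREATE DATABASE MyProdDb
    ON (FILENAME = 'D:\SQLData\MyProdDb.mdf'),
       (FILENAME = 'D:\SQLData\MyProdDb_log.ldf')
    FOR ATTACH;
```

Note the paths from step 5 are what you feed into the FILENAME clauses here.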
Would it be possible to use mirroring to switch from Enterprise to Standard? I assume not, but I thought I would ask.
JQGrid with Column Reordering
I have a jqgrid and I can reorder my columns with this option in my JQGrid
jQuery("#list").jqGrid({
sortable: true,
...
});
This functionality let me reorder ALL my columns. But I want that some columns must be on fixed places. Is there a way to solve this?
Thanks in advance!
Bruno
Never say never. jqGrid uses jQueryUI's Sortable class to perform the column drag-n-drop function. http://jqueryui.com/demos/sortable/
To remove a column from the list of sortable columns, run these two commands after you've rendered your grid (with sortable: true).
// disable the sortable property on your target element
// (which was applied when jqGrid rendered the grid)
$('tr.ui-jqgrid-labels').sortable({ cancel: 'th:#element_id'});
// update the list of sortable item's, and exclude your target element
$('tr.ui-jqgrid-labels').sortable({ items: "th:not(#element_id)" });
Note: This works best if your unsortable columns are on the left or right edge of the grid. Otherwise, you will still be able to re-sort other columns around them.
Also: Make sure you understand the difference between the two sortable options (grid level and colmodel level). On the grid options, "sortable: true" means the columns can be reorders by drag-and-drop. On the colmodel options, "sortable: true" means that you can reorder the rows by clicking on the column's header. Setting sortable to true on the grid options will not cascade down to the colmodel options. However, on the colmodel sortable is true by default.
Thanks for your reply, if i have to update this project i'll check it out! :)
You should at least accept Walters' comment in the mean time. It's better then "It's impossible".
Can multiple th ids can be applied to the code above for fixing the order for multiple columns ?
@Sandy505 : probably, you'll have to test for yourself. Refer to http://jqueryui.com/demos/sortable/#option-cancel . The parameter you pass into cancel/items is just selector, so you just need to write a selector that will select all the th's that you want to modify.
My guess is the syntax would be something like
$('tr.ui-jqgrid-labels').sortable({ cancel: 'th:#element_id, th:#element_id2,th:#element_id3'});
@TomHalladay I think you meant to say "accept Walter's comment" since there is only one of me.
Now in 2013 you can define the "exclude" parameter of "sortable"
, like this :
sortable: {
exclude: '#'+ grid[0].id +'_actions_buttons'
},
You can set sortable for each column in the colModel
colModel: [{ name: 'name', index: 'name', sortable: true },...
Checkout the documentation it's pretty helpful.
if you enable that option, you can sort the table on that column.
This will not stop my column from being reordered out of its position.
It would be nice if there was an option that i can put in my colmodel to disable the reordering for just one column...
But thx.
doesn't look like this is possible with the jqgrid settings, you could try catch the th click event an stop the ordering for certain columns
Intermittent start 1999 Toyota Tacoma 4cyl
A few weeks ago I had an intermittent starting issue where the motor would start to turn but wouldn't fully turn over; it would just click. I took it to the mechanic, they said it needed a new battery / terminal cleaning, and they checked the starter motor and said that was fine. Now I've had a couple of starts where the engine turns but doesn't start, and there is no click from the relay; so far turning the key again has started the car (2-3 times out of 50 or so starts).
I checked the battery while starting, 12.6 nominal, 10.6 under max load, 13-14 when the alternator kicks in, which seems normal. The car doesn't make any attempt to turn when out of park, so I think that means the neutral safety switch is fine. Is there anything this description seems to point to?
Have you cleaned the plugs?
The most common cause of this symptom is intermittent failure of the electrical contacts in the starter solenoid.
Removal and disassembly of the starter is required to confirm this
Had the same issue with my 95 Pathfinder. New starter solved the problem.
OpenGL - Should I store Attribute/Uniform locations?
Are glGetUniformLocation and glGetAttribLocation time consuming?
Which way is better?
Call glGetAttribLocation or glGetUniformLocation every time I need it?
Store locations in variables and use them when needed?
Whether on Android or iPhone, for an opengl surface you will have methods like:
onSurfaceCreated and onSurfaceChanged,
get into the habit of fetching uniforms and attributes here in these 2 methods.
The only way you can make rendering faster (which will soon become your priority once your code crosses 1000 lines) is by having only glUseProgram, glBindBuffer, texture binds and other binds inside the onDrawFrame method; always cache locations inside onSurfaceCreated and onSurfaceChanged.
Good answer. Attributes, uniforms, and other variables used inside shaders should be cached inside init() methods, and not the draw_Frame() methods.
Which way is better?
Think about it. No matter how fast glGetUniformLocation and glGetAttribLocation are, they cannot be faster than just fetching a variable. So if you are concerned about performance, then use the method that is always faster.
According to my tests, fetching such locations took about 100~200 times the amount of time that it takes for a single glDrawElements call. This is only an estimate based on System.nanoTime(), but I think it's save to say that it's worth storing them in variables on initialization.
This is a much better answer. (I hope the data is correct of course, but I could imagine it is, because getting attributes is synchronous and drawing is not).
Nim static compilation
I'm totally new to Nim.
Does Nim automatically try to do everything it can during compilation? The documentation talks a lot about figuring static stuff out at compile-time, and it seems this is a guiding philosophy of the language.
I have a program in python which computes some stuff by going through a bunch of nested loops, generating an array each loop, and then computing a heuristic based on the values in the array. There is no I/O during the run except at the end, where it writes the heuristic to a file or console.
The output is 100% deterministic and in principle knowable before the program runs, but Python computes the output at runtime of course, and this can take a long time, even after enabling multi-processing (since the problem is CPU and memory-bound).
If I were to rewrite the Python program in Nim, is the compiler smart enough to figure out the output at compile time? The runtime would thus just write the final output to a file.
I can simplify the Python program in Nim as:
var x: int
for i in 1..10:
x += i
echo x
Returns:
55
Here the almost unreadable C output has a nimAddInt(...) call inside the NimMainModule, so I'm guessing in this particular case the action happens at runtime. How can I enforce this to go at compile time? Macros?
Nim won't automatically do things on compile time. You have to explicitly tell it to. Fortunately this is as easy as using the result in a compile-time context. For example by assigning it to a const or explicitly putting everything in a static block.
In addition to what pmunch said, you can mark procs and/or variables with the {.compileTime.} pragma to force their evaluation at compile time.
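Applied to the loop from the question, the const route looks like this (a minimal sketch):

```nim
proc sumTo(n: int): int =
  for i in 1 .. n:
    result += i

const x = sumTo(10)  # the whole computation happens at compile time
echo x               # the compiled binary only prints the precomputed 55
```

The generated C then contains the literal result rather than a loop; the runtime work is reduced to the echo.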
TCP Choosing optimal window size?
I know I need to get the RTT but I really don't get how to calculate it.
For some contextualization:
I have an ns2 script which simulates a server and a receiver with 2 routers in between. All three links - server to first router, first router to second router, second router to receiver - have different transmission speeds.
The propagation delay is 10ms for the first two links and 3ms for the third one.
I'm sending a 3MB file in 1000 bytes packages (TCP's default packet size), 3146 packages total.
I don't want you to calculate it (the RTT) for me of course, I just want to know how to do it. :\
Are you talking about discovering this by experiment or by putting code inside the simulation system to calculate what value it should use?
@DonalFellows hm, I don't think it's supposed to be by experiment nor putting code into the script. Just raw math/data analysis.
In that case, you're not dealing with a real Tcl problem, but rather a networking-simulation problem plus a bit to do with how to tell the simulator to do it. None of which I can help with.
You can use Agent/Ping and collect RTT. Here is an example TCL snippet
There is only one problem: if there is too much traffic, the ping packets will be dropped (you can see this if you collect trace-all).
While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - From Review
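For the raw-math route the asker mentioned: each link contributes a transmission delay (packet size divided by link rate) plus its propagation delay, and the RTT is roughly twice the one-way sum (ignoring queueing and the smaller ACK packets). The optimal window is then the bandwidth-delay product of the bottleneck link. The link rates below are made up, since the question doesn't state them; only the propagation delays and packet size come from the question:

```python
PACKET_BITS = 1000 * 8  # 1000-byte TCP packets, as in the question

# (link_rate_bps, propagation_delay_s) per hop; the rates are hypothetical.
links = [(2e6, 0.010), (1e6, 0.010), (5e5, 0.003)]

one_way = sum(PACKET_BITS / rate + prop for rate, prop in links)
rtt = 2 * one_way

bottleneck = min(rate for rate, _ in links)
window_bytes = bottleneck * rtt / 8  # bandwidth-delay product

print(f"RTT ~= {rtt * 1000:.1f} ms, optimal window ~= {window_bytes:.0f} bytes")
```

With the rates above this gives an RTT around 102 ms; plugging in the rates from the actual ns2 script is all that's left.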
Ruby: CSV to YAML
I have the following csv example:
en.activerecord.models.admin_user.one;Guide
en.activerecord.models.admin_user.other;Guides
en.simple_captcha.placeholder;Type here
Is there a ruby gem or method to turn it into a Yaml file:
en:
activerecord:
models:
admin_user:
one: Guide
other: Guides
simple_captcha:
placeholder: Type here
I'm still trying (using tree data model) but no results.
Any idea?
Convert your CSV to a hash, then write the hash to YAML. There are lots of questions about converting dot-delimited keys to nested hashes.
require 'yaml'
hash = {}
file = "en.activerecord.models.admin_user.one;Guide
en.activerecord.models.admin_user.other;Guides
en.simple_captcha.placeholder;Type here"
file.split("\n").each { |line| hash.deep_merge!(line.split(/\.|;/).reverse.inject() { |m,v| {v => m} }) }
puts YAML.dump(hash)
---
en:
activerecord:
models:
admin_user:
one: Guide
other: Guides
simple_captcha:
placeholder: Type here
Thank you. hash should be {} in the beginning?
This is useful for converting files to localization YAMLs in Rails.
Also please note that the deep_merge! method is a Rails method, so it can't be used directly in a plain Ruby script.
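Since deep_merge! is Rails-only, here is a plain-Ruby sketch of the same idea with a hand-rolled recursive merge:

```ruby
require 'yaml'

# Recursively merge nested hashes (a minimal stand-in for Rails' deep_merge!).
def deep_merge(a, b)
  a.merge(b) { |_key, x, y| x.is_a?(Hash) && y.is_a?(Hash) ? deep_merge(x, y) : y }
end

csv = "en.activerecord.models.admin_user.one;Guide\n" \
      "en.activerecord.models.admin_user.other;Guides\n" \
      "en.simple_captcha.placeholder;Type here"

hash = {}
csv.each_line do |line|
  keys, value = line.chomp.split(';')
  # Build one nested branch from the dotted key, then merge it in.
  hash = deep_merge(hash, keys.split('.').reverse.inject(value) { |m, k| { k => m } })
end

puts YAML.dump(hash)
```

This produces the same nested YAML as the Rails version, without any external dependency.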
Copying & renaming a VS solution folder and projects
What is the best way to copy my VS solution, including its projects? I'd like to do this so I can experiment, maybe even destroy, my solution in the name of learning.
It looks like I can just copy the solution folder and rename it to the desired new name. Correct? As far as the .sln file goes, can I just do a find and replace to replace the old solution name with the new solution name? What about the GUIDs? Do those need to be updated as well? When are those GUIDs used?
Is there anything I am missing?
That should be it. As long as the projects are in subfolders of the solution folder, they should use relative paths so their relative relationships will remain. The GUIDs in the solution have to match the ProjectGuid elements in the project files, so you could change it in both places if you want but it's not necessary.
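A rough command-line sketch of the copy-and-rename (the solution name `MySolution` is a placeholder, and `sed -i` is the GNU form; on Windows you would do the same rename and find/replace in Explorer and a text editor). A throwaway dummy solution is created first so the steps are reproducible end to end:

```shell
# Make a dummy solution folder with a minimal .sln line (GUIDs are placeholders).
mkdir -p MySolution
printf 'Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MySolution", "MySolution.csproj", "{11111111-1111-1111-1111-111111111111}"\n' > MySolution/MySolution.sln

# 1. Copy the whole solution folder.
cp -r MySolution MySolutionCopy
# 2. Rename the .sln file.
mv MySolutionCopy/MySolution.sln MySolutionCopy/MySolutionCopy.sln
# 3. Optional find/replace of the old name inside the .sln.
sed -i 's/MySolution/MySolutionCopy/g' MySolutionCopy/MySolutionCopy.sln
```
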
Swift & Vapor/Fluent : How to pull values from another table during a post router request
I'm writing a backend app in Xcode 11.2 using Vapor3 and Fluent with SQLite. I have a route for a POST request to update a record in Table1, but before I save the record, I need to pull values from another table with a key that's in one of the fields I'm passing in. The part that I'm getting stuck on is accessing that data from the other table.
What I have so far:
Table for staff
final class Staff: Content, SQLiteUUIDModel, Migration, Parameter {
var id: UUID?
var staffID: String // value that's in the other table to match
var value1: Int // one of the values I'm trying to fetch
var value2: Int // another value I'm trying to fetch
// ... additional variables and init
Table for assignments
final class Assignments: Content, SQLiteUUIDModel, Migration, Parameter {
var id: UUID?
var staffID: String
// ... additional variables and init
Route for incoming POST request to update existing Assignment:
router.post("update", Assignments.parameter) { req -> Future<Assignments> in
return try req.parameters.next(Assignments.self).flatMap { assignment in
return try req.content.decode(Assignments.self).flatMap { updatedAssignment in
// calculations are performed on the incoming 'assignment' and saved to 'updatedAssignment' before calling:
return WorkOrder.save(on: req)
The route I have works, incoming data is modified and written to the SQLite database, however there are a couple of fields that I need to perform calculations on or set to values stored in the Staff table. IE:
editedAssignment.variable = (variable) * (value1 from staff table for matching staffID)
editedAssignment.variable = (value2 from staff table for matching staffID)
What I've tried so far
let staffData = Staff.query(on: req).filter(\.staffID == staffID).all() before the calculations
(adding as a Future as:)
router.post(...) { req -> Future<Staff> in return try Staff.query(on: req).filter ...(rest of query)
-- this method throws off the whole request, the values of the incoming request are just lost.
Ideally, if I could call a query and save as a dictionary in the calculations, that would be ideal, I just can't seem to "figure it out".
You need to put it inside your route, so that it is part of the chain, rather than disjointed as you had it:
router.post("update", Assignments.parameter) {
req -> Future<Assignments> in
return try req.parameters.next(Assignments.self).flatMap {
assignment in
return try req.content.decode(Assignments.self).flatMap {
updatedAssignment in
return Staff.query(on: req).filter(\.staffID == updatedAssignment.staffID).first().flatMap {
staffData in
/* Code to update 'updatedAssignment' */
return WorkOrder.save(on: req)
}
}
}
}
You will most likely get compiler errors and need to put explicit return types in, but I can't predict/test these. I've left it in, but I don't think it is a good idea that you are explicitly stating a Future<Assignments> is being returned when you end up with the return from WorkOrder.save....
It appears to work, I'm able to build without errors. I'll have to complete the section and run data through to test. As for return WorkOrder.save, that was a typo on my part. It should have been Assignment.save.
Glad it works. Accepting the answer pushes it up the search rankings and makes it more useful to other users.
Linked pulldown lists using MySQL, PHP/JavaScript/Ajax/jQuery
I am quite new to web development and have been tasked with developing a web application that shows the user 5-15 pull-down lists on one page, where each selection limits the choices in all the other lists. The user should be able to start with any one of the lists (so no fixed selection order), and when the user has selected something in each list, or all parameters are otherwise locked by previous choices, the user presses the GO button and some calculations take place, presenting a database selection. Basically it is a multiple-parameter product selector application.
The relations between the lists are not simple, and could need calculated fields etc, and one list could affect the content of several others. The database behind will be MYSQL, probably a single large table, with perhaps 30 fields and 500-5000 rows. I will be using PHP, JavaScript and perhaps AJAX unless you have a strong reason not to.
I have done some research and found three ways to do this:
Send all data to the browser and handle the filtering etc client side with Javascript.
Send parameters back to the server after each selection and reload the whole form each time. Probably a little bit of JavaScript and most of the code in PHP.
Use AJAX to change all list content dynamically without reloading the whole form.
Since I am so new to this I have a hard time telling which way to go, what pitfalls there are etc...
I have some concerns:
A. Slow initial loading. Worst for #1?
B. Slow dynamic response. Worst for #2?
C. Complicated programming. Worst for #3?
D. Compatibility issues for different browsers and plattforms. Have no idea of which method is most likely to create problems...better if I use some Framework?
E. Could I even try to make something at least partly working for people with JavaScript turned off (like selecting each list on a new page and having to press the GO button each time)? (I think I can tell my users they must have JavaScript on, so no big issue...) Perhaps #2 is best here?
F. I think the specification of "free selection order" means I have to download most of the database initially, so perhaps I should try to avoid that option... if I keep it I might as well use method #1, or?
G. It would be best to do as much as possible of the selection/filtering in SQL to allow future extensions by building custom SQL code, so that gives a big minus to #1...
H. Other pitfalls etc???
I have found tutorials etc. for all three methods, but if you can point to good resources like these I would appreciate it, especially so I don't base my code on examples that are not smart/good/compatible...
1:
http://www.bobbyvandersluis.com/articles/unobtrusivedynamicselect.php
http://javascript.about.com/library/bl3drop.htm
http://www.experts-exchange.com/Programming/Languages/Scripting/JavaScript/Q_20523133.html
2:
http://www.plus2net.com/php_tutorial/php_drop_down_list.php
http://www.plus2net.com/php_tutorial/php_drop_down_list3.php
3:
http://techinitiatives.blogspot.com/2007/01/dynamic-dropdown-list-using-ajax_29.html
http://www.webmonkey.com/tutorial/Build_an_Ajax_Dropdown_Menu
http://www.noboxmedia.com/massive-ajax-countryarea-drop-down-list/
http://freeajaxscripts.net/tutorials/Tutorials/ajax/view/Create_AJAX_Dynamic_Drop_Down_List_using_PHP_-_xajax.html
3+jQuery:
http://remysharp.com/2007/01/20/auto-populating-select-boxes-using-jquery-ajax/
Now to the question: could anyone experienced in all these methods help me out a bit with the evaluation of methods 1-3 above, so I can choose one and get started on the right track? Also, would I be helped by learning/using a framework like jQuery+JSON for this?
Rgds
PM
Send all data to the browser and handle the filtering etc client side
with Javascript.
You mentioned that your table has 30 columns and 500-5000 rows potentially? In that case it would not be a good idea to send that much data when the page loads as: 1. It will make the page slower to load and 2. It is likely to make the browser hang (think IE).
Send parameters back to the server after each selection and reload the
whole form after each selection.
Probably a littebit Javascript and
most code in PHP.
I'm not sure how this differs much from the third approach, but probably you mean that you need to reload the page? In that case it isn't likely to be a good user experience if they need wait for the page to refresh every time a drop down selection is changed..
Use AJAX to change all list content
dynamically without reloading the
whole form.
By far the best approach from a user's perspective as it makes filling out the form simple. Perhaps slightly harder to implement from your end, but as you would likely need to perform the same calculations with each of the solutions - might as well move them to a separate page that can be called by AJAX to retrieve your data. As others have mentioned, using jQuery for all your JavaScript/AJAX stuff is going to make things a hell of a lot easier ;)
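To make the AJAX approach concrete, here is a sketch of what the server endpoint does conceptually: given the user's current selections, it returns the remaining valid options for every other list. The rows and field names below are made up purely for illustration, and modern JavaScript is used for brevity:

```javascript
const rows = [
  { color: 'red',  size: 'S', material: 'wood'  },
  { color: 'red',  size: 'M', material: 'metal' },
  { color: 'blue', size: 'S', material: 'metal' },
];

function remainingOptions(rows, selections) {
  // Keep only rows compatible with everything chosen so far...
  const matching = rows.filter(function (row) {
    return Object.keys(selections).every(function (key) {
      return selections[key] == null || row[key] === selections[key];
    });
  });
  // ...then collect the distinct values left for each field.
  const options = {};
  matching.forEach(function (row) {
    Object.keys(row).forEach(function (key) {
      options[key] = options[key] || new Set();
      options[key].add(row[key]);
    });
  });
  return options;
}

// After the user picks color = red, the other lists narrow down:
const opts = remainingOptions(rows, { color: 'red' });
console.log(Array.from(opts.size));     // [ 'S', 'M' ]
console.log(Array.from(opts.material)); // [ 'wood', 'metal' ]
```

In the real application the same filtering would live in a SQL WHERE clause built from the current selections, and each AJAX call would return the narrowed option lists as JSON.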
Thanks James for a comprehensive answer! AJAX + framework is what most people seem to suggest. Guess I have to make the effort... the other negative about it is that it basically rules out having a fallback solution for people without JavaScript on, or? For the framework, is jQuery a good/best choice for this type of application, or should I consider YUI, MooTools etc. that have more functions but seem a bit harder to use/learn?
I'd definitely recommend using AJAX with jQuery: it's tested in all of the major browsers and has simple calls that will make it a lot faster to code, and you won't have the browser-compatibility problems of plain JavaScript.
Thanks Scott! Have you seen any more good tutorials for anything like this using jQuery?
My personal recommendation is to go with AJAX.
Raw SQL or not is really a question of what backend you are using.
You need to be able to set the relationships between the different selections. The population of the lists must be able to communicate with your backend.
The real issue here is how you implement the relationships between selections. I have no good answer here, it depends heavily on the backend and your administrative needs. It can be hard coded in PHP or configured via XML or via administrative interfaces and persisted to your database solution.
It's no easy task to make it fully customizable.
The reason why I suggest using AJAX is basically that you need to filter upon any change in any selection. That would mean either downloading a lot of unused information or a lot of page refreshes. Going with AJAX gives the user a smooth experience all the way.
Thanks Peter for even further confirming the use of AJAX! Not sure I fully understand your answer though....
jQuery is a simple way to go... You can also try a particular library called xajax! These will make things easier.
Proving linear independence of function factors
I have an algorithm for converting a two-variable function $F(x,y)$ into a sum of products of single-variable functions $F(x,y) = \sum_i g_i(x)h_i(y)$. I am attempting to determine whether (or when) the $g_i$ produced in this way are all linearly independent.
The algorithm is as follows:
If $F(x,y)$ is identically zero, then we're done—the decomposition needs no terms.
Otherwise, pick a point $(a,b)$ where $F(a,b)\neq 0$. Define $g(x)\equiv F(x,b)$ and define $h(y) = F(a, y)/F(a,b)$. The functions $g(x)$ and $h(y)$ are the first two factors in the decomposition.
To find the remaining factors, recursively apply this algorithm to the new function $G(x,y)\equiv F(x,y)-g(x)h(y)$.
The algorithm proceeds, generating successive terms $g_i,h_i$, stopping whenever $F(x,y) - \sum_i g_ih_i = 0$, i.e. when $F(x,y)=\sum_ig_ih_i$.
What I've tried
I think it might be true that each function $g_k$ is a linear combination of $F(x,b_1),\ldots,F(x,b_k)$, in which case the $g_k$ are linearly independent if and only if the $F(x,b_i)$ are.
I also know the theorem that functions $g_1,\ldots,g_n$ are linearly independent if and only if there exist points $x_1,\ldots,x_n$ such that the determinant $\det([g_i(x_j)]_{i,j}) \neq 0$. But I haven't been able to make the calculation do anything useful as of yet.
I wonder if I might be able to use the points $a_1,\ldots,a_n$ (generated by the algorithm as places where the function is nonzero) as the points required for the theorem, but I don't see how they carry the right properties to prove linear independence.
ETA: I think the $g_i(x)$ might have different zeroes. That is, I suspect $g_i(a_j)$ is zero whenever $i\geq k$ and nonzero whenever $i<k$, which suggests a kind of proof by induction that if any linear combination of them is zero, then the coefficient on $g_1$ must be zero, hence on $g_2$, hence on $g_3$, etc.
Can you explain why the algorithm terminates? It's not at all clear to me that it does.
I don't have a proof that it terminates, and I suspect there are functions where it doesn't. Proving that the $g_1,\ldots,g_n$ and $h_1,\ldots,h_n$ are linearly independent might be useful for establishing such a proof: if $f$ has two different decompositions $\sum_ip_iq_i$ and $\sum g_ih_i$ (with the $p_i, q_i, g_i, h_i$ all linearly independent sets), then both sums have the same number of terms.
One good example of how it works is on the function $f(\alpha,\beta)= \cos(\alpha+\beta)$.
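Working that example through the algorithm (a sketch; picking the nonzero points $(0,0)$ and then $(\pi/2,\pi/2)$):

```latex
g_1(\alpha) = F(\alpha,0) = \cos\alpha,
\qquad
h_1(\beta) = \frac{F(0,\beta)}{F(0,0)} = \cos\beta,
\\[4pt]
G(\alpha,\beta) = \cos(\alpha+\beta) - \cos\alpha\cos\beta = -\sin\alpha\sin\beta,
\\[4pt]
g_2(\alpha) = G\!\left(\alpha,\tfrac{\pi}{2}\right) = -\sin\alpha,
\qquad
h_2(\beta) = \frac{G\!\left(\tfrac{\pi}{2},\beta\right)}{G\!\left(\tfrac{\pi}{2},\tfrac{\pi}{2}\right)} = \sin\beta.
```

Since $G - g_2 h_2 \equiv 0$, the algorithm stops after two terms, recovering $\cos(\alpha+\beta)=\cos\alpha\cos\beta-\sin\alpha\sin\beta$, with $g_1=\cos$ and $g_2=-\sin$ indeed linearly independent.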
Schedule run Laravel 503 service unavailable error
I updated Laradock and now my project is showing a 503 error. I'm not running any cron job.
PHP version: 8.1
Laravel 8
Node v14.17.0
npm 8.1.3
I tried reinstalling Composer, npm, and the Laravel roles/permissions package; nothing worked.
I can't figure out this error:
(laradock) CMD (/usr/bin/php /var/www/artisan schedule:run >> /dev/null 2>&1)
Can you run this command: php artisan up
@LokendraSinghPanwar, I ran php artisan up and it still shows the same 503 error.
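For what it's worth, in Laravel 8 `php artisan up` essentially just deletes the maintenance-mode marker file, so if artisan itself is misbehaving you can check for and remove the marker by hand (a sketch; the path below is the Laravel 8 default, simulated here with a dummy file):

```shell
# Simulate a project that was put into maintenance mode.
mkdir -p storage/framework
touch storage/framework/down

# Roughly what `php artisan up` boils down to:
rm -f storage/framework/down
```

If the file is absent and the 503 persists, the problem is more likely in the web-server or container configuration than in Laravel's maintenance mode.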
Geometrical Rotation and Rotation operator relation and ambiguity
In Cohen-Tannoudji's it is said (vol.1 page 694):
$|\psi \rangle$ is the state of the system before the rotation.
$|\psi ' \rangle$ is the state of the system after the rotation.
$|\psi ' \rangle= R |\psi \rangle$ in the bracket notation.
Geometrically: $\psi '(\vec r)=\psi (\mathcal{R}^{-1} \vec r)$
Also : $\mathcal{R}$ (geometrical rotation) $\rightarrow R$ (operator)
This is the part I don't fully understand: the notation change and the commentary:
The relation $\psi '(\vec r)=\psi (\mathcal{R}^{-1} \vec r)$ characterizes its action in the {$|\vec r\rangle$} representation:
$\langle \vec r|R| \psi \rangle=\langle \mathcal{R}^{-1} \vec r| \psi\rangle$,
where $|\mathcal{R}^{-1} \vec r\rangle$ is the basis ket of this representation determined by the components of the vector $\mathcal{R}^{-1} \vec r$.
I have the following questions about the commentary in bold:
What is meant by "characterizes its action in the..."?
Why, in $\langle \vec r|R| \psi \rangle=\langle \mathcal{R}^{-1} \vec r| \psi\rangle$, did we replace the operator with its geometrical analogue instead of with its adjoint $R^\dagger$?
"Where $|\mathcal{R}^{-1} \vec r\rangle$ is the basis ket of this representation determined by the components of the vector $\mathcal{R}^{-1} \vec r$": what is meant by this?
Let's be a bit more explicit about what the objects here are:
$\vec r\in\mathbb{R}^3$ is a position vector. $\mathcal{R}\in\mathrm{SO}(3)$ is the usual representation of a rotation as a 3d matrix, and hence $\mathcal{R}\vec r\in\mathbb{R}^3$ is a rotated position vector.
$R\in O(\mathcal{H})$ is an operator on our Hilbert space $\mathcal{H}$ ($O(\mathcal{H})$ is just notation for the space of linear operators on $\mathcal{H}$) and so for any state vector $\lvert \psi\rangle\in\mathcal{H}$, $R\lvert \psi\rangle\in\mathcal{H}$ is the state vector we get by applying a rotation to a state vector.
There is a unitary representation map $\rho : \mathrm{SO}(3)\to O(\mathcal{H}), \mathcal{R}\mapsto \rho(\mathcal{R})$ that maps every rotation $\mathcal{R}$ to its representation $\rho(\mathcal{R})\in O(\mathcal{H})$ on the space of states. This map fulfills $\rho(\mathcal{R}^{-1}) = \rho(\mathcal{R})^\dagger$. It's a bit confusing to just call the image under this map $R$, so let's call it $\rho(\mathcal{R})$ from now on.
The ket $\lvert \vec r\rangle\in\mathcal{H}$ is the state that corresponds to a particle localized at $\vec r$. The relation that your quote wants to state is
$$ \lvert \mathcal{R}\vec r\rangle = \rho(\mathcal{R})\lvert \vec r\rangle$$
i.e. applying the rotation to the position vector and then finding the state corresponding to it is the same as taking the state corresponding to the position vector and applying the rotation as an operator on the space of states to it. For the wavefunction of a rotated state $\lvert \psi'\rangle = \rho(\mathcal{R})\lvert \psi\rangle$, this means
\begin{align} \psi'(\vec r) & = \langle \vec r\vert \psi'\rangle = \langle \vec r\rvert \rho(\mathcal{R})\lvert\psi\rangle = (\lvert \vec r\rangle,\rho(\mathcal{R}) \lvert \psi\rangle) = (\rho(\mathcal{R})^\dagger\lvert \vec r\rangle, \lvert \psi\rangle) \\
& = (\rho(\mathcal{R}^{-1})\lvert \vec r\rangle, \lvert \psi\rangle) = (\lvert \mathcal{R}^{-1}\vec r\rangle,\lvert \psi\rangle) = \langle \mathcal{R}^{-1}\vec r\vert\psi\rangle = \psi(\mathcal{R}^{-1}\vec r),\end{align}
where we've written $\langle a\vert b\rangle = (\lvert a\rangle, \lvert b\rangle)$ in terms of the inner product $(-,-)$ since the bra-ket notation obscures a bit what's going on here.
Django 1.7 Not Finding New Model w/ makemigrations
I have a project with several apps and many data models. I'm using Django 1.7 and Python 2.7.
I've organized the models into app-level modules.
- common/
-- models/
--- __init__.py
--- these_models.py
--- those_models.py
I've added a new file in this structure and Django's makemigrations command is not detecting changes.
If I put the new models in an existing model file the migration files are created perfectly, everything migrates and runs great. Once I put them into a new file Django doesn't find them. They aren't in a new app - it's an existing app/models/ module, just a new file. I don't import * (ewwww) in the __init__.py or anything.
In Django 1.4 I had to use the Meta's app_label but don't do this anymore.
Any thoughts? Will I need to make the migration files manually (I have no problem doing this)?
http://stackoverflow.com/questions/5534206/how-do-i-separate-my-models-out-in-django
This is not relevant to my question as I'm using Django 1.7. Also, I mentioned in my question that is not relevant. Did you read my question?
You should import your models in the __init__.py inside models. No one is telling you to use *.
@snahor This does not follow our current project paradigm. Django can handle what I'm describing and I'm not sure why it's not this time.
Django does now support models in subfolders without needing to specify the Meta class and app_label but it's still python and doesn't magically load all modules in the models folder.
You still need to import your models into your app/models/__init__.py.
This is not true. My project has about 8 model modules with about 2 dozen model files across the modules. Not one is imported in the app/models/__init__.py. We just reference all our models as from app.models.these_models import SomeModel. Django seems to handle this paradigm just fine. (other than this use case)
@Rico in that case you are "getting lucky" by the fact there are imports for those models. If there is a period of time a model in one of those files is not imported by a module auto-imported by django, makemigrations would fail to see the models and delete it. Sounds a little dangerous! This is also the problem with why your new models aren't picked up by makemigrations. Nothing is importing them. This is a dependency nobody would expect (removing an import should not cause massive changes in the way the app works) and I recommend importing in __init__.py where django expects to find them
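Concretely, the convention described above means re-exporting the models in the package's `__init__.py` (a sketch using this question's file names; the model class names are placeholders):

```python
# common/models/__init__.py
# Explicit imports (no `import *`) so Django always loads every model module,
# regardless of what the rest of the codebase happens to import.
from .these_models import TheseModel  # placeholder class names
from .those_models import ThoseModel
```

With this in place, makemigrations sees models in a newly added file even before anything else imports it.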
Are these summation notation the same or do they express different rules?
Are these two the same?
$\sum_{u\in S}\sum_{v\in S}f(u,v)$
$\sum_{u,v\in S}f(u,v)$
Yes, you're right: both denote the sum of $f(u,v)$ over all ordered pairs $(u,v)\in S\times S$. For example, with $S=\{1,2\}$ both equal $f(1,1)+f(1,2)+f(2,1)+f(2,2)$; the double-sum form just makes the iteration explicit.
Angular Date Incorrect
I am using angular 1 with a webapi service.
I have a html5 date input on the page and if I select 2nd June 2018 I get the following JSON returned
2018-05-31T23:00:00.000Z
My computer and the server are set to GMT so why on earth is it removing an hour from the time?
Yep... That was my mistake. It is now edited to show the correct info
Lol... It is now correct above. It is the end of a hard day ;-)
Welcome to the bane of developers everywhere... date/time
Everything is fine. The Date is serialized in UTC, while your local time is BST (British Summer Time), i.e. UTC+1, so the UTC string shows one hour less: Sat Jun 02 2018 00:00:00 GMT+0100 (BST) == "2018-06-01T23:00:00.000Z".
Here is a demo of how time can be displayed:
var app = angular.module('myApp', []);
app.controller('myCtrl', function($scope) {
$scope.test = new Date("2018-06-02");
$scope.check = function() {
console.log($scope.test);
}
});
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.4/angular.min.js"></script>
<div ng-app="myApp" ng-controller="myCtrl">
<input type="date" ng-model="test">
<button ng-click="check()">Click</button>
<br>{{test}}
<br>{{test | date}}
<br>{{test | date:"fullDate"}}
</div>
It isn't BST at the moment... Surely Zulu time is GMT not BST(Which could change at a government decision)?
Ahh. I see what you are saying now. However Surely it should therefore return Sat Jun 02 2018 00:00:00 GMT+0100 not 2018-06-01T23:00:00.000Z?
@coolblue2000 it might be a good idea to use moment.js to automate/simplify most of the time (display) issues
How would this be done? Angular assumes I have picked a specific date. Is this not an issue with the HTML Date Input?
I am not sure how that helps. I know what date it is returning. It is just the incorrect date. In a date picker I want the date I picked not the date I picked minus 1 hour.
@coolblue2000 it's just depends on your local time, it will show you a different result. You can fix it by adding additional time with .getTimezoneOffset()
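A minimal sketch of the `getTimezoneOffset()` correction mentioned above: shift the Date by its own offset before serializing, so the ISO string keeps the calendar date the user picked (runnable in plain JavaScript, outside Angular):

```javascript
function toLocalISODate(d) {
  // Date.prototype.toISOString always renders in UTC; subtracting the
  // local offset first makes the UTC rendering match the local calendar date.
  return new Date(d.getTime() - d.getTimezoneOffset() * 60000)
    .toISOString()
    .slice(0, 10);
}

var picked = new Date(2018, 5, 2); // local midnight, 2 June 2018
console.log(toLocalISODate(picked)); // "2018-06-02" in any timezone
```

Sending this string (rather than the raw `toISOString()` output) to the web API keeps the picked date intact.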
Google Analytics POST request works fine in REST Client, but not using Python Requests
I'm trying to send an event to Google Analytics using a Python script.
When I try to send the event using the Google Analytics "Hit Builder", it works fine. When I try to do the same thing with a simple POST request using the VSCode REST Client, it also works fine. But, for reasons I can't understand, making the exact same POST request with the Python Requests library doesn't work.
Here's what I want to do. This request works perfectly using the VSCode REST Client:
###
# @name analytics_post
POST www.google-analytics.com/collect
?v=1
&t=event
&tid=UA-XXXXXX-Y
&cid=Client%20ID%20Goes%20Here
&ec=Event%20Category%20Goes%20Here
&ea=Event%20Action%20Goes%20Here
&el=Event%20Label%20Goes%20Here
&ev=123
When I make that request, the event shows up right away in Google Analytics - great! But when I try to do the exact same thing in Python with Requests, it fails:
import requests
from urllib.parse import urlencode, quote
params = {}
params["v"] = 1 # Version
params["t"] = "event" # Event hit type
params["tid"] = "UA-XXXXXX-YY" # Tracking ID
params["cid"] = "Client ID Goes Here" # Anonymous Client ID
params["ec"] = "Event Category Goes Here" # Event Category
params["ea"] = "Event Action Goes Here" # Event Action
params["el"] = "Event Label Goes Here" # Event label
params["ev"] = 123
url = 'https://www.google-analytics.com/collect'
query = urlencode(params, quote_via=quote)
response = requests.post(url=url, params=query)
After running this, I get a 200 Response object and everything looks fine. The response even has a url object, which is exactly the same as the URL I formatted for the REST client request... so there aren't any issues with the way the URL is formatted. But the data doesn't appear in Google Analytics.
Just to be sure, I tried the same code but leaving out the quote_via=quote parameter, so that it would default to quote_plus, meaning that spaces would be rendered as + characters rather than %20. I also tried doing data=query instead of params=query in the POST request (last line). Neither of these worked. What am I doing wrong?
you don't need to urlencode if using requests
I get the same behvaiour if I omit the urlencode line
are you sure you need the 'params' kwarg and not 'data'? i havent messed with google analytics so just asking.
I've seen differing answers on whether I should use params or data... Anyway I tried both and got the same behaviour.
what headers does the vs thing use? GA may not like the default python ones
REST Client uses "User-Agent": "vscode-restclient". I tried just using {"User-Agent": ""} as the headers per another post, and that worked. Thanks for your help.
Turns out, this was an issue with the default headers used by requests. The solution mentioned here worked for me. I just had to also add {"User-Agent": ""} as the set of headers for the POST request.
This worked for me:
import requests
params = {}
params["v"] = 1 # Version
params["t"] = "event" # Event hit type
params["tid"] = "UA-XXXXXX-YY" # Tracking ID
params["cid"] = "Client ID Goes Here" # Anonymous Client ID
params["ec"] = "Event Category Goes Here" # Event Category
params["ea"] = "Event Action Goes Here" # Event Action
params["el"] = "Event Label Goes Here" # Event label
params["ev"] = 123
url = 'https://www.google-analytics.com/collect'
headers= {"User-Agent": ""}
response = requests.post(url=url, params=params, headers=headers)
Is writing to a by-value function parameter good style?
I always heard the rule that changing a parameter that was given to the function by value is bad coding style. Instead it is preferred to create a copy that is modified.
But I think in the following case it would be acceptable to change the function parameters. What do you think is the best way to do it?
Point3f GetWorldPoint(int x, int y)
{
x = saturate(x, 0, width);
y = saturate(y, 0, height);
...
}
template<typename T>
T saturate(T val, T min, T max) {
return std::min(std::max(val, min), max);
}
I believe compilers optimize it out, so it's all personal preference.
What I think is the best way to do it is my opinion, which makes this primarily opinion-based. Although obviously returning by value is better than modifying references.
agree with @AngeloGeels compiler can do optimization So it is a personal choice, but In general you should avoid copies if you don't need it.
With small types (typically value types), I would avoid passing by reference, so that the interface makes clear there are no side effects on the passed variable. Reasons to choose references are usually optimization (saving copy time and, in many cases, memory) or convenience.
This is very much a matter of opinion or preference. It's not something I've seen a lot of discussion on. You can do it, of course, but it feels wrong, as though it's going to make your code harder to follow.
@fardjad, abort: the question is explicitely about by-value parameters, so no side effect, only local copies of parameters are modified.
Not related to the question at all, the function that you call saturate is usually called clamp.
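As a side note, C++17 later standardized exactly this operation as `std::clamp` in `<algorithm>` (argument order `(value, lo, hi)`), so on a C++17 compiler `saturate` can simply forward to it:

```cpp
#include <algorithm>
#include <cassert>

// C++17's std::clamp(v, lo, hi) computes std::min(std::max(v, lo), hi),
// i.e. exactly the saturate() from the question.
template <typename T>
T saturate(T val, T min, T max) {
    return std::clamp(val, min, max);
}
```
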
The question is not about references. The question is, given that the parameters will be passed by value, is it "bad style" to modify that? To me, that's a very strange question, but I think it's good to stay focussed on the original question.
"I always heard the rule that changing a parameter that was given to the function by value is bad coding style" Here's my opinion: it is not bad coding style. It is perfectly fine. The parameter is your own local copy. You can do with it what you like.
I have some objection with using
Point3f GetWorldPoint(int x, int y)
{
x = saturate(x, 0, width);
y = saturate(y, 0, height);
...
}
Because even semantically, x and y aren't the same before and after the saturate function (they are saturated after all). In more complex situations, this might become confusing. Of course, in obvious functions, it's still straightforward that after saturate, you are now using the saturated x.
Anyway, there is nothing wrong with
Point3f GetWorldPoint(int origX, int origY)
{
int x = saturate(origX, 0, width);
int y = saturate(origY, 0, height);
...
}
It is clearer and an optimizing compiler will create similar code in both cases.
I'm going to stick my neck out here and say it's fine. For example:
template<class T>
Matrix4x4<T> Transpose(Matrix4x4<T> m)
{
return m.Transpose();
}
or
template<class T>
Matrix4x4<T> Transpose(Matrix4x4<T> const & m)
{
Matrix4x4<T> n(m);
return n.Transpose();
}
Which is more concise?
I am a little torn on this specific example. If I am not mistaken, there is a real difference between the two versions. As RVO cannot be applied to a function parameter, the first one will result in transpose -> move/copy, while the second one will result in copy -> transpose. This might make no difference for matrices but is something to keep in mind.
I think it's fine to pass by value anything you're going to make a copy of anyway. I remember reading or seeing Scott Meyers talking about it, though my recollection is only dim.
Integers >9999 in PIC C18
In this answer, I made a function to convert an integer to an ASCII string:
void writeInteger(unsigned int input) {
unsigned int start = 1;
unsigned int counter;
while (start <= input)
start *= 10;
for (counter = start / 10; counter >= 1; counter /= 10)
serialWrite(((input / counter) % 10) + 0x30);
}
I tested this function with a loop:
unsigned int counter;
for (counter = 0; counter< 1000000; counter++) {
writeInteger(counter);
serialWrite('\r');
serialWrite('\n');
}
This works, for \$1\le{}n\le9999\$. However, for 10,000 and above, I'm getting weird strings on the terminal:
10000 => 2943
10001 => 2943
10002 => 2944
10003 => 2944
10004 => 2944
10005 => 2945
...
Why is that? How can I fix it?
Check this answer: http://stackoverflow.com/questions/6302195/where-can-i-find-a-free-or-open-source-c-library-to-do-bcd-math#6302328 You really want to limit your divisions to the minimum, as they are slowwwwwwwwwwww
You have made counter < 1,000,000 in the main. However, counter is an int.
@abdullahkahraman yeah, thanks - I thought an int would hold 32 bits, as in Java, for example.
Here is my routine that is optimized for an 8-bit microcontroller. It is fast since it does not use any modulus or multiplication. Also, you can modify it easily so that it is suitable for more digits.
It's because the following section of code will try to set start to 100,000 for numbers equal to or above 10,000, which is too big for an unsigned int, which is 16-bit and can only hold 0-65535:
while (start <= input)
start *= 10;
Changing the definition to the following should fix it:
unsigned long start = 1;
Another alternative, which makes the width explicit, is to include stdint.h so it may be defined as follows, which will work across compilers:
#include <stdint.h>
uint32_t start = 1;
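Putting the fix together, here is a hedged sketch of the corrected routine. serialWrite is PIC-specific, so this version collects the digits into a caller-supplied buffer instead, which makes it easy to check on a desktop compiler; on the PIC you would keep serialWrite:

```c
#include <stdint.h>

/* Fixed version: uint32_t lets `start` reach 100000 for inputs >= 10000.
   Digits are written into `out` (NUL-terminated) instead of the serial
   port so the result can be verified off-target. */
void writeInteger(uint32_t input, char *out)
{
    uint32_t start = 1, counter;
    while (start <= input)
        start *= 10;                      /* one power of ten past input */
    for (counter = start / 10; counter >= 1; counter /= 10)
        *out++ = (char)('0' + (input / counter) % 10);
    *out = '\0';                          /* input == 0 yields "" here */
}
```

As in the original, an input of 0 produces an empty string, so the loop in the question would print nothing for its first iteration.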
How come? When input is 10,000, it will set start to 10,000.
Ahh! I thought an int would hold 32 bits, but that's a long! For reference, section 2.1.
@abdullahkahraman no, it sets it to 100,000 and the counter is set to start/10.
When input is 10,000, it will set start to 10,000. However, when the input is bigger than 10,000 it will set start to 100,000. What am I doing wrong?
@abdullahkahraman It's <=, not < ;)
@CamilStaps, can be a trap with some micro compilers, I've also added the way I'd normally go about it that's more portable.
@CamilStaps But 10,000 = 10,000 when input=10,000.
@abdullahkahraman as long as start<=10,000, it will be multiplied by 10. The last time it multiplies by 10 is when start=10,000, so the end result would be 100,000.
| common-pile/stackexchange_filtered |
Can't compile C++ function on my machine although it compiles fine using online compiler
I have a simple recursive Binary Search program in C++. The program compiles just fine using ideone: http://ideone.com/gMB96l
However, when I try to compile on my machine, using Xcode in OS X, it gives an error: control may reach end of non-void function.
It's also the same when I tried to compile using command line: g++ RecursiveBinarySearch.cpp and ./a.out, it gives me: RecursiveBinarySearch.cpp:18:1: warning: control may reach end of non-void function [-Wreturn-type]
Does anyone know why?
#include <iostream>
using namespace std;
static const int SIZE = 10;
int search(int arr[], int target, int startIndex, int endIndex)
{
if (startIndex > endIndex) return -1;
int midIndex = (startIndex + endIndex) / 2;
if (target == arr[midIndex])
return midIndex;
else if (target < arr[midIndex])
search(arr, target, startIndex, midIndex-1);
else
search(arr, target, midIndex+1, endIndex);
}
int main() {
int arr[SIZE] = {1,2,3,4,5,6,7,8,9,10};
cout << "3 is at index: " << search(arr, 3, 0, SIZE-1) << endl;
return 0;
}
No it does not compile fine with an online compiler. The online compiler just doesn't show you warnings, nor treats warnings as errors. If you want to use an online compiler, find one that lets you set your own flags, and use -Wall -Werror
Yes, that compiler was misleading. I should have not used it
Your code for search doesn't return a value unless target == arr[midIndex]. You probably meant to return the value returned by your recursive calls to search i.e. return search(arr, target, ...);
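For completeness, a sketch of the corrected function with the recursive results returned:

```cpp
// Same algorithm, but every control path now returns a value: the
// results of the recursive calls are propagated instead of discarded.
int search(const int arr[], int target, int startIndex, int endIndex)
{
    if (startIndex > endIndex) return -1;      // not found
    int midIndex = (startIndex + endIndex) / 2;
    if (target == arr[midIndex])
        return midIndex;
    else if (target < arr[midIndex])
        return search(arr, target, startIndex, midIndex - 1);
    else
        return search(arr, target, midIndex + 1, endIndex);
}
```

With the returns in place, the warning goes away and the function is well-defined on every path.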
I see where you might be confused. The compiler warns because on the recursive paths no return statement is executed; a value is returned only when target == arr[midIndex].
Hence the warning
Control may reach end of non-void function [-Wreturn-type]
A quick fix would be to add a return statement at the end of the method, but beware: as written, the results of the recursive calls are discarded, so that trailing return would actually be reached on those paths. The clean fix is to return the recursive calls, i.e. return search(...).
Note: in the future, when you encounter this warning it is usually this issue.
Hope this helps
| common-pile/stackexchange_filtered |
Are multidimensional arrays accessed via temporary pointers?
I've read that array subscript access is the same as pointer subscript access (can't find the proper link, but http://c-faq.com/aryptr/joke.html mentions that indirectly).
So what about multidimensional access? Obviously there is no pointer at c[0] in a multidimensional array.
#include <stdio.h>
int main(){
char c[5][2][4];
c[1][1][2] = 'n';
printf("\n%c", c[1][1][2]);
char* ptr = c;
printf("\n%c\n", ptr[1*8+4*1+2]);
}
But whatever it references (in a 3D array [d1][d2][d3]) is of size d2*d3, as that is where the imaginary pointer is offset.
Are there any docs for how this is implemented or how this can be reasoned with? In the 2D case, it makes sense, that it is just a pointer, but for the 3+ case, it is not clear if/how pointers are still being worked with.
All array access is really done through the use of pointers. Remember, for any array or pointer a and index i, the expression a[i] is exactly equal to *(a + i). That means an expression like c[1][1][2] is equal to *(*(*(c + 1) + 1) + 2). Also, the best way to figure out how to use "one-dimensional" indexing is to draw the arrays out of paper, with small boxes for each element and then label each box with the corresponding index (like c[1][1][2] etc.) plus a "one-dimensional" index from zero.
C11 Standard - 6.3.2.1 Other Operands - Lvalues, arrays, and function designators (p3). All arrays are -- but pay attention to the 4 exceptions for when an array is not converted to a pointer to the first element on access.
Seems you don't need to hear the debunking of the "arrays are pointers in disguise" lie, so I won't repeat it. But suffice it to say that yes, all indexing happens through pointers. When you write c[0], c decays to a pointer for the purpose of indexing: *(c + 0).
But a pointer to what you ask? A pointer to an array, of course. You can form those. For instance:
char d[2][4];
char (*ptr)[2][4] = &d;
Here ptr points to that array of type char[2][4]. And this is exactly the sort of pointer c decays into. That's why the multi-dimensional array school of thought in C is not too exact. We don't have those, what we have is arrays of arrays.
The whole array and pointers chapter in the classic C FAQ is great really, since it starts by correctly addressing arrays vs pointers.
I guess my question was about the size of the pointer as well, but this is close enough. The size of the pointed-to type is the size of everything to the right (in this case 4), so doing d[1] advances by 4 bytes. I kept assuming that it had to be a char* ptr.
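Those pointee sizes can be checked directly; a sketch in plain C:

```c
#include <stddef.h>

/* The "imaginary pointer" an array decays to is a pointer to an array,
   and its pointee size is everything to the right of the first index. */

size_t step_2d(void)                /* char[2][4] decays to char (*)[4] */
{
    char d[2][4];
    char (*row)[4] = d;
    return (size_t)((char *)(row + 1) - (char *)row);     /* 4 bytes */
}

size_t step_3d(void)                /* char[5][2][4] -> char (*)[2][4] */
{
    char c[5][2][4];
    char (*block)[2][4] = c;
    return (size_t)((char *)(block + 1) - (char *)block); /* 2*4 = 8 */
}
```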
All array expression evaluations yield a pointer. If all subscripts are used, that yielded pointer is being dereferenced; any other evaluation expression will just yield a pointer value.
I emphasize "yield" to dispel with the notion that an array actually is a pointer - it is not. The mechanism is often described as "decays into a pointer", which I personally find misleading; after all, the array itself remains unchanged.
So, yes, you can reason about this as if the array access produces a temporary pointer.
The concept is explained here: With arrays, why is it the case that a[5] == 5[a]?. It is an alternative syntax allowed "by accident" to keep the language consistent, nothing else.
It has nothing to do with the container type, you can do the same obscure tricks with pointer-to-pointer as with multi-dimensional arrays. Example:
#include <stdio.h>
int main (void)
{
char str[2][2][6] =
{
{"hello", "world"},
{"foo", "bar"}
};
printf("%c\n", str[0][0][0]); // print h
printf("%c\n", 0[0[0[str]]]); // print h (bad code, don't do this)
}
The reason why you can do things like ptr[1*8+4*1+2] is another story, namely that a multi-dimensional array is guaranteed to be allocated contiguously. This is specified in C17 6.2.5/20:
An array type describes a contiguously allocated nonempty set of objects with a particular member object type, called the element type.
As for "imaginary pointers", how an array is accessed in machine code is up to the compiler. Sometimes it is more efficient to use direct addresses, sometimes it uses a base address + an offset. It's the kind of automatic optimization that programmers need not worry about.
| common-pile/stackexchange_filtered |
Can't browse Android phone with KDE Connect on Kubuntu
I'm using Kubuntu and my Android phone (Samsung S7) is connected to it with KDE Connect. I can send files from the phone to the PC, but browsing keeps on failing.
It successfully rings my phone and shows the charge percentage, but when I try to browse the device it says
Failed to mount filesystem: device not responding
Have you made sure the device is unlocked when attempting to browse it? My phone won't let me browse the files if the screen has locked.
| common-pile/stackexchange_filtered |
How to get input argument from a mock method using PowerMock / EasyMock
I have an entity Person class.
A ProcessPerson class, which contains a process() method, needs to be tested.
What I need is to get the object that is created in the process() method and passed to the mocked create() method of the mock object personDao.
public class Person {
private String firstName;
private String lastName;
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
}
public class ProcessPerson {
@Autowired
private PersonDao personDao;
public void process() {
Person person1 = new Person();
person1.setFirstName("FN");
//Do more logic to fill values into person object
personDao.create(person1);
}
}
import static org.powermock.api.easymock.PowerMock.*;
import static org.easymock.EasyMock.expect;
@RunWith(PowerMockRunner.class)
@PrepareForTest ( {Person.class} )
public class TestClass {
@Test
public void TestCase1() {
ProcessPerson process = new ProcessPerson();
//Create dependent DAO mock object
PersonDao personDaoMock = createMock(PersonDao.class);
//Inject mock object
ReflectionTestUtils.setField(process, "personDao", personDaoMock);
replayAll();
process.process();
//QUESTION here: can I get the object person1 which was created in process() method
//and called via create() function of my mock object
//
//I need the object to do verification like this
//Assert.assertEqual(person1.getFirstName(), "FN");
verifyAll();
}
}
Any suggestion, idea is welcome
Thanks
EasyMock has "andDelegateTo", which can be used for what you would like to do. It allows you to get some local code called when the mock is called. I use the AtomicReference to get the person object from the anonymous inner class.
@Test
public void TestCase1() {
ProcessPerson process = new ProcessPerson();
//Create dependent DAO mock object
PersonDao personDaoMock = createMock(PersonDao.class);
//Inject mock object
ReflectionTestUtils.setField(process, "personDao", personDaoMock);
final AtomicReference<Person> ref = new AtomicReference<Person>();
personDaoMock.create(anyPerson());
//expectLastCall().andDelegateTo(null);
expectLastCall().andDelegateTo(new PersonDao() {
@Override
public void create(Person person1) {
ref.set(person1);
}
});
replayAll();
process.process();
assertNotNull(ref.get());
verifyAll();
}
public static Person anyPerson()
{
reportMatcher(Any.ANY);
return null;
}
See the EasyMock Documentation for some more details on andDelegateTo().
Thanks centic. It works! Just one more thing: if the PersonDao interface has 100 functions, I have to override the other 99 functions with nothing in there. Is there any other way to cover that?
There is a method andStubDelegateTo(). I have not yet used it anywhere and the documentation is not very specific, but I think it can be used to only implement the one method and not the whole interface.
| common-pile/stackexchange_filtered |
Impossible to change gitlab project url
I downloaded the RPM installer for CentOS and installed GitLab.
I also followed this link: How to change URL of a working GitLab install?
I searched for 'how to change gitlab url' and similar... but found old results pointing to /etc/nginx/sites-enabled. There are no directories like that. I changed every URL I found in files, but the project URL is still the old one. Is there any way to change it directly in template files or in config? Thanks in advance
@omeinusch I also seen that...
I just edited this file:
'/opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml' and now it is fine :)
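Note that on omnibus (rpm/deb) installs, files under /opt/gitlab/embedded are regenerated by the package, so an edit there may be lost on the next reconfigure or upgrade. The supported place for the URL on omnibus is /etc/gitlab/gitlab.rb (a sketch; substitute your own hostname):

```
# /etc/gitlab/gitlab.rb
external_url "http://gitlab.example.com"
```

followed by sudo gitlab-ctl reconfigure to apply it.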
| common-pile/stackexchange_filtered |
Access boost ptree across multiple threads
I'd like to make a system that loads options out of an XML file into a ptree and accesses this ptree across multiple threads.
So far, I have made a simple class, accessible to every thread, that contains the methods put(id) and get().
Unfortunately, ptree doesn't appear to be threadsafe, so the program crashes a lot.
Is there a way to make ptree threadsafe? Or is there a better solution altogether?
Why don't you use a mutex in the put/get operations?
OK, I'll have a look at it, thanks
You can use the guardian template structure described in this blog post.
Basically, you will create a guardian<ptree> instead of a plain ptree. A guardian is an opaque structure that holds a mutex alongside its data. The only way to access the data is via a guardian_lock, which locks the mutex.
guardian<ptree> xml;
//thread 1
{
guardian_lock<ptree> lock(xml);
lock->put("a", "b");
}
//thread 2
{
guardian_lock<ptree> lock(xml);
lock->put("c", "d");
}
As you can only access the inner ptree through the lock, and the lock locks the mutex, you will never have race conditions.
| common-pile/stackexchange_filtered |
What parameter types are supported by Google Endpoints in an API method?
I can make String, Date and Long work. If I use byte[] I get an error when I run the endpoints.sh script. I can find nothing in the documentation that lists the supported types, and the errors generated are pretty cryptic. I'd like to get a little bit of binary (image) data into an endpoint method. This is no good:
@ApiMethod(name = "device.bikeImage.set")
public void setDeviceBikeImage(com.google.appengine.api.users.User appEngineUser,
@Named("facebookAccessToken") @Nullable String facebookAccessToken,
@Named("deviceId") String deviceId, @Named("bikeImage") byte[] bikeImage)
throws IOException, OAuthRequestException {
}
What types are supported?
The datatypes supported are described in the docs for endpoints, right here.
The supported parameter types are the following:
java.lang.String
java.lang.Boolean and boolean
java.lang.Integer and int
java.lang.Long and long
java.lang.Float and float
java.lang.Double and double
java.util.Date
com.google.api.server.spi.types.DateAndTime
com.google.api.server.spi.types.SimpleDate
Any enum
Any array or java.util.Collection of a parameter type
Thanks. This is new documentation that didn't exist at the time the question was posted, but I'll accept it as an answer now.
The following article has a list of the value types that are supported (go to the "Properties and Value Types" section):
https://developers.google.com/appengine/docs/java/datastore/entities
When working with Endpoints, you are definitely limited to only those types that can be serialized into JSON.
There is also minimal discussion on serving blobs from Endpoints in these two questions:
How can I upload an thumbnail image (blob) at the same time as an Entity into a datastore in google app engine?
Serving blob from app-engine endpoint
| common-pile/stackexchange_filtered |
User lingering systemd dependency on PostgreSQL
I have a user account on Ubuntu Server 17.10 with user lingering enabled for systemd, which is, in my opinion, a great function. However, it's poorly documented. While I've managed to start up my service as this user manually using systemctl --user ... commands, I have a problem with dependencies.
In my ~/.config/systemd/user/foobar.service I have the dependency set this way:
[Unit]
Description=StackExchange Foo Bar
After=network.target
# Some non-relevant stuff removed
[Install]
WantedBy=default.target
When I put postgresql.service there, it fails, saying it doesn't know that unit. I figured out in other answers that a lingering user systemd doesn't see system-wide services, only generic systemd targets, hence WantedBy should be set to default.target. But with that setting, the service (the real application) itself fails to start on boot, as it cannot connect to the system-wide PostgreSQL [in time?].
So, the real question here is: how to set up all that in lingering mode, to have a working dependency on PostgreSQL at every boot? Any way to set up a dummy postgreql.service that would link to the system one somehow? Are there any other methods? Any hint would be welcome.
EDIT: PostgreSQL is running as a system-wide application, not as a service via a user account.
You define a target that requires PostgreSQL as follows:
$ cat postgresql.target
[Unit]
Description=Emergency Mode with Networking
Requires=network.target postgresql.service
After=postgresql.service
AllowIsolate=yes
This you add to your service:
[Unit]
Description=StackExchange Foo Bar
After=network.target postgresql.target
# Some non-relevant stuff removed
[Install]
WantedBy=default.target
EDIT (After comment):
The above is incorrect, I assumed systemd in user-space could see any target.
After investigation, it appears to only know these targets:
default.target, shutdown.target, sockets.target, timers.target, paths.target, bluetooth.target, printer.target, smartcard.target, sound.target.
This means that you would have to add the postgresql.service to any of the above targets that makes sense, I would add it to sockets.target and have your foobar.service start AFTER default.target, which itself is reached AFTER sockets.target.
To add the postgresql.service to sockets.target, you add sockets.target to WantedBy= of postgresql.service, then update systemd config.
Another option would require netcat: nc -z <host> <port> will return 0 if the port is open. In the script your service runs, just loop while the PostgreSQL port is closed; once the port opens, wait another second or two (or whatever is practical) and then have your service do what it is supposed to.
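A sketch of that wait loop wired into the user unit via ExecStartPre= (5432 is PostgreSQL's default port; the host, port, and ExecStart path are placeholders for your setup, and netcat must be installed):

```
[Service]
# Block until PostgreSQL accepts TCP connections, then start for real.
ExecStartPre=/bin/sh -c 'until nc -z 127.0.0.1 5432; do sleep 1; done'
ExecStart=/usr/bin/foobar
```

systemd runs ExecStartPre to completion before ExecStart; combined with a TimeoutStartSec= setting this avoids hanging forever if PostgreSQL never comes up.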
Good, that leads me somewhere.
@thecarpy could you be so kind and point me to the correct location for the *.target files in the user's directory structure?
Oops... I wasn't clear enough: PostgreSQL is a standard Ubuntu package running from the system's systemd; my service is running from the user's systemd. I'm starting to believe the whole lingering feature has a serious flaw, which effectively makes it more or less useless without escaping to nasty tricks. There should at least be a method to map user services & targets to system services & targets.
i guess postgreql.service should be postgresql.service with an s in front of ql
| common-pile/stackexchange_filtered |
Unix - link my internal server on port 9000 to the web
I am on Unix.
I have an internal server (on the machine) running on port 9000.
I want to access this server via my website, www.example.com.
I would like to keep whatever I already have at www.example.com, but access my internal server via www.example.com/internal.
What do I have to do to make this happen?
You need a reverse proxy in your web server.
For Apache HTTPD (and compatible), I recommend using RewriteRule with the proxy flag:
RewriteRule ^/internal(/.*)?$ http://localhost:9000$1 [P]
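Alternatively, a plain ProxyPass pair does the same without mod_rewrite (a sketch; both directives require mod_proxy and mod_proxy_http to be enabled):

```
ProxyPass        /internal http://localhost:9000
ProxyPassReverse /internal http://localhost:9000
```

ProxyPassReverse rewrites Location headers in redirects coming back from the backend so they point at /internal rather than at port 9000.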
| common-pile/stackexchange_filtered |
JavaEE PaaS for cloud applications
Are there any Java EE platforms for developing cloud-based applications? I am looking for something similar to what the Microsoft Azure platform provides. And importantly, I need a full-stack implementation including EJB.
If you want the full Java EE stack, I would highly recommend going with Amazon EC2: just select one of the many available AMIs best suited to your needs, and you are ready to go and scale.
http://aws.amazon.com/ec2/
But how easy is the configuration part? Does it require a lot of management? Is knowledge of any technologies other than Java required to develop on Amazon? How good is the support for these AMIs?
| common-pile/stackexchange_filtered |
Should you use the Command Pattern for requests involving very little logic?
I am new to design patterns and want to get a better understanding of when to implement the command pattern. The command pattern, as I understand it, is intended to encapsulate a request and the logic needed to fulfill that request in its own object.
It makes sense to create a command for a more complicated request such as generating and saving a PDF report of some database result. For example:
public class PdfExport implements Command {
private MyEntityDao someDao = new MyEntityDaoImpl();
public PdfExport( ... ) {
// Set up command here...
}
@Override
public void execute() {
List<MyEntity> data = someDao.getData();
// Complex logic to create and export PDF...
}
}
But say we just have an extremely simple task such as deleting a single record by its name field. For example:
public class DeleteRecordByName implements Command {
private MyEntityDao someDao = new MyEntityDaoImpl();
String name;
public DeleteRecordByName(String name) {
this.name = name;
}
@Override
public void execute() {
someDao.deleteByName(name);
}
}
As you can see there is really no logic being implemented in the second command. The only reason I could come up with for doing this is that you have a layered architecture and want to keep your DAOs out of your client-side code, or to keep the command history.
Is there any benefit to creating a command for something as simple as deleting a single record?
As a follow-up question, is there a certain amount of logic that needs to be involved before it makes any sense at all to create a command object?
The point of the command pattern is to give all your commands a common interface, which can represent (e.g.) permissions, and to give you the ability to pass commands around your program. The complexity of any single command has no bearing on whether the pattern is useful. IMHO.
Yes, there are numerous reasons for simple Commands.
The GoF book lists five points of applicability starting on page 235.
Use the Command pattern when you want to
parameterize objects by an action to perform... Commands are an object-oriented replacement for callbacks.
specify, queue, and execute requests at different times. A Command object
can have a lifetime independent of the original request.
support undo. The Command's Execute operation can store state for reversing its effects in the command itself.
support logging changes so that they can be reapplied in case of a system
crash.
structure a system around high-level operations built on primitive operations. Such a structure is common in information systems that support transactions.
The book adds more details; and some of these features require expanding the Command interface beyond just an execute() method; but in answer to the OP, note that none of these features is concerned with any particularly complex logic the Command executes.
The quintessential examples of Commands in Java are Runnable and Callable. Consider how often these are implemented with simple and even trivial logic.
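To make that concrete, here is a minimal sketch (the names are illustrative, not from the question) where trivially simple commands still pay off: the invoker queues and counts them through the shared interface without knowing what any of them does, just as an executor handles Runnables:

```java
import java.util.ArrayDeque;
import java.util.Deque;

interface Command {
    void execute();
}

// The invoker: it works for complex PDF exports and one-line deletes
// alike, because it only ever sees the Command interface.
class CommandQueue {
    private final Deque<Command> pending = new ArrayDeque<>();
    private int executed = 0;

    void submit(Command c) { pending.add(c); }

    int runAll() {                  // returns the number of commands run
        while (!pending.isEmpty()) {
            pending.poll().execute();
            executed++;
        }
        return executed;
    }
}
```

Because Command has a single abstract method, each command here can be a lambda; the queueing, logging, and undo features from the GoF list all hang off this uniform handling, regardless of how simple any single command is.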
| common-pile/stackexchange_filtered |
Why does C++ span's C style array constructor need type_identity_t?
The C style array constructor for span is specified as follows
template<size_t N> constexpr span(
type_identity_t<element_type> (&arr)[N]) noexcept;
Why is type_identity_t necessary, instead of just:
template<size_t N> constexpr span(
element_type (&arr)[N]) noexcept;
As was originally defined in this proposal?
As cbhattac's answer explains, the problem was that span's deduction guides picked the wrong overload.
In LWG issue 3369 a fix was developed.
The core problem was that:
template <size_t Size>
requires (Extent == dynamic_extent || Extent == Size)
span(T (&)[Size]) {}
ctor generates an implicit deduction guide, and so does
template <typename T, size_t Extent>
span(T (&)[Extent]) -> span<T, Extent>;
The constructor builds a span with variable length, and the deduction guide builds one with a fixed length.
When passed an array of fixed length, the ideal deduced span should also be of fixed length. But that wasn't happening.
Naively, one would expect an explicit deduction guide to beat ones produced from constructors, but that isn't true -- the constructor here is more constrained due to the requires (Extent == dynamic_extent || Extent == Size) clause. So it beats the deduction guide.
To fix this, type_identity_t<T> was used to block CTAD with this constructor completely. (An alternative that would also have worked was to add a trivial constraint to the deduction guide.)
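The mechanism itself predates C++20 and is easy to reproduce: wrapping a template parameter in a member typedef puts it in a non-deduced context, so it no longer participates in template argument deduction (or, for a constructor, in CTAD). A self-contained sketch with a hand-rolled type_identity:

```cpp
template <class T> struct identity { using type = T; };
template <class T> using identity_t = typename identity<T>::type;

// T appears only inside identity_t, a non-deduced context, so callers
// must spell T out explicitly: wrapped(3) would not compile, while
// wrapped<int>(3) does. span's array constructor uses the same trick
// so that constructor never feeds class template argument deduction.
template <class T>
T wrapped(identity_t<T> v) { return v; }
```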
It was because "span's deduction-guide for built-in arrays did not work" without that change.
Please check this link for more details.
You should put the complete example in the answer, especially since it's a pretty interesting case.
I don't think I can explain it better than what Stephan explained in the link. Do you recommend that I delete my answer and add a comment to my question instead?
why did it not work? and why does it now work?
if you dont know how to say it in your own words you can include a quote from the article in your question.
If you add a description, please poke me so I can delete my answer which elaborates on yours. Thanks!
| common-pile/stackexchange_filtered |
Is there an infinite loop in this happy number solution on LeetCode?
Here is the happy number question on LeetCode.
This is one of the solutions.
It uses Floyd's cycle detection algorithm.
int digitSquareSum(int n) {
int sum = 0, tmp;
while (n) {
tmp = n % 10;
sum += tmp * tmp;
n /= 10;
}
return sum;
}
bool isHappy(int n) {
int slow, fast;
slow = fast = n;
do {
slow = digitSquareSum(slow);
fast = digitSquareSum(fast);
fast = digitSquareSum(fast);
} while(slow != fast);
if (slow == 1) return 1;
else return 0;
}
Is there a chance of an infinite loop?
There would only be an infinite loop if iterating digitSquareSum could grow without bound. But when it is called with an n-digit number, the result is always smaller than 100n, so this does not happen: for n >= 4 the result is always smaller than the number used as input.
All that ignores that integers in most languages cannot be arbitrarily large; you would get an integer overflow if the result could grow mathematically to infinity. The result would then likely be wrong, but there would still not be an infinite loop.
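That bound is easy to check exhaustively over the small range; a sketch (digitSquareSum copied from the question):

```cpp
// Copied from the question. The worst case for an n-digit number is
// 81*n, since each digit contributes at most 9*9; so a 4-digit input
// maps to at most 324, already smaller than any 4-digit number.
int digitSquareSum(int n) {
    int sum = 0, tmp;
    while (n) {
        tmp = n % 10;
        sum += tmp * tmp;
        n /= 10;
    }
    return sum;
}
```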
| common-pile/stackexchange_filtered |
Defining a TableModel removeRow() method
This is my tableModel:
public class d9 extends AbstractTableModel {
ArrayList<String> cols = new ArrayList<>();
ArrayList<ArrayList<String>> data = new ArrayList<>();
public d9() {
...
int c = resultSet.getMetaData().getColumnCount();
while (resultSet.next()) {
ArrayList<String> eachRow = new ArrayList<>();
for (int i = 1; i <= c; i++) {
eachRow.add(resultSet.getString(i));
}
data.add(eachRow);
}
...
}
@Override
public int getRowCount() {
return data.size();
}
@Override
public int getColumnCount() {
return cols.size();
}
@Override
public Object getValueAt(int rowIndex, int columnIndex) {
ArrayList<String> selectedRow = data.get(rowIndex);
return selectedRow.get(columnIndex);
}
@Override
public String getColumnName(int column) {
return cols.get(column);
}
public void removeRow(int rowNumber) {
data.remove(rowNumber);
fireTableRowsDeleted(rowNumber, rowNumber);
}
}
Now, after passing a convertRowIndexToModel row number to the removeRow method,
the row is removed from the table. But after re-running the program, it comes back!
I assume that this is related to your previous question? Where is the data persisted/stored to?
@MadProgrammer data store to ArrayList<ArrayList<String>> data... in the constructor
But when you re-run the program, where is the data coming from, where is it persisted to?
From what I can guess out of your comment, the data is coming from a database. So did you make sure, its deleted there as well as in your table? Else of course it will always comes back, because you load your data from there and the database never knew that this row was deleted.
@MadProgrammer As my table is populated with data and cols, after re-running the program the removed row comes back to the table.
@ymene The data is stored from the database into the data ArrayList. When I delete a row by selecting it, it is removed from the table, but after re-running the program it comes back. I thought it was removed from my data!
So, execute the required delete command on the database when you call removeRow. There is no linkage between the ArrayList and your database, they don't know about each other
@MadProgrammer I added a linkage between the ArrayList and the database, but I didn't include the database-connection code, to keep the question uncluttered.
@Sajjad there are two ways, both very similar; the difference is in network traffic. You can start with ResultSetTableModel / TableFromDatabase (reload the whole table model, an actual snapshot), or start by deleting the row from the database and, if it executed without any exception, then remove it from the table model. The first way is better and easiest, but the network traffic is higher. For the database you could use a Vector (if there is any significant difference between the database structure and the table model structure); there is no performance issue with util.List compared with the arrays implemented in the API.
But the ArrayList has no concept of where the data has come from, it is just a container. When you add or remove elements from this ArrayList, they don't update the database, they don't care, that's you job
hey .. don't ask the exact same question again and again: instead learn enough to understand the answers!
@kleopatra I got answer...
When you call removeRow you need to try and remove the row from the database.
Now because I have no idea what the structure of your database is, you will need to fill in the details, but this a simple outline of what you need to do
public void removeRow(int rowNumber) throws SQLException {
Connection con = ...;
PreparedStatement ps = null;
String keyValue = ...; // Get key value from the ArrayList
try {
ps = con.prepareStatement("DELETE from youDatabaseTabe where key=?");
ps.setObject(1, keyValue);
if (ps.executeUpdate() == 1) {
data.remove(rowNumber);
fireTableRowsDeleted(rowNumber, rowNumber);
} else {
throw new SQLException("Failed to remove row from database");
}
} finally {
try {
ps.close();
} catch (Exception e) {
}
}
}
You may want to spend some time having a read through JDBC Database Access
+1; some users don't have Java 7, so the explicit catch is required
@mKorbel Yeah, haven't made much time to check out auto closure yet :P
I think there is no need to know the value of each row's ID number; we can delete a row just by knowing its row number.
How? The rows in the database may not be in the same order as what you have on the screen. New data may have been added or removed since you loaded it. There is no way to guarantee that the location of the data matches between the table and the database. That's why we have identifiers in databases :P
My records have two columns, ID and name. Do you mean that "Get key value from the ArrayList" is the ID?
@Sajjad he means that you need to know which value corresponds to the value stored in the DB, and where.
Yes. How you identify the row is implementation specific, details that you haven't provided, so I just filled it in as best as I could
@MadProgrammer So, do you mean that I should delete from the database and from the ArrayList too?
Yes. There is no "magical" link between the ArrayList and the DB; neither knows about the other. That's why you had to populate the ArrayList to begin with. The ArrayList is nothing more than a container of data. If you want to remove the data from the database, then you must do so. Once you're satisfied, you can then update the model (ArrayList) to reflect these changes.
@MadProgrammer So nice. I thought that when I re-run the program, the ArrayList is updated automatically. Correct?
Yes, the ArrayList is reloaded from the contents of the database. The ArrayList itself is not persistent.
There is no "magical" link between the ArrayList and the DB: yeah, there's no magic in the air. It's your job to synchronize both, no magic involved (my comment to the OP's exact same question). But seemingly you have convinced @Sajjad, finally :-)
@kleopatra Yeah, Now understand it
@Sajjad Because you've closed the previous connection and haven't re-opened it...
@kleopatra Where do you think I got the comment from ;)
@MadProgrammer My other problem was that I used the old con and resultSet from the constructor and did not create new ones in the removeRow method. As my table is defined in a different class from the table model class, I wrote my removeRow(int rowNumber, int rowID) with two parameters, to send the rowID parameter from the table.getValueAt(...) method. Correct?
That depends. If the ID is not available to the table model (and it should be), then you will need to provide it, as you need some way to identify the row
@MadProgrammer Please see this: http://stackoverflow.com/questions/18177249/initialize-variable-with-constructor
| common-pile/stackexchange_filtered |
How can I change a custom item in a ListView?
JavaFX - I have a class Record with a method (among other functionality) public String getName(). I have a ListView&lt;Record&gt; listview, and here is the code to display the Record's name:
listview.setCellFactory(new Callback<ListView<Record>, ListCell<Record>>() {
    public ListCell<Record> call(ListView<Record> param) {
        final ListCell<Record> cell = new ListCell<Record>() {
            @Override
            public void updateItem(Record item, boolean empty) {
                super.updateItem(item, empty);
                if (item != null && !empty) {
                    setText(item.getName());
                } else {
                    setText(null); // clear text for empty/recycled cells
                }
            }
        };
        return cell;
    }
});
Unfortunately, if I change the Record's name like this:
Record record = listview.getSelectionModel().getSelectedItem();
record.setName("New Name");
the cell text does not change in listview. What should I do to fix this? Or how can I change the text of the cell differently?
The best (or most JavaFX-ish) way to do this is to have the Record class expose its name as a property and to bind the textProperty of the ListCell to it. Assuming your Record has a method nameProperty() returning an instance of StringProperty, you could replace the line
setText(item.getName());
with
textProperty().bind(item.nameProperty());
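For this to work, Record must expose its name as an observable property. With JavaFX on the classpath you would use javafx.beans.property.SimpleStringProperty for this; the sketch below hand-rolls a minimal equivalent so the idea runs without JavaFX installed. The listener here plays the role of the cell's bound textProperty, and all class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal stand-in for javafx.beans.property.SimpleStringProperty:
// a value plus listeners that are notified on every change.
class SimpleObservableString {
    private String value;
    private final List<Consumer<String>> listeners = new ArrayList<>();

    SimpleObservableString(String initial) { this.value = initial; }

    String get() { return value; }

    void set(String newValue) {
        this.value = newValue;
        for (Consumer<String> l : listeners) l.accept(newValue); // push to bound targets
    }

    // Like a binding: receives the current value immediately, then every update.
    void addListener(Consumer<String> listener) {
        listener.accept(value);
        listeners.add(listener);
    }
}

// Record exposing its name as a property, mirroring the nameProperty()
// convention the answer above assumes.
class Record {
    private final SimpleObservableString name;

    Record(String initial) { this.name = new SimpleObservableString(initial); }

    String getName() { return name.get(); }
    void setName(String value) { name.set(value); } // every bound cell sees the change
    SimpleObservableString nameProperty() { return name; }
}

public class BindingSketch {
    public static void main(String[] args) {
        Record record = new Record("Old Name");
        // Stand-in for textProperty().bind(item.nameProperty()) inside the cell:
        StringBuilder cellText = new StringBuilder();
        record.nameProperty().addListener(v -> { cellText.setLength(0); cellText.append(v); });

        record.setName("New Name"); // the "cell" updates without touching the ListView
        System.out.println(cellText); // New Name
    }
}
```

The design point is the same as in real JavaFX: once the cell's text is bound to the property, calling setName on the selected Record updates the cell automatically, with no need to refresh the ListView by hand.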
How to append a div element in jQuery only if it doesn't exist
I am adding a form to my current list in a div box at the bottom of the table.
I am appending the div box when someone clicks on the add button.
But when I click the add button multiple times, many div boxes are appended.
Is there any way that, no matter how many times I click the button, only one instance gets appended to the div box?
This is my code
$var = $(this).parent().parent();
$var.append($('.jq_div')[0].outerHTML);
One simple solution would be a boolean flag that you toggle once your button is clicked. Additionally, there is actually a jQuery function that provides this exact functionality.
It's called one() -
Attach a handler to an event for the elements. The handler is executed
at most once per element.
So your code would look something like this -
$("#someTrigger").one('click', function(){
$var = $(this).parent().parent();
$var.append($('.jq_div')[0].outerHTML);
});
The boolean method is also very simple -
var wasClicked = false;
$("#someTrigger").on('click', function(){
if (wasClicked == false){
// append your form
wasClicked = true;
}
});
Reference -
one()
attach your listener using .one().
$("button").one('click', function(){
// Your code
});
Read more: http://api.jquery.com/one
This is under the assumption that you're using jQuery 1.7+
If I replace on with one, can it dynamically get attached to new DOM elements?
@user1721949 - yes. but you would know this if you read the documentation at the link provided ;-)
I tried this function but it's not working properly. The first time it appends the div. The next time it doesn't append the div, but sends the user to the href link on the next page, rather than doing nothing.
Sounds like your add button isn't a button at all. You'll have to just attach another handler using .on() that prevents the default action.
Rings whose Frobenius is flat
Let $R$ be a ring of characteristic $p>0$. The (absolute) Frobenius is the map of rings $F_R:R\rightarrow R$ defined by $x\mapsto x^p$.
I am interested in rings for which $F_R$ is flat (hence faithfully flat). Here are some families of examples of rings $R$ with this property.
-Regular rings (Kunz)
-Perfect rings (rings for which $F_R$ is an isomorphism)
-Valuation rings (see Theorem 3.1 of "Frobenius and valuation rings" -Datta, Smith)
In fact, Kunz famously showed that if $R$ is noetherian, then $F_R$ is flat if and only if $R$ is regular. Note that rings of the latter two types are rarely noetherian. Furthermore, it is not hard to see that the above list is by no means exhaustive.
I would like to know what is known about the class of rings with $F_R$ flat in general (without noetherian hypothesis). Specifically, is there is a non-tautological characterization of the class of (not necessarily noetherian) rings such that $F_R$ is flat (ala Kunz)? Or perhaps some characterization among a large class of rings properly containing the class of noetherian rings?
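To make the first family concrete in the simplest case (a sanity check, not part of the question): assume $k$ is a perfect field of characteristic $p$ and take the regular ring $R = k[x]$. Via $F_R$, the target copy of $R$ is a module over $R^p = k[x^p]$ (using $k^p = k$), and

$$R = \bigoplus_{i=0}^{p-1} k[x^p]\, x^i,$$

since every monomial $x^n$ factors uniquely as $x^{p\lfloor n/p\rfloor} \cdot x^{n \bmod p}$. So $R$ is free, hence flat, over itself via $F_R$, as Kunz's theorem predicts for regular rings.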
This is probably a question better suited for MO. Any questions involving removal of Noetherian hypotheses are pretty niche, and are much more likely to receive traction at MO.
I agree with Alex Youcis; Datta-Smith at the very least is maaaybe 5 years old. This lies firmly in the realm of research.
Excel VBA sort string macro
I have a word in each row that I want to sort. It does not matter which of the (equal) words comes first, as long as they end up sorted. Below is an example of my Excel sheet, where I have replaced each word with a single letter instead. Is it possible to write a macro to help me sort?
Thanks in advance!
Column 1
x
y
x
z
x
y
Look here: http://stackoverflow.com/q/152319/382588
Start recording a macro, do your sorting with Excel (Data -> Sort), and when you're done stop the macro. Finally, open the VBA editor (ALT+F11) and look at the generated code.
Use the line below (it sorts column A, treating row 1 as a header; change xlDescending to xlAscending if you want A-to-Z order):
' Sorts column A in place, with row 1 ("Column 1") treated as a header
Range("A1").Sort Key1:=Range("A1"), Order1:=xlDescending, Header:=xlYes