How to use substring to pull the date from these numbers
how to pull the date (20060807) of these numbers
1.2.840.1137<IP_ADDRESS>.8696.41870.20060807.69548508
1.2.840.1137<IP_ADDRESS>.JDI.65.1.2002816.205431857
1.2.840.1137<IP_ADDRESS>.JDI.06.8.2002816.19213160
1.2.840.1137<IP_ADDRESS>.2360.28594.20030826.80612275
1.2.840.1137<IP_ADDRESS>.JDI.35.26.2002816.207943
Are the date formats consistent?
no, all these numbers are stored as varchar. these two have diff formats (2002816, 20030826)
basically i want to pull the information before the last dot
What about regex
Let me assume that the date formats are consistent. If so, you can do:
select substring(col, len(col) - charindex('.', reverse(col)) - 7, 8)
Because the date formats are not consistent, you might end up with an extra '.' at the end. So, get rid of it using replace():
select replace(substring(col, len(col) - charindex('.', reverse(col)) - 7, 8), '.', '')
Here is a SQL Fiddle.
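Not from the thread: a quick Python sketch of the same "grab the segment before the last dot" logic, handy for sanity-checking the expected outputs. The sample strings below are made up to mirror the shape of the question's data:

```python
def date_segment(value: str) -> str:
    """Return up to 8 characters ending just before the last dot,
    stripping any stray '.' picked up when the segment is shorter
    than 8 digits (the inconsistent-format case from the question)."""
    last_dot = value.rfind(".")
    return value[max(0, last_dot - 8):last_dot].replace(".", "")

print(date_segment("1.2.840.1137.8696.41870.20060807.69548508"))  # 20060807
print(date_segment("1.2.840.1137.JDI.65.1.2002816.205431857"))    # 2002816
```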
| common-pile/stackexchange_filtered |
Why do traits in Rust require that no method have any type arguments to be object safe?
Is this requirement really necessary for object safety or is it just an arbitrary limitation, enacted to make the compiler implementation simpler?
A method with type arguments is just a template for constructing multiple distinct methods with concrete types. Which variants of the method are used is known at compile time. Therefore, in the context of a program, a generic method has the semantics of a finite collection of non-generic methods.
I would like to see if there are any mistakes in this reasoning.
"It is known at compile time, which variants of the method are used" - why do you think so? These variants might come from dependencies, at least.
A small explanation is provided in the book on object safety: "The same is true of generic type parameters that are filled in with concrete type parameters when the trait is used: the concrete types become part of the type that implements the trait. When the type is forgotten through the use of a trait object, there is no way to know what types to fill in the generic type parameters with."
I will take this opportunity to present withoutboats' nomenclature of Handshake patterns, a set of ideas for reasoning about the decomposition of a functionality into two interconnected traits:
you want any type which implements trait Alpha to be composable with any type which implements trait Omega…
The example given is for serialization (although other use cases apply): a trait Serialize for types the values of which can be serialized (e.g. a data record type); and Serializer for types implementing a serialization format (e.g. a JSON serializer).
When the types of both can be statically inferred, designing the traits with the static handshake is ideal. The compiler will create only the necessary functions monomorphized against the types S needed by the program, while also providing the most room for optimizations.
trait Serialize {
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error>
where S: Serializer;
}
trait Serializer {
//...
fn serialize_map_value<S>(&mut self, state: &mut Self::MapState, value: &S)
-> Result<(), Self::Error>
where S: Serialize;
fn serialize_seq_elt<S>(&mut self, state: &mut Self::SeqState, elt: &S)
-> Result<(), Self::Error>
where S: Serialize;
//...
}
However, it is established that these traits cannot be used for dynamic dispatch. Once the concrete type is erased, the trait object is bound to a fixed virtual table for its trait implementation, one entry per method. With this design, the compiler cannot produce such an entry for a method containing type parameters, because it cannot monomorphize that method at compile time.
A method with type arguments is just a template for constructing multiple distinct methods with concrete types. Which variants of the method are used is known at compile time. Therefore, in the context of a program, a generic method has the semantics of a finite collection of non-generic methods.
One may be led to think that all trait implementations available are known, and therefore one could revamp the concept of a trait object to create a virtual table with multiple "layers" for a generic method, thus being able to do a form of one-sided monomorphization of that trait object. However, this does not account for two things:
The number of implementations can be huge. Just look, for example, at how many types implement Read and Write in the standard library. The number of monomorphized implementations that would have to be present in the binary would be the product of all known implementations against the known parameter types of a given call. In the example above, it is particularly unwieldy: serializing dynamic data records to JSON and TOML would mean that there would have to be Serialize.serialize method implementations for both JSON and TOML, for each serializable type, regardless of how many of these types are effectively serialized in practice. And this is without accounting for the other side of the handshake.
This expansion can only be done when all possible implementations are known at compile time, which is not necessarily the case. While not entirely common, it is currently possible for a trait object to be created from a dynamically linked shared object. In this case, there is never a chance to expand the method calls of that trait object against the target compilation item. With this in mind, the virtual function table created by a trait implementation is expected to be independent from the existence of other types and from how it is used.
To conclude: This is a conceptual limitation that actually makes sense when digging deeper. It is definitely not arbitrary or applied lightly. Generic method calls on trait objects are unlikely to ever be supported, so consumers should instead rely on employing the right interface design for the task. Thinking of handshake patterns is one possible way to mind-map these designs.
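As an illustration of such a design, here is a minimal sketch of the type-erasure workaround popularized by the erased-serde crate. The trait and method names below are invented for this example, not the crate's real API: the generic parameter is replaced by a `&mut dyn` reference, so each trait method occupies exactly one vtable slot and both traits become object safe.

```rust
// Object-safe variant of the handshake: no method has type parameters.
trait Serializer {
    fn write_str(&mut self, s: &str);
}

trait Serialize {
    // Takes `&mut dyn Serializer` instead of a generic `S: Serializer`,
    // so this method occupies a single, fixed vtable slot.
    fn erased_serialize(&self, serializer: &mut dyn Serializer);
}

struct JsonSerializer {
    out: String,
}

impl Serializer for JsonSerializer {
    fn write_str(&mut self, s: &str) {
        self.out.push('"');
        self.out.push_str(s);
        self.out.push('"');
    }
}

impl Serialize for String {
    fn erased_serialize(&self, serializer: &mut dyn Serializer) {
        serializer.write_str(self);
    }
}

fn main() {
    // Both sides of the handshake are now trait objects; nothing here
    // forces the compiler to enumerate implementations.
    let values: Vec<Box<dyn Serialize>> =
        vec![Box::new(String::from("hello")), Box::new(String::from("world"))];
    let mut ser = JsonSerializer { out: String::new() };
    for v in &values {
        v.erased_serialize(&mut ser);
    }
    println!("{}", ser.out);
}
```

The real erased-serde crate additionally bridges these erased traits back to the generic ones via blanket impls, so the same type can be used in both static and dynamic handshakes.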
See also:
What is the cited problem with using generic type parameters in trait objects?
The Rust Programming Language, section 17.2: Object Safety Is Required for Trait Objects
Orderby = none not working
I'm trying to set 'orderby = none' to my loop but it is not working. Here's my code:
$query = new WP_Query(array('showposts'=>2, 'post__in' => array(99,4,5,2,8,55), 'orderby'=>'none'));
Could anyone help me?
Thanks.
What exactly is not working ? is it not rendering posts, not displaying or throwing some error ?
it's sorting the posts and I need the posts ordered like 'post__in' order like this: 99,4,5,2,8,55.
I think there's similar question which has been already solved - http://wordpress.stackexchange.com/q/11055/17968
Order by none doesn't do what you think it does. If you don't specify an order, then MySQL doesn't guarantee any particular order and will simply get the posts in whatever order it has them. Since they were inserted in a particular order, then you'll probably get them in that order.
Since you need the posts in a specific order (in your case, by post ID: 99,4,5,2,8,55), MySQL can't do that without a far more complex ORDER BY clause. Specifically, you'd have to use this MySQL syntax:
ORDER BY FIELD($wpdb->posts.ID,99,4,5,2,8,55);
...and that's not particularly easy to do with the WordPress query engine.
You could use the posts_orderby filter to do something like this, with code like this:
function reorder_posts($orderby) {
    global $wpdb;
    // Return just the expression; WP_Query prepends "ORDER BY" itself.
    return "FIELD({$wpdb->posts}.ID,99,4,5,2,8,55)";
}
add_filter('posts_orderby', 'reorder_posts');
You'd want to add the filter before your query, then remove it after, so as not to change anything else.
But this is a hard-coded solution, and you'd probably be better off finding a more generic approach than fiddling with the queries. Storing your order in metadata and then selecting based on that and ordering based on meta_value_num is possible with the WP_Query system.
Edit: Note, this just got pushed into WordPress 3.5, so when that comes out, you'll be able to use 'orderby'=>'post__in' (that's a double underscore there) and it will use the value from the post__in for the ordering.
Ticket here: http://core.trac.wordpress.org/ticket/13729
Patch here: http://core.trac.wordpress.org/changeset/21776
See the accepted Order & Orderby parameters for WP_Query. Only ASC and DESC are allowed for order.
$query = new WP_Query(array('showposts'=>2, 'post__in' => array(99,4,5,2,8,55), 'orderby'=>'none'));
Use 'orderby'=>'none' not 'order'=>'none'
I'm already using 'orderby', it was just an error while typing
And it doesn't work either.
Will you update your question with the correct code? (Please copy-paste your exact code.)
How can it change the URL without reloading the page? How can I create an event when the URL changes?
first, please check this url: Shopify variant options
and when you change the variant dropdown option, the current url will be changed with different variant value.
I tried the onpopstate event and the location hash (hashchange) event, but neither works. As far as I know they only work for URL formats like /#!/, but the Shopify link above uses "?".
Does anyone know how it can change the URL without reloading the page, and how I can create an event when the URL changes?
Possible duplicate-- does this post answer your question? How do I modify the URL without reloading the page?
thank you , but the link is telling me how I can modify url, but I do not know how I can get a callback event when url changed
You don't need a callback event when the URL changes. If the URL is changed by the user, the web page will reload; if the URL is changed by you programmatically, you just call the action you want after you change the URL.
For sure, the web page will NOT be reloaded, and you also cannot get the current URL via window.location.href, which returns a previous (history) URL. I wish I could get the current URL, not the previous/history one.
If you change the URL programmatically, you already know the URL in the first place.
You are like saying "I want to do let a = 1 and want to listen to the event when a is changed and get the new value of a". This is totally redundant because you yourselves assigned 1 to a so sure you know a is changed to 1.
This website is not programmed by me, and they load too many JS files, so it is difficult to check them one by one. If I can directly create an event, that would be the best way; I also want to study this part.
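Not from the thread: one common way to get such a callback is to wrap history.pushState so it emits a notification after every programmatic URL change. A hedged sketch, since the real page may use pushState or replaceState:

```javascript
// Wrap pushState so every programmatic URL change triggers a callback.
// (A real page would wrap history.replaceState the same way, and also
// listen for the 'popstate' event to catch back/forward navigation.)
function watchUrlChanges(history, onChange) {
  const origPush = history.pushState;
  history.pushState = function (...args) {
    const result = origPush.apply(this, args);
    onChange(args[2]); // the third pushState argument is the new URL
    return result;
  };
}
```

In a browser you would call `watchUrlChanges(window.history, url => ...)`; inside the callback, `window.location.href` already reflects the new URL because the original pushState has run first.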
Unable to launch bluetooth chat sample app from developer.android.com on emulator
I have built a project from the existing samples of bluetooth chat for android 2.3
I am aware that the emulator does not support bluetooth.
Earlier I was able to run the app on the emulator. The functions didn't work, but I was at least able to see that it was running. I could see the user interface and all.
But all of a sudden, today when I was trying the same app , I get an error saying bluetooth is not available and it quits.
I don't know if any settings have changed in eclipse by mistake, but can somebody help me to make it work somehow.
so, did my answer help? I'm very sure it's correct as I've spent a lot of time with the bluetoothchat app.
you must have been using an earlier SDK version of the app that didn't check for absence of bluetooth support in the system....
I'm running the 2.1 SDK version of the demo, and that has the check in there.
you have two options:
1) go and find the same demonstration code you used before for an earlier SDK,
2) go into the code (bluetoothChat class, onCreate() method) and comment out this code snippet:
// Get local Bluetooth adapter
mBluetoothAdapter = BluetoothAdapter.getDefaultAdapter();
if (mBluetoothAdapter == null) {
Toast.makeText(this, "Bluetooth is not available", Toast.LENGTH_LONG).show();
finish();
return;
}
actually - if you just comment out the "finish();" line that should allow the app to keep running, while still showing the warning.
IOS: Scaling Images while Maintaining a Fixed Point
I am trying to draw an image such that point (X0, Y0) in the image is at a fixed point (usually center of the screen). I am using UIImage drawImageAtPoint (I have to use rotated and multiple images that cannot be scaled to rectangle) on a UIView.
This means that the upper left corner (X1, Y1) usually is not (0,0).
Now I want to scale the image (CGContextScaleCTM by S) but keep the image point (X0, Y0) in the same location on the screen.
If I scale the image by S, how much do I need to change (X1, Y1) to keep (X0, Y0) in the same location on the screen?
I have tried a number of transforms based upon other systems I have used but am not getting anything to work.
Does anyone know what this transform should be?
CALayer has an anchorPoint property (amongst many other things) that may be of some use. Check it out! https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/CALayer_class/Introduction/Introduction.html Although I should note that UIImageView's have a CALayer, while UIImage's do not, so you'd be manipulating the anchorPoint of your UIImageView, which may or may not be what you want.
You can do the transform and scaling as usually and then do this:
self.scaledImageView.center = self.view.center; //if you want the image in the center of the view
In the same way you can add its center to the center of any view.
Probably zooming with the help of UIScrollView and gestures would help you. Have a look at this tutorial, where guidance has been provided to overcome this issue.
Hope it helps!
Remove Rows Not Part of Filter
I am filtering a large Excel spreadsheet. I want to remove all rows on the sheet that are not a part of the filter. I know I can copy the rows and paste them into another spreadsheet. But I would rather apply the change directly to the same spreadsheet.
How do I accomplish this?
While it is filtered, stick an arbitrary value in a helper column and copy down to all visible rows. Unfilter and then filter on the helper column, selecting the blanks. Delete all of the newly filtered rows. Then remove the filter and delete the helper column.
@fixer1234 I think this qualifies as an answer (not a comment). Upvotes will be interpreted as "good answer"! instead of "good comment!"...
possible duplicate of how do i delete all non filtered rows in Excel
@agtoever: I hadn't looked for duplicate questions when I posted the comment. I agree with the duplicate you found, which would make this a duplicate of a duplicate. Those are already full of good solutions, including one that sounds similar (but excludes the procedure).
What exactly is going on in Proc::Background?
I am trying to write a script that automates other perl scripts. Essentially, I have a few scripts that rollup data for me and need to be run weekly. I also have a couple that need to be run on the weekend to check things and email me if there is a problem. I have the email worked out and everything but the automation. Judging by an internet search, it seems as though using Proc::Background is the way to go. I tried writing a very basic script to test it and can't quite figure it out. I am pretty new to Perl and have never automated anything before (other than through windows task scheduler), so I really don't understand what the code is saying.
My code:
use Proc::Background;
$command = "C:/strawberry/runDir/SendMail.pl";
my $proc1 = Proc::Background->new($command);
I receive an error that says no executable program located at C:... Can someone explain to me what exactly the code (Proc::Background) is doing? I will then at least have a better idea of how to accomplish my task and debug in the future. Thanks.
Do you have better luck with \ instead of /? The shell isn't as flexible as the OS itself. (Don't forget to double the \ in the literal: "c:\\st...".)
I ended up getting it to work by throwing 'perl' at the beginning. I'll update the post in a second.
I did notice on Proc::Background's documentation the following:
The Win32::Process module is always used to spawn background processes
on the Win32 platform. This module always takes a single string
argument containing the executable's name and any option arguments.
In addition, it requires that the absolute path to the executable is
also passed to it. If only a single argument is passed to new, then
it is split on whitespace into an array and the first element of the
split array is used at the executable's name. If multiple arguments
are passed to new, then the first element is used as the executable's
name.
So, it looks like it requires an executable, which a Perl script would not be, but "perl.exe" would be.
I typically specify the "perl.exe" in my Windows tasks as well:
C:\dwimperl\perl\bin\perl.exe "C:\Dropbox\Programming\Perl\mccabe.pl"
Thanks. I got the command to work by throwing "perl" in front of it so it looks like this: '$command = "perl C:/strawberry/runDir/cmd.pl"'
Finish the beautiful, ten story tower
A mathematician has commissioned you to build a beautiful tower, ten stories tall, and it's almost complete!
Here are the number of bricks in each level, and the layout of the steel girders.
Exactly how many bricks do the incomplete levels require and why?
*Note: Replace each '?' with a single digit.
Bonus Hidden Beauty Challenge (optional):
Without changing any integer values or the floor they are on, turn the tower into gold (9). (multiple essentially equivalent solutions).
Text version:
30
200003000
420000
1100
?1003010
300002
1000
??000000
100000
30
Hint:
No complicated math required. Look for a simple, logical, beautiful solution. The girders are a hint.
Hint 2:
This is not a sequence or cipher. @jlee found one thing. Where are the rest?
Are rot13(gur pbybef bs gur obeqref) significant?
@EdMurphy Yes, I updated the text to that effect.
Maybe I'm odd, but this is in my top 3 favorite puzzles I've made so far, even though it's a simple concept. I hope everyone at least gives it a try. Look for an obviously correct answer that explains all aspects of the puzzle.
Something just clicked. But i need to figure out one more thing!
Does the gray vs black girder color have any significance? Or is that just an image artifact?
@jlee Yes, there are 2 colors to differentiate them. Any other differences or nuances of the girders is irrelevant. The color choice is irrelevant.
The missing levels require ...
... 41 003 010 and 40 000 000 bricks.
With these numbers, the total sum of all bricks in the tower is 281 828 172, which are the first nine decimal digits of Euler's number e = 2.718281828459..., read from right to left.
The sum of the digits on each level are the digits of π = 3.141592653..., read from bottom to top. (I hadn't realised this until JLee pointed it out.)
{Op edit: This is equivalent to right-aligning the numbers and then summing the columns from right to left to construct a number, and summing the rows from bottom to top to construct another number. (This insight is important for the later bonus challenge.) Also note that finding one number without the other does not lead to a unique solution.}
Here's the complete tower with the zeros left out for clarity:
· · · · · · · 3 · 3
2 · · · · 3 · · · 5
· · · 4 2 · · · · 6
· · · · · 1 1 · · 2
· 4 1 · · 3 · 1 · 9
· · · 3 · · · · 2 5
· · · · · 1 · · · 1
· 4 · · · · · · · 4
· · · 1 · · · · · 1
· · · · · · · 3 · 3
2 8 1 8 2 8 1 7 2
That was hinted at ...
... by the fact that a mathematician had commissioned the tower. Leonhard Euler was a famous mathematician. And Euler's formula, which uses both e and π is considered to be very beautiful by some, also in our circles.
The grey girders, when flipped horizontally, look like an e, so the digits of e must be read right to left.. When flipped vertically, they look like a π, so π must be read from bottom to top.
Bonus challenge: A golden tower ...
... is related to the first nine digits of the golden ratio φ = 1.61803398874..., so we must arrange the existing digits horizontally to get 161 803 398, whose digit sum is also 39. But I don't see how.
Op Edit: (self-completing as the difficulty level of the bonus challenge was beyond the scope of the main puzzle.)
We have seen that if we align the digits to the right:
The sums of the columns from right to left form the beautiful number e.
The sums of the rows from bottom to top form the beautiful number pi.
So how can we form a third beautiful number without changing any integer values or the floor they are on?
Well, we were asked to make a golden tower, not a nedlog one, so let's align the digits to the left and sum left to right!
But we face a problem! The left digits sum to 26, and we are looking for the first 9 digits of phi, so it should begin with 1 or 16.
The secret here is to notice that we can left-pad zeros to the numbers; this will change their left alignment but not their integer values!
(Or you could just stagger them with whitespace, for the same result)...
000000030 | 3
<PHONE_NUMBER> | 5
<PHONE_NUMBER>000 | 6
<PHONE_NUMBER>0 | 2
041003010 | 9
000300002 | 5
0001000 | 1
<PHONE_NUMBER>0 | 4
100000 | 1
<PHONE_NUMBER> | 3
_____________
<PHONE_NUMBER>000
Note that due to similar rows with a sole 1 or 3, a few trivial variants are possible by swapping similar rows, but they are essentially equivalent constructions)
The rows still add up to pi, the right-aligned digits still add up to e, and now the left-aligned rows add up to
<PHONE_NUMBER>000, which if we insert the required decimal, is 1.618033980000, which is mathematically equal to 1.61803398, the first nine digits of the golden ratio, as desired.
Thus our golden tower, concealing all three beautiful numbers, is complete!
@MOehm In the hope that you don't mind, I tweaked the puzzle to eliminate the potential ambiguity. It should not affect your corrected answer; just needed to factor in the girder orientation.
Also I just realized what you did by adding a 'depth dimension'. That's very clever... and cheating : ) (and far too easy). The bonus should be solved in 2D like the others.
Can you rot13(ercynpr tbyq jvgu cynfgvp? Hasbeghangryl, fvyire qbrfa'g jbex, nf gur qvtvg fhz whzcf sebz 33 gb 40.)
@Amoz: Ah, I see what you mean and now I understand why the e is flipped. Yes, my idea was to reposition the digits for the gold tower in the third dimension. Aren't we constantly told that we should broaden our view when we can't solve a problem? :) And of course I don't mind that you made a slight modification to your puzzle.
rot13(Cuv vf gur pbeerpg inyhr, ohg sbe gur obahf V jnf ybbxvat sbe na ryrtnag fbyhgvba va gur fnzr fcvevg nf gur pheerag qrfvta (ahzoref ner rvgure ubevmbagny be iregvpny-ab 3Q be qvntbanyf, rgp.). Fvapr gur sybbef ner abg crezvggrq gb punatr, vg fubhyq or pyrne gung gur gnfx vf gb yrnir cv nybar, naq "ghea r vagb tbyq".) @EdMurphy
@MOehm Excellent, now you stand a fighting chance at the bonus. If you prefer to skip it just let me know and I can close this out with a green check!
I'm going to try my hand at the bonus, but I don't think I can do it without either ignoring the zeros or pushing them outside the tower ("golden Jenga tower"). If I don't I've got at least a 2 on either end, but I need a 1.
If you don't want to waste the bonus puzzle, you could also post it as a new question. I'm stuck on the bonus part, but I can see why you think of this puzzle as one of your favourites.
Angular 17 Named Router Outlet within Layout Component
My Application has some generic pages like the landing or logout page which are navigatable when the user is not logged in. Those shall be rendered normally within the primary router-outlet.
Then I have Pages that are for logged-in users as the core of the application state and those pages shall be rendered within a general layout component that contains the navigation, footer, header etc.
I am having trouble rendering those children within a named router-outlet that I expect to be inside my layout component.
app.routes.ts
export const routes: Routes = [
{ path: '', redirectTo: 'landing', pathMatch: 'full' },
{
path: 'landing',
component: LandingPageComponent
},
{
path: 'intern',
component: NavigationComponent,
children: [
{
path: 'enterprise',
component: OverviewComponent,
},
{ path: '', redirectTo: 'enterprise', pathMatch: 'full' },
]
},
];
navigation.component.html
<header></header>
<router-outlet name="intern"></router-outlet>
<footer></footer>
The Landing and Navigation components are rendered as expected, but the content of the pages that I want inside the navigation component, in the named router-outlet "intern", is not there. Sidenote: as I understood it, as long as the child route and the named router-outlet share the same name (here 'intern'), I do not need to define the "outlet: 'intern'" property in app.routes.ts.
For your requirement you do not need named router outlets. You need them only when you want to render components in an outlet other than the primary router-outlet, which is not the case here, so you can achieve this with normal routing itself. Please find below a working example where it works fine!
When I removed the name property of router-outlet then it started working!
angular documentation for named outlets
import { Component } from '@angular/core';
import { bootstrapApplication } from '@angular/platform-browser';
import { LandingPageComponent } from './landing-page/landing-page.component';
import { NavigationComponent } from './navigation/navigation.component';
import { OverviewComponent } from './overview/overview.component';
import { Routes, provideRouter, RouterModule } from '@angular/router';
import 'zone.js';
export const routes: Routes = [
{ path: '', redirectTo: 'landing', pathMatch: 'full' },
{
path: 'landing',
component: LandingPageComponent,
},
{
path: 'intern',
component: NavigationComponent,
children: [
{
path: 'enterprise',
component: OverviewComponent,
},
{ path: '', redirectTo: 'enterprise', pathMatch: 'full' },
],
},
];
@Component({
selector: 'app-root',
standalone: true,
imports: [RouterModule],
template: `
<router-outlet></router-outlet>
`,
})
export class App {
name = 'Angular';
}
bootstrapApplication(App, {
providers: [provideRouter(routes)],
});
stackblitz
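For completeness: if you did want to keep the named outlet, the child route would have to declare which outlet it targets, and navigation would have to address that outlet explicitly. A hedged configuration sketch, not part of the answer above:

```typescript
// Child routes targeting a named outlet must set the `outlet` property
// to match <router-outlet name="intern"> in the template:
export const routes: Routes = [
  {
    path: 'intern',
    component: NavigationComponent,
    children: [
      { path: 'enterprise', component: OverviewComponent, outlet: 'intern' },
    ],
  },
];

// Navigation then has to name the outlet as well, e.g.:
// this.router.navigate(['/intern', { outlets: { intern: ['enterprise'] } }]);
```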
This actually solves my Problem. But this also means that I have a primary router-outlet in app.component and a second "primary" router-outlet in my navigation.component.html
I feel like this is not the right way to do it, even though it does indeed work fine like that. Thanks for the idea.
@SeverinKlug an unnamed <router-outlet></router-outlet> is called the primary outlet; if it has a name then it's a named router outlet. When you want to render nested components, you need multiple primary outlets, but note they are not at the same router level: each array in your routing maps to a router-outlet, and when one level is satisfied, the inner children are considered for the next inner router-outlet!
Determinant of matrix is a polynomial with unit coefficients
Let $M$ be a $n\times n$ matrix with entries $(m_{ij})$. The determinant of $M$ is a polynomial in $m_{11},\dots,m_{nn}$. Are the coefficients of this polynomial all either $1$ or $-1$?
If $M=(m_{ij})$ then
$$\det M=\sum_{\sigma\in S_n}\epsilon(\sigma)\prod_{k=1}^n m_{k\sigma(k)}$$
and since $\epsilon(\sigma)=\pm 1$, yes: the determinant is a polynomial in the $m_{ij}$ whose coefficients are all either $1$ or $-1$.
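For example, for $n=2$ the sum has one term per permutation of $\{1,2\}$:
$$\det\begin{pmatrix} m_{11} & m_{12}\\ m_{21} & m_{22}\end{pmatrix} = m_{11}m_{22} - m_{12}m_{21}.$$
Note that distinct permutations yield distinct monomials, so no two terms ever combine or cancel; this is why every coefficient stays exactly $\pm 1$ rather than some other integer.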
Instagram icon is not showing in document sharing window
I am using Document Interaction API to share my photo on Instagram.
I am using "igo" as my filename extension and Identifier for Document Interaction UTI is com.instagram.exclusivegram.
Same code was working for iOS12.4
I've got the same problem. As for me, it looks like a bug in the Instagram app. Their documentation still claims the app supports com.instagram.exclusivegram UTI, but apparently it only works with com.instagram.photo on iOS 13.
UPD: as of October 2019 and Instagram v117.0, the UTI doesn't make any difference. What is important that you shouldn't use the ".igo" extension but use ".ig" instead. This happens on iOS 13 only.
Thanks @Mikalai,
"com.instagram.photo" UTI was working fine for 13.0, But now neither UTIs are working for iOS 13.1.3.
Instagram and "Copy Instagram" is present in more option and both have different behavior.
$A = \bigcap_{\mathfrak{p} \in \text{Spec(A)}} A_{\mathfrak{p}} = \bigcap_{\mathfrak{m} \in \text{MaxSpec(A)}} A_{\mathfrak{m}}$
I'm doing this exercise.
Let $A$ be an integral domain, then prove that $$A = \bigcap_{\mathfrak{p} \in \text{Spec(A)}} A_{\mathfrak{p}} = \bigcap_{\mathfrak{m} \in \text{MaxSpec(A)}} A_{\mathfrak{m}}$$ where the intersection is taken in the quotient field of $A$ and $A_{\mathfrak{p}}$ is the localization of $A$ at the prime ideal $\mathfrak{p}$, similarly for $A_{\mathfrak{m}}$ .
What I have done: we have $$A \subseteq \bigcap_{\mathfrak{p} \in \text{Spec(A)}} A_{\mathfrak{p}} \subseteq \bigcap_{\mathfrak{m} \in \text{MaxSpec(A)}} A_{\mathfrak{m}}$$ so it suffices to prove that $$\bigcap_{\mathfrak{m} \in \text{MaxSpec(A)}} A_{\mathfrak{m}} \subseteq A$$
Suppose $\frac{a}{b} \in \bigcap_{\mathfrak{m} \in \text{MaxSpec(A)}} A_{\mathfrak{m}}$ . I want to show that $b$ is invertible.
If $b$ is not invertible then there exists $ \mathfrak{m} \in \text{MaxSpec(A)} $ such that $b \in \mathfrak{m}$. By hypothesis $$\frac{a}{b} = \frac{r}{s}$$ with $s \not\in \mathfrak{m} $.
$A$ is a domain, so $sa = rb \in \mathfrak{m}$ and thus $a \in \mathfrak{m}$. Then I don't know how to proceed. Any hint?
@hardmath: you are right
Hint: maximal ideals serve as witnesses to proper denominators - see my answer.
Related: http://math.stackexchange.com/questions/630752
Just in case any future passer-by is as confused as I was, "quotient field" is intended to mean field of fractions in the statement of the exercise.
Let $x$ be contained in the intersection and consider the ideal $I:=\{a \in A : ax \in A\}$. If $\mathfrak{m}$ is any maximal ideal, then there is some $b \in A \setminus \mathfrak{m}$ such that $bx \in A$ (since $x \in A_\mathfrak{m}$). This shows $I \not\subseteq \mathfrak{m}$. Hence, $I=A$, i.e. $x \in A$.
Key Idea the set of maximal ideals contain enough witnesses to show that every proper fraction has nontrivial denominator (ideal), $ $ i.e. if a fraction $\,f\not\in A\,$ then its denominator ideal $\, {\cal D}_f = \{ d\in A\ :\ d\:\!f\in A\} \ne (1),\:$ so $\, \cal D_f\,$ is contained in some maximal ideal $\,M,\,$ so $\,f\not\in A_M,\,$ so $ \,f\not\in \bigcap A_{M}$ over all max $M$.
i.e. a fraction is proper iff its denominator ideal $I$ is proper iff $P\supset I$ for some max $P$ . Said a bit more vividly: a fraction is proper iff denominator ideal is "divisible by"(contained in) some prime.
How do I get 'api.example.com'
I am using NextJS and my understanding is that both the front-end and backend exist in the same location. For development, this would be both http://localhost:3000/about for any user who wants to visit the about page. However this means that any API routes I have in 'pages/api' will be visible whenever I just add that to my url, displaying JSON.
How is it that some sites are able to keep the same domain but serve their API from api.website.com while all their other stuff is on website.com? That way any queries to the API and server are done via api.website.com, as opposed to revealing anything on the main link.
It's because most websites host their API on a separate backend server using libraries like Express. pages/api is just a Next.js utility that comes under localhost:3000/api/{get-user} (or your deployment URI /api/), and it's mostly used for development/testing, or for production when there is no separate backend server.
Ah makes sense, so if I wanted to achieve something like api.website.com, I would need to have a backend entirely separate from NEXTJS?
No you can deploy to something like Vercel, and then using your DNS provider redirect api.dns to the vercel/api/ deployment.
| common-pile/stackexchange_filtered |
I can show my Folders in Windows8, but I can't in Ubuntu, Why?
I have installed Windows 8, then I installed Ubuntu 12.10; all partitions appear correctly in both Windows 8 and Ubuntu.
I have C, which holds the Windows 8 system, and D & E as extended partitions,
and Ubuntu has two partitions (file-system & swap).
I can see the folders in all partitions, but I can't see the ones in D (in Ubuntu).
When I want the folders in D, I must restart the PC and boot into Windows 8.
Can you tell me why? How can I solve it?
thanks :)
What filesystem is partition D? For NTFS you might want to install https://en.wikipedia.org/wiki/NTFS-3G. NTFS read support should be default, but maybe it didn't install correctly.
Can you please share with us the output of the following commands: sudo fdisk -l sudo mount?
here; http://bit.ly/Zp1FLU
You need to mount the partitions; they might be in the side bar of your file browser, or you may have to do it through the terminal like so:
sudo mkdir /media/mountpoint
sudo mount /dev/sda<partition-number> /media/mountpoint
and when you're done
sudo umount /media/mountpoint
Having some more information from the OP would be nice before telling him that this needs to be done. At the moment from all we know his disk might be on the other side of the world. This might be, or not a solution.
| common-pile/stackexchange_filtered |
Official GCP statement about (non) exposure to Solarwinds Orion vulnerability?
We've tried to find something like an "official" statement from Google about GCP's exposure (or lack thereof) to the recent Solarwinds Orion hack. We're a GCP customer but don't have a support package. Other than purchasing a support package only for the purpose of obtaining a statement (assuming support would provide something), any other channels?
I would be quite surprised if Google used this product anywhere. They have built custom monitoring from the ground up over the years, as nothing else would really work at their scale.
This is what opening a support case is for, getting answers that are not publicly available.
As a guess, unlikely that Google uses Orion, not compatible with Google's inexpensive large scale custom hardware and software stacks.
| common-pile/stackexchange_filtered |
PulseAudio - Speakers crackling in ubuntu minimal install
I have installed Ubuntu minimal. My speaker setup is 5.1. When I use ffplay to play a clip, I get a crackling sound. The sound works perfectly in a full Ubuntu desktop install. But in the minimal install, where I installed ALSA and PulseAudio myself, I get the crackling sound.
Is there any difference in the sound architecture between the Desktop and minimal versions?
Ubuntu Version 16.04
| common-pile/stackexchange_filtered |
Android java set number of decimals
I've a variable price which is defined this way:
double price;
price = 7.6;
Up to here everything is right. The problem is that when I make this:
price = 7.6 * 3;
What i get is
price = 22.799999999999999999999999997
instead of
price = 22.80
which is what i need.
Any ideas about how to solve this?? Thank you.
Use BigDecimal or store your amounts of money in cents and use long.
For reasons read Effective Java Item 48.
In summary:
Don't use double and float when you need exact values. These datatypes cannot exactly represent numbers which are a negative power of ten (e.g. 0.1).
Use BigDecimal instead. For monetary values long or int are more suitable in most cases.
Your example using long:
long price; //store values in euro cents
price = 760 * 3; //price is now 2280 cents
You need to store it in euros, if possible.
Then use BigDecimal or convert your euros to euro cents and use long/int. double and float are for scientific calcs.
@user1098933 BigDecimal.valueOf(price, 2).toPlainString() You may want to have a look at Joda Money lib.
Never use floating point (double or float) to store currency values, as floating point values are by definition approximations. You should either store values in an int/long (e.g. $ cents rather than dollars), or use a class specifically designed for currencies.
There's more information and detail here:
http://www.javapractices.com/topic/TopicAction.do?Id=13
Could you post an example? I've been looking on the net but i find it too complicated. Thanks
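Here is a minimal sketch of both suggestions from the answers above (the class and method names are made up for illustration): construct BigDecimal from a String, never from a double literal, or keep amounts in euro cents with long.

```java
import java.math.BigDecimal;

public class PriceDemo {
    // Exact decimal arithmetic: 7.60 * 3 == 22.80, no binary rounding error.
    static BigDecimal priceTimesThree() {
        BigDecimal price = new BigDecimal("7.60"); // from a String, not a double
        return price.multiply(BigDecimal.valueOf(3));
    }

    // Same computation in euro cents using plain long arithmetic.
    static long priceTimesThreeCents() {
        long priceCents = 760; // 7.60 EUR stored as 760 cents
        return priceCents * 3; // 2280 cents = 22.80 EUR
    }

    public static void main(String[] args) {
        System.out.println(priceTimesThree());      // 22.80
        System.out.println(priceTimesThreeCents()); // 2280
    }
}
```

Note that the BigDecimal result keeps its scale of 2, so it prints as 22.80, which is exactly what the question asks for.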
| common-pile/stackexchange_filtered |
Read .tiff to byte array in memory and then conver to image
I have been handed a project that involves reading records from an Access database table and placing this information into a large PDF booklet.
One of the columns in the Access table is a MEMO column that contains .tiff image information. I gathered this from the fact that the file information when written to a text file begins with "II*", which I have read is a .tiff header stating that the file is in Intel byte order.
So, with that being said, I need to convert this MEMO field into an image and write that image to the PDF, which I'd like to accomplish in memory if possible.
The code I have so far for the actual conversion is this:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Collections.Generic;
using System.Text;
namespace ElectionsPollBooks
{
    class ImageConversion
    {
        public Image ConvertMemo(string s_convert)
        {
            byte[] ba_incTiff = Encoding.Unicode.GetBytes(s_convert); //string of information from the Memo field in the Access Database
            using (MemoryStream ms_incTiffImage = new MemoryStream(ba_incTiff)) //takes our just converted byte array and reads it to memory as the input tiff
            {
                return Image.FromStream(ms_incTiffImage); //take said tiff stream that we just made, and return it as an image.
            }
        }
    }
}
Now I am getting the "Parameter is not valid" error on the return Image.FromStream(ms_incTiffImage); method, which I understand means the byte array isn't in any form of data that the Image.FromStream() method understands.
This leads me to wonder if maybe my issue is reading the string in using the Unicode encoding in this line: byte[] ba_incTiff = Encoding.Unicode.GetBytes(s_convert);
But this all new territory for me and I am not sure if my thought process is right with this.
Any help would be appreciated.
Thank you all for your time.
UPDATE:
When I output the MEMO field to a text file with<EMAIL_ADDRESS>s_convert);
I end up with this as a snippet from the output:
II* ê €?àP8$ „BaP¸d6ˆDbQ8¤V-ŒFcQ¸äv=HdR9$–M'”JeR¹d¶]/˜LfS9¤Öm7œNgS¹äö}?
Some additional information:
ba_incTiff ends up with a count of 14176 bytes, the first 5 are 73,0,73,0,42 (I'm not sure if it is relevant, but I am grabbing at straws right now).
UPDATE
I was able to find out whether any special encoding was placed on the information being stored in the MEMO field: there was none.
By changing byte[] ba_incTiff = Encoding.Unicode.GetBytes(s_convert) to byte[] ba_incTiff = Encoding.Default.GetBytes(s_convert);, I was able to get past the problem.
Now, however, I am receiving a A generic error occurred in GDI+. error...
So the image is stored as a string in Access? I think you'll need to determine exactly how that string was generated. Encoding.X.GetBytes(s) will get the bytes representing a string, rather than a real conversion from a string representation of an original byte array. Usually a byte array stored as a string will be base64 or hex - is the data one of these two formats?
@JoeEnos thanks for the information. I'll have to go see exactly how the information is being imported to the database in the first place. I was literally just handed a test database and said, "Make it work...". But now I know I'll have to go see what the end users are actually doing to store this.
If you can provide a small sample of the text here of one of the images, it might be clear what type it is...
First thing I would try is to copy the data out of Access into a txt file, change the extension to .tiff, then see if it opens; you can also tweak the encoding and see if that helps.
@MikeT You can't copy/paste binary data through text editors.
@JoeEnos I added a snippet of the outputted text file and also a little on the byte array was returning as well.
I'd bet the data is garbage - that it was inserted incorrectly in the first place. Has anyone confirmed that anybody ever successfully read images from that table? If so, does anyone have source code of the app that did? It looks like somebody took a byte array of an image, did Encoding.UTF8.GetString (or similar) on that image, then dumped that string in the DB. If that happened, then the data is not retrievable, since the string doesn't accurately reflect the bytes.
The proper way to store a byte array as a string would be something that uses printable characters only (specifically printable ASCII characters) that can be converted back and forth with no ambiguity. That's where Base64, Hex, ASCII-85, basE91, etc., come in. Base64 would probably be the most common, because it is consistent (3 bytes = 4 characters, not including the padding), and also easy to read. Hex is easy to read but bigger, and ASCII85 and basE91 are smaller but less common, less consistent, and harder to read.
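That distinction is easy to demonstrate; the snippet below uses Python for brevity, but the same holds for C#'s Convert.ToBase64String/Convert.FromBase64String: Base64 round-trips arbitrary bytes losslessly, while decoding raw bytes as text destroys information.

```python
import base64

# Bytes resembling a little-endian TIFF header ("II*"), plus a byte (0xFF)
# that is not valid UTF-8 on its own.
data = bytes([0x49, 0x49, 0x2A, 0x00, 0xFF])

# Base64 is a reversible text encoding of arbitrary bytes.
encoded = base64.b64encode(data).decode("ascii")
assert base64.b64decode(encoded) == data  # lossless round trip

# Decoding the raw bytes as text is NOT reversible: the invalid byte gets
# replaced, and the original value can never be recovered afterwards.
as_text = data.decode("utf-8", errors="replace")
assert as_text.encode("utf-8") != data
```

This is exactly the failure mode in the question: once the image bytes have been pushed through a text decode, no choice of Encoding.X.GetBytes can restore them.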
@JoeEnos To add a little more context from what I have gathered on this project: the old system they used is some archaic FoxPro program that extracts the information from this table and adds it to a PDF that way (outside my scope of knowledge, or any other programmer's here). Now, with that being said, I still need to go meet with the end user who IMPORTS this info. I also learned from digging around that FoxPro adds some garbage data to these formats when importing and exporting information... which could mean my issue is that the program used to insert the data in the first place is a FoxPro app.
@JoeEnos I am by no means a computer expert in lower-level computing either, but I really was sure that there had to some kind of encoding error on this or something added just by how the memo field was being written in a .txt file.
Let us continue this discussion in chat.
| common-pile/stackexchange_filtered |
How to write a hook to transform each image reference on export?
This question
https://emacs.stackexchange.com/a/27065/13217 explains perfectly how to embed base64 image inside of an HTML exported document, and it works great.
I wonder now how could I automate this process for each image file referenced in my document ?
Adding a pre-export task replacing all file references with the base64 encoded image seems the way to go, but I don't have any practice with elisp. Can anyone help me out ?
I found out some ideas in this documentation : http://orgmode.org/manual/Advanced-configuration.html which seems a good starting point. But I fail to manipulate correctly the input text.
This is what I came out with :
(defun html-base64-images (text backend info)
(when (org-export-derived-backend-p backend 'html)
(message "The text is: %S" text)
(setq filename (replace-regexp-in-string "\\\"\(.*\)+\\\"" "\1" text))
(message "The filename is: %S" filename)
(setq output-text (replace-regexp-in-string "img src=\".*\"" "img src=\"data:image/png;base64," text))
(concat output-text tob64(filename))
(message "The new text is %S" output-text)
output-text
)
)
(add-to-list 'org-export-filter-link-functions
'html-base64-images)
So in the Messages buffer I can see my messages, but I have problems to manipulate the TEXT string correctly. The output looks like :
The text is: "<img src=\"./GN_moissonnage2.png\" alt=\"GN_moissonnage2.png\" />"
The filename is: "<img src=\"./GN_moissonnage2.png\" alt=\"GN_moissonnage2.png\" />"
The new text is "<img src=\"data:image/png;base64, />"
I need to :
get the filename
replace it with my base64 URI
I'm not used to elisp at all ... maybe I'm doing it the wrong way ...
Can you show the output that you get when a link is processed?
You probably don't want to use replace-regexp-in-string at all: you want to extract the filename somehow (that's why I wanted to see the output: that'll depend on what the text looks like) and you want to construct the output string from scratch, not replace anything in text: the (format ...) in the linked answer is probably close to what you need (with the extracted filename replacing the hardwired name that's used in that answer).
Thanks to @Nick's useful comments, I managed to get what I wanted.
This function will insert base64 images into my HTML-exported org document, making it completely standalone (and quite a bit bigger, of course).
(defun tob64 (filename)
  "Transforms a file FILENAME in base64."
  (base64-encode-string
   (with-temp-buffer
     (insert-file-contents filename)
     (buffer-string))))

(defun html-base64-images (text backend info)
  "Replaces file links in TEXT with an appropriate html string when BACKEND is html. INFO is ignored."
  (when (org-export-derived-backend-p backend 'html)
    (when (string-match "^<img" text)
      (let ((filename (replace-regexp-in-string ".*=\"" "" (replace-regexp-in-string "\\\" .*" "" text))))
        (format "<img src=\"data:image/png;base64,%s\">" (tob64 filename))))))

(add-to-list 'org-export-filter-link-functions 'html-base64-images)
Nice! I still think that the extraction of the filename should be simpler, but I can't argue too much with "it works" :-)
One more thing: you should use a (let ((filename ...)) (format ...)) form so that filename is locally bound in the function. See Local Variables in the Emacs Lisp manual.
| common-pile/stackexchange_filtered |
Anaconda doesn't list package installed by pip
I'm trying to setup elasticsearch package on my system.
It isn't available through conda install - package is missing in current linux-64 channels.
So I installed it via system pip, and that works fine:
$ which pip2
/usr/bin/pip2
$ sudo pip2 install elasticsearch
...
$ pip2 list | grep ela
elasticsearch (2.4.0)
elasticsearch-dsl (2.1.0)
But when I try it through the conda's pip:
$ which pip
/home/michal/Bin/anaconda/envs/raptor/bin/pip
$ pip list | grep ela
$
it doesn't show up. That kind of makes sense, since I did it through the system.
But if I try to install it:
$ sudo pip install elasticsearch
Requirement already satisfied (use --upgrade to upgrade): elasticsearch in /usr/local/lib/python3.4/dist-packages
Cleaning up...
it is already set up, but I cannot list it in either pip list or conda list. If I'm not in the conda env, the packages import in system Python, but in the conda env the packages cannot be imported.
Okay, it is in the system. But how can I force it to install into conda? I already tried to uninstall the system conda and install it via conda pip, but it again puts it into usr/local/lib/python3.4/dist-packages/elasticsearch/
My conda info:
$ conda info -a
Current conda install:
platform : linux-64
conda version : 4.1.6
conda-env version : 2.5.1
conda-build version : 1.21.3
python version : 3.5.2.final.0
requests version : 2.10.0
root environment : /home/michal/Bin/anaconda (writable)
default environment : /home/michal/Bin/anaconda/envs/raptor
envs directories : /home/michal/Bin/anaconda/envs
package cache : /home/michal/Bin/anaconda/pkgs
channel URLs : https://repo.continuum.io/pkgs/free/linux-64/
https://repo.continuum.io/pkgs/free/noarch/
https://repo.continuum.io/pkgs/pro/linux-64/
https://repo.continuum.io/pkgs/pro/noarch/
config file : /home/michal/.condarc
offline mode : False
is foreign system : False
# conda environments:
#
raptor * /home/michal/Bin/anaconda/envs/raptor
tensorflow /home/michal/Bin/anaconda/envs/tensorflow
root /home/michal/Bin/anaconda
sys.version: 3.5.2 |Anaconda 4.1.1 (64-bit)| (default...
sys.prefix: /home/michal/Bin/anaconda
sys.executable: /home/michal/Bin/anaconda/bin/python3
conda location: /home/michal/Bin/anaconda/lib/python3.5/site-packages/conda
conda-build: /home/michal/Bin/anaconda/bin/conda-build
conda-convert: /home/michal/Bin/anaconda/bin/conda-convert
conda-develop: /home/michal/Bin/anaconda/bin/conda-develop
conda-env: /home/michal/Bin/anaconda/bin/conda-env
conda-index: /home/michal/Bin/anaconda/bin/conda-index
conda-inspect: /home/michal/Bin/anaconda/bin/conda-inspect
conda-metapackage: /home/michal/Bin/anaconda/bin/conda-metapackage
conda-pipbuild: /home/michal/Bin/anaconda/bin/conda-pipbuild
conda-render: /home/michal/Bin/anaconda/bin/conda-render
conda-server: /home/michal/Bin/anaconda/bin/conda-server
conda-sign: /home/michal/Bin/anaconda/bin/conda-sign
conda-skeleton: /home/michal/Bin/anaconda/bin/conda-skeleton
user site dirs: ~/.local/lib/python2.7
~/.local/lib/python3.4
CIO_TEST: <not set>
CONDA_DEFAULT_ENV: raptor
CONDA_ENVS_PATH: <not set>
LD_LIBRARY_PATH: <not set>
PATH: /home/michal/Bin/anaconda/envs/raptor/bin:/home/michal/Bin/anaconda/bin:/usr/local/heroku/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/michal/.rvm/bin:/home/michal/Bin:/home/michal/.rvm/bin
PYTHONHOME: <not set>
PYTHONPATH: :/home/michal/Code/BSS/fairalgo/src:/home/michal/Code/BSS/fairalgo/src
WARNING: could not import _license.show_info
# try:
# $ conda install -n root _license
edit:
By using sudo it does indeed use a different pip; I didn't realize that:
root@mentat:/home/michal/Bin/anaconda# which pip
/usr/local/bin/pip
When I use pip without sudo, it crashes:
$ pip install elasticsearch
Exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 236, in run
session = self._build_session(options)
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 52, in _build_session
session = PipSession()
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 216, in __init__
super(PipSession, self).__init__(*args, **kwargs)
File "/usr/share/python-wheels/requests-2.2.1-py2.py3-none-any.whl/requests/sessions.py", line 272, in __init__
self.headers = default_headers()
File "/usr/share/python-wheels/requests-2.2.1-py2.py3-none-any.whl/requests/utils.py", line 555, in default_headers
'User-Agent': default_user_agent(),
File "/usr/share/python-wheels/requests-2.2.1-py2.py3-none-any.whl/requests/utils.py", line 524, in default_user_agent
_implementation = platform.python_implementation()
File "/usr/lib/python2.7/platform.py", line 1521, in python_implementation
return _sys_version()[0]
File "/usr/lib/python2.7/platform.py", line 1486, in _sys_version
repr(sys_version))
ValueError: failed to parse CPython sys.version: '2.7.11 |Continuum Analytics, Inc.| (default, Jun 15 2016, 15:21:30) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]'
Storing debug log for failure in /tmp/tmpUvOaDo
Please help, I am totally lost in this.
I think that sudo pip may get a different version of pip - not the anaconda one. You shouldn't need sudo to install into your home directory anyway.
use /path/to/python -m pip [commands] instead of just pip [commands] to be sure you are installing for the right interpreter
Oh, ok. Good point about I shouldn't need sudo for that, that's what I found weird as well. If I don't use sudo, pip crashes (I updated the original post)
You are still running the wrong pip. Look at the paths in the stacktrace - they are all pointing to the system version of Python. Try the first command @Julius suggested, see if it helps.
@darthbith I get the same error:
$ /home/michal/Bin/anaconda/envs/raptor/bin/python -m pip install elasticsearch
...
ValueError: failed to parse CPython sys.version: '2.7.11 |Continuum Analytics, Inc.| (default, Jun 15 2016, 15:21:30) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]'
Just in case, try reinstalling pip into that environment: source activate raptor && conda remove pip && conda install pip && pip install elasticsearch
| common-pile/stackexchange_filtered |
Is there a way to compare variables in a class
I'm having a problem with comparing variables in a class I made
when I call print(t2.between(t1, t3))
This is the error I get:
Traceback (most recent call last):
File "e:\CS-OOP-Fall 2023\Classtest1.py", line 18, in <module>
print(t2.between(t1, t3)) # is t2 between t1 and t3 as a method
^^^^^^^^^^^^^^^^^^
File "e:\CS-OOP-Fall 2023\MyTime.py", line 81, in between
return (t1 < self < t3)
^^^^^^^^^^^^^^
TypeError: '<' not supported between instances of 'MyTime' and 'MyTime'
Here is the class:
class MyTime:
    def __init__(self, hrs=0, mins=0, secs=0):
        """ Create a new MyTime object initialized to hrs, mins, secs.
        The values of mins and secs may be outside the range 0-59,
        but the resulting MyTime object will be normalized.
        """
        # Calculate total seconds to represent
        totalsecs = hrs*3600 + mins*60 + secs
        self.hours = totalsecs // 3600  # Split in h, m, s
        leftoversecs = totalsecs % 3600
        self.minutes = leftoversecs // 60
        self.seconds = leftoversecs % 60

    def __str__(self):
        # 0 - hours, 1 - minutes, 2 - seconds
        return ("{0}:{1}:{2}".format(self.hours, self.minutes, self.seconds))

    def add_time(self, t2):
        h = self.hours + t2.hours
        m = self.minutes + t2.minutes
        s = self.seconds + t2.seconds
        while s >= 60:
            s -= 60
            m += 1
        while m >= 60:
            m -= 60
            h += 1
        sum_t = MyTime(h, m, s)
        return sum_t

    def increment(self, seconds):
        self.seconds += seconds
        while self.seconds >= 60:
            self.seconds -= 60
            self.minutes += 1
        while self.minutes >= 60:
            self.minutes -= 60
            self.hours += 1

    def to_seconds(self):
        """ Return the number of seconds represented
        by this instance
        """
        return self.hours * 3600 + self.minutes * 60 + self.seconds

    def after(self, t2, t3):
        """ Return True if I am strictly greater than both t2 and t3
        """
        if self.hours > t2.hours and self.hours > t3.hours:
            return True
        if self.hours < t2.hours or self.hours < t3.hours:
            return False
        if self.minutes > t2.minutes and self.minutes > t3.minutes:
            return True
        if self.minutes < t2.minutes or self.minutes < t3.minutes:
            return False
        if self.seconds > t2.seconds and self.seconds > t3.seconds:
            return True
        if self.seconds < t2.seconds or self.seconds < t3.seconds:
            return False
        return False

    def between(self, t1, t3):
        """Checks whether this time falls between the two
        other specified times, with True or False as the indicator.
        """
        # this will cover both cases
        return (t1 < self < t3)
I tried a couple things but none of them worked, I tried other parts of the variable but it doesn't seem to work right
In the end I want it to say True or False based off the time you choose to check but I wanted to see if something else worked first.
You have to implement the special methods for comparisons.
I don't think I've seen that before, is there another way to solve the problem or is that the only way?
The other way to solve the problem is to use objects from the datetime library, which already have methods for basically everything that you're trying to implement yourself here.
What is the value of a MyClass instance? which one of all its attributes is the one to use when evaluating my_class_instance_1 > my_class_instance_2?
You have to use the magic method __lt__ to tell Python how to make a comparison:
from datetime import datetime

class MyTime:
    def __lt__(self, other):
        my_time = datetime.strptime(f"{self.hours}:{self.minutes}:{self.seconds}", "%H:%M:%S")
        other_time = datetime.strptime(f"{other.hours}:{other.minutes}:{other.seconds}", "%H:%M:%S")
        return my_time < other_time
you're welcome. I'll appreciate if you choose my answer.
btw, a note on t1 < self < t3: it is evaluated as t1 < self and self < t3 (as said in the docs), and since all three objects are MyTime instances, both comparisons go through the __lt__ you just defined, so this alone is enough.
Implementing __gt__ as well (or decorating the class with functools.total_ordering) is still worthwhile, so comparisons also work when a MyTime appears on the right-hand side of another type.
As other people mentioned, t1 < self won't work because you have not implemented the __lt__ method.
However, __lt__ has nothing to do with what you want to do.
Your problem lies in
def after(self, t2, t3):
def between(self, t1, t3):
Implementing these functions do not require those __lt__ etc functions. There are several hints for you:
You should write a comparison method to compare self with another MyTime, which means something like this:
def after(self, other):
    # leave the implementation to you. One simplest way is to
    # make use of to_seconds you implemented
With such a method, your "sophisticated" after-two-times and between-two-times methods are as simple as:
# pseudo code
def after_all(self, t1, t2):
    return self is after t1 AND self is after t2

def between(self, t1, t2):
    return (self is after t1 AND self is before t2) or
           (self is after t2 AND self is before t1)
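A minimal sketch of that idea in plain Python (a hypothetical trimmed-down MyTime reusing to_seconds from the question; the two-argument after is simplified to one bound here, and between accepts its bounds in either order):

```python
class MyTime:
    def __init__(self, hrs=0, mins=0, secs=0):
        # Normalize, exactly as in the question's __init__
        totalsecs = hrs * 3600 + mins * 60 + secs
        self.hours, rest = divmod(totalsecs, 3600)
        self.minutes, self.seconds = divmod(rest, 60)

    def to_seconds(self):
        return self.hours * 3600 + self.minutes * 60 + self.seconds

    # One comparison helper is enough; everything else reduces to it.
    def after(self, other):
        return self.to_seconds() > other.to_seconds()

    def between(self, t1, t3):
        lo, hi = sorted((t1.to_seconds(), t3.to_seconds()))
        return lo < self.to_seconds() < hi


t1, t2, t3 = MyTime(9, 45), MyTime(10, 30), MyTime(11, 0)
print(t2.after(t1))        # True
print(t2.between(t1, t3))  # True
print(t2.between(t3, t1))  # True: bounds may come in either order
print(t1.between(t2, t3))  # False
```

Comparing total seconds sidesteps the cascade of hour/minute/second if-statements entirely.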
| common-pile/stackexchange_filtered |
How to maintain display onmouseover after moving mouse with Javascript
I have three objectives with these pics: hide all but the first; display all pics, inline with the first, when hovering over the first, and keep them displayed while hovering over any of the pics; hide the pics once the mouse leaves the area, showing them again only on hovering over the first. (I forgot the img sources, but they are there in the code.) To do this consistently I wrote HTML, CSS and JS code, but I can only ever achieve two of the three, never all three repeatedly without a refresh. The code:
var tog = document.querySelector('#toggle');
var glide = document.querySelector('#first')
glide.onmouseover = function() {
    tog.classList.add('picclass');
};

tog.onmouseout = function() {
    tog.classList.remove('picclass');
};
.picclass {
display: flex
}
.pics {
display: none;
}
.picclass > .pics {
display: inline-flex;
}
#first {
display: inline-flex;
}
<div id="toggle">
<img id="first">
<img class="pics">
<img class="pics">
</div>
I think I don't really understand the question, because it's working for me. The three images are inline and only show when hovering over the first one. But you can do this without JS; use in CSS: #first:hover ~ .pics { display: inline-flex } and #toggle { display: flex }
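For what it's worth, the CSS-only route from this comment can be adapted to satisfy all three objectives, because the hidden images take up no space until the group is hovered (a sketch, assuming the markup from the question; no close delay, unlike the JS approach):

```css
#toggle { display: inline-flex; } /* group is only as wide as its visible images */
.pics   { display: none; }        /* hidden until the group is hovered */
#toggle:hover .pics { display: inline-flex; } /* stays open while over any image */
```

Hovering #first shows the pics, moving across them keeps #toggle hovered so they stay open, and leaving the group hides them again.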
Right: hover over the first shows them, but keep them displayed when the mouse moves over the images. Hide them when the mouse leaves the area, and only display them again when hovering over the first.
Okay now i get it.
Here you have a sample that works. If you want to change the delay time, you have to change it in the setTimeout. (Right now it's 500 ms = 0.5 sec)
https://jsfiddle.net/falkedesign/ce1ayo93/21/
var tog = document.querySelector('#toggle');
var glide = document.querySelector('#first');
var isover = false;

// Kept global (no var) on purpose: the inline onmouseover/onmouseout
// attributes in the HTML below need to reach these functions.
mover = function() {
    isover = true;
    tog.classList.add('picclass');
};

mout = function() {
    isover = false;
    setTimeout(removefunc, 500);
};

removefunc = function() {
    if (!isover) {
        tog.classList.remove('picclass');
    }
};

glide.onmouseover = mover;
tog.onmouseout = mout;
and in the html
<img class="pics" src="https://img.fireden.net/v/image/1531/17/1531179205107.png" onmouseover="mover()" onmouseout="mout()">
This is valid. Thanks, trying to figure out why mine couldn't do this and yours worked. What happens with the if (!isover) code? I get that it removes the class, but I don't get what you wrote between the if ()
if the mouse is out, then isover is false and after 500 ms the class is removed; but if within those 500 ms the mouseover function is called, then the class can't be removed, because isover is set back to true
| common-pile/stackexchange_filtered |
How do you convert a time variable to a decimal in R?
I'm extremely new to R, and I am using it to analyze a massive data set. Currently my variable B4S1 looks like this (e.g., 10:30, 11:00, 9:45), and I need it to look like this (e.g., 10.5, 11, 9.45). I tried really hard to figure out how to accomplish this, but I don't really understand what is going on in the strptime function. Here is what I have...
SleepTime <- (usedata$B4S1)
SleepTime <- as.character(SleepTime)
sapply(strsplit(SleepTime, ":"),
       function(x) {
           x <- as.numeric(x)
           x[1] + x[2]/60
       })
If there are obvious mistakes it would be extremely helpful if you could point them out and tell me where I'm going wrong. Being new at this, I don't fully have a grasp on what I'm doing. Thanks a bunch! :)
See also R How to convert time to decimal
There are 1,200 participants, so I'm not exactly sure how to share the data. However, when I run it with the data I get this error message "Warning message:
1: In FUN(X[[i]], ...) : NAs introduced by coercion
2: In FUN(X[[i]], ...) : NAs introduced by coercion
| common-pile/stackexchange_filtered |
Magento Frontend Dashboard Stuck Loading
Magento 2.2.5
Attached are screenshots of the errors in Chrome dev tools:
jquery.js
loader.js
Also:
There are some images being requested that are missing from an S3 bucket that are throwing out 403/404 errors. I wouldn’t think that would cause the dashboard to continue loading without any results or timing out.
Please check the error log in '/var/report/' folder
Is there a way to view the file so the data is more readable?
A database connection was missing from env.php.
| common-pile/stackexchange_filtered |
How to create the typical "MAN RAY" effect?
I've always been a huge fan of Man Ray (the artist/photographer). Fascinated by his negatives on gelatin silver print, I was wondering how I could recreate his signature effect with my own portrait pictures (still a neophyte on Photoshop but pretty sure this isn't just inverting colors because the characters look almost like they're coated in silver :))
Below is one of his photos, there are others on Imgur (for reference)!
The technique is called solarization. It can be achieved in a b/w photo darkroom: when the printing paper has been exposed and has already been in the developing fluid for a bit, the room lighting is quickly turned on and off, so the light areas get exposure too. Durations of exposure and development can be varied to achieve a variety of different results.
Photoshop does come with a Solarize filter, but it has no options, so the results are difficult to optimize to a certain look. And you might need a noise filter to add some grain first.
https://en.wikipedia.org/wiki/Solarization_(photography)
https://en.wikipedia.org/wiki/Sabattier_effect
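Numerically, the effect behind those links is simple: tone values above a chosen threshold get inverted, which is roughly all Photoshop's Solarize filter does, with the threshold fixed near mid-gray. A tiny sketch on 8-bit gray values, in plain Python:

```python
def solarize(pixels, threshold=128):
    """Invert every 8-bit gray value at or above the threshold."""
    return [255 - p if p >= threshold else p for p in pixels]

# Shadows stay put; highlights flip dark, giving the metallic, part-negative look.
print(solarize([0, 64, 128, 200, 255]))  # [0, 64, 127, 55, 0]
```

In a real workflow you would apply the same mapping per channel (e.g. via a Curves adjustment shaped like a V) rather than per-pixel in code.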
"the knowledge behind the facts", ty!
| common-pile/stackexchange_filtered |
Extended AWS MS SQL RDS DB audit file search with a server timestamp
I am querying the MS SQL RDS DB audit files with the following query:
SELECT *
FROM msdb.dbo.rds_fn_get_audit_file('D:\rdsdbdata\SQLAudit\*.sqlaudit', default, default)
The timestamp comes back in the event_time column and it is in UTC; however, I need to convert it to the querying server's time zone. Is it possible to extend this query with an extra column containing the recalculated time?
This has less to do with AWS or even SQL Audit and more to do with "how do I convert UTC time to local time?". This should get you there:
SELECT *,
[event_time_local] = SWITCHOFFSET(event_time, DATENAME(TzOffset, SYSDATETIMEOFFSET()))
FROM msdb.dbo.rds_fn_get_audit_file('D:\rdsdbdata\SQLAudit\*.sqlaudit', default, default);
This will convert the event_time column into whatever timezone the SQL Server is in. If you need it to be a different timezone than that, you're welcome to provide a different offset to the SWITCHOFFSET() function.
| common-pile/stackexchange_filtered |
How to know when to handle different architecture
Almost everywhere I download software, I see a 32-bit version and a 64-bit version of the software. My questions are
What are the major differences in the code of the software? Does it affect performance, memory, etc...?
How can I know when I should provide 32-bit and 64-bit versions of my software?
32-bit vs. 64-bit
As the number of bits increases there are two important benefits.
More bits means that data can be processed in larger chunks which also means more accurately.
More bits means our system can point to or address a larger number of locations in physical memory.
32-bit systems were once desired because they could address (point to) 4 Gigabytes (GB) of memory in one go. Some modern applications require more than 4 GB of memory to complete their tasks so 64-bit systems are now becoming more attractive because they can potentially address up to 4 billion times that many locations.
A 64-bit application will require more memory to open any particular file than a 32-bit application, because the address sizes of memory pointers and other structures automatically become larger. This means a user should have a minimum of 4 GB of memory installed to benefit from a 64-bit application.
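The 4 GB ceiling falls straight out of the arithmetic: 2^32 distinct addresses at one byte each. A quick check (Python used here purely for the arithmetic; struct.calcsize('P') reports the native pointer size of whatever build runs it):

```python
import struct

# 32-bit addressing: 2**32 one-byte locations = 4 GiB
addressable_bytes = 2 ** 32
print(addressable_bytes // 2 ** 30, "GiB addressable")  # 4

# Pointer width of the interpreter this runs under: 32 or 64 bits
pointer_bits = struct.calcsize("P") * 8
print(f"{pointer_bits}-bit build")
```

The larger pointers are also why the same data structures cost more memory in a 64-bit build, as noted above.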
Performance of 32-bit vs. 64-bit apps:
Unless you need to access more memory than 32-bit addressing allows, the benefits will be small, if any.
When running on a 64-bit CPU, you get the same memory interface whether you are running 32-bit or 64-bit code (you are using the same cache and same bus).
Check out [1], [2], [3] for more info.
When to provide 32-bit vs 64-bit softwares:
The choice is tough to make. You have to consider a lot of factors before deciding between a 32-bit and a 64-bit build, for example:
Users: if the users of the software are using x86/32-bit machines, you should provide 32-bit software accordingly, as 64-bit software is not compatible with such machines.
Memory to use: if your software needs to (or must) use more than 4 GB of memory, you have to provide 64-bit software. I recently did some work on video processing where I needed to load very large videos into memory, so I had to target 64-bit.
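A quick runtime check of which kind of build you are actually running is the native pointer size; a Python sketch (the same idea applies in most languages):

```python
import struct

# "P" is the format code for a native pointer: 4 bytes on 32-bit, 8 on 64-bit
pointer_bytes = struct.calcsize("P")
bits = pointer_bytes * 8
print(f"This interpreter is a {bits}-bit build")
```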
Moreover, the following articles that you maybe interested in during this choosing:
The forgotten problems of 64-bit programs development
20 issues of porting C++ code on the 64-bit platform
Question: Programming for a 32-bit environment vs programming for a 64-bit environment / Build configurations
| common-pile/stackexchange_filtered |
When the pullback functor is monadic
Consider a site $(\mathcal C,J)$ and the category of sheaves $\mathscr S=\mbox{Sh}(\mathcal C,J)$ on it. Given a morphism $f:X\rightarrow Y$ between sheaves, when is the pullback functor $f^*:\mathscr S/Y\rightarrow\mathscr S/X$ monadic? And what if I consider sheaves valued in a generic complete category $\mathcal A$ (instead of $\mathbf{Set}$)?
The category of sheaves is Barr-exact, so the pullback functor is monadic iff $f$ is a regular epimorphism.
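To unpack that one-line answer slightly (a sketch from standard monadic-descent facts; Barr-exactness is what reduces effective descent to regular epimorphisms):

```latex
% f^* : \mathscr S/Y \to \mathscr S/X has a left adjoint given by
% post-composition with f:
%   \Sigma_f \dashv f^*, \qquad \Sigma_f(E \to X) = (E \to X \xrightarrow{f} Y).
% Beck's monadicity theorem identifies monadicity of f^* with f being an
% effective descent morphism, and in a Barr-exact category (such as a
% Grothendieck topos) these are exactly the regular epimorphisms:
f^* \text{ monadic}
  \iff f \text{ effective descent}
  \iff f \text{ regular epimorphism}.
```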
And is this still true with sheaves taking values in an arbitrary complete category?
In some categories it would definitely fail. I'm pretty sure it would not be true for $\mathcal{A}=\mathbf{Top}$ for example.
great, thank you!
| common-pile/stackexchange_filtered |
OS locale support for use in Python
The following Python code works on my Windows machine (Python 2.5.4), but doesn't on my Debian machine (Python 2.5.0). I'm guessing it's OS dependent.
import locale
locale.setlocale( locale.LC_ALL, 'English_United States.1252' )
I receive the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.5/locale.py", line 476, in setlocale
return _setlocale(category, locale)
locale.Error: unsupported locale setting
Questions:
Is it OS dependent?
How can I find the supported locale
list within Python?
How can I match between Windows
locales and Debian locales?
do you have to hardcode the locale? setlocale(LC_ALL, "") will load the locale defined by the environment.
It is OS dependent.
To get the list of locales available you can use locale -a in a shell.
I think the locale you want is something like Windows-1252.
there's no locale -a in windows
try
apt-get install locales-all
for me it works like a charm
This is also helpful if one wants to configure locales selectively: http://tlug.dnho.net/node/237
If your want to be a bit more selective, you can install one of the language-pack-* packages, e.g. language-pack-de.
do you know what the yum equivalent is? yum install locales-all doesn't work.
Look inside the locale.locale_alias dictionary.
>>> import locale
>>> len(locale.locale_alias)
789
>>> locale.locale_alias.keys()[:5]
['ko_kr.euc', 'is_is', 'ja_jp.mscode', 'kw_gb@euro', 'yi_us.cp1255']
>>>
(In my 2.6.2 installation there are 789 locale names.)
Actually, the locales defined in the alias dictionary are not necessarily supported.
I tried a variation of this which was list(set(locale.locale_alias.values()) (values instead of keys because I want the real values, and convert to set and list again to retain only unique values). However there's another problem, as raised here (http://stackoverflow.com/questions/1728376/python-get-a-list-of-all-the-encodings-python-can-encode-to/1736533#1736533): are there locales which don't have aliases, and won't be in the alias dictionary at all?
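Since a name's presence in locale_alias doesn't guarantee the OS supports it, a more reliable approach is to try each candidate with setlocale (a sketch; which names succeed is entirely OS-dependent — the Windows-style name below will normally fail on Debian):

```python
import locale

candidates = ["C", "en_US.UTF-8", "English_United States.1252"]
supported = []
for name in candidates:
    try:
        locale.setlocale(locale.LC_ALL, name)   # raises locale.Error if unsupported
        supported.append(name)
    except locale.Error:
        pass

locale.setlocale(locale.LC_ALL, "C")  # restore a known-good state
print(supported)
```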
On Ubuntu Precise type
sudo locale-gen en_US
| common-pile/stackexchange_filtered |
At what point did the Apollo 11's Eagle extend the landing gear?
I am trying to figure out, during Apollo missions, exactly when the lunar-module extended the landing gear. Was it closer to the time the CM and LM connected in the space rendezvous or did they extend the landing gear closer to arriving to the Moon?
Wow, I just learned something new about the LM. It never occurred to me that the legs needed to be extended. Probably because I have never seen it with the legs still folded in. Presumably there is no photo of it folded up in space since it would take an EVA to take said photo... Maybe this one is a photo of the LM still in its unextended state (from Apollo 9)?! https://www.flickr.com/photos/projectapolloarchive/21315606674/in/album-72157659042210300/
@user2705196 thanks for the link to the pictures. AS09-20-3063 might also be of interest.
It was done on "PDI Day" (Powered Descent Initiate), the day of landing.
Source: Apollo 15 LM Activation Checklist
The Apollo Flight Journal is a fantastic resource for answering these sorts of questions. For every mission, it has a complete transcript of everything said, both over the radio and within the spacecraft. Every significant event is narrated there, with mission timestamps as Hours:Minutes:Seconds since liftoff.
For Apollo 11, the landing gear was extended as part of Day 5, part 1: Preparations for Landing (search for "gear"):
098:14:07 Armstrong (onboard): Okay, we're going to put our gear down.
098:14:29 Aldrin (onboard): [Garble] Master Arm...
098:14:34 Armstrong (onboard): Okay.
098:14:35 Aldrin (onboard): Landing Gear Deploy, Fire.
098:14:37 Armstrong (onboard): Here we go, Mike.
098:14:46 Aldrin (onboard): Bam, it's out. Ain't no doubt about that.
098:14:50 Armstrong (onboard): And it's gray.
The Lunar Module's four landing legs have deployed to the open position with a bang, and the talkback indicator on the instrument panel has turned grey, as confirmation.
The above exchange took place while the Command Module and Lunar Module were still docked together in lunar orbit, prior to separation and the LM's descent.
Approximately two hours later, as part of Day 5, part 2: Undocking and the Descent Orbit, the LM undocked, and as it moved away, spun so Collins could visually check that the legs were deployed properly:
As the LM moves away, Neil rotates the lander to allow Mike to inspect its exterior and in particular, to ensure that the four landing gear are properly deployed.
Photograph AS11-44-6574 (among others) captures this moment:
So, to summarize, they were deployed while docked in lunar orbit, and successful deployment was visually confirmed right after separation.
Wow, that's a beautiful picture.
| common-pile/stackexchange_filtered |
R - how to use apply (or some variant) to replace nested looping
I've been searching the forums for a while now, and I can't seem to figure out the answer to my problem (although I've come close a few times). My apologies if this has already been answered elsewhere and I've missed it.
I'm working with the Egyptian Skulls data from the HSAUR2 library. I'll explain my problem via the code below. I first load the skulls data and run statistical summaries on it (eg boxplots, means, std. devs, etc). These summaries (not shown here) are broken down by variable (in columns 2-5 of the skulls data) and by "epoch" (column 1 of the skulls data).
library(HSAUR2) # load the skulls data
head(skulls)
# epoch mb bh bl nh
# 1 c4000BC 131 138 89 49
# 2 c4000BC 125 131 92 48
# 3 c4000BC 131 132 99 50
# 4 c4000BC 119 132 96 44
# 5 c4000BC 136 143 100 54
# 6 c4000BC 138 137 89 56
I then call powerTransform (part of the car package) to suggest appropriate transformations to convert the data so that the resulting distributions are "more Normal". I have one transformation for each variable/epoch combination.
library(car)
tfms_mb <- by(skulls$mb,skulls$epoch, function(x) powerTransform(x))
tfms_bh <- by(skulls$bh,skulls$epoch, function(x) powerTransform(x))
tfms_bl <- by(skulls$bl,skulls$epoch, function(x) powerTransform(x))
tfms_nh <- by(skulls$nh,skulls$epoch, function(x) powerTransform(x))
To extract the coefficients, I use sapply.
mbc <- sapply(tfms_mb,coef)
bhc <- sapply(tfms_bh,coef)
blc <- sapply(tfms_bl,coef)
nhc <- sapply(tfms_nh,coef)
Question:
How do I apply the appropriate transformation to each variable/epoch pair?
I am currently using the bct() function (from the TeachingDemos package) to apply the transformation and I can work out how to do it with one set value (eg raise all data to the power of 1.5):
library(TeachingDemos)
by(skulls[,-1], skulls[,1], function(x) { bct(x,1.5)})
My question is, how do I replace the "1.5" in the above line, to cycle through the coefficients in mbc, bhc, etc. and apply the correct power to each variable/epoch combination?
I've been reading up on the apply family of functions for a number of hours and also the plyr package, but this one has me stumped! Any help would be appreciated.
This is a solution using lapply twice:
library(HSAUR2)
library(car)
library(TeachingDemos)
do.call("rbind",
lapply(unique(skulls[["epoch"]]),
function(x) {
coefs <- coef(powerTransform(subset(skulls, epoch == x)[ , 2:5]));
do.call("cbind",
lapply(seq(length(coefs)),
function(y) bct(subset(skulls, epoch == x)[ , (y+1)], coefs[y])))
}
)
)
Thanks Sven! You're a total champion - it works perfectly! This one was out of my reach, however I've now learned a new trick :) Thanks!
Here is a data.table solution that will be memory and time efficient
library(data.table)
SKULLS <- data.table(skulls)
SKULLS[, lapply(.SD, function(x){bct(x,coef(powerTransform(x)))}),by = epoch]
| common-pile/stackexchange_filtered |
TypeError: 'module' object is not callable error?
I'm studying model training. When I called the training function I got the error "TypeError: 'module' object is not callable", and I can't see where I went wrong.
here is my calling function:
train(
model,
optimizer,
loss,
train_loader,
hyperparams["epoch"],
scheduler=hyperparams["scheduler"],
device=hyperparams["device"],
val_loader=val_loader,
)
the error i got
I think you need to add your train and validation loaders so this can be debugged more easily.
@Phoenix it has nothing to do with the loader
please do not post screenshots of logs/stack traces/code. Instead copy-paste the relevant text and format it properly
You are calling the tqdm module, instead of the tqdm class from the tqdm module.
Replace:
import tqdm
with:
from tqdm import tqdm
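The same trap is easy to reproduce with a standard-library module whose main class shares the module's name — an analogous sketch using datetime in place of tqdm:

```python
import datetime

try:
    d = datetime(2020, 1, 1)      # calling the *module* object, like calling tqdm itself
except TypeError as err:
    print(err)                    # the same kind of TypeError as in the question

from datetime import datetime     # now the name refers to the class, not the module
d = datetime(2020, 1, 1)
```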
had similar issue with glom
had similar issue with glob
| common-pile/stackexchange_filtered |
Interpret strings as packed binary data in C++
I have a question about interpreting strings as packed binary data in C++. In Python, I can use the struct module. Is there a module or a way in C++ to interpret strings as packed binary data without embedding Python?
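For reference, a minimal example of the Python struct behaviour being asked about (format string and values chosen arbitrarily):

```python
import struct

# "<ih": little-endian, one 4-byte int followed by one 2-byte short
packed = struct.pack("<ih", 1234, 56)
assert len(packed) == 6           # no padding with standard ("<") sizes

i, h = struct.unpack("<ih", packed)
assert (i, h) == (1234, 56)
```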
A string is a sequence of contiguous characters (bytes, basically). How much more packed do you wish to get?
So, given a byte array, you want to be able to treat the array as a struct? You could just use a cast.
In C++, for binary data, you would typically use a vector (rather than a string) and the unsigned char type to represent a byte (avoiding signedness issues). Thus a typical "buffer" would be of type std::vector<unsigned char>, rather than std::string... note that in C++03 the string storage need not be contiguous.
@MatthieuM. C++03 didn't require contiguity, but the C style array pointed to by the return value of std::string::data() must be contiguous. And the reason C++11 added the contiguous requirement was in recognition of existing practice---there were in fact no implementations which weren't contiguous (and where &s[0] didn't result in the same values as s.data()).
@JamesKanze: data is for vector, so I believe you are talking about c_str. The problem with c_str is that it is char const* and sometimes you'd like to modify the characters (to_upper ?).
@MatthieuM. data() is for string. It was added to vector in C++11, but has always been in string. There was talk of adding a non-const data() to string in C++11; apparently, it got overlooked. In general, when dealing with byte sized data, rather than text, I would prefer vector<signed char> or vector<unsigned char> (with the latter for raw memory as well)---it seems to expression the intent better---but string will work as well.
@MatthieuM. And c_str() is only for when you want a '\0' terminated string. So not for packed binary data. (Again, it will work, but it expresses a completely different intent.)
@JamesKanze: ah thanks, I had missed that (I'll be unapologetic and blame the too bloated interface of string ;) ).
As already mentioned, it is better to consider this an array of bytes (chars, or unsigned chars), possibly held in a std::vector, rather than a string. A string is null terminated, so what happens if a byte of the binary data had the value zero?
You can either cast a pointer within the array to a pointer to your struct, or copy the data over a struct:
#include <cstring>  // for memcpy

#pragma pack(push, 1)  // single-byte alignment for the struct
struct myData
{
    int data1;
    int data2;
    // and whatever
};
#pragma pack(pop)

char* dataStream = GetTheStreamSomehow();

// cast the whole array
myData* ptr = reinterpret_cast<myData*>(dataStream);

// cast from a known position within the array
myData* ptr2 = reinterpret_cast<myData*>(&(dataStream[index]));

// copy the array into a struct
myData data;
std::memcpy(&data, dataStream, sizeof(myData));
If you were to have the data stream in a vector, the [] operator would still work. The pragma pack declarations ensure the struct is single byte aligned - researching this is left as an exercise for the reader. :-)
A string in C++ has a method called c_str ( http://www.cplusplus.com/reference/string/string/c_str/ ).
c_str returns the relevant binary data in a string in the form of a null-terminated array of characters. You can cast these chars to anything you wish and read them as an array of numbers.
You can, although this usually lies somewhere in-between implementation-defined and undefined behaviour.
@OliCharlesworth The results of the conversion will be implementation defined, since whether plain char is signed or not, and how many bits it contains, is implementation defined. Converting a char to an integral type large enough to contain the value (which will be all integral types if char is signed) is well defined, as is converting it to any unsigned integral type or floating point type.
Basically, you don't need to interpret anything. In C++, strings are
packed binary data; you can interpret them as text, but you're not
required to. Just be aware that the underlying type of a string, in
C++, is char, which can be either signed (range [-128,127] on all
machines I've heard of) or unsigned (usually [0,255], but I'm aware of
machines where it is [0,511]).
To pass the raw data in a string to a C program, use
std::string::data() and std::string::size(). Otherwise, you can
access it using iterators or indexation much as you would with
std::vector<char> (which may express the intent better).
Even though it might be closer to pickling in Python, Boost.Serialization may be closest to what you want to achieve.
Otherwise you might want to do it by hand. It is not that hard to write reader/writer classes to convert primitives/classes to a packed binary format. I would do it by shifting bytes to avoid host endianness issues.
Thanks guys for the very quick answers.
For clarification, I want to rewrite my application from Python to C++:
http://www.devpda.net/rsceditor
RSCEditor edits EXE/DLL and RSC files, and I have problems with some Python functions, for example:
• the built-in map()
• struct.unpack
• struct.pack
• array slicing (example: array[5:15])
I don't know how I can write these in C++.
| common-pile/stackexchange_filtered |
Converted from .py to .exe but output plots are not displayed
Based on input data, I have to generate plots with a Python script, and it ran successfully with the plots displayed. Unfortunately, after conversion from .py to .exe, the plots are not generated when I run the .exe. What may be the problem with the .exe? Has anyone faced a similar issue?
The plots were probably created by your IDE. Does your program have an interface? Do you have a few fig.show() missing?
I have used plt.show(). if we run with a python script, I don't have any issues to generates plots, the only problem I face is.. when converted to .exe file
Who closed the question? It's pretty clear: matplotlib pop-up windows don't show when using an exe bundled with PyInstaller. Take a look here OP: https://stackoverflow.com/questions/17095180/building-python-pylab-matplotlib-exe-using-pyinstaller
After downgrading to matplotlib version 3.1.3, the plots are displayed.
It could be a number of things:
dependencies are not included with the .exe file,
or your program reads from input files that are not in the same directory as the .exe file.
Run the .exe in the same directory as the .py and see if the problem persists.
Thanks for your reply Azeer. I am giving the input file (.txt) as dynamic. When the .exe is executed, it will ask for the user input file path and then execute it. Even I tried copying the i/p file and .exe at the same location path.
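If the goal is simply to get the plots out of a bundled executable, one common workaround (a sketch — it sidesteps the GUI backend rather than fixing it, and the plotted data is made up) is to render to files with the Agg backend:

```python
import matplotlib
matplotlib.use("Agg")            # headless backend: renders to files, no GUI window

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig.savefig("plot.png")          # write the figure instead of calling plt.show()
plt.close(fig)
```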
| common-pile/stackexchange_filtered |
Are there any SQL injection tools out there so I can test my site's vulnerability?
Are there any SQL injection tools out there so I can test my site for vulnerabilities? Any good ones? Free ones would be good.
This isn't exactly an external tool to check for SQL injection. But if you're using PHP you might want to consider looking into MySQLi http://www.php.net/manual/en/class.mysqli.php It has various functions for binding variables into your queries, which prevents your site from being vulnerable to SQL injection in the first place.
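Parameter binding isn't a scanning tool, but for illustration, here is the same idea that MySQLi's bound variables implement, sketched with Python's standard sqlite3 module (table and values made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

hostile = "x'); DROP TABLE users; --"
# the ? placeholder binds the value as data; it cannot terminate the statement
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))

rows = conn.execute("SELECT name FROM users").fetchall()
assert rows == [(hostile,)]      # stored verbatim, and the table still exists
```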
Thank you for correcting the spelling, Matt. As someone who learned English as a second language, boy do I hate obvious errors! Get your act together, @gateway. This is not Twitter. It is not cool to type in LOL speak. It can actually make a difference between landing a good and a crappy job.
If it is non-commercial (free only for non-commercial use), http://www.nessus.org/nessus/ offers some really good web-app SQL injection tests. It also tests for XSS, and hundreds of known vulnerabilities as well. It helped me find a hole or two.
| common-pile/stackexchange_filtered |
Changing Linux 'ps' output by changing argv[0]
I'm trying to have a program alter what 'ps' displays as the process's CMD name, using the technique I've seen recommended of simply overlaying the memory pointed to by argv[0]. Here is the sample program I wrote.
#include <iostream>
#include <cstring>
#include <sys/prctl.h>
#include <linux/prctl.h>
using std::cout;
using std::endl;
using std::memcpy;
int main(int argc, char** argv) {
if ( argc < 2 ) {
cout << "You forgot to give new name." << endl;
return 1;
}
// Set new 'ps' command name - NOTE that it can't be longer than
// what was originally in argv[0]!
const char *ps_name = argv[1];
size_t arg0_strlen = strlen(argv[0]);
size_t ps_strlen = strlen(ps_name);
cout << "Original argv[0] is '" << argv[0] << "'" << endl;
// truncate if needed
size_t copy_len = (ps_strlen < arg0_strlen) ? ps_strlen+1 : arg0_strlen;
memcpy((void *)argv[0], ps_name, copy_len);
cout << "New name for ps is '" << argv[0] << "'" << endl;
cout << "Now spin. Go run ps -ef and see what command is." << endl;
while (1) {};
}
The output is:
$ ./ps_test2 foo
Original argv[0] is './ps_test2'
New name for ps is 'foo'
Now spin. Go run ps -ef and see what command is.
The output of ps -ef is:
5079 28952 9142 95 15:55 pts/20 00:00:08 foo _test2 foo
Clearly, "foo" was inserted, but its null terminator was either ignored or turned into a blank. The trailing portion of the original argv[0] is still visible.
How can I replace the string that 'ps' prints?
Chap, what is inside the /proc/$pid/cmdline special file? Can you do hexdump -C of it?
00000000 66 6f 6f 00 5f 74 65 73 74 00 66 6f 6f 00 62 61 |foo._test.foo.ba|
00000010 72 00 |r.|
@osgx: Well, I can't seem to format it properly, but it corresponds to tetromino's description below.
You need to rewrite the entire command line, which in Linux is stored as a contiguous buffer with arguments separated by zeros.
Something like:
size_t cmdline_len = argv[argc-1] + strlen(argv[argc-1]) - argv[0];
size_t copy_len = (ps_strlen + 1 < cmdline_len) ? ps_strlen + 1 : cmdline_len;
memcpy(argv[0], ps_name, copy_len);
memset(argv[0] + copy_len, 0, cmdline_len - copy_len);
That appears to be correct. And I've verified that argv[1..n] point to those null-terminated arguments, which means that I'll end up clobbering argv[1..n]. Something to be aware of.
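The contiguous, NUL-separated layout described in the answer (and visible in the hexdump above) can be inspected directly on Linux (a Python sketch; /proc is Linux-only):

```python
import os

# /proc/<pid>/cmdline exposes the process's raw argv buffer:
# arguments separated by NUL bytes, with one trailing NUL
with open(f"/proc/{os.getpid()}/cmdline", "rb") as f:
    raw = f.read()

args = raw.split(b"\0")[:-1]   # drop the empty element after the trailing NUL
print(args)
```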
| common-pile/stackexchange_filtered |
get 2sxc app id from app name
I want to use data from another app like in code
But I want to get appId from app name or folder?
Is there any method to get this?
This is my working code with functions for getting AppId:
public int GetAppIdFromName(string appName){
    // return the matched app's id (the loop variable), not the current App context
    foreach(var app in sxApps()) if (app.Name==appName) return app.AppId;
    return -1;
}
public int GetAppIdFromFolder(string appFolder){
    foreach(var app in sxApps()) if (app.Folder==appFolder) return app.AppId;
    return -1;
}
public List<ToSic.SexyContent.App> sxApps()
{
var zoneId=(int)ToSic.SexyContent.Internal.ZoneHelpers.GetZoneID(Dnn.Module.PortalID);
var eavApps = ((ToSic.Eav.DataSources.Caches.BaseCache)ToSic.Eav.DataSource.GetCache(zoneId, null)).ZoneApps[zoneId].Apps;
var ps = DotNetNuke.Entities.Portals.PortalSettings.Current;
return eavApps.Select<KeyValuePair<int, string>, ToSic.SexyContent.App>(eavApp => new ToSic.SexyContent.App(zoneId, eavApp.Key, ps)).ToList();
}
Is this OK, or can it be done more easily?
==================== added =======================
I updated 2sxc from 8.7 to 9.30 and this code doesn't work anymore.
error:
Compiler Error Message: CS1502: The best overloaded method match for 'ToSic.SexyContent.App.App(ToSic.Eav.Apps.Interfaces.ITenant, int, ToSic.Eav.Logging.Simple.Log)' has some invalid arguments
Source Error:
Line 22: return eavApps.Select<KeyValuePair<int, string>, ToSic.SexyContent.App>(eavApp => new ToSic.SexyContent.App(zoneId, eavApp.Key, ps)).ToList();
Can someone help me with converting this to the new version? I don't understand how the new ITenant works.
============ edit new solution for 2sxc version 9.30 =============
public static List<ToSic.SexyContent.App> sxApps(int portalID)
{
var zm = new ToSic.SexyContent.Environment.Dnn7.ZoneMapper();
var zoneId = zm.GetZoneId(portalID);
var eavApps = ((ToSic.Eav.DataSources.Caches.BaseCache)ToSic.Eav.DataSource.GetCache(zoneId, null)).ZoneApps[zoneId].Apps;
var ps = DotNetNuke.Entities.Portals.PortalSettings.Current;
var tenant = new ToSic.SexyContent.Environment.Dnn7.DnnTenant(ps);
return eavApps.Select<KeyValuePair<int, string>, ToSic.SexyContent.App>(eavApp => new ToSic.SexyContent.App(tenant, eavApp.Key)).ToList();
}
====================== new edit ===================
On version 9.33 things broke again...
(on 9.32.1 it still works)
Compiler Error Message: CS1729: 'ToSic.SexyContent.App' does not contain a constructor that takes 2 arguments
Line 23: return eavApps.Select,
ToSic.SexyContent.App>(eavApp => new ToSic.SexyContent.App(tenant,
eavApp.Key)).ToList();
anyone know how to fix this?
As of 2sxc 8.5, that's more or less it. I could rewrite it, but it would still go through the list of apps on the base cache. This can be risky though, because multiple apps could have the same name (if hosted on different portals...).
@iJungleBoy can you please help me how to convert this to ITenant? Or is there some new way to get this in new version?
@iJungleBoy found and posted the solution, don't need help anymore
@iJungleBoy On version 9.33 things broke again... I posted another answer with the new problem.
| common-pile/stackexchange_filtered |
Query last day, last week, last month SQLite
I have this table in my Android SQLite DB:
CREATE TABLE statistics (subject TEXT, hits INTEGER, fails INTEGER, date DATE)
The date field stores datetime('now', 'localtime') for every record.
Now I must query the last day, last week and last month of records for showing some statistics.
I've been trying something like this
SELECT Timestamp, datetime('now', '-1 week') FROM statistics WHERE TimeStamp < datetime('now', '-1 week')
and this
SELECT * FROM statistics WHERE date BETWEEN datetime('now', 'localtime') AND datetime('now', '-1 month')
and doesn't work :(
How can I do it?
Can I check if the query is OK by simply forwarding date in the virtual device emulator?
Thanks!
Here's a hint. The last day of the month is -1 from the first day of the next month.
Thank you @JonH. I have found a solution, but I cannot post it for the time being. In some hours all of you will have it :)
I have found this solution. I hope it works for you.
For last day:
SELECT * FROM statistics WHERE date BETWEEN datetime('now', 'start of day') AND datetime('now', 'localtime');
For last week:
SELECT * FROM statistics WHERE date BETWEEN datetime('now', '-6 days') AND datetime('now', 'localtime');
For last month:
SELECT * FROM statistics WHERE date BETWEEN datetime('now', 'start of month') AND datetime('now', 'localtime');
-6 days without the space! Can't edit it, it says edit must be at least 6 characters
In sqlite 3.9.1 at least, a missing 'localtime' between 'now', 'start...' may cause unexpected behavior. They should be datetime('now', 'localtime', 'start of day') as an example.
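These patterns are easy to verify with Python's built-in sqlite3 (a self-contained sketch of the last-week query, adding 'localtime' to every datetime() call per the caveat above; the rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE statistics (subject TEXT, hits INTEGER, fails INTEGER, date DATE)")
conn.execute("INSERT INTO statistics VALUES ('recent', 5, 1, datetime('now', 'localtime', '-2 days'))")
conn.execute("INSERT INTO statistics VALUES ('old', 3, 2, datetime('now', 'localtime', '-40 days'))")

rows = conn.execute(
    "SELECT subject FROM statistics WHERE date BETWEEN "
    "datetime('now', 'localtime', '-6 days') AND datetime('now', 'localtime')"
).fetchall()
# only the 2-day-old row falls inside the last-week window
```

The comparison works because datetime() text sorts lexicographically in chronological order.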
This code should get you the previous month
SELECT *
FROM statistics
WHERE date >= date('now','start of month','-1 month')
AND date < date('now','start of month')
SELECT *
FROM statistics
WHERE date >= date('now','start of month','-1 months')
AND date < date('now','start of month')
With more than one month, it is "months" and not "month" as the other answer has it.
This code should bring up the previous week's records, hopefully:
SELECT * FROM order_master
WHERE strftime('%Y-%m-%d',om_date) >= date('now','-14 days') AND
strftime('%Y-%m-%d',om_date)<=date('now') order by om_date LIMIT 6
Previous week or previous two weeks?
SELECT
max(date(date, 'weekday 0', '-7 day')) WeekStart,
max(date(date, 'weekday 0', '-1 day')) WeekEnd,date
FROM table;
You can create a calendar and then get the timestamp from it
final Calendar todayCalendar = Calendar.getInstance();
final long todayTimestamp = todayCalendar.getTime().getTime();
todayCalendar.add(Calendar.DAY_OF_YEAR, -7);
final long aWeekAgoTimestamp = todayCalendar.getTime().getTime();
final String selection = TABLE_COLUMN_DATE_CREATED + " BETWEEN " + aWeekAgoTimestamp + " AND " + todayTimestamp;
| common-pile/stackexchange_filtered |
Remove/undef a class method
You can dynamically define a class method for a class like so:
class Foo
end
bar = %q{def bar() "bar!" end}
Foo.instance_eval(bar)
But how do you do the opposite: remove/undefine a class method? I suspect Module's remove_method and undef_method methods might be able to be used for this purpose, but all of the examples I've seen after Googling for hours have been for removing/undefining instance methods, not class methods. Or perhaps there's a syntax you can pass to instance_eval to do this as well.
Thanks in advance.
class Foo
def self.bar
puts "bar"
end
end
Foo.bar # => bar
class <<Foo
undef_method :bar
end
# or
class Foo
singleton_class.undef_method :bar
end
Foo.bar # => undefined method `bar' for Foo:Class (NoMethodError)
When you define a class method like Foo.bar, Ruby puts it Foo's singleton class. Ruby can't put it in Foo, because then it would be an instance method. Ruby creates Foo's singleton class, sets the superclass of the singleton class to Foo's superclass, and then sets Foo's superclass to the singleton class:
Foo -------------> Foo(singleton class) -------------> Object
       super             def bar              super
There are a few ways to access the singleton class:
class <<Foo,
Foo.singleton_class,
class Foo; class << self which is commonly use to define class methods.
Note that we used undef_method; we could have used remove_method. The former prevents any call to the method, and the latter only removes the current method, with a fallback to the super method if one exists. See Module#undef_method for more information.
I would have thought it'd be possible without using the Eigenclass, at least in 1.9.
@Andrew, Perhaps so. Alas, I do not know it.
This didn't work for me in Ruby1.9.3. I was still able to call the removed method.
@joseph.hainline - That's interesting! I just confirmed that the above works in MRI 1.8.3-p374, MRI 1.9.3-p484, MRI 2.0.0-p247, and MRI 2.1.0. Are you perhaps doing something different, either when removing the method, or when calling it, or perhaps using a non-MRI Ruby?
@joseph.hainline - If you have the method in super class, the method is still callable after you call removed_method. You can use undef_method to prevent it.
This also works for me (not sure if there are differences between undef and remove_method):
class Foo
end
Foo.instance_eval do
def color
"green"
end
end
Foo.color # => "green"
Foo.instance_eval { undef :color }
Foo.color # => NoMethodError: undefined method `color' for Foo:Class
This worked for me. I called it on an object, and it only removed it at the object level. Foo.new.instance_eval { undef :color } works too.
remove_method removes the method from the receiver class only, whereas undef_method prevents calls to the method entirely, including inherited versions.
You can remove a method in two easy ways. The drastic
Module#undef_method()
removes all methods, including the inherited ones. The kinder
Module#remove_method()
removes the method from the receiver, but it leaves inherited methods alone.
See below 2 simple example -
Example 1 using undef_method
class A
def x
puts "x from A class"
end
end
class B < A
def x
puts "x from B Class"
end
undef_method :x
end
obj = B.new
obj.x
result -
main.rb:15:in `<main>': undefined method `x' for #<B:0x...> (NoMethodError)
Example 2 using remove_method
class A
def x
puts "x from A class"
end
end
class B < A
def x
puts "x from B Class"
end
remove_method :x
end
obj = B.new
obj.x
Result -
$ ruby main.rb
x from A class
I guess I can't comment on Adrian's answer because I don't have enough cred, but his answer helped me.
What I found: undef seems to completely remove the method from existence, while remove_method removes it from that class, but it will still be defined on superclasses or other modules that have been extended into this class, etc.
In Ruby 2.4 it looks to be undef_method now.
If you would like to remove a method whose name is calculated dynamically, you should use eigenclasses like:
class Foo
def self.bar
puts "bar"
end
end
name_of_method_to_remove = :bar
eigenclass = class << Foo; self; end
eigenclass.class_eval do
remove_method name_of_method_to_remove
end
This way is better than the other answers because here I used class_eval with a block. Since a block sees the current namespace, you can use your variables to remove methods dynamically.
Object.send(:remove_const, :Foo)
Doesn't that remove the whole class?
Technically this answer isn't inaccurate (i.e. this is, in fact, a way to remove class methods), since by removing class Foo it also removes all the class methods in Foo :P :P :P.
I mean, it's obviously not what the OP actually wants, but technically it's not false.
Other technically correct answers: 1) kill the containing Ruby process;
2) restart the OS; 3) throw the computer into a lake; 4) drop a nuclear bomb nearby; 5) trigger a supernova; 6) Wait for the heat death of the universe.
| common-pile/stackexchange_filtered |
Gitlab CI : get all commit changes from the push/merge
I would like to get the files that have changed since the last push.
Currently I can find the difference for the last commit. However, if I have several commits in one push, only the last commit is taken into account:
git diff-tree --no-commit-id --name-only -r ${CI_COMMIT_SHA} | while read FILE ; do
infra="$(echo ${FILE} | cut -d'/' -f1)";
application="$(echo ${FILE} | cut -d'/' -f2)";
project="$(echo ${FILE} | cut -d'/' -f3)";
if [[ " ${seen[*]} " == *"$project"* ]]
then
echo "Project ${project} has already been sync"
continue
fi
seen+=($project)
if [[ ! -d "${SCRIPT_DIR}/${infra}" ]] || [[ ! -d "${SCRIPT_DIR}/${infra}/${application}" ]] || [[ ! -d "${SCRIPT_DIR}/${infra}/${application}/${project}" ]]
then
echo "${SCRIPT_DIR}/${infra}/${application}/${project} not a valid folder"
continue
fi
pushd ${SCRIPT_DIR}/${infra}/${application}/${project}
echo "Auto delivery ${project}"
bash delivery.sh auto
retVal=$?
if [ $retVal -ne 0 ]; then
echo "Error (code $retVal) : check rsync return"
exit 1
fi
popd
done
In the delivery.sh I do an rsync on the folder that was changed in the last commit (${CI_COMMIT_SHA}). However I would like to take into account ALL the folders in the last push/MR, not just the commit.
Is this possible ?
This workflow seems a bit brittle. What should happen if, for some reason, delivery.sh failed on one push. Then you fix the problem that made delivery.sh fail in the next commit and push it. Should delivery.sh then look at just the last commit, or also redo the previous one it had failed on? It would seem safer to me to design delivery.sh to do its analysis based on the current snapshot, not just the most recent changes.
For what it's worth, I'm not aware of a way to do what you're asking, although there's probably a server-side equivalent to the reflog in GitLab.
@joanis I've added more code to the question. To answer your questions : delivery.sh is an rsync command (more complex than just one folder). I would like to know if I can have a push_id instead of ${CI_COMMIT_SHA}. Because actually, if I do two commits in a push, only the last one will be in ${CI_COMMIT_SHA} and then, only the last one will be sync
The solution is to use git diff with ^! :
git diff --no-commit-id --name-only -r ${CI_COMMIT_SHA}^!
With the ^! suffix, git diffs the commit against its parent.
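A self-contained demonstration of the ^! suffix on a throwaway repository (file names are made up): only the file touched by the most recent commit shows up in the diff.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name "CI"

echo one > first.txt
git add first.txt
git commit -qm "first commit"

echo two > second.txt
git add second.txt
git commit -qm "second commit"

# HEAD^! means "HEAD against its parent", so only second.txt is listed:
git diff --name-only HEAD^!
```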
What about merge requests? This answer only solves the problem when the action is a push.
You can probably use CI_MERGE_REQUEST_DIFF_BASE_SHA for it: git diff --no-commit-id --name-only -r $CI_MERGE_REQUEST_DIFF_BASE_SHA...HEAD. This variable is only available for Merge Request Pipelines: https://docs.gitlab.com/ee/ci/variables/predefined_variables.html#predefined-variables-for-merge-request-pipelines
| common-pile/stackexchange_filtered |
Deformation Retract and surjectivity
Let $A$ be a deformation retract of $X$ and $j_*: \pi_1(A,x_0) \to \pi_1(X,x_0)$ be the map induced by the inclusion map $j:A \to X$ (with $x_0 \in A$). I now want to prove that for every $x \in \pi_1(X,x_0)$ there exists a $y \in \pi_1(A,x_0)$ such that $j_*(y)=x$.
Okay since $A$ is a deformation retract there exists a retraction $r: X \to A$ and we have that $r \circ j =id_A$ and $j \circ r \sim id_X$. But how to continue? Can somebody help me?
(We don't know about functoriality yet and so we can't use it.)
A deformation retract is in particular a homotopy equivalence, and a homotopy-equivalence induces an isomorphism on all homotopy groups.
We can't use functoriality here... we haven't seen this term yet.
any $\alpha \colon I \to X$ is homotopic to $j\circ r(\alpha)$, in other words $[\alpha] = j_*([r\circ \alpha])$.
Does this answer your question? Deformation Retract is an isomorphism
Observe that if $X,Y$ and $Z$ are topological spaces and $f:(X,x_0)\to (Y,y_0)$ and $g:(Y,y_0)\to (Z,z_0)$ are two continuous functions,
then $(g \circ f)_*=g_*\circ f_*$.
We also know that $j \circ r \sim id_X \implies (j\circ r)_*=(id_X)_*$.
I take $A$ to be a Strong deformation retract. Then the homotopy relation $j \circ r \sim id_X$ is base point ($x_0\in A$) preserving.
Suppose $f,g:(A,x_0)\to (X,x_0)$ and $f\sim g$ is basepoint preserving then there is a continuous map $F:[0,1]\times A \to X$
such that $F(0,x)=f(x)$, $F(1,x)=g(x)$ and $F(t,x_0)=x_0$ $\forall x \in A , t\in [0,1]$.
If $\gamma:[0,1]\to A$ is a loop in $A$ based at $x_0$, you show that $f\circ\gamma$ and $g\circ \gamma$ are path-homotopic as loops based at $x_0$ in $X$, i.e.
$f\circ\gamma \sim g\circ \gamma$ preserving the base point $x_0$, which implies $[f\circ \gamma]=[g\circ \gamma]$.
Then you are done!
Define $G:[0,1]\times[0,1]\to X$ where $G(s,t)=F(s,\gamma(t))$
Observe $G(0,t)=F(0,\gamma(t))=f\circ \gamma(t)$, $G(1,t)=F(1,\gamma(t))=g\circ \gamma(t)$ and $G(s,0)=G(s,1)=x_0$.
Hence here we have $$j_*\circ r_*=(j\circ r)_*=(id_X)_*=id_{\pi_1(X,x_0)}\quad(\text{since } j \circ r \sim id_X)$$
$$r_*\circ j_*=(r\circ j)_*=(id_A)_*=id_{\pi_1(A,x_0)}\quad(\text{since } r \circ j = id_A)$$
$\therefore j_*$ is an isomorphism.
I am not familiar with functors. I know this simple proof .
Thanks for your answer, I understand everything except the part: "$j \circ r \sim id_X \implies (j\circ r)_*=(id_X)_*$". Why is this true? Sorry, I don't get it.
So do you know the fact that if $f\sim g$ then $f_*=g_*$?
No, unfortunately not
Would it be okay if I attach a link where you can find the proof ?
Yeah of course, I hope I understand everything
Have you learnt path-homotopy?
Yes, we learnt path-homotopy
I have edited the answer. Hope you understand it ! If you have doubts you have a look at the Chapter of Fundamental Group from "Topology" by James Munkres.
Is it clear now ?
@Noobmathematician (This is about your edit.) You need to show that $f_*[\gamma]$ and $g_*[\gamma]$ are path-homotopic. That is, you're missing the conditions $G(s, 0) = f\circ\gamma(0)$ and $G(s, 1) = g\circ\gamma(1)$. (This is actually a non-trivial fact.)
$j \circ r \sim id_X \implies (j\circ r)_*=(id_X)_*$ is only true if the homotopy is base-point preserving. This is not guaranteed unless $A$ is a strong deformation retract of $X$.
@feyhat Yes, you are right: this is guaranteed iff the homotopy is base point preserving. Apologies, I missed it. So is it possible that the fact posed in the question might not be true if $A$ is not a strong deformation retract of $X$?
@PaulFrost would it be possible for you to suggest some edits so that it looks good? :) And thanks for the correction.
@Noobmathematician I suggest that you mention that your proof is only valid for strong deformation retracts. The result is also true for (non-strong) deformation retracts, but the proof is much more complicated. See https://math.stackexchange.com/q/3465680.
@PaulFrost is it okay now ?
@Noobmathematician Yes, it is correct!
If the homotopy between $j\circ r$ and $id_X$ can be taken to fix the basepoint (as in the case of a "strong" deformation retract) then you can argue as follows.
Hint: Since $j\circ r \sim id_X$, for any $Y$ and any continuous function $f\colon Y \to X$ we have $f \sim j\circ r\circ f$.
Answer: In order to prove surjectivity without using functoriality, notice that if you have any based loop $\alpha \colon I \to X$ then $\alpha = id_X\circ \alpha$ is homotopic to $j\circ r\circ \alpha$. In particular
$$[\alpha] = j_*([r\circ \alpha]) \in \pi_1(X, x_0) $$
and therefore $j_*\colon \pi_1(A, x_0) \to \pi_1(X, x_0)$ is surjective.
This simple argument can be adapted to prove something much more general: if $[Y, X]$ denotes the set of pointed homotopy classes of continuous functions and $h\colon A \to X$ is a pointed homotopy equivalence then the induced function $h_*\colon [Y, A] \to [Y, X]$ is a bijection for every $Y$ (for the fundamental group result let $Y=S^1$).
Alternatively, once you know functoriality (i.e. $(f\circ g)_* = f_* \circ g_*$) and homotopy-invariance ($f\sim g \implies f_* = g_*$) then surjectivity follows from formal set-theoretic properties as in Noob mathematician's answer.
Your argument is only true if the homotopy is base-point preserving. This is not guaranteed unless $A$ is a strong deformation retract of $X$.
We need a strong deformation retract.
| common-pile/stackexchange_filtered |
Error in getting record Id in Lightning component quick action
I am getting the Error:
Component class instance initialization error
[Cannot read property 'g' of undefined]
quickActionHandlerHelper.js failed to create component - forceChatter:lightningComponent
There is no call to an Apex method or Chatter in this program. Kindly review the code and help. Thanks in advance.
COMPONENT
<aura:component implements="flexipage:availableForRecordHome,force:hasRecordId,force:lightningQuickAction" access="global" >
<aura:attribute name="recordId" type="String" />
<aura:handler name="UpdateOpp" action="{!c.updateOpportunity}" value="{!this}" />
</aura:component>
Controller
({
updateOpportunity : function(component, event, helper) {
alert('Executing...' + component.get("v.recordId") );
}
})
When you use force:hasRecordId, do not define your own recordId.
<aura:component implements="flexipage:availableForRecordHome,force:hasRecordId,force:lightningQuickAction" access="global" >
<aura:handler name="UpdateOpp" action="{!c.updateOpportunity}" value="{!this}" />
</aura:component>
UpdateOpp is not a valid name for a value handler. The documentation demonstrates a button to click on to do something, but if you're just getting started, you can see how it works by using the name "init":
<aura:component implements="flexipage:availableForRecordHome,force:hasRecordId,force:lightningQuickAction" access="global" >
<aura:handler name="init" action="{!c.updateOpportunity}" value="{!this}" />
</aura:component>
If you wanted a named event UpdateOpp, you would need to register an event:
<aura:register name="UpdateOpp" event="force:recordSave"
action="{!c.updateOpportunity}" />
You would need to select the appropriate event to use for the event attribute.
| common-pile/stackexchange_filtered |
Finding occurrence of character in a string in C using pointers only
Function find_any returns a pointer to the first occurrence of any member of the array whose
first element is pointed to by pointer vals and has len number of elements in a half-open
range of values. If none of the members of array vals are found, the function returns NULL.
This is the set of code that I've written so far but it does not seem to work.
I already have a function that can find if a single character occurs in a string, but I can't seem to make it check for multiple characters.
I need to only use pointers and am not allowed to have subscripts or any include directives.
char const* find_any(char const *begin, char const *end, char const *vals, int len){
int flag=0,i=0;
while(begin!=end){
while(i!=len){
if(*begin==*vals){
flag=1;
break;
}
vals++;
i++;
}
i=0;
begin++;
}
    if(flag==1) return begin;
    else return NULL;
}
You do know that break only breaks out of the inner while loop, right? So this function will always return end or NULL
Make the inner loop into a for loop: for (int i = 0; i != len; i++). It makes the code easier to read when you undo the clutter of the earlier declaration and the extra i = 0;. You could do return begin; instead of flag = 1; break;, which lets you lose the flag variable and an extra test (you simply return NULL if you exit the outer loop).
However, your primary problem is that you increment vals, but you expect to iterate over the same string on the second character, and third…make a copy of vals for use in the loop. You are implementing a variant on strpbrk() — the difference is that you are not working with null-terminated strings but with 'pointer plus length/end' byte arrays. The fact that the vals value is not guaranteed to be null-terminated means you can't use strchr() and there isn't a strnchar() function available as standard.
This looks like a very good time to learn how to debug your programs. For example by using a debugger to step through the code line by line while monitoring variables and their values.
Also, when it comes to pointers I always recommend you use pen and paper to draw it all out. For example, draw rectangles representing the string you search, and the string of characters you want to find. Then draw arrows labeled begin, end and vals and point the arrows at their respective positions when the function is called. Then when you modify a pointer, erase and redraw the arrow representing the variable. If you do that you will find out one major problem with the code, quite quickly.
thanks everyone for the replies, i'll definitely take in all the advice that I see and improve.
Diagnosis
One of your primary problems is that you increment vals, but you expect to iterate over the same string multiple times: for the second character, and the third, and … You need to make a copy of vals for use in the loop.
You are implementing a variant on strpbrk() — the difference is that you are not working with null-terminated strings but with 'pointer plus length/end' byte arrays. The fact that the vals value is not guaranteed to be null-terminated means you can't use strchr() and there isn't a strnchar() function available as standard; otherwise you could use a function call in place of the inner loop.
Make both the loops into for loops: for (int i = 0; i != len; i++). It makes the code easier to read when you undo the clutter of the earlier declaration and the extra i = 0;. You should use return begin; instead of flag = 1; break;, which lets you lose the flag variable and an extra test (you simply return NULL if you exit the outer loop).
Another of your problems is that the break only exits the inner loop whereas you want to exit the outer loop too when you find the character. You could use a goto statement and a label, but that's worse than using a return in the middle of the function. Unless your tutor has a rule against return in the middle of a function, use it.
Fixed code
#include <stddef.h>
char const *find_any(char const *begin, char const *end, char const *vals, int len)
{
for (const char *haystack = begin; haystack < end; haystack++)
{
for (const char *needle = vals; needle < vals + len; needle++)
{
if (*haystack == *needle)
return haystack;
}
}
return NULL;
}
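As the diagnosis notes, strpbrk() itself isn't usable in the original problem because vals need not be null-terminated, but for ordinary C strings it does exactly this job. A sketch for contrast (the wrapper name is mine):

```c
#include <string.h>

/* With null-terminated strings, the standard strpbrk() already
 * implements "find the first occurrence of any of these characters".
 * It returns a pointer into haystack, or NULL when nothing matches. */
const char *find_any_strpbrk(const char *haystack, const char *needles)
{
    return strpbrk(haystack, needles);
}
```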
Interface Consistency
I note that the interface uses two different methods for dealing with byte arrays that are not null terminated: pointers to the start and (one place beyond) the end, and a pointer to the start and the length. The interface should probably be more consistent — consistent code is easier to use. Either mechanism is valid, but use only one mechanism in a given function (and use it consistently in a suite of functions). Use either:
extern char const *find_any(char const *haystack, char const *h_end, char const *needle, const char *n_end);
or:
extern char const *find_any(char const *haystack, size_t h_len, char const *needle, size_t n_len);
or even:
extern char const *find_any(size_t h_len, char const haystack[h_len], size_t n_len, char const needle[n_len]);
Also, it's generally good practice to use size_t for sizes/lengths, rather than int (though I'm sure there are some who disagree). At the least, using size_t means you don't have to worry about checking for negative sizes or lengths.
Testing the code
The function is made static to avoid complaints from my default compiler options (source file ba59.c; program ba59). If the function were to be included in a library, there would be a header that declares the function (along with other functions) that would be included in both the source code that implements find_any() and in any code that uses the function.
gcc -O3 -g -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes -Wstrict-prototypes -fno-common ba59.c -o ba59
Code:
#include <stddef.h>
static char const *find_any(char const *begin, char const *end, char const *vals, int len)
{
for (const char *haystack = begin; haystack < end; haystack++)
{
for (const char *needle = vals; needle < vals + len; needle++)
{
if (*haystack == *needle)
return haystack;
}
}
return NULL;
}
#include <stdio.h>
static void test_find_any(const char *src, const char *end, const char *vals, int vlen)
{
const char *rv = find_any(src, end, vals, vlen);
if (rv == NULL)
printf("No characters from [%.*s] were found in [%.*s]\n",
vlen, vals, (int)(end - src), src);
else
printf("Found character from [%.*s] at [%.*s]\n",
vlen, vals, (int)(end - rv), rv);
}
int main(void)
{
char haystack[] = "Conniptions";
char needle1[] = "ABXYZabxyz";
char needle2[] = "ion";
char needle3[] = "zap";
char *h_end = haystack + sizeof(haystack) - 1;
int n1_len = sizeof(needle1) - 1;
int n2_len = sizeof(needle2) - 1;
int n3_len = sizeof(needle3) - 1;
test_find_any(haystack, h_end, needle1, n1_len);
test_find_any(haystack, h_end, needle2, n2_len);
test_find_any(haystack, h_end, needle3, n3_len);
return 0;
}
Result:
No characters from [ABXYZabxyz] were found in [Conniptions]
Found character from [ion] at [onniptions]
Found character from [zap] at [ptions]
| common-pile/stackexchange_filtered |
Is there a way to test Triggers with mocks?
When I write triggers, I try to follow best practices by implementing them according to the Separation of Concerns principle. This design pattern allows me to encapsulate logic and separate it from my trigger. The problem I run into is how to test a trigger without also testing logic from dependent classes. Take this example below:
trigger AccountTrigger on Account (after insert) {
if(Trigger.isAfter)
{
if(Trigger.isInsert)
{
Foo foo = new Foo();
if(foo.isTriggerable){
foo.execute(Trigger.new);
}
}
}
}
This trigger works, but there is no way to test it without also testing the logic encapsulated in the Foo class. My trigger only cares about whether it should call or shouldn't call foo.execute(). So it shouldn't need to test that logic contained in that method.
In other programing languages, I would test this by mocking the Foo class and calling something like foo.execute.shouldBeCalled(). Unfortunately, there doesn't seem to be a way to do this in Apex.
I've considered using static variables to mimic what a mock would do, but I'd love to hear how other people where able to get around this (even if it's not elegant). I'm aware you can mock HTTP callouts, is there also a way to mock classes and methods?
Note that as the trigger is after insert the Trigger.isAfter and Trigger.isInsert are redundant. (Perhaps you included them to just bulk out the example?)
@KeithC, in this example it is redundant. Just using it to demonstrate my point.
On your question, my view is that the coupling between code moved out into a separate class and the trigger contexts - isInsert/isUpdate/isDelete/isUndelete combined with isBefore/isAfter plus triggers causing each other to run - is more significant than is sometimes implied. So testing triggers and their logic as a whole by inserting/updating/deleting/undeleting objects rather than trying to mock anything is more cost effective. But I will be interested to see the answers you get.
note that a trigger framework such as https://developer.salesforce.com/page/Trigger_Frameworks_and_Apex_Trigger_Best_Practices eliminates all code in the trigger except for a call to a handler
| common-pile/stackexchange_filtered |
Nuxt 3 with Netlify-Edge preset: ERROR nitro server handler (e.adapter || ls.adapter) is not a function
I have a Nuxt 3 chatbot app that works locally. However, when I deploy to Netlify using the netlify-edge preset, I get a 404 error when calling the server/api/chat.post.js endpoint. It seems that although the endpoint is reachable, something is wrong with the Nitro server (the 404 is a very vague error).
Again, the app works locally with no problems. It's just that when deployed to Netlify, the Nitro server for the chat API fails.
Checking the Edge Function log on Netlify, I see the following message:
[nitro server handler] (e.adapter || ls.adapter) is not a function
Anyone know what that means?
Here's my code:
server/api/chat.post.js:
import { getChatStream } from '../utils/ai'
export default defineEventHandler(async (event) => {
try {
const { messages } = await readBody(event)
const stream = await getChatStream({ messages })
return stream
} catch (error) {
console.error('error on server', error)
}
})
/utils/ai:
import { Configuration, OpenAIApi } from 'openai'
const config = useRuntimeConfig()
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY || config.OPENAI_API_KEY
})
const openai = new OpenAIApi(configuration)
const systemPrompts = [..removed..]
export const getChatStream = async ({ messages }) => {
try {
const response = await openai.createChatCompletion(
{
model: 'gpt-3.5-turbo',
messages: [
...systemPrompts, ...messages
],
temperature: 0.5,
stream: true // Not supported in OpenAI Node SDK yet
},
{
timeout: 15000,
responseType: "stream" // see above
}
);
return response.data;
} catch (error) {
if (error.response) {
console.log(error.response.status);
console.log(error.response.data);
} else {
console.log(error.message);
}
}
};
Any chance this has something to do with using axios with Netlify edge functions? I think the OpenAI Node SDK I am using has axios under the hood. It looks like fetch is used under the Nitro hood, right? That could be a reason for the Nitro adapter error when deploying with Edge Functions vs vanilla Netlify.
| common-pile/stackexchange_filtered |
How would you say something is like something else?
I know something similar to this has been asked before: Saying something is like/not like something else
I'm just wondering if there's a phrase to express 'like-ness' using senses, for example, "this apple tastes like a banana" or "my socks smell like sweat".
Would you express it as...
このりんごの味はバナナの味のようです。
This apple's taste is like a banana's taste.
靴下の匂いは汗の匂いみたいです。
[My] socks' smell is like the smell of sweat.
I can't help but think this way of expressing it is kind of long-winded...
Thank you very much! :)
What kind of weird apple/banana mutations are you eating?
@istrasci I don't know about bananas, but...
I'm not wholly confident about this, but I think your sentences could be more naturally expressed as このリンゴはバナナの味がする and 靴下は汗くさい, rather than trying to use a parallel "the A of B is like the C of D" structure.
@senshin: That's really weird. Also, I wouldn't put it past the Japanese to create such things.
| common-pile/stackexchange_filtered |
Apache Spark Simulator
Can Spark in standalone mode, or Databricks Community Edition, be used as a simulator for research purposes? I want to simulate an idea because I do not have access to a physical cluster for my research on optimizing Spark configuration.
Yes, you can run Spark in local mode (as a library inside a single JVM), launch it from your IDE, and debug it there.
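For context, "local mode" just means setting the master URL to local[n]: the scheduler, executors, and storage layers all run as threads in one JVM, which is usually enough to prototype configuration-tuning ideas. An illustrative spark-defaults.conf fragment (the values are made up):

```
# spark-defaults.conf (illustrative values)
# local[4] simulates a 4-core "cluster" inside a single JVM
spark.master                  local[4]
spark.driver.memory           2g
# one of the knobs you might experiment with when tuning
spark.sql.shuffle.partitions  8
```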
| common-pile/stackexchange_filtered |
Get information from multidimensional json in php
I am trying to get back information from a multidimensional json array but can't seem to get it right.
Below is an example of the output from the url which you can see yourself at http://<IP_ADDRESS>:3000/
{ "vnc_version": "<IP_ADDRESS>", "mod_version": "<IP_ADDRESS>", "server": { "name": "Pure Blood", "framework": "Microsoft Windows NT 6.2.9200.0" }, "stats": { "uptime": 53462.0, "uptime_peak": 53462.0, "online": 1, "online_max": 1, "online_peak": 2, "unique": 1, "unique_max": 1, "unique_peak": 2, "items": 111752, "items_max": 112259, "items_peak": 112259, "mobiles": 37963, "mobiles_max": 37976, "mobiles_peak": 37978, "guilds": 0, "guilds_max": null, "guilds_peak": 0 }, "players": [ { "info": { "id": 1, "name": "aN.Droid", "title": "", "profile": "", "guild_id": -1, "guild_abbr": "" }, "stats": [ ], "skills": [ ], "equip": [ ] } ], "guilds": [ ] }
What I would like to do is echo the name in the players array. There will be more than one player.
Can anyone please help me out here and point me in the correct direction to get this information?
I am very new to json so excuse my ignorance on the subject.
Thank you!
<?php
$json = '{ "vnc_version": "<IP_ADDRESS>", "mod_version": "<IP_ADDRESS>", "server": { "name": "Pure Blood", "framework": "Microsoft Windows NT 6.2.9200.0" }, "stats": { "uptime": 54383.3, "uptime_peak": 54383.3, "online": 1, "online_max": 1, "online_peak": 2, "unique": 1, "unique_max": 1, "unique_peak": 2, "items": 111672, "items_max": 112259, "items_peak": 112259, "mobiles": 37944, "mobiles_max": 37976, "mobiles_peak": 37978, "guilds": 0, "guilds_max": null, "guilds_peak": 0 }, "players": [ { "info": { "id": 1, "name": "aN.Droid", "title": "", "profile": "", "guild_id": -1, "guild_abbr": "" }, "stats": [ ], "skills": [ ], "equip": [ ] } ], "guilds": [ ] }';
$array = json_decode($json, true);
// you want 'true' as the second parameter so it turns this JSON data into a multidimensional array instead of objects.
foreach($array['players'] as $player){
print_r($player['info']);
//etc.. do what you want with each player
}
Thank you this works but when I replace the $json variable with $json = file_get_contents('http://<IP_ADDRESS>:3000/'); I get garbage back. Do you have any ideas as to why or if I am using file_get_contents incorrectly?
Fixed it. The output was compressed :) Using curl I got around it. Thanks again for the help!
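For anyone hitting the same "garbage" output: it usually means the server gzip-compressed the response, which file_get_contents hands back verbatim (curl can negotiate and decode it with --compressed). A tiny offline sketch of the effect, using made-up data:

```shell
set -e
# Simulate a gzip-compressed JSON response from the server.
printf '{"players":[{"info":{"name":"aN.Droid"}}]}' | gzip > response.gz
# The raw bytes are binary and look like garbage to a JSON parser,
# but decompressing recovers the original document:
gunzip -c response.gz
```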
$json = '{ "vnc_version": "<IP_ADDRESS>", "mod_version": "<IP_ADDRESS>", "server": { "name": "Pure Blood", "framework": "Microsoft Windows NT 6.2.9200.0" }, "stats": { "uptime": 53462.0, "uptime_peak": 53462.0, "online": 1, "online_max": 1, "online_peak": 2, "unique": 1, "unique_max": 1, "unique_peak": 2, "items": 111752, "items_max": 112259, "items_peak": 112259, "mobiles": 37963, "mobiles_max": 37976, "mobiles_peak": 37978, "guilds": 0, "guilds_max": null, "guilds_peak": 0 }, "players": [ { "info": { "id": 1, "name": "aN.Droid", "title": "", "profile": "", "guild_id": -1, "guild_abbr": "" }, "stats": [ ], "skills": [ ], "equip": [ ] } ], "guilds": [ ] }';
$arr = json_decode($json, true);
foreach($arr["players"] as $player) print($player["info"]["name"]."<br/>");
| common-pile/stackexchange_filtered |
Undefined property: Craft\WebApp::$urlManager in /var/www/craft/app/controllers/BaseController.php on line 37
So I'm trying to setup my first Craft installation on Digital Ocean. I'm bound to use Craft version 2.0.2535 since I'll be duplicating a site using this (so I cannot use Craft 3).
I've spun up a new Droplet on DigitalOcean. I've set up Apache, PHP and MySQL and verified it all works. After copying the Craft files and resolving all the issues regarding folder permissions, I was finally ready to run the installer, but I get this:
Notice: Undefined property: Craft\WebApp::$urlManager in
/var/www/craft/app/controllers/BaseController.php on line 37
Fatal error: Uncaught Error: Call to a member function
getRouteParams() on null in
/var/www/craft/app/controllers/BaseController.php:37 Stack trace: #0
/var/www/craft/app/framework/web/CController.php(308):
Craft\BaseController->getActionParams() #1
/var/www/craft/app/framework/web/CController.php(286):
CController->runAction(Object(CInlineAction)) #2
/var/www/craft/app/framework/web/CController.php(265):
CController->runActionWithFilters(Object(CInlineAction), NULL) #3
/var/www/craft/app/framework/web/CWebApplication.php(282):
CController->run('renderError') #4
/var/www/craft/app/framework/base/CErrorHandler.php(331):
CWebApplication->runController('templates/rende...') #5
/var/www/craft/app/framework/base/CErrorHandler.php(289):
CErrorHandler->render('error', Array) #6
/var/www/craft/app/etc/errors/ErrorHandler.php(149):
CErrorHandler->handleError(Object(CErrorEvent)) #7
/var/www/craft/app/framework/base/CErrorHandler.php(131):
Craft\ErrorHandler->handleError(Object(CErrorEvent)) #8
/var/www/craft/app/framework/base/CAp in
/var/www/craft/app/controllers/BaseController.php on line 37
I honestly have no idea what could be wrong. Any pointers would be very welcome...
That's a super old version – could it be a PHP version issue? Craft didn't have PHP 7.0 compatibility until 2.4.2697.
That sounds very plausible! I'll try that out. Let's hope not too much else breaks when I increase versions ;-)
I tested it – can confirm its a PHP version issue. I can run 2.0.2535 locally without issues on PHP 5.4.45; get the same error as you when I run it using PHP 7.
You're probably running PHP v. 7.0 or higher on your DigitalOcean box; Craft CMS didn't have PHP 7.0 compatibility until Craft 2.4.2697, which is why it chokes.
Downgrading your PHP version to 5.6 or 5.4 should resolve your issue.
You should also beware of your MySQL version – Craft 2 has issues on MySQL 5.7; ideally you should make sure your droplet uses 5.6.
And also – I'd look into upgrading the install to the latest and greatest Craft 2.x, if not Craft 3. A lot of good stuff has happened since 2014 ;)
| common-pile/stackexchange_filtered |
WKInterfaceTable: Swipe to edit like the stock Mail app
In the stock Apple Watch mail app you can swipe left to access more options for an email. How would I go about doing this with a WKInterfaceTable?
According to Apple's guidelines, it is not possible in watchOS to implement custom gestures as we can in iOS, because gestures in watchOS are managed by the OS itself.
What Apple says:
User interactions on Apple Watch generate touch events and gestures, but unlike iOS apps, your Watch apps don’t handle these events directly. The system provides automatic responses for all touch events and gestures, responding in the following ways:
• Taps trigger action-based events in your app
• Vertical swipes scroll the current screen
• Horizontal swipes display the previous or next page in a page-based interface
• Left edge swipes navigate back to a parent screen in a hierarchical interface
When the user taps a button or another control, Apple Watch calls that control’s associated action method. You define action methods for the controls in your interface and use them to respond to user interactions.
More on stackoverflow
Apple guideline
| common-pile/stackexchange_filtered |
Precise Statement of Strong Induction
I am a TA for a CS course and on an exam I was grading I had a "strong" discussion with the professor that the answer was incorrect involving strong induction. His explanation of strong induction is this:
Given some property P over the natural numbers we want to prove P(n) is true.
1. Let b be the base case and prove P(b) is true.
2. The inductive hypothesis is to assume that for all i, b < i < n, P(i) is true.
I tried to argue that if you don't assume that P(i) is true when i equals b then you can't use it. He says it is pointless to assume it is true because you already proved it in step 1. I pointed out that multiple books state that they include b but he just waved it off and said they were sloppy.
Does it matter if you include the base case or not in the inductive hypothesis?
Practically speaking, it doesn't matter much. You're allowed to use the fact that $P(b)$ is True whether or not you assume it, because you've proven it. Your professor is right that you don't have to assume it, and can only assume $P(i)$ for $b<i<n$. In that sense, it's more precise to not assume it for $b$. Personally I don't view the difference as particularly important and wouldn't have taken any points off.
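In symbols, the two versions of the inductive step differ only in whether $i = b$ is included in the hypothesis (notation mine):

```latex
% Textbook form: the hypothesis range includes the base case b.
\forall n > b:\quad \bigl(\forall i,\; b \le i < n:\; P(i)\bigr) \;\Longrightarrow\; P(n)

% Professor's form: P(b) is excluded from the hypothesis, but is still
% usable inside the proof because it was established in step 1.
\forall n > b:\quad \bigl(\forall i,\; b < i < n:\; P(i)\bigr) \;\Longrightarrow\; P(n)
```

Either way, the proof of the step may cite $P(b)$, so the two formulations establish the same statement.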
| common-pile/stackexchange_filtered |
Get Cursor Position of TextInput before it was focused out
I have a TitleWindow containing TextInputs. While it is popped up, I do other operations, like selecting a MenuItem from a Menu on the top-level application.
After selecting the menu item, I need to add text to the TextInput of the TitleWindow that was previously focused. I can get this previously focused TextInput, but I can't find the index where the cursor (caret) was positioned when it was focused. This is the position where I need to insert text.
var window:Window = FlexGlobals.topLevelApplication.window;
window.focusManager.activate();
var textInput:TextInput = window.focusManager.getFocus() as TextInput;
Have you tried to listen to focus out on the input fields and then record caret position?
textInput.addEventListener(FocusEvent.FOCUS_OUT, internal_onFocusOutHandler, false, 0, true);
protected function internal_onFocusOutHandler(e:FocusEvent):void
{
trace(textInput.selectionBeginIndex + "," + textInput.selectionEndIndex);
}
there is also an example here
| common-pile/stackexchange_filtered |
Where do sudoer reports go?
So I'm testing out a small CentOS build (rackspace cloud). I set up my user and went to do some sudo'ing. Well, I forgot the step to add my user to the sudoers file with visudo. So of course, I get this error:
is not in the sudoers file. This incident will be reported.
Never saw it before, so now I'm wondering. Where does this get reported? Does it just get sent to mail, or is it logged somewhere?
Thanks for any help
It will typically get logged to /var/log/secure, and mail will be sent to root on the local system. You can control this behavior in your /etc/sudoers file. There is a suite of mail_* configuration options that determine when sudo sends out mail, and there are additional options that control how it logs to syslog.
Source: http://xkcd.com/838/
(for the real answer, look at larsks's answer instead).
hah, looks like i'm getting coal for christmas this year :(
| common-pile/stackexchange_filtered |
Flash ProgressEvent Not Showing Total Size
I'm using a ProgressEvent in Flash to determine how long something will take to download. I've got this:
progress = event.target.bytesLoaded/event.target.bytesTotal;
to set a percentage.
After some scratching of my head, I did a trace on the two values - and it turns out that "event.target.bytesTotal" is always equaling zero.
I can't find any mention of this in the Flex/AS3/Flash API. Any hints on how to get bytesTotal to work?
(I'm currently reading from a PHP file on the webserver)
Have you tried:
progress = event.bytesLoaded/event.bytesTotal;
bytesTotal / bytesLoaded should be a property of the progress event.
Also... I had this problem yesterday, and it totally stumped me until I thought to check the file I was loading, and it ended up being corrupt and 0 bytes - so double check that too:)
Hm, your code produces the exact same effect (bytesLoaded works fine, bytesTotal always reports 0). I know the file is fine because it eventually downloads and works. :)
Really strange. If you try the code with another php file, or something else, does it do the same thing?
If you check the urlloader.bytesTotal (or whatever you're using) property of whatever you're loading, is that also zero?
Maybe the file you're loading doesn't have Content-Length header set somehow?
We've solved this issue on our server by disabling the compression of some file types.
The bytesTotal was 0 for files that were being served compressed. This compression happens on-the-fly and that is why the server cannot give the size of the file (because it doesn't know it yet). Removing the compression solved it.
| common-pile/stackexchange_filtered |
how to use std::copy for int**
My2DClass::My2DClass(const int r, const int c, int mat[3][3]):m_r(r),m_c(c)
{
matrix = new int*[r];
for (int i = 0; i < r; i++)
matrix[i] = new int[c];
for (int i = 0; i < m_r; i++) {
for (int j = 0; j < m_c; j++) {
matrix[i][j] = mat[i][j];
}
}
//std::copy(&mat[0][0], &mat[0][0] + m_r * m_c, &matrix[0][0]);
}
How to use std::copy() for int**? The commented line throws an exception at runtime.
The parameter you're passing to the constructor seems to always be mat[3][3]. What do the r and c parameters accomplish, then? They are used to dynamically allocate a vector of pointers, and you seem to be attempting to copy mat[3][3] into that vector of pointers. Which only makes sense if both r and c are always 3. What do you expect to happen when they're not?
The ideal solution would probably be to flatten your 2d matrix and do a single copy operation. The matrix = new int*[r]; part seems out of place. Use a std::vector<int>.
The int mat[3][3] parameter is really treated as just int** mat. So, even though the code documents that the dimensions are expected to be 3x3, the compiler does not enforce that in this case, so r and c could be other values. If you really want to enforce 3x3, you would need to pass in the array by reference instead.
@SamVarshavchik Actually I want to accept the parameter also as double pointer but for debugging purpose I have used mat[3][3].
@RemyLebeau The int mat[3][3] parameter is really treated as just int** mat. nope. It is adjusted to be int(*)[3]. The inner dimension cannot be other than 3.
You are allocating an array of pointers to arrays, so a single call to std::copy() will not work. You would have to call std::copy() on every array individually. So really, only your inner-most loop can be replaced with std::copy(), eg:
My2DClass::My2DClass(const int r, const int c, int mat[3][3])
: m_r(r), m_c(c)
{
matrix = new int*[r];
for (int i = 0; i < r; i++)
matrix[i] = new int[c];
for (int i = 0; i < m_r; i++) {
std::copy(&mat[i][0], &mat[i][m_c], matrix[i]);
// or: std::copy_n(&mat[i][0], m_c, matrix[i]);
}
}
If you consolidate the 2 remaining loops into a single loop, eg:
My2DClass::My2DClass(const int r, const int c, int mat[3][3])
: m_r(r), m_c(c)
{
matrix = new int*[r];
for (int i = 0; i < r; ++i) {
matrix[i] = new int[c];
std::copy(&mat[i][0], &mat[i][c], matrix[i]);
// or: std::copy_n(&mat[i][0], c, matrix[i]);
}
}
Then you could replace that loop with std::for_each(), eg:
My2DClass::My2DClass(const int r, const int c, int mat[3][3])
: m_r(r), m_c(c)
{
matrix = new int*[r];
int i = 0; // row index advanced by the lambda below
std::for_each(matrix, matrix + r,
[&](int* &arr){
arr = new int[c];
std::copy(&mat[i][0], &mat[i][c], arr);
// or: std::copy_n(&mat[i][0], c, arr);
++i;
}
);
}
Though, you really should avoid using new[] manually at all, consider using std::vector instead, which would greatly simplify management of your arrays, eg:
// std::vector<std::vector<int>> matrix;
My2DClass::My2DClass(const int r, const int c, int mat[3][3])
: m_r(r), m_c(c), matrix(r)
{
for(int i = 0; i < r; ++i) {
matrix[i].assign(&mat[i][0], &mat[i][c]);
}
}
Otherwise, consider flattening your matrix into a single 1-dimensional array instead. Especially since the input mat is a flat array in memory anyway. Then you can do a single std::copy() from one to the other, eg:
// int *matrix;
My2DClass::My2DClass(const int r, const int c, int mat[3][3])
: m_r(r), m_c(c), matrix(new int[r*c])
{
std::copy(&mat[0][0], &mat[0][0] + r*c, matrix);
}
Or, using std::vector instead:
// std::vector<int> matrix;
My2DClass::My2DClass(const int r, const int c, int mat[3][3])
: m_r(r), m_c(c), matrix(&mat[0][0], &mat[0][0] + r*c)
{
}
Either way, you can convert 2-dimensional indexes into 1-dimensional indexes using this formula:
(r * m_c) + c
For example:
int& My2DClass::operator()(int r, int c)
{
return matrix[(r * m_c) + c];
}
And when c is more than 3, hilarity ensues.
Thanks @Remy Lebeau, it helps. Actually I was implementing a class having a double pointer as a member variable.
@shrabanakumar I know, and that is not necessarily the best way to go.
| common-pile/stackexchange_filtered |
Creating Multiple Debian Packages with cmake
I have a cmake project consisting of a set of executable files, independent of each other with two shared libraries. I want to pack each executable file into a deb package.
As a result, I get one deb package with all programs and libs.
part of the source code:
cmake_minimum_required (VERSION 3.12)
set (CPACK_GENERATOR "DEB")
set (CPACK_DEBIAN_PACKAGE_MAINTAINER "i am")
set (CPACK_DEB_COMPONENT_INSTALL 1)
include (CPack)
add_executable (module1 main.cpp)
install (TARGETS module1
RUNTIME DESTINATION bin
COMPONENT component1)
add_library (my_lib SHARED map.cpp templates.cpp)
add_executable (module2 main.cpp utils.cpp)
target_link_libraries (module2 PUBLIC my_lib)
install(TARGETS module2 my_lib
RUNTIME DESTINATION bin
LIBRARY DESTINATION lib
COMPONENT component2)
How to divide programs into different deb packages?
Well that's the answer
set (CPACK_GENERATOR "DEB")
set (CPACK_DEBIAN_PACKAGE_MAINTAINER "Your name")
set (CPACK_DEB_COMPONENT_INSTALL ON)
include (CPack)
function (add_package TARGET_NAME TARGET_PATH DESCR)
install(TARGETS "${TARGET_NAME}"
DESTINATION "${TARGET_PATH}"
COMPONENT "${TARGET_NAME}")
cpack_add_component_group("${TARGET_NAME}")
cpack_add_component("${TARGET_NAME}"
DISPLAY_NAME "${TARGET_NAME}"
DESCRIPTION "${DESCR}"
GROUP "${TARGET_NAME}"
INSTALL_TYPES Full)
endfunction ()
add_executable (my_program1 main.cpp)
add_package(my_program1 "bin" "Description")
add_executable (my_program2 main.cpp)
add_package(my_program2 "bin" "Description")
and run in terminal
make package
| common-pile/stackexchange_filtered |
5x vshost.exe in Windows Firewall Exceptions
Under Windows Firewall Exceptions I see 5 instances of vshost.exe. Is this normal, or should I be worried? I do run weekly scans with AVG Free, and have Windows Firewall on, that's basically all my protection.
Do you use Visual Studio to develop network-connected applications? vshost.exe is the debugger hosting process. Anytime you debug an application in Visual Studio, the hosting process is the actual process that is running and, as such, has firewall exceptions for any app that acts as a network server. I'm pretty sure that vshost.exe always needs at least a couple of exceptions, so I wouldn't worry about them.
aha.. yes I do develop some programs with socket connections, TcpClient.. Good, then should be no worry. Cheers!
| common-pile/stackexchange_filtered |
Browser stops responding when databinding to a dropdown
I have an ASP.NET C# web form application.
When I try to fill a drop-down with a huge number of records, about 225000 records :),
the browser stops responding and shows an error:
A script on this page may be busy, or it may have stopped responding. You can stop the script now, open the script in the debugger, or let the script continue.
What do I have to do? Is this because of the huge amount of data, or some other problem? Please help.
1. We can't help without seeing any code. 2. For the love of god don't fill a dropdown with 225k items, how would you expect that to be usable?
It is due to the huge amount of data. You can use lazy-load functionality with AJAX calls to achieve this. Also, please don't expect a direct answer to this as your requirement is too broad. We can point you to some tutorials and online references though.
Which browser shows the error? Chrome? IE8? Firefox?
First of all, I don't think that a normal drop-down would be an ideal user control for such an amount of data.
The error is shown because the DOM rendering happening on the client side is simply taking too much time.
If you still want to use this control then you will need to implement load-on-demand logic for it. You'll need to create a new Dropdown user control with a class that implements IScriptControl, and on the client side you can control the rendering mechanism.
An explanation on how to do it:
https://forums.asp.net/t/1503727.aspx?load+on+demand+combobox+
| common-pile/stackexchange_filtered |
MobilePush Notification for Marketing Cloud Cordova plugin not displayed
We are using the Marketing Cloud Cordova plugin (https://github.com/salesforce-marketingcloud/MC-Cordova-Plugin) to implement MobilePush notifications on the app, but it's not working when the Cordova plugin Firebasex is included in the dependency package.
It seems like the message sent from Marketing cloud is received on the Cordova Firebasex SDK as an alert with the messageType "Data". Due to that the message is not displayed on the mobile by default. If we try to send the same message directly using Firebase API, it will show the notification and message type is "Notification"
subtitle: "Feb 07 V002"
messageType: "data"
_h: "nuU1OPKKJgxvvfuF4tgVPQAAAAAA"
_m: "Mjk6MTE0OjA"
_r: "b7d2ef7c-a19d-4bf3-8e59-270e75e29ab7"
id: "31"
_mt: "1"
ttl: "2419200"
_sid: "SFMC"
from: "751521060866"
alert: "Feb 07 V002"
sound: "default"
title: "Feb 07 V002"
sent_time: "1581051995078"
show_notification: "false"
If you're planning on using Firebase's messaging in conjunction with the Marketing Cloud SDK then you're going to have to configure your project for Multiple Push Providers (ref. Multiple Push Providers Setup).
If multiple push providers wasn't your intention then please follow the plugin implementation instructions, which do not require you to add the dependency you're adding. (ref. Cordova Plugin Installation)
Hi Bill, thanks for the recommendation. Since our app is developed using Cordova, we will have to use the Cordova Plugin Installation library for Marketing Cloud integration. Currently the app includes the Firebase plugin for tracking Google Analytics data, so it's conflicting with the Cordova Marketing Cloud SDK.
You can use Firebase Analytics without including Firebase Messaging.
| common-pile/stackexchange_filtered |
How to fix "default slot raw html rendering in Vue.js component"
I have a custom input component and try to use a slot to pass another element to that component, but when I use any HTML, like a simple button, inside the custom input component's tags in the parent component, the content renders as raw HTML (like escaped HTML text).
I tried writing the HTML button code inside the <slot></slot> tag of the custom input component and that works fine, but when passed from the parent component it's broken!
the custom input template is like this:
<template>
<div class="form-group" :class="{ 'ltr ltr-input': ltr }">
<textarea :id="id" :value="value" @change="input"></textarea>
<label :for="id">{{ fieldLabel }}</label>
<slot>
<button>fallback</button>
</slot>
</div>
</template>
on the parent element:
<TextArea id="message" v-model="message" label="message" required>
<button type="submit">Submit</button>
</TextArea>
This is a screenshot of the result of the above code:
You shouldn't be using native HTML tags for your Vue component element tags. In the eyes of the parser, <TextArea> is the same as <textarea> since tags are case-insensitive. Because of this, any content inside your <TextArea> component will simply be rendered as a plain string in the native <textarea> element. Try to create an MCVE: this is not expected behavior of a <slot> component and without any further code it's not possible to pinpoint what has went wrong.
@Terry thanks you, you save my time, i wasn't know that, just change my component name and everything work's fine thank you, if you post your comment as a answer I will accept it and close this question ;-)
Consider that done :) glad that I've managed to help you with your problem.
A little more explanation from my comment: you are using a "reserved" element tag word for your Vue component name, which might explain the weirdness you have encountered. Due to the case insensitivity of HTML tag names, <TextArea> is simply parsed as <textarea> by the web browser, and inherits all the default rendering behavior for that native element. This means that whatever text content that is between your <TextArea> tags will simply be rendered as plain text as they would be in a native <textarea> element.
To circumvent this issue, you should always strive to name your Vue components to be unique: two words is a good start, since HTML tag names don't comprise two words for now. So, you can rename <TextArea> to <custom-textarea> or <v-textarea> and it should work: just remember to update your template name as well.
| common-pile/stackexchange_filtered |
Create a mocked list with objects
I want to create a JUnit test with tests mocked objects:
public class BinCountryCheckFilterImplTest {
private RiskFilterService riskFilterService = null;
@Before
public void beforeEachTest() {
List<RiskFilters> list = new ArrayList<RiskFilters>();
riskFilterService = Mockito.mock(RiskFilterService.class);
// put here list of List<RiskFilters> and return it
}
@Test
public void testBinCountryCheckFilterImpl() {
List<RiskFilters> filter_list = riskFilterService.findRiskFiltersByTerminalIdAndType(11, "test");
// do something
}
}
How can I return the List<RiskFilters> when RiskFilterService is called?
Second attempt:
public class BinCountryCheckFilterImplTest {
private RiskFilterService riskFilterService = null;
@Mock
List<RiskFilters> mockList = new ArrayList<RiskFilters>();
@BeforeClass
public void beforeEachTest() {
//if we don't call below, we will get NullPointerException
MockitoAnnotations.initMocks(this);
mockList.add(new RiskFilters());
riskFilterService = Mockito.mock(RiskFilterService.class);
}
@Test
public void testBinCountryCheckFilterImpl() {
when(riskFilterService.findRiskFiltersByTerminalIdAndType(anyInt(), anyString())).thenReturn(mockList);
List<RiskFilters> filter_list = riskFilterService.findRiskFiltersByTerminalIdAndType(11, "BinCountryCheckFilter");
}
}
But I get an NPE for riskFilterService. Looks like the method with the @Test annotation is called before @BeforeClass.
RiskFilterService is the class you want to test? And what method you want to mock?
Yes, I want ot mock the result from findRiskFiltersByTerminalIdAndType
something like: Mockito.when(riskFilterService.findRiskFiltersByTerminalIdAndType(anyInt(), anyString()).thenReturn(new ArrayList<RiskFilters>());
I get The method thenReturn(ArrayList<RiskFilters>) is undefined for the type List<RiskFilters>
can you post the method you want to test? And what jUnit version do you use?
I have a question. See the updated post with the second attempt. I get an NPE for riskFilterService because @Test is called before initialization. How can I fix this?
@Willem I managed to fix it just NPE problem is left.
Junit 4 uses @Before and junit 5 @BeforeAll
Well @BeforeEach solved the problem.
You can just replace private RiskFilterService riskFilterService = null; with private RiskFilterService riskFilterService = Mockito.mock(RiskFilterService.class); and remove your @Before method
Yes i tried it but I have more test in which I have to initialize more test cases with different values.
By the way, if I have for example 4 test methods with the @Test annotation, how can I make Java methods that set up precondition test data for each of them? I could make 4 different test classes but this is another story....
The way i do it, is to declare the data inside the @Test method that should be returned. If i have complex object to build i declare a private method inside the test to get the object so i can reuse it in other Test classes. Only for data that is always the same i declare it in a field in the test class.
Ok, thanks. Looks a standard way to do this.
When a List or any other Collection is required in a unit test, the first question to ask yourself is: should I create a mock for it, or should I create a List or a Collection containing mocks.
When the logic being tested is not using the list, but just passing the list than you can mock it.
Otherwise it is usually better not to mock a List or a Collection but to create a normal one containing mocked objects because it can become very difficult to know which methods of the List or Collections need to be stubbed. Which methods are called when using a for loop to iterate the items, when using an iterator on them, when using a stream on them, ... ? I often use Collections.singletonList or Arrays.asList with mocked parameters to initialise lists when writing unit tests.
I see that you mock the list and then you call the add method to add data to it while setting up the test. It doesn't make sense to add data to a mocked list. You can use Mockito.when to return it when it should be returned, but then you would get in trouble because you might need to stub more methods and it would be hard to know which ones (isEmpty, size, ...). That you are adding a dataobject to list probably means the method being tested is not just passing the list but will access the data in it. In that case, don't mock the list, but mock the data objects which you put in it.
Well, you provided very little information, but let me put it through:
you must have a BinCountryCheckFilter class. Please initialise it in your test class and add the @InjectMocks annotation:
@InjectMocks
private BinCountryCheckFilter binCountryCheckFilter;
Take riskFilterService = Mockito.mock(RiskFilterService.class); out of @BeforeClass and declare it directly as a field initializer.
But this will just mock your class and will not test anything. One thing you can test is the number of calls made. See below:
verify(mockList, times(1)).yourMethodName();
or add following in your test or before method
when(riskFilterService.yourMethodName()).thenReturn(yourReturnValue);
This way you will be able to mock the data you want.
Let me know if any other clarity needed.
I am not sure of your JUnit version, but you can now remove the complete @BeforeClass method from your code, and the
@Mock
List<RiskFilters> mockList = new ArrayList<RiskFilters>();
too.
| common-pile/stackexchange_filtered |
How to compute an integral?
I am reading the lecture notes. I am trying to understand the proof of Lemma <IP_ADDRESS> on page 4. From line 3 to line 4 in the proof of Lemma <IP_ADDRESS>, how does one prove that
$$
\int_{F^{n-1}} \hat{1}_{\mathfrak{p}^{-k}} \ (x) \pi \left( \begin{matrix} 1_{n-1} & x \\ 0 & 1 \end{matrix} \right) v dx = vol(\mathfrak{p}^{-k}) \int_{F^{n-1}} 1_{\mathfrak{p}^{m+k}} \ (x) \pi\left( \begin{matrix} 1_{n-1} & x \\ 0 & 1 \end{matrix} \right) v dx?
$$
Where do we use the condition that $\mathfrak{p}^m$ is the conductor of $\psi$ in the proof of Lemma <IP_ADDRESS>? Thank you very much.
This question, and the deleted answer below, reminds me of a paper I read. The first sentence was "Let p and q be primes." After that, I understood nothing.
Oh well, to my family I am a mathematician, to my friends I am a mathematician, but to a mathematician I am not a mathematician.
@martycohen - don't we all know the feeling...
It is really only a statement about $\def\P#1{{{\mathfrak p}^{#1}}}\hat 1_{\P{-k}}$, where
$$ \hat 1_{\P {-k} }(x) = \int_{F^{n-1}} 1_{ \P {-k} } (y) \psi ( y^tx) \, dy = \int_{(\P {-k})^{n-1}}\psi ( y^tx) \, dy ,$$ and it follows from a general fact about integration over (compact) topological groups (with Haar measure):
Fact: Suppose $\psi: G \mapsto \mathbb C^*$ is a character. Consider
$$ \int_G \psi (g) dg.$$
If $\psi$ is not identically one, then the integral is zero. Otherwise, the integral is the volume of the group.
Proof - if there exists $h\in G$ such that $\psi(h) \not = 1$, then $$\int_G \psi ( g )\, dg = \int_G \psi ( h g)\, dg = \psi (h) \int_G \psi( g) \, dg. $$ So the integral must vanish. On the other hand, if $\psi \equiv 1$, the integral is the volume.
The conductor $\P m$ is the largest subgroup of $F$ on which $\psi$ is trivial - correct? Therefore the statement follows from the above fact: $\hat 1_{ \P {-k}}(x)$ is non-zero if and only if $x \in (\P {k+m})^{n-1}$, and then equal to the volume of $(\P {-k})^{n-1}$, i.e., $\hat 1_{ \P {-k}}$ is the corresponding indicator function $1_{ \P {k+m}}$ multiplied by the volume.
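To spell out the step where the conductor enters (my paraphrase of the argument above): for fixed $x$, the map $y \mapsto \psi(y^t x)$ is a character of the compact group $(\mathfrak p^{-k})^{n-1}$, so the Fact gives

```latex
\hat 1_{\mathfrak p^{-k}}(x) \neq 0
\;\iff\; \psi(y^t x) = 1 \ \text{for all } y \in (\mathfrak p^{-k})^{n-1}
\;\iff\; x_j\,\mathfrak p^{-k} \subseteq \mathfrak p^{m}\ \text{for each coordinate } x_j
\;\iff\; x \in (\mathfrak p^{k+m})^{n-1},
```

and in that case the integral equals $\mathrm{vol}\big((\mathfrak p^{-k})^{n-1}\big)$. The middle equivalence is exactly where the assumption that $\mathfrak p^m$ is the conductor (the largest ideal on which $\psi$ is trivial) is used.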
thank you very much. But in the formula it is $vol(\mathfrak{p}^{-k})$ not $vol((\mathfrak{p}^{-k})^{n-1})$.
I had noticed that too. But I 'assumed' that it was short-hand (or a typo, and copied over from $n=2$). After all, on the one hand, the above has to be correct and on other, the 'indicator' functions $I$ above have as subscript the ideals, rather than the corresponding subgroups of $F^{n-1}$, which I would have preferred. (cont)
(cont) Also, the 'self-duality of measure' used in the last equality of the lemma (in the original text) implies a choice of measure, on $F^{n-1}$ certainly, even if dependent on ${\mathfrak p}^m$ (as in Tate's thesis on $F$); yet the text writes ${\rm vol}({\mathfrak p}^j)$. Correct? Therefore - short-hand or typo. Agree?
yes, I agree with you. Thank you very much.
| common-pile/stackexchange_filtered |
Nativescript Run on Device IOS - dyld Library not loaded @rpath/Nativescript.framework
Runs fine on the emulator (pretty basic app, only a few changes to check the deploy works etc. before continuing).
On a device it throws this error:
dyld: Library not loaded: @rpath/NativeScript.framework/NativeScript
Referenced from: /private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/mobileapp
Reason: no suitable image found. Did find:
/private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/Frameworks/NativeScript.framework/NativeScript: code signature invalid for '/private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/Frameworks/NativeScript.framework/NativeScript'
/private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/Frameworks/NativeScript.framework/NativeScript: stat() failed with errno=25
/private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/Frameworks/NativeScript.framework/NativeScript: code signature invalid for '/private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/Frameworks/NativeScript.framework/NativeScript'
/private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/Frameworks/NativeScript.framework/NativeScript: stat() failed with errno=1
/private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/Frameworks/NativeScript.framework/NativeScript: code signature invalid for '/private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/Frameworks/NativeScript.framework/NativeScript'
/private/var/containers/Bundle/Application/2F0F006E-3DC7-4017-A024-820AE0612E1D/mobileapp.app/Frameworks/NativeScript.framework/NativeScript: stat() failed with errno=1
Any assistance much appreciated; totally stumped.
That error is from Xcode.
The application builds and deploys; however, on run it just hangs on the first screen with that error.
Are you trying to launch the app from Xcode on your device?
Since paying for the Apple Developer Program, it has deployed without any issues. I can only assume that this was the issue.
Previously I was only using the free certificate provided.
Interested to hear issues others had.
It seems that Apple restricted the usage of free certificates in combination with embedded frameworks....
| common-pile/stackexchange_filtered |
Single line grid layout without grid-template-rows hack
I want a description list of items that, using CSS grid, sits on a single row, with some variability in the width of each item, and which hides each item as the width becomes too small to accommodate it.
So far I have come up with this, which takes advantage of the auto-fill keyword and minmax() function in grid-template-columns. Currently I have a bit of a hack to hide the items that won't fit: setting overflow: hidden, but also setting a large number of 0's in grid-template-rows relative to the number of items in the list. If you remove grid-template-rows (and adjust the width) you'll see the overflowing items below.
Is there a way I can hide these overflowing elements without hardcoding a large number of 0's?
div {
display: inline-block;
}
dl {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(10rem, 1fr));
grid-column-gap: 20px;
grid-template-rows: 1fr 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
overflow: hidden;
}
dd {
font-weight: bold;
margin: 0;
}
<dl>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
</dl>
Questions seeking code help must include the shortest code necessary to reproduce it in the question itself preferably in a Stack Snippet. Although you have provided a link, if it was to become invalid, your question would be of no value to other future SO users with the same problem. See Something in my website doesn't work can I just paste a link.
It's not a different solution, but use repeat() to save you from having to write out a bunch of 0s.
So, use grid-template-rows: 1fr repeat(50, 0); instead:
Alternatively, you could set a height to the dl, say 80px, and use grid-template-rows: repeat(auto-fill, 80px);. This might not be an option if the content is variable in length/height.
div {
display: inline-block;
}
dl {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(5rem, 1fr));
grid-column-gap: 20px;
grid-template-rows: repeat(auto-fill, 80px);
overflow: hidden;
height: 80px;
}
dd {
font-weight: bold;
margin: 0;
}
<dl>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
<div>
<dt>Column name</dt>
<dd>Example value</dd>
</div>
</dl>
Yep, I think that's an improvement on several zeroes despite it still relying on a hard-coded value. It feels like I'm trying to get Grid to do something it wasn't really intended for and this is therefore the best solution.
| common-pile/stackexchange_filtered |
論語 4-5 裡的「之」指甚麼? What does 之 refer to?
I speak modern Mandarin. I'm trying to learn classical Chinese. Reading Van Norden's Guide for absolute beginners, I'm stuck in Lesson 7, Analects 4.5:
「子曰。富與貴。是人之所欲也。不以其道得之。不處也。貧與賤。是人之所惡也。不以其道得之。不去也。」
The issue, as Van Norden points out, is that in the straightforward interpretation the 之 in the second 得之 refers to 貧與賤, which doesn't make sense. (Who would mind avoiding poverty if it wasn't "deserved", so to say?) He gives one such seemingly nonsensical translation by James Ware.
All Chinese translations I could find online say something along the lines of:
貧窮與低賤是人人都厭惡的,但不用正當的方法去擺脫它,就不會擺脫的。(https://kknews.cc/culture/j8k98re.html)
There is another version:
貧和賤,是人人所憎惡的,如果不幸,而淪於貧賤,也不可違而去之。(http://twowin.com.tw/gogo_card/chinese/p8.htm)
The first version says 擺脫 instead of 得之, which appears to be the opposite.
Could that be the case? Then the translation would make sense, but I'm not sure whether you can 「得」 「去-ing」 something (whether 去 can count as an object of 得), if you know what I mean.
Note: Van Norden seems to suggest something of the kind in his text, but he stays rather vague and never gives his own translation of anything throughout the entire book.
EDIT
Added some context contained in Van Norden's book.
Van Norden writes:
The second half of 4.5 has puzzled some interpreters, because the seemingly obvious way to take the grammar results in the second half of 孔子's comment not making sense. Can you see why? The key to understanding this quotation is correctly answering the following question: In the expression 得之, what does the pronoun 之 refer to? Normally it refers back to something earlier in the sentence. Here, I think, it refers to something later in the sentence.
James Ware's translation:
[...] Poverty and low estate are what men dislike; but if they come undeserved, don't flee them
(Van Norden: this doesn't make sense)
D.C. Lau's translation:
[...] Poverty and low station are what men dislike, but even if I did not get them in the right way I would not try to escape from them.
(Footnote by Lau: "This sentence is most likely to be corrupt. The negative is probably an interpolation and the sentence should read: 'Poverty and low station are what men dislike, but if I got them in the right way I would not try to escape from them.'")
Note that Lau's alternate translation is still quite different from the Chinese translations above.
James Legge's translation:
[...] Poverty and meanness are what men dislike. If it cannot be obtained in the proper way, they should not be avoided.
Here Legge maintains the ambiguousness of the original by using "it" and "they". What does "it" refer to here? It sounds to me like "it" might refer to the avoidance.
To me, the question is more like "what does 得 mean in this sentence?"
@joehua if you decide that 之 refers to 貧與濺 then yes, that does become the question. But I'm not aware of any reasonable answer other than ~to obtain?
By the way, it should be 賤 not 濺 .
@joehua Yep, can’t edit comments though :(
It just says: honesty and honour are the best policies.
之 refers to 富與貴
子曰。
Confucius said:
富與貴
wealth and nobility
是人之所欲也。
that's what everyone wants yeah!
不以其道得之。
But, if a noble man can't get that by honest means,
不處也。
a noble man cannot enjoy it.
貧與賤,
Poor and lowly
是人之所惡也!
that's what everyone loathes yeah!
不以其道得之
but, if a noble man cannot break free (from his lowly station) by honest means,
不去也。
a noble man will be content with his lot.
Where does it say, or imply, “break free” in your opinion? Are you saying the second 得之 also refers to 富與貴,just like the first one?
“贫与贱,是人人都厌恶的,” 人人不要““贫与贱” This Old Chinese is can be cryptic. Read between the lines. a
First comment went a bit wrong! “贫与贱,是人人都厌恶的,” 人人都不要“贫与贱”。 This Old Chinese can be cryptic. Don't just take it at face value. Read between the lines. The second "不以其道得之" is used to keep things poetic; 得之 refers to "通过正当的途径摆脱". 美德是最重要的。
「子曰。富與貴。是人"的"(之)所欲也。不以其道得"到"(之)。不處也。
Here, 之 refer to 富 and 貴. "不處也" can be explained as 不能常久(處)
貧與賤。是人"的"(之)所惡也。不以其道得"到"(之)。不去也。」
Here, 之 refer to 貧 and 賤. "不去也" means 不會離去.
IMO, the phrase 不以其道"得"之 would be clearer if changed to 不以其道"處"之 - meaning 不合理處理, 貧與賤不會離去. However, "不以其道得之" can be explained as "不循正道得來的貧與賤不會離去". Here, 不能擺脫 is a better choice of words than 不會離去.
Thank you for your answer. Interpreting 得 this way is something that many translators, including Van Norden, apparently didn't think of. 處理 doesn't strike me as an obvious meaning of 得. What makes you think this is possible? Are there any examples? I added some comments by those authors to the question for context.
I don't like that sentence; while I know what kind of message the author wanted to deliver, the sentence does not make good sense. See my update, which may be getting closer to its true meaning.
Yes, I agree with that assessment. Still I'm trying to figure out whether the sentence is just 'corrupt' as Lau claims, or whether there is a sensible interpretation. In your update "不循正道得來的貧與濺不會擺脫" looks like the obvious translation by James Ware I posted in the edit. But it doesn't really make sense does it? As Van Norden writes, "why on earth would anyone think they should not flee 'poverty and low estate' that they do not deserve"? OR do you mean the 貧賤 obtained due to not following the 正道?
No, it does not make sense, as 貧濺 (poor) has nothing to do with "其道" - "that way", or the "correct way" (正途) as in the first sentence. Also, the author implies one can't escape 貧濺 if the cause for becoming 貧濺 was caused by improper conduct, then, I think 恥辱 would make a better case/example. Anyway, this sentence has flaws, or I don't really have a good understanding of it.
By the way, you might want to edit the word "濺" to "賤". They are two different vocabularies, the former is "splash"; the latter is "lowly".
woops, thanks for catching the typo. I noticed it's in your last comment as well ':D
Yes, It was copied from the original text :)
hehe my bad -- fixed
| common-pile/stackexchange_filtered |
SVM margin equation derivation question
I have a question about the margin equation
$$\frac{a}{||w||}$$
Where is this equation coming from?
I think it comes from subtracting $(w^{T}x + b - a) - (w^{T}x + b)$, but I'm not sure how the margin equation is derived.
That is the distance between two parallel hyperplanes: i.e. the distance between $\mathbf{w}^T\mathbf{x}+b=c_1$ and $\mathbf{w}^T\mathbf{x}+b=c_2$. The normal (perpendicular) vector for these hyperplanes is $\mathbf{w}$ by definition. And the distance is measured with $\frac{|c_1-c_2|}{||\mathbf{w}||}$, check here. In your case $c_1=a, c_2=0$ or $c_1=0, c_2=-a$. A simpler situation is lines in 2D, or planes in 3D, which can be found (with its proof) in this wiki entry.
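The distance formula can also be checked numerically: take any point on one hyperplane and measure its distance to the other. A small sketch (Python/NumPy, with arbitrary example values for w, b and a chosen purely for illustration):

```python
import numpy as np

# Arbitrary example values (for illustration only)
w = np.array([3.0, 4.0])   # normal vector of both hyperplanes
b = 1.0
a = 2.0                    # hyperplanes: w.x + b = a  and  w.x + b = 0

# Closed-form distance between the parallel hyperplanes: |c1 - c2| / ||w||
closed_form = abs(a - 0.0) / np.linalg.norm(w)

# Numerical check: pick a point x0 on {w.x + b = a}; its distance to
# {w.x + b = 0} is the point-to-hyperplane distance |w.x0 + b| / ||w||
x0 = (a - b) * w / np.dot(w, w)          # satisfies w.x0 + b = a
point_to_plane = abs(np.dot(w, x0) + b) / np.linalg.norm(w)

print(closed_form)       # both print 0.4 (up to float rounding),
print(point_to_plane)    # since |a - 0| / ||w|| = 2 / 5
```

The two numbers agree, which is exactly the $\frac{a}{||w||}$ margin from the question (with $c_1=a$, $c_2=0$).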
How to get VIF from an h2o regression
I'm trying to get the VIF scores from an h2o regression. Is there a VIF-like function, or data stored within h2o?
Here's my example:
library(ggplot2)
library(h2o, quietly = TRUE)
library(tibble)
#build h20 sessions
h2o::h2o.init()
#> Connection successful!
mtcars.df <- as.h2o(mtcars)
#>
|=================================================================| 100%
#set x & y vars
y <- "mpg"
x <- setdiff(dput(names(mtcars)), "mpg")
#> c("mpg", "cyl", "disp", "hp", "drat", "wt", "qsec", "vs", "am",
#> "gear", "carb")
dput(names(mtcars))
#> c("mpg", "cyl", "disp", "hp", "drat", "wt", "qsec", "vs", "am",
#> "gear", "carb")
model <- h2o.glm( y = "mpg", x = setdiff(dput(names(mtcars)), "mpg"), training_frame = mtcars.df)
#> c("mpg", "cyl", "disp", "hp", "drat", "wt", "qsec", "vs", "am",
#> "gear", "carb")
#>
|
| | 0%
|
|=================================================================| 100%
model
#> Model Details:
#> ==============
#>
#> H2ORegressionModel: glm
#> Model ID: GLM_model_R_1554907509984_6
#> GLM Model: summary
#> family link regularization
#> 1 gaussian identity Elastic Net (alpha = 0.5, lambda = 1.0132 )
#> number_of_predictors_total number_of_active_predictors
#> 1 10 9
#> number_of_iterations training_frame
#> 1 1 mtcars_sid_8128_1
#>
#> Coefficients: glm coefficients
#> names coefficients standardized_coefficients
#> 1 Intercept 26.298144 20.090625
#> 2 cyl -0.447375 -0.798977
#> 3 disp -0.005674 -0.703231
#> 4 hp -0.011042 -0.757065
#> 5 drat 0.859638 0.459630
#> 6 wt -1.185114 -1.159584
#> 7 qsec 0.000000 0.000000
#> 8 vs 0.655750 0.330509
#> 9 am 1.116929 0.557338
#> 10 gear 0.123540 0.091148
#> 11 carb -0.350465 -0.566071
#>
#> H2ORegressionMetrics: glm
#> ** Reported on training data. **
#>
#> MSE: 6.511253
#> RMSE: 2.551716
#> MAE: 2.00629
#> RMSLE: 0.113459
#> Mean Residual Deviance : 6.511253
#> R^2 : 0.8149633
#> Null Deviance :1126.047
#> Null D.o.F. :31
#> Residual Deviance :208.3601
#> Residual D.o.F. :22
#> AIC :172.7651
#formula
f <- as.formula(paste(y, paste(x, collapse = " + "), sep = " ~ "))
model_lm <- lm(f, data = mtcars)
#model output
model_lm
#>
#> Call:
#> lm(formula = f, data = mtcars)
#>
#> Coefficients:
#> (Intercept) cyl disp hp drat
#> 12.30337 -0.11144 0.01334 -0.02148 0.78711
#> wt qsec vs am gear
#> -3.71530 0.82104 0.31776 2.52023 0.65541
#> carb
#> -0.19942
# package for vif variables
library(car)
#> Warning: package 'car' was built under R version 3.5.3
#> Loading required package: carData
#>
#> Attaching package: 'car'
#> The following object is masked from 'package:dplyr':
#>
#> recode
# list of VIF values
car::vif(model_lm) %>% as_tibble(rownames = "x_vars") %>% arrange(desc(value))
#> Warning: Calling `as_tibble()` on a vector is discouraged, because the behavior is likely to change in the future. Use `enframe(name = NULL)` instead.
#> This warning is displayed once per session.
#> # A tibble: 10 x 2
#> x_vars value
#> <chr> <dbl>
#> 1 disp 21.6
#> 2 cyl 15.4
#> 3 wt 15.2
#> 4 hp 9.83
#> 5 carb 7.91
#> 6 qsec 7.53
#> 7 gear 5.36
#> 8 vs 4.97
#> 9 am 4.65
#> 10 drat 3.37
Created on 2019-04-10 by the reprex package (v0.2.1)
A VIF function isn't currently available in H2O-3, but you can always create a JIRA ticket to make a feature request for it, or try to do the calculation manually.
Alternatively, depending on your end goal, you could use the remove_collinear_columns which, as stated in the docs is used to: "specify whether to automatically remove collinear columns during model-building. When enabled, collinear columns will be dropped from the model and will have 0 coefficient in the returned model. This can only be set if there is no regularization (lambda=0)."
I am looking at this response right now, and I think the documentation was changed and no longer mentions what the immediate result is. The part "collinear columns will be dropped from the model and will have 0 coefficient in the returned model." seems to be missing from the docs.
SQL Update One Table If Record Does Not Exist In Another Table
SELECT c.*
FROM customers c LEFT JOIN
invoices i
ON i.customer_id = c.id
WHERE i.customer_id IS NULL
The above works to give me all the customer accounts that have no invoices. It takes a long time to run, but I'm not concerned with the speed. I will likely only run this a couple times a year.
What I can't get right is updating a record in the customers table when the account has no invoices. I have tried a number of different ways to accomplish this but always get a syntax error.
One attempt is below...
UPDATE c
SET active=0
FROM customers c LEFT JOIN
invoices i
ON i.customer_id = c.id
WHERE i.customer_id IS NULL
I get a syntax error in the Join when I try to run this.
https://stackoverflow.com/questions/15209414/how-to-do-3-table-join-in-update-query
The correct MySQL syntax is:
UPDATE customers c LEFT JOIN
invoices i
ON i.customer_id = c.id
SET active = 0
WHERE i.customer_id IS NULL;
The use of JOIN in an UPDATE is rather database-specific. For instance, MySQL doesn't support the FROM clause in an UPDATE (SQL Server and Postgres do).
Standard syntax that should work in any database is:
UPDATE customers
SET active = 0
WHERE NOT EXISTS (SELECT 1 FROM invoices i WHERE i.customer_id = customers.id);
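The standard NOT EXISTS form can be sanity-checked quickly with an in-memory database (SQLite driven from Python here, just as a test bench; the table contents are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, active INTEGER);
    CREATE TABLE invoices  (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 1), (2, 1), (3, 1);
    INSERT INTO invoices  VALUES (10, 1);   -- only customer 1 has an invoice
""")

# Deactivate every customer with no invoices
conn.execute("""
    UPDATE customers
    SET active = 0
    WHERE NOT EXISTS (SELECT 1 FROM invoices i
                      WHERE i.customer_id = customers.id)
""")

print(conn.execute("SELECT id, active FROM customers ORDER BY id").fetchall())
# [(1, 1), (2, 0), (3, 0)]  -- customer 1 untouched, 2 and 3 deactivated
```

Customer 1 keeps active = 1; the invoice-less customers are flipped to 0, which is exactly the behavior the question asks for.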
You just made a little mistake; the query below will work:
UPDATE customers c
LEFT JOIN invoices i ON i.customer_id = c.id
SET active=0
WHERE i.customer_id IS NULL
Redirect only if url ends with slash
I have a redirect for product urls, and now I want to create one for categories.
I have tried this in htaccess:
RewriteRule ^product/(.*) https://my-url/product-categorie/$1 [R=301,L]
but it also affects the urls ending with *.html
How do I avoid this?
So no URLs ending with *.html should be redirected, only the URLs ending with /.
Well, your current matching pattern matches anything following the product/ prefix, correct? So you need to be more specific here:
RewriteRule ^product/(.*)/$ https://my-url/product-categorie/$1 [R=301,L]
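The effect of anchoring the pattern with a trailing slash can be checked outside Apache, since the same regex semantics apply. A quick test bench (Python's `re` used here only for illustration, with hypothetical URL paths):

```python
import re

old_pattern = re.compile(r"^product/(.*)")    # original rule: matches everything
new_pattern = re.compile(r"^product/(.*)/$")  # anchored rule: trailing slash only

urls = ["product/shoes/", "product/shoes.html", "product/shoes"]

for url in urls:
    print(url, bool(old_pattern.match(url)), bool(new_pattern.match(url)))
# product/shoes/      -> old: True, new: True
# product/shoes.html  -> old: True, new: False   (no longer redirected)
# product/shoes       -> old: True, new: False
```

Only the trailing-slash URL matches the anchored pattern, which is the behavior the question asks for.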
I would also suggest some additional modifications, but that is optional:
RewriteRule ^/?product/([^/.]*)/$ https://my-url/product-categorie/$1 [R=301,END]
In case you receive an internal server error (http status 500) using the rule above then chances are that you operate a very old version of the apache http server. You will see a definite hint to an unsupported [END] flag in your http servers error log file in that case. You can either try to upgrade or use the older [L] flag, it probably will work the same in this situation, though that depends a bit on your setup.
It is a good idea to start out with a 302 temporary redirection and only change that to a 301 permanent redirection later, once you are certain everything is correctly set up. That prevents caching issues while trying things out...
This implementation will work likewise in the http server's host configuration or inside a distributed configuration file (".htaccess" file). Obviously the rewriting module needs to be loaded inside the http server and enabled in the http host. In case you use a distributed configuration file, you need to take care that its interpretation is enabled at all in the host configuration and that it is located in the host's DOCUMENT_ROOT folder.
And a general remark: you should always prefer to place such rules in the http server's host configuration instead of using distributed configuration files (".htaccess"). Those distributed configuration files add complexity, are often a cause of unexpected behavior, are hard to debug, and they really slow down the http server. They are only provided as a last option for situations where you do not have access to the real http server's host configuration (read: really cheap service providers) or for applications insisting on writing their own rules (which is an obvious security nightmare).
How can I create a RadioButtonList in an MVC View via the Html helper class (Razor syntax)?
I need to show my list in a RadioButtonList, something like this:
@Html.RadioButtonList("FeatureList", new SelectList(ViewBag.Features))
But as you know, there is no RadioButtonList method in the Html helper class, and when I use:
@Html.RadioButton("FeatureList", new SelectList(ViewBag.Features))
it shows me a blank list!
// Controller codes :
public ActionResult Rules()
{
ViewBag.Features = (from m in Db.Features where m.ParentID == 3 select m.Name);
return View();
}
Html.RadioButton does not take (string, SelectList) arguments, so I suppose the blank list is expected ;)
You could 1)
Use a foreach over your radio button values in your model and use the Html.RadioButton(string, Object) overload to iterate your values
// Options could be a List<string> or other appropriate
// data type for your Feature.Name
@foreach(var myValue in Model.Options) {
@Html.RadioButton("nameOfList", myValue)
}
or 2)
Write your own helper method for the list--might look something like this (I've never written one like this, so your mileage may vary)
public static MvcHtmlString RadioButtonList(this HtmlHelper helper,
string NameOfList, List<string> RadioOptions) {
StringBuilder sb = new StringBuilder();
// put a similar foreach here
foreach(var myOption in RadioOptions) {
sb.Append(helper.RadioButton(NameOfList, myOption));
}
return new MvcHtmlString(sb.ToString());
}
And then call your new helper in your view like (assuming Model.Options is still List or other appropriate data type)
@Html.RadioButtonList("nameOfList", Model.Options)
How to handle SerializationException after deserialization
I am using Avro and Schema registry with my Spring Kafka setup.
I would like to somehow handle the SerializationException, which might be thrown during deserialization.
I found the following two resource:
https://github.com/spring-projects/spring-kafka/issues/164
How do I configure spring-kafka to ignore messages in the wrong format?
These resources suggest that I return null instead of throwing a SerializationException when deserializing, and listen for KafkaNull. This solution works just fine.
I would however like to be able to throw an exception instead of returning null.
KIP-161 and KIP-210 provide better features to handling exceptions. I did find some resources mentioning KIP-161 in Spring Cloud, but nothing specific about Spring-Kafka.
Does anyone know how to catch SerializationException in Spring Boot?
I am using Spring Boot 2.0.2
Edit: I found a solution.
I would rather throw an exception and catch it than have to return null or KafkaNull. I am using my custom Avro serializer and deserializer in multiple different projects, some of which are not Spring. If I changed my Avro serializer and deserializer, then some of the other projects would need to be changed to expect the deserializer to return null.
I would like to shut down the container, so that I do not lose any messages. The SerializationException should never be expected in production. It should only be possible if the Schema Registry is down or if an unformatted message somehow is sent to the production Kafka. Either way, a SerializationException should only happen very rarely, and if it happens I want to shut down the container so that no messages are lost and I can investigate the issue.
Just take into consideration that this will catch all exceptions from your consumer container. In my specific case I only want to shut down if it is a SerializationException.
public class SerializationExceptionHandler extends ContainerStoppingErrorHandler {
@Override
public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records, Consumer<?, ?> consumer,
MessageListenerContainer container) {
//Only call super if the exception is SerializationException
if (thrownException instanceof SerializationException) {
//This will shutdown the container.
super.handle(thrownException, records, consumer, container);
} else {
//Wrap and re-throw the exception
throw new KafkaException("Kafka Consumer Container Error", thrownException);
}
}
}
This handler is passed to the consumer container. Below is an example of a
KafkaListenerContainerFactory bean.
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
kafkaListenerContainerFactory(JpaTransactionManager jpa, KafkaTransactionManager<?, ?> kafka) {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(1);
factory.getContainerProperties().setPollTimeout(3000);
factory.getContainerProperties().setErrorHandler(new SerializationExceptionHandler());
factory.getContainerProperties().setTransactionManager(chainedTxM(jpa, kafka));
return factory;
}
There is nothing Spring can do; the deserialization occurs before the consumer gets any data. You need to enhance the deserializer.
I would however like to be able to throw an exception instead of returning null.
That won't help anything since Kafka won't know how to deal with the exception. Again; this all happens before the data is available so returning null (or some other special value) is the best technique.
EDIT
In 2.2, we added an error handling deserializer which delegates to the actual deserializer and returns null, with the exception in a header; the listener container then passes this directly to the error handler instead of the listener.
Thanks for clarifying. I will continue with KafkaNull or maybe try AspectJ.
I managed to find a solution. I have created a custom exception handler and pass it to the container.
@kkflf can you please provide an example of how you created?I know its quite late to ask but this is exactly my situation.
Fabric.js loadFromJSON() method undefined
I have written this function in a class:
this.loadCanvas = function(json) {
// parse the data into the canvas
stage.loadFromJSON(json);
// re-render the canvas
stage.renderAll();
}
Where stage is new fabric object.
But in the browser it is showing this error:
Uncaught TypeError: Cannot call method 'loadFromJSON' of undefined
Please tell how can I solve this?
We'll need a jsfiddle showing the error. Also, which Fabric version?
loadFromJSON only exists on fabric.Canvas and not on fabric.Object. Is stage defined? It looks like your stage object is undefined.
Qt c++ Does my program use static or dynamic linking?
Sorry about this probably stupid question - I don't know much about linking:
I use Qt Creator to program a GUI in C++ in a program that existed before, where I had to adapt it. Now my question is: how do I know whether the program uses static or dynamic linking?
When I install the program I wrote on another device I find a list of the executable(s) plus 5 dlls ( libgcc_s_dw2-1.dll, libxml2.dll, mingwm10.dll, QtCore4.dll, QtGui4.dll)
In my .pro file I have an entry CONFIG += qaxcontainer and another entry where I link libxml dynamically.
Does this mean all is linked dynamically?
Thank you very much
If your program requires the .dll file in order to run, it means that it is dynamically linked.
Static libraries are .lib (on Windows) and are embedded directly in the executable file.
That's very misleading; a .lib file can be an "import library," used to implicitly link against a .dll. For more information see this article on MSDN.
Which API: JavaFX or JMF is better for audio processing in Java?
I am doing a project in which I have to transform the audio data (which would most probably be in mp3, wav or wma format) into a waveform, and also get the FFT and pitch for it, along with the time in milliseconds at which the pitch changes.
I am just confused about which of these APIs is better. What are the limitations of each?
JMF is ancient, clunky, and basically unmaintained.
JavaFX may or may not support what you need, but at least it's on Oracle's radar for future development.
The hour of need says: keep using JMF, because I can't wait that long... And thanks for telling me that JMF is unmaintained... I'll take it as a challenge to use it, because I love challenges, and I have already started something with it.
You may want to check out FMJ, which is basically an open source replacement for JMF after Sun dropped the ball with maintaining JMF:
http://fmj-sf.net/
I haven't used it, but it does seem to have quite a few users and recently committed code which is a good sign....
I had already checked FMJ; it seems it would not be suitable, since I would need to understand that as well, along with JMF... so I think JMF will be suitable, as I have already started something with it. Thanks anyway.
insertion sort using linked list and pointers
I need to sort a linked list using insertion sort.
The elements looks like this
[ 0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3 ]
The result of sorting should look like
[ 0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 ]
The problem is that my result looks like this
[ 3 2 1 0 3 2 1 0 3 2 1 0 3 2 1 0 3 2 1 0 ]
I'm not sure why...I feel like it should work...but maybe other eyes will spot the problem.
Main:
int main (int argc, char **argv) {
IntElement *ints[N_ELEMENTS];
for (int i = 0; i < N_ELEMENTS; i++) {
ints[i] = new IntElement (i%4);
}
SortedList slist;
for (int i = 0; i < N_ELEMENTS; i++) {
slist.addElement (ints[i]);
}
slist.print();
printf ("last = %s\n", slist.get (slist.size ()-1)->toString ());
return 0;
}
sort function in .cpp File
void SortedList::addElement(IElement * element)
{
entryPtr n = new entry;
n->next = NULL;
n->me = element;
curr = head;
if (curr == NULL) {
//Executes when linked list is empty
head = n;
return;
}
if (n->me < curr->me)
{
//Executes if given data is less than data in first node of linked list
n->next = head;
head = n;
return;
}
else
{
while (curr != NULL)
{
if (n->me>curr->me)
{
//Traverse to location we want to insert the node + 1 node
prev = curr;
curr = curr->next;
continue;
}
else
{
//Insert the node
prev->next = n;
n->next = curr;
return;
}
}
//Insert node at last
prev->next = n;
}
}
.h File
class SortedList {
protected:
typedef struct entry { // stores an element and a pointer to the next element in the list
IElement *me;
entry *next;
}*entryPtr;
entryPtr head;
entryPtr curr;
entryPtr prev;
entryPtr temp;
public:
SortedList();
~SortedList() {};
void addElement(IElement *element); // adds element to the list and returns position, -1 if not added
IElement *get(int position); // returns element at given position
int size(); // returns the number of elements in the list
void print(FILE *file = stdout); // prints all elements
};
thanks for any help
Please [edit] your question to provide a [mcve].
Welcome to Stack Overflow. Please take the time to read The Tour and refer to the material from the Help Center what and how you can ask here.
The right tool to solve such problems is your debugger. You should step through your code line-by-line before asking on Stack Overflow. For more help, please read How to debug small programs (by Eric Lippert). At a minimum, you should [edit] your question to include a Minimal, Complete, and Verifiable example that reproduces your problem, along with the observations you made in the debugger.
Instead of comparing values pointed to by the pointers in statements like this
if (n->me < curr->me)
you are comparing pointers themselves.
You should rewrite the condition like
if ( *n->me < *curr->me )
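With the comparison dereferenced, the insertion logic itself is sound. For reference, the same algorithm sketched in Python (a minimal singly linked list, not the original C++ classes — values are compared directly, which is exactly what the pointer fix achieves):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class SortedList:
    def __init__(self):
        self.head = None

    def add(self, value):
        n = Node(value)
        # Empty list, or the new node belongs before the head
        if self.head is None or n.value < self.head.value:
            n.next = self.head
            self.head = n
            return
        # Walk until curr is the first node whose value exceeds n.value
        prev, curr = self.head, self.head.next
        while curr is not None and curr.value <= n.value:
            prev, curr = curr, curr.next
        prev.next = n
        n.next = curr

    def to_list(self):
        out, curr = [], self.head
        while curr is not None:
            out.append(curr.value)
            curr = curr.next
        return out

s = SortedList()
for i in range(20):
    s.add(i % 4)           # same input pattern as the question
print(s.to_list())
# [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3]
```

This produces the sorted output the question expects, confirming that the only real defect in the C++ version is the pointer comparison.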
How to get item from Recycler View?
I created a list using RecyclerView. Now I want to get the item after a click. I heard about getAdapterPosition but I don't know how to use it.
I tried something like this but honestly don't know how to get it.
My Fragment class code:
public class FragmentOne extends Fragment {
private static RecyclerView.Adapter adapter;
private RecyclerView.LayoutManager layoutManager;
private static RecyclerView recyclerView;
private List<MovieDb> data;
private MovieDb movieDb;
static View.OnClickListener myOnClickListener;
public static FragmentOne newInstance() {
FragmentOne fragment = new FragmentOne();
return fragment;
}
public class MovieTask extends AsyncTask<Void, Void, List<MovieDb>> {
@Override
protected List<MovieDb> doInBackground(Void... voids) {
MovieResultsPage movies = new TmdbApi("f753872c7aa5c000e0f46a4ea6fc49b2").getMovies().getUpcoming("en-US", 1, "US");
List<MovieDb> listMovies = movies.getResults();
return listMovies;
}
@Override
protected void onPostExecute(List<MovieDb> movieDb) {
data = movieDb;
adapter = new CustomAdapter(data);
recyclerView.setAdapter(adapter);
}
}
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View returnView = inflater.inflate(R.layout.fragment_one, container, false);
myOnClickListener = new MyOnClickListener(getContext());
recyclerView = (RecyclerView) returnView.findViewById(R.id.my_recycler_view);
recyclerView.setHasFixedSize(true);
layoutManager = new LinearLayoutManager(getContext()); // ???
recyclerView.setLayoutManager(layoutManager);
recyclerView.setItemAnimator(new DefaultItemAnimator());
MovieTask mt = new MovieTask();
mt.execute();
return returnView;
}
private class MyOnClickListener implements View.OnClickListener {
private final Context context;
private MyOnClickListener(Context context) {
this.context = context;
}
@Override
public void onClick(View v) {
int pos1;
pos1 = adapter.getItemId();
movieDb = data.get(pos);
TextView text;
text = (TextView) v.findViewById(R.id.text1);
Toast.makeText(getActivity(), text.getText(), LENGTH_SHORT).show();
}
}
}
The OnClick function is a mess because I tried a lot. I can get the text from the TextView but it is not what I want.
There is my Adapter class:
public class CustomAdapter extends RecyclerView.Adapter<CustomAdapter.MyViewHolder> {
private List<MovieDb> dataSet;
public int pos;
public static class MyViewHolder extends RecyclerView.ViewHolder {
TextView textViewName;
TextView textViewVersion;
ImageView imageViewIcon;
int pos;
public MyViewHolder(View itemView) {
super(itemView);
this.textViewName = (TextView) itemView.findViewById(R.id.text1);
this.textViewVersion = (TextView) itemView.findViewById(R.id.text2);
this.imageViewIcon = (ImageView) itemView.findViewById(R.id.imageView);
itemView.setOnClickListener(FragmentOne.myOnClickListener);
}
}
public CustomAdapter(List<MovieDb> data) {
this.dataSet = data;
}
@Override
public MyViewHolder onCreateViewHolder(ViewGroup parent,
int viewType) {
View view = LayoutInflater.from(parent.getContext())
.inflate(R.layout.cards_layout, parent, false);
//view.setOnClickListener(FragmentOne.myOnClickListener);
MyViewHolder myViewHolder = new MyViewHolder(view);
return myViewHolder;
}
@Override
public void onBindViewHolder(final MyViewHolder holder, final int listPosition) {
TextView textViewName = holder.textViewName;
TextView textViewVersion = holder.textViewVersion;
pos = holder.getAdapterPosition();
ImageView imageView = holder.imageViewIcon;
Glide.with(imageView).load("https://image.tmdb.org/t/p/w500" + dataSet.get(listPosition).getPosterPath()).into(imageView);
textViewName.setText(dataSet.get(listPosition).getOriginalTitle());
textViewVersion.setText(dataSet.get(listPosition).getReleaseDate());
}
public int getPos(){
return pos;
}
@Override
public int getItemCount() {
if(dataSet == null){
return 0;
}
return dataSet.size();
}
}
I try to get the position using int pos but I don't know how to use it in the OnClick function.
use onClickListener inside onBindViewHolder. :)
All this class u mean? private class MyOnClickListener implements View.OnClickListener
Can I open chat with u?
I would do something along these lines:
Adjust your MyOnClickListener to something like this:
private class MyOnClickListener implements View.OnClickListener {
private final Context context;
private int position;
//Adjust your constructor to take in the context and position as parameters
private MyOnClickListener(Context context, int position) {
this.context = context;
this.position = position;
}
@Override
public void onClick(View v) {
//not sure what you're doing with movieDB here....
movieDb = data.get(position);
TextView text;
text = (TextView) v.findViewById(R.id.text1);
Toast.makeText(getActivity(), text.getText(), LENGTH_SHORT).show();
}
}
Update your CustomAdapter class:
public class CustomAdapter extends RecyclerView.Adapter<CustomAdapter.MyViewHolder> {
private List<MovieDb> dataSet;
public int pos;
private Context context;
public CustomAdapter(List<MovieDb> data, Context context) {
this.dataSet = data;
this.context = context;
}
//You'll need this later to update the data set
public void updateDataSet(List<MovieDb> data){
this.dataSet = data;
notifyDataSetChanged();
...
In your onBindViewHolder do something like:
@Override
public void onBindViewHolder(final MyViewHolder holder, final int listPosition) {
TextView textViewName = holder.textViewName;
TextView textViewVersion = holder.textViewVersion;
pos = holder.getAdapterPosition();
ImageView imageView = holder.imageViewIcon;
Glide.with(imageView).load("https://image.tmdb.org/t/p/w500" + dataSet.get(listPosition).getPosterPath()).into(imageView);
textViewName.setText(dataSet.get(listPosition).getOriginalTitle());
textViewVersion.setText(dataSet.get(listPosition).getReleaseDate());
holder.itemView.setOnClickListener(new MyOnClickListener(context, listPosition));
}
Move your adapter call from the MovieTask to the FragmentOne onCreateMethod:
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View returnView = inflater.inflate(R.layout.fragment_one, container, false);
myOnClickListener = new MyOnClickListener(getContext());
recyclerView = (RecyclerView) returnView.findViewById(R.id.my_recycler_view);
recyclerView.setHasFixedSize(true);
layoutManager = new LinearLayoutManager(getContext()); // ???
recyclerView.setLayoutManager(layoutManager);
recyclerView.setItemAnimator(new DefaultItemAnimator());
adapter = new CustomAdapter(data, getContext());
recyclerView.setAdapter(adapter);
MovieTask mt = new MovieTask();
mt.execute();
return returnView;
}
In your Movie Task update you data set:
@Override
protected void onPostExecute(List<MovieDb> movieDb) {
data = movieDb;
adapter.updateDataSet(data);
}
You might need to make some minor adjustments to this code, but it should give you a basic idea of how to accomplish what you are trying to do.
Best of luck :)
I also don't know what I am doing with movieDb there; I don't know what to do, so I try random things.
There is a problem with this line in onCreateView: myOnClickListener = new MyOnClickListener(getContext());
I don't have a position there; I don't know how to fix it.
You don't create myOnClickListener in onCreate. It's created when you need it, in onBindViewHolder.
Create an interface to retrieve a MovieDb when clicked:
public interface OnMovieDbClicked {
void movieDbClicked(MovieDb movieDb);
}
In your adapter create a private member of this interface and a setter:
public class CustomAdapter extends RecyclerView.Adapter<CustomAdapter.MyViewHolder> {
private OnMovieDbClicked onMovieDbClicked;
public setOnMovieDbClicked(OnMovieDbClicked onMovieDbClicked) {
this.onMovieDbClicked = onMovieDbClicked;
}
}
You can then use getAdapterPosition() in the constructor of your ViewHolder and use onMovieDbClicked to send the item clicked to your fragment:
public class MyViewHolder extends RecyclerView.ViewHolder {
public MyViewHolder(View itemView) {
// Initialize your views
itemView.setOnClickListener(view -> {
int pos = getAdapterPosition();
if (onMovieDbClicked != null && pos != RecyclerView.NO_POSITION) {
MovieDb movieDb = dataSet.get(pos);
onMovieDbClicked.movieDbClicked(movieDb);
}
});
}
}
When you're creating your adapter in your fragment, call setOnMovieDbClicked() as well:
adapter = new CustomAdapter(data);
adapter.setOnMovieDbClicked(movieDb -> {
// Do whatever you want with movieDb
});
Looks good, but about this: "You can then use getAdapterPosition() in the constructor of your ViewHolder and use onMovieDbClicked to send the item clicked to your fragment".
I can't use onMovieDbClicked and dataSet here.
They will be accessible if you remove the static modifier from MyViewHolder, I'll edit my answer
OK, it's fine for now, but I still don't understand the last part. Where should I put this code?
I've tried it in different places, but it looks like I can't do this.
The adapter.setOnMovieDbClicked(... part? You can put that after you create the adapter. Formatting in comments is pretty bad, so I'll update the answer to be clearer
You mean this: adapter = new CustomAdapter(data); recyclerView.setAdapter(adapter);
but I don't think that's the place where I should put this.
Can you look at my Fragment class code and tell me where exactly it should be?
Honestly, I am out of ideas.
http://prntscr.com/vgzssk
I tried doing it in this place. I have to cast it, and there is also a problem with naming; I am confused about which name I should change. Is it a good idea to put it in onPostExecute()? I don't know, just asking.
I would create the adapter in onCreateView and call recyclerView.setAdapter(adapter). In your adapter, create a function to set the dataset (public void setDataSet(List<MovieDb> dataSet)), and make sure to call notifyDataSetChanged() as well after setting the member variable dataSet (like the answer from BlackHatSamurai). In onPostExecute, when you retrieve the list, call adapter.setDataSet(movieDb)
The issue in your screenshot is because you have already used the name "movieDb". Rename it to something else like adapter.setOnMovieDbClicked(movieDb2 -> .... Also when you define your adapter if you define it as CustomAdapter instead of RecyclerView.Adapter you don't have to cast it later (and it probably shouldn't be static, same with the recyclerView variable)
It is finally working. One more question: I did this version inside onPostExecute(). Can I open a dialog or something like that when I click?
Great! If you want to open a dialog when the item is clicked see this post or the Android developer page. That code should go inside adapter.setOnMovieDbClicked(movieDb2 -> { // Code here });.
React/React Hooks: Unknown prop type error on input, can't figure out how to resolve
I have a component that I've set up using React hooks, and I've passed a custom prop to an input to handle the styling changes when there's an error with the user input. Everything works as expected, but now I'm getting an unknown-prop error in the console and I can't figure out how to resolve it.
The error
React does not recognize the `isError` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `iserror` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
The Component
import React from "react";
import styled from "styled-components";
import { Col, Row, Input, Checkbox } from "antd";
function validateEmail(value) {
const errors = {};
if (value === "") {
errors.email = "Email address is required";
} else if (!/\S+@\S+\.\S+/.test(value)) {
errors.email = "Email address is invalid";
}
return errors;
}
const CustomerDetails = ({ customer }) => {
const { contact = {} } = customer || {};
const [disableInput, setDisableInput] = React.useState(false);
const [errors, setErrors] = React.useState({});
const [inputValue, setInputValue] = React.useState(contact.email);
function onBlur(e) {
setErrors(validateEmail(e.target.value));
}
function clearInput() {
setInputValue(" ");
}
function handleInputChange(event) {
setInputValue(event.target.value);
}
function CheckboxClick() {
if (!disableInput) {
clearInput();
}
setDisableInput(prevValue => !prevValue);
setErrors({})
}
return (
<Container>
<Row>
<Col span={8}>
<StyledInput
value={inputValue}
onChange={handleInputChange}
disabled={disableInput}
onBlur={onBlur}
isError={!!errors.email}
/>
{errors.email && <ErrorDiv>{errors.email}</ErrorDiv>}
</Col>
<Col span={8}>
<Checkbox value={disableInput} onChange={CheckboxClick} /> EMAIL OPT
OUT
</Col>
</Row>
</Container>
);
};
const Container = styled.div`
text-align: left;
`;
const StyledInput = styled(Input)`
max-width: 100%;
background: white;
&&& {
border: 2px solid ${props => props.isError ? '#d11314' : 'black'};
border-radius: 0px;
height: 35px;
}
`;
const ErrorDiv = styled.div`
color: #d11314;
`;
export default CustomerDetails;
This error is because styled-components passes through all props for custom react-components. See the documentation here: https://www.styled-components.com/docs/basics#passed-props
You can avoid the error by following the pattern described here: https://www.darrenlester.com/blog/prevent-all-props-being-passed
In your case this would look something like:
const ErrorInput = ({ isError, ...remaining }) => <Input {...remaining} />;
const StyledInput = styled(ErrorInput)`
max-width: 100%;
background: white;
&&& {
border: 2px solid ${props => (props.isError ? "#d11314" : "black")};
border-radius: 0px;
height: 35px;
}
`;
Full code: https://codesandbox.io/s/awesome-wright-2l32l
To support React PropTypes:
import PropTypes from 'prop-types';
const ErrorInput = ({ isError, ...remaining }) => <Input {...remaining} />;
ErrorInput.propTypes = {
isError: PropTypes.bool
}
This got rid of the error but now I'm picking up a warning from my eslint saying Line 296: 'isError' is missing in props validation react/prop-types any ideas on how to clear this too? Thanks also for the solution this was great
You should declare the prop type of isError to be a boolean. This is the documentation for React PropTypes. https://reactjs.org/docs/typechecking-with-proptypes.html. You want something like:
import PropTypes from 'prop-types';
ErrorInput.propTypes = {
isError: PropTypes.bool
}
The reason why this happens is:
The Input component from antd renders an input HTML tag (<input ... />).
When you pass Input to styled, it also returns the input with the styles added.
const StyledInput = styled(Input)`...` // this will return <input ... />
styled(Input) isn't a wrapper that puts some element around it. It just takes the component and adds the styles.
styled(SomeComponent) uses your props to style SomeComponent but also passes the props down to SomeComponent. This passes isError to the input tag (<input isError={...} />), and when that happens React tries to find an input property isError which doesn't exist, giving you the error.
I hope this explanation helps you understand better why this happens, but so far, what you can do is lowercase your prop name.
Edit:
As the other answers says and looking at this article, you can avoid isError to be passed to the input by creating a wrapper component that removes isError prop.
const WrappedInput = ({ isError, ...remaining }) => <Input {...remaining} />;
const StyledInput = styled(WrappedInput)`...`
It does, but when I changed everything to lowercase I'm getting the following error: index.js:1446 Warning: Received false for a non-boolean attribute iserror.
If you want to write it to the DOM, pass a string instead: iserror="false" or iserror={value.toString()}.
@JCalkins89 you can check my edit and also the other answer.
It seems that the Input component will blindly forward to the underlying DOM element all the attributes it receives but can't recognise. styled will also forward all props to the underlying element. The ideal solution is to check whether styled allows a syntax that "absorbs" props instead of forwarding them. There's an FAQ entry on this in the styled-components documentation:
Unfortunately the solution only works if you are styling your own components. As a workaround you can create a proxy Input you can then style:
const ProxyInput = ({ isError, ...props }) => <Input {...props} />
const StyledInput = styled(ProxyInput)`
max-width: 100%;
background: white;
&&& {
border: 2px solid ${props => props.isError ? '#d11314' : 'black'};
border-radius: 0px;
height: 35px;
}
`;
This is not ideal and you may opt to just make the properly lowercased iserror as others suggest. I only mention this alternative in case you don't like random attributes bleeding into your DOM elements.
I had a similar issue with react-fontawesome. The styled-components staff say this is most likely an issue that the maintainers of the library in which the problem is happening (antd) will need to solve. For now, I simply lowercased my DOM prop, which will cause the error to not get shown.
Creating multiple subnets in Azure Terraform using modules
I'm trying to deploy two subnets with different IP prefixes and different names without creating too much repeated code. I'm not sure how best to approach it. I tried the below, but it's not working. My Terraform version is 0.14.11, so I think some of what I am doing below might be outdated.
My main.tf in root module
module "deploy_subnet" {
source = "./modules/azure_subnet"
subscription_id = var.subscription_id
resource_group_name = var.resource_group_name
region = var.region
vnet_name = var.vnet_name
count_subnets = "${length(var.subnet_prefix)}"
subnet_name = "${lookup(element(var.subnet_prefix, count.index), "name")}"
address_prefix = "${lookup(element(var.subnet_prefix, count.index), "ip")}"
}
My variables.tf in root module (only pasting relevant part)
variable "subnet_prefix" {
type = "list"
default = [
{
ip = "<IP_ADDRESS>/24"
name = "aks-sn"
},
{
ip = "<IP_ADDRESS>/24"
name = "postgres-sn"
}
]
}
My main.tf in my child module folder
resource "azurerm_subnet" "obc_subnet" {
name = var.subnet_name
count = var.count_subnet
resource_group_name = var.resource_group_name
virtual_network_name = var.vnet_name
address_prefixes = var.address_prefix
}
My variables.tf in my child module folder (only relevant part)
variable "subnet_name" {
description = "Subnet Name"
}
variable "count_subnet" {
description = "count"
}
variable "address_prefix" {
description = "IP Address prefix"
}
I get the error below
Reference to "count" in non-counted context
on main.tf line 8, in module "deploy_subnet":
8: subnet_name = "${lookup(element(var.subnet_prefix, count.index),"name")}"
The "count" object can only be used in "module", "resource", and "data"
blocks, and only when the "count" argument is set
Reference to "count" in non-counted context
on main.tf line 9, in module "deploy_subnet":
9: address_prefix = "${lookup(element(var.subnet_prefix, count.index), "ip")}"
The "count" object can only be used in "module", "resource", and "data"
blocks, and only when the "count" argument is set.
It is exactly what it says: in the root module you try to reference count.index, but there is nothing being counted. All you do is pass a variable to the child module.
You should just pass subnet_prefix to the child module, set the count in the child module to its length, and reference count.index there for the values of address_prefix and name.
Alternatively, and probably more elegantly, you might look into for_each and each.value.
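To make that last suggestion concrete, here is a hedged sketch of the for_each approach. The module path and variable names are taken from the question; the object type constraint and the name-keyed map are assumptions, and the child module is assumed to also declare resource_group_name and vnet_name as before:

```hcl
# root main.tf -- pass the whole list down; no count at this level
module "deploy_subnet" {
  source              = "./modules/azure_subnet"
  resource_group_name = var.resource_group_name
  vnet_name           = var.vnet_name
  subnet_prefix       = var.subnet_prefix
}

# child variables.tf
variable "subnet_prefix" {
  type = list(object({ ip = string, name = string }))
}

# child main.tf -- one subnet per list element, keyed by subnet name
resource "azurerm_subnet" "obc_subnet" {
  for_each             = { for s in var.subnet_prefix : s.name => s }
  name                 = each.value.name
  resource_group_name  = var.resource_group_name
  virtual_network_name = var.vnet_name
  address_prefixes     = [each.value.ip]
}
```

Note that azurerm_subnet takes address_prefixes as a list, hence the brackets, and that keying for_each by name gives stable resource addresses even if the list is later reordered, which count does not.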
Scalar Curvature of a metric on the hemisphere, from a paper on the Min-Oo Conjecture
I'm reading a paper on the Min-Oo Conjecture (http://arxiv.org/abs/1004.3088), and I'm stuck on the following step in a proposition:
Given a metric $g_0(t)$ on the upper hemisphere $\mathbb{S}^n_+$, and the standard metric $\bar{g}$ on the sphere $\mathbb{S}^n$ restricted to the hemisphere, we define another metric $g(t)$ on the upper hemisphere by:
$g(t) = g_0(t) + \frac{1}{2(n-1)}t^2 u \bar{g}$
In the paper they say this implies:
$R_{g(t)}=R_{g_0(t)} - \frac{1}{2} t^2 (\Delta u + nu) + O(t^3)$
I'm not sure if I should break down and calculate like mad, or if there is a better way to see this. The conditions we have on $u$ are simply
$u|_{\partial \mathbb{S}^n_+}= 0$
I appreciate all help. Cheers!
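Not an answer to the boundary-condition subtlety, but one way to see the expansion without calculating like mad is the first variation (linearization) of scalar curvature. As a sketch, using the standard formula (sign conventions may differ between sources)

$$DR_g(h) = -\Delta_g(\operatorname{tr}_g h) + \operatorname{div}_g \operatorname{div}_g h - \langle h, \operatorname{Ric}_g \rangle,$$

note that $h = \frac{1}{2(n-1)} t^2 u\, \bar g$ is of size $O(t^2)$. Assuming, as in the setting of the proposition, that $g_0(0) = \bar g$ so that $g_0(t) = \bar g + O(t)$, all background quantities may be computed with respect to $\bar g$ at the cost of an $O(t^3)$ error. Using $\operatorname{tr}_{\bar g}(u\bar g) = n u$, $\operatorname{div}\operatorname{div}(u \bar g) = \Delta u$, and $\operatorname{Ric}_{\bar g} = (n-1)\bar g$ (so $\langle u\bar g, \operatorname{Ric}_{\bar g}\rangle = n(n-1)u$), one gets

$$R_{g(t)} - R_{g_0(t)} = \frac{t^2}{2(n-1)}\Big(-n\,\Delta u + \Delta u - n(n-1)u\Big) + O(t^3) = -\frac{t^2}{2}\big(\Delta u + n u\big) + O(t^3),$$

which is the stated formula.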
Handling Contraints And Identifiers In A Custom Object Model For Persisting To A Relational Database
I have a robust set of objects that relate to one another and I need to figure out that best way to handle saving them to the database and still account for things like data constraints.
Let's say I have the following classes:
class Foo
{
public int Id { get; set; }
public int BarId { get; set; }
Bar _bar;
public Bar Bar
{
get { return _bar; }
set
{
if(_bar != value)
{
_bar = value;
BarId = value.Id;
}
}
}
}
class Bar
{
public int Id { get; set; }
}
Then let's say I have the following code:
var f = new Foo() { Bar = new Bar() };
SaveFoo(f);
What should I do in SaveFoo to save the objects to the database in the proper order?
IMPORTANT NOTE: I am code generating all of this from my SQL data structure using MyGeneration, and I have access to all Constraints while I'm generating.
You have to work from the inside out.
If you have a "navigation property" in your class (in this case, Bar in Foo), it will be accompanied by a foreign key id (BarId). So you have to save your nested objects first, before saving the object itself.
The problem you risk here are cyclic properties (Author write book, book has Author), in which case you have to properly define which is the primary relationship and which should be ignored.
Then the next problem is cascading deletes.
Was this what you were asking?
Basically, I guess I was just a little stumped, and I was hoping someone else had conquered this exact same problem.
Having built my own horrible horrible ORM before, I know how you feel :D There's a reason why I use 3rdparty ORMs where possible now. And even they haven't solved it. Entity Framework prevents you from doing cyclic relationships because it leads to multiple cascade paths. I believe this only affects 1-1 relationships though.
Well, I think I'm on the right path now, actually. I have a very customized set of needs for this app that nothing 3rd party fit like a glove. And it's my own pet project (unpaid) so I have a little leeway to screw around.
This led me the right way. What I did was basically code generate recursive save methods that followed relationships in both directions, saving dependencies first, but also saving dependents if they were found. It passed a HashSet of references around to make sure I didn't save the same data twice, which prevents cyclical behavior. Thanks again for the tip!
Acquiring multiple attributes from .xml file in c#
I have an .xml file with the structure below. I want to acquire the attribute values (0.05 and so on) for specific EndPointChannelIDs. I am currently able to get the values, but for every EndPointChannelID instead of just the desired one. Another twist is that the number of readings is not always going to be 6. How can I store only the values from the desired EndPointChannelID? Any suggestions will be greatly appreciated!
<Channel ReadingsInPulse="false">
<ChannelID EndPointChannelID="5154131" />
<ContiguousIntervalSets>
<ContiguousIntervalSet NumberOfReadings="6">
<TimePeriod EndRead="11386.22" EndTime="2013-01-15T02:00:00Z"/>
<Readings>
<Reading Value="0.05" />
<Reading Value="0.04" />
<Reading Value="0.05" />
<Reading Value="0.06" />
<Reading Value="0.03" />
<Reading Value="0.53" />
</Readings>
</ContiguousIntervalSet>
</ContiguousIntervalSets>
</Channel>
Below is the current code I have to find the Value.
XmlReader reader = XmlReader.Create(FileLocation);
while (reader.Read())
{
if((reader.NodeType == XmlNodeType.Element) && (reader.Name == "Reading"))
{
if (reader.HasAttributes)
{
MessageBox.Show(reader.GetAttribute("Value"));
}
}
}
Why don't you use LINQ to XML?
Continuing with XMLReader path, you can do it by setting up a result list, wait for the desired channel ID, start collecting the values, and then end collecting them when the desired channel ID tag ends:
var values = new List<string>();
var collectValues = false;
var desiredChannelId = "5154131";
while (reader.Read())
{
if((reader.NodeType == XmlNodeType.Element))
{
if (reader.Name == "ChannelID" && reader.HasAttributes) {
collectValues = reader.GetAttribute("EndPointChannelID") == desiredChannelId;
}
else if (collectValues && reader.Name == "Reading" && reader.HasAttributes)
{
values.Add(reader.GetAttribute("Value"));
}
}
}
It can be easily done using LINQ to XML:
// load document into memory
var xDoc = XDocument.Load("Input.txt");
// query the document and get List<decimal> as result
List<decimal> values = (from ch in xDoc.Descendants("Channel")
where (int)ch.Element("ChannelID").Attribute("EndPointChannelID") == 5154131
from r in ch.Descendants("Reading")
select (decimal)r.Attribute("Value")).ToList();
Your code is a bit too simple. You need to read line by line and first match on EndPointChannelId. Set a flag to make it clear that you have the correct ChannelId, then, when that condition is met read the Value attributes. You'll need an array to save them into. An ArrayList would be ideal since it is of variable length.
20.04 Alert Volume slider missing
In previous versions I was able to change volume of system alert so it is not playing on full power of my speakers with slider like this:
But this is not possible in system settings anymore. Is there any way to change this setting from console?
I'm not a GNOME user, but in the mean time I'd use pavucontrol to adjust the volume of System Sounds (it might be that's what you need to do, but you may get a better suggestion from a GNOME user)
I did not find a way to change this from the command line, but as @guiverc suggested I was able to lower the system sounds volume using pavucontrol:
sudo apt install pavucontrol
Playback -> System Sounds
And it does not affect other applications volume so works perfect.
Spring vs Guice instantiating objects
I recently started learning Spring. As I am new to Spring, several questions come to mind. One of them is this:
As stated here " All beans are instantiated as soon as the spring configuration is loaded by a container. org.springframework.context.ApplicationContext container follows pre-loading methodology." LINK
1 - Does this mean that all objects created with Spring ApplicationContext are Singletons ?
I created this simple test
@Component
public class HelloService {
private ApplicationContext context;
public HelloService() {
}
@Autowired
public HelloService(ApplicationContext context) {
this.context = context;
}
public String sayHello() {
return "Hi";
}
}
public class HelloApp {
public static void main(String[] args) {
ApplicationContext context = new ClassPathXmlApplicationContext("spring-config.xml");
Injector injector = Guice.createInjector(new AbstractModule() {
@Override
protected void configure() {
}
});
HelloService helloService1 = context.getBean(HelloService.class);
System.out.println(helloService1);
HelloService helloService2 = context.getBean(HelloService.class);
System.out.println(helloService2);
HelloService helloService3 = injector.getInstance(HelloService.class);
System.out.println(helloService3);
HelloService helloService4 = injector.getInstance(HelloService.class);
System.out.println(helloService4);
}
}
The out put was
foo.bar.HelloService@191e8b08 // same instance
foo.bar.HelloService@191e8b08 // same instance
foo.bar.HelloService@6ba67ab5 // different instance
foo.bar.HelloService@7ec23849 // different instance
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">
<context:annotation-config/>
<context:component-scan base-package="foo.bar"/>
</beans> // this is my config file.
In Guice you have to explicitly say that you want this object to be instantiated as a singleton.
2- Doesn't this create some problems when Spring creates objects that keep some state?
3- How to tell Spring to create a new object ?
Let's see your config. Spring beans are singletons by default.
Would need that HelloService class as well.
1 - Does this mean that all objects created with Spring ApplicationContext are Singletons ?
No. But Spring's default scope is singleton. If you want a different scope, you must explicitly declare it in the bean configuration. In Java configuration, you do that with the @Scope annotation. With XML configuration, you do it with the <bean> scope attribute. Among other ways...
2- Doesn't this create some problems when Spring creates objects that keep some state?
You can always declare different scope, so no.
3- How to tell Spring to create a new object ?
That depends on the scope. If the scope is prototype, then calling ApplicationContext#getBean(String) will give you a new instance every time. If your bean is a singleton, then you will always get the same instance.
Note that you can have multiple beans of the same type but with different scopes. For example,
<bean name="my-proto" class="com.example.Example" scope="prototype" />
<bean name="my-singleton" class="com.example.Example" /> <!-- defaults to singleton -->
and later
(Example) context.getBean("my-proto"); // new instance every time
(Example) context.getBean("my-singleton"); // same instance every time
You can therefore use the singleton in some cases, and a different scope in others. Also, you don't have to use Spring everywhere.
"Note that you can have multiple beans of the same type but with different scopes" can you give an example ?
Why is there a possible division by zero in the A matrix of a commutator?
Suppose we have the following zero trace matrix:
$$M = \begin{pmatrix} -b_{12} & s_1 & 0 \\ -b_{22} & b_{12} & 0 \\ -b_{32} & s_2 & 0 \end{pmatrix}$$
Because it has zero trace, it can be written as a commutator:
$$M = AB-BA$$
I've chosen the elements for $M$ carefully in order to pose this question as clearly as I can.
A solution for $A$ and $B$ in this case is:
$$A = \begin{pmatrix} 0 & -(b_{32}-s_1)/b_{22} & 1 \\ 1 & 0 & 0 \\ 1 & -(b_{12}+b_{32}-s_2)/b_{22} & 1 \end{pmatrix}$$
$$B = \begin{pmatrix} 0 & b_{12} & 0 \\ 0 & b_{22} & 0 \\ 0 & b_{32} & 0 \end{pmatrix}$$
If I choose $b_{22}\neq0$, then there is no problem. But if I choose $b_{22}=0$, then there is division by zero in $A$. Yet, in this case $M$ still has zero trace.
Does the possible division by zero ruin the solution for $A$ and $B$?
Is $M$ still a commutator if $b_{22}=0$?
As you said, you've found "A solution for $A$ and $B$ that works when $b_{22}\neq0$. There are lots of other solutions, some of which work when $b_{22}=0$. So, to answer your two questions: Yes, the possible division by zero ruins this particular solution for $A$ and $B$. And yes, $M$ is still a commutator when $b_{22}=0$, because there will be other solutions for $A$ and $B$ in that case.
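To see concretely that other solutions exist when $b_{22}=0$, here is a sketch of the standard construction (essentially the Albert–Muckenhoupt argument). Any non-scalar trace-zero matrix $M$ is similar to a matrix $N = P^{-1}MP$ with zeros on the diagonal. For such $N$, take $D = \operatorname{diag}(1,2,3)$ and set $B_{ij} = N_{ij}/(i-j)$ for $i \neq j$ (the diagonal of $B$ is arbitrary); then $(DB-BD)_{ij} = (i-j)B_{ij} = N_{ij}$, so $N = DB - BD$, and hence
$$M = (PDP^{-1})(PBP^{-1}) - (PBP^{-1})(PDP^{-1}).$$
As a toy instance with $b_{22}=0$: taking $b_{12}=1$ and $s_1 = s_2 = b_{32} = 0$ gives $M = \operatorname{diag}(-1,1,0) = E_{21}E_{12} - E_{12}E_{21}$, where $E_{ij}$ denotes the matrix unit with a $1$ in position $(i,j)$.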
In generating your solution for A and B, there was clearly a division by $b_{22}$ along the way; going back and re-doing the process that got you A and B would "undo" the divide by zero problem and give you a valid commutator relation.
For example, I solve $ax^2 + bx + c = 0$ with $x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$. However, my solution will be ruined in a sense if I send $a$ to zero, but that doesn't mean the equation can't be solved.
If $a$ is zero, then you don't have a quadratic equation; you have a line. In the original question, if $b_{22}=0$, $M$ still looks like a commutator, but is it? The $A$ matrix has a division by zero. Also note that the determinant of $A$ is not zero, so it is non-singular.
| common-pile/stackexchange_filtered |
Why use getters and setters/accessors?
What's the advantage of using getters and setters - that only get and set - instead of simply using public fields for those variables?
If getters and setters are ever doing more than just the simple get/set, I can figure this one out very quickly, but I'm not 100% clear on how:
public String foo;
is any worse than:
private String foo;
public void setFoo(String foo) { this.foo = foo; }
public String getFoo() { return foo; }
Whereas the former takes a lot less boilerplate code.
Of course, both are equally bad when the object doesn't need a property to be changed. I'd rather make everything private, and then add getters if useful, and setters if needed.
"Accessors are evil" if you happen to be writing functional code or immutable objects. If you happen to be writing stateful mutable objects, then they are pretty essential.
Tell, don't ask. http://www.pragprog.com/articles/tell-dont-ask
Well, I am surprised no one is talking about data synchronization. Suppose you are using public String foo; it's not thread-safe! Instead you can define getters/setters with synchronization techniques to avoid data infidelity [I mean, another thread messing up foo]. I felt it's worth mentioning.
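As a minimal sketch of that interception point (the Counter class is invented for illustration), the lock can live inside the accessors, which a bare public field cannot offer:

```java
// A hypothetical counter whose accessors synchronize on the instance.
// With a public field, every caller would have to remember to lock.
class Counter {
    private long value;

    synchronized long getValue() {
        return value;
    }

    synchronized void increment() {
        value++; // read-modify-write, atomic only because of the lock
    }
}
```

As the reply to this comment points out, per-access locking is often not enough when several properties must change together; the point here is only that accessors give you a place to put the policy.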
@goldenparrot: More often than not, synchronizing single accesses to an object is way too inefficient, or even prone to synchronization problems (when you need to change several properties of an object at once). IME, you often need to wrap, as a whole, operations that do multiple accesses to an object. That can only be accomplished by synchronization on the side of the users of a type.
Another article attempting guidelines about where and when to use getters and setters: Public or Private Member Variables? The principle of YAGNI guides us to not add constructs until we really know we are going to need them.
A recent question has reminded me of another reason for set-methods: the ability to trigger a property change.
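A minimal plain-Java sketch of that reason, using java.beans.PropertyChangeSupport (the Thermostat class and property name are invented for illustration); the setter is the natural place to fire the event, which a public field could never do:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Hypothetical example: the setter doubles as the notification point.
class Thermostat {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private int target;

    void addTargetListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener("target", l);
    }

    int getTarget() {
        return target;
    }

    void setTarget(int target) {
        int old = this.target;
        this.target = target;
        pcs.firePropertyChange("target", old, target); // no event if old == target
    }
}
```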
In this case, it is essentially the same thing. In general, objects that have getters and setters only pretend to be objects, but in fact they are data structures (they violate the principle of encapsulation).
There are actually many good reasons to consider using accessors rather than directly exposing fields of a class - beyond just the argument of encapsulation and making future changes easier.
Here are the some of the reasons I am aware of:
Encapsulation of behavior associated with getting or setting the property - this allows additional functionality (like validation) to be added more easily later.
Hiding the internal representation of the property while exposing a property using an alternative representation.
Insulating your public interface from change - allowing the public interface to remain constant while the implementation changes without affecting existing consumers.
Controlling the lifetime and memory management (disposal) semantics of the property - particularly important in non-managed memory environments (like C++ or Objective-C).
Providing a debugging interception point for when a property changes at runtime - debugging when and where a property changed to a particular value can be quite difficult without this in some languages.
Improved interoperability with libraries that are designed to operate against property getter/setters - Mocking, Serialization, and WPF come to mind.
Allowing inheritors to change the semantics of how the property behaves and is exposed by overriding the getter/setter methods.
Allowing the getter/setter to be passed around as lambda expressions rather than values.
Getters and setters can allow different access levels - for example the get may be public, but the set could be protected.
concerning the last point: why, for example, the set could be protected?
@andreagalle There's many reason that could be the case. Maybe you have an attribute that is not to be changed from the outside. In that scenario you could also completely remove the setter. But maybe you want to change this change-blocking behaviour in a subclass, which could then utilize the protected method.
-1 Because point no. 1 is dev trap #1: classical getters/setters do not encapsulate behaviour, because they do not express behaviour. Customer::setContractEndDate() / Customer::getContractEndDate() is no behaviour and it does not encapsulate anything at all. It's faking encapsulation. Customer::cancelContract() is a behaviour, and that's where you can actually modify the logic without changing the interface. It won't work if you make atomic properties public through getters and setters.
There're many good reasons to walk outside in a kevlar riot suit, gas mask and shield. You'll be safer! You'll be protected from chemical and biological threats. And many more!
So every approach that considers only positives or only negatives is unbalanced, and in practice leads to losses, because you have to pay for the "good thing" and the price is not something you can neglect. What is the price of getters and setters? Your time, your attention, your life in effect. So use them whenever you and others derive more benefit than the time spent creating and maintaining the code costs.
Unforeseen side effects: when someone sees a getter or setter, they don't assume additional behavior. Don't lie to your fellow programmers.
You shouldn't expose state, just design your interface around behaviors
Possible violation of Liskov Substitution Principle. Also, design behaviors, don't provide access to the state.
I don't see how getters/setters can achieve that
Don't write code with the sole purpose of debugging. Especially inside your classes that should be the core of your domain.
Such libraries are a nightmare to use and you should provide DTOs for them; there's no good way here, but the least bad is reflection (please don't write anything using it)
Again, don't provoke the Liskov Substitution Principle violation
Just don't see the point. If uncontrolled external modification of the object's state is a horrible idea, launching lambdas that do this into the air seems even worse.
The only semi-valid reason. But works only if you use inheritance, which is currently considered an anti-pattern.
To summarize my previous comments: there's hardly a valid reason to do such a thing as a getter and setter for a private member. It's basically the same as a public member, but misleadingly concealed.
@Mikz, I don't understand your argument against #9. It has nothing to do with inheritance. I generally avoid public setters in all my classes because I want the object to be able to validate the values being assigned to it. So I use a public Get accessor and a public Set method (or have the value passed through a constructor). Yes, this could be used for inheritance, but that's just a side effect of the existence of access levels, not the purpose.
Because 2 weeks (months, years) from now when you realize that your setter needs to do more than just set the value, you'll also realize that the property has been used directly in 238 other classes :-)
I'm sitting and staring at a 500k line app where it's never been needed. That said, if it's needed once, it'd start causing a maintenance nightmare.
Good enough for a checkmark for me.
I can only envy you and your app :-) That said, it really depends on your software stack as well. Delphi, for example (and C# - I think?) allows you to define properties as 1st class citizens where they can read / write a field directly initially but - should you need it - do so via getter / setter methods as well. Mucho convenient. Java, alas, does not - not to mention the javabeans standard which forces you to use getters / setters as well.
Validation is a classic example of doing something special in the setter, especially if you don't fully trust the consumers of your code...
While this is indeed a good reason for using accessors, many programming environments and editors now offer support for refactoring (either in the IDE or as free add-ins) which somewhat reduces the impact of the problem.
@LBushkin - assuming your code will not be consumed publicly.
sounds like pre-mature optimization
The question you need to ask when you wonder whether to implement getters and setters is: Why would users of a class need to access the class' innards at all? It doesn't really matter whether they do it directly or shielded by a thin pseudo layer — if users need to access implementation details, then that's a sign that the class doesn't offer enough of an abstraction. See also this comment.
That problem is not an indicator that you needed to use set or get methods. It's an indicator that the entire team needs to have the IDEs slapped out of their hands and taught to write and understand and maintain OOP without one.
@ErikReppen As I've commented above, this depends on a particular language implementation. For the ones where properties are not first-class citizens (like Java) it's completely irrelevant whether you're using some IDE or plain vi to write your code - once property has been exposed directly you cannot take it back.
@ChssPly76 IMO, you really shouldn't be exposing properties directly or indirectly in the first place (most of the time - always exceptions to a rule). They are that object's responsibility. When you design in terms of data only changing as a side-effect of objects telling each other to do things other than what data to set, you have to stop and think occasionally but it leads to much more maintainable/modular architecture IMO. It's what makes it possible to maintain complexity in a dynamic language.
@ChssPly76 I didn't explain the point of removing the IDEs. Try writing code without one when you've effectively globally accessed the same var from 238 places and now have to figure out where something went wrong.
Why all the talk about removing the IDE? Why would anyone do that? Sure, I know how to use ed, and at times, it's the tool for the job. Writing software, however, isn't one of those times.
@sbi I don't see why? I mean, if you have a very basic case of a "person" class, a field is going to be first_name and obviously you want to just get that field verbatim at one point. And you should use a getter. For mayhap in the future you will completely change the implementation of your class such that the class asyncroneously retrieves this from a server or does whatever else.
@Zorf: If all your person does it to store first and last name, then it's nothing but a (glorified) struct and, thus, not OOP, but Structured Programming. (Whether setters and getters gain you anything in this scenario over plainly accessing the data is dubious at best: A) How else but in a string would you represent a person's names? And I'm talking practical scenarios here, not theory, because B) in 20 years of practice I have never run into such a situation.)
With the newer getter and setter syntax it won't be a problem: just public int myVar {get; set;}, and to add logic, myVar {get {bla bla bla}; set {bla bla bla}}, and none of your code will break
How about just overloading =? Though that's C++-specific.
Well, if you are adding more logic to getters and setters, your code has non-transparent behavior, and that is a very bad thing.
A public field is not worse than a getter/setter pair that does nothing except returning the field and assigning to it. First, it's clear that (in most languages) there is no functional difference. Any difference must be in other factors, like maintainability or readability.
An oft-mentioned advantage of getter/setter pairs, isn't. There's this claim that you can change the implementation and your clients don't have to be recompiled. Supposedly, setters let you add functionality like validation later on and your clients don't even need to know about it. However, adding validation to a setter is a change to its preconditions, a violation of the previous contract, which was, quite simply, "you can put anything in here, and you can get that same thing later from the getter".
So, now that you broke the contract, changing every file in the codebase is something you should want to do, not avoid. If you avoid it you're making the assumption that all the code assumed the contract for those methods was different.
If that should not have been the contract, then the interface was allowing clients to put the object in invalid states. That's the exact opposite of encapsulation If that field could not really be set to anything from the start, why wasn't the validation there from the start?
This same argument applies to other supposed advantages of these pass-through getter/setter pairs: if you later decide to change the value being set, you're breaking the contract. If you override the default functionality in a derived class, in a way beyond a few harmless modifications (like logging or other non-observable behaviour), you're breaking the contract of the base class. That is a violation of the Liskov Substitutability Principle, which is seen as one of the tenets of OO.
If a class has these dumb getters and setters for every field, then it is a class that has no invariants whatsoever, no contract. Is that really object-oriented design? If all the class has is those getters and setters, it's just a dumb data holder, and dumb data holders should look like dumb data holders:
class Foo {
public:
int DaysLeft;
int ContestantNumber;
};
Adding pass-through getter/setter pairs to such a class adds no value. Other classes should provide meaningful operations, not just operations that fields already provide. That's how you can define and maintain useful invariants.
Client: "What can I do with an object of this class?"
Designer: "You can read and write several variables."
Client: "Oh... cool, I guess?"
There are reasons to use getters and setters, but if those reasons don't exist, making getter/setter pairs in the name of false encapsulation gods is not a good thing. Valid reasons to make getters or setters include the things often mentioned as the potential changes you can make later, like validation or different internal representations. Or maybe the value should be readable by clients but not writable (for example, reading the size of a dictionary), so a simple getter is a nice choice. But those reasons should be there when you make the choice, and not just as a potential thing you may want later. This is an instance of YAGNI (You Ain't Gonna Need It).
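To make that last point concrete, here is a minimal, hypothetical Python sketch (invented names) of an accessor added for a stated reason, namely that clients may read a value but must not write it, rather than a speculative getter/setter pair added "just in case":

```python
# Hypothetical sketch: a read-only accessor chosen for a concrete reason
# (clients may read the size, but must not set it), not "just in case".
class Roster:
    def __init__(self):
        self._members = []          # internal state, not exposed directly

    def add(self, name):
        self._members.append(name)  # the only way the state changes

    @property
    def size(self):                 # getter only; no setter is defined
        return len(self._members)

r = Roster()
r.add("alice")
r.add("bob")
print(r.size)  # -> 2
# r.size = 5 would raise AttributeError, because no setter exists
```

The reason for the getter exists at design time, which is exactly the condition the answer above describes.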
Great answer (+1). My only criticism is that it took me several reads to figure out how "validation" in the final paragraph differed from "validation" in the first few (which you threw out in the latter case but promoted in the former); adjusting the wording might help in that regard.
This is a great answer but alas the current era has forgotten what "information hiding" is or what it is for. They never read about immutability and in their quest for most-agile, never drew that state-transition-diagram that defined what the legal states of an object were and thus what were not.
I like the pushback against dogmatic explanations, but there are situations where a class author might want to bring in logic to property access that do not break existing contracts. You might add new functionality which causes you to want to store internal data differently, while still exposing it in the same form externally.
Lots of people talk about the advantages of getters and setters but I want to play devil's advocate. Right now I'm debugging a very large program where the programmers decided to make everything getters and setters. That might seem nice, but its a reverse-engineering nightmare.
Say you're looking through hundreds of lines of code and you come across this:
person.name = "Joe";
It's a beautifully simply piece of code until you realize its a setter. Now, you follow that setter and find that it also sets person.firstName, person.lastName, person.isHuman, person.hasReallyCommonFirstName, and calls person.update(), which sends a query out to the database, etc. Oh, that's where your memory leak was occurring.
Understanding a local piece of code at first glance is an important property of good readability that getters and setters tend to break. That is why I try to avoid them when I can, and minimize what they do when I use them.
This is argument against syntactic sugar, not against setters in general.
@Phil I’d disagree. One can just as easily write a public void setName(String name) (in Java) that does the same exact things. Or worse, a public void setFirstName(String name) that does all those things.
@tedtanner and when you do, it's no longer a mystery that a method is being invoked, since it's not disguised as field access.
@Phil I am with you on that one. I don't believe in hiding method calls. I just want to point out that Kai's argument is less about syntactic sugar and more about how setters (including ones that look like method calls) shouldn't have side effects like changing other data members in the object or querying the database.
@tedtanner I think we share opinions as to what setters and getters should do, and as to whether they should be hidden like this or not, but the fact is when they are hidden, it's always syntactic sugar - it's always a method call, on the machine. Back in 2009 when the answer was written, and in 2013 when I wrote my comment, one used annotations and either codegen or pointcut type solutions to do this, and it was all very non-standard. Today in 2022, I haven't touched Java for years
In a pure object-oriented world getters and setters is a terrible anti-pattern. Read this article: Getters/Setters. Evil. Period. In a nutshell, they encourage programmers to think about objects as of data structures, and this type of thinking is pure procedural (like in COBOL or C). In an object-oriented language there are no data structures, but only objects that expose behavior (not attributes/properties!)
You may find more about them in Section 3.5 of Elegant Objects (my book about object-oriented programming).
Interesting viewpoint. But in most programming contexts what we need is data structures. Taking the linked article's "Dog" example. Yes, you can't change a real-world dog's weight by setting an attribute ... but a new Dog() is not a dog. It is object that holds information about a dog. And for that usage, it is natural to be able to correct an incorrectly recorded weight.
Well, I put it to you that most useful programs don't need to model / simulate real world objects. IMO, this is not really about programming languages at all. It is about what we write programs for.
Real world or not, yegor is completely right. If what you have is truly a "Struct" and you don't need to write any code that references it by name, put it in a hashtable or other data structure. If you do need to write code for it then put it as a member of a class and put the code that manipulates that variable in the same class and omit the setter & getter. PS. although I mostly share yegor's viewpoint, I have come to believe that annotated beans without code are somewhat useful data structures--also getters are sometimes necessary, setters shouldn't ever exist.
I got carried away--this entire answer, although correct and relevant, doesn't address the question directly. Perhaps it should say "Both setters/getters AND public variables are wrong"... To be absolutely specific, Setters and writable public variables should never ever be used whereas getters are pretty much the same as public final variables and are occasionally a necessary evil but neither is much better than the other.
Perhaps method names sometimes reflect a developer's deeper understanding, and maybe it does go the other way...that language elements influence a developers habits. But language syntax and symbols are rife with elements that have evolved over time and are no longer (if ever) an accurate metaphor for what they are supposed to represent. Although you make some good points, it feels to me like arguing that Java is a stupid programming language name because we can't really drink it sitting in a cafe. It's okay to redefine "set" with a more complete meaning without actually doing away with it.
I have never understood getters/setters. From the first Java101 class I took at the college, they seemed like they encourage building lacking abstractions and confused me. But there is this case then: your object may possibly have so many different behavior. Then it is possible that you might not hope to implement them all. You make setters/getters and conveniently tell your users that if you need a behavior that I missed, inherit and implement them yourself. I gave you accessors after all.
In this sense of “alive objects”, there should be no set…() and get…(), but rather remember…() and tell…(). Just that get and set is shorter to type. Also, an object describing another object—like a paper-bound form—should include that in the class name, but that again is lengthy. We cannot change the weight of the dog, and we don’t do, what we change is the weight noted down on the dog admission form.
There are many reasons. My favorite one is when you need to change the behavior or regulate what can be set on a variable. For instance, let's say you have a setSpeed(int speed) method, but you want to allow a maximum speed of only 100. You would do something like:
public void setSpeed(int speed) {
if ( speed > 100 ) {
this.speed = 100;
} else {
this.speed = speed;
}
}
Now what if EVERYWHERE in your code you were using the public field and then you realized you need the above requirement? Have fun hunting down every usage of the public field instead of just modifying your setter.
My 2 cents :)
Hunting down every usage of the public field shouldn't be that hard. Make it private and let the compiler find them.
that's true of course, but why make it any harder than it was designed to be. The get/set approach is still the better answer.
@GraemePerrow having to change them all is an advantage, not a problem :( What if you had code that assumed speed could be higher than 100 (because, you know, before you broke the contract, it could!) (while(speed < 200) { do_something(); accelerate(); })
It is a VERY bad example! Someone could call:
myCar.setSpeed(157);
and a few lines later:
speed = myCar.getSpeed();
And now... I wish you happy debugging while trying to understand why speed == 100 when it should be 157.
One advantage of accessors and mutators is that you can perform validation.
For example, if foo was public, I could easily set it to null and then someone else could try to call a method on the object. But it's not there anymore! With a setFoo method, I could ensure that foo was never set to null.
Accessors and mutators also allow for encapsulation - if you aren't supposed to see the value once its set (perhaps it's set in the constructor and then used by methods, but never supposed to be changed), it will never been seen by anyone. But if you can allow other classes to see or change it, you can provide the proper accessor and/or mutator.
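As a hedged illustration of the validation point above (all names are invented for this sketch), a mutator lets the class reject values that would break it:

```python
# Hypothetical sketch: a mutator that rejects invalid values, so the
# object can never be observed in a broken state.
class Widget:
    def __init__(self, foo):
        self.set_foo(foo)           # constructor reuses the same check

    def set_foo(self, foo):
        if foo is None:
            raise ValueError("foo must not be None")
        self._foo = foo

    def get_foo(self):
        return self._foo

w = Widget("bar")
w.set_foo("baz")     # fine
# w.set_foo(None)    # raises ValueError instead of corrupting state
```

With a public field there is no hook to run the check, so a None could silently slip in and only blow up much later.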
I don't understand this argument. Sure, someone could set the value to null, if they wanted their code to break. Maybe on the most public of APIs for your library this would be useful for validation, but for internal code there is a lot more boilerplate if all your setters and getters do is get and set the private variables.
Depends on your language. You've tagged this "object-oriented" rather than "Java", so I'd like to point out that ChssPly76's answer is language-dependent. In Python, for instance, there is no reason to use getters and setters. If you need to change the behavior, you can use a property, which wraps a getter and setter around basic attribute access. Something like this:
class Simple(object):
    def _get_value(self):
        return self._value - 1

    def _set_value(self, new_value):
        self._value = new_value + 1

    def _del_value(self):
        self.old_values.append(self._value)
        del self._value

    value = property(_get_value, _set_value, _del_value)
Yes, I've said as much in a comment below my answer. Java is not the only language to use getters / setters as a crutch just like Python is not the only language able to define properties. The main point, however, still remains - "property" is not the same "public field".
@ChssPly76: but the main reason for using getters and setters is to prevent having to change the interface when you want to add more functionality, which means that it's perfectly acceptable--idiomatic even--to use public fields until such functionality is needed.
@jcd - not at all. You're defining your "interface" (public API would be a better term here) by exposing your public fields. Once that's done, there's no going back. Properties are NOT fields because they provide you with a mechanism to intercept attempts to access fields (by routing them to methods if those are defined); that is, however, nothing more than syntax sugar over getter / setter methods. It's extremely convenient but it doesn't alter the underlying paradigm - exposing fields with no control over access to them violates the principle of encapsulation.
@ChssPly76—I disagree. I have just as much control as if they were properties, because I can make them properties whenever I need to. There is no difference between a property that uses boilerplate getters and setters, and a raw attribute, except that the raw attribute is faster, because it utilizes the underlying language, rather than calling methods. Functionally, they are identical. The only way encapsulation could be violated is if you think parentheses (obj.set_attr('foo')) are inherently superior to equals signs (obj.attr = 'foo'). Public access is public access.
@jcdyer as much control yes, but not as much readability, others often wrongly assume that obj.attr = 'foo' just sets the variable without anything else happening
@TimoHuovinen How is that any different than a user in Java assuming that obj.setAttr('foo') "just sets the variable without anything else happening"? If it's a public methods, then it's a public method. If you use it to achieve some side-effect, and it is public, then you had better be able to count on everything working as if only that intended side effect happened (with all other implementation details and other side effects, resource use, whatever, hidden from the user's concerns). This is absolutely no different with Python. Python's syntax to achieve the effect is just simpler.
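The trade-off discussed in these comments can be sketched as follows; this is a hypothetical example, assuming Python's property mechanism, showing that client code written against a plain attribute keeps working unchanged after the class swaps in a property:

```python
# Hypothetical sketch: obj.attr = value has the same syntax whether
# attr is a plain attribute or a property, so clients never change.
class Before:
    def __init__(self):
        self.attr = 0               # plain public attribute

class After:
    def __init__(self):
        self._attr = 0

    @property
    def attr(self):
        return self._attr

    @attr.setter
    def attr(self, value):
        self._attr = max(0, value)  # validation added later, same interface

def client(obj):                    # identical client code for both classes
    obj.attr = -5
    return obj.attr

print(client(Before()))  # -> -5
print(client(After()))   # -> 0 (clamped by the new setter)
```

This is exactly the "start with a field, add a property when needed" idiom; note that, per the accepted answer, the clamping in `After` does change the observable contract, so it is not a free lunch.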
EDIT: I answered this question because there are a bunch of people learning programming asking this, and most of the answers are very technically competent, but they're not as easy to understand if you're a newbie. We were all newbies, so I thought I'd try my hand at a more newbie friendly answer.
The two main ones are polymorphism, and validation. Even if it's just a stupid data structure.
Let's say we have this simple class:
public class Bottle {
public int amountOfWaterMl;
public int capacityMl;
}
A very simple class that holds how much liquid is in it, and what its capacity is (in milliliters).
What happens when I do:
Bottle bot = new Bottle();
bot.amountOfWaterMl = 1500;
bot.capacityMl = 1000;
Well, you wouldn't expect that to work, right?
You want there to be some kind of sanity check. And worse, what if I never specified the maximum capacity? Oh dear, we have a problem.
But there's another problem too. What if bottles were just one type of container? What if we had several containers, all with capacities and amounts of liquid filled? If we could just make an interface, we could let the rest of our program accept that interface, and bottles, jerrycans and all sorts of stuff would just work interchangeably. Wouldn't that be better? Since interfaces demand methods, this is also a good thing.
We'd end up with something like:
public interface LiquidContainer {
public int getAmountMl();
public void setAmountMl(int amountMl);
public int getCapacityMl();
}
Great! And now we just change Bottle to this:
public class Bottle implements LiquidContainer {
    private int capacityMl;
    private int amountFilledMl;

    public Bottle(int capacityMl, int amountFilledMl) {
        this.capacityMl = capacityMl;
        this.amountFilledMl = amountFilledMl;
        checkNotOverFlow();
    }

    public int getAmountMl() {
        return amountFilledMl;
    }

    public void setAmountMl(int amountMl) {
        this.amountFilledMl = amountMl;
        checkNotOverFlow();
    }

    public int getCapacityMl() {
        return capacityMl;
    }

    private void checkNotOverFlow() {
        if (amountFilledMl > capacityMl) {
            throw new BottleOverflowException();
        }
    }
}
I'll leave the definition of the BottleOverflowException as an exercise to the reader.
Now notice how much more robust this is. We can deal with any type of container in our code now by accepting LiquidContainer instead of Bottle. And how these bottles deal with this sort of stuff can all differ. You can have bottles that write their state to disk when it changes, or bottles that save on SQL databases or GNU knows what else.
And all these can have different ways to handle various whoopsies. The Bottle just checks and if it's overflowing it throws a RuntimeException. But that might be the wrong thing to do.
(There is a useful discussion to be had about error handling, but I'm keeping it very simple here on purpose. People in comments will likely point out the flaws of this simplistic approach. ;) )
And yes, it seems like we go from a very simple idea to getting much better answers quickly.
Please note also that you can't change the capacity of a bottle; it's now set in stone. You could do this with an int by declaring it final. But if it were a list, you could empty it, add new things to it, and so on. With a public field you can't limit access to the innards.
There's also the third thing that not everyone has addressed: getters and setters use method calls. That means that they look like normal methods everywhere else does. Instead of having weird specific syntax for DTOs and stuff, you have the same thing everywhere.
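One more hedged sketch (hypothetical names) of the "you can't limit access to the innards" point above: a getter can hand out a defensive copy of an internal collection, which a public field cannot do:

```python
# Hypothetical sketch: the getter returns a copy, so callers cannot
# mutate the object's internal list behind its back.
class Bottle:
    def __init__(self, capacity_ml):
        self._capacity_ml = capacity_ml
        self._log = []               # internal fill history

    def fill(self, amount_ml):
        self._log.append(amount_ml)

    def fill_history(self):
        return list(self._log)       # defensive copy; innards stay private

b = Bottle(1000)
b.fill(250)
b.fill_history().append(999)         # mutates only the throwaway copy
print(b.fill_history())  # -> [250]
```

Had `_log` been a public field, any caller could have appended bogus entries and the class could never enforce its invariants.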
Well, I just want to add that even if accessors are sometimes necessary for the encapsulation and security of your variables/objects, if we want to write a real object-oriented program then we need to STOP OVERUSING THE ACCESSORS: sometimes we depend on them a lot when it is not really necessary, and that makes it almost the same as if we made the variables public.
I know it's a bit late, but I think there are some people who are interested in performance.
I've done a little performance test. I wrote a class "NumberHolder" which, well, holds an Integer. You can either read that Integer by using the getter method
anInstance.getNumber() or by accessing the number directly via anInstance.number. My program reads the number 1,000,000 times, via both ways. That process is repeated five times and the times are printed. I got the following result:
Time 1: 953ms, Time 2: 741ms
Time 1: 655ms, Time 2: 743ms
Time 1: 656ms, Time 2: 634ms
Time 1: 637ms, Time 2: 629ms
Time 1: 633ms, Time 2: 625ms
(Time 1 is the direct way, Time 2 is the getter)
You see, the getter is (almost) always a bit faster. Then I tried with different numbers of cycles. Instead of 1 million, I used 10 million and 0.1 million.
The results:
10 million cycles:
Time 1: 6382ms, Time 2: 6351ms
Time 1: 6363ms, Time 2: 6351ms
Time 1: 6350ms, Time 2: 6363ms
Time 1: 6353ms, Time 2: 6357ms
Time 1: 6348ms, Time 2: 6354ms
With 10 million cycles, the times are almost the same.
Here are 100 thousand (0.1 million) cycles:
Time 1: 77ms, Time 2: 73ms
Time 1: 94ms, Time 2: 65ms
Time 1: 67ms, Time 2: 63ms
Time 1: 65ms, Time 2: 65ms
Time 1: 66ms, Time 2: 63ms
Also with different amounts of cycles, the getter is a little bit faster than the regular way. I hope this helped you.
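For comparison, a similar micro-benchmark could be sketched in Python as below. This is an assumption-laden toy (invented class, arbitrary iteration count), and the results depend heavily on the runtime: CPython does not inline the getter call, whereas the JVM's JIT usually does, so the two platforms can disagree on which way is faster.

```python
# Hypothetical micro-benchmark: direct attribute access vs. a getter.
import timeit

class NumberHolder:
    def __init__(self, number):
        self.number = number

    def get_number(self):
        return self.number

holder = NumberHolder(42)

direct = timeit.timeit(lambda: holder.number, number=100_000)
getter = timeit.timeit(lambda: holder.get_number(), number=100_000)
print(f"direct: {direct:.4f}s  getter: {getter:.4f}s")
```

As with the Java numbers above, the differences are tiny and drowned out by measurement noise at small iteration counts, which supports the comment below: the overhead is rarely worth worrying about.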
There's a "noticeable" overhead in having a function call to access the memory instead of simply loading an object's address and adding an offset to access the members. Chances are the VM flat-out optimized your getter anyway. Regardless, the mentioned overhead isn't worth losing all the benefits of getters/setters.
Don't use getters and setters unless they're needed for your current delivery. That is, don't think too much about what might happen in the future; if anything has to change, that's a change request in most production applications and systems.
Think simple and easy, and add complexity when needed.
I would not take advantage of business owners' ignorance of deep technical know-how just because I think something is correct or I like the approach.
I have a massive system written without getters and setters, only with access modifiers and some methods to validate and perform business logic. If you absolutely need them, use anything.
I spent quite a while thinking this over for the Java case, and I believe the real reasons are:
Code to the interface, not the implementation
Interfaces only specify methods, not fields
In other words, the only way you can specify a field in an interface is by providing a method for writing a new value and a method for reading the current value.
Those methods are the infamous getter and setter....
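The same idea can be sketched in Python with an abstract base class (hypothetical names): the "interface" can only demand methods, never fields, so a conceptual field ends up expressed as an accessor pair:

```python
# Hypothetical sketch: an interface cannot declare a field, only the
# two methods that read and write it.
from abc import ABC, abstractmethod

class HasName(ABC):
    @abstractmethod
    def get_name(self): ...

    @abstractmethod
    def set_name(self, name): ...

class Person(HasName):
    def __init__(self, name):
        self._name = name

    def get_name(self):
        return self._name

    def set_name(self, name):
        self._name = name

p = Person("Ada")
p.set_name("Grace")
print(p.get_name())  # -> Grace
```

In Python you would normally use an abstract `@property` instead, but the Java situation this answer describes maps onto exactly this getter/setter shape.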
Okay, second question; in the case where it's a project where you're not exporting source to anyone, and you have full control of the source... are you gaining anything with getters and setters?
In any non-trivial Java project you need to code to interfaces in order to make things manageable and testable (think mockups and proxy objects). If you use interfaces you need getters and setters.
We use getters and setters:
for reusability
to perform validation in later stages of programming
Getter and setter methods are public interfaces to access private class members.
Encapsulation mantra
The encapsulation mantra is to make fields private and methods public.
Getter Methods: We can get access to private variables.
Setter Methods: We can modify private fields.
Even though the getter and setter methods do not add new functionality, we can change our minds and come back later to make that method
better;
safer; and
faster.
Anywhere a value can be used, a method that returns that value can be added. Instead of:
int x = 1000 - 500
use
int x = 1000 - class_name.getValue();
In layman's terms
Suppose we need to store the details of this Person. This Person has the fields name, age and sex, and storing it involves creating methods for name, age and sex. Now if we need to create another person, it becomes necessary to create the methods for name, age and sex all over again.
Instead of doing this, we can create a bean class(Person) with getter and setter methods. So tomorrow we can just create objects of this Bean class(Person class) whenever we need to add a new person (see the figure). Thus we are reusing the fields and methods of bean class, which is much better.
It can be useful for lazy-loading. Say the object in question is stored in a database, and you don't want to go get it unless you need it. If the object is retrieved by a getter, then the internal object can be null until somebody asks for it, then you can go get it on the first call to the getter.
I had a base page class in a project that was handed to me that was loading some data from a couple different web service calls, but the data in those web service calls wasn't always used in all child pages. Web services, for all of the benefits, pioneer new definitions of "slow", so you don't want to make a web service call if you don't have to.
I moved from public fields to getters, and now the getters check the cache, and if it's not there call the web service. So with a little wrapping, a lot of web service calls were prevented.
So the getter saves me from trying to figure out, on each child page, what I will need. If I need it, I call the getter, and it goes to find it for me if I don't already have it.
protected YourType _yourName = null;
public YourType YourName {
    get
    {
        if (_yourName == null)
        {
            _yourName = new YourType();
        }
        return _yourName;
    }
}
So then does the getter call the setter?
I've added a code sample of how I've done it in the past - essentially, you store the actual class in a protected member, then return that protected member in the get accessor, initializing it if it is not initialized.
It's C# https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/using-properties
One aspect I missed in the answers so far, the access specification:
for members you have only one access specification for both setting and getting
for setters and getters you can fine tune it and define it separately
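A hedged sketch of this asymmetry (hypothetical names): reading is public while writing is kept internal, which a single plain field cannot express:

```python
# Hypothetical sketch: public read access, but the only write path is an
# internal, controlled method; a plain field would have to be all-or-nothing.
class Counter:
    def __init__(self):
        self._count = 0

    @property
    def count(self):        # public read access
        return self._count

    def increment(self):    # the controlled write path
        self._count += 1

c = Counter()
c.increment()
print(c.count)  # -> 1
# c.count = 10 raises AttributeError: the property defines no setter
```

In C# the same split is written directly as `public int Count { get; private set; }`.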
In languages which don't support "properties" (C++, Java) or require recompilation of clients when changing fields to properties (C#), using get/set methods is easier to modify. For example, adding validation logic to a setFoo method will not require changing the public interface of a class.
In languages which support "real" properties (Python, Ruby, maybe Smalltalk?) there is no point to get/set methods.
Re: C#. If you add functionality to a get/set wouldn't that require recompilation anyway?
@steamer25: sorry, mis-typed. I meant that clients of the class will have to be recompiled.
Adding validation logic to a setFoo method will not require changing the interface of a class at the language level, but it does change the actual interface, aka contract, because it changes the preconditions. Why would one want the compiler to not treat that as a breaking change when it is?
@R.MartinhoFernandes how does one fix this "broken" problem? The compiler can't tell if it's breaking or not. This is only a concern when you are writing libraries for others, but you're making it out as universal zOMG here be dragons!
Requiring recompilation, as mentioned in the answer, is one way the compiler can make you aware of a possible breaking change. And almost everything I write is effectively "a library for others", because I don't work alone. I write code that has interfaces that other people in the project will use. What's the difference? Hell, even if I will be the user of those interfaces, why should I hold my code to lower quality standards? I don't like working with troublesome interfaces, even if I'm the one writing them.
Objective-C uses annotations to mark properties. If the programmer of the class provides no accessor (getter/setter) methods for the annotated member variable, such methods are automatically synthesized. The authoring programmer can always go back and provide an implementation of the accessor anytime she desires. I have no idea why Java has not adopted such a simple solution to lacking properties.
One of the basic principles of OO design: encapsulation!
It gives you many benefits, one of which being that you can change the implementation of the getter/setter behind the scenes but any consumer of that value will continue to work as long as the data type remains the same.
The encapsulation getters and setters offer are laughably thin. See here.
If your public interface states that 'foo' is of type 'T' and can be set to anything, you can never change that. You cannot later decide to make it of type 'Y', nor can you impose rules such as size constraints. Thus if you have a public get/set pair that does nothing but set/get, you are gaining nothing that a public field would not offer, while making the class more cumbersome to use.
If you have a constraint, such as that the value may not be set to null, or has to be within a range, then yes, a public set method is required; but you are still presenting a contract for setting this value that can't change.
Why can't a contract change?
Why should things keep on compiling if contracts change?
You should use getters and setters when:
You're dealing with something that is conceptually an attribute, but:
Your language doesn't have properties (or some similar mechanism, like Tcl's variable traces), or
Your language's property support isn't sufficient for this use case, or
Your language's (or sometimes your framework's) idiomatic conventions encourage getters or setters for this use case.
So this is very rarely a general OO question; it's a language-specific question, with different answers for different languages (and different use cases).
From an OO theory point of view, getters and setters are useless. The interface of your class is what it does, not what its state is. (If not, you've written the wrong class.) In very simple cases, where what a class does is just, e.g., represent a point in rectangular coordinates,* the attributes are part of the interface; getters and setters just cloud that. But in anything but very simple cases, neither the attributes nor getters and setters are part of the interface.
Put another way: If you believe that consumers of your class shouldn't even know that you have a spam attribute, much less be able to change it willy-nilly, then giving them a set_spam method is the last thing you want to do.
* Even for that simple class, you may not necessarily want to allow setting the x and y values. If this is really a class, shouldn't it have methods like translate, rotate, etc.? If it's only a class because your language doesn't have records/structs/named tuples, then this isn't really a question of OO…
But nobody is ever doing general OO design. They're doing design, and implementation, in a specific language. And in some languages, getters and setters are far from useless.
If your language doesn't have properties, then the only way to represent something that's conceptually an attribute, but is actually computed, or validated, etc., is through getters and setters.
Even if your language does have properties, there may be cases where they're insufficient or inappropriate. For example, if you want to allow subclasses to control the semantics of an attribute, in languages without dynamic access, a subclass can't substitute a computed property for an attribute.
As for the "what if I want to change my implementation later?" question (which is repeated multiple times in different wording in both the OP's question and the accepted answer): If it really is a pure implementation change, and you started with an attribute, you can change it to a property without affecting the interface. Unless, of course, your language doesn't support that. So this is really just the same case again.
Also, it's important to follow the idioms of the language (or framework) you're using. If you write beautiful Ruby-style code in C#, any experienced C# developer other than you is going to have trouble reading it, and that's bad. Some languages have stronger cultures around their conventions than others, and it may not be a coincidence that Java and Python, which are on opposite ends of the spectrum for how idiomatic getters are, happen to have two of the strongest cultures.
Beyond human readers, there will be libraries and tools that expect you to follow the conventions, and make your life harder if you don't. Hooking Interface Builder widgets to anything but ObjC properties, or using certain Java mocking libraries without getters, is just making your life more difficult. If the tools are important to you, don't fight them.
From a object orientation design standpoint both alternatives can be damaging to the maintenance of the code by weakening the encapsulation of the classes. For a discussion you can look into this excellent article: http://typicalprogrammer.com/?p=23
Code evolves. private is great for when you need data member protection. Eventually all classes should be sort of "miniprograms" that have a well-defined interface that you can't just screw with the internals of.
That said, software development isn't about setting down that final version of the class as if you're pressing some cast iron statue on the first try. While you're working with it, code is more like clay. It evolves as you develop it and learn more about the problem domain you are solving. During development, classes may interact with each other more than they should (a dependency you plan to factor out), merge together, or split apart. So I think the debate boils down to people not wanting to religiously write
int getVar() const { return var ; }
So you have:
doSomething( obj->getVar() ) ;
Instead of
doSomething( obj->var ) ;
Not only is getVar() visually noisy, it gives the illusion that getting var is somehow a more complex process than it really is. How you (as the class writer) regard the sanctity of var is particularly confusing to a user of your class if it has a passthrough setter: then it looks like you're putting up these gates to "protect" something you insist is valuable (the sanctity of var), yet even you concede var's protection isn't worth much, given the ability for anyone to just come in and set var to whatever value they want, without you even peeking at what they are doing.
So I program as follows (assuming an "agile" type approach -- ie when I write code not knowing exactly what it will be doing/don't have time or experience to plan an elaborate waterfall style interface set):
1) Start with all public members for basic objects with data and behavior. This is why in all my C++ "example" code you'll notice me using struct instead of class everywhere.
2) When an object's internal behavior for a data member becomes complex enough, (for example, it likes to keep an internal std::list in some kind of order), accessor type functions are written. Because I'm programming by myself, I don't always set the member private right away, but somewhere down the evolution of the class the member will be "promoted" to either protected or private.
3) Classes that are fully fleshed out and have strict rules about their internals (ie they know exactly what they are doing, and you are not to "fuck" (technical term) with its internals) are given the class designation, default private members, and only a select few members are allowed to be public.
I find this approach allows me to avoid sitting there and religiously writing getter/setters when a lot of data members get migrated out, shifted around, etc. during the early stages of a class's evolution.
"... a well-defined interface that you can't just screw with the internals of" and validation in setters.
There is a good reason to consider using accessors: there is no property inheritance. See the next example:
public class TestPropertyOverride {
public static class A {
public int i = 0;
public void add() {
i++;
}
public int getI() {
return i;
}
}
public static class B extends A {
public int i = 2;
@Override
public void add() {
i = i + 2;
}
@Override
public int getI() {
return i;
}
}
public static void main(String[] args) {
A a = new B();
System.out.println(a.i);
a.add();
System.out.println(a.i);
System.out.println(a.getI());
}
}
Output:
0
0
4
Getters and setters are used to implement two of the fundamental aspects of Object Oriented Programming which are:
Abstraction
Encapsulation
Suppose we have an Employee class:
package com.highmark.productConfig.types;
public class Employee {
private String firstName;
private String middleName;
private String lastName;
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
public String getMiddleName() {
return middleName;
}
public void setMiddleName(String middleName) {
this.middleName = middleName;
}
public String getLastName() {
return lastName;
}
public void setLastName(String lastName) {
this.lastName = lastName;
}
public String getFullName(){
return this.getFirstName() + this.getMiddleName() + this.getLastName();
}
}
Here the implementation details of the full name are hidden from the user and are not directly accessible, unlike a public attribute.
To me, having tons of getters and setters that do nothing unique is useless. getFullName is an exception because it does something else. Making just the three variables public while keeping getFullName would make the program easier to read, but still keep the full-name logic hidden.
Generally I'm completely fine with getters and setters if
a. they do something unique
and/or
b. you only have one, yeah you could have a public final and all that but nah
The benefit with this is that you can change the internals of the class, without changing the interface.
Say, instead of three properties, you had one - an array of strings. If you've been using getters and setters, you can make that change, and then update the getter/setters to know that names[0] is first name, names[1] is middle, etc.
But if you just used public properties, you would also have to change every class that accessed Employee, because the firstName property they've been using no longer exists.
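As a sketch of that refactoring (hypothetical; not from the original answer), the Employee class could switch to array storage while keeping the same accessor signatures, so no caller has to change:

```java
// Hypothetical refactoring: storage became a single array, but callers of
// getFirstName()/setFirstName() etc. are completely unaffected.
class Employee {
    private final String[] names = new String[3]; // 0: first, 1: middle, 2: last

    public String getFirstName() { return names[0]; }
    public void setFirstName(String firstName) { names[0] = firstName; }

    public String getMiddleName() { return names[1]; }
    public void setMiddleName(String middleName) { names[1] = middleName; }

    public String getLastName() { return names[2]; }
    public void setLastName(String lastName) { names[2] = lastName; }

    public String getFullName() {
        return getFirstName() + getMiddleName() + getLastName();
    }
}
```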
@AndrewHows from what I've seen in real life, when people change the internals of the class they also change the interface and do a big refactoring of all the code
There is a difference between a data structure and an object.
A data structure should expose its innards, not behavior.
An object should not expose its innards, but should expose its behavior; this is closely tied to the Law of Demeter.
DTOs are mostly considered data structures, not objects. They should expose only their data, not behavior. Having setters/getters in a data structure exposes behavior instead of the data inside it, which further increases the chance of violating the Law of Demeter.
Uncle Bob in his book Clean code explained the Law of Demeter.
There is a well-known heuristic called the Law of Demeter that says a
module should not know about the innards of the objects it
manipulates. As we saw in the last section, objects hide their data
and expose operations. This means that an object should not expose its
internal structure through accessors because to do so is to expose,
rather than to hide, its internal structure.
More precisely, the Law of Demeter says that a method f of a class C
should only call the methods of these:
C
An object created by f
An object passed as an argument to f
An object held in an instance variable of C
The method should not invoke methods on objects that are returned by any of the allowed functions.
In other words, talk to friends, not to strangers.
So according this, example of LoD violation is:
final String outputDir = ctxt.getOptions().getScratchDir().getAbsolutePath();
Here, the function should only call methods of its immediate friend, which is ctxt; it should not call methods of its immediate friend's friend. But this rule doesn't apply to data structures. So if ctxt, options, and scratchDir are data structures here, why wrap their internal data with behavior and commit a violation of the LoD?
Instead, we can do something like this.
final String outputDir = ctxt.options.scratchDir.absolutePath;
This fulfills our needs and doesn't even violate LoD.
Inspired by Clean Code by Robert C. Martin(Uncle Bob)
If you don't require any validation, and don't even need to maintain state (i.e. where one property depends on another, so the state must be updated when one changes), you can keep it simple by making the field public and not using getters and setters.
I think OOP complicates things; as the program grows, it becomes a nightmare for the developer to scale.
A simple example: we generate C++ headers from XML. The headers contain simple fields which do not require any validation. But since accessors are the fashion in OOP, we generate them as follows:
const Filed& getfield() const
Field& getField()
void setfield(const Field& field){...}
which is very verbose and not required. A simple
struct
{
Field field;
};
is enough and readable.
Functional programming doesn't have the concept of data hiding; it doesn't even require it, since it does not mutate data.
Additionally, this is to "future-proof" your class. In particular, changing from a field to a property is an ABI break, so if you do later decide that you need more logic than just "set/get the field", then you need to break ABI, which of course creates problems for anything else already compiled against your class.
I suppose that changing the behaviour of the getter or setter isn't a breaking change then.
@R.MartinhoFernandes doesn't always have to be. TBYS
One other use (in languages that support properties) is that setters and getters can imply that an operation is non-trivial. Typically, you want to avoid doing anything that's computationally expensive in a property.
I'd never expect a getter or setter to be an expensive operation. In such cases better use a factory: use all setters you need and then invoke the expensive execute or build method.
One relatively modern advantage of getters/setters is that it makes it easier to browse code in tagged (indexed) code editors. E.g., if you want to see who sets a member, you can open the call hierarchy of the setter.
On the other hand, if the member is public, the tools don't make it possible to filter read/write access to the member, so you have to trudge through all uses of the member.
you can do right click > find usage on a member exactly like on a getter/setter
In an object oriented language the methods, and their access modifiers, declare the interface for that object. Between the constructor and the accessor and mutator methods it is possible for the developer to control access to the internal state of an object. If the variables are simply declared public then there is no way to regulate that access.
And when we are using setters, we can restrict the user to the input we need. That means the value for that variable will come through a proper channel, and the channel is predefined by us. So it's safer to use setters.
Getters and setters come from data hiding. Data hiding means we are hiding data from outsiders, so that an outside person/thing cannot access our data. This is a useful feature in OOP.
As a example:
If you create a public variable, you can access that variable and change its value anywhere (in any class). But if you create it as private, that variable cannot be seen/accessed in any class except the declaring class.
public and private are access modifiers.
So how can we access that variable from outside?
This is where getters and setters come in. You can declare the variable as private, then implement a getter and setter for that variable.
Example(Java):
private String name;
public String getName(){
return this.name;
}
public void setName(String name){
this.name= name;
}
Advantage:
When anyone wants to access or change/set the value of the name variable, he/she must have permission.
//assume we have person1 object
//to give permission to read the name
person1.getName()
//to give permission to set the name
person1.setName()
You can set the value in the constructor as well, but later on, when you want to update/change the value, you have to implement a setter method.
your getBalance/setBalance aren't getter/setter if they have other code, are they? Also why expose balance? Wouldn't it be safer to have an applyPayment or applyDebt that allowed balance checking and maybe a memo field to say where the payment was from? Hey! I just improved your design AND removed the setter and getter. That's the thing about setters/getters, it pretty much always improves your code to remove them, not that they are "Wrong", just that they nearly always lead to worse code. Properties (as in C#), by the way, have exactly the same issue.
We are the developers; a lack of setters/getters does not protect anything, as we can just add them....
From my experience, it is ideal to set variables as private and to provide accessors and modifiers to each variable.
This way, you can create read only variables, or write only variables depending on your requirements.
Below implementation shows a write only variable.
private String foo;
public void setFoo(String foo) { this.foo = foo; }
private String getFoo() { return foo; }
Below shows a read only variable.
private String foo;
private void setFoo(String foo) { this.foo = foo; }
public String getFoo() { return foo; }
I would just like to throw out the idea of annotations: @getter and @setter. With @getter, you should be able to do obj = class.field but not class.field = obj. With @setter, vice versa. With both @getter and @setter you should be able to do both. This would preserve encapsulation and reduce time by not calling trivial methods at runtime.
It would be implemented at runtime with "trivial methods". Actually, probably non-trivial.
Today this can be done with an annotation preprocessor.
I can think of one reason why you wouldn't just want everything public.
For instance, a variable you never intended to use outside of the class could be accessed, even indirectly via chained variable access (i.e. object.item.origin.x).
By having mostly everything private, keeping only the stuff you want to extend and possibly refer to in subclasses as protected, and generally having only static final objects as public, you can control what other programmers and programs can use in the API and what they can't, using setters and getters to expose only the things you want the program, or other programmers who happen to use your code, to be able to modify.
A clear example of this is when setting one attribute's value affects one or more other attribute values. Trivial example: say you store the radius of a circle. Using circ.set_radius() doesn't really achieve anything that setting the radius directly through circ.radius doesn't. However, get_diameter(), get_circumference() and get_area() can perform calculations based on the radius. Without getters, you have to perform the calculations yourself and check that you've got them right.
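A minimal sketch of that comment (the class is illustrative; the formulas are just the standard geometry): the radius is plain state, while the derived quantities are computed by getters.

```java
// Illustrative example: one stored attribute, three computed getters.
class Circle {
    private double radius;

    public Circle(double radius) { this.radius = radius; }

    public void setRadius(double radius) { this.radius = radius; }
    public double getRadius() { return radius; }

    public double getDiameter()      { return 2 * radius; }
    public double getCircumference() { return 2 * Math.PI * radius; }
    public double getArea()          { return Math.PI * radius * radius; }
}
```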
If you want a readonly variable but don't want the client to have to change the way they access it, try this templated class:
template<typename MemberOfWhichClass, typename primative>
class ReadOnly {
friend MemberOfWhichClass;
public:
template<typename number> inline bool operator==(const number& y) const { return x == y; }
template<typename number> inline number operator+ (const number& y) const { return x + y; }
template<typename number> inline number operator- (const number& y) const { return x - y; }
template<typename number> inline number operator* (const number& y) const { return x * y; }
template<typename number> inline number operator/ (const number& y) const { return x / y; }
template<typename number> inline number operator<<(const number& y) const { return x << y; }
template<typename number> inline number operator^ (const number& y) const { return x ^ y; }
inline primative operator~() const { return ~x; } // not a template: with no argument, `number` could not be deduced
template<typename number> inline operator number() const { return x; }
protected:
template<typename number> inline number operator= (const number& y) { return x = y; }
template<typename number> inline number operator+=(const number& y) { return x += y; }
template<typename number> inline number operator-=(const number& y) { return x -= y; }
template<typename number> inline number operator*=(const number& y) { return x *= y; }
template<typename number> inline number operator/=(const number& y) { return x /= y; }
primative x;
};
Example Use:
class Foo {
public:
ReadOnly<Foo, int> cantChangeMe;
};
Remember you'll need to add bitwise and unary operators as well! This is just to get you started
Please note that the question was asked without a specific language in mind.
Although not a common motivation for getters and setters, these methods can also be used in AOP/proxy patterns.
E.g., for auditing a variable you can use AOP to audit updates of any value.
Without a getter/setter this is not possible without changing the code everywhere.
Personally I have never used AOP for that, but it shows one more advantage of using getters/setters.
I wanted to post a real world example I just finished up:
Background: I use Hibernate Tools to generate the mappings for my database, a database I am changing as I develop. I change the database schema, push the changes, and then run Hibernate Tools to generate the Java code. All is well and good until I want to add methods to those mapped entities. If I modify the generated files, they will be overwritten every time I make a change to the database. So I extend the generated classes like this:
package com.foo.entities.custom
class User extends com.foo.entities.User{
public Integer getSomething(){
return super.getSomething();
}
public void setSomething(Integer something){
something+=1;
super.setSomething(something);
}
}
What I did above is override the existing methods on the super class with my new functionality (something+1) without ever touching the base class. Same scenario if you wrote a class a year ago and want to go to version 2 without changing your base classes (testing nightmare). hope that helps.
"What I did above is override the existing methods on the super class with my new functionality." With functionality that we all hope was already properly documented for the old interface, right? Otherwise you just violated the LSP and introduced a silent breaking change that would have been caught by the compiler if there were no getters/setters in sight.
@R.MartinhoFernandes "... and then run hibernate tools to generate the java code" this isn't a case of changing the behavior of some class, this is a work around to a shitty tool. Perhaps the contract (which in this case is in the subclass) always was for that "surprise". Could be an argument for has-a instead of is-a though.
I fail to see what you achieve by not using the superclass' properties directly.
Trouble with a media query for a button
So I've implemented a button, a double-angle up arrow inside of a circle, on a site I'm designing which allows the user to quickly scroll back to the top:
Now I have a media query that's supposed to shrink both the circle and the arrow on screens narrower than 767px. However, the circle and the arrow end up separating.
The circle shrinks, but the arrow moves off-screen, and I don't understand why.
Code ("#toTop" is the circle):
#toTop{
position: fixed;
bottom: 10px;
right: 10px;
cursor: pointer;
display: none;
}
#toTop .fa {
margin: 5px;
}
@media(max-width:767px) {
#toTop{
position: fixed;
height: 30px;
width: 30px;
bottom: 10px;
right: 10px;
cursor: pointer;
display: none;
}
#toTop .fa {
font-size: 0.5em;
}
}
Can you add your html too?
@Jase: https://github.com/a-minor-threat/pf-landing-page/blob/gh-pages/index.html
cool, a downvote from some jerk.
I solved my problem by removing the circle.
Theory behind Targeted Maximum Likelihood Estimation (TMLE)
There are many fine how-to articles describing how to implement TMLE but they avoid the details of the underlying theory. I'm currently working my way through Targeted Learning: Causal Inference for Observational and Experimental Data by Mark J. van der Laan and Sherri Rose. The math isn't terribly complicated but the notation and terminology is a bit confusing.
I understand TMLE's aim of finding an unbiased estimate of the Average Treatment Effect by using machine learning, and am familiar with the theory behind causal inference, the Super Learner algorithm, and doubly robust models, but I hit a brick wall when it comes to calculating the efficient influence curve, the "clever covariate", and guaranteeing the final ATE estimate's unbiasedness with the Central Limit Theorem.
My understanding is that TMLE uses the delta method (1st order Taylor series) to approximate the ATE and then converges to an estimate of the ATE via gradient descent(?). Am I too far off?
See also https://stats.stackexchange.com/questions/407444/targeted-maximum-likelihood-estimation-for-dummies, https://stats.stackexchange.com/questions/134572/what-is-targeted-maximum-likelihood-expectation
@kjetilbhalvorsen I did read those questions but unfortunately they weren't helpful. Possibly targeted learning is still such a young field that aside from Mark J. van der Laan and Sherri Rose there aren't a lot of statisticians who do completely understand TMLE theory?
I should also add that I haven't been able to find a copy of Frank R. Hampel's 1974 paper on influence curves, "The Influence Curve and its Role in Robust Estimation" that isn't stuck behind a paywall.
I can mail you a copy if you give email
@kjetilbhalvorsen Yes, thank you! My email<EMAIL_ADDRESS>
Found an article that does a good job explaining the theory behind using influence functions to estimate ATEs: https://arxiv.org/pdf/1810.03260.pdf
And another article describing the fundamentals of functionals and functional derivatives, concepts which are indispensable to understanding TMLE theory: https://cds.cern.ch/record/1383342/files/978-3-642-14090-7_BookBackMatter.pdf
I found another helpful introduction to functionals and functional derivatives posted here by Professor Benhamin Svetitsky at Tel Aviv University: julian.tau.ac.il/~bqs/functionals.pdf
David Benkeser and Antoine Chambaz wrote a useful explanation of TMLE theory here: https://achambaz.github.io/tlride/
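For anyone arriving at this thread later: as I understand these references, the central object is the efficient influence curve (EIC) of the ATE parameter $\psi(P) = E_W[\bar{Q}(1,W) - \bar{Q}(0,W)]$, where $\bar{Q}(A,W) = E[Y \mid A, W]$ is the outcome regression and $g(W) = P(A = 1 \mid W)$ is the propensity score. Its standard form is:

```latex
D^*(P)(O)
  = H(A,W)\,\bigl(Y - \bar{Q}(A,W)\bigr)
    + \bar{Q}(1,W) - \bar{Q}(0,W) - \psi(P),
\qquad
H(A,W) = \frac{A}{g(W)} - \frac{1-A}{1-g(W)}
```

The factor $H(A,W)$ is exactly the "clever covariate" used in TMLE's fluctuation step. Because the targeted estimate solves the EIC estimating equation (the empirical mean of the EIC is zero), the estimator is asymptotically linear, and the CLT applied to the EIC values gives the standard error. So it is not gradient descent on the ATE; the fluctuation is a (possibly iterated) parametric MLE step along the direction of the EIC.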
How to reuse a linq expression for 'Where' when using multiple source tables
Let's have the following:
public interface IOne {
UInt64 Id { get; }
Int16 Size { get; }
}
public interface ITwo {
UInt64 OneId { get; }
Int16 Color { get; }
}
As explained here the way to reuse a linq expression is to write something like this:
public static Expression<Func<IOne, bool>> MyWhereExpression( int value ){
return (o) => (o.Size > value);
}
int v = 5;
IQueryable<IOne> records = from one in s.Query<IOne>()
.Where(MyWhereExpression(v))
select one;
When I want to do the same with two tables I encounter a problem.
The expression:
public static Expression<Func<IOne, ITwo, bool>> MyWhereExpression2(int color ) {
return (one,two) => (one.Id == two.OneId) && (two.Color > color );
}
Linq 1:
int color = 100;
IQueryable<IOne> records = from one in s.Query<IOne>()
from two in s.Query<ITwo>()
.Where(MyWhereExpression2(color))
select one;
This doesn't work as .Where sticks to only the 2nd from.
Linq 2:
int color = 100;
IQueryable<IOne> records = (from one in s.Query<IOne>()
from two in s.Query<ITwo>()
select new { one, two })
.Where(MyWhereExpression2(color));
This results in
Argument 2: cannot convert from 'Expression<System.Func<IOne,ITwo,bool>>'
to 'System.Func<AnonymousType#1,int,bool>'
I understand the error message about the AnonymousType, but I cannot figure out how to write the query.
The reason why I want to use an expression rather than just write
where (one.Id == two.OneId) && (two.Color > color )
directly in the linq query is because I want to reuse this expression in multiple linq queries.
Answering my own question...
After some experiments (including Nick Guerrera's original answer) I took another approach: instead of trying to reuse the expression, I reuse the whole LINQ query. However, it still required creating a container struct.
struct Pair {
public IOne One { get; set; }
public ITwo Two { get; set; }
public Pair(IOne one, ITwo two) : this() {
One = one;
Two = two;
}
}
public IQueryable<Pair> Get(ISession s, int color) {
return from one in s.Query<IOne>()
from two in s.Query<ITwo>()
where (one.Id == two.OneId) && (two.Color > color)
select new Pair(one, two);
}
Now I can call
Get(s, color).Count();
and
var records = (from data in Get(s, color) select data).Take(2000);
etc.
There may be a more elegant solution that escapes me at the moment, but you could just use Tuple<IOne, ITwo> instead of the anonymous type:
static Expression<Func<Tuple<IOne, ITwo>, bool>> MyWhereExpression2(int color) {
return t => (t.Item1.Id == t.Item2.OneId) && (t.Item2.Color > color);
}
int color = 100;
IQueryable<IOne> records = (from one in s.Query<IOne>()
from two in s.Query<ITwo>()
select Tuple.Create(one, two))
.Where(MyWhereExpression2(color))
.Select(t => t.Item1);
UPDATE: I probably answered too quickly above as that won't work with Linq to SQL since the call to Tuple.Create cannot be translated to SQL. To work with Linq to SQL, the only solution I see at the moment is to create a named type:
class Pair
{
public IOne One { get; set; }
public ITwo Two { get; set; }
}
static Expression<Func<Pair, bool>> MyWhereExpression2(int color) {
return p => (p.One.Id == p.Two.OneId) && (p.Two.Color > color);
}
int color = 100;
IQueryable<IOne> records = (from one in s.Query<IOne>()
from two in s.Query<ITwo>()
select new Pair { One = one, Two = two })
.Where(MyWhereExpression2(color))
.Select(p => p.One);
It looks like .Zip() is not supported by LINQ to SQL either.
Nick, this is a great answer. Unfortunately, even the updated variant will not work with NHibernate (3.2.0); its LINQ-to-SQL provider will throw an exception.
First of all, why do you have two tables with the same set of fields? Ideally, database design should avoid repeating the same field names in different tables; the design should use inheritance instead. Common fields should move into a base class, and EF lets you create a table per hierarchy, which should resolve your problem.
If you create your model with Table Per Hierarchy, then you will not need the interfaces, and your LINQ queries can use shared filter expressions.
There is no way to achieve what you are asking unless you sit down and write a complex reflection-based method that will clone your expression from one type to another type.
Where did I say I have two tables with the same set of fields? I want to reuse the expression for a count query and then for a query which pulls the data. I also don't understand how interfaces affect linq queries.
Sed replace string with whitespaces
I have a string which has a format like this :
{u city : u Zurich , u name : u Hauptbahnhof , u address : u Test address, C106, Bahnhofsstrasse }
I need to remove all the "u " markers (with the spaces) and replace the ", u " (with the spaces) with a line break, but I have no idea how I could do that.
Is it possible with sed?
Output should be like
{city :Zurich
name :Hauptbahnhof
address :Test address, C106, Bahnhofsstrasse }
Thank you guys
Where did you obtain said string? To me it seems like an attempt to create JSON from Python, if so go fix your Python code.
it's from a json output which has been altered, but i have to use it like this unfortunately
The following seems to work (with some whitespace differences):
's/, u /\n/g;s/\bu //g'
i.e. first replace all ", u " with newlines, then remove all u, where u is not preceded by a word character.
Note that the output isn't a valid JSON.
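For anyone who wants to verify it, here is the command run end-to-end on the sample string (GNU sed; \n in the replacement and \b are GNU extensions):

```shell
# Sample input from the question, kept on one line.
input='{u city : u Zurich , u name : u Hauptbahnhof , u address : u Test address, C106, Bahnhofsstrasse }'
# First replace ", u " with newlines, then drop the remaining "u " tokens.
result=$(printf '%s\n' "$input" | sed -e 's/, u /\n/g;s/\bu //g')
printf '%s\n' "$result"
```

Note that ", C106," survives because the pattern requires the literal " u " after the comma.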
This is really fragile, as any word ending with u will get it removed.
@andlrc: fixed.
i get the error "unterminated `s' command"
i use sed -e and i use it after a pipe, maybe that helps..
@Damon: please show the whole line. Also, what OS and sed version?
cat silo_json | sed -e 's/, u /\n/g;s/\bu //g'
RHEL 6.8
GNU sed version 4.2.1
@Damon Works for me with the same sed version, weird. What about sed -e 's/, u /\n/g' -e 's/\bu //g'?
Omg.. my mistake i forgot to remove a sed statement in the script, your solution works perfectly.
Use Perl command-line substitution as below; the \b anchors match exact words so other strings aren't messed up.
perl -pe 's/\bu \b//g;' -pe 's/\b , \b/\n/g;' file
{city : Zurich
name : Hauptbahnhof
address : Test address, C106, Bahnhofsstrasse }
And as pointed by others, if it is a broken JSON use jq or other ways to fix it.
same problem as above
i get the error "unterminated `s' command"
i have to use the string like that and output doesn't need to be json
i used a tmp-file for your example to check if it works
.. my mistake i forgot to remove a sed statement in the script, your solution works perfectly.
yeah sure, sorry that i gave him the solution but his worked too and he brought it up first, but thanks again
How to add translation class/model to an existing Sylius model?
I am trying to add translations to Sylius' product variant model, but having some trouble configuring the resource.
When dealing with custom models it is easy to add translation classes, just create the necessary classes with the right interfaces and then include them in the resources configuration file, like below:
# resources.yml
app.orientation:
driver: doctrine/orm
classes:
model: AppBundle\Entity\Orientation\Orientation
translation:
classes:
model: AppBundle\Entity\Orientation\OrientationTranslation
I have already added the necessary classes to the product variant and customised the product variant itself to make it translatable. The last step is to activate the translation classes (only the model in this case). The problem is that when I try to add the model to my config.yml I get the following error:
Unrecognized option "translation" under
"sylius_product.resources.product_variant"
So how am I supposed to enable the translation of the product variant model?
Configuration reference:
# config.yml
sylius_product:
resources:
product_variant:
classes:
factory: AppBundle\Factory\Product\ProductVariantFactory
model: AppBundle\Entity\Product\ProductVariant
form:
default: AppBundle\Form\Type\Product\ProductVariantType
translation:
classes:
model: AppBundle\Entity\Product\ProductVariantTranslation
There is no "translation" entry in the vendor/sylius/sylius/src/Sylius/Bundle/ProductBundle/DependencyInjection/Configuration.php file.
That's why you get that error.
I guess you just have to define your translation class in the sylius_resource section
sylius_resource:
app.product:
translation:
classes:
model: AppBundle\Entity\ProductTranslation
That won't work unfortunately because you have to configure the classes node. Exception message:
The child node "classes" at path "sylius_resource.resources.app.product_variant" must be configured.
What about adding it as well and pointing to the Sylius default one?
Based on @ylastapis comments I came up with the following solution:
sylius.product_variant:
classes:
factory: AppBundle\Factory\Product\ProductVariantFactory
interface: Sylius\Component\Product\Model\ProductVariantInterface
model: AppBundle\Entity\Product\ProductVariant
repository: AppBundle\Repository\ProductVariantRepository
form: Sylius\Bundle\ProductBundle\Form\Type\ProductVariantType
translation:
classes:
model: AppBundle\Entity\Product\ProductVariantTranslation
I entered this in my resources.yml file along with the other custom resources. A clear drawback of this solution is that it's quite tightly coupled to the ProductVariant, as I have to reference all the class types that do not use the default classes provided by the resource bundle.
How to suppress ITK warning messages in python
Whenever I load .img.gz file using python medpy.io load function, I get a warning message like:
WARNING: In /usr/share/miniconda/envs/bld/conda-bld/simpleitk_1598369168428/work/build/ITK/Modules/IO/NIFTI/src/itkNiftiImageIO.cxx, line 1009
NiftiImageIO (0x56268f287910): /data/temp.img.gz is Analyze file and it's deprecated
Is there any way to suppress warning messages?
I tried using the logging library and setting logging.disable(sys.maxsize),
and importing warnings and setting
warnings.simplefilter("ignore", category=PendingDeprecationWarning)
Neither method worked for me.
ITK warning message display can be globally toggled with itk.ProcessObject.SetGlobalWarningDisplay. So you want
itk.ProcessObject.SetGlobalWarningDisplay(False)
See the static function itk::Object::SetGlobalWarningDisplay. C++ docs here.
This worked for me, and I think most specifically answers the OP's question about ITK.
You might want to check out How to disable warning in simpleitk.ReadImage in Python on Github.
You can disable all warning displays of SimpleITK by calling SetGlobalWarningDisplay.
import SimpleITK as sitk
from medpy.io import load
sitk.ProcessObject_SetGlobalWarningDisplay(False)
image_data, image_header = load('path/to/image.nii.gz')
sitk.ProcessObject_SetGlobalWarningDisplay(True)
What if you don't specify the category?
I do it this way in a case-by-case basis so that it doesn't affect the entire python file.
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
# do whatever after this
Update: It's probably the C++ side emitting the warning rather than Python. You'll have to dig into the C++ more to catch it.
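Because the message is written by native (C++) code, Python's warnings module never sees it. One blunt but generic workaround, using only the standard library, is to temporarily point the process-level stderr file descriptor at the null device while loading the file; this sketch simulates the native write with os.write(2, ...):

```python
import contextlib
import os
import sys

@contextlib.contextmanager
def suppress_stderr():
    """Temporarily send fd 2 to the null device, silencing C/C++ writes too."""
    sys.stderr.flush()
    saved_fd = os.dup(2)                          # keep the real stderr
    devnull_fd = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull_fd, 2)                    # fd 2 now goes to /dev/null
        yield
    finally:
        os.dup2(saved_fd, 2)                      # restore the original stderr
        os.close(saved_fd)
        os.close(devnull_fd)

with suppress_stderr():
    # Stand-in for the ITK warning; with medpy this would wrap the load(...) call.
    os.write(2, b"WARNING: ... is Analyze file and it's deprecated\n")
print("done")
```

Note this hides all stderr output inside the block, so prefer SetGlobalWarningDisplay when it works; this is only a fallback for messages that ignore that switch.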
https://github.com/InsightSoftwareConsortium/ITK/blob/34231b57021418fdb6afd4fcf5082a73b12969ed/Modules/IO/NIFTI/src/itkNiftiImageIO.cxx#L1009 and then look at itkWarningMacro: https://itk.org/Doxygen318/html/itkMacro_8h.html#a0c47033c6ca9aae1319b1793f4419ada
I tried to access the itkNiftiImageIO.cxx, but I was not able to find it. It says miniconda folder does not exist.
Sorry, what I said was imprecise. If you change the C++ source then you'd need to recompile the python module. But you probably shouldn't do that. What I meant to say earlier is that you'd need to understand how C++ is logging it, what itkWarningMacro does, so that you have a chance of capturing it with Python. You may also check the NIFTI github page to create an issue or search for solutions.
What does the forceBinFolderScan attribute in the EPiServerFramework.config file mean?
I was studying for EPiServer exam and have read this article about EPiServer initialization:
http://world.episerver.com/Documentation/Items/Tech-Notes/EPiServer-CMS-6/EPiServer-CMS-60/Initialization/
It describes how to configure initialization to limit the assemblies to be scanned during startup, but it mentions a forceBinFolderScan attribute which is not described. The examples always have it set to true.
I also found another article which describes how to improve startup time of EPiServer site:
http://world.episerver.com/Blogs/Alexander-Haneng/Dates/2011/12/Starting-CMS-6-R2-sites-faster-after-build/
The author says to change that attribute to false, but doesn't explain what it does. Does setting it to false stop assembly scanning altogether, or does it still scan under some conditions?
I have a quite large site and it takes several minutes to load. I have a lot of assemblies in the bin folder, so I wanted to limit scanning to only those which contain initialization modules or plugins.
Because of the ASP.NET build system and app-startup optimization algorithms, not all assemblies may be loaded in the app domain at startup. This may hide some assemblies from the EPiServer initialization system, which is executed during app start. To avoid this, EPiServer lets you set the forceBinFolderScan flag to force scanning the bin/ folder (and the probing folder for v7) for assemblies. If the flag is set to true, EPiServer does not ask the AppDomain for loaded assemblies; instead it scans the filesystem and loads the DLLs that are not yet loaded.
Valdis is right - forceBinFolderScan flag just forces scanning bin and loading assemblies into AppDomain. For more information see my answer.
I decompiled the EPiServer source code and found that setting the forceBinFolderScan attribute to true makes the EPiServer framework scan each DLL in the bin folder. It iterates through all DLLs and loads them into the AppDomain.
If this attribute is set to false, it doesn't scan the bin folder, but relies on ASP.NET assembly loading.
After assemblies are loaded into the AppDomain from bin (or not loaded, if the attribute is false), it gets all assemblies from the AppDomain and then filters them based on the add/remove configuration. So add/remove tags do not affect bin folder scanning.
This behavior, and the reason for it, is actually described in the EPiServer documentation: MEF - The Discovery Mechanism (see the note at the bottom of the paragraph). The author just didn't mention that it is controlled by the forceBinFolderScan attribute.
UPDATE
Here is an example which describes that behavior.
Assume that we have two assemblies: MyPlugins.dll - which contains some plugins and NoPlugins.dll. Also assume that those DLLs are not loaded by ASP.NET at application start.
We have this configuration which tells not to force scan bin folder and include all assemblies except NoPlugins.dll:
<scanAssembly forceBinFolderScan="false">
<add assembly="*" />
<remove assembly="NoPlugins.dll" />
</scanAssembly>
With such configuration MyPlugins.dll and NoPlugins.dll is not loaded in AppDomain and can't be scanned. So no plugins will be available.
If we set forceBinFolderScan="true":
<scanAssembly forceBinFolderScan="true">
<add assembly="*" />
<remove assembly="NoPlugins.dll" />
</scanAssembly>
With such configuration both DLLs are explicitly loaded from bin folder into AppDomain. Then filtering occurs and NoPlugins.dll is removed from module/plugin scanning collection. As a result plugins from MyPlugins.dll will be loaded.
UPDATE 2
I tested different configurations to see if the optimizations help, but it seems that filtering assemblies gives even worse results.
My site has quite a lot of assemblies, so it takes a lot of time to start up. I tried 3 different scenarios.
1:
<scanAssembly forceBinFolderScan="false">
<add assembly="*" />
<!-- Remove assemblies generated by EPiOptimizer -->
<remove assembly="..." />
</scanAssembly>
2:
<scanAssembly forceBinFolderScan="false">
<add assembly="*" />
</scanAssembly>
3:
<scanAssembly forceBinFolderScan="true">
<add assembly="*" />
</scanAssembly>
I performed simple tests, 3 times each: did an iisreset, refreshed the page in Chrome and checked the timeline (I know it is not a perfect test). Here are the results:
1: 1.3 min, 1 min, 1.3 min
2: 1 min, 57 s, 1 min
3: 57 s, 57 s, 57 s
I tried these tests several more times in a different order, but the results were the same. So it is not reasonable to optimize by adding remove tags; it seems that filtering out assemblies which are loaded in the AppDomain is more expensive than scanning all of them. The strange thing is that forcing the bin folder scan gives better results, but that is probably an issue with my test.
With that flag set to true, EPiServer will scan the bin folder during the site startup for assemblies that have EPiServer plugins. This detection is done by loading the assemblies and checking if they have classes that have the EPiServer plugin attribute or are EPiServer initialization modules.
You can use this tool to check which assemblies can be removed from that section, in order tom improve startup time.
http://www.david-tec.com/2011/11/Optimising-EPiServer-start-up-times-during-build-with-EPiOptimiser/
But when this flag is set to false, it does not scan the bin folder? If so, then how are plugins and initialization modules loaded? Right now in my project it is set to false with this add tag: <add assembly="*" />. Initialization seems to occur successfully and plugins are found too.
Plugins and modules will be found anyway. The question is how much will be discovered with "forceBinFolderScan" set to "false". This flag was introduced because of the ASP.NET build system: during app startup (when the EPiServer initialization system runs), not all assemblies may yet be loaded in the app domain (an optimization).
You've told it not to scan the bin by setting it to false, but then you tell it to scan all assemblies by using the wildcard.
That is correct, you tell it to not scan the bin folder and add all assemblies from it, with the exception of the ones with the keyword.
That's not true. When the setting is false, you tell it not to scan assemblies explicitly, but those are still loaded by ASP.NET at some point, and EPiServer just filters the assemblies already loaded in the AppDomain. It means that if your assembly is not loaded in the AppDomain, it will not be scanned for any initialization modules or plugins even if it has an add entry.
With forceBinFolderScan set to true, the EPiServer initialization engine will scan through all assemblies in the bin folder. If an assembly contains a reference to MEF (System.ComponentModel.Composition.dll), it is scanned for classes implementing IInitializableModule. Any classes found then have their Initialize method called.
If you set forceBinFolderScan to false, then no assemblies are scanned, and you have to explicitly add any assemblies containing plugins by adding an entry in the scanAssembly element. E.g
<scanAssembly forceBinFolderScan="false">
<add assembly="AssemblyWithPlugins.dll" />
</scanAssembly>
Or the opposite, you can exclude assemblies known not to contain plugins, or those you don't wish to be scanned, and initialized by the initialization engine:
<scanAssembly forceBinFolderScan="true">
<remove assembly="AssemblyWithOutPlugins.dll" />
</scanAssembly>
Only scanning specific assemblies can improve site startup time, but you have to remember to update the list with any new assemblies added, or any plugins in those assemblies won't be loaded by EPiServer.
You can see this activity in the Log4Net log: by setting the log level to Warn or higher, you'll see messages such as "Not scanning {0} since it lacks any references to MEF" etc.
There is a great blog post about leveraging this feature to improve site startup time
Explicitly setting which assemblies to scan with "add/remove" doesn't affect anything if those assemblies are not loaded in the AppDomain; "add/remove" only filters assemblies already loaded in the AppDomain. Also, the "remove" tag is only used when there is also an "add" tag; otherwise it is ignored.
Law regarding mixing harsh and mild wines?
In the Rambam's Introduction to the Mishnah, he mentions a Halacha L'Moshe Mesinai regarding mixing different sorts of wine. The quoted text is ביין התירו לערב קשה ברך הלכה למשה מסיני ("with wine they permitted mixing hard into soft, a law given to Moses at Sinai"). Does anybody know what this law is?
Welcome to MiYodeya and thanks for this first question Yitzchak. Since MY is different from other sites you might be used to, see here for a guide which might help understand the site. Great to have you learn with us!
The source is Mishnah Bava Metzi’a 4:11.
בֶּאֱמֶת, בְּיַיִן הִתִּירוּ לְעָרֵב קָשֶׁה בְרַךְ, מִפְּנֵי שֶׁהוּא מַשְׁבִּיחוֹ.
In truth they permitted one to mix hard wine into soft, as it improves it.
As explained there by Bartenura, if one agrees to sell soft wine, he may mix some hard into it, as that increases the value of the product he is selling.
Is it safe to store my passwords in a text document protected by a password?
How easily could someone break into a password protected text file like Word?
have you googled "how to crack a Word password"?
Why not use keepass2 or a similar password manager? It's designed for storing passwords in a file protected by a master password.
For modern versions of Word (Word 2007 and later), it appears that Microsoft has got the encryption pretty much right: They use multiple iterations of SHA-1 (since Office 2013, this has been replaced with SHA-512) for key derivation, and they use AES with a 128-bit key for encryption, compared to the older schemes that used everything from a 16-bit key to a 40-bit key. So if you are using Word 2007 or later, and ideally Word 2013 or later, it should be reasonably secure. Formats compatible with versions prior to 2007 will not be secure at all.
However, at least historically, Word has been known to litter temporary files all around itself while you are working on a document. I'm not sure if this is still the case, but remember that Word is a word processor; it's not really designed to keep secrets against a determined adversary who has access to your system. The document password protection only really helps if the adversary can get their hands only on the intentionally saved document file, and nothing else. There are many threat models where this protection would not be sufficient, and many of those would seem to apply to lists of passwords.
So it stands to reason that you could get reasonable protection against certain threats by keeping your passwords in an encrypted document maintained in a recent version of Word making sure to use non-backward-compatible formats.
On the other hand, by using a tool specifically designed to securely keep a list of passwords, you get one that is much smaller (thus far less risk of a bug having crept in, and far more likely for a compromising bug to be taken seriously), tailored for the purpose (thus far less likely to litter plaintext all around), and intended for the purpose (thus likely has useful features such as the ability to generate passwords, set password expiration dates, etc.). That's a password manager.
I discuss this also on my personal web site, where my recommendation is to use a password manager. (Actually, that's currently my second top advice, second only to do not share your passwords with anyone.)
Whichever way you go, wherever you store your list of passwords will require a high-grade passphrase. Remember that anyone who is able to guess that password will have access to all of your accounts; treat it accordingly. I suggest reasonably long Diceware passphrases (also) for this.
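As an illustration of the Diceware idea (not the official scheme: a real Diceware list has 7,776 words, while the list below is a tiny stand-in), a passphrase can be generated with Python's secrets module:

```python
import secrets

# Tiny stand-in word list; real Diceware uses a published 7776-word list.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "plasma",
         "walrus", "copper", "maple", "quartz", "velvet", "zephyr"]

def passphrase(n_words=6, sep=" "):
    """Join n_words uniform, independent picks made with a CSPRNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
```

With a genuine 7,776-word list, each word adds about 12.9 bits of entropy, so six words give roughly 77 bits.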
i can't belive word is actually decent about this! good info, thanks for sharing.
Related and highly highly useful info: it appears that LibreOffice offers a very high level of security - https://askubuntu.com/questions/827178/does-libreoffice-encrypt-password-protected-files
Some programs are made for encryption. Other programs are not.
Microsoft Office products' password-protected features (such as preventing certain spreadsheet cells from being edited, or a document from being viewed) are historically meant as convenience features. In 2003, the average office worker wouldn't open documents with special software to get around limitations. These days, such knowledge has matured and tools have been made widely available.
I don't know what the current state of affairs is in Microsoft Office Word, but I'd generally recommend to use special purpose software, i.e. a password manager. They are made to be secure and have features such as automatically erasing keys from memory.
To generally answer your question though: it could be secure, provided that the software you use does proper encryption. Encrypting a document with WinRAR would be secure, for example. But it's just a lot easier to use software that was made for this purpose, like KeePass.
Office 2007 and later uses iterated SHA-1 (later SHA-512) for KDF and AES-128 for encryption, at least according to Wikipedia. Bypassing that would be reasonably non-trivial. If you are using a different tool to encrypt and decrypt the document (such as your suggested WinRAR) then you have the problem that a plaintext version will necessarily exist somewhere, and will be recoverable using widely available data recovery tools.
@MichaelKjörling I read about MS Office in your answer (and upvoted) after posting this. And regarding having plaintext copies around, that's why I recommend using a password manager.
For storing passwords you should use file formats and software designed for this purpose. One popular tool is KeePass2. Use a good passphrase and increase the default iteration count, and you will make life much harder for a potential attacker.
The OP's question is: how easy is it to break a password-protected Word file?
Just google that phrase to see that it is a trivial matter to open one. There are tools specifically designed to remove the password from Office files.
Tegoo is correct that you should use a program designed for passwords.
You are storing passwords in a password-protected file, and that is not safe. You could try https://www.truekey.com/ for storing passwords, as it is a product of Intel. With it you only need to remember the master password; your other passwords are managed by True Key itself.
Given the Intel AMT vulnerability I'm not sure I'd call them security-focused. KeePass2 is a better option IMHO.
Intel is in the business of making CPUs and related hardware, and software when it is required to make the hardware work. Also, instead of just pointing at a different product (which can be borderline spam), you should elaborate on why "storing [passwords] [in a] password protected file [...] is not safe".
Why promote this specific product and not password managers in general?
While I'm not fond of this answer, True Key actually seems like a fairly decent password manager. I read the white paper available on their website and at least on the surface it seems like they've made a lot of good design decisions including client-side encryption, a reasonably strong KDF, etc. My wife "upgraded" to True Key after Intel bought out and dropped support for PasswordBox and for the most part it's been pretty smooth sailing, at least smooth enough to make her uninterested in looking at any other options, many of them more expensive or complicated.
SetThreadAffinityMask of pooled thread
I am wondering whether it is possible to set the processor affinity of a thread obtained from a thread pool. More specifically the thread is obtained through the use of TimerQueue API which I use to implement periodic tasks.
As a side note: I found timer queues the easiest way to implement periodic tasks, but since these are usually long-lived tasks, might it be more appropriate to use dedicated threads for this purpose? Furthermore, it is anticipated that synchronization primitives such as semaphores and mutexes will be needed to synchronize the various periodic tasks. Are pooled threads suitable for this?
Thanks!
EDIT1: As Leo has pointed out, the above question is actually two only loosely related ones. The first is related to the processor affinity of pooled threads. The second is related to whether pooled threads obtained from the TimerQueue API behave just like manually created threads when it comes to synchronization objects. I will move this second question to a separate topic.
If you do this, make sure you return things to how they were every time you release a thread back to the pool. Since you don't own those threads and other code which uses them may have other requirements/assumptions.
Are you sure you actually need to do this, though? It's very, very rare to need to set processor affinity. (I don't think I've ever needed to do it in anything I've written.)
Thread affinity can mean two quite different things. (Thanks to bk1e's comment to my original answer for pointing this out. I hadn't realised myself.)
What I would call processor affinity: Where a thread needs to be run consistently on the same processor. This is what SetThreadAffinityMask deals with, and it's very rare for code to care about it. (Usually it's due to very low-level issues like CPU caching in high performance code. Usually the OS will do its best to keep threads on the same CPU and it's usually counterproductive to force it to do otherwise.)
What I would call thread affinity: Where objects use thread-local storage (or some other state tied to the thread they're accessed from) and will go wrong if a sequence of actions is not done on the same thread.
From your question it sounds like you may be confusing #1 with #2. The thread itself will not change while your callback is running. While a thread is running it may jump between CPUs but that is normal and not something you have to worry about (except in very special cases).
Mutexes, semaphores, etc. do not care if a thread jumps between CPUs.
If your callback is executed by the thread pool multiple times, there is (depending on how the pool is used) usually no guarantee that the same thread will be used each time. i.e. Your callback may jump between threads, but not while it is in the middle of running; it may only change threads each time it runs again.
Some synchronization objects will care if your callback code runs on one thread and then, still thinking it holding locks on those objects, runs again on a different thread. (The first thread will still hold the locks, not the second one, although it depends which kind of synchronization object you use. Some don't care.) That isn't a #1, though; that's #2, and not something you'd use SetThreadAffinityMask to deal with.
As an example, Mutexes (CreateMutex) are owned by a thread. If you acquire a mutex on Thread A then any other thread which tries to acquire the mutex will block until you release the mutex on Thread A. (It is also an error for a thread to release a mutex it does not own.) So if your callback acquired a mutex, then exited, then ran again on another thread and released the mutex from there, it would be wrong.
On the other hand, an Event (CreateEvent) does not care which threads create, signal or destroy it. You can signal an event on one thread and then reset it on another and that's fine (normal, in fact).
It'd also be rare to hold a synchronization object between two separate runs of your callback (that would invite deadlocks, although there are certainly situations where you could legitimately want/do such a thing). However, if you created (for example) an apartment-threaded COM object then that would be something you would want to only access from one specific thread.
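As an aside, the owned-vs-unowned distinction isn't Windows-specific. For example (an analogy, not the Win32 API), Python's threading.RLock is owned by the acquiring thread and refuses to be released from any other thread, much like a Win32 mutex, while threading.Event does not care which thread sets or clears it:

```python
import threading

lock = threading.RLock()
lock.acquire()              # the main thread owns the lock now

errors = []

def release_from_other_thread():
    try:
        lock.release()      # not the owner -> RuntimeError
    except RuntimeError as exc:
        errors.append(exc)

t = threading.Thread(target=release_from_other_thread)
t.start()
t.join()

assert len(errors) == 1     # the foreign release was rejected
lock.release()              # the owning thread may release it
```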
Microsoft's documentation also uses "thread affinity" to refer to the processor affinity of a specific thread: http://msdn.microsoft.com/en-us/library/ms684251%28VS.85%29.aspx
@bk1e, I stand corrected. That's confusing, then. :) By the sound of it I think the OP is worried about objects which need to be used consistently from a single thread rather than threads which need to be run consistently on a single CPU. I'll clarify my answer. Thanks!
"Some synchronization objects will care [...] Some don't care." Could you give an example of a synchronization object that DOES care?
@Arne, Sure, I added a couple of examples for you.
You shouldn't. You're only supposed to use that thread for the job at hand, on the processor it's running on at that point. Apart from the obvious inefficiency, the threadpool might destroy every thread as soon as you're done, and create a new one for your next job. The affinity masks wouldn't disappear that soon in practice, but it's even harder to debug if they disappear at random.
ES6 'import' causing error with Babel
I am trying to make a poker game using JavaScript ES6, but even with Babel, when the game is run the following error is thrown:
Unexpected reserved word { import Hand from './hand';
I have the following in my node_modules:
babel
babel-core
babel-loader
babel-preset-es2015
webpack.config.js:
"use strict";
module.exports = {
context: __dirname,
entry: "./player.js",
output: {
path: "./bundle",
filename: "bundle.js"
},
module: {
loaders: [
{
test: [/\.jsx?$/, /\.js?$/],
exclude: /node_modules/,
loader: 'babel',
query: {
presets: ['es2015']
}
}
]
},
devtool: 'source-maps',
resolve: {
extensions: ["", ".js", '.jsx']
}
};
I am trying to simply run player.js to test the constructor:
import Hand from './hand';
export default class Player {
constructor(name) {
this.name = name;
this.hand = dealHand();
}
dealHand() {
Hand.deal();
}
}
let me = new Player("sam");
console.log(`my name is ${me.name}`)
console.log(me.hand);
How are you transpiling with Babel? Are you using webpack? It sounds like somewhere it's not actually picking up the transpiled version.
It's not enough to just install these modules—you have to add the presets you use to the .babelrc file.
run this: node -r babel-register player.js
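(For the babel-register route above to transpile anything, Babel needs to find its options; a minimal .babelrc in the project root matching the preset you already installed would be:)

```json
{
  "presets": ["es2015"]
}
```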
@finalfreq Yes, I added the config file above.
It'd be worthwhile to npm i --save-dev babel-cli then do node_modules/.bin/babel player.js to see what the output looks like.
Can you copy/paste the full error message including stack trace please?
Why is the covariance operator of a Gaussian measure not defined on the dual space?
I am studying Gaussian measures on nuclear spaces, which for concreteness I take to be Schwartz space $\mathcal{S}'(\mathbb{R}^n)$. Just as in finite dimensions, these Gaussian measures can be uniquely characterized by their mean and covariance. However, unlike in finite dimensions, the covariance acts on the base space $\mathcal{S}(\mathbb{R}^n) \times \mathcal{S}(\mathbb{R}^n)$. But given that the measure itself is defined on the dual space $\mathcal{S}'(\mathbb{R}^n)$, I cannot get a good intuition for why the covariance operator is not also defined on the dual space $\mathcal{S}'(\mathbb{R}^n) \times \mathcal{S}'(\mathbb{R}^n)$.
I think this is a technical detail that results from how the Gaussian measure is constructed (i.e. from finite-dimensional subspaces), but I am not sure. Is there some better way to think about why the covariance operator acts on $\mathcal{S}(\mathbb{R}^n) \times \mathcal{S}(\mathbb{R}^n)$ instead of $\mathcal{S}'(\mathbb{R}^n) \times \mathcal{S}'(\mathbb{R}^n)$?
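For reference, and assuming the measure $\mu$ is centered, the finite-dimensional covariance $C_{ij} = \int_{\mathbb{R}^n} x_i x_j \, d\mu(x)$ generalizes to the pairing

$$C(f,g) = \int_{\mathcal{S}'(\mathbb{R}^n)} \langle \varphi, f \rangle \, \langle \varphi, g \rangle \, d\mu(\varphi), \qquad f, g \in \mathcal{S}(\mathbb{R}^n),$$

so the test functions $f, g$ play the role of the coordinates $x_i$; note that the dual pairing $\langle \varphi, f \rangle$ is only defined for $f \in \mathcal{S}(\mathbb{R}^n)$ when $\varphi \in \mathcal{S}'(\mathbb{R}^n)$.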
Atiyah-Macdonald: Exercise 1.8
Let $A$ be a ring $\neq0$. Show that the set of prime ideals of $A$ has minimal elements with respect to inclusion.
I am trying to do this exercise from Atiyah-Macdonald.
Attempt:
Assume there is no such minimal prime ideal. Then we have a chain $P_1 \supset P_2 \supset P_3 \supset \dots$. Setting $P = \bigcap_i P_i$ would then give a minimal element, but I can't see why it should be prime.
Do you know Zorn's lemma ?
Yes; to use this I would also need that the intersection of prime ideals is a prime ideal.
@Mark Murray Not just arbitrary intersections, but particular intersections of collections of primes which are totally ordered by inclusion (or more generally rendered into downward directed sets by inclusion).
Strongly useful here: https://math.stackexchange.com/questions/2724350/prove-the-intersection-of-two-prime-ideals-is-prime-if-and-only-if-one-is-a-subs It's not that the intersection of prime ideals is prime (which is false; consider $\langle 2 \rangle \cap \langle 3 \rangle = \langle 6 \rangle$ in $\Bbb{Z}$) but it is true if you know one more more fact about the two ideals.
By the way, the intersection of an infinite descending chain of prime ideals is not necessarily minimal as you claim. To get minimality, you have to continue the chain transfinitely and ensure it is “downward cofinal”. Showing that’s possible is essentially the proof of Zorn’s lemma.
@Peter LeFanu Lumsdaine The official term for expressing the notion of ''downward cofinal'' being ''coinitial'', just as a tiny remark on terminology.
Hint: Prove it by contraposition: if neither $x$ nor $y$ belongs to $\mathfrak p$, then $xy$ is not in $\mathfrak p$.
You'll have to show first that, with this hypothesis on $x$ and $y$, there exists a prime ideal $\mathfrak p_i$ in the chain which contains neither $x$ nor $y$.
A last remark: to show the existence of minimal prime ideals, you should consider a totally ordered (by inclusion) family of prime ideals, not a mere sequence. You have no reason to suppose this family is countable.
Thanks for a good hint, i'll try this and get back to you!
The neat way to argue this is to realise that $\mathscr{Spec}(A)$ is inductively ordered by the dual of the inclusion (so that we may apply Zorn's lemma). More generally, consider a subset $\mathscr{M} \subseteq \mathscr{Spec}(A)$ which is upward directed with respect to the dual of inclusion or equivalently downward directed with respect to inclusion itself. Our objective is to prove that $\mathscr{M}$ has an upper bound with respect to the dual of inclusion, which amounts to a lower bound with respect to inclusion itself.
Since $A$ is not a degenerate ring, $\mathscr{Spec}(A) \neq \varnothing$ is nonempty and any prime ideal serves as a lower bound in the particular case $\mathscr{M}=\varnothing$.
When $\mathscr{M} \neq \varnothing$, let us consider $P\colon=\bigcap\mathscr{M}$ and show it is a prime ideal. As it is the nonempty intersection of a collection of proper ideals, it is itself a proper ideal. Assume for contradiction that it were not prime; this would mean there exist $a, b \in A \setminus P$ such that $ab \in P$. Since neither $a$ nor $b$ is in the intersection of all members of $\mathscr{M}$, there must exist ideals $Q, R \in \mathscr{M}$ such that $a \notin Q$ and $b \notin R$. Since $\mathscr{M}$ is downward directed with respect to inclusion, there exists $T \in \mathscr{M}$ such that $T \subseteq Q, R$.
Since $ab \in P \subseteq T$ and $T$ is prime, we must have either $a \in T$ or $b \in T$, which leads to either $a \in Q$ or $b \in R$; both are contradictions.
Remark: I have not argued at this level of generality above, but the same claim remains valid for arbitrary non-degenerate rings (the assumption of commutativity is not required).
This looks like an excellent answer to come back to after I have learned about Spec!
@Mark Murray For the purposes of this problem, you need nothing beyond knowing that it is standard notation for the set of all prime ideals of a given ring. (It is true that this notation is introduced in the context of defining the Zariski topology, which the prime spectrum -- as it is called -- naturally carries; however, here we are only concerned with the underlying set of that topological space.)
Sort the list and insert a new column in mongodb
I need to sort states and figure out the highest temperature for each state in this weather data residing in mongodb.
How do I iterate through each record in JavaScript and insert a new column 'month_highest' once I figure out the highest temperature for that state?
Day Time State Airport Temperature Humidity WindSpeed
1 54 Vermont BTV 39 57 6
1 154 Vermont BTV 39 57 4
1 254 Vermont BTV 39 57 5
1 354 Vermont BTV 38 70 5
1 454 Vermont BTV 34 92 11
16 53 Florida ORL 46 71 9
16 153 Florida ORL 47 71 8
16 253 Florida ORL 46 73 8
16 353 Florida ORL 47 74 8
16 453 Florida ORL 46 79 7
16 553 Florida ORL 46 79 5
16 653 Florida ORL 46 83 4
What have you tried? There aren't records or columns in MongoDB. Where would you insert the value?
I'd suggest you look at using the aggregation framework. Lots of good examples here that are similar in complexity here: http://docs.mongodb.org/manual/tutorial/aggregation-examples/
I recommend using Aggregation as well. MongoDB Map-Reduce is single-threaded and very slow.
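To illustrate the aggregation approach recommended above, here is a minimal sketch. The collection name `temps` and the field names come from the sample data, but the pipeline itself is an assumption, not code from the original answers. The plain-JavaScript helper mimics what the `$group`/`$max` stage computes, so the logic can be checked outside a database:

```javascript
// Aggregation pipeline (mongosh syntax) that groups by state and takes the max:
//   db.temps.aggregate([
//     { $group: { _id: "$State", month_high: { $max: "$Temperature" } } }
//   ]);

// A plain-JavaScript equivalent of that $group/$max stage, for illustration:
function maxTempByState(docs) {
  const highs = {};
  for (const d of docs) {
    if (!(d.State in highs) || d.Temperature > highs[d.State]) {
      highs[d.State] = d.Temperature;
    }
  }
  return highs;
}

// A few rows from the sample data above:
const sample = [
  { State: "Vermont", Temperature: 39 },
  { State: "Vermont", Temperature: 34 },
  { State: "Florida", Temperature: 46 },
  { State: "Florida", Temperature: 47 },
];

console.log(maxTempByState(sample)); // { Vermont: 39, Florida: 47 }
```

Unlike mapReduce, `$group` runs natively in the server and does not require writing JavaScript reduce functions, which is why it is usually much faster.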
MongoDB Map-Reduce
db.temps.mapReduce(
    /* map: emit each document's temperature keyed by state */
    function() {
        emit(this.State, this.Temperature);
    },
    /* reduce: keep the maximum temperature seen for each state */
    function(key, values) {
        var ret = 0;
        for (var i = 0; i < values.length; i++) {
            if (values[i] > ret) ret = values[i];
        }
        return ret;
    },
    /* params -- the date filter assumes a DateTime field on the documents */
    {
        query: {
            DateTime: {
                $gte: new Date('2013-01-01')
                ,$lt: new Date('2013-02-01')
            }
        }
        ,sort: {
            State: 1
        }
    }
);
As for inserting a new month-max, that depends on how you want to store it. How are you querying this data?
You could output the results into a new collection, or use them programmatically to update another collection. It depends on how you want your data shaped. Your data actually looks quite flat as it stands.
I was thinking we can sort first by state, then by temperature. Then you can iterate through the documents and know that whenever the state changes you have reached the highest temperature for that state. The resulting output should have "month_high" inserted.
{
"_id" : ObjectId("520bea012ab230549e749cff"),
"Day" : 1,
"Time" : 54,
"State" : "Vermont",
"Airport" : "BTV",
"Temperature" : 39,
"Humidity" : 57,
"Wind Speed" : 6,
"month_high" : true
}
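One way to realise the "sort, then flag on state change" idea from the comment above is sketched below in plain JavaScript, so the logic can be checked outside the shell. The field and collection names are assumptions based on the sample documents; in mongosh you would pair the computed maxima with an `updateMany` write-back, as shown in the trailing comment:

```javascript
// Flag the hottest reading per state with month_high, following the
// "sort by State, then Temperature descending" idea from the comment above.
function markMonthHighs(docs) {
  const sorted = [...docs].sort(
    (a, b) => a.State.localeCompare(b.State) || b.Temperature - a.Temperature
  );
  let prevState = null;
  for (const d of sorted) {
    if (d.State !== prevState) {
      d.month_high = true; // first row of each state group is its maximum
      prevState = d.State;
    }
  }
  return sorted;
}

// A few rows from the sample data above:
const docs = [
  { State: "Vermont", Temperature: 39 },
  { State: "Vermont", Temperature: 34 },
  { State: "Florida", Temperature: 47 },
  { State: "Florida", Temperature: 46 },
];

// In mongosh, the equivalent write-back would be something like:
//   db.temps.updateMany({ State: "Vermont", Temperature: 39 },
//                       { $set: { month_high: true } });
console.log(markMonthHighs(docs));
```

Note that if two documents tie for a state's maximum, only the first one encountered gets the flag; you would need an extra rule if you want all ties marked.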