Non-hacky way to share a boolean state between classes?
Say I have
class Cat:
    def __init__(self, var, n_calls=None):
        self.var = var
        if n_calls is None:
            self.n_calls = []
        else:
            self.n_calls = n_calls

    def greet(self):
        if self.n_calls:
            raise ValueError('Already greeted, sorry')
        self.n_calls.append(1)
        print('hello')

    def change_var(self, new_var):
        return Cat(new_var, n_calls=self.n_calls)
A given cat can only greet once. But also, if any Cat derived from a given Cat greets, then no other Cat derived from that same initial Cat can greet.
Here's an example:
cat = Cat(3)
new_cat = cat.change_var(4)
other_cat = cat.change_var(5)
cat2 = Cat(6)
new_cat.greet() # passes
cat2.greet() # passes
other_cat.greet() # raises
new_cat.greet() passes, because it's the first time that new_cat greets. cat2.greet() also passes, because it's the first time cat2 greets.
But other_cat.greet() fails, because other_cat and new_cat were both derived from the same Cat, and new_cat has already greeted.
The code I've written works for what I'm trying to do, but using a list to share state feels very hacky.
Is there a non-hacky way to do this?
It sounds like the answer in a non-toy example is likely to be "change your design". This is a very contrived example.
Please describe the actual problem you're trying to solve. Toy examples aren't really helpful.
Give each Cat some sharable context object, say CatContext:
from __future__ import annotations

from dataclasses import dataclass
from typing import Any


@dataclass
class CatContext:
    has_greeted: bool = False


class Cat:
    def __init__(self, var: Any, context: CatContext | None = None):
        self.var = var
        if context is None:
            context = CatContext()
        self.context = context

    def greet(self) -> None:
        if self.context.has_greeted:
            raise ValueError('Already greeted, sorry')
        self.context.has_greeted = True
        print('hello')

    def change_var(self, new_var: Any) -> Cat:
        return Cat(new_var, context=self.context)
Any mutable object will do, but using a type that explicitly documents the intent of being shared state between multiple instances makes the code much more understandable.
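A quick end-to-end check of the shared-context behaviour (the classes condensed from the code above, plus the example from the question):

```python
from dataclasses import dataclass


@dataclass
class CatContext:
    has_greeted: bool = False


class Cat:
    def __init__(self, var, context=None):
        self.var = var
        self.context = context if context is not None else CatContext()

    def greet(self):
        if self.context.has_greeted:
            raise ValueError('Already greeted, sorry')
        self.context.has_greeted = True
        print('hello')

    def change_var(self, new_var):
        # Derived cats share the same context object.
        return Cat(new_var, context=self.context)


cat = Cat(3)
new_cat = cat.change_var(4)
other_cat = cat.change_var(5)
cat2 = Cat(6)

new_cat.greet()        # passes
cat2.greet()           # passes: independent lineage, fresh context
try:
    other_cat.greet()  # raises: shares a context with new_cat
except ValueError as e:
    print(e)
```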
isn't that almost exactly what they're doing already?
@gog: In terms of implementation mechanics, sure, but using a named class is much clearer and cleaner, especially if more shared context needs to be added.
To obtain the same effect, you can link the shared instance in the object returned by change_var. This will avoid carrying an extra object (list) in the method signatures:
class Cat:
    def __init__(self, var):
        self.var = var
        self.shared = None
        self.greeted = False

    def greet(self):
        if (self.shared or self).greeted:
            raise ValueError('Already greeted, sorry')
        (self.shared or self).greeted = True
        print(f'hello {self.var}')

    def change_var(self, new_var):
        changed = Cat(new_var)
        changed.shared = self.shared or self
        return changed
This gives you a general approach for other shared behaviours/states between instances.
You could also make the code a bit more legible by using a property for greeted that manages an internal variable self._greeted for retrieval and assignment (instead of (self.shared or self).greeted)
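A sketch of what that property-based variant could look like (an illustration of the suggestion, not a definitive implementation):

```python
class Cat:
    def __init__(self, var):
        self.var = var
        self.shared = None      # root Cat of the lineage, or None if we are the root
        self._greeted = False

    @property
    def greeted(self):
        # Read the flag from the lineage's root.
        return (self.shared or self)._greeted

    @greeted.setter
    def greeted(self, value):
        # Write the flag on the lineage's root.
        (self.shared or self)._greeted = value

    def greet(self):
        if self.greeted:
            raise ValueError('Already greeted, sorry')
        self.greeted = True
        print(f'hello {self.var}')

    def change_var(self, new_var):
        changed = Cat(new_var)
        changed.shared = self.shared or self
        return changed


cat = Cat(3)
kitten = cat.change_var(4)
kitten.greet()  # marks the whole lineage as greeted
```

With the property in place, greet() reads naturally and the root-lookup logic lives in exactly one place.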
How about self.shared = self instead of None?
I believe that would cause issues with object release when they go out of scope because of the self reference (and I didn't want to get into that)
Mailchimp background-size cover
I tried to make a background image cover in many different ways but Mailchimp seems to delete that piece of code.
Any approach on how to make this work?
Cheers,
Michael
When you say it deletes it, at what point do you notice? Is it after sending a test mail, or after just clicking Next? A number of email clients do not support it, which could be the issue and explain why it is being removed.
Ah Mailchimp. Well, in custom templates the best thing to do is literally DO IT ALL. Meaning, add it to the table or TD with CSS AND the default table styling.
Also you have to take into account the Outlook users that your email will go to.
SO you have to add a few things.
Let's start with the html tag above the head. First off, the best doctype to use is XHTML 1.0 transitional. Now I know that you CAN do one with an empty html tag and NO DOCTYPE, but you aren't doing yourself any favors.
Change <html> to <html xmlns="http://www.w3.org/1999/xhtml" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office">
This is going to say "You are using Microsoft. Okay, where are those rules?"
Next off, in the CSS you need to add this:
#outlook a{
padding:0;
}
.ReadMsgBody{
width:100%;
}
body{
width:100% !important;
min-width:100%;
-webkit-text-size-adjust:100%;
-ms-text-size-adjust:100%;
}
.ExternalClass{
width:100%;
}
v*{
behavior:url(#default#VML);
display:inline-block;
}
Pay attention to the ones like v* and #outlook a.
SO you have those, and now you can add in the stuff for the background. In EACH table or td that you want a background, you have to add it inline. BUT it is good practice to ALSO use a VML wrapper. This doesn't have to be live code; it can be commented out and will still be read as a backup.
<!--[if gte mso 9]>
<v:rect xmlns:v="urn:schemas-microsoft-com:vml" fill="true" stroke="false" style="width:600px;">
<v:fill type="tile" src="YOURIMAGEPATH.jpg" color="#000000"/>
<v:textbox inset="0,0,0,0">
<![endif]-->
<table cellpadding="0" cellspacing="0" border="0" width="600" bgcolor="#000000" background="YOURIMAGEPATH.jpg" style="background-image:url(YOURIMAGEPATH.jpg)">
<tr>
<td align="center">
<!--YOUR CONTENT-->
</td>
</tr>
</table>
<!--[if gte mso 9]>
</v:textbox>
</v:rect>
<![endif]-->
Notice how the entire table is wrapped in that commented section? That says that basically if your recipient is using Outlook, it is going to go ahead and render the background.
Campaign Monitor made a really nifty tool to do just this for table and td. Remember too that you can NOT use cover or skew the background at all. Just make it the 100% size your email will be.
<!--Hopefully not more than 600px-->
Backgrounds.cm by Campaign Monitor
Changing the GET request query string value of the address bar from the view
Is this possible?
Say that we have something like this:
public ActionResult sim(string test)
{
return View();
}
So I can call it by doing something like this:
localhost:55319/test/rat/sim?test=hi
Would it be possible to change the value of the test query string explicitly?
I tried it w/
@{
Request.Params.Set("test","hello");
}
and my program just breaks. What I want to happen is to change the query string value of test without issuing another GET request, and only from the view itself.
The query string is a very generic way of passing values through an HTTP request; you decide the query string when you create the link! But I am not able to understand why you want to change it after receiving the request.
Simply for experimental purposes as of now; I just want to know if it's possible to change it through code. I know it can be accessed through Request, but is it possible to change it as well?
Are you trying to change it in the browser URL (in the address bar)?
Long and short, the query string is part of the URL, i.e. part of the thing that uniquely identifies a resource. When you change it, you're changing the resource you're linking to, which to a browser means loading a new page. You can't change it server-side without performing a redirect. It's for this reason that Request is read-only, which is why you can't set a param to something else without getting an error.
A) Server side: You can change a query string by redirecting to the same page with a different value for your parameter. The client will see a page refresh which may not be very pleasant.
return Redirect("/test/rat/sim?test="+ newValue);
B) Client side: If you just want the URL on the browser look different (without page refresh, i.e. redirect to a new page) you need to use javascript's History object (https://developer.mozilla.org/en-US/docs/Web/API/History_API) which, very unfortunately, is not supported in old browsers.
How can I combine two separate scripts being piped together to make one script instead of two?
I have two scripts that I pipe together: script1.sh | script2.sh. Originally they were part of the same script, but I could never make it work correctly. The last part of script1 calls youtube-dl to read a batch file and outputs a list of URLs to the terminal. Note the trailing - allows youtube-dl to read from stdin.
cat $HOME/file2.txt | youtube-dl --ignore-config -iga -
And script2 begins with:
while read -r input
do
    ffmpeg [arg] [input] [arg2] [output]
What am I not seeing that is causing the script to hang when the two halves are combined yet work if one is piped into the other?
EDIT - It's kind of funny how the answer is in the question. Live and learn.
The short answer is that a | is needed to make the scripts work together. In the question above I originally had a script that ended like this:
cat $HOME/file2.txt | youtube-dl --ignore-config -iga -
while read -r input
do
    ffmpeg [arg] [input] [arg2] [output]
But this does not work. We need to pipe into the while loop:
cat "$HOME/file2.txt" | youtube-dl --ignore-config -iga - | while read -r input
But we get the same results in a more efficient manner by doing this instead:
youtube-dl --ignore-config -iga "$HOME/file2.txt" | while read -r input
or if you rather:
youtube-dl --ignore-config -iga "$HOME/file2.txt" | \
while read -r input
I probably would use something like this (line by line processing):
#!/usr/bin/bash
inputFile="$HOME/file2.txt"
while read -r line
do
    # Pass each line's URL directly; with -a - youtube-dl would block
    # waiting for a batch list on stdin.
    youtubeResult=$(youtube-dl --ignore-config -ig "$line")
    ffmpeg [arg] "$youtubeResult" [arg2] [output]
done < "$inputFile"
@RanyAlbegWein Do you mean youtubeResult assignment, no? I have updated the answer.
You have spaces before and after the assignment operator.
@RanyAlbegWein Thanks, I haven't noticed this.
@tripleee Thank you for this useful website.
New feature in react-router-dom v6 useOutletContext() issue: what versions support it
I use react-router-dom and react-router v6.0.2 and I want to use useOutletContext(), one of the new features in v6, to pass props through Outlet, but it returns this issue.
How can I fix it, and what versions support useOutletContext()?
The useOutletContext hook and Outlet context was a feature introduced in v6.1.0.
<Outlet> can now receive a context prop. This value is passed to
child routes and is accessible via the new useOutletContext hook.
Uninstall the current version
npm uninstall -s react-router-dom
Either bump to this minimum version
npm install -s react-router-dom@6.1.0
or install the current/latest version
npm install -s react-router-dom@6
npm install -s react-router-dom@latest
How to create a tolerance that makes all low values in the session become zero?
I'm making some calculations where I get values like number*e-17, but I would like to make all those small values become zero.
Is there a way to make something like a tolerance that will change low values to zero in the whole program?
I'm using sympy btw.
many thanks,
This is an "XY problem": the correct solution in SymPy is to not make those small numbers to begin with. For that, use symbolic constants instead of numeric ones, as here
You could use math.isclose to test for closeness to zero, and set the values accordingly. Note that when comparing against zero you must pass an absolute tolerance, because the default relative tolerance alone never matches a nonzero value against 0:
import math
value = 0 if math.isclose(value, 0, abs_tol=1e-12) else value
more details in python docs
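To apply the same check across many values at once, a small helper can wrap it (the abs_tol of 1e-12 here is an arbitrary choice; tune it to your own notion of "small"):

```python
import math


def chop_small(values, abs_tol=1e-12):
    """Replace values that are within abs_tol of zero with exact zero."""
    return [0 if math.isclose(v, 0, abs_tol=abs_tol) else v for v in values]


print(chop_small([3.0, 2.5e-17, -1.1e-16, 0.4]))  # → [3.0, 0, 0, 0.4]
```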
linear independent over $\mathbb{Q}$
Let $r_1, r_2, \cdots, r_n$ be distinct rational numbers in the interval $(0,1)$. How to prove that in the space $\mathbb{R}$ over $\mathbb{Q}$ the numbers $2^{r_1}, \cdots, 2^{r_n}$ are independent?
Do you know how to prove that the square root of two is irrational? Notice that this corresponds to the case where $n=2, r_1 = 0, r_2 = \frac12$.
Yes, of course, I know. But how to prove it for $n=3$?
We may without loss of generality assume that
$i < j \Longrightarrow r_i < r_j; \tag 0$
since $0 < r_i \in \Bbb Q$, $1 \le i \le n$, we may write
$r_i = \dfrac{p_i}{q_i}, \; p_i, q_i \in \Bbb Z, 0 < p_i < q_i; \tag 1$
we choose $m = \text{lcm} \{q_i \mid 1 \le i \le n \}, \tag 2$
that is, $m$ is the least common multiple of the denominators $q_i$; as such, it is the least common denominator of the $r_i = p_i / q_i$, and so each $r_i$ may be written
$r_i = \dfrac{k_i}{m}, \; 0 < k_i \in \Bbb Z; \tag 3$
we note that (0) implies $k_i < k_j$ for $i < j$; if the numbers $2^{r_i}$ are linearly dependent over $\Bbb Q$, we have $\alpha_i \in \Bbb Q$, not all $0$, with
$\displaystyle \sum_1^n \alpha_i 2^{r_i} = 0; \tag 4$
we observe that at least two of the $\alpha_i \ne 0$, lest $2^{r_i} = 0$ for some $i$, impossible; using (3), (4) may be written
$\displaystyle \sum_1^n \alpha_i 2^{k_i / m} = 0; \tag 5$
since $r_i \in (0, 1)$ for all $i$, we have $k_1 > 0$ and $k_n < m$; thus we may factor $2^{k_1/m}$ out of (5) and, setting $t_i = k_i - k_1$, so that $t_1 = 0$ and $t_n < m$, write
$\displaystyle \sum_1^n \alpha_i 2^{t_i / m} = 0; \tag 6$
let
$f(x) = \displaystyle \sum_1^n \alpha_i x^{t_i} \in \Bbb Q[x]; \tag 7$
then (6) affirms that
$f(2^{1/m}) = 0; \tag 8$
we also have
$\deg f(x) \le t_n < m; \tag 9$
it is evident that $r=2^{1/m}$ satisfies
$g(x) = x^m - 2 \in \Bbb Q[x]; \tag{10}$
Eisenstein with $p = 2$ shows that $g(x)$ is irreducible over $\Bbb Q$; hence $2^{1/m}$ can satisfy no polynomial of degree less than $\deg g(x) = m$, but this contradicts (8), whence there can be no linear dependence such as (4). And we are done.
Note: in retrospect, having written up this proof, the division through by $2^{k_1/m}$ and the introduction of the $t_i$ don't really appear necessary; I must have been thinking along a different track when I introduced them; nevertheless, this doesn't affect the validity of the result, and it's less work to leave those steps in place than edit then out. End of Note.
In "Eisenstein with p= shows", what is p supposed to equal?
(1). Let $D$ be a positive common denominator of $r_1,...,r_n.$ Of course $1<D\in \Bbb N.$
Let $x=2^{1/D}.$
For $1\leq j\leq n$ let $r_j=N_j/D .$ We have $D>N_j\in \Bbb N$ and also $i\ne j\implies N_i\ne N_j.$
Suppose $q_1,..., q_n$ are rationals, not all $0,$ such that $\sum_{j=1}^nq_j2^{r_j}=0.$ We have $0=\sum_{j=1}^nq_jx^{N_j}.$
Now $f(y)=\sum_{j=1}^nq_jy^{N_j}$ is a polynomial in $y,$ of degree $d,$ where $1\leq d<D,$ with rational coefficients. We will return to $f(y)$ in (3), below.
(2). We quote an elementary theorem of Gauss: If $g(y)\in \Bbb Z[y]$ and $g(y)$ is irreducible over $\Bbb Z$ then $g(y)$ is irreducible over $\Bbb Q. $
The polynomial $y^D-2$ satisfies Eisenstein's Criterion so it is irreducible over $\Bbb Z,$ so by the above theorem it is irreducible over $\Bbb Q. $
(3). We have $x^D-2=0.$ Let $ h(y)$ be a non-$0$ member of $\Bbb Q[y]$ of smallest possible degree, such that $h(x)=0.$ Then for any $g(y)\in \Bbb Q[y],$ if $g(x)=0$ then $h(y)$ is a divisor of $g(y)$ in the ring $\Bbb Q[y].$
Referring back to $f(y)$ in (1): Since $f(x)=0,$ therefore $h(y)$ divides $f(y)$ so $\text {deg} (h)\leq \text { deg } (f)<D.$
But $x^D-2=0$ so $h(y)$ also divides $y^D-2$ in $\Bbb Q[y],$ with $\text {deg } (h)<D.$
This implies that $y^D-2$ is reducible in $\Bbb Q[y],$ a contradiction to (2).
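As a quick computational sanity check of the irreducibility claim in (2), assuming SymPy is available:

```python
from sympy import Poly, symbols

y = symbols('y')

# y^D - 2 satisfies Eisenstein's criterion at p = 2, hence is irreducible
# over Q for every D >= 1; spot-check a few degrees.
for D in (2, 3, 6, 12):
    assert Poly(y**D - 2, y, domain='QQ').is_irreducible
print('y**D - 2 is irreducible over Q for D in (2, 3, 6, 12)')
```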
Conjugate of compact Lie subgroup strictly contained in itself
I'm currently reading the book "Lie Groups and Geometric Aspects of Isometric Actions" by Alexandrino and Bettiol and I'm having problems understanding a particular argument on page 73, Proposition 3.74:
$M$ is supposed to be a Riemannian manifold on which the Lie group $G$ acts properly and isometrically.
And for $x, y \in M$ we denote by $G_x, G_y$ the respective isotropy subgroups (which of course are compact).
What I don't understand: Why does $g G_x g^{-1} \subset G_y \subset G_x$ already imply that there must be equalities everywhere?
To put it more generally: Let $G$ be a Lie group and $H$ a compact subgroup and suppose that $g H g^{-1} \subset H$ for some $g \in G$. Why is it true that $g H g^{-1} = H$?
Of course, for a finite group this is trivial but I have seen examples for infinite groups where this property fails.
I would bet it has to do with the specific topology of isotropy groups, not just that they're compact. E.g. an injective, continuous map from a sphere to itself must be a homeomorphism (I think). I assume the same may be true of orthogonal groups.
Let $F=gHg^{-1}$.
If $H$ is connected, then $F$ is also connected. Since $H$ and $F$ are isomorphic, $\dim(H)=\dim(F)$. Since $F<H$, it follows that $H=F$. If $H$ is not connected and $H_0$ is the identity component of $H$, then $H_0=F_0$ and $|H/H_0|=|F/F_0|<\infty$. It follows that $H=F$.
Is there a way to restrict Brownian Motion to a specific axis?
I have set up a simple particle system containing 1000 particles with Normal Velocity of 4.0, Z Velocity of -1 and gravity turned down to 0. I've set the Brownian Motion to 20. Is there a way to limit the particles' movement to a specific axis?
This solution assumes that the motion is introduced by the Force Fields on planes. The emitter only has some Normal value to kick start the motion. Gravity is turned off.
You could use a Plane object set to a Harmonic Field (shape: Surface) with strength 4, rest length 0.2. I placed mine above the emitter plane.
To get a random pattern you could vary the emission of particles by using a texture. To get randomness in the oscillation you could use 2 opposing planes with Lennard Jones effect type. Using large values, they will juggle particles between themselves. Any that are spilled come to rest.
The plane at the bottom is the emitter.
You can fake the axis constraints using Animation Nodes addon (https://github.com/JacquesLucke/animation_nodes/releases).
The principle is to create dupli objects depending on the amount of alive particles. Then place these dupli with random values and make them follow the particles, but only on the wanted axis or direction.
The node setting:
Main part
1- Obtain the particle systems from the emitter object
2- Get the wanted particle system
3- Get the particles from it
4- Filter for only alive particles
5- Make dupli objects with an objects instancer node
6- Call a subprogram to set the initial position of the dupli objects
7- Call another one to make the objects follow the particles
Coordinate assignment
1- Input node which give each of the dupli objects
2- Generate random numbers for X Y and Z
3- Combine X, Y, Z as a vector
4- Transform the dupli object location with the previous random values
5- Output the objects
Follow particles
1- Gets each particle and dupli objects as input
2- Retrieve location information from the particle
3- Gets the dupli object that corresponds to the particle
4- Extract the wanted axis from the particle. The one which is to follow
5 and 6- Extract the object's coordinates which are to be constant
7- Combine 4 and 6 in a new vector
8- Set this vector as location for the dupli object
CUBLAS dgemm performance query
These are my results of running cublas DGEMM on 4 GPUs using 2 streams for each GPU (Tesla M2050):
I have tested my results and they are alright; I am concerned about the high Gflops value that I am getting, compared with the versions that uses the default stream. I am calculating the Gflops using the formula:
Gflops = {2.0*10^-9*(N^3+N^2)}/elapsed_time_in_s
For the version that uses multiple streams, do I need to modify this formula in any way?
The HtoD-ker-DtoH is the time taken for host-to-device data transfer, kernel execution and device-to-host data transfer, in seconds (this is the denominator of the formula above).
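For reference, the formula above as a small helper (the matrix size and timing below are made-up numbers, purely for illustration):

```python
def dgemm_gflops(n, elapsed_s):
    """GFLOP/s for an n x n DGEMM using the question's formula:
    2*(N^3 + N^2) flops, scaled by 1e-9."""
    return 2.0e-9 * (n**3 + n**2) / elapsed_s


# e.g. a 4096 x 4096 DGEMM finishing in 0.5 s (hypothetical timing):
print(round(dgemm_gflops(4096, 0.5), 1))  # → 274.9
```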
Crosspost to Nvidia forums - http://forums.nvidia.com/index.php?showtopic=219910&st=0#entry1350908
EDIT: Following the comment of @talonmies, I added a cudaStreamSynchronize before calculating the time, and the results are as follows:
Thanks,
Sayan
What do you mean when you say "running on 4 GPUs" and what does that mean for the DGEMM operation. Are you splitting the DGEMM up over 4 devices or something else?
I am splitting data in 4 parts for each GPU and then running cublasdgemm on the chunks (on each GPU)...
A single C2050 gives about 550 GFLOP/s peak, or about 2200 GFLOP/s for 4 (those are double precision peaks, and DGEMM runs considerably lower than peak), so I would guess that your timing is wrong in the streams case (probably something that was synchronous in the default stream case is now asynchronous). The FLOP/s calculation should not change no matter how you do the computations.
Thank you, I have added a cudaStreamSynchronize before I calculate time and I get reasonable results (added in EDIT).
A single C2050 gives about 550 GFLOP/s peak, or about 2200 GFLOP/s for 4 (those are double precision peaks, and DGEMM runs considerably lower than peak), so I would guess that your timing is wrong in the streams case (probably something that was synchronous in the default stream case is now asynchronous). The FLOP/s calculation should not change no matter how you do the computations.
I would review your code to ensure that whatever timing mechanism you use is synchronized to all the streams you launch, either via the cudaStreamWaitEvent mechanism across all streams, or cudaStreamSynchronize per stream. It is likely that the timing is falling out of the code you are trying to time before the GPU has finished the CUBLAS operations.
Added answer to get this off the unanswered questions list. Could someone either upvote this or accept the answer.
Problem saving many instances of manyToMany relationship Typeorm
Description
Thank you very much in advance.
When using @JoinTable and specifying the name of the table and columns, the save method for multiple instances does not work: only the first one is saved.
I have a many-to-many relationship (N:N), where several users can have several types. I have the following tables:
Table users:

| Column | Type |
|---|---|
| id | uuidv4 |
| name | string |
| email | string |

Table users_types:

| Column | Type |
|---|---|
| id | uuidv4 |
| name | string |
| description | string |

Table users_users_types:

| Column | Type |
|---|---|
| id | uuidv4 |
| user_id | uuid |
| user_type_id | uuid |
Models:
User:
import {
  Column,
  CreateDateColumn,
  DeleteDateColumn,
  Entity,
  JoinTable,
  ManyToMany,
  PrimaryColumn,
  UpdateDateColumn,
} from "typeorm";

import { TypeUser } from "@modules/accounts/infra/typeorm/entities/TypeUser";

@Entity("users")
class User {
  @PrimaryColumn()
  id?: string;

  @Column()
  name: string;

  @Column({ unique: true })
  email: string;

  @ManyToMany((type) => TypeUser, (type_user) => type_user.users, {
    cascade: true,
  })
  @JoinTable({
    name: "users_types_users",
    joinColumns: [{ name: "user_id", referencedColumnName: "id" }],
    inverseJoinColumns: [{ name: "user_type_id", referencedColumnName: "id" }],
  })
  types?: TypeUser[];

  @CreateDateColumn()
  created_at?: Date;

  @UpdateDateColumn()
  updated_at?: Date;

  @DeleteDateColumn()
  deleted_at?: Date;
}

export { User };
TypeUser:
import {
  Column,
  CreateDateColumn,
  DeleteDateColumn,
  Entity,
  JoinTable,
  ManyToMany,
  PrimaryColumn,
  UpdateDateColumn,
} from "typeorm";

import { User } from "./User";

@Entity("types_users")
class TypeUser {
  @PrimaryColumn()
  id?: string;

  @Column()
  name: string;

  @Column()
  active: boolean;

  @ManyToMany((type) => User, (user) => user.types)
  users?: User[];

  @CreateDateColumn()
  created_at?: Date;

  @UpdateDateColumn()
  updated_at?: Date;

  @DeleteDateColumn()
  deleted_at?: Date;
}

export { TypeUser };
I'm assembling the seeds to test the insertion with the following code:
import { getConnection, MigrationInterface, QueryRunner } from "typeorm";

import { TypeUser } from "@modules/accounts/infra/typeorm/entities/TypeUser";
import { User } from "@modules/accounts/infra/typeorm/entities/User";
import { UsersTypesFactory } from "@shared/infra/typeorm/factories";

export class CreateUsersTypes1620665114995 implements MigrationInterface {
  public async up(): Promise<void> {
    const users = (await getConnection("seed")
      .getRepository("users")
      .find()) as User[];

    const usersTypesFactory = new UsersTypesFactory();
    const types = usersTypesFactory.generate();

    await getConnection("seed").getRepository("types_users").save(types);

    const types_list = (await getConnection("seed")
      .getRepository("types_users")
      .find()) as TypeUser[];

    const types_users_list = Array.from({
      length: types_list.length,
    }).map((_, index) => types_list[index]) as TypeUser[];

    users[0].types = types_users_list;
    const relationshipUsersTypes = users[0];

    await getConnection("seed")
      .getRepository(User)
      .save(relationshipUsersTypes);
  }

  public async down(): Promise<void> {
    await getConnection("seed").getRepository("users_types_users").delete({});
    await getConnection("seed").getRepository("types_users").delete({});
  }
}
When executing the code, the logs seem to be right:
query: START TRANSACTION
query: INSERT INTO "users_types_users"("user_id", "user_type_id") VALUES ($1, $2), ($3, $4), ($5, $6) -- PARAMETERS: ["8b90cacb-52bc-4635-bd2c-bc87d59b0d4d","9144899a-6380-4be6-8251-947b5bdccda9","8b90cacb-52bc-4635-bd2c-bc87d59b0d4d","7711c9a8-10c4-407d-a868-82234e85c614","8b90cacb-52bc-4635-bd2c-bc87d59b0d4d","581417e3-0790-4ea7-9bff-ca0f51bb16a4"]
query: COMMIT
query: INSERT INTO "migrations"("timestamp", "name") VALUES ($1, $2) -- PARAMETERS: [1620665114995,"CreateUsersTypes1620665114995"]
logger this instance save
User {
id: '8b90cacb-52bc-4635-bd2c-bc87d59b0d4d',
name: 'Brice',
last_name: 'Will',
cpf: 'xfpfnxowprx',
rg: '91r7aujxsl',
email<EMAIL_ADDRESS> password_hash: 'fe6eYzQXKp3kHyt',
birth_date: 2020-07-17T04:46:14.816Z,
created_at: 2021-05-12T05:43:22.064Z,
updated_at: null,
deleted_at: null,
types: [
TypeUser {
id: '9144899a-6380-4be6-8251-947b5bdccda9',
name: 'provider',
active: true,
created_at: 2021-05-12T05:43:22.291Z,
updated_at: null,
deleted_at: null
},
TypeUser {
id: '7711c9a8-10c4-407d-a868-82234e85c614',
name: 'client',
active: true,
created_at: 2021-05-12T05:43:22.291Z,
updated_at: null,
deleted_at: null
},
TypeUser {
id: '581417e3-0790-4ea7-9bff-ca0f51bb16a4',
name: 'admin',
active: true,
created_at: 2021-05-12T05:43:22.291Z,
updated_at: null,
deleted_at: null
}
]
}
but in the database only the content of the first one was saved
(screenshots of the users_types_users, types_users, and users table contents)
Expected Behavior
the expected result is that all available types are registered
Actual Behavior
only the first element of the type class instance is registered
Steps to Reproduce
create a database with tables
insert data into the tables
My Environment

| Dependency | Version |
|---|---|
| Operating System | |
| Node.js version | v14.16.1 |
| Typescript version | v4.2.3 |
| TypeORM version | v0.2.32 |
[x] postgres
I filed this as a bug, but I also wanted to ask: has someone already run into the same problem or managed to solve it?
2-column dropdownlist with one column hidden
I'm hoping this is a quick one. Sorry, still an ASP/C# n00b and I can't seem to find an example.
What I want to do is have a dropdownlist on an ASP page. I want the list to display 2 values; Benefit Type and Priority. However, the dropdownlist is used as a filter for the data to display, so my field names in the table are BENTYP and PRIO. So, the user will see "Benefit Type", but the code-behind will be able to read "BENTYP". It's sort of like a 2-column combo box with one column hidden.
Make sense? I know this is a snap in Access, I can't imagine it's too hard in ASP but I just don't have the experience yet. Also, if you would be so kind, can you tell me how the code-behind would read the text in the "hidden" column?
EDIT: Just to be clear, the dropdownlist would look something like this:
Column1 (visible) Column2 (invisible)
Benefit Type ---- BENTYP
Priority ----------- PRIO
<asp:DropDownList id="List"
AutoPostBack="True"
OnSelectedIndexChanged="Selection_Change"
runat="server">
<asp:ListItem Value="BENTYP"> Benefit Type </asp:ListItem>
<asp:ListItem Value="PRIO"> Priority </asp:ListItem>
</asp:DropDownList>
From code behind you just have to access the selected item's text or value:
string item1 = List.SelectedItem.Text;
string item2 = List.SelectedValue;
Hope it helps.
Found in MSDN
What you've asked is exactly the functionality of the dropdown, it has a "Value" and a "Text" for each item it contains, just set the value as your second column and your text as the first.
From the code behind you can add it like so:
ListItem item = new ListItem("text (first column)", "value (second column)");
item.Selected = true; // whatever you want here
yourDropdownList.Items.Add(item);
To get the selected item in the code behind (the ListItem object) use:
var item = yourDropdownList.SelectedItem;
var text = item.Text;
var val = item.Value;
Examples of homogeneous ideals with $J \cap B_+ \subset \sqrt{I}$ but $J \not\subset \sqrt{I}$.
This is a question about a Lemma in Liu's Algebraic Geometry and Arithmetic Curves.
The result states the following (this is Lemma 2.3.35, paraphrased to focus only on the part I am asking about):
Let $I, J$ be homogeneous ideals of a graded ring $B$. Then $V_+(I) \subset V_+(J)$ if and only if $J \cap B_+ \subset \sqrt{I}$.
I understand the proof, but I would like to have a deeper understanding of the appearance and implications of $B_+$ in the $\operatorname{Proj}$ construction, so I am wondering if someone can give an example for the necessity of taking $J \cap B_+$, rather than just $J$ (as happens in the affine case).
In other words, can someone provide an example of two homogeneous ideals $I, J$ of a graded ring $B$, with $J \cap B_+ \subset \sqrt{I}$, but $J \not\subset \sqrt{I}$?
Even better if you can give some intuition as to why we should "expect" that $B_+$ would show up in this Lemma.
This can’t happen if $J \subseteq B_+$, in particular if $J$ is generated by elements of positive degree. So for examples to occur, $B_0$ must not be a field. That explains why this doesn’t happen with the usual basic examples.
If $f \in B_0$ is a non-unit non-nilpotent element, then take $J = (f)$ and $I = J \cap B_+$. Then $f \notin \sqrt I$.
I guess you also need $f$ to be non-nilpotent.
Oops, you’re right.
Run a powershell script with different credentials
I'm trying to run a PowerShell script to search a network drive for a certain file. In my testing, I've found that my script works perfectly fine; however, the network drive I need to search requires my Domain Admin logon.
I have
Start-Process powershell.exe -Credential "domain\adminusername" -NoNewWindow -ArgumentList "Start-Process powershell.exe -Verb runAs"
as the very first line of my script, but whenever I run the script I get this error:
Start-Process : This command cannot be run due to the error: The directory
name is invalid.
At Path\to\script.ps1:1 char:1
+ Start-Process powershell.exe -Credential "domain\adminusername" -NoN ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Start-Process],
InvalidOperationException
+ FullyQualifiedErrorId :
InvalidOperationException,Microsoft.PowerShell.Commands.StartProcessCommand
What directory name is it talking about? If I move the script to the actual network drive, I still get the same error. How do you run a script as a different user?
You could use the net use command to gain access or the new-psdrive command instead. Another option would be to start-process a cmd prompt and use runas within it. Also, you may need to include the full path of powershell.exe or add it to the path variable. %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe
I have used New-PSDrive to map network drives with my admin credentials many times. Or you could always just do a Run As Different User on PowerShell when you first launch it.
PHP prepared statement: Why is this throwing a fatal error?
Have no idea whats going wrong here. Keeps throwing...
Fatal error: Call to a member function prepare() on a non-object
...every time it gets to the $select = $dbcon->prepare('SELECT * FROM tester1'); part. Can somebody shed some light as to what I'm doing wrong?
function selectall() //returns array $client[][]. first brace indicates the row. second indicates the field
{
global $dbcon;
$select = $dbcon->prepare('SELECT * FROM tester1');
if ($select->execute(array()))
{
$query = $select->fetchall();
$i = 0;
foreach ($query as $row)
{
$client[$i][0] = $row['id'];
$client[$i][1] = $row['name'];
$client[$i][2] = $row['age'];
$i++;
}
}
return $client;
}
$client = selectall();
echo $client[0][0];
The obvious answer is that $dbcon hasn't been initialized at all or is initialized after this function is called.
What code is initializing $dbcon? Where and when is it run? You also realize that you will need to initialize it on every invocation of a script that accesses the database? The last is just to make sure that you understand what the global scope in PHP is. It means scoped to that single request. The term global is a little misleading.
If you read the errors, they tend to tell you EXACTLY what the problem is.
Um. Read the error. Didn't make sense. I put it in the original post
Make sure you define $dbcon properly. If you are using mysqli, see how the connection is set up in the docs. You can also pass the connection object to the function:
function selectall($dbcon){
....
}
Where is the /usr folder in mintty (Git Bash for Windows)?
New to git on Windows. After installing the latest version of git (from the git for Windows website), you can type cd /usr/bin in standard Linux usage. But where exactly is this on my Windows file system? A search from the Windows command-line turned up two plausible locations. Which is the correct one and why are there two similar locations?
C:\Program Files\Git\usr\bin
C:\Program Files\Git\mingw64\bin
I have:
vonc@voncavn7 MINGW64 /usr
$ ls
bin etc lib libexec share ssl
If I check usr:
D:\prgs\gits\current\usr>ls
bin/ etc/ lib/ libexec/ share/ ssl/
And mingw64
D:\prgs\gits\current\mingw64>ls
bin/ doc/ etc/ lib/ libexec/ share/ ssl/
So it looks like it is C:\Program Files\Git\usr\bin.
As explained in "Why is “MINGW64” appearing on my Git bash?", The MINGW64 is the value from the MSYSTEM environment variable.
It is part of MSYS2, which consists of three relatively separate subsystems: msys2, mingw32 and mingw64.
From "Zsh on Windows via MSYS2" from Borek Bernard:
msys2 (sometimes called just msys) is an emulation layer — fully POSIX compatible but slow.
mingw subsystems provide native Windows binaries, with Linux calls rewritten at compile time to their Windows equivalents.
For example, Git for Windows is a mingw64 binary (unlike msys Git which utilizes the compatibility layer and is therefore slow).
See also "How are msys, msys2, and msysgit related to each other?".
Yes, that looks right, but curious why the mingw64 folder. What is it for?
@AlainD I have edited the answer to address your comment.
Awesome, that answers the question
PHP based Telegram bot InlineKeyboardMarkup not working
Trying to create inline buttons for my Telegram bot. this is the request I'm sending and for some reason I don't get the buttons but only the text "Inline Keyboard"
Here is my code
$args = [
'chat_id' => $this->chat_id,
'parse_mode' => 'HTML',
'text' => 'Inline Keyboard',
'reply_markup' => [
'inline_keyboard' => [
[
[
'text' => 'Try me',
]
]
]
]
];
$send = $this->api_url . "/sendmessage?" . http_build_query($args);
file_get_contents($send);
Found the problem: each button must use exactly one of the optional fields; the 'text' field alone is not enough.
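For illustration, here is a sketch of the same request in Python (Python rather than the thread's PHP, just to show the structure; the chat id is a placeholder). Per the Bot API, `reply_markup` is a JSON-serialized object, and each button carries exactly one optional field (`callback_data` here):

```python
import json

# Placeholder values, not real credentials
args = {
    'chat_id': 123456,
    'parse_mode': 'HTML',
    'text': 'Inline Keyboard',
    # reply_markup is a JSON-serialized object in the Bot API,
    # so it is dumped to a string before being query-encoded
    'reply_markup': json.dumps({
        'inline_keyboard': [[
            # exactly one optional field (callback_data) alongside text
            {'text': 'Try me', 'callback_data': 'try_me'},
        ]],
    }),
}
print(args['reply_markup'])
```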
If this fixed your issue, you'd better accept your own answer as the correct answer.
What exactly is the code? What other fields did you add? I have the same problem with sendMessage.
What is the frequency for each level of categorical variables to obtain the most reliable results?
I'm working with a real dataset. For some variables, the frequencies across the levels are far from uniform. For example, in the Occup variable, the frequency for levels 2, 3, and 6 is very small. Also, for the PregHyp variable, the frequency at level 1 is very low.
[1] "Occup"
levels 1 2 3 4 5 6
Frequency 57 3 5 696 41 1
[1] "PregHyp"
levels 1 2
Frequency 9 631
What frequency is needed for each level of a categorical variable to obtain the most reliable results? I need a reference for this problem, such as a book or article.
If you want to use a chi-square test for testing independence of the data, there are several limitations you should consider. One of them, which you asked about, concerns the frequencies: in many contexts you need expected frequencies above 5 in at least 80% of the cells.
Here is a sample article
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3900058/
TextView is cutting off a lot of information when I try to pull text from a website
In my app I need to pull text from this website.
http://www.cellphonesolutions.net/help-en-lite
Notice how it's a very large file. When I try to pull the text, it doesn't get the first half of the text. Is there a limit to the number of characters a TextView can hold? If so, how do I get around this problem? Here is the code I use to gather the text.
//in the oncreate method
TextView faq = (TextView) findViewById(R.id.tvfaq);
faq.setText(Html.fromHtml(
        cleanhtml(getText("http://www.cellphonesolutions.net/help-en-lite"))));
This clears up the comments that Html.fromHtml doesn't filter out
public String cleanhtml(String original) {
String finalText = original.replaceAll("<![^>]*>", "");
return finalText;
}
This is how I get the text from the website
public String getText(String uri) {
HttpClient client1 = new DefaultHttpClient();
HttpGet request = new HttpGet(uri);
ResponseHandler<String> responseHandler = new BasicResponseHandler();
try {
String response_str = client1.execute(request, responseHandler);
return response_str;
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
return "";
}
}
Here is my xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout android:background="@drawable/splash_fade"
xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical" android:layout_width="fill_parent"
android:layout_height="fill_parent">
<ScrollView android:layout_width="fill_parent" android:layout_height="wrap_content">
<TextView android:layout_gravity="center" android:text="TextView" android:gravity="center"
android:id="@+id/tvfaq" android:layout_width="wrap_content" android:textColor="#000000"
android:layout_height="wrap_content"></TextView>
</ScrollView>
</LinearLayout>
Try to remove the ScrollView
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout android:background="@drawable/splash_fade"
xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical" android:layout_width="fill_parent"
android:layout_height="fill_parent">
<TextView android:layout_gravity="center" android:text="TextView" android:gravity="center"
android:id="@+id/tvfaq" android:layout_width="wrap_content" android:textColor="#000000"
android:layout_height="wrap_content"></TextView>
</LinearLayout>
TextView has a built-in scroller that will do what you expect from a ScrollView.
UPDATE
you need to apply that to your textView faq.setMovementMethod(new ScrollingMovementMethod());
If that doesn't work try to debug your application and see if response_str has all the characters from the website
I removed the scroll view and the lost text shows up! However now I can't scroll through the text!
Is your TextView wrapped in a ScrollView? Inherently, TextView does not have scrolling capability, so overflowing text will get cut off from the display.
To prevent this, wrap a ScrollView around your TextView, something like this in XML:
<ScrollView android:id="@+id/ScrollView01"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:fillViewport="true">
<TextView
android:layout_height="wrap_content"
android:id="@+id/TextView01"
android:layout_width="fill_parent"
/>
</ScrollView>
yes it is, I will edit my question and show my xml, the problem isn't that it can't scroll but that not all the text is appearing which is quite strange
I'm also experiencing this bug. I've found that arbitrary paddingBottom values fix it -- but I hate this solution.
I'm convinced it's a bug in android.
How to save a dok matrix
How can I save a dok matrix and load it again later?
import scipy.sparse as sp
mat = sp.dok_matrix((df.shape[0], len(df['itemid'].unique())), dtype=np.float32)
for buyerid, itemid in zip(df['buyerid'], df['itemid']):
mat[buyerid, itemid] = 1.0
# my try
sp.save_npz('/content/gdrive/My Drive/train_matrix.npz', mat)
.
.
.
# Loading the dok matrix
train_mat = sp.load_npz('spotify_train_matrix.npz')
The error
/usr/local/lib/python3.6/dist-packages/scipy/sparse/_matrix_io.py in save_npz(file, matrix, compressed)
69 arrays_dict.update(row=matrix.row, col=matrix.col)
70 else:
---> 71 raise NotImplementedError('Save is not implemented for sparse matrix of format {}.'.format(matrix.format))
72 arrays_dict.update(
73 format=matrix.format.encode('ascii'),
NotImplementedError: Save is not implemented for sparse matrix of format dok.
Could someone please help me to save and load the dok matrix which I'm creating?
As you've noticed, you can't. But the notes on dok_matrix say:
Can be efficiently converted to a coo_matrix once constructed.
and coo_matrix does support saving with scipy.sparse.save_npz. So I would suggest converting with .tocoo() and then saving. You can convert back after loading (if you wish).
thank you very much for your answer! Maybe you can help me with how I could convert it?
@MichaelSokoij I wrote that in the answer already. Call .tocoo() on the dok_matrix object.
Sorry, so with sp.save_npz('train_matrix.npz', mat.tocoo()), right? And how do I convert it back with sp.load_npz('train_matrix.npz')?
@MichaelSokoij Simply sp.load_npz('train_matrix.npz').todok().
Sometimes it's too easy, thanks for your help and time.
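Putting the pieces from this thread together, a minimal round-trip might look like this (the file name is just an example):

```python
import numpy as np
import scipy.sparse as sp

# Build a small dok matrix, as in the question
mat = sp.dok_matrix((4, 4), dtype=np.float32)
mat[0, 1] = 1.0
mat[2, 3] = 1.0

# dok -> coo for saving, then coo -> dok again after loading
sp.save_npz('train_matrix.npz', mat.tocoo())
train_mat = sp.load_npz('train_matrix.npz').todok()

print(np.array_equal(train_mat.toarray(), mat.toarray()))
```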
How to set right query using Retrofit
I'm trying to set GET query using Retrofit, but nothing works
@GET("/search/users?q={username}+type:user&page={page}&per_page=100")
Call<List<User>> getUsers(@Query("username") String username, @Query("page") int page);
example query: /search/users?q=Mike+type:user&page=1&per_page=100
Please, help
What exactly doesnt work, and which retrofit version?
AFAIK, based on my experience, Retrofit is supposed to build the query parameters for you. So try this instead, @GET("/search/users") Call<List<User>> getUsers(@Query("q") String username, @Query("page") String page, @Query("per_page") String per_page); I am not sure how to add +type:user yet so unless someone else can chime in and answer that you could make it part of the username string.
It seems that reason is in convertor. I'm getting one user and list of users by searching on GitHub. I have bean class for "User", and retrofit perfectly works for getting one user. And also I need a list of users by searching.But github api returns list of user in array "items". So, how can I set retrofit for parsing list of users from this array "items"?
Can you post the resulting JSON you're trying to parse?
Why do we add the cost function as a phase in QAOA?
For QAOA, my understanding is that after we derive our cost Hamiltonian $H_c|x\rangle = C(x)|x\rangle$, where $C(x)$ is the cost function on input $x$, we exponentiate it: $e^{-iH_c\theta}$, so we can simulate this Hamiltonian.
While doing so, we have effectively encoded the value of the cost function as a phase.
Example
If $C(x)$ is defined as $C(0)=0$, $C(1)=1$ then:
$H=0.5(I-Z)=\begin{bmatrix}
0 & 0\\ 0 & 1
\end{bmatrix}$
We would then get $U_c=e^{-iH_c\theta}=\begin{bmatrix}
e^{-i(0)\theta} & 0\\ 0 & e^{-i\theta}
\end{bmatrix}$
Now we can see that $U_c|0\rangle = e^{-i(0)\theta}|0\rangle$ and $U_c|1\rangle = e^{-i\theta}|1\rangle$.
It looks like all we were trying to do was to add the cost function as a phase for our input. (And also varying that phase by some coefficient $\theta$)
Question
Why are we trying to do this? How does this help us in minimizing the hamiltonian (the cost function)?
Do you understand the relationship between the adiabatic theorem and QAOA? Because that’s basically all the justification for QAOA that exists.
@JahanClaes I read about the topic but I wasn't able to grasp the relation. Would you recommend any resources for me?
The original Farhi et al. paper explains their reasoning pretty well
It's not really the phase of the unitary that matters; the Hamiltonian is the more important part.
However, the phase that the Hamiltonian applies technically isn't useful in and of itself. Rather, it's the connection to Adiabatic Quantum Computing that lets us choose an ansatz for VQE (of which QAOA is just a special case), which in turn gives some guarantees, such as monotonically increasing accuracy from the classical optimizer as $p$, the number of iterations of the alternating unitaries, increases.
Prove that A is a subset of B. {...for some odd integer}
A={ $a$ ∈ Z | $a= b^2 $ for some odd integer b}
B={ $a$ ∈ Z| $a= 8k +1 $ for some k ∈ Z}
I am totally lost on where to start. Would I make b= 2m+1?
Making $b=2m+1$ is a good start.
$b = 2k + 1$
$b^2 = (2k + 1)^2 = 4k^2 + 4k + 1 = 4(k^2 + k) + 1 = 4(k+1)k + 1$
Either $k +1 $ or $k$ is divisible by $2$, so $4(k+1)k = 8n$ for some $n$
so, $b^2 = 8n + 1$ if $b$ is odd. The converse is not necessarily true; e.g., $17 = 2\cdot 8 + 1$ is not a perfect square.
Consider the only cases in which $b^2$ is odd:
$b\equiv1\pmod8 \implies b^2\equiv1^2\equiv 1\equiv1\pmod8$
$b\equiv3\pmod8 \implies b^2\equiv3^2\equiv 9\equiv1\pmod8$
$b\equiv5\pmod8 \implies b^2\equiv5^2\equiv25\equiv1\pmod8$
$b\equiv7\pmod8 \implies b^2\equiv7^2\equiv49\equiv1\pmod8$
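Not part of either proof, but a quick numeric sanity check of the claim in Python:

```python
# Every odd b satisfies b^2 ≡ 1 (mod 8), checked over a range
assert all((b * b) % 8 == 1 for b in range(1, 10_001, 2))

# The converse fails: 17 = 2*8 + 1 is not a perfect square
assert 17 % 8 == 1 and int(17 ** 0.5) ** 2 != 17

print("checks pass")
```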
Jenkins: Limit Credentials to 'Manage Jenkins > Configure System'
We would like to use the GitHub Pull Request Builder plugin in Jenkins, however in order to use this plugin you are required to enter credentials in the 'Manage Jenkins > Configure System' section that gives access to a given GitHub Enterprise server.
Our issue is that credentials giving access to all of GitHub are too strong to be stored in the credentials manager. I know that you can limit the scope of credentials by using the Folders plugin; however, this just limits access to those credentials to jobs in certain folders. Is there a way to restrict credentials so they can only be used in the 'Manage Jenkins > Configure System' section?
No, there isn't. And @bitoiu gave you the next best thing. A personal access token with reduced scopes is what you need. If you only need to clone and build, I don't think you'll need the admin permissions for that. One thing you could look into is whether it's possible to develop your own plugin to implement a credential type that can only be used in the global configuration.
Our issue is that credentials giving access to all of github are too strong to be stored in the credentials manager.
This is why you can also use a Personal Access Token. Check the documentation for the plugin at: https://go.cloudbees.com/docs/plugins/pull-request-builder-for-github/. This is not the official plugin page but having read both, this one keeps to the best practices in terms of credentials. The important bits are:
Go to your GitHub settings page.
In the left sidebar, click Personal Access Token.
Click Generate new token.
Give your token a descriptive name
Select the scopes to grant to this token. Pull request tester plugin require permission to administer repository hooks and access repositories: repo, public_repo, admin:repo_hook, repo:status.
Then you can follow the rest of the guide to enter the token in the plugin configuration pages.
Hope this helps.
Hi and thank you for your response. Unfortunately my problem isn't with keeping a password in the password manager. Even if the credential is in the form of a personal access token, it will be available to anyone to use. Because the pull request builder requires access to all repositories on our github server, we need to be able to limit who can use that credential to prevent everyone having access to all repositories.
Our issue is that credentials giving access to all of github are too strong to be stored in the credentials manager.
There's no option then, because Jenkins does not support Apps, or the plugin doesn't work with GitHub Apps. My answer was directly pointing at your comment above. A username/password is completely different from a PAT in what rights it gives. First of all, the scope is reduced, and there are destructive actions that you can't do with a PAT but only through the web UI, so that's your best choice.
This is in fact the next best thing to what is being asked —short of developing your own type of credential plugin and somehow restricting where it can be used. I don't get why it was voted down. An answer should not be voted down just because it's not the answer you want, especially when the answer you want is not feasible. I've voted it up to compensate.
How to ignore nulls in PostgreSQL window functions? or return the next non-null value in a column
Lets say I have the following table:
| User_id | COL1 | COL2 |
+---------+----------+------+
| 1 | | 1 |
| 1 | | 2 |
| 1 | 2421 | |
| 1 | | 1 |
| 1 | 3542 | |
| 2 | | 1 |
I need another column indicating the next non-null COL1 value for each row, so the result would look like the below:
| User_id | COL1 | COL2 | COL3 |
+---------+----------+------+------
| 1 | | 1 | 2421 |
| 1 | | 2 | 2421 |
| 1 | 2421 | | |
| 1 | | 1 | 3542 |
| 1 | 3542 | | |
| 2 | | 1 | |
SELECT
first_value(COL1 ignore nulls) over (partition by user_id order by COL2 rows unbounded following)
FROM table;
would work but I'm using PostgreSQL which doesn't support the ignore nulls clause.
Any suggested workarounds?
You need a column to specify the ordering. SQL tables are inherently unordered.
You can still do it with a window function if you add a CASE WHEN criterion in the ORDER BY, like this:
select
first_value(COL1)
over (
partition by user_id
order by case when COL1 is not null then 0 else 1 end ASC, COL2
rows unbounded following
)
from table
This will use non null values first.
However, performance will probably not be great compared to IGNORE NULLS, because the database will have to sort on the additional criterion.
But that's not really the same thing as the IGNORE NULLS clause.
A clause that PostgreSQL does not support at the moment.
This would not work if you are looking for the non-null first value after the current row. It will take the first non-null value for this user_id no matter of the position of the current row.
I also had the same problem. The other solutions may work, but I have to build multiple windows for each row I need.
You can try these snippets: https://wiki.postgresql.org/wiki/First/last_(aggregate)
If you create the aggregates you can use them:
SELECT
first(COL1) over (partition by user_id order by COL2 rows unbounded following)
FROM table;
There is always the tried and true approach of using a correlated subquery:
select t.*,
(select t2.col1
from t t2
where t2.id >= t.id and t2.col1 is not null
order by t2.id desc
fetch first 1 row only
) as nextcol1
from t;
The t.id in the t2.id >= t.id filter isn't being found when I run this
@user3558238 . . . What do you mean it isn't being found? t is the alias of the table in the outer query; t2 is the alias in the inner query.
it's saying the t.user_id does not exist, perhaps subqueries can't refer to outer query parameters in PostgreSQL?
@user3558238 . . . Postgres definitely supports correlated subqueries. You should edit your question and include your attempt.
Aggregate functions can be used as window functions
There's an aggregate filter clause.
Window spec can tell it to order most recent first. Get them into an array:
select (array_agg(COL1) filter (where COL1 is not null) over w1)[1]
from cte
window w1 as (order by d desc
              rows between current row and unbounded following);
You pop the array[1].
Nice thing is, it's not just skip-nulls: anything you can express in the WHERE works, and it combines lag, lead, first_value, last_value and nth_value by being able to target any position (which can also be an expression, switching it dynamically). Looks and performance aside, one downside is that if you want a negative subscript (supported for JSON arrays but not for regular ones), you need to flip the window, aggregate to a JSON array, or bounce off the upper bound: (array_agg() over w1)[count(*) over w1 - 3], or repeat array_agg() and take its array_upper().
Hope this helps,
SELECT * FROM TABLE ORDER BY COALESCE(colA, colB);
which orders by colA and if colA has NULL value it orders by colB.
You can use COALESCE() function. For your query:
SELECT
first_value(COALESCE(COL1)) over (partition by user_id order by COL2 rows unbounded following)
FROM table;
but I don't understand the reason to sort by COL2, because these rows have null values for COL2:
| User_id | COL1 | COL2 |
+---------+----------+------+
| 1 | | 1 |
| 1 | | 2 |
| 1 | 2421 | | <<--- null?
| 1 | | 1 |
| 1 | 3542 | | <<--- null?
| 2 | | 1 |
insert text between two logos on title page (horizontally)
I need to insert two logos of my university on either side of its name, like this
But the best I made is this
The logos are slightly different, but that doesn't matter. I tried putting the images into the figure environment and tried the wrapfig package, but neither worked. So is there any way to place the images as they should be?
{\small \sc \bf
\includegraphics[width=1.3cm, height=1.3cm]{msu-logo.png}
MOSCOW GOVERNMENT UNIVERSITY
\includegraphics[width=1.3cm, height=1.3cm]{cmc-logo2.png}\\
named after M.V.~Lomonosov\\
The faculty of Computational Mathematics and Cybernetics
\par\noindent\rule{\textwidth}{0.1pt}
}
please see if the answer meets the requirement -- the font size can be changed to fit the lines between the logos on both sides
@jsbibra, thanks, the only thing is that text should be centered, but I think I'll be able to do so. Thanks again!
\documentclass[]{article}
\usepackage{graphicx}
\usepackage{fancyhdr}
\usepackage{color}
\usepackage[dvipsnames]{xcolor}
\usepackage{tikz}
\long\def\mytitle{%
\begin{titlepage}
\begin{center}
\begin{minipage}{0.15\textwidth}%
\includegraphics[width=0.8\textwidth]{example-image-a}%
\end{minipage}\hspace{10pt}
\begin{minipage}{0.6\textwidth}%
MOSCOW GOVERNMENT UNIVERSITY\\
named after M.V.~Lomonosov\\
The faculty of Computational Mathematics and Cybernetics\\
\end{minipage}\hspace{10pt}
\begin{minipage}{0.15\textwidth}%
\includegraphics[width=0.8\textwidth]{example-image-a}%
\end{minipage}
\begin{tikzpicture}%
\draw[thick, brown] (0,0)--(0.99\textwidth,0);%
\end{tikzpicture}%
\end{center}
\end{titlepage}
}
\begin{document}
\mytitle
\end{document}
How to achieve dependent foreign keys listing in django-admin?
Suppose I have 3 models:- Address, Country, State
Address Model:
class AddressModel(BaseModel):
country = models.ForeignKey(CountryModel, null=True, blank=True, on_delete=models.PROTECT)
state = models.ForeignKey(StateModel, null=True, blank=True, on_delete=models.PROTECT)
city = models.CharField(max_length=200, null=True, blank=True)
pincode = models.CharField(max_length=6, null=True, blank=True)
address_line_1 = models.TextField(max_length=200, null=True, blank=True)
address_line_2 = models.TextField(max_length=200, null=True, blank=True)
Country Model:
class CountryModel(BaseModel):
name = models.CharField(max_length=100)
code = models.CharField(max_length=30)
and State Model:
class StateModel(BaseModel):
country = models.ForeignKey(CountryModel, on_delete=models.PROTECT)
name = models.CharField(max_length=100)
code = models.CharField(max_length=30)
While adding a new address in django admin, I want to show the list of only those states which belong to the selected country i.e I want to implement something like dependent foreign key list in django-admin.
I would like to achieve it without using jquery or ajax
How can I do that?
probably you can't do that without js/jquery/ajax
ok
how can i achieve this with ajax
There some blogs like this one which might help you: https://simpleisbetterthancomplex.com/tutorial/2018/01/29/how-to-implement-dependent-or-chained-dropdown-list-with-django.html
I want to do this in admin
Drawing in several windows with gl/glx
I am looking at the NeHe OpenGL tutorials (nehe.gamedev.net), which, as with almost every example, also come in Linux/GLX versions.
But how can I open several windows and draw into all of them?
Thanks!
Creating more than one window is easy, just repeat the procedure.
If you want to draw the same scene to different windows, you can draw the scene using multiple render targets. Google knows lots of tutorials for that.
If you want to draw different things into different windows, you can either use multiple OpenGL instances in separate threads/processes or use so-called swap-chains in Direct3D. I don't know exactly how to translate them into OpenGL. You can share a single OpenGL device between multiple rendering threads using makeCurrent(). Sharing common resources is not trivial though.
Hey,
When googling for render targets and opengl, I only find entries explaining how to render to a texture.
I want to open several windows and draw a different view of a scene into them. So they need to share textures.
I found a mini-tutorial here:
http://www.cs.uml.edu/~hmasterm/Charts/Managing_Multiple_Windows.ppt
Unable to convert moment.js formatted date to valid date
Unable to convert moment.js formatted date to valid date using
new Date('08-Mar-19 06:01 AM') // Gives invalid date in IE
Note: it doesn't work in IE. Only works in Chrome.
Actually, I got the format ('08-Mar-19 06:01 AM') using moment.js
const date = moment(value, 'YYYY-MM-DD hh:mm:ss.S'); // from "2019-03-08T06:01:52-05:00"
return date.isValid() ? date.format('DD-MMM-YY hh:mm A') : value;
I tried parsing it with moment.js; it still doesn't work.
new Date('08-Mar-19 06:01 AM') // Gives invalid date in IE
Expected:
new Date('08-Mar-19 06:01 AM');
Fri Mar 08 2019 06:01:00 GMT-0500 (Eastern Standard Time)
Actual:
new Date('08-Mar-19 06:01 AM'); // Invalid date
Have you tried specifying the format of the string you are parsing with moment explicitly? E.g., moment("08-Mar-19 06:01 AM").format("DD-MMM-YY hh:mm A")? https://stackoverflow.com/a/33732164 answer might be relevant.
Thank you so much @thmsdnnr.
It helped me with your solution
I slightly modified by following...
new Date(moment(value, 'D-MMM-YYYY').format())
Is the difference between two MSEs significant?
I developed several Elo rankings and used MSEs to compare them on their predictive capacity for the 2018 World Cup. I've been asked to use a statistical test to find out whether the difference between two of my models' MSEs (0.0588 and 0.0580) is significant.
Do you have any idea which test I should try?
I don't think it's very useful to do a hypothesis test to compare average prediction errors - that seems like a confusing mix of two different statistical philosophies. A more suitable approach would be to decide based on your subject knowledge how much of an improvement in prediction error would be meaningful, and proceed from there.
Given that this requires subject matter knowledge to evaluate, it's hard for us to tell you whether 0.0008 is a meaningful difference.
It makes little sense to compare predictive accuracies on a single outcome. Method A could be better or worse than method B for a single output just by chance.
If you have an entire dataset of multiple outcomes you have predicted using methods A and B, then you can more meaningfully test whether A or B is significantly better. The standard test for this is the diebold-mariano test. The tag wiki contains more information and pointers to literature.
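For illustration, here is a hand-rolled sketch of the Diebold-Mariano statistic in Python (squared-error loss, one-step-ahead forecasts, and no autocorrelation or small-sample corrections; the function name and data are my own, not from a library):

```python
import numpy as np
from scipy import stats

def diebold_mariano(err_a, err_b):
    """DM statistic for equal predictive accuracy under squared-error loss.

    Assumes one-step-ahead forecasts, so no autocorrelation correction
    of the loss-differential variance is applied.
    """
    d = np.asarray(err_a) ** 2 - np.asarray(err_b) ** 2   # loss differential
    n = len(d)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / n)            # asymptotically N(0, 1)
    p_value = 2 * stats.norm.sf(abs(dm))                  # two-sided p-value
    return dm, p_value

# Example: two sets of forecast errors over the same matches (made-up data)
rng = np.random.default_rng(0)
errors_a = rng.normal(0, 0.24, 64)
errors_b = rng.normal(0, 0.24, 64)
print(diebold_mariano(errors_a, errors_b))
```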
I do not know about Elo rankings, but statistically it should be similar to the following example.
Suppose $X_i, i=1,...,n$ follow normal distribution $N(\mu,\sigma^2)$, and the purpose is to estimate $\mu$. One estimate is sample mean, another is sample median.
Are the two estimators the same? Obviously, the answer is no.
Can we compare the two estimators based on a single pair of sample mean and median and conclude which one is better? The answer is that it is impossible.
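A quick simulation (my own illustration) shows both points: on any single dataset either estimator can happen to be closer to $\mu$, but averaged over many datasets the sample mean has the lower MSE:

```python
import numpy as np

rng = np.random.default_rng(42)
# Many independent datasets of size 25 from N(0, 1), so the true mu is 0
samples = rng.normal(loc=0.0, scale=1.0, size=(20_000, 25))

means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

mse_mean = np.mean(means ** 2)       # MSE of the sample mean for mu = 0
mse_median = np.mean(medians ** 2)   # MSE of the sample median for mu = 0

# On a single dataset the comparison can go either way:
single_win_rate = np.mean(np.abs(means) < np.abs(medians))
print(mse_mean, mse_median, single_win_rate)
```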
Reversing a number using recursion
I was tasked with reversing an integer recursively. I have an idea of how to formulate my base case, but I'm unsure of what to put outside of the if statement. The parts I was unsure about are commented with question marks. With the first part, I don't know what to put, and with the second part I'm unsure whether it is correct or not. Thank you for the help.
Note: I'd like to avoid using external functions such as imports and things like these if possible.
def reverseDisplay(number):
if number < 10:
return number
return # ??????????
def main():
number = int(input("Enter a number: "))
print(number,end="") #???????????
reverseDisplay(number)
main()
If I have the base 10 number '123456' then its reverse is '6' concatenated with the reverse of '12345'.
I'm not going to give you the answer, but I'll give some hints. It looks like you don't want to convert it to a string -- this makes it a more interesting problem, but will result in some funky behavior. For example, reverseDisplay(100) = 1.
However, if you don't yet have a good handle on recursion, I would strongly recommend that you convert the input to a string and try to recursively reverse that string. Once you understand how to do that, an arithmetic approach will be much more straightforward.
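To illustrate that suggestion, a minimal string-based sketch (my own example, not the asker's code):

```python
def reverse_display(s):
    # Base case: a string of length 0 or 1 is its own reverse
    if len(s) <= 1:
        return s
    # Last character first, then the reverse of everything before it
    return s[-1] + reverse_display(s[:-1])

print(reverse_display(str(12345)))  # -> 54321
```

Note that the string version keeps leading zeros in the result (e.g. `"100"` becomes `"001"`), which the arithmetic version cannot.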
Your base case is solid. A digit reversed is that same digit.
def reverseDisplay(n):
if n < 10:
return n
last_digit = # ??? 12345 -> 4
other_digits = # ??? You'll use last_digit for this. 12345 -> 1234
return last_digit * 10 ** ??? + reverseDisplay(???)
# ** is the exponent operator. If the last digit is 5, this is going to be 500...
# how many zeroes do we want? why?
If you don't want to use any string operations whatsoever, you might have to write your own function for getting the number of digits in an integer. Why? Where will you use it?
Imagine that you have a string 12345.
reverseDisplay(12345) is really
5 + reverseDisplay(1234) ->
4 + reverseDisplay(123) ->
3 + reverseDisplay(12) ->
2 + reverseDisplay(1) ->
1
So I should be separating the original input into new variables?
@user3495234 variables aren't the important concept here, but I think you have the right idea. We need the last part of the number to go first, and the rest to be fed back into the function. It doesn't matter whether we store them as variables in between, I just did that for readability.
Is there a way to do it while still retaining the print(,end="") function I had? I feel like there should be an easier way using that which would negate the use of exponents.
Honestly, it might be a terrible idea, but who knows, maybe it will help:
Convert it to string.
Reverse the string using the recursion. Basically take char from the back, append to the front.
Parse it again.
Not the best performing solution, but a solution...
Otherwise there is gotta be some formula. For instance here:
https://math.stackexchange.com/questions/323268/formula-to-reverse-digits
I was aware of that possibility but I'd like to try avoiding that if possible. Otherwise I'll never really learn recursion.
@user3495234 This is still "recursion" and is a perfectly valid way to do it. If you would rather not ever convert it to a string, think for a minute about what mathematical operations you would need to perform to do the equivalent thing.
@user3495234 I have an answer for you that includes just math. But it's too late to type it now. Will do tomorrow.
Suppose you have a list of digits, that you want to turn into an int:
[1,2,3,4] -> 1234
You do this by 1*10^3 + 2*10^2 + 3*10^1 + 4*10^0. The powers of 10 are exactly reversed in the case that you want to reverse the number. This is done as follows:
import math

def reverse(n):
    if n < 10:
        return n
    # int(math.log10(n)) is the number of digits in n, minus one
    return (n % 10) * 10 ** int(math.log10(n)) + reverse(n // 10)
That math.log stuff simply determines the number of digits in the number, and therefore the power of 10 that should be multiplied.
Output:
In [78]: reverse(1234)
Out[78]: 4321
In [79]: reverse(123)
Out[79]: 321
In [80]: reverse(12)
Out[80]: 21
In [81]: reverse(1)
Out[81]: 1
In [82]: reverse(0)
Out[82]: 0
Does exactly what @GregS suggested in his comment. The key to reversing is to extract the last digit using the modulo operator, convert each extracted digit to a string, then simply join them back up into the reverse of the original:
def reverseDisplay(number):
    if number < 10:
        return str(number)
    return str(number % 10) + reverseDisplay(number // 10)  # // for integer division in Python 3

def main():
    print(reverseDisplay(int(input("Enter a number: "))))

main()
Alternative method without using recursion:
def reverseDisplay(number):
    return str(number)[::-1]
Is there a way to do it without converting to string?
Yes there is a way, but if you have the number 19000, you will not get 00091 using that method. I will post the method as well
@user3495234, on second thought, I can't think of a way of doing it without resolving to log or math library like inspectorG4dget did
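For what it's worth, the string conversion and `math.log` can both be avoided by carrying the partial result down the recursion as an accumulator. This is a sketch; the parameter names are illustrative:

```python
def reverse(n, acc=0):
    # Peel the last digit off n and push it onto acc,
    # shifting acc left by one decimal place each call.
    if n == 0:
        return acc
    return reverse(n // 10, acc * 10 + n % 10)

print(reverse(1234))  # -> 4321
print(reverse(100))   # -> 1 (trailing zeros vanish, as noted earlier)
```

The accumulator pattern sidesteps the need to know the number of digits in advance, because each digit is multiplied into the right place as it is peeled off.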
Laravel Send or Queue Mail depending on config setting
I am looking for a neat way to send or queue email depending on a config setting.
Right now I am having to do something like this everytime I send an email
$mailContent = new AccountNotification($account);
$mailObject = Mail::to($email);

if (config('app.queueemail')) {
    $mailObject->queue($mailContent);
} else {
    $mailObject->send($mailContent);
}
There has to be a simpler way to do this so I don't have to repeat this code each time I want to send an email.
You could extend the Mail class. Put in a custom send function that checks the config and uses the parent class's send/queue functions accordingly.
Extending @ceejayoz's comment, a simpler way could also be to use a global Helper function.
For example, you could have a send_email() global function that will send/queue email depending on your app's configuration.
if ( ! function_exists('send_email')) {
    /**
     * Sends or queues email
     *
     * @return mixed
     */
    function send_email($mailer, $content)
    {
        return config('app.queueemail')
            ? $mailer->queue($content)
            : $mailer->send($content);
    }
}
To use it you would do:
send_email($mailer, $content);
How to catch a certain part of a website?
Suppose that we have a website. We want to show a specified part of this site in another site, like a table of data that shows latest news, and we want to show this part in our website with javascript.
Is this possible? Are there any more information needed?
We all know that with this code:
<iframe src="http://www.XYZ.com">
</iframe>
we can load all website, but how to load a specific part of a website?
If you don't own both sites, this is frowned on. If a site wants you doing stuff like this they'll supply an API, widget, RSS feed, etc. And if you do own both sites, you could be sharing this content on the back end, not scraping via Javascript on the front.
Google screen scraping.
I think your best bet is jQuery's .load(), but I'm not up to speed on whether there are cross-domain problems with that approach. • Just checked, and there are.
You can use PHP to load the output of a page with file_get_contents() or similar and then scrape out what you want for your own output.
Example:
$str = file_get_contents("http://ninemsn.com.au/");

// Really shouldn't use regex with HTML.
$region = '/(\<div id\=\"tabloid\_2\" class\=\"tabloids\">)(.*)(\<\/div\>)/';
preg_match($region, $str, $matches);

echo $matches[0];
Finally, something to keep in the back of your mind is that many of the larger websites that developers may want to get content from for their own website offer APIs to easily and efficiently obtain information from their site. YouTube, Twitter and a number of photo sharing sites are good examples of these. JSON and XML are probably the most common data formats that you will receive from these APIs.
Here are some examples of APIs that produce usable JSON:
YouTube video feed: https://gdata.youtube.com/feeds/api/users/Martyaced/uploads?alt=json
Twitter feed: http://api.twitter.com/1/statuses/user_timeline.json?screen_name=Martyaced
I wrote that when there was just one paragraph :D, before the two PHP snippets were added.
@foreign.man Okay, give the PHP I updated my answer with a try to see what I mean. It's an extremely fiddly approach - site owners don't really want you to scrape content from their site; if they want you to be able to do that, they'll provide an API to do so.
Thanks. Does that work for a table (td)? And how can I use it on my website? (I'm new to PHP.)
@foreign.man You need to be using a server that supports PHP and the file needs to be a .php file. The code above also needs to be wrapped in <?php ...code... ?>.
request_irq: IRQ flags set to 0; is this valid?
In some of the drivers, while browsing 2.6.35, I observed that request_irq is passed a value of 0 for the IRQ flags. Looking in interrupt.h, 0 corresponds to IRQF_TRIGGER_NONE.
Is this equivalent to the case of IRQ_NONE in previous kernels?
Thanks.
The actual flags passed into request_irq() are defined in a comment in <linux/interrupt.h>:
/*
* These flags used only by the kernel as part of the
* irq handling routines.
*
* IRQF_DISABLED - keep irqs disabled when calling the action handler.
* DEPRECATED. This flag is a NOOP and scheduled to be removed
* IRQF_SAMPLE_RANDOM - irq is used to feed the random generator
* IRQF_SHARED - allow sharing the irq among several devices
* IRQF_PROBE_SHARED - set by callers when they expect sharing mismatches to occur
* IRQF_TIMER - Flag to mark this interrupt as timer interrupt
* IRQF_PERCPU - Interrupt is per cpu
* IRQF_NOBALANCING - Flag to exclude this interrupt from irq balancing
* IRQF_IRQPOLL - Interrupt is used for polling (only the interrupt that is
* registered first in an shared interrupt is considered for
* performance reasons)
* IRQF_ONESHOT - Interrupt is not reenabled after the hardirq handler finished.
* Used by threaded interrupts which need to keep the
* irq line disabled until the threaded handler has been run.
* IRQF_NO_SUSPEND - Do not disable this IRQ during suspend
* IRQF_FORCE_RESUME - Force enable it on resume even if IRQF_NO_SUSPEND is set
* IRQF_NO_THREAD - Interrupt cannot be threaded
* IRQF_EARLY_RESUME - Resume IRQ early during syscore instead of at device
* resume time.
*/
These are bits, so a logical OR (ie |) of a subset of these can be passed in; and if none apply, then the empty set is perfectly fine -- ie a value 0 for the flags parameter.
Since IRQF_TRIGGER_NONE is 0, passing 0 into request_irq() just says leave the triggering configuration of the IRQ alone -- ie however the hardware/firmware configured it.
IRQ_NONE is in a different namespace; it is one of the possible return values of an interrupt handler (the function passed into request_irq()), and it means that the interrupt handler did not handle an interrupt.
IRQ_NONE is a constant for the return values of IRQ handlers. It is still available in include/linux/irqreturn.h.
IRQF_TRIGGER_NONE is a specifier for the behaviour of the interrupt line.
So they are not equivalent.
Can the support of a coherent sheaf be a numerically trivial divisor?
Let $X$ be a smooth projective variety with Picard number 1 over $\mathbb{C}$. Let $F$ be a coherent sheaf on $X$ such that $c_1(F)$ is algebraically trivial, and hence numerically trivial. Also, the rank of $F$ is $0$.
So $F$ is supported on a proper closed subscheme of $X$. Can $F$ be supported on a divisor, or is the codimension of $\operatorname{Supp} F$ necessarily $\geq 2$?
Since the Picard number of $X$ is 1, any non-zero effective divisor is ample. Since $c_1(F)$ is numerically trivial, if $F$ were supported on a divisor, that divisor would have to be non-effective. Is this possible?
Note, I posted this question on MathSE, but have not received any answers.
Your hypothesis that $F$ has rank $0$ and $c_1(F)$ is numerically trivial is enough to conclude that $F$ cannot be supported on a divisor. (The Picard number 1 hypothesis is not necessary for this.)
@ulrich, thank you for the clarification. It would be helpful if you can elaborate a bit.
@ulrich, I have been trying to understand what you said. Do we have that - if we have a numerically trivial divisor $D$, then it cannot be the support of any sheaf. Is it because the $D$ is not effective?
Yes, the point is that on a projective variety a nonzero effective divisor is never numerically trivial.
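To spell that out, here is a sketch using standard facts about torsion sheaves (notation is illustrative): if $F$ has rank $0$ and its support contains codimension-$1$ components $D_1,\dots,D_k$, then

```latex
c_1(F) \;=\; \sum_{i=1}^{k} \ell_i\,[D_i],
\qquad
\ell_i \;=\; \operatorname{length}_{\mathcal{O}_{X,\eta_i}}\!\big(F_{\eta_i}\big) \;>\; 0,
```

so $c_1(F)$ is the class of a nonzero effective divisor. Pairing with powers of an ample class $H$ on the $d$-dimensional $X$ gives $c_1(F)\cdot H^{d-1}=\sum_i \ell_i\,(D_i\cdot H^{d-1})>0$, so $c_1(F)$ cannot be numerically trivial.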
Replace t-test with modified z-test - anyone ever seen this?
I just read about a procedure I'd never heard of, and was wondering if anyone has had any experience with it. In Donald Berry's "Statistics" book, he presents an alternative to doing a t-test by modifying the sample standard deviation, with a factor that increases with lower-N:
$s\rightarrow s\left(1+\frac{20}{n^2}\right)$
and then doing a simple z-test. Has anyone ever seen this, or seen a derivation of this? It seems to empirically match the t-distribution test at the 99% level, and overestimates the standard deviation for lower percentiles.
I doubt that it would make a lot of practical difference, but if justified, it might help students in intro stats classes.
thoughts?
I don't know that I have seen this particular approximation, but I have seen similar rules for improving the rate at which a normal approximation may be used for a $t$. It's clearly not based on matching the standard deviation of the standard normal to that of the standard $t$, which yields a quite different approximation, but it might be based on approximately matching some quantile, as your question seems to hint.
Haven't seen it either. Not sure why it would help students to have yet another apparently arbitrary formula to remember.
For any given significance level, a simpler formula does better--quite well, in fact.
What's really going on here is that we want to approximate quantiles of the t-distribution of $n-1$ degrees of freedom by adjusting the corresponding quantiles of the standard Normal distribution. A multiplicative adjustment is a natural one to try (rather than an additive one, especially if you have ever spent time staring at tables of these critical values). So what we would like to know is how the ratio of quantiles $t_{n,1-\alpha}/z_{1-\alpha}$ varies with $n$ for smallish values of $\alpha$ around $1$% or $5$%.
A good way to obtain the answer is to compute some of these ratios and plot them against $n$. Using standard methods of exploratory data analysis, I find that
$$f(n, \alpha) = 1 / \left(t_{n-1,1-\alpha}/z_{1-\alpha} - 1\right)$$
varies in a beautifully linear fashion with $n$ for a wide range of small $n$ (which is where any such adjustment actually matters). Not only that, the variation is linear regardless of the value of $\alpha$, too. The evidence is abundantly clear in this plot of $f(n,\alpha)$ versus $n$ for $\alpha$ ranging from $1/8 = 0.125$ down to $1/2^{10}\approx 0.001$ by factors of $1/2$:
In this figure, colors distinguish values of $\alpha$. The slopes get small as $\alpha$ decreases.
Knowing this, you can pick your favorite value of $\alpha$, such as $\alpha = 5$%, find the formula of the corresponding line (visually if you like or using least squares for more precision), and fiddle a little with the results to obtain a pleasing formula. For instance, with $\alpha=5$%, an intercept of $-2/3$ and slope of $1$ work fine. Specifically, this means the standard deviation should be multiplied by $1 + 1/(n-2/3)$. Here's the evidence:
A plot of the residuals (not shown) indicates this approximation is accurate to about one part in $300$ except for 1 and 2 degrees of freedom, where it errs by $4$% and $-1.5$%, respectively. This is pretty good considering the Student t quantiles themselves range from $3.84$ down to $1.03$ times the Standard Normal quantile.
Given that the actual behavior of quantiles of the Student t distribution is proportional to $1/n$ rather than $1/n^2$, I see little pedagogical value in using the $1/n^2$ adjustment proposed in the question: at best it might be useful as an approximation, but it provides no valid insight into the relationship between the Student t and standard Normal distributions.
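To see the difference in shrinkage rate numerically, here are the two multiplicative factors side by side. This is pure arithmetic (no stats libraries) and only compares the multipliers themselves, not the resulting test decisions:

```python
def berry_factor(n):
    # Berry's proposed adjustment: s -> s * (1 + 20 / n^2)
    return 1 + 20 / n**2

def quantile_factor(n):
    # The 1/n-type adjustment fitted above (for alpha = 5%)
    return 1 + 1 / (n - 2/3)

for n in (5, 10, 20, 50):
    print(n, round(berry_factor(n), 3), round(quantile_factor(n), 3))
```

Berry's $1/n^2$ factor is far larger for small $n$ (e.g. $1.8$ versus about $1.23$ at $n=5$) and smaller for large $n$, consistent with the $1/n$ behavior found above.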
Here is R code used to produce these plots.
#
# Approximate qt by qnorm.
#
y <- function(x, q) {
    sapply(q, function(r) 1/(qt(r, x) / qnorm(r) - 1))
}
#
# Plot the approximation.
#
x <- 1:30 # Degrees of freedom
q <- 1 - exp(-seq(log(8), 7, log(2))) # Quantiles
k <- length(q)
data <- y(x, q) # Approximations
fits <- apply(data, 2, function(z) lm(z ~ x)$coeff) # Fitted lines
colors <- hsv(1 - (1:k - 1/2)/k, .8, .8)
plot(t(matrix(rep(x,k), ncol=k)), t(data),
pch=19, col=colors,
xlab="df", ylab="1/(t/z - 1)")
tmp <- sapply(1:k, function(i) abline(fits[,i], col=colors[i]))
#
# Figure 2: Evaluate the approximation for a particular quantile
#
plot(qt(.95, x),qnorm(.95) *(1 + 1 / (x-2/3)), pch=19, col="Red",
log="xy", main="Estimated versus actual Student t quantile")
abline(0,1)
#
# Plot the relative residuals
# (The approximation typically is better than 2 sig figs except for
# 1 and 2 df.)
#
plot(qt(.95, x) / (qnorm(.95) *(1 + 1 / (x-2/3))), pch=19, col="Red",
log="y", main="Approximation residuals")
abline(h=1)
Two Sharepoint 2007 sites
I'm working with SharePoint 2007 and we're currently working with another organization for a long-term project. We need to access documents in their SharePoint site.
After trying to figure out the wide array of Microsoft products branded with the name 'SharePoint', I'm not even certain that SharePoint can link to an external SharePoint site outside of the domain.
Is it possible to link two separate SharePoint sites together? If so, how is it done? If not, is it possible to program a SharePoint webpart or some .NET component to transfer documents from site A to B?
There are a few ways to do it but from what details you've given, I'd suggest implementing ADFS at the sites. This should allow them to grant your account permissions to the site. If however all you need to do is access the docs (like a normal website) they can set up the site to allow anonymous access and you can post a link to it on your internal site.
Thanks for the ideas. I think ADFS would be a slightly better option in terms of security but I'm unsure if the other organization would be willing to work with us on that. Granted that problem is not in the scope of this particular discussion. What other information would be needed (specifically) from me to come up with other possible solutions?
Looks like ADFS is the Microsoft approved way of sharing sites. According to MSTechNet and google books online: http://books.google.com/books?id=-6Dw74If4N0C&pg=PA27&lpg=PA27&dq=sharing+sharepoint+sites+external+adfs&source=bl&ots=ojOlMP13tE&sig=FjsMmOHymCOMGo7il7vjWF_lagQ&hl=en&ei=ytqfStClO5mMtgejsfH0Dw&sa=X&oi=book_result&ct=result&resnum=5#v=onepage&q=&f=false
LineSeries Chart Point Size Reduction
I want to reduce the point/marker size in this WPF Toolkit LineSeries Chart.
This is my XAML:
<Window.Resources>
    <Style x:Key="DashedPolyLine" TargetType="{x:Type Polyline}">
        <Setter Property="StrokeThickness" Value="1"/>
    </Style>
</Window.Resources>
<Grid>
    <chartingToolkit:Chart Name="lineChart" Title="Convergence Plot" VerticalAlignment="Stretch" HorizontalAlignment="Stretch">
        <chartingToolkit:LineSeries DependentValuePath="Value" IndependentValuePath="Key" ItemsSource="{Binding}" IsSelectionEnabled="True" AnimationSequence="FirstToLast" Title="Values" UseLayoutRounding="True" PolylineStyle="{StaticResource DashedPolyLine}"/>
        <chartingToolkit:Chart.Axes>
            <chartingToolkit:LinearAxis Orientation="Y" Maximum="1.5" Minimum="-1.5" Interval="0.2"/>
        </chartingToolkit:Chart.Axes>
    </chartingToolkit:Chart>
</Grid>
This is what I mean:
How can this be achieved? Thanks.
If it helps anyone I used this code to remove the points:
<Style TargetType="chartingToolkit:LineDataPoint">
    <Setter Property="Opacity" Value="0" />
    <Setter Property="Background" Value="Blue" />
</Style>
Setting the Visibility property doesn't work; it's a known issue.
joi: Custom errors are not returned, abortEarly is set to false
I can't get this joi validation to return all the errors, the way it does with the default error messages.
So here I'm setting individual custom errors for each field:
const schema = Joi.object().keys({
    a: Joi.string().error(new Error('must be string')),
    b: Joi.number().error(new Error('must be number'))
});
Then, when validating with abortEarly set to false, it only returns the first error it encounters.
Joi.validate({a: 1, b: false}, schema, {abortEarly: false})
The returned error looks like this:
{ error: [Error: must be string], value: { a: 1, b: false }}
when it should be returning all the errors in some manner.
Am I using abortEarly incorrectly or is there a process needed to be done in returning all custom errors? Thanks in advance for any response.
Well, I think I found the answer. My joi library wasn't up to date, so I upgraded it from 10.2.x to 10.4.1. Some features I had seen in the documentation didn't work when I tried them in the older version, including the solution I used.
I tried using this pattern and it works:
const schema = Joi.object().keys({
    a: Joi.string().error(() => 'must be string'),
    b: Joi.number().error(() => 'must be number')
});

The validation error now looks like this:
{ [ValidationError: child "a" fails because [must be string]. child "b" fails because [must be number]]
isJoi: true,
name: 'ValidationError',
details:
[ { message: '"a" must be a string',
path: 'a',
type: 'string.base',
context: [Object] },
{ message: '"b" must be a number',
path: 'b',
type: 'number.base',
context: [Object] } ],
_object: { a: 1, b: false },
annotate: [Function] }
Then I'll just parse error.message to get all the error messages and process them.
'child "a" fails because [must be string]. child "b" fails because [must be number]'
I have another way to check each validation error as well.
If you have a single validation rule, whatever the condition is, you can do this:
username: Joi.string() // It has to be a string
    .label("Username") // The label
    .error(new Error('It is whatever error')) // Custom general error
You can do this with an arrow function too:
username: Joi.string() // It has to be a string
    .label("Username") // The label
    .error(() => 'It is whatever error') // Custom general error
But if there are several validation parameters and errors, we have these solutions:
password: Joi.string() // It has to be a string
    .min(8) // It has to have at least 8 characters
    .required() // It has to have a value
    .label("Password") // The label
    .error(errors => {
        return errors.map(err => { // Map over the errors array (ES6)
            if (err.type === "string.min") { // Check the type of error, e.g. 'string.min', 'any.empty', 'number.min', ...
                return { message: "The length of the parameter should be more than 8 characters" }; // The message we want to display
            } else {
                return { message: "another validation error" };
            }
        });
    })
There is another solution with a switch case:
password: Joi.string() // It has to be a string
    .min(8) // It has to have at least 8 characters
    .required() // It has to have a value
    .label("Password") // The label
    .error(errors => {
        return errors.map(err => { // Map over the errors array (ES6)
            switch (err.type) {
                case "string.min":
                    return { message: "The length of the parameter should be more than 8 characters" }; // The message we want to display
                case "any.empty":
                    return { message: "The parameter should have a value" };
            }
        });
    })
Please note that if a validation error occurs whose type you did not handle, the mapping returns undefined for that error's message.
You can use err.context within the message to display the label, limit, max, etc. dynamically:
message: `${err.context.label} must have at least ${err.context.limit} characters.`
Reference: Joi documentation
Running a spring-boot-gradle-plugin fat jar with named application modules at runtime?
I've got a simplest modular sample application made with Gradle and Spring Boot, and I'm having trouble to launch it with the modules being full-fledged named modules at runtime.
My questions are the following
Can spring-boot-gradle-plugin build fat jars that you can run with full-fledged named modules at runtime?
If it can, then how do you make it build the jar, and how do you run it?
If it cannot, then what do I do to have modules at runtime in a Spring Boot app?
The Gradle version is 7.2
The source
(You can get it here as well: https://github.com/ptrts/modules-bootRun-bug)
settings.gradle
rootProject.name = 'app'
build.gradle
plugins {
    id 'org.springframework.boot' version '2.6.4'
    id 'io.spring.dependency-management' version '1.0.11.RELEASE'
    id 'application'
}

group = 'we'
sourceCompatibility = '17'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter'
}

application {
    mainModule = 'app'
    mainClass = 'app.Main'
}
src/main/java/module-info.java
module app {
    requires spring.boot;
    requires spring.boot.autoconfigure;
    requires java.annotation;

    exports app;
}
src/main/java/app/Main.java
package app;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Main {

    public static void main(String[] args) {
        // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
        // Output the module name
        System.out.println("Module name = " + Main.class.getModule().getName());
        // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

        SpringApplication.run(Main.class, args);
    }
}
As you can see the app outputs to stdout the module name of the Main class when launched
The resulting fat jar layout
org/springframework/boot/loader
    Spring Boot bootstrapping stuff
META-INF/MANIFEST.MF
    ...
    Main-Class: org.springframework.boot.loader.JarLauncher
    Start-Class: app.Main
    ...
BOOT-INF
    lib
        dependency libs
    classes
        app/Main.class **the package is deep in the JAR**
        module-info.class **the module descriptor is in the JAR root**
How I tried to launch the application
I tried three different ways of launching the app
./gradlew bootRun -i (-i is for INFO logging level)
java -jar build/libs/app.jar
java --module-path build/libs/app.jar --module app/app.Main
Launching with ./gradlew bootRun -i
The app prints out Module name = null
In the gradle logs we can see the command line used to launch the JVM, which also tells us that everything is in the class path
Here it is, with some extra line breaks which I added for readability:
C:\Program Files\Java\jdk-17.0.1\bin\java.exe
...
-cp
C:\data\projects\modules-bootRun-bug\build\classes\java\main;
C:\data\projects\modules-bootRun-bug\build\resources\main;
C:\Users\pavel\.gradle\caches\modules-2\files-2.1\org.springframework.boot\spring-boot-autoconfigure\2.6.4\36e75a2781fc604ac042945eed8be2fe049731df\spring-boot-autoconfigure-2.6.4.jar;
...
app.Main
Launching with java -jar build/libs/app.jar
The app prints out Module name = null
Launching with java --module-path build/libs/app.jar --module app/app.Main
Error occurred during initialization of boot layer
java.lang.module.FindException: Error reading module: app.jar
Caused by: java.lang.module.InvalidModuleDescriptorException: Package app not found in module
This is of course expected, since module-info.class sits in the jar root while its package is somewhere deeper in the jar
Why are you trying to turn a fat jar into a module? A fat jar is intended as a self-contained deployment unit, somewhat similar to a war file. This means that many of the benefits of being a module aren't applicable.
@AndyWilkinson, it wasn't my intention to turn the fat jar into a module, I just put it to the module path as part of a shotgun debugging process. The main goal was to have the module "app" inside the jar to be a full-fledged named module at runtime.
AWS S3 or CloudFront Access Denied when refreshing the browser on a URL that includes a path (i.e. domain.com/subdirectoryname)
When accessing only the domain name (i.e. domain.com), I get the page I want,
but when accessing a URL that contains a path (i.e. domain.com/subdirectoryname), I get an error message like the one below:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
How can I fix it?
Are you trying to access a file on S3 (which don't exist) ? Or is it a dynamic path (like a single page web app) ? In that case you should configure cloudfront to redirect other path to your index
@AntoninRiche Thanks for your comment! I'll try what you advise.
@AntoninRiche First of all, thank you for your help.
When I configured the CloudFront distribution the first time, I mistakenly connected CloudFront to the plain S3 bucket that AWS showed me as a recommendation on the configuration page. But I needed to connect CloudFront to the S3 bucket's static website hosting endpoint!
[My Fault] If you want to connect CloudFront to an S3 bucket's static site, don't connect to the plain S3 bucket origin that AWS auto-recommends.
[Solution] Connect to the S3 bucket's static website hosting endpoint URL instead.
Preparing for Brexit on a personal level
Situation:
I am a UK citizen living and working in Berlin. I am a senior software developer, so have good job security, and am paid well enough that I could get an EU Blue Card.
I own a flat in the UK, which is currently providing me with a second income that covers my rent plus 50% of my living costs (I live cheaply). In 2017 I had planned to sell the flat in 2018, but a long-duration family emergency delayed me moving, and by the time I was ready to consider selling, the housing market looked to be spooked by the possibility of No Deal. It is leasehold, with ~92 years remaining on the lease.
My only debt is a UK student loan, "plan 1" (1.75% interest, tied to UK rate of inflation), which I could repay entirely right now with savings, or pay off over the next 18 months with, e.g., the UK income from the UK letting.
Question:
Given I wish to buy a home in Berlin, are there any clear low-risk actions that I can take with my assets at this time? Or are there too many unknowns?
(Regarding the title: While Brexit has technically happened, the withdrawal agreement has effectively delayed it until the end of this year - I am just using "Brexit" as a shorthand for that).
"are there any sensible actions that I can take at this time?" is a particularly broad and opinionated question; could you narrow this down to specific concerns that you have, or options you are considering?
Ah, thanks. I'd missed how easily that could go wrong. Hopefully this is at least no longer going to suffer from Opinion, though I'm not sure if I've improved it enough with regard to narrowing it down…
What's your argument for keeping the flat now? And do you have long-term plans to return to the UK at any point in your life?
@GS-ApologisetoMonica No plans to return to the UK for more than visits; The only argument I have for keeping it is "that's a nice side-income".
"Given I wish to buy a home in Berlin, are there any clear low-risk actions that I can take with my assets at this time?"
It is this phrasing which is critical to suggesting an action. If you are truly committing to buying a home in Berlin, then the lowest risk action you can take would be to buy a home in Berlin now, even if that means selling your current home. Brexit fears wearing off may cause the value of your home to rise after you sell it, and a worsening Berlin economy for whatever reason may cause your new home to decrease in price after you buy it, but the opposite is also true. By completing the purchase now, you would be fixing your costs, and hedging yourself against fluctuations in both property markets.
Where this may end up not decreasing your risk after all, is if you change your mind about (a) what city you want to live in, or (b) what type of home you want to buy, or (c) how soon you are able to move into this new home in Berlin. But assuming you 100% will be moving into a home you 100% will be buying in Berlin where you will be staying for the foreseeable future [ie: you won't change your mind and move in a few years, which is too short of a time period to be comfortable accepting the possibility volatility of home ownership], then you can reduce the risk of volatility in your housing costs by owning the property you rent.
Another way to look at this, is that owning a rental property in a country where you (a) don't live and; (b) don't plan to return to in the foreseeable future; is a fairly risky asset class. You bear the risk of management and repair costs that you cannot handle easily due to geography, and also bear the full risk of the UK housing market. If you were living in the UK, and lived in that house, at least you would have reduced your risk by locking in your housing costs; as it stands, by living in Berlin you are somewhat disconnected from the UK economy, and therefore volatility in your income from the UK will not necessarily be offset by volatility in your expenses in Berlin.
Duplicate Rule Bypass with RecordEditForm in Lightning
I am struggling with figuring out how to pass
dml.DuplicateRuleHeader.allowSave = true;
in Apex on a Lightning RecordEditForm component. I am using RecordEditForm as a record creation form and am unsure how to alter the payload.
I am assuming I would pass something off from the lightning controller "onSubmit" to an apex controller.
Appreciate the help in advance.
How to assign values that are available to threads
I'm currently working on a scraper and trying to figure out how to assign proxies that are available to use: if I run 5 threads and thread 1 uses proxy A, no other thread should be able to use proxy A, and each thread should pick at random from the remaining available proxy pool.
import random
import time
from threading import Thread
import requests
list_op_proxy = [
    "http://test.io:12345",
    "http://test.io:123456",
    "http://test.io:1234567",
    "http://test.io:12345678"
]
session = requests.Session()
def handler(name):
    while True:
        try:
            # the pool entries already include the http:// scheme
            session.proxies = {
                'https': random.choice(list_op_proxy)
            }
            with session.get("https://stackoverflow.com"):
                print(f"{name} - Yay request made!")
                time.sleep(random.randint(5, 10))
        except requests.exceptions.RequestException as err:
            # requests.exceptions is a module; catch its base exception class
            print(f"Error! Lets try again! {err}")
            continue
        except Exception as err:
            print(f"Error! Lets debug! {err}")
            raise

for i in range(5):
    Thread(target=handler, args=(f'Thread {i}',)).start()
I wonder how I can create a way where I can use proxies that are available and not being used in any threads and "block" the proxy to not be able to be used to other threads and release once it is finished?
One way to go about this would be to just use a global shared list that holds the currently active proxies, or to remove the proxies from the list and re-add them after the request is finished. You do not have to worry about concurrent access on the list, since CPython suffers from the GIL.
proxy = random.choice(list_op_proxy)
list_op_proxy.remove(proxy)
session.proxies = {
    'https': proxy
}
# ... do request
list_op_proxy.append(proxy)
You could also do this using a queue and just pop and add to make it more efficient.
Using a Proxy Queue
Another option is to put the proxies into a queue and get() a proxy before each query, removing it from the available proxies, and then put() it back after the request has finished. This is a more efficient version of the above-mentioned list approach.
First we need to initialize the proxy queue.
proxy_q = queue.Queue()
for proxy in proxies:
    proxy_q.put(proxy)
Within the handler we then get a proxy from the queue before performing a request. We perform the request and put the proxy back to the queue.
We are using block=True, so that the queue blocks the thread if there is no proxy currently available. Otherwise the thread would terminate with a queue.Empty exception once all proxies are in use and a new one should be acquired.
def handler(name):
    while True:
        proxy = proxy_q.get(block=True)  # we want blocking behaviour
        # ... do request
        proxy_q.put(proxy)
        # ... response handling can be done after the proxy is put back, so it is
        # not blocked longer than required
        # do not forget to define a break condition
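Putting the pieces together, here is a minimal runnable sketch of the thread-plus-proxy-queue idea. The proxy strings are placeholders and a short sleep stands in for the real request, so the sketch stays self-contained:

```python
import queue
import threading
import time

proxies = ["http://test.io:12345", "http://test.io:12346", "http://test.io:12347"]

proxy_q = queue.Queue()
for proxy in proxies:
    proxy_q.put(proxy)

results = []
results_lock = threading.Lock()

def handler(name, n_requests):
    for _ in range(n_requests):
        proxy = proxy_q.get(block=True)   # blocks until some proxy is free
        try:
            # ... perform the real request through `proxy` here ...
            time.sleep(0.01)              # stand-in for network time
            with results_lock:
                results.append((name, proxy))
        finally:
            proxy_q.put(proxy)            # release even if the request fails

threads = [threading.Thread(target=handler, args=(f"Thread {i}", 4))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 5 threads * 4 requests each = 20
```

Because get()/put() bracket each request, a proxy can never be held by two threads at once, and threads simply wait when all proxies are busy.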
Using Queue and Multiprocessing
First you would initialize the manager and put all your data into the queue and initialize another structure for collecting your results (here we initialize a shared list).
manager = multiprocessing.Manager()
q = manager.Queue()
for e in entities:
    q.put(e)
print(q.qsize())
results = manager.list()
Then you initialize the scraping processes:
processes = []
for proxy in proxies:
    processes.append(multiprocessing.Process(
        target=scrape_function,
        args=(q, results, proxy),
        daemon=True))
And then start each of them
for w in processes:
    w.start()
Lastly, you join every process to ensure that the main process is not terminated before the subprocesses are finished:
for w in processes:
    w.join()
Inside the scrape_function you then simply get one item at a time and perform the request. The queue object in the default configuration raises a queue.Empty error when it is empty, so we are using an infinite while loop with a break condition catching the exception.
def scrape_function(q, results, proxy):
    session = requests.Session()
    session.proxies = {
        'https': proxy
    }
    while True:
        try:
            request_uri = q.get(block=False)
            with session.get(request_uri) as response:
                print(f"{proxy} - Yay request made!")
                results.append(response.status_code)
            time.sleep(random.randint(5, 10))
        except queue.Empty:
            break
The results of each query are appended to the results list, which is also shared among the different processes.
A queue sounds awesome, I believe, but I have no idea how I can apply it to this scenario unfortunately, maybe you know? :( I don't know what I think about appending/removing. I was thinking of maybe keeping an "available/busy" status for each proxy, but there is a chance that two threads use the same proxy if they grab it at the same time. I think a queue would be very great, but how?
As mentioned above, the GIL prevents concurrent access on a list in CPython (more information here). Since append and remove should be atomic to the best of my knowledge, you should have no problems. If you really want to be sure, you can still use a Lock. This is basically the same as holding a busy or available status.
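If you do go the Lock route, here is a small sketch of what that could look like. Note that the check-and-remove has to happen inside the same with block to stay atomic; the helper names are made up for illustration:

```python
import random
import threading

list_op_proxy = ["http://test.io:12345", "http://test.io:12346"]
proxy_lock = threading.Lock()

def acquire_proxy():
    """Atomically pick and remove a random free proxy (None if all are busy)."""
    with proxy_lock:
        if not list_op_proxy:
            return None
        proxy = random.choice(list_op_proxy)
        list_op_proxy.remove(proxy)
        return proxy

def release_proxy(proxy):
    # Re-adding also has to happen under the lock.
    with proxy_lock:
        list_op_proxy.append(proxy)

p = acquire_proxy()
print(p is not None)        # True: one of the two proxies was checked out
release_proxy(p)
print(len(list_op_proxy))   # 2: the pool is whole again
```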
Oh really? But isn't a queue then also a better suggestion, where you just pull the data from the queue, which means two threads will never happen to have the same proxy?
Yeah, that is actually what I was about to suggest. You can just assign each thread to one proxy and then put the data in a queue you are reading from. This also works with processes. If you want "real" parallelism in python you have to use multiprocessing instead of multithreading. To create a shared queue between subprocesses you could then use a manager and create the queue via it.
Sounds a bit out of my knowledge; is there a small chance that you might be able to show me an example of how it can look with queues?
Before you go on! I will unfortunately need to use threading for a different purpose, so the multiprocessing might not need to be written here unless you want!
I have added an example. The same principles also apply with threading, just that you do not have to create the queue via the manager, but instead you can just create it directly. The rest of the operations are pretty much the same.
Oh wow! That looks awesome! But don't you need to do q.put(..) in the scraper? Or is it because we add it into the results list instead?
I assumed that each process/thread uses one proxy exclusively now. The queue contains your request uris (or any other data that you need to perform the requests), the results list is there to collect the results of your queries. As I said, with threads you do not need to use a manager, but can instead just use the usual object.
Yeah, now it's a lot easier to understand. I was thinking after your update that I could use a queue straight off for the proxies instead, and do q.get to get the proxy and q.put when it's finished?
Yeah, sure, that is basically what we discussed before using the list. I will add another section to my answer using a proxy queue.
That would be awesome! I will of course set your answer as the answer! Very well said too!
Thank you very much, the edits are added :)
Glad for your examples! very well done! Legend
Loading interstitial adMob in different threads
I want to load an interstitial AdMob ad 5 seconds after the Activity starts, in another Thread. Is this code right, or am I duplicating Runnables? Is there a better way?
Handler handler;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setActivityImmersiveMode();
    setContentView(R.layout.activity_normal_mode);

    // loading ads
    handler = new Handler();
    Runnable runnable = new Runnable() {
        @Override
        public void run() {
            handler.postDelayed(new Runnable() {
                @Override
                public void run() {
                    loadInterstitialAd();
                }
            }, 5000);
        }
    };
    Thread thread = new Thread(runnable);
    thread.setName("AdThread");
    thread.start();
}
There is no need to use the Thread class; you can directly do this:
// loading ads
handler = new Handler();
handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        loadInterstitialAd();
    }
}, 5000);
So what happens is that your handler will schedule your Runnable after 5 seconds; you don't have to instantiate a Thread and call the runnable again.
How can I change the data-icon attribute via jQuery (bootstrap-iconpicker)?
I am working with the bootstrap icon picker (http://victor-valencia.github.io/bootstrap-iconpicker/) and I want to change the default icon via jQuery by button click. Something is not working, and I cannot figure it out:
$( "#click" ).click(function() {
$('.icon').data('icon', 'glyphicon-bomb');
});
<link rel="stylesheet" href="http://victor-valencia.github.io/bootstrap-iconpicker/icon-fonts/elusive-icons-2.0.0/css/elusive-icons.min.css" />
<link rel="stylesheet" href="http://victor-valencia.github.io/bootstrap-iconpicker/icon-fonts/font-awesome-4.2.0/css/font-awesome.min.css" />
<link rel="stylesheet" href="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-iconpicker/css/bootstrap-iconpicker.min.css" />
<script src="http://victor-valencia.github.io/bootstrap-iconpicker/jquery/jquery-1.10.2.min.js"></script>
<script type="text/javascript" src="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-3.2.0/js/bootstrap.min.js"></script>
<script type="text/javascript" src="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-iconpicker/js/iconset/iconset-glyphicon.min.js"></script>
<script type="text/javascript" src="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-iconpicker/js/iconset/iconset-all.min.js"></script>
<script type="text/javascript" src="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-iconpicker/js/bootstrap-iconpicker.js"></script>
<button class="icon btn btn-default" data-iconset="glyphicon" data-icon="glyphicon-camera" role="iconpicker"></button>
<div id="click">Click here to change the icon</div>
You say you want to select it, but your code says you want to change its data-icon attribute. Which is the problem?
@winseybash I updated my question to be more clear
The problem is that you're not setting data-icon properly: data has to be included in the attribute name when you use .attr().
See changed code below:
$( "#click" ).click(function() {
$('.icon').attr('data-icon', 'glyphicon-bomb');
});
<link rel="stylesheet" href="http://victor-valencia.github.io/bootstrap-iconpicker/icon-fonts/elusive-icons-2.0.0/css/elusive-icons.min.css" />
<link rel="stylesheet" href="http://victor-valencia.github.io/bootstrap-iconpicker/icon-fonts/font-awesome-4.2.0/css/font-awesome.min.css" />
<link rel="stylesheet" href="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-iconpicker/css/bootstrap-iconpicker.min.css" />
<script src="http://victor-valencia.github.io/bootstrap-iconpicker/jquery/jquery-1.10.2.min.js"></script>
<script type="text/javascript" src="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-3.2.0/js/bootstrap.min.js"></script>
<script type="text/javascript" src="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-iconpicker/js/iconset/iconset-glyphicon.min.js"></script>
<script type="text/javascript" src="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-iconpicker/js/iconset/iconset-all.min.js"></script>
<script type="text/javascript" src="http://victor-valencia.github.io/bootstrap-iconpicker/bootstrap-iconpicker/js/bootstrap-iconpicker.js"></script>
<button class="icon btn btn-default" data-iconset="glyphicon" data-icon="glyphicon-camera" role="iconpicker"></button>
<div id="click">Click here to change the icon</div>
I tested it, but it is not working. Does your code snippet work in your browser?
@Jarla my changes now update the data-icon attribute on the button, which is what your question asked for.
Ah ok, I see. I thought this would change the default selected icon. But it doesn't ;(
@Jarla it changes the icon when the id='click' div is clicked. Is that not what you wanted?
Well no, when I test it I still see the icon glyphicon-camera
But in your image there is no visible icon, like here: http://victor-valencia.github.io/bootstrap-iconpicker/
If you are trying to select the icon with the data-icon property of glyphicon-bomb, then use:
$('.icon[data-icon="glyphicon-bomb"]').
Edit: Posted before I saw the edit.
I tested it exactly like you wrote, but it is still not working. Did you test it in a fiddle or code snippet?
I have edited my answer as I forgot to place wrapping quotes inside the round brackets.
I was having a hard time with it, too. The data-icon thing was just not working (bug?)
That way it worked for me:
Setting a glyph
$("#buttonid").iconpicker("setIcon", "glyphicon-leaf");
Remove current glyph
$("#buttonid").iconpicker("setIcon", "glyphicon-");
You have to be careful not to forget the prefix glyphicon-, at least for the Bootstrap glyphs.
Delete all records in table first then insert new records in table Wordpress SQL query
I am trying to develop a custom plugin using object-oriented programming. I want to first check if the table already has values; if it does, I want to delete all values in the table and reset the ID to zero. After that I want to insert new records into the table in WordPress. How can I do that with a WordPress query? Could you please help me?
This is my insert code. Before inserting I need to check whether the table has values; if it does, I want to delete all values in that table, and after deleting, insert the new data into the table.
foreach ($wc_reviews as $wc_review) {
    $post_id = $wc_review->comment_post_ID;
    $customer_name = get_comment_author($wc_review);
    $location = "Test";
    $description = get_comment_text($wc_review);
    $featured_img_url = get_the_post_thumbnail_url($post_id, 'full');
    $table_name = $wpdb->prefix . 'review_data';
    $wpdb->insert(
        $table_name,
        array(
            'customer_name' => $customer_name,
            'location' => $location,
            'image' => $featured_img_url,
            'description' => $description,
            'time' => current_time('mysql'),
        )
    );
}
what have you tried? show some code
I insert data into the database using a wp-cron job function. On each cron job run I want to insert data into the database, but before that I want to delete the old data in the database.
I updated my question with code please check
What's the point of 'checking'?
You can check whether the table is empty or not by using a select query:
$list = $wpdb->get_results("SELECT * FROM tablename");
If table is empty $list will be an empty array. If it returns any data you can truncate the table and ID will reset to 0.
$wpdb->query('TRUNCATE TABLE tablename');
After this you can do insertion.
If you want to delete all rows and reset the ID, then you should execute this query:
TRUNCATE TABLE yourTableName
As per the WordPress documentation, you can check whether the table exists or not:
if ($wpdb->get_var("SHOW TABLES LIKE '$table_name'") != $table_name) {
    // table does not exist
}
Sorry, I want to check if that table is empty or not.
How do you print out elements from a Numpy array on new lines using a for loop?
Create an array with numpy and add elements to it. After you do this, print out all its elements on new lines.
I used the reshape function instead of a for loop. However, I know this would create problems in the long run if I changed my array values.
import numpy as np
a = np.array([0,5,69,5,1])
print(a.reshape(5,1))
How can I make this better? I think a for loop would be best in the long run but how would I implement it?
Some options to print an array "vertically" are:
print(a.reshape(-1, 1)) - You can pass -1 as one dimension,
meaning "expand this dimension to the needed extent".
print(np.expand_dims(a, axis=1)) - Add an extra dimension, at the second place,
so that each row will have a single item. Then print.
print(a[:, None]) - Yet another way of reshaping the array.
Or if you want to print just elements of a 1-D array in a column,
without any surrounding brackets, run just:
for x in a:
print(x)
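As a quick sanity check, the variants above all produce the same 5x1 column for the asker's array:

```python
import numpy as np

a = np.array([0, 5, 69, 5, 1])

col1 = a.reshape(-1, 1)           # -1 lets numpy infer the row count
col2 = np.expand_dims(a, axis=1)  # add a new axis in the second position
col3 = a[:, None]                 # same effect via slicing with None

print(col1.shape)  # (5, 1)
print(np.array_equal(col1, col2) and np.array_equal(col1, col3))  # True
```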
You could do this:
print(a.reshape([a.shape[0], 1]))
This will work regardless of how many numbers are in your numpy array.
Alternatively, you could also do this:
[print(number) for number in a.tolist()]
If you want to change from a row array to a column array, it is better to use a.reshape(-1, 1).
How to choose variables from a list for a function and then use the solution in a subsequent function?
I am trying to calculate heating degree days and cooling degree days and output that information to a table. I am using Mathematica's curated data to do this. In text, this is what I would like to do: use a city name to collect weather data (the mean temperature for every day of a certain year), then calculate the HDD and CDD and create a cumulative value of HDD and CDD for the entire year. I also want to use the city name to collect a value for the country name, country GDP, and city population. I have used a demonstration project to try and figure this out, but I am stuck at this point. I have the following already worked out.
Module[
{dateRange, mean, cdd, hdd, station, country, location, population,
GDPPerCapita,
reference = (65 - 32)/1.8, cumList},
station = "Chicago";
country = CityData[station, "Country"];
population = CityData[station, "Population"];
location = CityData[station, "Coordinates"];
GDPPerCapita = CountryData[country, "GDPPerCapita"];
dateRange = {{2011, 1, 1}, {2011, 12, 31}, "Day"};
mean = WeatherData[station, "MeanTemperature", dateRange];
cdd = Join[Transpose[{mean[[All, 1]]}],
Transpose[{Max[# - reference, 0] & /@ mean[[All, 2]]}], 2];
hdd = Join[Transpose[{mean[[All, 1]]}],
Transpose[{Max[reference - #, 0] & /@ mean[[All, 2]]}], 2];
cumList = Transpose[{Join[
Transpose[{cdd[[All, 1]]}],
Transpose[{Drop[FoldList[Plus, 0, cdd[[All, 2]] + hdd[[All, 2]]],
1]}],
2]}];
Grid[station, country, location, population, GDPPerCapita,
Last[cumList]]]
In this particular example I used a fixed city name, i.e. station = "Chicago". When I do this it returns the correct result for that specific city, in a grid in this case:
Grid["Chicago", "UnitedStates", {41.8376, -87.6818}, 2695598,
45230.2, {{{2011, 12, 31}, 3935.4}}]
I want the variables to be chosen from a list of cities....
cityLIST = CityData[#, "Name"] & /@ CityData[];
So basically I want to run this or something like it for every string in a list, and output the results to a table. I am new to mathematica, this is the first thing I have ever tried. If anyone has a minute to help me out with this I would really appreciate it.
Thanks
Welcome to Mathematica.Stackexchange. It's great to see you jump right in the middle and try all those things. However, I'd prefer you try reading some basic documentation first before asking questions. I'd start with tutorial/GettingStartedOverview and tutorial/CoreLanguageOverview in the documentation system.
A slight modification of your code makes this a real function:
myWeatherData[station_] :=
Module[{dateRange, mean, cdd, hdd, country, location, population,
GDPPerCapita, reference = (65 - 32)/1.8, cumList},
country = CityData[station, "Country"];
population = CityData[station, "Population"];
location = CityData[station, "Coordinates"];
GDPPerCapita = CountryData[country, "GDPPerCapita"];
dateRange = {{2011, 1, 1}, {2011, 12, 31}, "Day"};
mean = WeatherData[station, "MeanTemperature", dateRange];
cdd = Join[Transpose[{mean[[All, 1]]}],
Transpose[{Max[# - reference, 0] & /@ mean[[All, 2]]}], 2];
hdd = Join[Transpose[{mean[[All, 1]]}],
Transpose[{Max[reference - #, 0] & /@ mean[[All, 2]]}], 2];
  cumList = Transpose[{Join[Transpose[{cdd[[All, 1]]}],
      Transpose[{Drop[FoldList[Plus, 0, cdd[[All, 2]] +
          hdd[[All, 2]]], 1]}], 2]}];
{station, country, location, population, GDPPerCapita, Last[cumList]}
]
The city list
cityLIST = CityData[#, "Name"] & /@ CityData[];
It is pretty long:
cityLIST // Length
(* ==> 164186 *)
Therefore, we'll try it on the first ten cities in the list:
Grid[myWeatherData /@ cityLIST[[1 ;; 10]], Frame -> All]
You need to define a function like this
(*
We use StringQ in the variable pattern so that it takes only strings of city names.
Without the StringQ it will also work!
*)
MyDataCollecter[station_?StringQ] :=
Module[{dateRange, mean, cdd, hdd, country, location, population,
GDPPerCapita, reference = (65 - 32)/1.8, cumList},
country = CityData[station, "Country"];
population = CityData[station, "Population"];
location = CityData[station, "Coordinates"];
GDPPerCapita = CountryData[country, "GDPPerCapita"];
dateRange = {{2011, 1, 1}, {2011, 12, 31}, "Day"};
mean = WeatherData[station, "MeanTemperature", dateRange];
cdd = Join[Transpose[{mean[[All, 1]]}],
Transpose[{Max[# - reference, 0] & /@ mean[[All, 2]]}], 2];
hdd = Join[Transpose[{mean[[All, 1]]}],
Transpose[{Max[reference - #, 0] & /@ mean[[All, 2]]}], 2];
cumList =
Transpose[{Join[Transpose[{cdd[[All, 1]]}],
Transpose[{Drop[
FoldList[Plus, 0, cdd[[All, 2]] + hdd[[All, 2]]], 1]}], 2]}];
(* For the look *)
Grid[{ToString[#] & /@ {Station, Country, Coordinate, Population,
GDP, CumList}, {station, country, location, population,
GDPPerCapita, Grid[Last[cumList], Frame -> All]}}, Frame -> All,
ItemStyle -> "Subsection", Background -> {None, {Pink, Cyan}}]];
(* To make the function accept a list of city names *)
SetAttributes[MyDataCollecter, Listable];
Now we apply this function to a randomly chosen list of ten cities from your list of cities cityLIST.
MyDataCollecter@RandomChoice[cityLIST, 10] // TableForm
But I hope you will notice that for many cities there is no data in the Wolfram database, and that is why MMA can report an error called WeatherData::notent:. Here is an example.
WeatherData::notent: "\!\(\"\\\"Kumbi\\\"\"\) is not a known entity,
class, or tag for WeatherData. Use WeatherData[] for a list of entities."
I suggest that you look at the answers of this post on how to adapt your code when such an error occurs.
BR
AutoUpdate Chrome Extension GPO
I have a chrome extension published via GPO with the chrome policies: ExtensionInstallForcelist and ExtensionInstallSources.
I also have an updates.xml file which describes the .crx version, the appid, and the URL of the .crx to download.
The problem is that I forgot to add the "update_url" property to my manifest.json file, so auto-update cannot check the version of the extension.
Is it possible to force user's extension update without this property?
I think you cannot do it without the "update_url" property, based on this SO question. They ended up using the update_url to update their extension. Just check the other option of the user in the linked SO question to see if you get any ideas from it. For more information, check also the Autoupdating section of the Chrome extension documentation.
For people with this issue, you can simply try to
1) Remove the ExtensionInstallForcelist entry.
2) Then have the users run Chrome: this will delete the old extension.
3) Then set the ExtensionInstallForcelist entry again with the correct extension.
It requires a bit of coordination but it works.
Convert csv into json for API request
Basically my code converts a .csv file into .json and sends a request to an API. I think I successfully converted the JSON, but when I execute it, after a while I get a 422 error. I know converting csv to json is not the best practice, but I need a tabular structure where the end-user can write some information and send the request.
The only scenario the code works perfectly is when I hardcode the json.
Do you have any idea how to solve these issues? I'm running out of ideas!
import csv
import json
import requests
import pandas as pd
import tkinter as tk
from tkinter import filedialog
url_auth = 'https://api-gateway.com/token'
payload_auth = {'grant_type': 'client'}
username = ""
password = ""
response_auth = requests.post(url_auth, auth=(username, password), data=payload_auth)
data = response_auth.json()
url_prd = 'https://api-gateway.com/api/products'
headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ' + data['access_token'],
}
root = tk.Tk()
root.withdraw()
def csv_to_json(csvFilePath):
    jsonArray = []
    with open(csvFilePath, encoding='utf-8') as csvf:
        csvReader = csv.DictReader(csvf)
        for row in csvReader:
            jsonArray.append(json.dumps(row, indent=4))
    return jsonArray
csvFilePath = filedialog.askopenfilename()
payload = csv_to_json(csvFilePath)
for line in payload:
    response = requests.request("POST", url_prd, headers=headers, data=line)
    if response.status_code == 200:
        print('The imports are completed with success')
    else:
        print('The imports failed!')
And this is the content:
print(response)
<Response [422]>
print(response.content)
b'Reference is missing.'
The .csv file is like this:
And this is the json body in Postman:
{
    "Reference": "XXXXX2",
    "products": [{
        "active": true,
        "skuDistributor": "XX2-DRVDEUB_TEST",
        "designation": "Description filled",
        "smallDescription": "Description filled",
        "bigDescription": "Description filled",
        "price": "700",
        "publicprice": "800"
    }]
}
If I print(payload) the format is a little weird...
['{\n "\ufeffReference;
Your csv_to_json function is returning the JSON string and you are trying to iterate over it, which means that every line value in your payload is actually a single character, not a "request line".
You must change your function to something like this:
def csv_to_json(csvFilePath):
    jsonArray = []
    with open(csvFilePath, encoding='utf-8') as csvf:
        csvReader = csv.DictReader(csvf)
        for row in csvReader:
            jsonArray.append(json.dumps(row, indent=4))
    return jsonArray
That way you will iterate over the request lines like you want. And I think your request call needs to send the line, not the whole payload.
Thanks a lot! I fixed the question, however I'm getting response 422. b' Reference is missing.'
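One more thing worth checking, given the \ufeff visible in the printed payload: CSV files exported from Excel often begin with a UTF-8 BOM, and csv.DictReader then folds it into the first header, so the key becomes '\ufeffReference' rather than 'Reference', which would explain the API complaining that Reference is missing. Opening the file with encoding='utf-8-sig' strips the BOM. A small sketch, using an in-memory string instead of a file:

```python
import csv
import io
import json

# Simulate a CSV file saved with a UTF-8 BOM (as Excel does).
raw = "\ufeffReference,price\nXXXXX2,700\n"

# With plain utf-8 the BOM sticks to the first header:
rows_bad = list(csv.DictReader(io.StringIO(raw)))
print("Reference" in rows_bad[0])   # False: the key is '\ufeffReference'

# utf-8-sig on a real file (or lstrip here) removes it:
rows_ok = list(csv.DictReader(io.StringIO(raw.lstrip("\ufeff"))))
print(json.dumps(rows_ok[0]))       # {"Reference": "XXXXX2", "price": "700"}
```

For the real file, open(csvFilePath, encoding='utf-8-sig') has the same effect.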
Shell script running in cron with kubectl shows "kubectl: command not found", but it works when I manually execute the shell script
I have a shell script scheduled with a cronjob on my Linux machine. The script has a kubectl command which sets the context and scales up services based on a previous version file.
kubectl config set-context <AWSClusterARN> --namespace=<name>
kubectl get deploy --no-headers -n name | grep '^svc1\|^svc2' | awk '{print $1, $4}' > deploy_state_before_scale.txt
I have the output of the shell directed to a log file. When I checked it today, it shows
/path/filename.sh: line 19: kubectl: command not found
/path/filename.sh: line 21: kubectl: command not found
/path/filename.sh: line 23: kubectl: command not found
But when I run the file manually as ./filename.sh, the commands get executed.
Context "<AWSClusterARN>" modified.
Scaling down svc1
deployment.apps/svc1 scaled
You obviously forgot to set the PATH correctly in your script.
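A minimal way to see the difference (and the fix) is to simulate cron's stripped-down environment. The fake kubectl and temporary home directory below are stand-ins; the only line you actually need in the real script is the export PATH near the top, assuming kubectl lives in $HOME/bin as in the AWS guide:

```shell
#!/bin/sh
# cron starts scripts with a minimal PATH and never reads ~/.bash_profile,
# so a kubectl installed into $HOME/bin is not found.
demo_home=$(mktemp -d)                    # stand-in for the user's home
mkdir -p "$demo_home/bin"
printf '#!/bin/sh\necho kubectl-ok\n' > "$demo_home/bin/kubectl"  # fake kubectl
chmod +x "$demo_home/bin/kubectl"

# Without the fix: the cron-like PATH cannot resolve kubectl.
without_fix=$(env -i HOME="$demo_home" PATH=/nonexistent /bin/sh -c \
    'command -v kubectl || echo "kubectl: command not found"')

# With the fix: the script exports PATH itself before calling kubectl.
with_fix=$(env -i HOME="$demo_home" PATH=/nonexistent /bin/sh -c \
    'export PATH="$HOME/bin:$PATH"; kubectl')

echo "$without_fix"
echo "$with_fix"
rm -rf "$demo_home"
```

In the real filename.sh, either add export PATH="$HOME/bin:$PATH" near the top, or call kubectl by its absolute path.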
I ran these commands following https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bash_profile
And
echo $PATH
/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
Don't put code into a comment. It is hard to read. Put it into your question. Also your .bash_profile is irrelevant here. See the section INVOCATION in the bash man page. Put the PATH definition into the script which you are going to invoke.
Error when concatenating the square of each digit of a number
I'm doing a Codewars Challenge. I must square every digit of a number and concatenate them.
So, if we run 9119 through the function, 811181 will come out, because 9² is 81 and 1² is 1.
My code is below:
#include <math.h>
unsigned long long square_digits (unsigned n)
{
    // Count for digits
    int digits = log10(n) + 1, i = 0;
    // Array to store the split number
    int numDivided[digits];
    // Store concatenated numbers
    unsigned long long result = 0;

    if (n == 0)
    {
        return result;
    }

    // Split number and store their square
    while (n > 0)
    {
        numDivided[i] = pow(n % 10, 2);
        n /= 10;
        i++;
    }

    // Concatenated square of numbers
    for (i = digits - 1; i >= 0; i--)
    {
        if (numDivided[i] == 0)
        {
            result *= 10;
        }
        else
        {
            // Count digits of the current number
            digits = log10(numDivided[i]) + 1;
            // Multiply current result by 10^(digits)
            result *= pow(10, digits);
            // Add the current number to result
            result += numDivided[i];
        }
    }
    return result;
}
The test cases are below:
Test(test_suite, sample_tests)
{
    do_test(       3212u,                9414ull);
    do_test(       2112u,                4114ull);
    do_test(          0u,                   0ull);
    do_test(        999u,              818181ull);
    do_test(      10001u,               10001ull);
    do_test( 3210987654u,    9410816449362516ull);
    do_test( 3999999999u, 9818181818181818181ull); // :p
    do_test(    UINT_MAX,  164811681364948125ull);
}
The code works until the last two tests:
for n = 3999999999, expected 9818181818181818181, but got 9818181818181818449
I thought that the test case was wrong and checked whether the number was greater than ULLONG_MAX, but it's less, so that's all right.
What is my mistake at this point?
What is do_test? What does it do? Might it be possible to create a proper [mre] to show us instead (with a simple main function that calls your function with a hard-coded argument and displays the result)? You might want to refresh the help pages, take the SO [tour], read [ask], as well as this question checklist.
Also, have you tried to use a debugger to step through your function, while monitoring variables and their values, to see what happens and when things start to go wrong?
Lastly, you do know that pow is a floating point function, with all the possible problems that might entail?
do_test is the function provided to test the program. It takes an input as the first argument and compares the expected output with the second argument.
So, for input 3212u the expected output is 9414ull, for example.
What problems could I have with the pow function?
Is there any alternative?
Simple multiplication of the number with itself? numDivided[i] = (n % 10) * (n % 10)?
@Juan "To avoid the case of digit 0 I check at first if N == 0" --> the code does digits = log10(n) (bad) before if (n == 0) (too late).
What problems could I have with the pow function? Is there any alternative?
pow() typically returns, at best, a 53-significant-bit result, yet we have at least a 64-bit problem. powl() does not certainly help either, as it may be as limiting as pow(). Do not use floating point functions for an integer problem; it is the wrong approach.
digits = log10(n) + 1 and numDivided[i] = pow(n % 10, 2) are not specified to get the desired value (due to imprecision) and certainly fail when n == 0.
Simply extract the least significant decimal digit with % 10. Scale by an integer-type power of 10.
unsigned long long square_digits(unsigned n) {
    unsigned long long nn = 0;
    unsigned long long pow10 = 1;
    while (n) {
        unsigned digit = n % 10;
        unsigned square = digit * digit; // 0 to 81
        nn += square * pow10;
        if (square < 10) {
            pow10 *= 10;
        } else {
            pow10 *= 100;
        }
        n /= 10;
    }
    return nn;
}
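As a cross-check of the expected values, the same digit-squaring can be written in a few lines of Python, where integers are arbitrary-precision (this helper is just for verification, not part of the C solution):

```python
def square_digits(n: int) -> int:
    # Concatenate the squares of the decimal digits of n.
    return int("".join(str(int(d) ** 2) for d in str(n))) if n else 0

print(square_digits(9119))        # 811181
print(square_digits(3999999999))  # 9818181818181818181
print(square_digits(4294967295))  # 164811681364948125 (UINT_MAX)
```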
The main culprit is this line:
result *= pow(10, digits);
Given that result is an unsigned long long while the return value of pow is a double, result is first converted to a double, then multiplied by the power and finally the result is converted back to an unsigned long long.
The problem is that while a double type has a much greater range than an unsigned long long, its precision is limited and not all the possible integral values can be represented (stored) exactly.
In particular, you can check the result of the following line:
printf("%.0lf\n", 98181818181818181ull * 100.0); // It won't print 9818181818181818100
See @chux's answer for further details and a proper implementation of the solution.
I found two mistakes in your code.
log10() is not defined for 0, so you cannot run your code against test case 3.
Use powl() instead of pow().
PS: One change you can make to pass test case 3 is to check if N == 0 at the beginning of the function itself. The next statement should calculate the number of digits in N.
To avoid the case of digit 0, I check first if N == 0 and return result, which at this point in the program is 0, so test case 3 works fine.
In the other cases I check if the digit is 0 and, if it's true, multiply the result by 10 (to add the digit 0 to the result) and so avoid using the function log10().
I used powl instead of pow and everything works fine. Here's a reference for you.
https://en.cppreference.com/w/c/numeric/math/pow
powl() has the same trouble as pow() when long double and double have the same encoding.
Does jwplayer support playlist including youtube videos on mobile?
Ethan JWPlayer (https://stackoverflow.com/users/1645000/ethan-jwplayer) mentioned (here: jwplayer playlist is not working in mobile) that playlists are not supported with YouTube videos on mobile, but that support would come in the future.
Is it supported yet? If not what's the plan (ETA)?
Have you tried this code?
<html>
<head>
    <title>Test Page</title>
</head>
<body>
    <script type="text/javascript" src="http://player.longtailvideo.com/jwplayer.js"></script>
    <div id="player"></div>
    <script type="text/javascript">
        jwplayer("player").setup({
            file: "http://gdata.youtube.com/feeds/api/playlists/PL895D164BB097AA0F",
            flashplayer: "http://player.longtailvideo.com/player.swf",
            volume: 80,
            width: 465,
            stretching: 'fill'
        });
    </script>
</body>
</html>
The playlist url must be in this format:
http://gdata.youtube.com/feeds/api/playlists/PL895D164BB097AA0F
WooCommerce add custom email content based on payment method and shipping method
I'm trying to add different content to woocommerce completed order email notifications based on combinations of payment methods and shipping method.
My code so far:
// completed order email instructions
function my_completed_order_email_instructions( $order, $sent_to_admin, $plain_text, $email ) {
if (( get_post_meta($order->id, '_payment_method', true) == 'cod' ) && ( get_post_meta($order->id, '_shipping_method', true) == 'local pickup' )){
echo "something1";
}
elseif (( get_post_meta($order->id, '_payment_method', true) == 'bacs' ) && ( get_post_meta($order->id, '_shipping_method', true) == 'local pickup' )){
echo "something2";
}
else {
echo "something3";
}
}
The payment part works (I get the right "something1" to "something3" content) but if I add the && shipping condition, I get "something3" with every payment method.
Any idea what's wrong and how could I make it work?
Thanks
Code Revised (2023)
There are multiple mistakes in your code… Try the following instead:
add_action( 'woocommerce_email_order_details', 'my_completed_order_email_instructions', 10, 4 );
function my_completed_order_email_instructions( $order, $sent_to_admin, $plain_text, $email ) {
// Only for "Customer Completed Order" email notification
if( 'customer_completed_order' != $email->id ) return;
// Targeting Local pickup shipping method
if ( $order->has_shipping_method('local_pickup') ){
if ( 'cod' == $order->get_payment_method() ){
echo '<p>' . __("Custom text 1") . '</p>';
} elseif ( 'bacs' == $order->get_payment_method() ){
echo '<p>' . __("Custom text 2") . '</p>';
} else {
echo '<p>' . __("Custom text 3") . '</p>';
}
}
}
Code goes in functions.php file of your child theme (or in a plugin).
This code is tested and works with WooCommerce 3 and above.
Related:
How to get WooCommerce order details
Get orders shipping items details in WooCommerce 3
Thanks, and a follow-up question: I use the code as part of a custom plugin because I have to add different content to almost every type of customer email. I list the functions in the plugin and call them in the respective email templates in my child theme directly (I had to modify the templates anyway). I pasted the relevant part of your code into the plugin and it works. The other functions are based only on the payment method, and my code above seemed to work so far. My question is: is it safe to leave them as they are, or are there some "hidden traps", and should I better modify them based on your code?
@Anna If you are using WooCommerce 3.+ you should use $order->get_id() instead in get_post_meta() … If some part of your code works just use them as they are…
Generating terrain using Marching Cubes
I searched around the web but I found nothing that could help me, so I'm asking here.
I'm trying to procedurally generate terrain using the marching cubes algorithm, and I can generate a mesh. The problem is that the mesh may look like anything!
https://www.dropbox.com/s/w99lvynrfra2a5v/question.JPG
As you can see in that screenshot, everything is messed up.
Is this a problem with the noise function? I'm using a simple Perlin noise function. How can I achieve a result like this:
http://www.youtube.com/watch?v=-hJkrr5Slv8
If the problem is with the noise, what do I need to change to achieve this? I want to create natural terrain with hills, mountains, plains etc.
(I'm using Unity3D, but I don't think this makes any difference.)
I suggest scaling down the vertical component of the sample point before sampling the noise function.
The starting point of the voxel terrain in the video looks like a heightmap, so they may have multiplied the component by 0.
Also, you have to add the vertical component to the field function.
so your field function should look something like this:
const float noise_vertical_scale = 0.2;
const float field_vertical_scale = 0.01;
const float iso_surface = 0.5;
/*
* returns field at point (pos):
* negative = inside
* 0 = surface
* positive = outside
*/
float sampleField(vector3 pos)
{
vector3 sample_pos = pos;
sample_pos.y *= noise_vertical_scale;
return noise3D(sample_pos) + pos.y * field_vertical_scale - iso_surface;
}
Hope that helps.
It would help if you could explain some points to me x)
See, the noise that I'm using is a library that I downloaded from the asset store, and I don't actually know what "noise_vertical_scale" may be in that library... and what do you mean by "field_vertical_scale"?
If you want here's a link for a forum where you can get the library:
Unity Community Forum CoherentNoise Lib
Could you help me out? xD
Still I'll try to figure out what your code means x) Sorry for my ignorance....
Ok, it's already generating a better mesh with the code that you gave me. The only thing I had to do was change this last line
"noise3D(sample_pos) + pos.y * field_vertical_scale - iso_surface;"
in my code to fit what you told me, and it's already much better.
Thank you very much, I think I already understand what you said x)
still, if you could try to explain to me what exactly those two variables are, that would be great:
noise_vertical_scale
field_vertical_scale
xD
Yeah, sorry, I didn't explain very well. When taking the sample at a point in space, if you scale the sample point down, it is the same as scaling the noise up. For example, halving the height of the sample point will stretch the resulting noise shape by 2 in the vertical direction.
field_vertical_scale controls the influence the height has on the field function, so a large value will create a very flat terrain.
dude, you're awesome x)
thank you very very very much :)
and the best part with your approach is that I don't even need a heightmap to generate the terrain; it's working now. Man, you're incredible. I owe you one ;)
Again thank you very much.
My pleasure. I hope I can see your progress sometime
you bet, I already saved your nick so I can give you credits in the future, so as soon as I find a way to texture it properly, I'll show you the progress ;)
Hey Sammael, are you doing this on the CPU or GPU? I'm trying to figure this out on the GPU at the moment, curious to know if you ever cracked this and would be interested in giving me some pointers?
The good news: your Marching Cubes algorithm looks just fine to me! That 3d surface reconstruction looks gorgeous. If you're committed to a voxel-based approach with isosurface visualization, you're off to a fine start.
The problem is that 3d noise in this form really isn't suitable for use with a Marching Cubes-type algorithm for terrain. If you want to be able to do the sort of terrain modification that the demo you point to does, what I would suggest is to build the heightmap first and then build your volumetric data based on the heightmap.
Building the heightmap in the first place is relatively straightforward; I'd just use one of the classic fractal methods. You can use an approach like square-diamond midpoint displacement (probably the most common), but I would encourage a Fourier-based approach instead because it should allow you a lot more control over how 'rolling' your terrain is (and you can always 'postfilter' by applying a gamma-type effect, e.g. setting h(x,y) -> c*h(x,y)^3 or h(x,y) -> c*sqrt(h(x,y)), to make the terrain more or less jagged - if you look at the early articles on fractal terrain you'll see a lot of this).
Once you've got your heightmap data, there are a bunch of different procedures for converting that information into volume data. For simplicity's sake I'll assume that your heightmap data has been stored at higher resolution than your voxel data has; this makes a lot of sense, since a 3d array with the same XY resolution would (obviously) have to be a lot larger than the 2d heightmap. The easiest way to do it is as simple as
for (0 < vx < VOXEL_X_RES) {
for (0 < vy < VOXEL_Y_RES) {
pick the closest point (x,y) on the heightmap corresponding to (vx,vy)
(e.g. x = HEIGHTMAP_X_RES*vx/VOXEL_X_RES, etc)
find the z value of the heightmap at (x,y) (and convert it to a voxel-space value vz)
fill every value below (vx, vy, vz) with 1 to indicate it's in the terrain
fill every value above (vx, vy, vz) with 0
}
}
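A minimal C sketch of the first (binary fill) procedure above, assuming the heightmap stores normalized heights in [0,1]; all names and dimensions here are hypothetical.

```c
#include <stddef.h>

#define HM_X 256   /* heightmap resolution  */
#define HM_Y 256
#define VX 32      /* voxel grid resolution */
#define VY 32
#define VZ 32

/* heightmap[x][y] in [0,1]; voxels get 1 below the surface, 0 above. */
void fill_voxels(float heightmap[HM_X][HM_Y],
                 unsigned char voxels[VX][VY][VZ])
{
    for (size_t vx = 0; vx < VX; ++vx) {
        for (size_t vy = 0; vy < VY; ++vy) {
            /* closest heightmap pixel for this voxel column */
            size_t x = vx * HM_X / VX;
            size_t y = vy * HM_Y / VY;
            /* surface height converted to voxel units */
            size_t vzsurf = (size_t)(heightmap[x][y] * (VZ - 1));
            for (size_t vz = 0; vz < VZ; ++vz)
                voxels[vx][vy][vz] = (vz <= vzsurf) ? 1 : 0;
        }
    }
}
```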
Because this fills the voxel array with only 0s and 1s, it can mean that the marching cubes interpolation along edges looks a little too 'regular' and results in surfaces with too many identical slopes on them. Instead, you may want to 'subsample' the heightmap to build your voxel array, using something like:
for (0 < vx < VOXEL_X_RES) {
for (0 < vy < VOXEL_Y_RES) {
find all 'pixels' on the heightmap corresponding to (vx,vy)
for ( 0 < vz < VOXEL_Z_RES) {
compute how many of the pixels in the set above have z > vz
set the value of the voxel array at (vx, vy, vz) to the proportion of pixels with z > vz
}
}
}
The set of pixels on the heightmap for this second version can be anything from a straight subsampled grid (i.e., find hxmin = HEIGHTMAP_X_RES*vx/VOXEL_X_RES and hxmax = HEIGHTMAP_X_RES*(vx+1)/VOXEL_X_RES, similarly for hymin and hymax, and consider all pixels (hx, hy) with (hxmin <= hx < hxmax) and (hymin <= hy < hymax) ) to an 'overlapping samples' approach where you look at every heightmap-pixel within a circle centered around wherever (vx, vy) maps to on the heightmap; you could even subsample your heightmap and interpolate between heightmap pixels for finer detail.
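And a C sketch of the subsampled variant, where each voxel stores the fraction of its covered heightmap pixels lying above the voxel's height (again, names and dimensions are hypothetical):

```c
#include <stddef.h>

#define HM_X 256   /* heightmap resolution  */
#define HM_Y 256
#define VX 32      /* voxel grid resolution */
#define VY 32
#define VZ 32

/* voxels[vx][vy][vz] = fraction of covered heightmap pixels whose
   surface height (in voxel units) exceeds vz, giving smooth 0..1 values. */
void fill_voxels_soft(float heightmap[HM_X][HM_Y],
                      float voxels[VX][VY][VZ])
{
    for (size_t vx = 0; vx < VX; ++vx)
    for (size_t vy = 0; vy < VY; ++vy) {
        /* block of heightmap pixels covered by this voxel column */
        size_t x0 = vx * HM_X / VX, x1 = (vx + 1) * HM_X / VX;
        size_t y0 = vy * HM_Y / VY, y1 = (vy + 1) * HM_Y / VY;
        size_t total = (x1 - x0) * (y1 - y0);
        for (size_t vz = 0; vz < VZ; ++vz) {
            size_t above = 0;
            for (size_t x = x0; x < x1; ++x)
            for (size_t y = y0; y < y1; ++y)
                if (heightmap[x][y] * (VZ - 1) > (float)vz)
                    ++above;
            voxels[vx][vy][vz] = (float)above / (float)total;
        }
    }
}
```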
One major caveat with this whole approach is that fractal terrain isn't really very 'terrain-aware' - it knows height, but it doesn't know anything about features : it doesn't really know what a river is or the effect it has on the surrounding geography; it can't distinguish easily between 'old' and 'new' geography; etc. If you want truly realistic terrain then you should consider some of the more simulationist approaches - but these tend to directly conflict with the kind of terrain modifications that the linked video shows, so if realism is your goal then it may be worth rethinking the voxel approach entirely.
Nope, realism isn't the goal. I actually don't care if the terrain is realistic, I only want the effects, mountains and plains and so on x)
I already tried the heightmap approach with cubes, like Minecraft, and it generated good terrain. So I already thought about using the method that you told me; the problem is, the marching cubes algorithm is using float values, 'cause if you use bool values for voxels it becomes more cubic. So to me the problem with using heightmaps is exactly that: which value will I insert at the array index, 'cause it can't be only 1 or 0... Still, I'll try it. Thanks
About the marching cubes algorithm: credits aren't mine. I made some changes and some optimizations, but it was given to me by a guy who is actually helping me, whom I found in the Unity community, called scrawk. He's been great, so the credit is all his. But I agree with you, the surface is pretty cool x) I understand how the algorithm works, but in the beginning I had doubts about how to implement it, so scrawk gave me the algorithm so I could understand how it works, and finally I understood and was able to do it my own way, but most of the things came from his algorithm x)
As I knew, giving boolean values (1 or 0) to the voxels results in "cubic" terrain, kind of like Minecraft terrain, a bit better, but still "cubic". Thank you for your explanation, but it doesn't fit my problem. Still, thank you very much ;)
@Sammael That's true - all of your break points along edges will be at midpoints between cubes, and that can lead to some distinctive visual artifacting with long stretches of identical slopes. I'll add some notes to this answer in a bit to explain in more detail how to get non-binary values when you sample your terrain.
now, that is a great thing x) If you could that would help me too :)
Guys, you're being just great x)
I'm starting to implement your approach it's easy to understand, can't wait to see the results x) thank you very much
I'm having only one doubt. When you say z and y in 3D space, do you mean y = height and z = depth, or the opposite? Because in Unity y = height and z = depth, and it seems to me that you're using the first concept. I know that Blender uses the first one, so... I'm a bit confused x)
almost working. My problem is the "proportion of pixels with z > vz".
As I counted how many pixels exist in the set computed previously with z > vz, I'm assuming that this proportion may not be what I think it is... Can you explain what it is, please?
To calculate the proportion I divided the number of pixels in the set computed previously by the number of pixels with z > vz; is that right?
OK, I noticed another possible problem. The noise lib that I already have generates the heightmap for me. But it generates it as a Texture2D, so to calculate the height of a pixel I read the grayscale of that pixel and then multiply it by VOXEL_Y_RES (which is the voxel array height, which in this case is 32). Is that OK, or do I really need to implement my own function to generate the heightmap? I'm planning to do it, but for testing reasons I'm working with this one for now x)
Getting error in form_validation > set_rules > valid_email in codeigniter
In codeigniter framework, my code is:
$this->form_validation->set_rules('mobile', 'Mobile Number', 'trim|numeric|exact_length[10]');
$this->form_validation->set_rules('email', 'Email Address', 'trim|valid_email');
$this->form_validation->set_rules('address', 'Address' , 'trim');
if($this->form_validation->run() == TRUE){
$this->update_info();
}
Above code generate following error:
An uncaught Exception was encountered
Type: Error
Message: Call to undefined method CI_Form_validation::substr()
Filename: D:\xampp\htdocs\ci\system\libraries\Form_validation.php
Line Number: 1241
Just for checking, I've tried two things and the error went away, but these are not the right way, so I need the correct solution.
When I removed valid_email from set_rules, the code worked.
The code also works after commenting out line 1241 in the file where I got the error (system\libraries\Form_validation.php),
i.e.
1241: $str = self::substr($str, 0, ++$atpos).idn_to_ascii(self::substr($str, $atpos), 0, INTL_IDNA_VARIANT_UTS46);
Codeigniter Version: 3.1.11
PHP Version: 7.4.2
I suppose that's because you use trim and numeric at the same time
@Vickel: No, I've checked it by removing all trim from set_rules, still the same error. I think its about valid_email
Add the email helper:
$this->load->helper('email');
For email validation, set it this way:
$this->form_validation->set_rules('email', 'Email', 'trim|required|valid_email|xss_clean');
For mobile number validation:
$this->form_validation->set_rules('mobile', 'Mobile Number ', 'required|regex_match[/^[0-9]{10}$/]'); //{10} for 10 digits number
Nope! Not working. The same error still exists. :(
Comment out this line: $this->form_validation->set_rules('mobile', 'Mobile Number', 'trim|numeric|exact_length[10]'); then check my updated answer and try again.
wso2 identity server facebook login with Oauth2.0
Is it possible to use WSO2 Identity Server's Facebook login with OAuth 2.0, not SAML?
I searched several docs for this but I couldn't find a proper answer. Any ideas on how to do this?
WSO2 IS federates authentication with Facebook using OAuth 2.0. You can refer to the WSO2 official documentation for more details.
dereference pointer to an array
I have a basic doubt: when we dereference a pointer to an array, why do we get the name of the array instead of the value of the first member of the array? I see this has been asked here but I didn't get exactly how:
dereferencing pointer to integer array
main()
{
int var[10] = {1,2,3,4,5,6,7,8,9,10};
int (*ptr) [10] = &var;
printf("value = %u %u\n",*ptr,ptr); //both print 2359104. Shouldn't *ptr print 1?
}
So, is it like when we dereference a pointer, it cancels out the reference operator and what we get is the variable name, and in the case of a pointer to an array, it is the array name?
main()
{
int a = 10;
int *p = &a;
printf("%d\n", *p) /* p = &a; *p = *(&a); and having a dereference cancel out the &, gives us *p = a ? */
}
Try printing out **ptr; this should give you the element (1) rather than the address. It just happens that the value of ptr (the pointer to the array) is the same as the address of the first element of the array.
As an aside, use %p for printing a pointer; your current code is broken even though it seems to "work" for you.
Milan, "Shouldn't *ptr print 1" --> ptr is a pointer to an array, so *ptr is an array. An array is not the int 1.
@chux-ReinstateMonica, I am just confused how *ptr is decoded to an array, I understand ptr is a pointer to an array. how *(&a) gives an array instead of value 1. At address &a value stored is 1.
@Milan At address &a value stored is also the array (40 bytes long).
and that is what I failed to understand, how :(
@Milan Consider another case. If b was a pointer to a widget, would you expect *b to be a widget or an int?
it should be an integer value.
Let us continue this discussion in chat.
Because ptr has type int (*)[10] which is a pointer to an array, *ptr has type int[10] which is an array type.
This array decays to a pointer to its first element when it is passed to printf which then prints the pointer value. The result would be the same if you passed var to printf instead of *ptr.
dbush, if *ptr has type int[10] then *ptr+1 should point to next 1-D array .i.e. *ptr + 40 ?
@Milan No, that would be ptr + 1. With *ptr + 1, first you have *ptr which has array type, then that array decays to a pointer to the first member, then adding 1 results in a pointer to the second member of the array.
okay, one more follow-up question: for int a[10], a is an array of type int[10]. Is this how one should read it?
You do not get a variable name when you dereference a pointer. If the pointer points to an object then you get that object. If it does not point to an object then you get undefined behavior. In particular, if the pointer points to a whole array, then you get that array. That's a fundamental aspect of how pointers work.
However, a fundamental aspect of how arrays work is that in most contexts, an expression of array type is automatically converted to a pointer to the first array element. This address corresponds to the address of the array itself, but has different type (int * in your case, as opposed to int (*)[10]). Typically, converting these to an integer type produces the same value. Thus, in your example code, *ptr is equivalent to var, and each is automatically converted to a pointer of type int *, equivalent to &var[0].
But note also that it is not safe to convert pointers to integers by associating them with printf directives, such as %u, that expect a corresponding integer argument. The behavior is undefined, and in practice, surprising or confusing results can sometimes be observed. One prints a pointer value with printf by using a %p directive, and converting the pointer value to type void *. Example:
printf("pointer = %p %p\n", (void *)*ptr, (void *)ptr);
Combined with the type and value of ptr from your example, this can be expected to reliably print two copies of the same hexadecimal number.
Expected Length of Proof in a given Axiomatic System
Is there some sort of notion of the expected length of proof taken over the space of all theorems in an axiomatic system or something close to that in the far reaches of pure math? What type of math would study something like that?
Kolmogorov complexity?
Do you mean "accepted"?
No I don't, oops. Expected as in the statistical sense of the word. Like expected value.
The length of proofs is something that is studied by logicians, but not really in terms of probability in my very limited experience... See for instance https://en.wikipedia.org/wiki/G%C3%B6del%27s_speed-up_theorem
A numerical boundary conditions paradox
For $(t,z)\in[0,1]\times[-1,0]$
zmin = -1; tmax = 1;
and some fields $w(t,z)$ and $y(t,z)$
n = 100; h = -zmin/(n-1);
W[t_] = Table[w[i][t], {i, n}];
Y[t_] = Table[y[i][t], {i, n}];
let there be the following PDE's system
$$\partial_tw=\partial_zy+w\partial_zw$$
$$\partial_ty=\partial_zw+w\partial_zy$$
For the implementation of the Method of Lines derivatives $\partial_zw$ and $\partial_zy$ are numerically approximated as
Wz[t_] = Join[{(w[2][t] - w[1][t])/h},
Table[(w[i + 1][t] - w[i - 1][t])/(2h), {i, 2, n - 1}], {(
w[n][t] - w[n - 1][t])/h}];
and
Yz[t_] = Join[{(y[2][t] - y[1][t])/h},
Table[(y[i + 1][t] - y[i - 1][t])/(2h), {i, 2, n - 1}], {(
y[n][t] - y[n - 1][t])/h}];
Notice that the above derivative formulas change for $i=1$ (i.e. $z=-1$) and $i=100$ (i.e. $z=0$). This is a way to handle the fact that numerical integration for $z$ is confined in $[-1,0]$ and does not imply any boundary condition.
Then the above PDE's system can be written as
wall[t_] = Yz[t] + W[t]*Wz[t];
eqw = Thread[
D[W[t], t] == wall[t] - PadLeft[{ wall[t][[n]]}, n]];
eqy = Thread[D[Y[t], t] == Wz[t] + W[t]*Yz[t]];
The only boundary condition that is implied by the above equations is that
$$w(t,0)=0$$ and this is the reason for the cumbersome statement of the dynamical equation for $w$ ( further explanation: press ctrl+F and type "here is the answer for point 3").
The boundary condition is accompanied by the following initial conditions
w0[z_] = -0.01*Sin[z*Pi]^2;
y0[z_] = 1;
initw = Thread[W[0] == Table[w0[zmin + (i-1)*h], {i, n}]];
inity = Thread[Y[0] == Table[y0[zmin + (i-1)*h], {i, n}]];
and then NDSolve is called to implement the method of lines
lines = NDSolveValue[{eqw, eqy, initw, inity}, {W[t], Y[t]}, {t, 0,
tmax}];
So there arise the following questions:
Except for $w(t,0)=0$, is there any other boundary condition implicit in the finite difference equations? If so, which? If not, why does the code run? The problem seems underdetermined.
Can one call the Method of Lines as an internal routine so as to increase the accuracy of the above code?
I am working on these but would appreciate any help.
h = -zmin/n should be h = -zmin/(n-1). 2. /h, {i, 2, n - 1}] should be /(2h), {i, 2, n - 1}]. 3. What's the meaning of PadLeft[{ wall[t][[n]]}, n]? If you think this will impose $w(t,0)=0$, you're wrong.
I believe that boundary conditions are needed for both w and y. I am unaware of any way to call Method of Lines as internal routine, but you could use NDSolve directly to solve the coupled PDEs.
@xzczd here is the answer for point 3. PadLeft[{ wall[t][[n]]}, n] creates a list with n elements all of which is zero except the nth one that equals wall[t][[n]] . As a result the subtraction results in a list of n elements the first n-1 equal to Yz[t] + W[t]*Wz[t] and the last one equal to zero. This means that $\partial_tw(t,0)=0$ and results in $w(t,0)=w(0,0)=\sin(0)^2=0$.
@xzczd I edited my question according to point 2.
@bbgodfrey boundary condition about y concerns its value or could be stated in terms of its z-derivative? Also can you see why the code runs I mean if there is an implicit boundary condition in my code?
@xzczd concerning point 1. the choice h=-zmin/n results in a $z$-grid of $n$ points the $i=1$ point being $z=zmin+h$ and the $i=n$ point being $z=0$. So spatial integration is done for $z\in[-1+h,0]$ and not $[-1,0]$. I think this is not essential to the question but I will fix the code as soon as possible.
Oh… as to point3, you're right, I forgot about the i.c., but I think it's better to explain this a bit in the question. @bbgodfrey Have you read this post?: https://scicomp.stackexchange.com/q/28033/5331
@xzczd what do you think of point 1.? I mean is there any difference between h = -zmin/n and h = -zmin/(n-1) other than adding $h$ to the left end of $[-1,0]$?
There won't be any obvious difference when the grid is dense enough, of course. But it's just better to avoid this inaccuracy.
@xzczd I think it is ok now, what do you think?
@xzczd Thanks for the link, which I had not seen.
The solution you've observed is an artifact of the 1st order one-sided difference formula
$$f' (x_n)\simeq \frac{- f (x_{n}-h)+ f (x_n)}{ h}$$
for approximating the PDE at the boundary. This can be confirmed by replacing it with 2nd order one-sided formula
$$f' (x_n)\simeq \frac{f (x_{n}-2h)-4 f (x_{n}-h)+3 f (x_n)}{2 h}$$
If you're not familiar with one-sided formula, start from page 6 of this book.
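For reference, the second order formula can be derived by combining Taylor expansions of $f(x_n-h)$ and $f(x_n-2h)$ (a standard derivation, sketched here for convenience):
$$f(x_n-h)=f(x_n)-hf'(x_n)+\tfrac{h^2}{2}f''(x_n)+O(h^3),\qquad f(x_n-2h)=f(x_n)-2hf'(x_n)+2h^2f''(x_n)+O(h^3).$$
Taking $4$ times the first expansion minus the second eliminates the $f''$ terms:
$$4f(x_n-h)-f(x_n-2h)=3f(x_n)-2hf'(x_n)+O(h^3),$$
which rearranges to the formula above with an $O(h^2)$ truncation error.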
zmin = -1; tmax = 1;
n = 100; h = -zmin/(n - 1);
W[t_] = Table[w[i][t], {i, n}];
Y[t_] = Table[y[i][t], {i, n}];
help[var_] := With[{w = var}, Join[{-{1, -4, 3}.{w[3][t], w[2][t], w[1][t]}/(2 h)},
Table[(w[i + 1][t] - w[i - 1][t])/(2 h), {i, 2,
n - 1}], {{1, -4, 3}.{w[n - 2][t], w[n - 1][t], w[n][t]}/(2 h)}]]
Wz[t_] = help@w ;
Yz[t_] = help@y;
wall[t_] = Yz[t] + W[t]*Wz[t];
eqw = Thread[
D[W[t], t] == wall[t] - PadLeft[{ wall[t][[n]]}, n]];
eqy = Thread[D[Y[t], t] == Wz[t] + W[t]*Yz[t]];
w0[z_] = -0.01 Sin[z π]^2;
y0[z_] = 1;
initw = Thread[W[0] == Table[w0[zmin + (i - 1)*h], {i, n}]];
inity = Thread[Y[0] == Table[y0[zmin + (i - 1)*h], {i, n}]];
lines = NDSolveValue[{eqw, eqy, initw, inity}, {W[t], Y[t]}, {t, 0,
tmax}];
{testw, testy} =
ListInterpolation[
Developer`ToPackedArray@#[[0]]["ValuesOnGrid"] & /@ # //
Transpose, {#[[1, 0]]["Coordinates"][[1]], Array[# &, n, {zmin, 0}]}] & /@ lines
Plot3D[testw[t, z], {t, 0, tmax}, {z, zmin, 0}, AxesLabel -> {t, z, f}]
But if we use 1st order one-sided formula instead:
help[var_] := With[{w = var}, Join[{(w[2][t] - w[1][t])/h},
Table[(w[i + 1][t] - w[i - 1][t])/(2 h), {i, 2, n - 1}], {(
w[n][t] - w[n - 1][t])/h}]]
The solution will be
The difference is obvious i.e. the solution depends on how we approximate the differential term at the boundary!
Further check by varying n shows both solutions are stable. This behavior never shows up when b.c. is enough AFAIK. For example, when dealing with the initial-boundary value problem
tend = 1/10; xl = 0; xr = 1;
With[{u = u[t, x]}, eq = D[u, t] == D[u, x, x];
ic = u == Exp[-100 (x - (xl + xr)/2)^2] /. t -> 0;
bc = {u == 0 /. x -> xl, D[u, x] == 0 /. x -> xr};]
sol = NDSolveValue[{eq, ic, bc}, u, {t, 0, tend}, {x, xl, xr}]
Both 1st and 2nd order approximation for the b.c. lead to the same solution, when the grid is dense enough:
Clear@dx
formula = eq /. {D[u[t, x], t] -> u[x]'[t],
D[u[t, x], x, x] -> (u[x - dx][t] - 2 u[x][t] + u[x + dx][t])/dx^2};
points = 25;
dx = (xr - xl)/(points - 1);
ode = Table[formula, {x, xl + dx, xr - dx, dx}];
odeic = Table[ic /. u[t_, x_] :> u[x][t] // Evaluate, {x, xl, xr, dx}];
bcnew1 = bc[[1]] /. u[t_, x_] :> u[x][t];
bcnew2 = bc[[2]] /.
D[u[t, x_], x_] :> (u[x - 2 dx][t] - 4 u[x - dx][t] + 3 u[x][t])/(2 dx);
bcnew3 = bc[[2]] /. D[u[t, x_], x_] :> (- u[x - dx][t] + u[x][t])/(dx);
mid[bc_] := (sollst =
NDSolveValue[{ode, odeic, bcnew1, bc},
u /@ Array[# &, points, {xl, xr}], {t, 0, tend}];
ListInterpolation[
Developer`ToPackedArray@#["ValuesOnGrid"] & /@ # //
Transpose, {#[[1]]["Coordinates"][[1]], Array[# &, points, {xl, xr}]}] &@sollst)
soltest1 = mid[bcnew2];
soltest2 = mid[bcnew3];
Manipulate[Plot[{soltest1[t, x], soltest2[t, x]}, {x, xl, xr},
PlotStyle -> {Automatic, {Thick, Dashed}}, PlotRange -> {0, 2}], {t, 0, tend}]
OK, then how to explain this behavior? Is the difference formula actually equivalent to a hidden b.c.? This is exactly what I've asked in this and this post, but sadly nobody has found a satisfactory answer so far.
xzczd can you help me with this question?
xzczd, what do you think of this post? It may lead to a general method for first order IBVP problems that mix spatial and temporal derivatives.
@bbgodfrey commented above that boundary conditions (b.c.) are needed for both $y$ and $w$. These b.c. can be time-independent, i.e. of the form $y_b=const.$ and $w_b=const.$ But it is reasonable that b.c. could be time-dependent as well, i.e. of the form $y_b=y_b(t)$ and $w_b=w_b(t)$. Functions $y_b(t)$ and $w_b(t)$ can have an analytic expression, but it is conceivable that this won't always be the case. They could be solutions of some differential equation that does not have any analytic solution. And in particular, and I think here comes the answer, functions $y_b(t)$ and $w_b(t)$ could be solutions of the given PDE system.
In short I think that the code I posted gives a solution that by construction is characterised by b.c. $y(t,-1), y(t,0), w(t,-1)$ that are solutions of the PDE's system in question. These are complemented by b.c. $w(t,0)=0$ and together with initial conditions (i.c.) lead to a unique solution-the one to which 1st and 2nd order approximation mentioned by @xzczd converge as grid density increases.
I am not a mathematician so I cannot be entirely sure that my answer is correct, though it is apparently reasonable. An expert's confirmation would be important.
Also, if the answer is correct, the question arises of whether one can call the method of lines as an internal routine and therefore avoid explicit discretization of the problem.
"In short I think that the code I posted gives a solution that by construction is characterised by b.c. $y(t,-1), y(t,0), w(t,-1)$ that are solutions of the PDE's system in question." Notice that given the highest differential order of $w$ and $y$ respect to $z$ is $1$, usually you only need $2$ boundary conditions. (Related: https://math.stackexchange.com/q/450367/58219) There exist more involved cases e.g. Inviscid Burgers equation though.
Ok, then forget about $y(t,-1)$ and $w(t,-1)$. My point is that the solution that occurs is not an artifact but a fair and square solution with b.c. that are time-dependent, without any analytic expression, and which by construction are numerical solutions of the above PDE system.
Notice I've never denied that lines etc. are solutions for the PDE. I use the word "artifact" because those solutions aren't determined before a certain one-sided formula is chosen.
Yes but you have proven that any one-sided formula in the limit of very dense grid points to the same solution.
So we can reverse our logic and say: yes, b.c. can be time-dependent. Yes, time-dependent b.c. may be solutions of equations that do not have analytical solutions. And yes, time-dependent b.c. can be solutions of the specific PDE system we try to solve.
And then ask: how can we tell Mathematica to integrate under such b.c.?
Simple answer: using the one-sided derivative formula. Ask: which formula should I use? Say: all formulas point to the same answer in the limit of a continuous grid.
The solution thus obtained is neither trivial nor artificial. It is a certain solution with certain b.c.
"Yes but you have proven that any one-sided formula in the limit of very dense grid points to the same solution." Only if the b.c. are enough. When the b.c. are not enough, like in your case, different one-sided formulas lead to different solutions.
Deduce that $\mathbb E(X^3)=1^3+2^3+3^3+4^3+5^3+6^3$
A fair die is tossed and let the random variable $X$ be the number that appears.
Deduce that
$$
\mathbb E(X^3)=\frac{1^3+2^3+3^3+4^3+5^3+6^3}6.
$$
First of all, I would like to know the probability distribution of this random variable $X$.
We know what $X$ is, but what is $X_3$?
I edited X3 into $X_3$, maybe you intended $3X$?
You really need to indicate what does your notation mean. First of all, we have no means to understand what does "13" stands for. We have even less understanding of what it means to "add" the things "xy" and "wz" by naming it "xy + wz". Perhaps it makes sense to you, and I'm sure it is not a very fancy thing (dice problems usually aren't very fancy), so just a little more detail would make your question more easy to answer. =)
But comments apart, welcome to MSE =)
sorry, it is x cubed
$E(X^3)=\sum x^3 f(x)$ for $1\le x\le 6$. Here, since it is a fair die, should I put $f(x)=1/6$ for all $x$ in $[1,6]$?
If you want to cube things you can just add a "^" character in between the two things. I'll edit your question so that you can look at it. Where it says "edited n mins ago by", you can click on the "mins ago" link to see what was edited.
$X$ takes the values $1$, $2$, $3$, $4$, $5$, and $6$ (assuming a six-sided die). Since the die is fair, outcomes are equally likely; so $X$ takes the value $i$ with probability $1/6$ for each $i=1, 2, 3, 4, 5, 6$.
The probability distribution of $X$ is therefore
$$
p_X(i)=\textstyle{1\over 6},\quad i=1, 2, \ldots, 6.
$$
As a warm up to your problem, let's find $\Bbb E(X)$. The expected value of $X$ is
$$
\Bbb E(X)=\sum_{i=1}^6 \,i\, p_X(i)= 1\cdot {1\over6}+2\cdot {1\over6}+3\cdot {1\over6}+4\cdot {1\over6}
+5\cdot {1\over6}+6 \cdot {1\over6}=3.5.
$$
To find the expected value of a function of $X$, you could use the following fact:
Fact:
For a discrete variable $X$ that takes the values $x_1, x_2,\ldots,x_n$,
the expectation of a function $h(X)$ of $X$ is
$$\Bbb E\bigl(h(X)\bigr) =\sum_{i=1}^n h(x_i) p_X(x_i).$$
To find $\Bbb E(X^3)$, we apply the above fact with $h(x)=x^3$:
$$
\Bbb E(X^3)=\sum_{i=1}^6 \, i^3\, p_X(i)= 1^3\cdot {1\over6}+2^3\cdot {1\over6}+3^3\cdot {1\over6}+4^3\cdot {1\over6}
+5^3\cdot {1\over6}+6^3 \cdot {1\over6} .
$$
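As a numerical sanity check of the two computations above (a quick sketch; the `expectation` helper is just an illustrative name):

```javascript
// E[h(X)] for a fair six-sided die: sum h(i) over i = 1..6, then divide by 6
const faces = [1, 2, 3, 4, 5, 6];
const expectation = (h) => faces.reduce((sum, i) => sum + h(i), 0) / 6;

console.log(expectation((x) => x));      // E(X)   = 3.5
console.log(expectation((x) => x ** 3)); // E(X^3) = 441/6 = 73.5
```

The same helper evaluates the expectation of any function of the die, which is exactly the Fact quoted in the answer.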
Thanks David, I realize the question was wrongly set.
Console not opening after weblogic server 10.3.6 installation in UNIX server
I installed WebLogic Server 10.3.6 on a remote Unix server using PuTTY. According to the logs the server has started and is in running mode, but when I try to open the console in a browser it shows a "server not found" page. Please help me with this issue.
LOGS
The system is vulnerable to security attacks, since it trusts certificates signed by the demo trusted CA.
< Jul 14, 2015 6:20:35 AM EDT> < Notice> < Security> < BEA-090898> <
Ignoring the trusted CA certificate "CN=thawte Primary Root CA -
G3,OU=(c) 2008 thawte\, Inc. - For authorized use
only,OU=Certification Services Division,O=thawte\, Inc.,C=US". The
loading of the trusted certificate list raised a certificate parsing
exception PKIX: Unsupported OID in the AlgorithmIdentifier object:
1.2.840.1135<IP_ADDRESS>.> < Jul 14, 2015 6:20:35 AM EDT> < Notice> < Security> < BEA-090898> < Ignoring the trusted CA certificate
"CN=T-TeleSec GlobalRoot Class 3,OU=T-Systems Trust Center,O=T-Systems
Enterprise Services GmbH,C=DE". The loading of the trusted certificate
list raised a certificate parsing exception PKIX: Unsupported OID in
the AlgorithmIdentifier object: 1.2.840.1135<IP_ADDRESS>.> < Jul 14,
2015 6:20:35 AM EDT> < Notice> < Security> < BEA-090898> < Ignoring
the trusted CA certificate "CN=T-TeleSec GlobalRoot Class
2,OU=T-Systems Trust Center,O=T-Systems Enterprise Services
GmbH,C=DE". The loading of the trusted certificate list raised a
certificate parsing exception PKIX: Unsupported OID in the
AlgorithmIdentifier object: 1.2.840.1135<IP_ADDRESS>.> < Jul 14, 2015
6:20:35 AM EDT> < Notice> < Security> < BEA-090898> < Ignoring the
trusted CA certificate "CN=GlobalSign,O=GlobalSign,OU=GlobalSign Root
CA - R3". The loading of the trusted certificate list raised a
certificate parsing exception PKIX: Unsupported OID in the
AlgorithmIdentifier object: 1.2.840.1135<IP_ADDRESS>.> < Jul 14, 2015
6:20:36 AM EDT> < Notice> < Security> < BEA-090898> < Ignoring the
trusted CA certificate "OU=Security Communication RootCA2,O=SECOM
Trust Systems CO.\,LTD.,C=JP". The loading of the trusted certificate
list raised a certificate parsing exception PKIX: Unsupported OID in
the AlgorithmIdentifier object: 1.2.840.1135<IP_ADDRESS>.> < Jul 14,
2015 6:20:36 AM EDT> < Notice> < Security> < BEA-090898> < Ignoring
the trusted CA certificate "CN=VeriSign Universal Root Certification
Authority,OU=(c) 2008 VeriSign\, Inc. - For authorized use
only,OU=VeriSign Trust Network,O=VeriSign\, Inc.,C=US". The loading of
the trusted certificate list raised a certificate parsing exception
PKIX: Unsupported OID in the AlgorithmIdentifier object:
1.2.840.1135<IP_ADDRESS>.> < Jul 14, 2015 6:20:36 AM EDT> < Notice> < Security> < BEA-090898> < Ignoring the trusted CA certificate
"CN=KEYNECTIS ROOT CA,OU=ROOT,O=KEYNECTIS,C=FR". The loading of the
trusted certificate list raised a certificate parsing exception PKIX:
Unsupported OID in the AlgorithmIdentifier object:
1.2.840.1135<IP_ADDRESS>.> < Jul 14, 2015 6:20:36 AM EDT> < Notice> < Security> < BEA-090898> < Ignoring the trusted CA certificate
"CN=GeoTrust Primary Certification Authority - G3,OU=(c) 2008 GeoTrust
Inc. - For authorized use only,O=GeoTrust Inc.,C=US". The loading of
the trusted certificate list raised a certificate parsing exception
PKIX: Unsupported OID in the AlgorithmIdentifier object:
1.2.840.1135<IP_ADDRESS>.> < Jul 14, 2015 6:20:36 AM EDT> < Notice> < Server> < BEA-002613> < Channel "Default" is now listening on
<IP_ADDRESS>:8006 for protocols iiop, t3, ldap, snmp, http.> < Jul 14, 2015 6:20:36 AM EDT> < Notice> < WebLogicServer> < BEA-000329> <
Started WebLogic Admin Server "AdminServer" for domain "WLS5" running
in Production Mode> < Jul 14, 2015 6:20:36 AM EDT> < Error> < Server>
< BEA-002606> < Unable to create a server socket for listening on
channel "DefaultSecure". The address <IP_ADDRESS> might be incorrect
or another process is using port 7002: java.net.BindException: Address
already in use.> < Jul 14, 2015 6:20:36 AM EDT> < Notice> <
WebLogicServer> < BEA-000365> < Server state changed to RUNNING> <
Jul 14, 2015 6:20:36 AM EDT> < Notice> < WebLogicServer> < BEA-000360>
< Server started in RUNNING mode>
What port number are you using?
I gave 8006 as port number
Generic endpoint for REST resources in Perl Catalyst
I have several REST resource endpoints defined, such as /user, /group, and /event, as separate controllers. They all inherit from a root controller (App::Web::Controller::Root). Is it possible to create a generic endpoint for all these resources within the root controller that is able to identify the resource type?
My main use-case is .../list, which I'd like to define generically, which would identify its parent resource and return an array of resource entities. For instance,
/user/list # Array list of user entities
/group/list # Array list of group entities
/event/list # Array list of event entities
I can easily create an action that inverts the resources (e.g., /list/event is naturally handled by sub list_GET).
Thanks!
An approach I'm using for identifying the resource is to have each resource controller define its resource in the stash, and then have the generic action reference that stash value.
For example, in each of the resource controllers:
package App::Web::Controller::User;
sub begin :Auto {
my ($self, $c) = @_;
$c->stash(resource => 'User');
}
Then, in the root controller:
package App::Web::Controller::Root;
sub list :Path('list') :ActionClass('REST') {}
sub list_GET {
my ($self, $c) = @_;
my $resource = $c->stash->{resource};
return $self->status_ok($c, entity => {
list => [ $c->model('App::' . $resource)->find->all ]
});
}
I am not very happy with this because it's not generic enough, as it requires every controller to define its resource.
Calculating keno odds?
In keno, the casino picks 20 balls from a set of 80 numbered 1 to 80. Before the draw is over, you are allowed to choose 10 balls. What is the probability that 5 of the balls you choose will be in the 20 balls selected by the casino?
My attempt: The total number of combinations for the 20 balls is $\binom{80}{20}$. However, I get stuck at the numerator. I thought it would be $\binom{80}{10}\binom{10}5$ but that's wrong.
Thanks.
To choose exactly 5 balls from the 10 you picked, they must choose 5 from the 10 you picked, and 15 from the 70 you didn't pick. But I'm not sure they mean "exactly five." I think it's more likely they mean "at least five."
Another way to think about this is to realize that the casino must choose $5$ balls from the $10$ that you chose and $15$ balls from the $70$ that you didn't choose. So:
$$P = \frac{\binom{10}{5} \binom{70}{15}}{\binom{80}{20}} \approx 0.0514\ldots$$
Without loss of generality, assume the casino picks balls 1 to 20. Then for the stated scenario to happen:
Five of your picks are within $[1,20]$: $\binom{20}5$ ways
The other five are within $[21,80]$: $\binom{60}5$ ways
There are $\binom{80}{10}$ picks altogether, so the probability that five balls match is
$$\frac{\binom{20}5\binom{60}5}{\binom{80}{10}}=0.0514\dots$$
I would have sworn this was backwards -- that you had to do it as in bames's solution, but it's correct. Excellent.
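For what it's worth, both counting arguments can be confirmed numerically. A small sketch with a hand-rolled binomial coefficient (floating point, so the huge $\binom{80}{20}$ is only approximate, but easily good to four decimals):

```javascript
// n choose k, built multiplicatively: C(n, k) = product over i = 1..k of (n-k+i)/i
function comb(n, k) {
  let r = 1;
  for (let i = 1; i <= k; i++) r = (r * (n - k + i)) / i;
  return r;
}

// Casino's view: choose 5 of your 10 balls and 15 of the other 70
const p1 = (comb(10, 5) * comb(70, 15)) / comb(80, 20);
// Player's view: 5 of your picks among the 20 drawn, 5 among the 60 not drawn
const p2 = (comb(20, 5) * comb(60, 5)) / comb(80, 10);

console.log(p1.toFixed(4)); // 0.0514
console.log(p2.toFixed(4)); // 0.0514
```

That the two formulas agree is the hypergeometric symmetry the last comment is surprised by: it does not matter whether you condition on the casino's draw or on the player's picks.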
WPF DataGrid Grouping with sums and other fields
I have a DataGrid that is bound to collection and that I want to be grouped. Here is the code
Collection:
private string _ID;
private string _Descript;
private decimal _Amount;
public string ID
{
get { return _ID; }
set { _ID = value; NotifyPropertyChanged("ID"); }
}
public decimal Amount
{
get { return _Amount; }
set { _Amount = value; NotifyPropertyChanged("Amount"); }
}
public string Descript
{
get { return _Descript; }
set { _Descript = value; NotifyPropertyChanged("Descript"); }
}
C#:
ListCollectionView groupcollection = new ListCollectionView(myCollection);
groupcollection.GroupDescriptions.Add(new PropertyGroupDescription("ID"));
myDataGrid.ItemsSource = groupcollection;
XAML:
<DataGrid Name="myDataGrid">
<DataGrid.GroupStyle>
<GroupStyle>
<GroupStyle.HeaderTemplate>
<DataTemplate>
<StackPanel>
<TextBlock Text="{Binding Path=Name}" />
</StackPanel>
</DataTemplate>
</GroupStyle.HeaderTemplate>
<GroupStyle.ContainerStyle>
<Style TargetType="{x:Type GroupItem}">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="{x:Type GroupItem}">
<Expander>
<Expander.Header>
<StackPanel Orientation="Horizontal">
<TextBlock Text="{Binding Path=Name}" Margin="5"/>
<TextBlock Text="Count" Margin="5" />
<TextBlock Text="{Binding Path=ItemCount}" Margin="5"/>
</StackPanel>
</Expander.Header>
<ItemsPresenter />
</Expander>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</GroupStyle.ContainerStyle>
</GroupStyle>
</DataGrid.GroupStyle>
This works perfectly, but now in the Expander.Header I want to add a summary of the "Amount" and "Descript" values. So, for example, if there were 3 records in the collection with ID "ABC", each one being 20, and the description for ABC being "My Count", I would want to see:
ABC My Count total 60
How would I do that?
You could use a converter that's passed the Items property of the group header e.g.
<Window.Resources>
<local:GroupsToTotalConverter x:Key="groupsConverter" />
</Window.Resources>
<Expander.Header>
<StackPanel Orientation="Horizontal">
<TextBlock Text="{Binding Path=Name}" Margin="5"/>
<TextBlock Text="total" Margin="5" />
<TextBlock Text="{Binding Path=Items, Converter={StaticResource groupsConverter}}" Margin="5" />
</StackPanel>
where the converter performs the calculation and passes back the total as the string for the text block:
public class GroupsToTotalConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
if (value is ReadOnlyObservableCollection<Object>)
{
var items = (ReadOnlyObservableCollection<Object>)value;
Decimal total = 0;
foreach (GroupItem gi in items)
{
total += gi.Amount;
}
return total.ToString();
}
return "";
}
public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
return value;
}
}
As for the description I'd suggest also grouping by that, and writing another converter to pull out the description from the Items in a similar manner to above.
According to the debugger, the items variable contains your domain entries, so gi won't be of type GroupItem but will more likely be of your domain type, I think. This code as-is gave me a System.InvalidCastException on init.
Modify global variable in Javascript inside geocode
I'm getting crazy with this.
geocoder.geocode( { 'address': address}, function(results, status) {
if (status == google.maps.GeocoderStatus.OK) {
latitude = results[0].geometry.location.lat();
longitude = results[0].geometry.location.lng();
locations[j][0] = direcciones[j]['1'];
locations[j][1] = latitude;
locations[j][2] = longitude;
locations[j][3] = direcciones[j]['10'];
j++;
}
});
If I do an alert of locations[0][0] inside the geocode callback, it works fine, but if I do it outside, I get the previous value, because I am not modifying the global locations variable...
Could someone help me change that variable correctly?
...but if I do it outside, I get the previous value, because I am not modifying the global locations variable...
Yes, it is, it's just doing it later. The call to geocode is asynchronous, so you won't see the result until the callback is made. Code immediately after the geocode function call will run before the callback runs, and so you won't see any change.
Let's use a simpler example for illustration:
// A variable we'll change
var x = 1;
// Do something asynchronous; we'll use `setTimeout` but `geocode` is asynchronous as well
setTimeout(function() {
// Change the value
x = 2;
console.log(Date.now() + ": x = " + x + " (in callback)");
}, 10);
console.log(Date.now() + ": x = " + x + " (immediately after setTimeout call)");
If you run that (fiddle), you'll see something like this:
1400063937865: x = 1 (immediately after setTimeout call)
1400063937915: x = 2 (in callback)
Note what happened first.
Thank you for your answer. I'll make a jQuery function and use the success callback. It is the only method I know.
Thanks!
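To make the accepted point concrete: the usual fix is to move everything that depends on the result into (or call it from) the callback. A self-contained sketch, with setTimeout standing in for the real geocode call (the function name and coordinates are made up):

```javascript
// fakeGeocode stands in for geocoder.geocode; both invoke a callback later
function fakeGeocode(address, callback) {
  setTimeout(function () {
    callback({ lat: 40.4, lng: -3.7 }); // made-up coordinates
  }, 10);
}

var locations = [];

fakeGeocode("Madrid", function (result) {
  locations.push([result.lat, result.lng]);
  // Only from this point on is the global actually populated
  console.log("in callback:", locations.length); // 1
});

// This line runs first, before the callback fires
console.log("right after the call:", locations.length); // 0
```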
Forward Windows 2012 event logs from workgroup host to domain host
Is this possible? I've gone through the following but no events are forwarded.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc748890(v=ws.11)
I have a DMZ server that I want to forward logs from to another server in my domain.
This is to satisfy PCI logging requirements so if there's a better way, please let me know.
Have you successfully setup event forwarding from other servers that are not in the DMZ?
Yes, this is possible when the Event Forwarding client and Event Collector server have certificates installed for authentication. Check out the 'Setting up a source initiated subscription where the event sources are not in the same domain as the event collector computer' section of this page.
JPanel Positioning using the Border Layout not working
I am trying to set up my JPanel's position on the right using i.add(jp, BorderLayout.EAST);, but it is not working. Any ideas why? Thanks in advance for the help.
/* INSTANCE DECLARATIONS */
private JTextField tf;//text field instance variable
private JLabel jl2;//label instance variable
/*****************
* WINDOW METHOD *
* ***************/
public void window() {
LoadImageApp i = new LoadImageApp();//calling image class
JFrame gameFrame = new JFrame();//declaration
JPanel jp = new JPanel();
JLabel jl = new JLabel("Enter a Letter:");//prompt with label
tf = new JTextField(1);//length of text field by character
jl2 = new JLabel("Letters Used: ");
jp.add(jl);//add label to panel
jp.add(tf);//add text field to panel
jp.add(jl2);//add letters used
gameFrame.add(i); //adds background image to window
i.add(jp, BorderLayout.EAST); // adds panel containing label to background image panel
gameFrame.setTitle("Hangman");//title of frame window
gameFrame.setSize(850, 600);//sets size of frame
gameFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);//exit when 'x' button pressed
gameFrame.setIconImage(new ImageIcon("Hangman-Game-grey.png").getImage());//set the frame icon to an image loaded from a file
gameFrame.setLocationRelativeTo(null);//window centered
gameFrame.setResizable(false);//user can not resize window
gameFrame.setVisible(true);//display frame
}//end window method
What layout manager does i, your LoadImageApp instance, use? I'm betting it's not BorderLayout. I'm betting that the LoadImageApp class extends JPanel and if so and if you never explicitly set its layout, then it uses a FlowLayout by default, and as you're finding out, FlowLayout doesn't respect the BorderLayout.EAST int constant.
The solution is likely quite simple: make it use a BorderLayout:
setLayout(new BorderLayout());
Edit
You state in comment:
When I set the border layout of i to EAST, my background image shifts to the right also, is there a way to get around that?
No, you're missing the point. You need to set the layout of LoadImageApp to BorderLayout. You're not supposed to add i BorderLayout.EAST. This was never recommended to you.
i.e.,
public class LoadImageApp extends JPanel {
// in the constructor
public LoadImageApp() {
setLayout(new BorderLayout());
}
// .... etc....
}
The LoadImageApp instance (which I would name loadImageApp, not i) should be added at BorderLayout.CENTER, which you were doing before. Please read the layout manager tutorials, which you can find here.
When I set the border layout of i to EAST, my background image shifts to the right also, is there a way to get around that?
IE8 addEventListener - Object doesn't support property or method 'addEventListener'
I am debugging a WP theme for IE8. It has a feature that loads a post inside a lightbox window, but the parent page's scroll Y coordinate gets reset to the top of the page -- so when you close the lightbox you are at the top of the page and lose the place where you were just browsing.
here is the code that fires right before the scroll bar shoots to the top of the page. right before the lightbox pops up.
document.addEventListener("touchmove",function(t){var n=t.targetTouches?t.targetTouches[0]:t;e.x=n.pageX,e.y=n.pageY}):document.addEventListener("mousemove",function(t){e.x=t.pageX,e.y=t.pageY}),e}()
how can i rewrite this to be compatible with IE8?
addEventListener does not exist in IE 8. You must use attachEvent instead. You can use something like this to check which one to use:
if (el.addEventListener) {
el.addEventListener('click', modifyText, false);
} else if (el.attachEvent) {
el.attachEvent('onclick', modifyText);
}
Please use proper words here. There is no character limit for answers, so there is no reason to use txtspk.
Create CSS rule from query string
Get query string
var queryString = window.location.search;
removes ? from beginning of query string
queryString = queryString.substring(1);
query string processor
var parseQueryString = function( queryString ) {
var params = {}, queries, temp, i, l;
// Split into key/value pairs
queries = queryString.split("&");
// Convert the array of strings into an object
for ( i = 0, l = queries.length; i < l; i++ ) {
temp = queries[i].split('=');
params[temp[0]] = temp[1];
}
return params;
};
// query string object
var pageParams = parseQueryString(queryString);
// CSS variables
var target = pageParams.target;
var prop = pageParams.prop;
var value = pageParams.value;
// can't get to work -->
jQuery(target).css({
prop : value,
});
I want to be able to supply a query like this one "?target=body&prop=display&value=none" and make the whole body disappear or target certain elements by their class.
You wouldn't be able to use prop as a key-variable for the object you're passing to .css(). In this case, it would translate to a literal string 'prop'. Instead, you'd have to do something like:
jQuery(target).css(prop,value);
Note: be careful about that trailing comma in that hash (after value). Some browsers will error at that point.
In order to create a css object which you can pass to jQuery, I suggest something like this:
// Create css obj
var cssObj = {};
cssObj[prop] = value;
After this, the code works fine to me. See the full solution here:
http://jsfiddle.net/q97DH/4/
I recommend removing the question mark with a regex - see comment below.
The reason he is using substring is to remove the ?. For some reason I think that some browsers do not include the ?, so his method may not be the safest. Regardless, you should modify your regex (replace) so that it only removes question marks from the beginning of the string so that no subsequent question marks are removed: queryString.replace(/^\?/,'');
Yep, I missed that line. I've edited the answer, thanks for the hint :-)
About the regex: I agree, but I didn't use a regex. I just used replace. But still, question marks later in the string, maybe escaped, would be removed too, which could be critical. So let's be precise, I'll alter the fiddle.
Yeah :) I consider all replace to be regex. I might be wrong, but for some reason I thought that it uses regex under the hood, even when passing it a string as the match pattern. It's still not a bad idea to check if the first character is a question mark, since I still think some browsers include it, while others exclude it.
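As an aside, current browsers (and Node) ship URLSearchParams, which tolerates the leading ? and URL-decodes values, so the hand-rolled parser can be dropped entirely; a sketch:

```javascript
// URLSearchParams accepts the raw search string, leading "?" included
const qs = new URLSearchParams("?target=body&prop=display&value=none");

const target = qs.get("target"); // "body"
const prop = qs.get("prop");     // "display"
const value = qs.get("value");   // "none"

// Plain-DOM equivalent of jQuery(target).css(prop, value):
// document.querySelector(target).style.setProperty(prop, value);
```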
htaccess with laravel 4 rewrite on my localhost URL
I used this solution for my htaccess problem: Htaccess redirect Laravel 4
<IfModule mod_rewrite.c>
Options -MultiViews
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L,NE]
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
</IfModule>
It works very nicely, but it rewrites my local URL: localhost:8888/ => www.localhost:8888/
I tried to add:
RewriteCond %{HTTP_HOST} !=localhost [NC]
RewriteCond %{HTTP_HOST} !=<IP_ADDRESS>
But it is not working.
What should I do?
Thanks
You can try:
<IfModule mod_rewrite.c>
Options -MultiViews
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteCond %{HTTP_HOST} !localhost [NC]
RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L,NE]
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
</IfModule>
Hi, thanks for the help, but it isn't working. The www is still there.
OK see updated code and make sure to test in a new browser to avoid caching issues.
Yes, I tested in a new browser.
In that case you might have some other code/framework/rule forcing this.
My framework is Laravel 4; I don't see what it could be.
Well above rule by definition cannot redirect localhost to www.localhost so that has to be something behind the scenes. And I have tested this rule myself before posting.
I tested again and it's OK now, thanks.
Download a bunch of files in the background
I need to download a bunch of files (~600 MB) from an FTP server.
The question is how would be best to implement that?
The simplest solution would be to pause/resume the download each time the app goes back to the foreground.
The problem with that of course is that the users won't be able to initiate the download, lock the iPad, and come back when it's all done.
Does iOS allow downloading that amount of files totally in the background?
You are only allowed to run in the background for downloading content if you are a Newsstand app downloading the issues of your magazine. See this document for reference. I assume your app is not a Newsstand app.
However, you can also ask for extra time to finish your task when your app moves to the background. Check out the beginBackgroundTaskWithExpirationHandler: method of UIApplication to see how you can do that. It will give you about 10 minutes to finish your task. That probably won't be sufficient to download all of the files, but it will allow your app to download some of them.
I guess you can also send a local push notification to the user when your background execution time is about to finish to let him open the app again and continue the download. However, I have not sent any local notifications before so I am not sure if it is possible to send them while executing a background task.
Spring Data JPA - Select row for update
I have a requirement to read the first enabled account from a DB2 database table and immediately update the column to disable it. While server 1 is reading and updating the column, no other server should be able to read the same row, since I want one account to be used by only one server at a time.
This is what I have so far..
Account.java
public class Account {
private Long id;
private Character enabled;
.............
}
AccountRepository.java
public interface AccountRepository extends JpaRepository<Account, Long>{
    Account findFirstByEnabled(Character enabled);
}
AccountServiceImpl.java
@Service
public class AccountServiceImpl {
@Autowired
private AccountRepository accntRepository;
@Transactional
public Account findFirstAvaialbleAccount(){
Account account = accntRepository.findFirstByEnabled(new Character('Y'));
if(account != null)
{
account.setEnabled(new Character('N')); //put debug point here
accntRepository.save(account);
}
return account;
}
}
But this isn't working. I've put a breakpoint in the findFirstAvaialbleAccount() method. What I was expecting: while execution is paused at that line, waiting for me to resume, a select query run directly against the database shouldn't return the row; it should only return after I resume execution on the server so that the transaction completes. But instead, running the select query directly against the database gave me the complete result set immediately. What am I missing here? I'm using DB2, if it matters.
Why do you think putting a breakpoint on a Java program should prevent another program from executing a query? Would IBM be able to sell DB2 to anyone if only one transaction reading from a table blocked all the other transactions?
Pessimistic locking is done at the database level. With the transaction pending on the server, DB2 shouldn't allow any other client to read the same row.
Why? Why do you assume that pessimistic locking is used when you haven't used any lock? Why would it prevent a transaction from reading a row that another transaction reads?
Yeah, that was my mistake. I forgot to put "for update" in the query I was running directly against the DB. The one I was running from the server had it, but not the one in the terminal window.
Answering my own question... I had an incorrect select SQL running against the database. If I run the select SQL with "select ... for update", then the execution waits until I hit resume on the server and the transaction is complete.
SQL 1 - this executes immediately, even though the transaction from the server isn't complete:
select * from MYTABLE where ENABLED = 'Y';
SQL 2 - this waits until the transaction from the server is complete (it will probably time out if I don't hit resume quickly enough):
select * from MYTABLE where ENABLED = 'Y'
fetch first 1 rows only with rs use and keep update locks;
Invoke google assistant device from command line instead of saying "Hey google"
Is there a way to invoke google assistant by executing a command (from a linux console) instead of saying "Hey google"?
Using Node.js, I found a way to send text to my Google Home Mini for it to speak, but I did not find a way to trigger its listening mode the way saying "Hey Google" does.
I found this package and there is an example in it that could lead to what you're looking for:
https://github.com/endoplasmic/google-assistant/blob/HEAD/examples/console-input.js
Pagination not working on GWT DataGrid
I have a DataGrid which shows, say, Employee details. For example, every row corresponds to an employee (name, age, salary); name and age are anchors and salary is plain text.
Everything is working fine so far, but since the number of rows is very high, my browser starts to hang. So I decided to use pagination in my DataGrid. I did something like:
List<Employee> tableRowData = new ArrayList<Employee>();
DockPanel dock = new DockPanel();
empTable = new DataGrid<Employee>(tableRowData.size());
SimplePager pager = new SimplePager(TextLocation.CENTER);
pager.setDisplay(empTable);
pager.setPageSize(25);
dock.add(empTable, DockPanel.CENTER);
dock.add(pager, DockPanel.SOUTH);
dock.setWidth("100%");
dock.setCellWidth(empTable, "100%");
dock.setCellWidth(pager, "100%");
empTable.setRowCount(1, true);
empTable.setRowData(0, tableRowData);
empTable.setStyleName(style.hotelTable());
empTable.setWidget(dock);
Now, my pager and table show up fine with the first 25 rows, but on clicking Next on the pager the table body disappears and a loading bar shows up in the body forever.
I also read somewhere that paging cannot be done without using a DataProvider. Is that so?
I have seen the example of paging here. It looks easy, but I mess it up when using it in my case.
Any help is highly appreciated. It would also help if you could provide basic code to get me going.
Thanks,
Mohit
I think you are missing a ListDataProvider.
In your case, you should try:
empTable = new DataGrid<Employee>(25);
ListDataProvider<Employee> provider = new ListDataProvider<Employee>(tableRowData);
provider.addDataDisplay(empTable);
.
.
.
//empTable.setRowCount(1, true);
//empTable.setRowData(0, tableRowData);
And add data by using provider.getList().add() or something like this.
I had tried this before, and perhaps I was doing something wrong in provider.getList().add(), but this time I skipped it and just used ListDataProvider<Employee> provider = new ListDataProvider<Employee>(tableRowData);
provider.addDataDisplay(empTable); and it is working fine somehow. :) Thanks a lot for the help.
DotGNU vs Mono
DotGNU and Mono seem to be attacking the same problem - namely implementing the .NET CLR in a free, open-source way with an eye to cross-platform compatibility.
I've been reading quite a bit about both, and I'm having a hard time deciding which implementation to use for an upcoming project. My particular project doesn't need System.Windows.Forms, so the graphical UI part of the libraries won't be too important.
So: has anyone tried comparing the two directly? What are the pitfalls of either with respect to the other? Is one more supported by the FOSS community than the other?
Thanks to all who respond :)
Well, Mono looks like a much more complete port to me, with a lot more backing.
Judging by the web site, DotGNU seems to be as much about telling people not to use .NET as it is about providing a viable alternative. Many of the links (such as the "latest changes") don't seem to go anywhere useful.
Mono, on the other hand, is very obviously under active development, supports the new DLR, has implemented C# 3.0 and LINQ support, is available to install from packages for multiple platforms, has working documentation etc. The winner seems pretty clear to me.
I also got a similar impression looking at the DotGNU site. It looked like the biggest news since 2006 (and one of the few) they had was switching from CVS to git!
Yup. And if they're only going to go by ECMA standards, they'll be stuck with C# 2.0.
Microsoft seems to be supporting Mono too.
That's not really fair. Portable.NET actually had a lot of work put into it by a very talented programmer (Rhys Weatherley). It's an honest attempt to create an implementation of the ECMA specs; it's simply that there were two projects doing the same thing and Mono won the Mindshare war.
@Simon: Which bit of my post are you actually disagreeing with? Whether Portable.NET is technically excellent or not, the website still seems to be more about politics than anything else - and I can't see how anyone could argue against my points that Mono is more complete, has more backing, is under more active development etc.
"Judging by the web site, DotGNU seems to be as much about telling people not to use .NET as it is about providing a viable alternative." seems to imply that you think DotGNU is just an anti-.NET campaign. And yes, I agree that Mono is more complete in every respect. If I had to choose one of the two to use for a project, I'd pick Mono every time. But DotGNU certainly isn't just a political campaign.
@Simon: I don't think it's just an anti-.NET campaign - but the impression given is one where the politics matter as much as the technology, and I think for most developers that's just not the case. I think the web site does the technology side a disservice. I'm sure there's real technology in the project, but it's clouded by the politics, and I for one find that deeply offputting.
"I'm sure there's real technology in the project, but it's clouded by the politics"
It's a GNU project. Not much surprise there :-) And I agree. It's a shame because the Portable.NET compiler actually does some things that Mono doesn't - last time I checked it had a prototype C compiler, for example.
Dotgnu does not support generics and anonymous delegates, while mono does.
I had successfully compiled dotgnu from git sources on a number of platforms, with and without libjit. I had much less success compiling mono from their latest sources.
If you compile pnet with libjit (./configure --with-jit) then the performance of dotgnu is slightly better than mono for the nbody benchmark.
So, if you need generics, go for mono. Otherwise go with dotgnu.
PS: There is certain development on dotgnu git-sources -- I update it once in a while and can see the new commits every so often.
Somebody negged me, I suppose for claiming that dotgnu is faster. Well, that's what it is on 64-bit Ubuntu 10.04 with the supplied mono and a fresh dotgnu 0.8.1 compiled from git sources. Here is the result of running nbody from the Computer Language Benchmarks:
mukjaj@mukjaj:~/Komod/js$ time mono nbody.exe 10000000
-0.169075164
-0.169077842
real 0m4.572s
user 0m4.560s
sys 0m0.010s
mukjaj@mukjaj:~/Komod/js$ time ilrun nbody.exe 10000000
-0.169075164
-0.169077842
real 0m4.546s
user 0m4.540s
sys 0m0.000s
Yes, and mono version was <IP_ADDRESS>. The difference is not big, but it is there.
Yes, and memory usage was 35M for mono and 31M for dotgnu. So, here dotgnu is again slightly better. But then again, this was for numerically heavy calculations -- that's what I mostly do. For other types of applications your mileage may vary.
And also a question to the person who negged me -- could you please tell me how to compile the latest mono on 64-bit Ubuntu 10.04? It fails for me: "make check" crashes... And I'd like to have the latest mono.
jQuery Bootgrid sort doesn't work - POST variable inconsistency
I'm having some issues when trying to order the data on my jQuery Bootgrid. Data is obtained and filtered on the server side. No issues there; however, whenever I press a column name to order the data, I can see on the console that instead of getting a sort[name-of-field] I receive sort[{{ctx.column.id}}], which makes it impossible to read (when I'm parsing the sort variable).
<table id="clienteData" class="table table-condensed table-bordered table-striped" >
<thead>
<tr>
<th column-data-id="cod_cliente" data-identifier="true" data-formatter="COD_CLIENTE" data-sortable="true">CODIGO</th>
<th column-data-id="nombre" data-formatter="NOMBRE">NOMBRE</th>
</tr>
</thead>
</table>
This is the function that's run at the beginning and sets everything up:
$(document).ready(function() {
var Inicializa = function() {
$("#clienteData").bootgrid("destroy");
$("#clienteData").bootgrid({
ajax: true,
rowSelect: true,
labels: { noResults: "No hay resultados" },
post: function() {
return { id: "b0df282a-0d67-40e5-8558-c9e93b7befed" };
},
url: "DatosClientes.php",
formatters:{
"COD_CLIENTE": function(column, row) { return row.COD_CLIENTE;},
"NOMBRE": function(column, row) { return row.NOMBRE;}
}
})
}
Inicializa();
});
When I receive the sort variable in the DatosClientes.php, I check the value at the console and what I receive is :
Variables received on "DatosClientes.php"
This was so easy, and after days of debugging....
I had to replace column-data-id with data-column-id
Now works fine!!
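For reference, the corrected header row from the markup above would look like this (only the attribute name changes — `data-column-id` is what Bootgrid reads):

```html
<thead>
  <tr>
    <!-- data-column-id (not column-data-id) is the attribute Bootgrid looks for -->
    <th data-column-id="cod_cliente" data-identifier="true" data-formatter="COD_CLIENTE" data-sortable="true">CODIGO</th>
    <th data-column-id="nombre" data-formatter="NOMBRE">NOMBRE</th>
  </tr>
</thead>
```

With the correct attribute, Bootgrid can resolve `ctx.column.id`, so the request carries `sort[cod_cliente]` instead of the unrendered `sort[{{ctx.column.id}}]` template string.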
Using python to return a list of squared integers
I'm looking to write a function that takes the integers within a list, such as [1, 2, 3], and returns a new list with the squared integers; [1, 4, 9]
How would I go about this?
PS - just before I was about to hit submit I noticed Chapter 14 of O'Reilly's 'Learning Python' seems to provide the explanation I'm looking for (Pg. 358, 4th Edition)
But I'm still curious to see what other solutions are possible
You can (and should) use list comprehension:
squared = [x**2 for x in lst]
map makes one function call per element and while lambda expressions are quite handy, using map + lambda is mostly slower than list comprehension.
Python Patterns - An Optimization Anecdote is worth a read.
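To make the comparison concrete, here is a runnable sketch (variable names are just illustrative) showing that both forms produce the same result. Note that in Python 3, map returns a lazy iterator, so list() is needed to get an actual list:

```python
lst = [1, 2, 3]

# List comprehension: evaluates x**2 inline, no per-element function call.
squared_lc = [x**2 for x in lst]

# map + lambda: one function call per element; wrap in list() for Python 3.
squared_map = list(map(lambda x: x**2, lst))

print(squared_lc)                 # [1, 4, 9]
print(squared_lc == squared_map)  # True
```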
Besides lambda and list comprehensions, you can also use generators. List comprehension calculates all the squares when it's called, generators calculate each square as you iterate through the list. Generators are better when input size is large or when you're only using some initial part of the results.
def generate_squares(a):
for x in a:
yield x**2
# this is equivalent to above
b = (x**2 for x in a)
Important to mention, IMO (because it can be confusing at the beginning): you can iterate over a generator only once.
For completeness: you can force eager evaluation of the generator (i.e. obtain the actual results) trivially with e.g. tuple(generate_squares(original)) or list(generate_squares(original)).
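A short runnable illustration of both points — single consumption and forcing eager evaluation:

```python
a = [1, 2, 3]
gen = (x**2 for x in a)

print(list(gen))  # [1, 4, 9]
print(list(gen))  # [] -- the generator is exhausted after one pass

# Force eager evaluation up front instead:
results = list(x**2 for x in a)
print(results)    # [1, 4, 9]
```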
squared = lambda li: map(lambda x: x*x, li)
More "pythonic" would be: lambda lst: [x*x for x in lst]
@phihag Why introduce a lambda when [x*x for x in li] fits well?
@eyquem Probably because the question was "I'm looking to write a function that..."
@phihag Well, what's going wrong with def squarification(li): return [x*x for x in li] ?
@eyquem some people have strangely strict notions of what qualifies as functional programming - the same way that many people have strange (though usually not so much because of being strict) notions of what qualifies as OOP. :)
@eyquem Nothing, it's completely fine.
You should know about map built-in which takes a function as the first argument and an iterable as the second and returns a list consisting of items acted upon by the function.
For e.g.
>>> def sqr(x):
... return x*x
...
>>> map(sqr,range(1,10))
[1, 4, 9, 16, 25, 36, 49, 64, 81]
>>>
There is a better way of writing the sqr function above, namely using a nameless lambda, which has quirky syntax (beginners get confused looking for the return statement).
>>> map(lambda x: x*x,range(1,10))
[1, 4, 9, 16, 25, 36, 49, 64, 81]
Apart from that you can use list comprehension too.
result = [x*x for x in range(1,10)]
a = [1, 2, 3]
b = [x ** 2 for x in a]
Good remark from kefeizhou, but then there is no need for a generator function; a generator expression is right:
for sq in (x*x for x in li):
# do
You can use lambda with map to get this.
lst=(3,8,6)
sqrs=map(lambda x:x**2,lst)
print sqrs
Can I use the same binary on Linux, *BSD and Illumos?
I want to know, if I can use the binary of a program without modification on the three systems? After all they are all Unices. I talk about the same architecture.
No, you cannot, as the ABIs differ. Some BSDs do have binary compatibility with Linux binaries, with some caveats (enabling virtual 8086 mode is a common issue). Often you may need to patch the source, however, as many binaries will make assumptions about their environment based on the fact that the source is developed for Linux. As far as I am aware there is no BSD-binary compatibility in the Linux kernel at this time.
Andrey Sokolov is working on providing Linux binary support on Illumos without zones, but as far as I am aware there is no BSD binary compatibility planned on Illumos at this time.
Which ABI? As far as I know, most programs don't talk directly with the kernel, but with the libc (is that right?). Isn't it possible to provide an ABI-compatible libc?
The ABI depends on the platform the binary is being built for. This article has some good information about binary compatibility which should help you.
Efficiently dealing with magic numbers in javax.swing applications
I'm doing a project at university right now where I have to develop a word learning/translating game in which multiple clients compete to translate words the fastest, handled by a central server.
The GUI for this uses Java Swing JFrames, and to get a good looking GUI with nicely placed texts and buttons and such, you need a lot of specific numbers for distances and widths and such.
Now my Professor is VERY strict about checkstyle-violations, and she provides her own checkstyle-files, which include checks on magic numbers...
Now a pretty simple code fragment with 2 methods (of 20), such as:
private void addLanguageText() {
languageText = new JLabel();
languageText.setText("Sprache auswählen:");
languageText.setBounds(20, 70, 150, 30);
frame.add(languageText);
}
private void addWelcomeText() {
welcomeText = new JLabel("Willkommen zum Lernspiel!", SwingConstants.CENTER);
welcomeText.setFont(welcomeText.getFont().deriveFont(20.0f));
welcomeText.setBounds(0, 10, 500, 30);
frame.add(welcomeText);
}
Already has 8 different magic numbers, so my checkstyle warnings have about 100 entries for magic numbers in total.
Now it seems pretty impossible to just add 100 number constants, especially since they would have to have names such as WELCOME_TEXT_LEFT_BOUNDS.
What would be an efficient solution for dealing with this?
The two solutions are a) exactly what you proposed, just define a constant for everything, or b) use a LayoutManager.
The best solution is to use a layout manager instead of hard coding coordinates. This allows the components to correctly scale with the window and OS settings like font scaling. Adding a constant for each of your coordinates does not make the code better, it only hides the fundamental problem from the linter.
Please also pay attention that the font size is also a magic number :)
Use layout managers. This way, your interface will not only be (almost) free of magic numbers -- it will also work as expected when windows are resized, or text suddenly takes more or less space than it used to (this happens often when you translate an app from one language to another).
In particular,
JFrames almost always benefit from a BorderLayout
GridBagLayout can take care of anything (but it is a hassle to learn at first). Nested BoxLayouts are easier to learn and use.
Simple popup dialogues can be built very easily using JOptionPanes, which include their own layout for accept/cancel/... buttons.
Remaining magic numbers should be things like "number of columns in this dialogue" or "uniform spacing used to separate form rows", and can be given meaningful constant names.
With layouts, you never need to call setBounds directly. Several widespread IDEs have built-in graphical support for playing with layouts, so you do not need to dive as deep into the documentation as it may seem. I recommend NetBeans' form editor.
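As a rough sketch of the layout-manager approach — the class name `LayoutSketch` and the specific layout choices here are just one possibility, with labels taken from the question — the components can be placed without a single setBounds call:

```java
import javax.swing.*;
import java.awt.BorderLayout;

public class LayoutSketch {
    // One named constant instead of dozens of scattered coordinates.
    private static final int GAP = 10;

    // Builds the frame content; the layout managers compute all
    // positions and sizes, so no setBounds is needed anywhere.
    static JPanel buildContent() {
        JPanel content = new JPanel(new BorderLayout(GAP, GAP));

        JLabel welcome = new JLabel("Willkommen zum Lernspiel!", SwingConstants.CENTER);
        content.add(welcome, BorderLayout.NORTH);

        JPanel form = new JPanel();
        form.setLayout(new BoxLayout(form, BoxLayout.Y_AXIS));
        form.add(new JLabel("Sprache auswählen:"));
        content.add(form, BorderLayout.CENTER);

        return content;
    }

    public static void main(String[] args) {
        JPanel content = buildContent();
        // Two children: the welcome label (NORTH) and the form panel (CENTER).
        System.out.println(content.getComponentCount());
    }
}
```

The panel would then go into the JFrame via `frame.setContentPane(buildContent())`, and the remaining numbers (like GAP) are exactly the "uniform spacing" constants that deserve a meaningful name.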
Getting error that invalid column name while writing SQL statement in Python
I am trying to write a SQL statement in Python: 'attribute' is a column name that I want to change its format and I am giving it as a parameter. Because its name can be different.
cur.execute("SELECT DATEADD(y," + attribute + ", '1980-01-01')")
But I am getting below error. attribute=Date1 and this column exists.
[42S22] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid column name 'Date1'. (207) (SQLExecDirectW)"
Your query has no table. It needs a from. select dateadd(...) FROM some_table. Also be sure to use bind parameters rather than string concatenation to add values to a SQL query; see https://stackoverflow.com/questions/902408/how-to-use-variables-in-sql-statement-in-python
If I understand your code correctly, you're building the string in the cur.execute command. If your Python is up to date, try using f-strings. They are a bit more readable and you don't get the messy code with all the quotes. If your Python version doesn't support f-strings, try building the request in a variable to make the code a bit more readable. Possible solution: cur.execute(f"SELECT DATEADD(y, {attribute}, '1980-01-01')")
The string will result in SELECT DATEADD(y, Date1, '1980-01-01')
There is also a FROM missing from the query; the Invalid column name error is raised because you don't tell SQL Server where to find that column.
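A minimal sketch of both fixes, using sqlite3 in place of SQL Server so it is runnable anywhere (DATEADD is SQL Server specific, so a plain addition stands in for it). A column name cannot be passed as a bind parameter, so it has to be validated — for example against an allowlist — before interpolation, while ordinary values go in as bind parameters:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (Date1 INTEGER)")
con.execute("INSERT INTO log VALUES (3)")

attribute = "Date1"

# A column name cannot be a bind parameter, so validate it against a
# fixed allowlist before interpolating it; plain values are bound with ?.
allowed = {"Date1", "Date2"}
if attribute not in allowed:
    raise ValueError(f"unexpected column: {attribute}")

row = con.execute(
    f"SELECT {attribute} + ? FROM log", (10,)  # note the FROM clause
).fetchone()
offset = row[0]
print(offset)  # 13
```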
Need assistance with t-sql query max date + distinct
Just like in the title, I need some help with a T-SQL query to deliver a report. What I need to do is to pull data from the client table and the shipment table. Then the results must exclude those clients that have shipments whose order number starts with '100', and must include the day of the last order made by the client.
OK lets clear what is the goal of this query.
I do not know if it is good idea but I have pasted an image from excel.
Anyway, as you can see, at the moment I am pulling data that includes all of the shipments made lately, but I am looking to find out how to exclude those clients who booked shipments with order numbers starting with '100'.
And this is my query
SELECT j.ClientName,
j.ContactName,
j.PhoneMumber,
j.Email,
js.OrderNumb,
js.SentDate
FROM Client j
outer apply (
SELECT top 1 *
FROM Shipment js
WHERE js.ClientNum= j.ClientNUm
ORDER BY
js.SentDate DESC
) js
where j.ClientBur= 'HB'
Can you help me out to get on the right track and find solution?
When asking SQL question, it's always nice to include (1) relevant examples from you dataset, and (2) expected output from the supplied example data.
@Tobb Have a look at the example above the SQL query; the record highlighted in red is the one I am looking to exclude, as it starts with '100'.
What does it mean "starts from 100"? are you talking about OrderNum column? is that a string or a number?
Exactly as I mentioned earlier, I need to exclude all client records where OrderNum starts with '100', and as you can see those are numbers; the rest is self-explanatory.
@piotr well, no, this is not self-explanatory. OrderNum looks like a number, but it might as well be stored as a string. And the solutions will be different for string and an int.
Come on @trailmax, it is pretty clear he means a filter on all OrderNums that match the pattern '100%'
The question is not very clear, but I'm guessing you are probably looking for condition Not like '100%'
Something like this
select * -- whatever
from Client j
where j.ClientBur= 'HB'
and j.OrderNum not like '100%'
You can add one more OUTER APPLY to get TOP 1 row with OrderNumb started with 100 and then exclude them in WHERE statement:
SELECT j.ClientName,
j.ContactName,
j.PhoneMumber,
j.Email,
js.OrderNumb,
js.SentDate
FROM Client j
outer apply (
SELECT top 1 *
FROM Shipment js
WHERE js.ClientNum= j.ClientNUm
ORDER BY
js.SentDate DESC
) js
outer apply (
SELECT top 1 *
FROM Shipment js
WHERE js.ClientNum= j.ClientNUm
AND LEFT(js.OrderNumb,3) = '100'
ORDER BY js.SentDate DESC
) js100
WHERE j.ClientBur= 'HB'
AND js100.OrderNumb IS NULL
postgres(redshift) query including to_char and group by returns some errors
I'm using Redshift now.
Then I'd like to run a query like:
SELECT to_char(created_at, 'HH24') AS hour , to_char(created_at, 'YYYY-MM-DD HH24') AS tmp FROM log GROUP BY tmp;
This returns an error; when I do it in MySQL, it seems to be fine.
this error is
ERROR: column "log.created_at" must appear in the GROUP BY clause or be used in an aggregate function
When I changed the GROUP BY clause to "group by created_at", it returns results, but the list has duplicates.
Is this due to Redshift?
If you're using a GROUP BY clause, any column in your query must either appear in the clause or you have to specify how you want it to be aggregated.
In your case, you seem to be trying to aggregate your log entries by hour. I suggest using the postgres date manipulation functions, for example:
SELECT created_at::date AS date,
extract('HOUR' FROM created_at) as hour
FROM log
GROUP BY date, hour;
Thanks, LGTM! Is there a good way to express the hour as 00-23? extract('HOUR') seems to return the plain numbers 0-23.
@mo12mo34 you can use to_char to fix hour numeric format. Use to_char(extract('HOUR' FROM created_at), '00').
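Combining both comments, an untested sketch of the whole query — to_char with 'HH24' already yields a zero-padded '00'..'23' string, and positional GROUP BY references avoid repeating the expressions:

```sql
SELECT to_char(created_at, 'YYYY-MM-DD') AS day,
       to_char(created_at, 'HH24')       AS hour,  -- zero-padded: '00'..'23'
       count(*)                          AS entries
FROM log
GROUP BY 1, 2   -- positional references to the two to_char expressions
ORDER BY 1, 2;
```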
Moving VS Code "Folders" section to Primary bar
I'm not sure what happened but the "Folders" view is on the left secondary bar.
It used to be with the primary bar on the right.
I am trying to combine it so I see the "Folders" on the Explorer view.
Version: 1.77.3 (Universal)
Commit: 704ed70d4fd1c6bd6342c436f1ede30d1cff4710
Date: 2023-04-12T09:19:37.325Z
Electron: 19.1.11
Chromium: 102.0.5005.196
Node.js: 16.14.2
V8: <IP_ADDRESS>-electron.0
OS: Darwin x64 21.6.0
Sandboxed: No
Just drag the little underlined folders icon to the Explorer view and drop it.
Evaluating a statement without calculating the indefinite integral
I'm cramming for a supplementary exam so you might see a ton of questions like these in the 48+ hours to come <3
The question is more of just a yes or no: evaluate the statement without calculating the indefinite integral.
$$ \int \frac{2x+1}{x+1} \,\mathrm dx = 2x -\ln|x+1| + C$$
1.) Seeing as how I'm not allowed to calculate the indefinite integral I assume I should work backwards using it. ie
$${{d} \over {dx}} (2x - \ln|x+1| + C)$$
2.) What the hell am I supposed to do with $\ln|x+1|$? I have this tendency to split the page and answer the question twice, due to $|x+1|$ having two possible cases.
That being said, all I did was assume that $|x+1|$ would be $(x+1)$ and not $-(x+1)$. The result was:
$$\frac{2x+1}{x+1}$$
Does that answer the question? How would I go about showing my assumptions?
Oh, and unrelated/related: I can do the above because of the fundamental theorem of calculus, right?
If you are not allowed to integrate to verify the statement (that the antiderivative is indeed correct), then are you allowed to take the derivative of the right-hand side? That way you can also verify the statement. The derivative of $\ln|x+1|$ is $\frac{1}{x+1}$; I assume you can use that piece of information?
Our lecturer is on vacation so I can't really ask her ^-^ We didn't get a memo for the paper either. So I'm playing a pretty dangerous game.
Well, there are only two ways (as far as I know). Either you integrate the left hand side to arrive at the right hand side or you differentiate the right hand side to get to the left hand side. There is nothing else to it. The integral problem is easy...
Thanks! That's what I was thinking too :)
You should know that $\dfrac d{dx}\ln|x|= \dfrac 1 x$.
To see that that is true, first do it piecewise: for $x>0$, and then for $x<0$. If $x<0$ then you have $\ln(-x)$, and $-x$ is positive, and you use the chain rule. Once you have done this just once, then remember it for use on subsequent occasions such as the exercise that you quote in your posted question.
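Written out for the exercise at hand, this piecewise computation is

$$\frac{d}{dx}\ln|x+1|=\begin{cases}\dfrac{d}{dx}\ln(x+1)=\dfrac{1}{x+1}, & x>-1,\\[6pt]\dfrac{d}{dx}\ln(-(x+1))=\dfrac{-1}{-(x+1)}=\dfrac{1}{x+1}, & x<-1,\end{cases}$$

so in either case

$$\frac{d}{dx}\bigl(2x-\ln|x+1|+C\bigr)=2-\frac{1}{x+1}=\frac{2x+1}{x+1}.$$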
Your technique is otherwise just what the person who posed the question probably expects.
There is one slight subtlety: The "constant" $C$ should be piecewise constant, i.e. one constant on the interval $x>-1$ and a possibly different constant on the interval $x<-1$, because there is a gap in the domain at $-1$.
I have this huge wall behind me full of rules and formulas , somehow this one eluded me. Thanks!
@ArmandConnorDuPlooy : Notice the proper way to code $a\ln b$ or $a\ln(b)$. It's a\ln b or a\ln(b). The backslash does NOT ONLY prevent italicization, but also results in proper spacing. Notice that the spacing to the right of $\ln$ is different in the two expressions; so the spacing conventions are built in to the software. $\qquad$
1) Differentiation is definitely the way to go
2) Recall that $\int \frac{1}{x+1} \,dx = \ln|x+1| + C$... Therefore, the absolute value goes away when you differentiate, implying that $\frac{d}{dx} \ln|x+1| = \frac{1}{x+1}$
Testing Express Js Server using mocha and chai
I'm trying to test my Express server using Mocha and Chai, but I'm not able to close the server connection once the test has been completed.
Index.js
const express = require('express');
const dbconnection = require('./dbConnection.js');
const app = express();
.....
(async ()=>{
await dbconnection.init();
/* Loading middleware and stuff */
const server = app.listen(port, host, ()=>{
console.log('Server Started!')
app.emit('ready');
});
})()
module.exports = app;
I would like to know how to close the server once the test is executed. Currently the test is working, but afterwards it hangs.
server.test.js
const server = require("../../index");
const chai = require("chai");
const chaiHttp = require("chai-http");
const should = chai.should();
chai.use(chaiHttp);
before(function (done) {
this.timeout(15000);
server.on("ready", () => {
done();
});
});
describe.only("Health Check Test", function () {
describe("/GET healthy", () => {
it("it should GET the health status", (done) => {
chai
.request(server)
.get("/healthy")
.end((error, res) => {
res.should.have.status(200);
done();
});
});
});
});
You're calling your anonymous async function immediately. To fix this, you would need to not call it inline, and instead call it from another file:
// index.js
const app = express()
const startApp = async () => {
await dbconnection.init();
const server = app.listen(port, host, ()=>{
console.log('Server Started!')
app.emit('ready');
});
}
module.exports = { app, startApp }
// server.test.js
const { app: server } = require('../../')
// index.boot.js, or start.js, or something else
require('./index').startApp()
If you end up needing the database connection during tests, you would need to also lift that up out of the function that calls app.listen so you can close it in your tests.
How can I close the app? That is the problem... I can't call app.close
For the actual closing, you can either use http.createServer with done, or use chai.request(server).close() as in the answers here.
If I call chai.request(server).close I think I will get a 'not a function' exception. I guess .close is only available on what is returned from listen()
How can I have my mobile view in a container-fluid and the desktop site in a container?
I'm building a website using Laravel and Bootstrap 3. When I apply the .container class, the site displays well on medium and large screen sizes but too small on smaller devices like phones (with huge padding on both sides). And when I apply the .container-fluid class, it displays well on smaller devices but covers the whole screen on large devices (which I don't want).
So what I want is to have the mobile view in container-fluid and medium and large screen view in a container. Is this possible? and how can I achieve this?
I'm using bootstrap 3
Any help is highly appreciated
You can alternate .container and .container-fluid on window resize.
@Chayan No no! People will never resize windows while using mobile devices!
You may use .container and add the following override media query:
@media (max-width: 415px){
.container{
min-width:100% !important;
max-width:100% !important;
}
}
This is my code. Where do I put the code you suggested?
I added it but nothing has changed
Don't forget to compile assets after modifying the file.
How to use elements in autoCompleteTextView?
I want to convert the chosen element to an Integer. When that's done I want to add a random number between 1 and 20 to the chosen Integer, then show that number in a Toast.
Convert element to an integer? What element are you trying to convert?
Elements of AutoCompleteTextView.
What problems are you facing?
I don't know how to do that.
Do you know what is AutoCompleteTextView? Why are you not using edittext?
I think AutoCompleteTextView would make the program easier to use from the user's point of view.
Use a custom widget for AutoCompleteTextView. Try this: "CustomAutoCompleteTextView"
To convert a value coming from a TextView to an integer, you just need to use the following code:
EditText tViewNum = (EditText) rootView.findViewById(R.id.number);
String strWord = tViewNum.getText().toString();
Random r = new Random();
int i1 = r.nextInt(20) + 1; // random integer between 1 and 20, both inclusive
String Randomiser = strWord + " " + i1; //the +" "+ is used to add a space between the word and the random number.
Toast.makeText(MainActivity.this, Randomiser + "//any other text you wish to include", Toast.LENGTH_SHORT).show();
Note that the random number given here is between 1 (inclusive) and 20 (inclusive).
Yes, I think this will be the solution, but how will the program know what was typed into the EditText? It's not the same, for example, if the user types in pear or apple.
because you are using this line to get the data from the TextView: int num = Integer.parseInt(tViewNum.getText().toString());
@mlaferla Toast takes String as second parameter, not int. Please edit it and there is class mismatch in first line, it has to be EditText. Please change the context to this or getApplicationContext(). And I don't know why are you using rootView. Do you mind explaining?
@Adrian For your purpose you should use EditText not AutoCompleteTextView. The above code is using EditText. If you want to use ACTV, you need to store the values in an Array first.
@mlaferla why are you using rootView?
The way I had implemented it was in a fragment not in an activity.
I don't want to type a number into the EditText but a word; the program should then recognize which word was typed in.
so each random number should be assigned to a different word?
No, that word should equal some number saved earlier in the program. Then, when this word is typed in and the button is pushed, it should add that random number.
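A plain-Java sketch of that lookup (the class name, the word-to-number table and its values are made up for illustration): store the saved numbers in a Map, look up the typed word, and add the random 1-20:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class WordRandomiser {
    // Hypothetical word-to-number table; in the app these would be the
    // values saved earlier in the program.
    private static final Map<String, Integer> WORD_VALUES = new HashMap<>();
    static {
        WORD_VALUES.put("pear", 5);
        WORD_VALUES.put("apple", 10);
    }

    // Looks up the base number for the typed word and adds a random 1..20.
    static int valueFor(String word, Random r) {
        int base = WORD_VALUES.getOrDefault(word, 0);
        return base + r.nextInt(20) + 1;
    }

    public static void main(String[] args) {
        int result = valueFor("apple", new Random());
        // base 10 plus 1..20, so the result is always in 11..30
        System.out.println(result >= 11 && result <= 30);
    }
}
```

In the Activity you would call something like valueFor(autoCompleteTextView.getText().toString().trim(), new Random()) on the button click and show the result in the Toast.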
Please consider accepting the answer if this has helped you. Otherwise, we would be happy to help you :)
Use a custom widget for AutoCompleteTextView:
"CustomAutoCompleteTextView"
public class CustomAutoCompleteTextView extends AutoCompleteTextView {
public CustomAutoCompleteTextView(Context context, AttributeSet attrs) {
super(context, attrs);
}
@Override
protected void performFiltering(final CharSequence text, final int keyCode) {
// String filterText = "";
super.performFiltering(text, keyCode);
}
/**
* After a selection, capture the new value and append to the existing
* text
*/
@Override
protected void replaceText(final CharSequence text) {
super.replaceText(text);
}
}
Your XML will be like this (depending on where you add your widget):
<com.example.widget.CustomAutoCompleteTextView
android:id="@+id/sent_message_to"
android:layout_width="0dip"
android:layout_height="match_parent"
android:layout_margin="10dip"
android:layout_weight="1"
android:imeOptions="actionSend"
android:hint="name"
android:gravity="left|center"
android:padding="10dip"
android:textColor="@color/green"
android:textSize="18dp"
android:visibility="visible"
android:selectAllOnFocus="true"
android:inputType="textPersonName"
android:completionThreshold="3"
/>
Main class
You can set an adapter for the list values or declare an array:
private AutoCompleteAdapter mAutoCompleteAdapter;
private ArrayList<String> mArray = new ArrayList<String>();
private CustomAutoCompleteTextView mAutoFillTextView;
.....
mAutoFillTextView = mViewUtils.createAutoFillTextView(view.findViewById(R.id.sent_message_to), false);
mAutoCompleteAdapter = new AutoCompleteAdapter(getActivity(), mArray);
mAutoFillTextView.setAdapter(mAutoCompleteAdapter);
and
mAutoFillTextView.addTextChangedListener(new TextWatcher() {
@Override
public void onTextChanged(final CharSequence s, int start, int before, int count) {
try {
mArray.clear();
String string = s.toString().trim();
if (mAutoFillTextView.getThreshold() <= string.length() && mAllowRequest) {
//GET DATA TO LIST
}
} catch (NullPointerException ignored) {
}
}
@Override
public void beforeTextChanged(CharSequence s, int start, int count,
int after) {
}
@Override
public void afterTextChanged(Editable s) {
}
});
AngularJS, how do I select the correct default option element?
Problem:
Given an array of Person objects, and a separate array of possible names (a super-set containing all the names found within our Person array), how do I set the default option within the generated angular markup (see below).
I'm at a loss as to how to set the item in the names dropdown so that it represents the value of the name in the person object when the UI first loads.
http://plnkr.co/edit/hvIimscowGvO6Hje35RB?p=preview
<!DOCTYPE html>
<html>
<head>
<script src="https://code.angularjs.org/1.3.0-beta.5/angular.js"></script>
<link rel="stylesheet" href="style.css" />
<script src="script.js"></script>
</head>
<button ng-click="loadData()">Load Data</button>
<ul ng-repeat="person in model.persons" >
<li>name: {{person.name}}
<select ng-select
ng-options="item.name for item in data"
ng-change="updateName(selectedName)"
ng-model="selectedName">
</select>
</li>
<li>age : {{person.age}}</li>
<li>sex : {{person.sex}}</li>
</ul>
<script>
var app=angular.module("app",[]);
app.controller("test",function($scope){
$scope.model={};
$scope.model.persons=[];
$scope.updateName=function(item){
this.person.name = item.name;
}
$scope.data=[{name:'bob'},{name:'Sal'},{name:'Lee'},{name:"Fred"}];
$scope.loadData=function(){
$scope.model.persons=[{name:'bob',age:24,sex:'Yes Please'},
{name:'Sal',age:29,sex:'Not Today'},
{name:'Lee',age:34,sex:'If I must'}];
}
});
angular.bootstrap(document,["app"]);
</script>
A couple changes:
<ul ng-repeat="person in model.persons" ng-init="selectedName = person.name">
ng-options="item.name as item.name for item in data"
http://plnkr.co/edit/hXGfRZQRz7owxpFZyudB?p=preview
Excellent! So simple.
Perhaps you should consider a slightly different approach by using ng-repeat on options and ng-selected to denote a selected option:
<select ng-model="person.name">
<option
ng-repeat="item in data"
ng-selected="item.name == person.name"
ng-model="person.name">
{{item.name}}
</option>
</select>
See it here: http://plnkr.co/edit/gOO3iTXFFNWzFVMZv9GJ?p=preview
Appreciate the option. I up-voted you but went with Andy's answer as I did not want to use the ng-selected option if I could stay away from it. Just a stylistic consideration I suppose. Thanks again.
Sure thing, whatever works for you. Sometimes things just get too complex and we have to chop it up to smaller bits. You're welcome!
htaccess redirect of root domain, not subfolders with url masking
I am trying to do the following -
Redirect just the root domain to a different domain.
The redirect needs to be masked so the user still thinks they are on the url they typed.
Existing subfolders should still work with the existing root domain.
For example-
I have an installation using www.currentsite.com which has lots of subfolders for example www.currentsite.com/store
I want to redirect just the root of www.currentsite.com to www.newsite.com but want the browser to still say www.currentsite.com.
If the user goes to www.currentsite.com/subfolder I still want that to work with the original installation.
I have the following which seems to be handling redirecting just the root fine but does not mask the url...
RewriteEngine on
RewriteCond %{HTTP_HOST} www.currentsite\.com [NC]
RewriteCond %{REQUEST_URI} ^/$
Rewriterule ^(.*)$ http://www.newsite.co.uk/ [L,R=301]
Any help is appreciated.
For what you call "masked", the usage of Apache's proxy module makes the most sense:
ProxyPass "/" "https://www.newsite.co.uk/"
ProxyPassReverse "/" "https://www.newsite.co.uk/"
It maps one base url to another one and takes care to transparently and reliably rewrite all contained references.
The proxy module can also be used from RewriteRules; the P flag does that. But in the end it amounts to the same thing, and the direct usage above is more transparent and less complex.
Here is the documentation, as typical for the apache project it is of excellent quality and comes with lots of good examples: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
Custom elements in iteration require 'v-bind:key' directives
In my Nuxt app I have the following line that triggers the error mentioned in the title of this question:
<template v-for="(project, index) in existingProjects">
<span :key="project.projectId"></span>
I tried to have the :key attribute on the template element and I also tried to use just index as the key, to no avail.
Any idea?
You'd have to key all elements inside the template. If you have more than just the span, those elements would also need unique keys. Consider moving those elements into a component.
Maybe loop (v-for) over a div instead of the template and put the key there.
There are multiple ways to solve your problem :
You want to iterate on a template :
You have to put a key on all elements in your template, because you cannot put a key on a template itself: "<template> cannot be keyed. Place the key on real elements instead."
<template v-for="(project, index) in existingProjects">
<span :key="project.projectId">foo</span>
<div :key="project.projectId">bar</div>
</template>
You can iterate on something else than a template : You just put the key on the parent html tag.
<div v-for="(project, index) in existingProjects" :key="project.projectId">
<span>foo</span>
<div>bar</div>
</div>
The first solution results in a warning: "Duplicate keys detected: 'ABC'. This may cause an update error." You can add a suffix like :key="project.projectId + '-span'" to make each key unique.
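For instance, a sketch of the first solution with per-element suffixes so every key stays unique:

```html
<template v-for="(project, index) in existingProjects">
  <span :key="project.projectId + '-span'">foo</span>
  <div :key="project.projectId + '-div'">bar</div>
</template>
```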
Awake iOS app using silent push notifications
I know this is not the first version of this kind of question - but all information I found seems to be outdated or even wrong. So I decided to ask the question again.
Currently I'm using remote notifications to send notifications to my iOS device. Because I'd like to "awake" my application every hour (even if the app was force closed by the user) my idea was to use silent-push notifications.
Just sending Notifications is working quite well - even in the background or after force-closed by the user. But how to wake my application when it's force-closed to perform a background task by using silent-push-notifications?
func application(_ application: UIApplication, didReceiveRemoteNotification userInfo: [AnyHashable: Any], fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
    // Remote information; use a conditional cast rather than force-unwrapping.
    let aps = userInfo["aps"] as? [String: AnyObject]
    print(aps ?? [:])
    completionHandler(.newData) // call completion handler
}
This is the raw of the notification:
send notification but doesn't perform background task (doesn't awake my app)
{
"aps" : {
"alert" : {
"title" : "..."
},
"content-available" : 1,
"information" : "abc"
}
}
also doesn't perform background task (doesn't wake my app)
{
"aps" : {
"content-available" : 1,
"information" : "abc"
}
}
EDIT: When my application is not force-closed but just in background mode I'm able to awake my app. But after I've suspended the app by double-clicking & swiping up, I'm able to receive notifications but I'm not able to awake my application. Seems like didReceiveRemoteNotification is not getting called anymore.
If your app is force closed then iOS will not relaunch it for a silent push. The user killed it, so it is dead. PushKit can be used with VoIP apps to relaunch an app, but it doesn't sound as if your app is a VoIP app.
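For reference, a minimal silent-push payload looks like this; note that custom keys such as "information" are expected to sit outside the aps dictionary (the payloads above place it inside):

```json
{
  "aps" : {
    "content-available" : 1
  },
  "information" : "abc"
}
```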
How do I access OneDrive application folder using the Graph .NET SDK
I want to create a file within the application folder using the Graph .NET SDK.
I can find a class SpecialFolder, but I cannot work out how to use the class to enable me to create a file within the application folder.
You can find a collection of samples showing how to use the SDK in the aspnet-snippets-sample repository.
Here's the snippet for creating a file:
// Create a text file in the current user's root directory.
public async Task<List<ResultsItem>> CreateFile(GraphServiceClient graphClient)
{
List<ResultsItem> items = new List<ResultsItem>();
// Create the file to upload. Read the file content string into a stream that gets passed as the file content.
string guid = Guid.NewGuid().ToString();
string fileName = Resource.File + guid.Substring(0, 8) + ".txt";
byte[] byteArray = Encoding.ASCII.GetBytes(Resource.FileContent_New);
using (MemoryStream fileContentStream = new MemoryStream(byteArray))
{
// Add the file.
DriveItem file = await graphClient.Me.Drive.Root.ItemWithPath(fileName).Content.Request().PutAsync<DriveItem>(fileContentStream);
if (file != null)
{
// Get file properties.
items.Add(new ResultsItem
{
Display = file.Name,
Id = file.Id,
Properties = new Dictionary<string, object>
{
{ Resource.Prop_Created, file.CreatedDateTime.Value.ToLocalTime() },
{ Resource.Prop_Url, file.WebUrl },
{ Resource.Prop_Id, file.Id }
}
});
}
}
return items;
}
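The snippet above writes to the drive root. To target the application folder instead, the underlying REST path is /me/drive/special/approot:/{file}:/content; a sketch of the equivalent SDK call follows (the builder names are my assumption from SDK conventions, so verify against your SDK version):

```csharp
// Sketch (untested): write the file into the app folder via the
// "approot" special folder instead of the drive root.
DriveItem file = await graphClient.Me.Drive.Special["approot"]
    .ItemWithPath(fileName)
    .Content
    .Request()
    .PutAsync<DriveItem>(fileContentStream);
```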
Save Changes with breeze: "Assembly could not be found for EntityName:#xx.xx.xx.xx.xx"
When saving changes, the follow Exception occurs:
"Assembly could not be found for EntityName:#xx.xx.xx.xx.Entities"
First 3 lines of Stack:
at Breeze.ContextProvider.ContextProvider.LookupEntityType(String entityTypeName)
at Breeze.ContextProvider.SaveWorkState.<.ctor>b__8(IGrouping`2 g)
at System.Linq.Enumerable.WhereSelectListIterator`2.MoveNext()
JS:
function remove(entity) {
entity.entityAspect.setDeleted();
return this.manager
.saveChanges()
.then(ok, ko);
}
What can be causing this issue?
This is related to my namespace prefix! My entities' assemblies are not being accepted as valid by the Breeze.ContextProvider.IsFrameworkAssembly function.
Did you find a solution to this?
That exception happens if the Breeze ContextProvider cannot find a server-side class for the entity type you are trying to save.
When manager.saveChanges is called on the client, the JSON for each entity includes an entityAspect object with an entityTypeName property that is the fully-qualified class name of the entity. This tells the server how to materialize the entity.
In your case, there is no class on the server matching the entityTypeName that your client is sending.
Hi Steve, the entityTypeName exists on the server! The entity that I'm trying to save was retrieved from the server!
I can read these entities; however, I can't save them...
If I change the namespace to a shorter one, I can make this work! Why?
Hi Steve, after debugging using the Breeze source code, I have found that my entities are getting dropped from ProbeAssemblies by the "IsFrameworkAssembly" check function... Is there a way to work around this "namespace check"?
Help identifying movie from Russia or similar
My recollection is very sketchy but this is what I remember....
Film was possibly black and white
I saw it as a child in the late '80s early '90s but it's older
It was subtitled and in Russian or similar language
It involved some astronauts travelling in space with possibly time travel
The astronauts (mostly men) encounter a planet/civilization of only women
The women no longer need men to breed and so have become superior, and possibly want to eliminate all men
That's all I have to go on, sorry.
Another possibility is the 1984 Polish film, Sexmission (Seksmisja in Polish). It involves a female-dominated society who reproduces via parthenogenesis and there is time travel of the forward variety. Moreover, it was originally in Polish (which is a Slavic tongue much like Russian) and it did have a release in Russia (as Новые амазонки). However, there is no space travel involved.
The two protagonists, Max and Albert, played by Jerzy Stuhr and Olgierd Łukaszewicz, respectively, submit themselves in 1991 to the first human hibernation experiment. Instead of being awakened a few years later as planned, they wake up in the year 2044, in a post-nuclear world. By then, humans have retreated to underground living facilities, and, as a result of subjection to a specific kind of radiation, all males have died out. Women reproduce through parthenogenesis, living in an oppressive feminist society, where the apparatchiks teach that women suffered under males until males were removed from the world.
Excellent find. This is the one. Thank you.
And here I thought that one was a stab in the dark... :)
Actually there is no time travel; the story starts in our time and then the plot moves to the future.
Indeed, that is not much to go on. Cat-Women of the Moon is about a pair of astronauts encountering a society of only women and it was indeed black-and-white, and most of them are none too happy about men. However, it's the moon, no time travel, and it was originally in English although I'm sure someone has translated it into Russian at some point.
An expedition to the Moon encounters a race of "Cat-Women", the last eight survivors of a 2-million-year-old civilization, deep within a cave where they have managed to maintain the remnants of a breathable atmosphere that once covered the Moon. The remaining air will soon be gone, and they must leave if they are to survive. They plan to steal the expedition's spaceship and migrate to Earth.
There were a number of films very similar to this one released at this time, including one explicit remake, Missile to the Moon.
Flutter persistent bottom navigation bar and page view replacement approach
I have a Navbar stateful widget that tracks the current page and returns a widget with a bottom navbar and a dynamic body based on the current page, which is stored as state.
class _PullingoNavbarState extends State<PullingoNavbar> {
static int _page = 1;
final _screens = {0: PullingoMap(), 1: Dashboard(), 2: Dashboard()};
@override
Widget build(BuildContext context) {
return Scaffold(
body: _screens[_page],
bottomNavigationBar: CurvedNavigationBar(
animationDuration: Duration(milliseconds: 200),
backgroundColor: Colors.blueAccent,
index: _page,
items: <Widget>[
PullingoIcon(icon: Icons.favorite),
PullingoIcon(icon: Icons.chrome_reader_mode),
PullingoIcon(icon: Icons.person),
],
onTap: (index) {
setState(() {
_page = index;
});
},
),
);
}
}
and the root widget as follows:
class RoutesWidget extends StatelessWidget {
@override
Widget build(BuildContext context) => MaterialApp(
title: 'PULLINGO',
theme: pullingoTheme,
routes: {
"/": (_) => PullingoNavbar(),
},
);
}
Pre-creating instances of _screens in a map doesn't feel like a good approach to me. This will probably create states for those screens regardless of whether the user visits them or not. There are a few suggestions given here. Does the above method look okay, or should I completely avoid this approach?
You can use PageView and AutomaticKeepAliveClientMixin to persist your widgets when navigating. With this approach a widget is only created when a user navigates to it by bottom navigation bar. I have recently written an article about how to use it, might be useful.
https://cantaspinar.com/persistent-bottom-navigation-bar-in-flutter/
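A sketch of that approach, adapting the widget from the question (the exact CurvedNavigationBar wiring and page contents are assumptions; see the linked article for details):

```dart
class _PullingoNavbarState extends State<PullingoNavbar> {
  final PageController _controller = PageController(initialPage: 1);

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: PageView(
        controller: _controller,
        // Pages are built lazily, only when first navigated to.
        physics: const NeverScrollableScrollPhysics(),
        children: [PullingoMap(), Dashboard(), Dashboard()],
      ),
      bottomNavigationBar: CurvedNavigationBar(
        index: 1,
        items: const <Widget>[/* icons as before */],
        onTap: (index) => _controller.jumpToPage(index),
      ),
    );
  }
}

// In a page whose state should survive tab switches:
class _PullingoMapState extends State<PullingoMap>
    with AutomaticKeepAliveClientMixin {
  @override
  bool get wantKeepAlive => true;

  @override
  Widget build(BuildContext context) {
    super.build(context); // required by the mixin
    return const Placeholder();
  }
}
```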
Comparing two object with int and string parameters in Java
I am trying to compare two objects from the same class that contains both String and Int parameters.
This is for my lab assignment. I tried to use just the .equals method without overriding it, which did not work, so I did some more research on how to compare the String and int parameters of both objects. I built an overridden equals, and it only worked with the String parameters as I tested each parameter; the int parameter keeps returning true, so I am not sure where I am going wrong.
Updated: OK, now I got it working. I'm trying to figure out how to return false if the names are different, or if the team names are different. I thought of using if and else, but nested if-else?
//This is my equals method to compare the String and Int parameter.
public boolean equals(Object o)
{
Player comp = (Player) o;
if (o == this) {
if (comp.team.equals(this.team)) {
return true;
} else if (comp.jerseyNumber == this.jerseyNumber) {
return true;
}
}else
return false;
}
//And this is the method that calls on Player.
player1.readPlayer();
//Prompt for and read in information for player 2
player2.readPlayer();
//Compare player1 to player 2 and print a message saying
//whether they are equal
if (player1.equals(player2))
{
System.out.println("Same Players");
}else
{
System.out.println("Different Players");
}
So I expected the result to print either "Same Players" if the name, team, and jersey number are the same, or "Different Players" if all three are different, I think. Instead, with all three parameters it prints "Same Players" regardless of the int parameter's value; when I commented out the int parameter, it actually showed both "Same" and "Different Players" results.
Ok, now i got it working, I'm trying to figure out to return false if Names are different, or if Team names are different. I thought of using if and else but nested if-else?
You say if (o == this)--think about the meaning of that for a moment.
well I would be calling on Object o to compare to this which is the class's parameter, right? I'm bad with terminology
You are literally saying if (other object is the same object as this object), in which case the result is instantly true. (By the way, you also need to check what happens when you try player.equals("not a player").)
I fixed it, it looks like so
public boolean equals(Object o)
{
if (o == this)
return true;
Player comp = (Player) o;
if (!comp.team.equals(this.team))
return false;
if (comp.jerseyNumber != this.jerseyNumber)
return false;
else
return true;
}
@Stargazer7861 I suggest looking at auto-generated equals methods from Eclipse - they do null, reference, and type checking before doing actual field comparisons. To @chrylis point, it's certainly fair to check o == this and return true, but there are also other safety checks you should have.
@josh.trow If they do null checking, then that's bad auto-generated code; o instanceof Player will evaluate to false if o is null.
interesting so o instanceof Player will ensure that null checking will go through without causing any issues?
@chrylis I'm just going to link to the best discussion I can find on why it does what it does and note that I prefer the absolute knowledge that my items are equal and symmetrical (by using getClass): https://stackoverflow.com/questions/596462/any-reason-to-prefer-getclass-over-instanceof-when-generating-equals
@josh.trow getClass is an entirely reasonable approach that does in fact justify null check.
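Pulling the comments together, a defensive equals with reference, null, and type checks could look like this (a sketch; the class and field names mirror the question):

```java
class Player {
    String team;
    int jerseyNumber;

    Player(String team, int jerseyNumber) {
        this.team = team;
        this.jerseyNumber = jerseyNumber;
    }

    @Override
    public boolean equals(Object o) {
        if (o == this) return true;               // same reference: trivially equal
        if (!(o instanceof Player)) return false; // also rejects null
        Player comp = (Player) o;
        return comp.team.equals(this.team)
            && comp.jerseyNumber == this.jerseyNumber;
    }

    @Override
    public int hashCode() { // keep the equals/hashCode contract
        return 31 * team.hashCode() + jerseyNumber;
    }
}
```

Note the instanceof check makes an explicit null check unnecessary, since `null instanceof Player` is false.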
You have a self-comparison in your equality check:
Player comp = (Player) o;
if (comp.team.equals(this.team))
return true;
if (comp.jerseyNumber == ((Player) o).jerseyNumber) <== RIGHT HERE
return true;
I assume you intended:
if (comp.jerseyNumber == this.jerseyNumber)
Based on the comp variable, I'm guessing you extracted the cast but missed one, then got a bit tied up on review.
EDIT: I would also suggest using quick-exits, like so - otherwise, if the teams are the same, but jerseys different, the way you have it today will say they are the same person:
Player comp = (Player) o;
if (!comp.team.equals(this.team))
return false;
if (comp.jerseyNumber != this.jerseyNumber)
return false;
return true;
Oh, funny thing: I thought of trying jerseyNumber == this.jerseyNumber and it works, but I think that is better at ensuring it checks comp.jerseyNumber.
Thanks! :D That worked for me. I still struggle with boolean logic and applying it to coding. I have partly myself to blame for not practicing much; my motivation for coding is a bit meh, but I really want to improve my skills.
Lightweight DMS with built in REST API
I'm puzzling myself trying to find a tool to use with my web app for saving docs. We are rewriting our own Java web app which deals mainly with binary (MS Office) documents. In the old one, we used to store that documents in the file system using Java File APIs, but now we want to go one step further to make our different web apps be able to share the content.
The main idea is to have a centralized server which is provided with a RESTful API and will be responsible from managing the files from the applications involved. We might be also interested in:
File versioning
Keeping some extra metadata per file and being able to search in that metadata
Also being able to search in file contents, but that could be covered by a framework as Apache Solr, which keeps a whole index of the stored content
Scalable tool
Accessible through a network (kind of web server)
We would prefer it to be open source
I've been looking though the internet and found some options like Alfresco but it seems to be too heavy for what we want (windows download is about 500Mb).
Anybody has an idea about that?
So far I've found two interesting options, LogicalDoc and OpenKM. They both have community (GPL-licensed) versions available and a professional proprietary version too. Both are:
Runnable in a servlet container as Tomcat
Java based
Allow full-text searches based in Lucene
Have a kind of API to interact with (REST-SOAP-WebDav)
Support node versioning
Custom metadata attributes can be added
Equipped with a web UI to work with
Both are enough to work as functional repositories plus search indexes for a homegrown application.
Does OpenKM work on a regular web browser without a java add-in or java extension?
Reopen zener diode after reverse bias breakdown
I've been learning about using zener diodes to generate a reference voltage for a small comparator circuit. The intention is to detect when a motorcycle battery is charging (i.e. more than 13 V), and use that to activate an Arduino circuit (with a relatively high draw of 2 A).
I've been fortunate enough to be given an example I can work from here on SX. However, something occurred to me whilst learning about zener diodes.
Once they are electrically closed, the input voltage has to be removed before they open again (speaking in terms of electrical continuity). To put that another way, once more than 12 V (for a 12 V 5 A zener) is seen and the reverse bias breaks down, I (apparently) have to completely remove the input voltage in order to sever the connection.
Is that accurate, and is this expected, normal behaviour? I need to be able to power down my device when the voltage at the battery drops back down below 12 V.
Any chance you can use the analog comparator in the Arduino to turn it off?
Very possibly, but what would I be turning off? Cutting the input to the zener, and turning myself off? I suppose then I could watch the voltage after the zener with the Arduino to detect the opposite effect, such as low voltage.
That's not true about zeners - if you go back below the threshold where they appear to regulate/conduct you are back to square 1 unless of course you damaged the zener with too much current and it has gone short circuit due to melt-down.
Connect the Arduino through the NC contact on a relay, and energize it when you detect the decrease.
Andy, perfect, if you can post as an answer (with citation?) I'd be happy to accept.
It occurs to me that you're thinking of a zener diode as a thyristor or a diac.
For example, the IV curve for a diac:
Once either threshold voltage is reached, the diac "turns on" and stays on until the current drops below the holding current value. Importantly, note that there are two values of current associated with any voltage between \$V_F\$ and \$V_{BO}\$ since the diac can be "on" or "off".
However, the IV curve for a zener (or avalanche) diode:
is qualitatively different. Note that, for the zener, for any voltage, there is only one value of current associated so the zener isn't a two-state device. Yes, it does have regions of operation (forward biased, reverse biased, and breakdown) but these simply refer to different regions of the IV curve.
That's not true about zener diodes - if you go back below the threshold where they appear to regulate/conduct you are back to square 1 unless of course you damaged the zener with too much current and it has gone short circuit due to melt-down.