An almost potential game
$\def\argmax{\mathop{\rm argmax}}$
Recall that potential games are non-cooperative games that admit a potential function behaving, in a sense, "similarly" to the pay-off functions. Inspired by a certain class of optimization algorithms, I came across a game that is "almost" potential. It is defined as follows. There are $n$ players. Player $i$ has the strategy set $S_i$ and pay-off function $f_i: S\to\mathbb{R}$, where $S=S_1\times\cdots\times S_n$. There is a potential function $F: S\to\mathbb{R}$ such that for every $i\in\{1,\ldots,n\}$, $x\in S$ and $x'_i\in S_i$, the following three conditions hold:
$f_i(x)-f_i(x'_i,x_{-i})\ge0\;\Longrightarrow\;F(x)-F(x'_i,x_{-i})\ge0$
$F(x)-F(x'_i,x_{-i})>0\;\Longrightarrow\;f_i(x)-f_i(x'_i,x_{-i})>0$
$\displaystyle\argmax_{x_i\in S_i}f_i(x)\subseteq\argmax_{x_i\in S_i}F(x)$
where for $x=(x_1,\ldots,x_n)\in S$ we denoted $x_{-i}=(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)$, and $\argmax$ is the set of all maximisers.
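For finite strategy sets, the three conditions can be verified by brute force over every profile, player, and unilateral deviation. Below is a minimal sketch; the toy game at the end (identical-interest pay-offs with $F$ a positive scaling) is purely illustrative and not from any particular class discussed here.

```python
from itertools import product

def is_almost_potential(S, f, F):
    """Check the three conditions for every player i, profile x in S,
    and unilateral deviation x'_i in S_i (finite strategy sets only)."""
    n = len(S)
    for x in product(*S):
        for i in range(n):
            # pay-off and potential values along player i's deviations
            fi = {xi: f[i](x[:i] + (xi,) + x[i + 1:]) for xi in S[i]}
            Fv = {xi: F(x[:i] + (xi,) + x[i + 1:]) for xi in S[i]}
            # condition 3: argmax_{x_i} f_i is a subset of argmax_{x_i} F
            best_f = {xi for xi, v in fi.items() if v == max(fi.values())}
            best_F = {xi for xi, v in Fv.items() if v == max(Fv.values())}
            if not best_f <= best_F:
                return False
            for xi in S[i]:
                df, dF = fi[x[i]] - fi[xi], Fv[x[i]] - Fv[xi]
                if df >= 0 and dF < 0:      # condition 1 violated
                    return False
                if dF > 0 and df <= 0:      # condition 2 violated
                    return False
    return True

# Toy example: identical-interest pay-offs, F a positive scaling of them.
S = [(0, 1), (0, 1)]
f = [lambda x: x[0] + x[1], lambda x: x[0] + x[1]]
F = lambda x: 2 * (x[0] + x[1])
print(is_almost_potential(S, f, F))                       # True
print(is_almost_potential(S, f, lambda x: -x[0] - x[1]))  # False
```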
Note that the game is similar to (but not contained in) two known classes of games:
generalized ordinal potential games, defined by $f_i(x)-f_i(x'_i,x_{-i})>0\;\Longrightarrow\;F(x)-F(x'_i,x_{-i})>0$
pseudo-potential games, defined by $\displaystyle\argmax_{x_i\in S_i}f_i(x)\supseteq\argmax_{x_i\in S_i}F(x)$
It would help me to know if my game belongs to any known game class and if anything can be said about its pure Nash equilibria (perhaps under some assumptions on $S_i$, such as compactness). I googled a lot, but found nothing. Many thanks!
Tom
What is your game?
---
How to return variable from external module
While using the interactive brokers api python library,
I am trying to return a dict object from specific callback functions in my program that are supported by the ibapi module itself.
from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.contract import Contract
from ibapi.ticktype import TickTypeEnum
from threading import Timer
from threading import *
import csv
import storage_handler as sh
from contractstock import stock
import datetime
import os
import foldernavigator as fn
class TestApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

    def nextValidId(self, orderId):
        self.nextOrderId = orderId
        self.start()

    def tickPrice(self, reqId, tickType, price, attrib):
        group = [1, 2, 4, 6, 7, 35, 37, 57, 75]
        i = 0
        while i < len(group):
            data = {}
            data['reqId'] = reqId
            data[TickTypeEnum.to_str(tickType)] = price  # was `size`, an undefined name
            print(str(data))
            return data  # returns on the first pass, so `i += 1` below is never reached
            i += 1

    def start(self):
        fn.foldercreator(self)
        contract = Contract()
        contract.symbol = stock["symbol"]
        contract.secType = stock["secType"]
        contract.exchange = stock["exchange"]
        contract.currency = stock["currency"]
        contract.primaryExchange = stock["primaryExchange"]
        self.reqMarketDataType(4)
        self.reqMktData(1, contract, "", False, False, [])
        self.reqMktData(2, contract, "", False, False, [])
        self.reqMktData(3, contract, "", False, False, [])
        self.reqMktData(4, contract, "", False, False, [])
        ids1 = [0, 1, 2, 3, 4, 6, 7, 8, 9, 49, 85]
        ids2 = [[21, 165], [46, 236], [48, 233], [54, 293], [55, 294], [56, 295], [63, 595], [64, 595], [65, 595], [89, 236]]
        for i in ids1:
            self.reqMktData(str(i), contract, "", False, False, [])
        for j in ids2:
            self.reqMktData(str(j[0]), contract, str(j[1]), False, False, [])

    def stop(self):
        self.done = True
        self.disconnect()

def main():
    app = TestApp()
    app.nextOrderId = 0
    app.connect("<IP_ADDRESS>", 7497, 0)
    # note: ibapi programs normally call app.run() here to start the message loop

if __name__ == "__main__":
    main()
While I use the return statement in order to get the dict object I can't seem to figure out what is the object path after the return from the function.
How can I return objects from external functions back to my main program?
the "def tickPrice" is the callback for the line "self.reqMktData(1, contract, "", False, False, [])",
which gives back None if I assign its result to a variable
Where did you want to return something to where? I didn't quite get that.
Thanks for the comment. I forgot to mention that "def tickPrice" is the callback for the line "self.reqMktData(1, contract, "", False, False, [])"
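For reference, a common way out of this (a general callback pattern, not specific to ibapi): the library invokes your overridden callback on its own thread and discards the return value, so instead of `return data` the callback should deposit its results somewhere the main program can read later, e.g. a thread-safe queue. A sketch with a stand-in for the client class (the class and method names here mimic the shape of the code above but are illustrative):

```python
import queue
import threading

class FakeClient:
    """Stand-in for an API client that fires callbacks on its own thread."""
    def run(self):
        # the real library would push network events here
        for req_id, price in [(1, 101.5), (2, 102.0)]:
            self.tickPrice(req_id, price)

class App(FakeClient):
    def __init__(self):
        self.ticks = queue.Queue()            # thread-safe hand-off point

    def tickPrice(self, req_id, price):       # callback: its return value is ignored
        self.ticks.put({"reqId": req_id, "price": price})

app = App()
t = threading.Thread(target=app.run)          # like app.run() in ibapi programs
t.start()
t.join()

while not app.ticks.empty():
    print(app.ticks.get())                    # main program consumes the results
```

The same idea works with a plain instance attribute (e.g. `self.data[reqId] = ...`) if you only need the latest value per request id.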
---
ignore encoding error when parsing pdf with pdfminer
from pdfminer.pdfparser import PDFParser
from pdfminer.pdfdocument import PDFDocument
from pdfminer.pdftypes import resolve1
fn = 'test.pdf'
with open(fn, mode='rb') as fp:
    parser = PDFParser(fp)
    doc = PDFDocument(parser)
    fields = resolve1(doc.catalog['AcroForm'])['Fields']
    item = {}
    for i in fields:
        field = resolve1(i)
        name, value = field.get('T'), field.get('V')
        item[name] = value
Hello, I need help with this code as it is giving me Unicode error on some characters
Traceback (most recent call last):
File "<stdin>", line 7, in <module>
File "/home/timmy/.local/lib/python3.8/site-packages/pdfminer/pdftypes.py", line 80, in resolve1
x = x.resolve(default=default)
File "/home/timmy/.local/lib/python3.8/site-packages/pdfminer/pdftypes.py", line 67, in resolve
return self.doc.getobj(self.objid)
File "/home/timmy/.local/lib/python3.8/site-packages/pdfminer/pdfdocument.py", line 673, in getobj
stream = stream_value(self.getobj(strmid))
File "/home/timmy/.local/lib/python3.8/site-packages/pdfminer/pdfdocument.py", line 676, in getobj
obj = self._getobj_parse(index, objid)
File "/home/timmy/.local/lib/python3.8/site-packages/pdfminer/pdfdocument.py", line 648, in _getobj_parse
raise PDFSyntaxError('objid mismatch: %r=%r' % (objid1, objid))
File "/home/timmy/.local/lib/python3.8/site-packages/pdfminer/psparser.py", line 85, in __repr__
return self.name.decode('ascii')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 0: ordinal not in range(128)
Is there anything I can add so it "ignores" the characters that it's not able to decode, or at least returns the name with the value as blank in name, value = field.get('T'), field.get('V')?
Any help is appreciated.
Here is one way you can fix it
nano "/home/timmy/.local/lib/python3.8/site-packages/pdfminer/psparser.py"
then in line 85
def __repr__(self):
    return self.name.decode('ascii', 'ignore')  # this fixes it
I don't believe it's recommended to edit source scripts, you should also post an issue on Github
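As an alternative to editing the installed file, the same one-line fix can be applied at runtime (monkey-patching) before parsing. The sketch below demonstrates the idea on a stand-in class; with pdfminer you would patch `pdfminer.psparser.PSLiteral.__repr__`, the method shown in the traceback, the same way.

```python
class PSLiteral:                      # stand-in mirroring the failing method
    def __init__(self, name):
        self.name = name              # stored as bytes, may be non-ASCII
    def __repr__(self):
        return self.name.decode('ascii')   # raises on byte 0xe5 etc.

lit = PSLiteral(b'\xe5bc')
try:
    repr(lit)
except UnicodeDecodeError:
    pass                              # the failure from the question

# the patch: drop undecodable bytes instead of raising
PSLiteral.__repr__ = lambda self: self.name.decode('ascii', 'ignore')
print(repr(lit))                      # now succeeds
```

This keeps the installed package untouched and survives reinstalls, at the cost of relying on an internal class.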
---
apache server stop working regularly
We have a huge PHP product. Our MySQL database is now approx 1 GB, with approx 750 tables and almost 900 stored routines. The problem we are facing is that our Apache server stops working after some interval of time.
Server configuration is
intel xeon processor
16 GB ram (DDR3)
1 tb hdd
OS is Ubuntu 14.04
I think you should post your error log so people can figure out what's wrong with your web server
[Sun Jun 25 08:00:38.668674 2017] [mpm_prefork:notice] [pid 1323] AH00163: Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.19 configured -- resuming normal
[Sun Jun 25 11:23:15.934808 2017] [core:error] [pid 52345] [client <IP_ADDRESS>:53397] AH00126: Invalid URI in request GET login.cgi HTTP/1.0
[Sun Jun 25 15:16:03.233309 2017] [core:error] [pid 52345] [client <IP_ADDRESS>:37722] AH00126: Invalid URI in request GET login.cgi HTTP/1.0
[Mon Jun 26 01:09:47.135891 2017] [:error] [pid 38426] [client <IP_ADDRESS>:58870] script '/var/www/doctorg/app/testproxy.php' not found or unable to stat
---
How to remove all those extra lines other than columns using sed and uniq, like the result given at the end?
.
THE EAST INDIA COMPANY PVT LTD
THE EAST INDIA COMPANY PVT LTD
Date : 14/03/2016 Time : 12:45:15 Page : 1
Office : INDIANA. Code : 101 Description : TSHIRTS Month : 03/2016
Office : INDIANA. Code : 101 Description : TSHIRTS Month : 03/2016
+=====================================================================================================+
! Slno ! CrId ! Name Of Customer ! Item Code & Descrptn ! Amount ! !
Slno CrId Name Of Customer Item Code & Descrptn Amount !
+=====================================================================================================+
! 2 ! 234567 ! CHARLES DICKENS ! 101 / TSHIRTS ! 65,805.00 ! !
! 3 ! 345678 ! ROOSEVOLT HUGAS ! 101 / TSHIRTS ! 50,140.00 ! !
! 4 ! 456789 ! RICH HILLSIDE ! 101 / TSHIRTS ! 48,130.00 ! !
! 5 ! 567890 ! SAMUEL PETER ! 101 / TSHIRTS ! 51,750.00 ! !
+-----------------------------------------------------------------------------------------------------+
Prepared by : : MANAGER
THE EAST INDIA COMPANY PVT LTD
THE EAST INDIA COMPANY PVT LTD
Date : 14/03/2016 Time : 12:45:14 Page : 2
Office : INDIANA. Code : 102 Description : PANTS Month : 03/2016
Office : INDIANA. Code : 102 Description : PANTS Month : 03/2016
+=====================================================================================================+
! Slno ! CrId ! Name Of Customer ! Item Code & Descrptn ! Amount ! !
Slno CrId Name Of Customer Item Code & Descrptn Amount !
+=====================================================================================================+
! 1 ! 234567 ! CHARLES DICKENS ! 102 / PANTS ! 915.00 ! !
! 2 ! 456789 ! RICH HILLSIDE ! 102 / PANTS ! 1,610.00 ! !
+-----------------------------------------------------------------------------------------------------+
Prepared by : : MANAGER
I have a file like above. Using sed / uniq / awk how to delete all the headers, banner text etc., as below :
! Slno ! CrId ! Name Of Customer ! Item Code & Descrptn ! Amount ! !
! 2 ! 234567 ! CHARLES DICKENS ! 101 / TSHIRTS ! 65,805.00 ! !
! 3 ! 345678 ! ROOSEVOLT HUGAS ! 101 / TSHIRTS ! 50,140.00 ! !
! 4 ! 456789 ! RICH HILLSIDE ! 101 / TSHIRTS ! 48,130.00 ! !
! 5 ! 567890 ! SAMUEL PETER ! 101 / TSHIRTS ! 51,750.00 ! !
! Slno ! CrId ! Name Of Customer ! Item Code & Descrptn ! Amount ! !
! 1 ! 234567 ! CHARLES DICKENS ! 102 / PANTS ! 915.00 ! !
! 2 ! 456789 ! RICH HILLSIDE ! 102 / PANTS ! 1,610.00 ! !
It's a lengthy file, but I've posted only a sample here.
One more question regarding this: I want to add up the values for each customer from the entire list and print the Name of Customer and Total Amount, using awk. Please provide a solution.
Quite the customer list.
Sorry, it is a list pertaining to my office, but I've replaced everything with dummy items, e.g. Basic Pay became TSHIRTS and Fixed Personal Allowance became PANTS, in order to hide my office's identity. I never meant to create a false impression.
grep '^\!.*\!' < theFile.txt
Thank you for fast reply. It works fantastic. Please answer to my second question also, given at the end
@adam1969in, SO is not a coding service!
Sorry, I am a beginner in awk. I don't know the right command to use 'for' loop to repeatedly search for the pattern (in this case, Customer id) and add its values.
awk to the rescue!
$ awk -F! '/^!/{gsub(",","",$6);
if($6==$6+0)a[$4]+=$6;
else{h1=$4;h2=$6}}
END{print h1,h2; for(k in a) print k,a[k]}' mess
Name Of Customer Amount
SAMUEL PETER 51750
RICH HILLSIDE 49740
CHARLES DICKENS 66720
ROOSEVOLT HUGAS 50140
ADOLF HITLER 57080
perhaps you can look into how to format the amount (hint printf)
Very nice one. But why is it printing in reverse order of customer id? Thank you, sir.
If you can answer your question about ordering, I'll post a solution as motivation for you to learn more about awk.
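On the ordering question: awk's `for (k in a)` walks the array in an unspecified, implementation-defined order (it is a hash table), so the output is arbitrary rather than reversed. Piping the END output through `sort` is the usual fix. A sketch on a few of the sample rows above (the file name and inline rows are illustrative):

```shell
# Sum the amounts per customer from the '!'-delimited rows, then sort by name.
cat > report.txt <<'EOF'
! 2 ! 234567 ! CHARLES DICKENS ! 101 / TSHIRTS !  65,805.00 ! !
! 3 ! 345678 ! ROOSEVOLT HUGAS ! 101 / TSHIRTS !  50,140.00 ! !
! 1 ! 234567 ! CHARLES DICKENS ! 102 / PANTS   !     915.00 ! !
EOF
awk -F'!' '/^!/ { gsub(",", "", $6)              # strip thousands separators
                  if ($6 == $6 + 0) a[$4] += $6  # numeric rows only
                }
           END  { for (k in a) print k, a[k] }' report.txt | sort
```

With GNU awk you can instead set `PROCINFO["sorted_in"] = "@ind_str_asc"` before the loop to iterate in sorted key order without the external `sort`.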
---
Does having a known suffix on the input to PBKDF2 make you more vulnerable?
I have an implementation of PBKDF2, which I know
Has two bytes of '=' at the end of the input
Has an input length of 24 (which is a Base64 encoded character representation of 16 bytes of entropy)
Uses one iteration
Has a known 9-character salt.
It's using a SHA1 hasher.
Does knowing the two = bytes at the end of input (which represent the base64 encoding for padding) reduce the complexity of finding all possible collisions for a 24 characters input?
Just to clarify,
password = "24_characters_ends_in_==";
key = sha1_pbkdf2_hmac( password, KNOWN_SALT, 1 );
Does knowing two characters at the end of the password further increase the ability to find collisions?
Does knowing the two = bytes at the end of input (which represent the base64 encoding for padding) reduce the complexity of finding all possible collisions for a 24-character input?
Since you pre-process the user's password with base64, the encoding may need padding. Depending on the input length, base64 appends zero, one, or two equals (=) characters. In short, base64 processes the input in 24-bit blocks, and
if the last block is full, i.e. holds 3 bytes, then no padding
if the last block holds only two bytes, then two zero bits and one =
if the last block holds only one byte, then four zero bits and two =
is appended. Using these rules, one can reverse the base64 padding.
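The three padding cases, and the question's specific setting of 16 random bytes, can be checked directly (the inputs here are illustrative):

```python
import base64
import os

# The three padding cases from the rules above:
print(base64.b64encode(b'abc'))    # 3 bytes -> no padding:  b'YWJj'
print(base64.b64encode(b'ab'))     # 2 bytes -> one '=':     b'YWI='
print(base64.b64encode(b'a'))      # 1 byte  -> two '=':     b'YQ=='

# The question's setting: 16 random bytes always encode to 24 characters
# ending in '==', so the trailing '==' carries no secret information.
token = base64.b64encode(os.urandom(16))
assert len(token) == 24 and token.endswith(b'==')
```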
This padding is not part of your password, so the entropy of your input doesn't change, since base64 is reversible: it neither reduces nor increases the entropy. The attacker still needs to search your actual password space.
Even the salt will not increase the entropy; it only reduces the attacker's capabilities by ruling out rainbow tables.
Does knowing two characters at the end of the password further increase the ability to find collisions?
Collisions are not relevant to password security; the relevant notion is the second pre-image attack.
Second Pre-image attack: given a message $m_1$ find another message $m_2$ such that $m_1 \neq m_2$ and $Hash(m_1)=Hash(m_2)$.
Collision attack : find two inputs that hash to the same output $a$ and $b$ such that $H(a)= H(b)$, $a \neq b.$
As we can see, a collision has nothing to do with the passwords. The second pre-image attack does, since the attacker sees the password hash and then tries to find an input that works with the given password hash (with the salt and the iteration count).
The confusion may come from the fact that two such inputs do collide; however, the terminology must not be used this way. Suppose I find a colliding pair of inputs: will either of them work against a given password hash? No!
A note on the iteration:
Currently, the iteration count is set to 1. This is not good. One should use as many iterations as feasible to reduce the attacker's capabilities, so set the iteration count to something like 100K or 1M. Remember that the higher the count, the longer the user will wait. Current practice is not to make users wait more than one second. You can see some in-use iteration examples on Wikipedia.
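The cost trade-off is easy to measure with Python's standard `hashlib.pbkdf2_hmac` (the password and salt below are illustrative stand-ins for the question's setting):

```python
import hashlib
import time

password, salt = b'24_characters_ends_in_==', b'9charsalt'   # illustrative

# One iteration (the question's setting) vs. a modern count:
fast = hashlib.pbkdf2_hmac('sha1', password, salt, 1)
slow = hashlib.pbkdf2_hmac('sha1', password, salt, 100_000)

t0 = time.perf_counter()
hashlib.pbkdf2_hmac('sha1', password, salt, 100_000)
print(f"100k iterations took {time.perf_counter() - t0:.3f}s")

# Each extra iteration multiplies the attacker's per-guess cost the same
# way it multiplies yours -- that is the whole point of PBKDF2.
```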
If the password is exactly 24 characters long with a static "==" at the end, it does effectively reduce the search space to 22 characters, as any password without the "==" is invalid and would not be attempted.
These are not part of the original password, it comes from base64 encoding and that is reversible. See in my answer.
---
Commutative von Neumann algebras and localizable measure spaces
This is not my subject so I apologize if my question is too obvious or understood from other pages.
I read some pages such as
Reference for the Gelfand-Neumark theorem for commutative von Neumann algebras and
von neumann algebras and measurable spaces.
If I understand correctly, there is some correspondence between localizable measure spaces and commutative von Neumann algebras given by
$$(\Omega,\nu)\mapsto L^{\infty}(\Omega,\nu).$$
But I wanted to clarify:
(1) What is the correct notion of a morphism of commutative von Neumann algebras? Is it a normal *-homomorphism? What is the exact definition of normal? is it the same as being σ-weakly continuous?
(2) Is it true that the opposite category of the category of commutative von Neumann algebras (with the appropriate class of morphisms) is equivalent to the category of localizable measure spaces and measurable maps? Or do we need to use another type of morphisms between localizable measure spaces?
Any good references on the above will be highly appreciated.
@DmitriPavlov would probably know.
In fact, he says it in his question: "...subcategory of von Neumann algebras and their morphisms (σ-weakly continuous morphisms of unital C*-algebras)"
(1) Yes to both questions. (2) Yes, provided that a measurable map is defined as an equivalence class modulo equality on a conegligible set. More details can be found at http://ncatlab.org/nlab/show/measurable+locale, http://mathoverflow.net/questions/20740/is-there-an-introduction-to-probability-theory-from-a-structuralist-categorical-p/20820#20820, and http://mathoverflow.net/questions/49426/is-there-a-category-structure-one-can-place-on-measure-spaces-so-that-category-th/49542#49542 (the last link has a list of further references).
Clarification to (2): equality on a conegligible set is understood in the sense that two measurable maps f and g are identified if and only if the preimages of any measurable set have a negligible symmetric difference.
It looks to me like waitaminute gave a counter-example. If $f$ is the identity map on $I^{\parallel}$ and the map $g$ is the one exchanging $t^+$ with $t^-$, then the preimage of $I^-$ is $I^-$ by $f$ and $I^+$ by $g$. The symmetric difference is thus $I^{\parallel}=I^-\cup I^+$, which is not negligible.
@IlanBarnea: I^- is not measurable, so this is not a counterexample.
@DmitriPavlov Thanks! What is a good reference for this equivalence of categories?
@IlanBarnea: Apparently, there is no single unified reference (I wish I had time to write one). I believe that by carefully combing through Fremlin's books you might be able to extract an equivalence between measurable spaces and measure algebras (= measurable locales). The equivalence between measurable locales and commutative von Neumann algebras is quite elementary, almost a one-liner.
@IlanBarnea: And now there is a reference: https://arxiv.org/abs/2005.05284
If one thinks of measurable functions on the interval [0,1] as a Cartan subalgebra of a semisimple Lie algebra, then measurable automorphisms become analogous to the Weyl group, which is the permutation group for gl(n); hence one gets a definition of an infinite permutation group and may study its representations, Young diagrams, etc.
The technical part of your problem is resolved in the sadly little-known article "On point realization of $L^\infty$-endomorphisms" by Vesterstrøm and Wils in Math. Scand. 25 (1970) (journal link). By the way, von Neumann algebras are considered from a categorical viewpoint
by Guichardet in a paper in Bull. Math. Soc. 90 (1966).
For the paper of Guicharget see http://dmitripavlov.org/scans/guichardet.pdf
As far as I see they prove something for $\Omega$ a locally compact space and $\nu$ a positive Radon measure. Does this cover all localizable measure spaces? Also, they show that any normal *-homomorphism $$L^{\infty}(\Omega,\nu)\xrightarrow{F} L^{\infty}(\Omega_1,\nu_1)$$ is induced from a map $\Omega\xrightarrow{f} \Omega_1$. Is there any simple way to characterize the maps $f$ that are obtained this way? Does it give an equivalence of categories?
The split interval $I^{\parallel} = \{t^+ : t \in [0,1]\} \cup \{t^- : t \in [0,1]\}$ yields the standard counterexample to the second question (details can be found in Fremlin Vol 3I, section 343, especially 343J, the exercises and the notes and comments at the end of the section). There usually are many maps of the measure space that induce the same morphisms of the measure algebra.
The identity homomorphism of the measure algebra of $I^{\parallel}$ is induced both by the identity map $f$ and by the map $g$ exchanging $t^+$ with $t^-$. Since the measure algebra uniquely determines the associated von Neumann algebra, both these maps induce the identity map of the algebra. Obviously $f(x) \neq g(x)$ for all $x \in I^{\parallel}$, so no identification modulo zero helps here.
Similarly, the measure algebra of $I^{\parallel}$ is canonically isomorphic to the measure algebra of the unit interval, and the two maps $t^{\pm} \mapsto t^-$ and $t^{\pm} \mapsto t^+$ both induce this isomorphism, but they are nowhere equal.
The question in the original post does not define the equivalence relation on measurable maps, so it has different answers depending on the equivalence relation chosen. With the correct choice one does get an equivalence of categories.
Does anyone have .pdf formats for the .tex files in Fremlin Vol 3I?
@IlanBarnea: http://libgen.education/search.php?req=fremlin+measure+theory
---
ECP5 Versa Board Example
I'm struggling to get my design on the ECP5 Versa board running. Currently it's just for hardware verification so there's not much going on. So this is my top entity...
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
USE work.config_package.ALL;
ENTITY top_entity IS
PORT(
SYSCLK : IN std_logic;
CLKOUT : OUT std_logic;
SOD : OUT std_logic_vector(73 DOWNTO 0);
ds : IN std_logic_vector(5 downto 0)
);
END ENTITY top_entity;
ARCHITECTURE rtl OF top_entity IS
SIGNAL osc, clk_160m, clk_50m, clk_i, rst_i, clken_1MHz, clken_1kHz, led : std_logic;
COMPONENT ECP5PLL IS
PORT(CLKI : IN std_logic;
CLKOP : OUT std_logic;
CLKOS : OUT std_logic
);
END COMPONENT ECP5PLL;
COMPONENT OSCG
GENERIC (
DIV : Integer := 128 );
PORT (
OSC : OUT std_logic := 'X' );
END COMPONENT;
COMPONENT powerup_rst IS
GENERIC(
rst_time : IN integer := 5000 --100ms
);
PORT(
clk_i : IN std_logic;
rst_i : IN std_logic;
force_rst : IN std_logic;
rst_o : OUT std_logic;
rst_n_o : OUT std_logic
);
END COMPONENT powerup_rst;
COMPONENT clockdivider_const
GENERIC(
div_value : integer;
reset_on_disable : boolean
);
PORT(
clk_i : IN std_logic;
rst_i : IN std_logic;
enable : IN std_logic;
clk_out_en : OUT std_logic;
clk_out : OUT std_logic
);
END COMPONENT clockdivider_const;
BEGIN
SOD(0) <= led;
SOD(1) <= '1';
SOD(2) <= '0';
SOD(3) <= '1';
SOD(4) <= '0';
SOD(5) <= '1';
SOD(6) <= ds(0);
SOD(7) <= '1';
SOD(9) <= '0';
SOD(10) <= '0';
SOD(11) <= '0';
SOD(12) <= '0';
SOD(13) <= '0';
SOD(14) <= '0';
oscillator : OSCG
GENERIC MAP (
DIV => 32 )
PORT MAP(
OSC => osc
);
ECP5PLL_inst : COMPONENT ECP5PLL
PORT MAP(
CLKI => osc,
CLKOP => clk_i,
CLKOS => clk_50m
);
SOD(15) <= clk_50m;
SOD(SOD'left downto 16) <= (OTHERS => '0');
-- Reset on power up
u1_powerup_rst_inst : powerup_rst
GENERIC MAP(
rst_time => 50
)
PORT MAP(
clk_i => clk_i,
rst_i => '0',
force_rst => '0',
rst_o => rst_i
);
FPGA_INT <= (OTHERS => '0');
CLKOUT <= clk_160m;
-- Clockdivider for 1MHz clock
u1_clockdivider_const_inst : clockdivider_const
GENERIC MAP(
reset_on_disable => true,
div_value => 160
)
PORT MAP(
clk_i => clk_i,
rst_i => rst_i,
enable => '1',
clk_out => OPEN,
clk_out_en => clken_1MHz
);
-- Clockdivider for 1kHz clock
u2_clockdivider_const_inst : clockdivider_const
GENERIC MAP(
reset_on_disable => true,
div_value => 1000
)
PORT MAP(
clk_i => clken_1MHz,
rst_i => rst_i,
enable => '1',
clk_out => OPEN,
clk_out_en => clken_1kHz
);
u3_clockdivider_const_inst : clockdivider_const
GENERIC MAP(
reset_on_disable => true,
div_value => 1000
)
PORT MAP(
clk_i => clk_i,
rst_i => rst_i,
enable => clken_1kHz,
clk_out => led,
clk_out_en => open
);
END ARCHITECTURE rtl;
But besides the static assignments, nothing works:
SOD(0), which should be flickering at approximately 1 Hz, is just statically '0', so it seems the oscillator doesn't work. I first tried the external 100 MHz oscillator, but that didn't work either.
The PLL and the OSC component seem to be inferred, according to the build report:
OSC 1/1 100% used
PLL 1/4 25% used
Another thing that puzzles me is the following message during synthesis
WARNING - I/O Port SOD[6] 's net has no driver and is unused.
But SOD[6]'s driver should be ds[0] which is assigned to the dip switch pin H2
This is the first time I'm using Diamond and the ECP5 and I don't know if I have to include something more in my project or if the usage of the library components is wrong. It's hard to find any reference for such a design.
Did you check in simulation whether the signal 'led' toggles every 1000 ms?
@MituRaj No; would I do that using ModelSim? I'd have to familiarize myself with the tool then... but does it allow me to simulate Lattice IP blocks such as the OSCG and PLL?
Yes, Lattice Diamond should have support for third-party simulators like ModelSim, where you just have to compile the libraries of the Lattice blocks. By the way, doesn't Lattice have its own simulation tool?
I think for simulation you need a testbench that replaces the clock IPs. Do I have to do any special clock distribution or is there a global reset I have to clear?
I'm still struggling with this, so if anyone has experience with the ECP5 or event better with the Versa board, please let me know.
Try asking EDAboard, they have good response for tool related queries.
I'll give it a try
The issue has been resolved. As it turned out, you explicitly need to add the corresponding chip library, in my case the one for the ECP5:
library ecp5um;
use ecp5um.components.all;
Thanks for posting the answer to this issue, but where do you put this library reference if your code is in verilog instead of vhdl?
I am encountering the exact same problem with ECP5UM.
@JAM - Hi, I have converted your post into the comment above, so that it can be treated as a clarification question (which is allowed in comments on SE, but please read the [tour] & [help] to see how SE differs from typical forums and to see why your posting as an answer wasn't allowed). If the site member who kindly wrote this answer can clarify regarding the library reference then great. If not, then you will need to ask a new question. In that new question, you can link to this question for context, but the new question must be complete itself. Thanks.
@JAM Unfortunately I've never used verilog so I don't have an answer to your question
---
In PostgreSQL, how can I get the body of a prepared statement?
My postgresql slow query log is showing lines like:
2014-07-11 21:00:34 GMT LOG: duration: 539.036 ms execute S_1: COMMIT
2014-07-11 21:00:39 GMT LOG: duration: 608.964 ms execute S_1: COMMIT
2014-07-11 21:00:39 GMT LOG: duration: 604.911 ms execute S_1: COMMIT
Is there a way for me to retrieve what prepared statement S_1 is so I can see which query is being slow?
It's right there at the end of the log message. In this case, the prepared statement is a COMMIT.
This format applies to statements prepared via the extended query protocol. If you're using an SQL PREPARE, you'll find the original statement in a DETAIL message on the following line:
2014-07-11 21:00:39 GMT LOG: duration: 0.118 ms statement: EXECUTE q
2014-07-11 21:00:39 GMT DETAIL: prepare: PREPARE q AS SELECT 1;
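Note also that within your own session you can inspect prepared statements via the `pg_prepared_statements` system view. It only covers the current session, so it will not reveal an `S_1` belonging to another backend (that naming is typically produced by a client driver such as pgJDBC preparing statements over the extended protocol):

```sql
-- lists statements prepared in the current session only
SELECT name, statement, prepare_time
FROM pg_prepared_statements;
```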
---
Sending a request with a NodeMCU when a key is pressed
I want to send a request to a server with the GET method with URL parameters when a key is pressed on the NodeMCU. But I can't get this to work; I get the below error in the serial monitor:
ets Jan 8 2013,rst cause:4, boot mode:(3,7)
wdt reset
load 0x4010f000, len 1384, room 16
tail 8
chksum 0x2d
csum 0x2d
v951aeffa
~ld
My main code:
#include <ESP8266WiFi.h>
const char* ssid = "MobinNet1365";
const char* password = "G12229M1Q64";
const char* host = "webhook.site";
const int button = 8;
int temp = 0;
void setup() {
Serial.begin(115200);
pinMode(button, INPUT);
delay(10);
Serial.println();
Serial.print("Connecting to ");
Serial.println(ssid);
WiFi.mode(WIFI_STA);
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED){
delay(500);
Serial.print(".");
}
Serial.println("");
Serial.println("WiFi connected");
Serial.println("IP address: ");
Serial.println(WiFi.localIP());
}
void loop() {
temp = digitalRead(button);
if (temp == HIGH) {
Serial.println(host);
WiFiClient client;
const int httpPort = 80;
String url = "/df35c8cd-0398-4c92-b55f-b9e36629b309";
url += "?switche=";
url += "1";
client.print(String("GET ") + url + " HTTP/1.1\r\n" + "Host: " + host + "\r\n" + "Connection: close\r\n\r\n");
}
}
My request sending code is below:
#include <ESP8266WiFi.h>
const char* ssid = "your-ssid";
const char* password = "your-password";
const char* host = "";
void setup() {
Serial.begin(115200);
delay(10);
Serial.println();
Serial.print("Connecting to ");
Serial.println(ssid);
WiFi.mode(WIFI_STA);
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
Serial.println("");
Serial.println("WiFi connected");
Serial.println("IP address: ");
Serial.println(WiFi.localIP());
}
int value = 0;
void loop() {
delay(5000);
++value;
Serial.print("connecting to ");
Serial.println(host);
WiFiClient client;
const int httpPort = 80;
if (!client.connect(host, httpPort)) {
Serial.println("connection failed");
return;
}
String url = "/";
/* url += "?param1=";
url += param1;
url += "?param2=";
url += param2;
*/
Serial.print("Requesting URL: ");
Serial.println(url);
client.print(String("GET ") + url + " HTTP/1.1\r\n" + "Host: " + host + "\r\n" + "Connection: close\r\n\r\n");
unsigned long timeout = millis();
while (client.available() == 0) {
if (millis() - timeout > 5000) {
Serial.println(">>> Client Timeout !");
client.stop();
return;
}
}
while (client.available()) {
String line = client.readStringUntil('\r');
Serial.print(line);
}
Serial.println();
Serial.println("closing connection");
}
And my key press detector code is below:
const int button = 8;
int temp = 0;
void setup() {
Serial.begin(9600);
pinMode(button, INPUT);
}
void loop() {
temp = digitalRead(button);
if (temp == HIGH) {
Serial.println("LED Turned ON");
delay(1500);
}
else {
Serial.println("LED Turned OFF");
delay(1500);
}
}
My circuit is:
NodeMcu => MicroSwitch
VIN => C
D0 => NO
G => 10k => NO
How can I make it so that when the key is pressed it sends a request?
I don't quite understand--what is ypur actual code? One chunk of code never initializes "host", another does. What is the actual code that causes the error? Are you making a request between two NodeMCUs?
No, it's just for sending a request to a server when a key is pressed, but I get this error:
ets Jan 8 2013,rst cause:4, boot mode:(3,7)
wdt reset
load 0x4010f000, len 1384, room 16
tail 8
chksum 0x2d
csum 0x2d
v951aeffa
~ld
The code under the heading "main code" is my actual code.
Please include only the relevant code.
why do you believe that the ets Jan 8 2013,rst cause:4, boot mode:(3,7) .... message is an error message?
IMHO, some considerations:
const int button = 8;
const int button = 5; // CHANGE TO Pin "D1 GPIO5"
GPIO8 is used to connect the flash chip; you may change to pin "D1", which is GPIO5.
Check your board's pinout.
To prevent the function from being called non-stop within the loop, you may use millis();
for example :
void GetUrl() {
  temp = digitalRead(button);
  if (temp == HIGH) {
    Serial.println(host);
    WiFiClient client;
    const int httpPort = 80;
    if (!client.connect(host, httpPort)) {  // the original snippet never connected before printing
      Serial.println("connection failed");
      return;
    }
    String url = "/df35c8cd-0398-4c92-b55f-b9e36629b309";
    url += "?switche=";
    url += "1";
    String link = String("GET ") + url + " HTTP/1.1\r\n" + "Host: " + host + "\r\n" + "Connection: close\r\n\r\n";
    // Serial.println( "link : " + link );
    client.print(link);
  }
}
unsigned long previousMillis = 0;
const long interval = 1000 * 0.5; // 500ms (0.5 sec)
void loop() {
  unsigned long currentMillis = millis();
  if (currentMillis - previousMillis >= interval) {
    previousMillis = currentMillis;  // without this reset, GetUrl() fires on every pass after the first interval
    GetUrl();
  }
}
Reference :
https://www.arduino.cc/en/Tutorial/BuiltInExamples/BlinkWithoutDelay
https://iotbyhvm.ooo/gpio-pins-esp8266/
| common-pile/stackexchange_filtered |
cocos2d zorder is not working
is there any tutorial on zorder for cocos2d? I have a sprite in a parent layer that always appears behind the sprite in the child layer. I thought just setting the z would work, but I can see that is not possible.
Try this; here the z value is the z-order:
CCSprite *sprite = [CCSprite spriteWithFile:@"..."];
[self addChild:sprite z:100];
---
Prove that $\frac{f(x)}{x^n}=\frac{f^{(n)}(\theta x)}{n!},0<\theta <1$ if $f^{'}(0)=...=f^{(n-1)}(0)=0$ using Cauchy's mean value theorem
I don't know how to apply the theorem to this problem.
By this theorem, if two functions $f$ and $g$ are defined on $[a,b]$ continuous on $[a,b]$, differentiable on $(a,b)$ and $g^{'}(x)\neq 0$ for every $x\in (a,b)$, then there exists point $c\in (a,b)$ so that
$$\frac{f(b)-f(a)}{g(b)-g(a)}=\frac{f{'}(c)}{g^{'}(c)}$$
Can we define $f(x)=x^n$ and $g(x)=n!$? Can we define $\theta$ as a point $c$?
In fact,
\begin{eqnarray}
\frac{f(x)}{x^n}=\frac{f(x)-f(0)}{x^n-0^n},
\end{eqnarray}
by the Cauchy Mean Value Theorem, there is $c_1$ such that $c_1$ is between $0$ and $x$ and
\begin{eqnarray}
\frac{f(x)}{x^n}=\frac{f(x)-f(0)}{x^n-0^n}=\frac{f'(c_1)}{nc_1^{n-1}}.
\end{eqnarray}
Again
\begin{eqnarray}
\frac{f'(c_1)}{nc_1^{n-1}}=\frac{f'(c_1)-f'(0)}{nc_1^{n-1}-n0^{n-1}},
\end{eqnarray}
and by the Cauchy Mean Value Theorem, there is $c_2$ such that $c_2$ between 0 and $c_1$ and
\begin{eqnarray}
\frac{f'(c_1)}{nc_1^{n-1}}=\frac{f'(c_1)-f'(0)}{nc_1^{n-1}-n0^{n-1}}=\frac{f''(c_2)}{n(n-1)c_2^{n-2}}.
\end{eqnarray}
Repeating this process until the $n$-th derivative appears (that is, $n$ applications in total) gives $\frac{f(x)}{x^n}=\frac{f^{(n)}(c_n)}{n!}$ with $0<c_n<x$; writing $c_n=\theta x$ for some $0<\theta<1$ completes the proof.
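For completeness, the general $k$-th step of this chain (with $c_0 = x$) can be written as
\begin{eqnarray}
\frac{f^{(k-1)}(c_{k-1})}{\frac{n!}{(n-k+1)!}\,c_{k-1}^{n-k+1}}=\frac{f^{(k)}(c_k)}{\frac{n!}{(n-k)!}\,c_k^{n-k}},\qquad 0<c_k<c_{k-1},
\end{eqnarray}
so after $n$ applications the denominator becomes $\frac{n!}{0!}\,c_n^{0}=n!$.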
| common-pile/stackexchange_filtered |
Linkbutton's event in GridView not firing its event on 2nd Time
Seems to be a weird problem but my Linkbutton which is in GridView is not firing its event on 2nd Time.
In Detail:
I have a GridView containing a LinkButton that fires an event. The event fires perfectly the first time, but it does not fire (no postback happens) when I click the button a second time.
<asp:GridView ID="dg1" runat="server" OnSorting="dg1_Sorting" OnRowCreated="GridViewSortImages"
SkinID="grid" Width="100%" Font-Underline="false" HeaderStyle-Font-Underline="false"
OnRowCommand="dg1_RowCommand" AllowPaging="True" HeaderStyle-HorizontalAlign="Left"
OnRowDataBound="dg1_RowDataBound" ShowFooter="true">
<Columns>
<asp:TemplateField ItemStyle-Width="15px">
<ItemTemplate>
<asp:ImageButton ID="imgbtndel" runat="server" OnClick="imgbtndel_Click" ImageUrl="~/css/Images/delete.gif"
OnClientClick="return confirm('Do you want to Delete')"></asp:ImageButton>
</ItemTemplate>
<ItemStyle Width="15px" />
</asp:TemplateField>
<asp:TemplateField HeaderText="Account Type ID" SortExpression="ID" ItemStyle-Width="60px"
HeaderStyle-Font-Underline="false">
<ItemTemplate>
<asp:LinkButton ID="lnkbtnno" runat="server" ForeColor="#123B61" Text='<%#Eval("ID") %>'
OnClick="lnkbtnno_Click"></asp:LinkButton>
</ItemTemplate>
<HeaderStyle Font-Underline="False" />
<ItemStyle Width="60px" />
</asp:TemplateField>
<asp:TemplateField HeaderText="Description" SortExpression="Description" ItemStyle-Width="200px">
<ItemTemplate>
<asp:Label ID="lblDesc" runat="server" Text='<%#Eval("Description") %>'></asp:Label>
</ItemTemplate>
<ItemStyle Width="200px" />
<HeaderStyle HorizontalAlign="Left" />
</asp:TemplateField>
</Columns>
</asp:GridView>
C#
protected void lnkbtnno_Click(object sender, EventArgs e)
{
LinkButton lnkbtn = sender as LinkButton;
txtaccid.Text = lnkbtn.Text;
Label lblDesc = lnkbtn.FindControl("lblDesc") as Label;
txtdesc.Text = lblDesc.Text;
}
Can you show the code that is responsible for gridveiw data binding?
Your LinkButton is not firing its event the second time because some other event is fired instead, so the LinkButton's event gets dropped. Check whether any dynamically added controls are firing their events. A developer tool like Firebug (F12) can help you inspect what is happening.
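A frequent cause of this exact symptom is rebinding the GridView on every request in Page_Load, which recreates the row controls and can swallow the LinkButton's click on subsequent postbacks. A minimal sketch of the usual fix (BindGrid() here is a hypothetical helper that queries the data and binds dg1):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Only bind on the initial GET; on postbacks the grid is rebuilt
    // from ViewState, so the LinkButton click events can be matched up.
    if (!IsPostBack)
    {
        BindGrid(); // hypothetical method that fills and binds dg1
    }
}
```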
| common-pile/stackexchange_filtered |
Input field data not deleted once add
I have a form which is including with cakephp3 and JS.
I have a question how can i store data into the input field?
There are some conditions for this task: once the user types into the input field, that data should not be removed, even if the page is reloaded or the browser is closed; when the user returns to the page, the same data should appear in the field. Only when the user submits the form should the data be cleared. Is there any possible solution for this?
Maybe store the input.value on localStorage until the user submits the data? Then delete localstorage and reset the input field data to an empty string after that?
yes something like that.
But what event should we listen for? I am assuming you have only 1 submit button which would post the form? Otherwise you would need a second button which would save the data to local storage. Otherwise I cannot think of a way to store the data in local storage just by the user typing in and pressing nothing.
I am trying to save the data directly when the user types (keyup or keydown) in the input field, and when they submit the form it should be removed so the form becomes fresh again.
Try the code below.
Did the code do what you wanted? :)
Hi @WizardOfOz, I found the solution: I take the value from the input field and store it in local storage first, and when I submit the form the value gets deleted. Thanks for the suggestion.
As mentioned in the comments, you can handle this by;
Saving the input.value to localstorage
Set your input value to whatever is in local storage
On submit you clear localstorage
// Get the input field
let input = document.getElementById("myInput");
let submit = document.getElementById('submit');
// Save data to local storage whenever the user types in the field
input.addEventListener('keyup', () => {
    localStorage.setItem('Name', JSON.stringify(input.value))
});
//Populate input data with whatever is in localstorage
function populateUI(){
if (localStorage.getItem("Name") !== null) {
input.value = JSON.parse(localStorage.getItem('Name'))
}
}
// Reset localstorage and input field value after submit
submit.addEventListener('click', () => {
localStorage.removeItem("Name");
input.value = ''
}, false)
populateUI();
| common-pile/stackexchange_filtered |
Using artifacts from Eclipse Luna P2 repository in Maven
Has anybody an idea how to define a regular Maven dependency against an artifact that is hosted in an Eclipse P2 repository (e.g http://download.eclipse.org/releases/luna)? The only answer I found was this: Use dependencies from Eclipse p2 repository in a regular Maven build?.
In my case I cannot change from the pom-first approach to the manifest-first approach and especially, I don't want to change the packaging from bundle to eclipse-plugin. Unfortunately, I cannot find any up-to-date releases in any of the public Maven repositories.
The artifacts I am interested in are:
org.eclipse.equinox:org.eclipse.equinox.http.jetty:3.0.200.v20131021-1843
org.eclipse.equinox:org.eclipse.equinox.http.servlet:1.1.500.v20140318-1755
I have tried it with simply defining the Eclipse P2 Repository in my root pom:
<repository>
<id>eclipse-luna-repository</id>
<url>http://download.eclipse.org/releases/luna</url>
<layout>p2</layout>
</repository>
Of course this does not work and I get the following error:
Could not transfer artifact org.eclipse.equinox\:org.eclipse.equinox.http.jetty\:pom\:3.0.200.v20131021-1843 from/to eclipse-luna-repository (http\://download.eclipse.org/releases/luna)\: No connector available to access repository eclipse-luna-repository (http\://download.eclipse.org/releases/luna) of type p2 using the available factories WagonRepositoryConnectorFactory
Has anybody an idea how to solve this problem?
Both those artifacts are Eclipse plugins and will probably only work in the Eclipse environment. They both have dependencies on other Eclipse plugins.
That's right but these dependencies are usual OSGi package dependencies that are easy to handle. Both bundles are "normal" OSGi bundles there are no Eclipse-specifics.
What's wrong with the answer provided in the link you provided? Looks ok to me.
Really, the Equinox project should simply also publish their bundles in Maven central. There is a bug requesting this, but it apparently needs more votes ;-)
The referenced solution does not work properly with the current Tycho release and I don't want to repackage the builds. It should be possible to add a "normal" dependency in other poms in the form of "org.eclipse.equinox:org.eclipse.equinox.http.jetty:3.0.200.v20131021-1843", ... I don't want to work against my internal builds, because these are external third-party artifacts.
Agreed... It is really a drawback that most of the Equinox artifacts are not released in Maven Central or any other public repository. I got so far that I was able to fetch the bundles from the Luna P2 repository, copy them to a folder, and install them in the local repository. Unfortunately, this solution is insufficient because it does not allow referencing the artifacts in the form of "org.eclipse.equinox:org.eclipse.equinox.http.jetty:3.0.200.v20131021-1843".
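One manual workaround (a sketch only; the jar file name below is whatever file the P2 repository actually serves) is to install the downloaded bundle into the local Maven repository under explicit coordinates with mvn install:install-file, after which it can be referenced in poms exactly as org.eclipse.equinox:org.eclipse.equinox.http.jetty:3.0.200.v20131021-1843:

```
# Install a bundle jar fetched from the Luna P2 repository into the local
# Maven repository under proper GAV coordinates (file name is an example).
mvn install:install-file \
  -Dfile=org.eclipse.equinox.http.jetty_3.0.200.v20131021-1843.jar \
  -DgroupId=org.eclipse.equinox \
  -DartifactId=org.eclipse.equinox.http.jetty \
  -Dversion=3.0.200.v20131021-1843 \
  -Dpackaging=jar
```

For team-wide builds, deploy:deploy-file against an internal repository manager (e.g. Nexus or Artifactory) achieves the same result without per-machine installs.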
| common-pile/stackexchange_filtered |
Fastest Way to Find nearest indices between two sorted lists
I have a sorted list of floats of length n and an n x n list of lists, where each nested list is also sorted. I want to know the index of the element in list that is nearest to but less than each entry in matrix (essentially, the index in list after which to insert each element of matrix). For example, if n=5:
list=Sort@RandomReal[{-7,7},5]
matrix=Sort/@RandomReal[{-7,7},{5,5}]
For what I’m trying to do, n will be very large (on the order of 2000) and I will have to iterate over this many times, so the code needs to be as fast as possible. Right now, my method for doing this is to flatten the matrix (I won’t actually care about the matrix form in the end---all I want is the index for each entry) and using the following code:
OrdinalFunction[list1_,nearestFn_][list2_]:=With[{r=Flatten@nearestFn[list2]},r-UnitStep[list1[[r]]-list2]]
indexFn=OrdinalFunction[list,Nearest[list->"Index"]];
matrixElements=Flatten@matrix
indexFn[matrixElements]
The advantage of this code is that I can feed the entirety of matrixElements to indexFn to get the indices in list that are nearest to but less than each element. But the code makes no use of the fact that each list in matrix is sorted, and I feel that there must be a faster way to do this. Also, since the lists in matrix are sorted, once one of the elements returns the index of the last element in list, all subsequent elements will also return this index. Is there a way to take advantage of these facts to make it (ideally much) faster?
Suppose that a and b are two sorted lists of real numbers of length n and m, respectively. The following CompiledFunction finds for each element b[[j]] in the second list the desired position in the first list:
cf = Compile[{{a, _Real, 1}, {b, _Real, 1}},
Block[{i, bj, idx, n, m},
n = Length[a];
m = Length[b];
idx = Table[0, m];
i = 1;
Do[
bj = Compile`GetElement[b, j];
While[(i <= n) && (Compile`GetElement[a, i] < bj),
++i;
];
idx[[j]] = i - 1;
, {j, 1, m}];
idx
],
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
];
It does so by at most $O(n+m)$ work. Algorithmically, it is similar to the merge step in merge sort.
I am not entirely sure what Nearest[a->"Index"][b] does in this case, but I believe it orders a (because it cannot know that it was ordered before) and then simply performs binary search in the array a. That would take $O( \log_2(n) \, m)$. So the algorithm in cf should be more efficient for large values of n (n = 2000 is actually rather small). A stronger advantage of cf is that both arrays are accessed in order, while binary search jumps around in a. So cf should
produce fewer cache misses (this is probably also irrelevant because a fits completely into L1 cache) and
be more predictable for the processor's branch prediction.
Plus cf is parallelized (but Nearest probably is, too).
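For readers outside Mathematica, the merge-style scan in cf can be sketched in plain Python (illustrative only; it returns 0-based counts, which correspond to the 1-based i - 1 results of the compiled code):

```python
def lower_positions(a, b):
    """For each element of the sorted list b, count how many elements of
    the sorted list a are strictly smaller -- the same O(n + m) scan as
    the CompiledFunction above, using a single forward-moving pointer."""
    idx = []
    i, n = 0, len(a)
    for bj in b:
        # advance the shared pointer; it never moves backwards
        while i < n and a[i] < bj:
            i += 1
        idx.append(i)
    return idx
```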
This is how the two algorithms compare on my 8-Core machine:
OrdinalFunction[list1_, nearestFn_][list2_] := With[{r = Flatten@nearestFn[list2]}, r - UnitStep[list1[[r]] - list2]];
indexFn = OrdinalFunction[list, Nearest[list -> "Index"]];
result = indexFn[Flatten@matrix]; // RepeatedTiming // First
result2 = cf[list, matrix]; // RepeatedTiming // First
ArrayReshape[result, {n, n}] == result2
0.0879834
0.00599676
True
Hence, while Nearest requires $O(n^2 \, \log_2(n))$ work, cf should do all the job in $O(n ( n + n) ) = O(n^2)$. Things change when list has length m independent of n. Then Nearest requires $O(n^2 \, \log_2(m))$ and cf requires $O(n ( m + n) )$ work. The latter grows linearly in m, so Nearest should perform better, when m is several orders of magnitude greater than n.
Edit: Swapped the order of the checks i <= n and Compile`GetElement[a, i] < bj to prevent out-of-bounds reading of a.
Amazing! Just to clarify what the code is doing: when you write cf[list,matrix], cf splits matrix into each of its sorted lists because of the specification RuntimeAttributes->{Listable}, correct? Also how could I write a similar code if I also wanted it to work for a sorted list and a matrix whose lists were reverse sorted? Like {7,5.1,0.2,-3,...}.
Yes, it's exactly the Listable attribute that allows to "thread" through the rows of matrix; and because of Parallelization -> True, it does so in parallel. If the rows of matrix are reverse sorted, then you need only to reverse the Do loop, i.e., Do[ ... , {i,m,1,-1}];.
Perfect! Thanks for the help.
You're welcome!
Quick follow-up: since we have Parallelization->True inside the compiled function, what would happen if I used cf in a piece of code that is inside, say, ParallelTable? I assume that there would be some kind of conflict over resources. Would Parallelization automatically be reset to False in cf, or would there still be a benefit to keeping this piece in?
| common-pile/stackexchange_filtered |
What is the meaning of "date for a date"?
I was in a meeting in Ireland where we were choosing deadline for tasks.
In the meeting note there was the following statement:
Task should be done by January 2015 (i.e. "date for a date" )
What is the meaning of "date for a date" ?
One can only guess, since you have X'd out the actual task: perhaps the date on which some other date must be known? As in, by January 1st we will need to have a clear sense of the delivery date for the final deliverable.
I'm removing the XX'd as misleading
Any connection to this? http://www.dateforadate.com/DateForADate.aspx
Other than that, my googlefu is returning a bunch of database query/sql-type results... I have no idea...
@miltonaut no, tasks are totally unrelated to these dating service.
It may mean that in January the team will give an estimate of when the work can be completed. We often set a deadline for when we will finish our assessment of the task and provide the date we can realistically complete it.
This usually means the time by which some other date should or must be decided upon. So in this case: The task of deciding when to do X (or when it will occur) must be decided by Jan 2015.
A date for a date is a common term in technology companies for the date by which you expect to have determined another date. It sounds as though in your meeting you were determining many deadlines for many tasks, and that this determination was itself challenging or time-consuming, and so a deadline had been set for the task of determining the other deadlines. In other words, your group expected to finish assigning all the deadlines by January 2015, even though presumably some of those deadlines would themselves be later dates.
| common-pile/stackexchange_filtered |
jQuery - Masonry showing no error yet not working?
I've been trying for over 7 hours to get the layout sorted. I want the end product to look something along these lines (see first screenshot), but instead I get the result in the second screenshot.
Here's the code that's being used.
HTML -
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<head>
<title>_Box</title>
<link href="styles.css" rel="stylesheet" type="text/css">
<body>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="masonry.js"></script>
<script>
$(function(){
$('#container').masonry({
columnWidth: 150,
itemSelector: 'div'
});
});
</script>
<div id="container" class="clearfix masonry">
<div class="item1"><img src="images/eventbox.png"></img></div>
<div class="item3"><img src="images/forumbox.png"></img></div>
<div class="item2"><img src="images/weekbox.png"></img></div>
<div class="item2"><img src="images/weekbox.png"></img></div>
<div class="item2"><img src="images/weekbox.png"></img></div>
<div class="item2"><img src="images/weekbox.png"></img></div>
<div class="item3"><img src="images/top10box.png"></img></div>
<div class="item1"><img src="images/eventbox.png"></img></div>
</div>
</head>
</body>
CSS -
html {
height:100%;
}
body {
width:900px;
height:100%;
margin:0 auto;
margin-top:100px;
background-image: url(images/gridbg.png);
}
#container {
width:900px;
}
.item1,.item2,.item3 {margin:5px;}
.item1 {width:350px;}
.item2 {width:175px;}
.item3 {width:150px;}
Any ideas? because it seems nothing will work
$('#container').masonry({ columnWidth: 150, itemSelector: 'div' });
@Jashwant Remove the last comma or older versions of IE will throw an error
that was a typo. I never do that, not even in php :)
Anyone else got any ideas? Nothing seems to work!
Your head tag closes at a weird place even though I don't think that is the problem.
Ah yea, thanks though like you guessed thats not fixed anything
I've made a small fiddle, and am trying to tweak it around. If you want to put the size of your images on the divs (in the fiddle) and add it to your question (after re saving it with the good sizes)!
@HugoDozois You beautiful man! haha finally fixed - copy that code as answer and i'll accept yours and +rep
Well here is a fiddle that I made.
I've noticed that by putting low columns width number it improves the way the Masonry works.
So I modified the Scrip to something like that :
$(function () {
$('#container').masonry({
columnWidth: 1,
itemSelector: 'div'
});
});
Also, adding a fixed width/height might help, especially for handling margins around the items, because Masonry seems to have some problems with the margins between elements.
So if your big items is 350px be sure that items that go under are not more than (350 -(2*margin)) /2px so it places them properly.
| common-pile/stackexchange_filtered |
Looping Through Dates in SQL? (Databricks)
I am currently learning SQL and ran into a problem. Through my searches I have found that looping in SQL is a big no-no, so I was wondering if anyone could point me into the correct direction?
The dataframe looks like this:
| Group | ATP Date  | JTH Date  |
|-------|-----------|-----------|
| A     | 5/17/2022 | 6/17/2022 |
| A     | 5/17/2022 | Null      |
| B     | 5/17/2022 | Null      |
| A     | 5/16/2022 | 6/16/2022 |
| B     | 5/16/2022 | 6/16/2022 |
| B     | 5/15/2022 | 6/17/2022 |
| B     | 5/15/2022 | Null      |
| A     | 5/14/2022 | 6/1/2022  |
| A     | 5/13/2022 | Null      |
| A     | 5/13/2022 | 6/1/2022  |
| A     | 5/13/2022 | 6/5/2022  |
I am trying to make a query to pull this:
| Date      | Group | CountNo | CountYes | Ratio (No/Yes) |
|-----------|-------|---------|----------|----------------|
| 5/17/2022 | A     | 1       | 1        | 1              |
| 5/17/2022 | B     | 0       | 1        | 0              |
| 5/16/2022 | A     | 1       | 0        | Null           |
| 5/16/2022 | B     | 2       | 1        | 2              |
| 5/14/2022 | A     | 1       | 0        | Null           |
| 5/13/2022 | A     | 2       | 0        | Null           |
This is what I currently created:
select
max(ATP_Date) as Date,
Group,
sum(
case
when ATP_Date < '2022-05-18'
and JTH_Date > '2022-05-18' then 1
else 0
END
) as CountNo,
sum(
case
when ATP_Date < '2022-05-18'
and JTH_Date IS null then 1
else 0
END
) as CountYes,
sum(
case
when ATP_date < '2022-05-18'
and JTH_Date > '2022-05-18' then 1
else 0
END
) / sum(
case
when ATP_Date < '2022-05-18'
and JTH_Date IS null then 1
else 0
END
) as ratio
from
dataframe
where group = "A"
GROUP BY group
Which outputs this:
| Date      | Group | CountNo | CountYes | Ratio |
|-----------|-------|---------|----------|-------|
| 5/17/2022 | A     | 318     | 1064     | 0.3   |
This is what I want, but I need to do it for each date for ~ the last 4 years, so it looks like the second table posted. I could manually edit the dates for each query, but that would take forever. This made me think of looping. I believe I would basically need to loop through the Select portion with dates in order to get the output I want. If anyone has advice or could point me in the correct direction, it would be greatly appreciated. Thanks.
Should the comparison date be constant (as shown) or dynamic? If needs to be dynamic what is the intended relationship of that comparison date to atp_date?
To avoid using loops here you can include atp_date into the group by clause which will result in 1 row for each combination of that date plus the "grouping" column
It isn't clear why you compare to '2022-05-18' but this appears to be the day following the maximum date found in atp_date. So to avoid hardcoding, you could approach it by using a derived table of 1 row, cross joined to the data:
SELECT
ATP_Date
, grouping
, sum(CASE
WHEN ATP_Date < cj.max_dt AND JTH_Date > cj.max_dt
THEN 1
ELSE 0
END) AS CountNo
, sum(CASE
WHEN ATP_Date < cj.max_dt AND JTH_Date IS NULL
THEN 1
ELSE 0
END) AS CountYes
, sum(CASE
WHEN ATP_date < cj.max_dt AND JTH_Date > cj.max_dt
THEN 1
ELSE 0
END)
/ sum(CASE
WHEN ATP_Date < cj.max_dt AND JTH_Date IS NULL
THEN 1
ELSE NULL
END) AS ratio
FROM dt_query
CROSS JOIN (select max(atp_date) + interval '1 day' max_dt from dt_query) AS cj
GROUP BY
grouping
, atp_date
ORDER BY
atp_date DESC
, grouping
atp_date | grouping | countno | countyes | ratio
:--------- | :------- | ------: | -------: | ----:
2022-05-17 | A | 1 | 1 | 1
2022-05-17 | B | 0 | 1 | 0
2022-05-16 | A | 1 | 0 | null
2022-05-16 | B | 2 | 1 | 2
2022-05-14 | A | 1 | 0 | null
2022-05-13 | A | 2 | 1 | 2
db<>fiddle here
nb: to avoid issues with the term "group" I have used the column name "grouping" instead, and the example sql is written in postgres so there may be some syntax that needs alteration (e.g. the addition of 1 day). Also note that the ratio calculation can result in a divide by zero error, so instead of zero I used NULL.
I couldn't write the query as a comment, so posting it here. If it's not what you are expecting, let me know and I will delete this.
Assuming 2022-05-18 is constant (as per your sample)
Creating sample table
create or replace table dt_query
(group string, atp_date date, jth_date date);
insert into dt_query values
('A','2022-05-17','2022-06-17')
,('A','2022-05-17',NULL)
,('B','2022-05-17',NULL)
,('A','2022-05-16','2022-06-16')
,('B','2022-05-16','2022-06-16')
,('B','2022-05-16','2022-06-15')
,('B','2022-05-16',NULL)
Slightly modified your select statement
select
max(ATP_Date) as Date,
Group,
sum(
case
when ATP_Date < '2022-05-18'
and JTH_Date > '2022-05-18' then 1
else 0
END
) as CountNo,
sum(
case
when ATP_Date < '2022-05-18'
and JTH_Date IS null then 1
else 0
END
) as CountYes,
sum(
case
when ATP_date < '2022-05-18'
and JTH_Date > '2022-05-18' then 1
else 0
END
) / sum(
case
when ATP_Date < '2022-05-18'
and JTH_Date IS null then 1
else 0
END
) as ratio
from
dt_query
GROUP BY group,atp_date
The result matches with what you are expecting.
| common-pile/stackexchange_filtered |
Building new Android App Bundle fails with aapt
I'm trying to setup building the new Android App Bundles via Gradle. I'm building with this command:
./gradlew bundleLiveDebug
(live is my flavor)
The build always fails with:
Execution failed for task ':app:bundleLiveDebugResources'.
> Failed to execute aapt
I read in the documentation that it requires aapt2 and it should be enabled via android.enableAapt2=true in gradle.properties, but the error is the same.
What's the error are you getting?
Why are you using bundleLiveDebug?
Can you make sure that ./gradlew clean and ./gradlew bundleDebug are working or not?
@sam_k The error is written in the post. Clean works. I can't use bundleDebug because I have two flavors; the flavor (live) needs to be specified.
You're just giving the highest-level error. If you look at the build log there will be the actual errors from aapt2. They should be just above the stacktrace and should start with "Error:" or "AAPT:"
I have already solved it, see my answer
I found the problem. Android App Bundle feature requires Android Gradle Plugin version 3.2.0+, I had version 3.1.2
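For reference, a minimal sketch of the project-level build.gradle change (the exact version string below is just an example of a 3.2.0+ release):

```
// project-level build.gradle
buildscript {
    repositories {
        google()
        jcenter()
    }
    dependencies {
        // Android Gradle Plugin 3.2.0 or newer is required for App Bundles
        classpath 'com.android.tools.build:gradle:3.2.0'
    }
}
```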
| common-pile/stackexchange_filtered |
iOS Show some error about PNG image
I am using Xcode 4.3.3, I already tried to build my app before and it run. But now I arranged my files inside my project's folder, grouped them by 'button' , 'icon' , 'background'... I also copied some resources/images in other folder put them in order but now I'm in trouble.
Im trying to build my app again in Xcode, I found a CopyPNG Error:
Can't find /Users/vella/Desktop/Sample/res/2.png
Command /Users/vella/Desktop/installer/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/PrivatePlugIns/iPhoneOS Build System Support.xcplugin/Contents/Resources/copypng failed with exit code 1
Now, I don't know if I missed some png files. I also read some answers like I should save PNG files as NOT INTERLACED or there is a PNG file that is corrupted. How will I know what PNG file is missing or corrupted?
Check whether you added 2.png into your project folder. While adding images into your project, click the checkbox "Copy items into destination group folder". If your PNG file is corrupted, it will be in red colour in your project.
I checked it and I was able to add 2.png in my project folder but the error is still there.
Delete 2.png (move to trash) and then add 2.png again; try this.
You have to save your PNG files as NOT INTERLACED. For example, bu using Photoshop, go to menu File->Save For Web and Devices. Unchecked the box of 'Interlaced' and save the file. Usually interlaced box is unchecked already. Hope that helps
Clean up the png file in your project and re-import it:
Delete (backup) that file in project navigator. Just delete and move file to trash can.
Go to project Build Phases:
i. Select the root project
ii. Select TARGETS
iii. Select Build Phases tag
iv. Expand Copy Bundle Resources list
If you see the file you just deleted still exists (maybe in red), delete it
Re-import the file again
This worked for me.
Yep, add it again, and this time don't arrange or rearrange any stuff in your resources folder.
Also give specific naming to the images so that its easy for you to remember and implement them later in ur project.
It didn't work. I just ran my previous project (the one I haven't arranged yet).
I just had the same problem. I always do the same thing to add images to my projects but I never had a problem like this before.
I found in the error message that the path where Xcode was searching for the image was wrong. Xcode was searching a path like "...../My Project Folder/images/favourites.png", but the correct path is like "...../My Project Folder/myProjectName/images/favourites.png". To solve this, I created a folder with the path "...../My Project Folder/images", then backed up my images and deleted all of them from the project window, selecting "move to trash". I copied the images from my backup folder to this new folder, then drag-dropped them into "Supporting Files" and selected "Copy items into destination group's folder (if needed)". Finally I cleaned the project and ran it. It worked for me. Good luck.
| common-pile/stackexchange_filtered |
Why is the div bigger than the font-size?
See http://jsfiddle.net/6taruf65/1/
The following html appears as 20 pixels tall in Firefox31 and Chrome36 on Windows7. I expected it to be 16 pixels tall.
<style>
* { margin: 0; padding: 0; border: 0; overflow: hidden; vertical-align: baseline; }
</style>
<div style="font-size: 16px;">help 16px</div>
Notice the bottom of the p is cut off when you limit the div's height to 16px. That suggests to me there's unused space above the text. It might be a problem with vertical alignment. But then how would I go about preventing that issue when I want to precisely control the height and alignment of the text?
Adding line-height: 16px fixes it. A more universal solution would be to use line-height: 1em, though.
This is because the default line-height value that is applied by the user agent. Some of web browsers apply a line-height of 1.2em or 1.2 or 120% to the elements while the spec recommends:
We recommend a used value for normal between 1.0 to 1.2.
CSS Level 2 Spec states:
line-height
On a block container element whose content is composed of inline-level
elements, line-height specifies the minimal height of line boxes
within the element. The minimum height consists of a minimum height
above the baseline and a minimum depth below it, exactly as if each
line box starts with a zero-width inline box with the element's font
and line height properties.
The accepted values are normal | <number> | <length> | <percentage> | inherit
Hence, you could override the applied value by adding a line-height of 16px or simply a value of 100% or 1em or 1 to the element. (Click on each one to see the demo).
<number> - e.g. line-height: 1 - is the preferred value of line-height as it always refers to the element's font size. Therefore you don't have to specify different values for different font sizes.
For further info about the difference between these values, you could refer to my answer here:
Calculate line-height with font in rem-value
Maybe you need line-height: 16px;
The div size is not 20px because the font-size is larger than 20px when you have letters that hang below the baseline (such a p and q). If you want the div itself to be of height 20px, just set the div css to height: 20px.
JSFiddle
<div style="height: 20px; font-size: 20px; border:1px solid #444;">help 20px (with cut off text)</div>
<br />
<div style="height: 23px; font-size: 20px; border:1px solid #444;">help 20px (without cut off text)</div>
<br />
| common-pile/stackexchange_filtered |
Normalizr: how to work with an array of arrays
I'm trying to normalize a config for a keyboard. Here's a small portion of it:
keyboard.json
{
"rows": [
[
{ "label": "~" },
{ "label": "1" },
{ "label": "2" },
{ "label": "3" },
{ "label": "4" }
],
[
{ "label": "tab", "size": 1.5 },
{ "label": "Q" },
{ "label": "W" },
{ "label": "E" },
{ "label": "R" }
]
]
}
My goal is to create a normalised object, something like this:
{
entities: {
keys: {
'k~0': { label: '~' },
'k10': { label: '1' },
// etc..
},
rows: {
0: ['k~0', 'k10'],
1: ['ktab0', 'kQ0'],
// etc..
}
},
result: {
rows: [0, 1, /* etc */]
}
}
Currently I have this:
import { normalize, schema } from 'normalizr'
const keySchema = new schema.Entity('keys', {}, {
idAttribute: (k) => {
return `k${k.label}${k.location || 0}`
}
})
let rowI = 0
const rowSchema = new schema.Entity('rows', {
keys: [ keySchema ]
}, {
// btw: I know this is not the best way to set an id, but I'll solve that later
idAttribute: (a, b, i) => rowI++
})
const keyboardSchema = {
rows: [ rowSchema ]
}
export default normalize(keyboardData, keyboardSchema)
This basically "copies" the rows from the original json to entities, without making a new key for each row entity. See screenshot:
So here's where I'm lost. I think I need some kind of intermediate step, "within" rowSchema but I don't understand how.
Any help appreciated!
Well, thank you StackOverflow, for helping me with this issue :)
What I needed was a processStrategy function for rowSchema. It was the first time I'd used it. It took some time, but now I understand what happens.
For anyone who's interested, a detailed breakdown (read bottom-up):
// 5: Lastly here's where we define how a key entity looks like
const keySchema = new schema.Entity('keys', {}, {
idAttribute: (k) => {
// 6: This time we construct the id from properties found in the json
return `k${k.label}${k.location || 0}`
}
})
// 2: Defining an Entity makes it appear in the `entities` object within our normalised data
const rowSchema = new schema.Entity('rows', {
// 3: Here we do the same as before. We assign `keys` the keySchema...
// ..but wait.. `keys` is not defined in our json. We simply have a nested array. See 4
keys: [ keySchema ]
}, {
// (EXTRA) Each row we assign an id by checking which row's entry matches the current
idAttribute: (value, parent) => parent.rows.indexOf(value),
// 4: THIS is how we tell rowSchema that it should assign the current value to keys
processStrategy: (value) => ({ keys: value })
})
// 1: This looks like destructuring; `rows` should match rows in the JSON
// Here we say that rows contains an array and each entry should be given the "layout" of `rowSchema`
const keyboardSchema = {
rows: [ rowSchema ]
}
// All nice and ESNexty:
import { normalize, schema } from 'normalizr'
const keySchema = new schema.Entity('keys', {}, {
idAttribute: (k) => `k${k.label}${k.location || 0}`
})
const rowSchema = new schema.Entity('rows', {
keys: [ keySchema ]
}, {
idAttribute: (keys, { rows }) => rows.indexOf(keys),
processStrategy: (keys) => ({ keys })
})
const keyboardSchema = {
rows: [ rowSchema ]
}
export default normalize(keyboardData, keyboardSchema)
This will do; now the output of rows will be 0: {keys: ['k~0', 'k10', 'k20']}. I wonder if I could get rid of the keys prop there,
so it'd be: 0: ['k~0', 'k10', 'k20']
| common-pile/stackexchange_filtered |
Insufficient privilege when creating an index in Oracle 11g
I am trying to create an index in Oracle; my DDL:
create index OMD_DOCTEXT2_CTX on table_name(col_name)
indextype is ctxsys.context local
parameters ('datastore CTXSYS.FILE_DATASTORE filter ctxsys.null_filter lexer E2LEX wordlist E2WORDLIST stoplist E2STOP section group E2GROUP') parallel 4;
I am getting error :
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-10758: index owner does not have the privilege to use file or URL datastore
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 366
Any ideas?
From the Oracle Text Documentation:
File and URL datastores enable access
to files on the actual database disk.
This may be undesirable when security
is an issue since any user can browse
the file system that is accessible to
the Oracle user. The FILE_ACCESS_ROLE
system parameter can be used to set
the name of a database role that is
authorized to create an index using
FILE or URL datastores. If set, any
user attempting to create an index
using FILE or URL datastores must have
this role, or the index creation will
fail.
For example, the following statement
sets the name of the database role:
ctx_adm.set_parameter('FILE_ACCESS_ROLE','TOPCAT');
where TOPCAT is the role that is
authorized to create an index on a
file or URL datastore. The CREATE
INDEX operation will fail when a user
that does not have an authorized role
tries to create an index on a file or
URL datastore.
So, does your user have the necessary role?
I have the necessary permissions.
But I found that some of the file paths in the column are invalid; those files are missing. Is that perhaps the issue?
Missing files were in fact the issue.
| common-pile/stackexchange_filtered |
Only one data listed from xml
Database.xml
<?xml version="1.0" encoding="iso-8859-1"?>
<staff>
<data1>Hello.jpg</data1>
<data1>World.jpg</data1>
</staff>
Class:
for (int i = 0; i < nodeListCountry.getLength(); i++) {
items.add(elementText.getChildNodes().item(0).getNodeValue());
}
**Only one value is listed.**
I'd like to add all the XML data to the items list collection, but it lists the same value each time.
I am having a hard time understanding this question but I am assuming you mean that your items container only has 1 data value in it. This is because you are only adding the first element (element 0) to your items container.
items.add(elementText.getChildNodes().item(0).getNodeValue());
Will only add the first one.
Thanks for your answer, but what should I write to get all the items instead of item(0)?
The first thing you want to do is take the List items = new ArrayList(); out of the for loop, so that it's not recreated on every iteration. Have you tried item(i)?
It might need to be in another for loop, or outside the for loop.
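The fix suggested above, indexing with the loop variable instead of always reading item(0), can be sketched compactly in Python with the standard-library ElementTree parser (the original is Java; this is only an illustration of the pattern):

```python
import xml.etree.ElementTree as ET

xml = "<staff><data1>Hello.jpg</data1><data1>World.jpg</data1></staff>"
root = ET.fromstring(xml)

# Visit every <data1> child instead of repeatedly reading the first one
items = [node.text for node in root.findall("data1")]
print(items)  # ['Hello.jpg', 'World.jpg']
```

In the Java version above, the analogous change would be to index the node list with i inside the loop and to create the items list once, outside the loop.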
| common-pile/stackexchange_filtered |
json boolean vs integer - which takes up less space?
When sending a value in JSON over the wire, is it better to use a boolean or an integer to use up less space?
e.g:
{
foo: false
}
Or:
{
foo: 0
}
Would using a number use less space, considering it's just a single digit, compared to 4 or 5 characters for a boolean value (true/false)?
Also is there a speed difference between the two approaches if you convert them from JSON to object format?
Firstly, this is micro-optimisation, and very unlikely to be important. If you are transporting thousands or millions of such values, it might become significant; but in that case, you probably want something much more efficient than JSON anyway (a plain CSV would be better in many cases, but ideally you'd use some packed binary format).
Secondly, JSON is a way of representing data in a string; so storing or sending JSON means you are storing or sending strings. Measuring the size of the data is therefore trivial: how long is the string? The string 0 has one character; the string false has five characters.
Thirdly, if you're optimising for space, you'd remove all insignificant whitespace, so your examples should be {"foo":false} (13 characters) and {"foo":0} (9 characters). Note that you can't, as you have in your example, skip the quote marks around foo - that is not valid JSON.
Fourthly, how much memory or other resources the structure will take up when you convert it from JSON into an object depends on what language you're using, what implementation of that language, and any number of other factors, so is completely unanswerable (and, again, a micro-optimisation that is very unlikely to be important).
I think an integer is the better solution because, besides using less space (and consequently being potentially faster to parse), it is also more future-proof. Someone can easily convert it into a three (or more) state variable if needed by just assigning other values like -1, 2, 3..., while the conversion from a boolean would be less straightforward.
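The character counts discussed above are easy to verify empirically; here is a small Python sketch:

```python
import json

# Serialize both variants with compact separators (no insignificant whitespace)
as_bool = json.dumps({"foo": False}, separators=(",", ":"))
as_int = json.dumps({"foo": 0}, separators=(",", ":"))

print(as_bool, len(as_bool))  # {"foo":false} 13
print(as_int, len(as_int))    # {"foo":0} 9
```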
| common-pile/stackexchange_filtered |
Replacing a DECLARE in an SQL command with a cell value
I'm working with a SQL query in an excel workbook and trying to make it a little more user friendly by pulling the date value from a cell.
Right now I have the query setup in the Connection Properties that looks like this:
DECLARE @BEGIN AS DATETIME = DATEADD(second, DATEDIFF(second, GETDATE(), GETUTCDATE()), '2020-11-08 00:00:00:000');
Ideally I'm looking for something like DECLARE @BEGIN = Sheet1!a1
Is it possible to replace it this way, or should I be looking to something else?
I finally found an answer that solved this for me here:
https://stackoverflow.com/questions/5434768/how-to-pass-parameters-to-query-in-sql-excel
| common-pile/stackexchange_filtered |
deployed cassandra datastax enterprise and got java.lang.AssertionError
I am trying to deploy DataStax Enterprise 4.5.1 (Cassandra) on my cluster, and I always get a java.lang.AssertionError. The log is below:
INFO [main] 2014-10-13 06:01:03,142 CLibrary.java (line 63) JNA not found. Native methods will be disabled.
INFO [main] 2014-10-13 06:01:03,155 CacheService.java (line 105) Initializing key cache with capacity of 100 MBs.
INFO [main] 2014-10-13 06:01:03,167 CacheService.java (line 117) Scheduling key cache save to each 14400 seconds (going to save all keys).
INFO [main] 2014-10-13 06:01:03,169 CacheService.java (line 131) Initializing row cache with capacity of 0 MBs
INFO [main] 2014-10-13 06:01:03,177 CacheService.java (line 141) Scheduling row cache save to each 0 seconds (going to save all keys).
INFO [main] 2014-10-13 06:01:03,471 ColumnFamilyStore.java (line 249) Initializing system.schema_triggers
INFO [main] 2014-10-13 06:01:03,522 ColumnFamilyStore.java (line 249) Initializing system.compaction_history
INFO [SSTableBatchOpen:1] 2014-10-13 06:01:03,547 SSTableReader.java (line 223) Opening /apps/datastax-enterprise/9161/ddata/data/system/compaction_history/system-compaction_history-jb-4349 (163599 bytes)
ERROR [SSTableBatchOpen:1] 2014-10-13 06:01:03,565 SSTableReader.java (line 233) Cannot open /apps/datastax-enterprise/9161/ddata/data/system/compaction_history/system-compaction_history-jb-4349; partitioner org.apache.cassandra.dht.RandomPartitioner does not match system partitioner org.apache.cassandra.dht.Murmur3Partitioner. Note that the default partitioner starting with Cassandra 1.2 is Murmur3Partitioner, so you will need to edit that to match your old partitioner if upgrading.
INFO [Thread-1] 2014-10-13 06:01:03,569 DseDaemon.java (line 477) DSE shutting down...
ERROR [Thread-1] 2014-10-13 06:01:03,635 CassandraDaemon.java (line 199) Exception in thread Thread[Thread-1,5,main]
java.lang.AssertionError
at org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:1263)
at com.datastax.bdp.gms.DseState.setActiveStatus(DseState.java:171)
at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:478)
at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:384)
Does anyone know about this, or can anyone give me more information? Any answer will be appreciated.
partitioner org.apache.cassandra.dht.RandomPartitioner does not match system partitioner org.apache.cassandra.dht.Murmur3Partitioner.
Note that the default partitioner starting with Cassandra 1.2 is Murmur3Partitioner, so you will need to edit that to match your old partitioner if upgrading.
@RussS Thanks for the reminder; I have modified that and will try it again later.
Problem solved. As @RussS said, this is because the partitioner doesn't match; we should use partitioner: org.apache.cassandra.dht.RandomPartitioner in cassandra.yaml.
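For reference, the decisive cassandra.yaml line would look like this (assuming, per the error message above, that the existing data was written with RandomPartitioner):

```yaml
# cassandra.yaml: must match the partitioner the existing SSTables were written with
partitioner: org.apache.cassandra.dht.RandomPartitioner
```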
| common-pile/stackexchange_filtered |
Uber jar not reading external properties files
I am creating an uber jar, i.e. a jar with dependencies, for my project. I have a bunch of properties files that the project uses. I want to be able to change these properties files before running my project, so I want them to be outside of the jar. Here are the relevant sections of my pom:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.6.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.2</version>
<configuration>
<artifactSet>
<excludes>
<exclude>**/*.properties</exclude>
<exclude>**/*.json</exclude>
</excludes>
</artifactSet>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>path.to.main.Main</mainClass>
</manifest>
<manifestEntries>
<Class-Path>.</Class-Path>
<Class-Path>conf/</Class-Path>
</manifestEntries>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-resources-plugin</artifactId>
<version>2.4</version>
<executions>
<execution>
<id>copy-resources</id>
<phase>install</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${basedir}/target/conf</outputDirectory>
<resources>
<resource>
<directory>src/main/resources</directory>
<includes>
<include>**/*.properties</include>
<include>**/*.json</include>
</includes>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
So essentially, I want to create a folder ${basedir}/target/conf and copy all the .properties and .json files to it. Also, here is how I am reading the files:
InputStream in = this.getClass().getClassLoader().getResourceAsStream("filename.properties");
I am facing a couple of problems:
When I do mvn clean install, I still see all the .properties and .json files in the classes folder. Shouldn't they have been excluded?
The conf folder is created with all of the files, but when I run the jar and try to change the properties, the changes are not picked up. How can I ensure that the conf folder is being added to the classpath?
I want to be able to load the .properties and .json files from the src/main/resources folder while I am developing, so I don't want to put them in a separate folder. Is this possible?
I was facing the same issue, where the uber jar was not reading the external configuration file.
I tried the configuration below and it worked like a charm. I'm including it here since it may help someone else whose uber jar is not reading external files.
I am not sure if this is the best way, but I haven't found any other solution online :)
I included the resources using IncludeResourceTransformer.
Using a filter removed the properties file from the uber jar.
With conf/ on the classpath, the properties are read from the external folder.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.3</version>
<executions> <!-- Run shade goal on package phase -->
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<!-- add Main-Class to manifest file -->
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<manifestEntries>
<Main-Class>JobName</Main-Class>
<Class-Path>conf/</Class-Path>
</manifestEntries>
</transformer>
<transformer
implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
<resource>src/main/resources/config.properties</resource>
<file>${project.basedir}/src/main/resources/config.properties</file>
</transformer>
</transformers>
<finalName>FinalJarName</finalName>
<filters>
<filter>
<artifact>groupId:artifactId</artifact>
<excludes>
<exclude>**/*.properties</exclude>
</excludes>
</filter>
</filters>
</configuration>
</execution>
</executions>
</plugin>
good luck.
| common-pile/stackexchange_filtered |
How do I make List<T> include item as variables references (ref item)?
Assume we have some codes
public class A { }
public class B : A { }
static void Main(string[] args)
{
A a1 = new A();
A a2 = new A();
A a3 = new A();
Process(ref a1);
Console.WriteLine(a1.GetType().Name == "B"); // show True : that i want
List<A> list = new List<A>();
list.Add(a2);
list.Add(a3);
ProcessList(ref list);
Console.WriteLine(a2.GetType().Name == "B"); // False : I want True
Console.WriteLine(a3.GetType().Name == "B"); // False : I want True
Console.Read();
}
static void ProcessList(ref List<A> list)
{
for (int i = 0; i < list.Count; i++)
{
list[i] = new B();
}
}
static void Process(ref A item)
{
item = new B();
}
With ref marked on the Process method, I can change the caller's (Main's) variable of type A to refer to a B. But in the same way, when I want to process a list of A in ProcessList, I can't make the caller's (Main's) A variables refer to B objects.
It seems List<T> does not store its items as references to my variables; how can I make this happen?
Why did you make ProcessA a generic method and name the generic type A? That would shadow access to the type A.
Also, you are just passing the list by ref; the elements of the list are not passed by ref. When you call list.Add(a), as you can see, it's not list.Add(ref a). You would have to create your own custom class that holds items by ref.
What you are asking for is effectively to have in your list pointers to pointers (or in the C# vernacular, "references to references"). C# doesn't have that feature. See the marked duplicate for more details.
@LeQuangHoa You can do it like this: public class Box<TItem> { private Action<TItem> _onReplace; public Box(TItem item, Action<TItem> onReplace) { this.Item = item; this._onReplace = onReplace; } public void Replace(TItem item) { this.Item = item; this._onReplace(item); } public TItem Item { get; private set; } }
Then change these lines in Main: List<Box<A>> list = new List<Box<A>>(); list.Add(new Box<A>(a2, r => a2 = r)); list.Add(new Box<A>(a3, r => a3 = r));
Then change ProcessList to look like static void ProcessList(List<Box<A>> list) And then inside change the assignment = to list[i].Replace(new B());
Looks like the question is very close to being re-opened - it would be interesting to see if someone comes up with anything better than "no" as covered in the current duplicate - http://stackoverflow.com/questions/24054186/how-to-do-a-pointer-to-pointer-in-c (which I think is a relatively good one for this post).
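As the comments explain, the list stores references to objects, not references to the caller's variables, so rebinding a slot cannot retarget the original variable. The same reference model can be illustrated in Python (an illustration only; the question itself is about C#):

```python
class A: pass
class B(A): pass

a2 = A()
lst = [a2]       # the list slot holds a reference to the object, not to the variable a2

lst[0] = B()     # rebinds the list slot to a brand-new B instance

print(type(lst[0]).__name__)  # B
print(type(a2).__name__)      # A  (the outer variable still refers to the old object)
```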
| common-pile/stackexchange_filtered |
How to solve $\frac{3\vert x\vert }{\vert x\vert -2}<\frac{3}{2}$
I'm having difficulty figuring out how to start off this question: $\frac{3\vert x\vert }{\vert x\vert -2}<\frac{3}{2}$
I'm tempted to square both sides, but I don't think it will help in simplifying the inequality. In short, I have no idea where to start and was wondering if anyone could help.
Just multiply by $|x|-2$ and remember that when you multiply an inequality by a negative number the inequality sign changes.
Can you solve $\dfrac{3t}{t - 2} < \dfrac{3}{2}$?
You can consider two cases: $x \ge 0$ and $x<0$ to get rid of the absolute value.
We have $$\frac {3|x|}{|x|-2}-\frac 32<0$$
So, $$\frac {6|x|-3(|x|-2)}{2(|x|-2)}<0$$
So, $$\frac {|x|+2}{|x|-2}<0$$
So, $|x|\in (-2,2)$
Since $|x|\ge0$, we get:
$$|x|<2$$ as our final condition. This means:
$$x \in (-2,2)$$
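A quick check with sample points is consistent with this interval:

$$x=0:\ \frac{3\cdot 0}{0-2}=0<\frac{3}{2},\qquad x=1:\ \frac{3\cdot 1}{1-2}=-3<\frac{3}{2},\qquad x=3:\ \frac{3\cdot 3}{3-2}=9\not<\frac{3}{2}$$

so the tested points inside $(-2,2)$ satisfy the inequality and the point outside does not. Note that $x=\pm2$ is excluded in any case, since the denominator vanishes there.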
This method makes it much simpler than I thought, thanks!
| common-pile/stackexchange_filtered |
LINQ to SQL query filter, with the name match ignoring multiple punctuation characters
I have the following LINQ to SQL query which works fine but looks ugly:
var filter = "filter";
query = query.Where(x =>
x.Name.Replace("'", "").Replace("\"", "").Replace("#", "").Replace("/", "").Replace("-", "").Contains(filter) ||
x.FullName.Replace("'", "").Replace("\"", "").Replace("#", "").Replace("/", "").Replace("-", "").Contains(filter));
It'd be nice to be able to do something similar to this (which isn't possible because LINQ to Entities won't recognize the method):
var filter = "filter";
var removals = new string[] { "'", "\"", "#", "/", "-" };
query = query.Where(x =>
Replaces(x.Name, removals).Contains(filter) ||
Replaces(x.Full, removals).Contains(filter));
... but I can't figure out how that could be written. I've written predicates that dealt with entire expressions, but not with just a single property.
This is a LINQ to SQL expression, so I can't just pull it out into its own method or I'll get an error like:
Additional information: LINQ to Entities does not recognize the method
'System.String RemoveAll
Please remove the second snippet; it's not working code but pseudocode. Your question could get closed for this. I'm pretty sure you'll get a solution that works similarly to what you suggest.
@t3chb0t It's clear that part isn't part of the existing code, so it should be fine.
It'd be great if someone could show how to write a custom query-provider for this case :-) I unfortuantelly cannot do it (yet).
@t3chb0t writing a custom query provider is lots, lots of fun :-)
@Mat'sMug as a matter of fact I've been trying to crack query providers for quite some time, and even studied your question (I have it as favourite) but I somehow still don't get it :-[
@t3chb0t I'm not sure I get it either =)
You are removing those parts rather than replacing them, so a more appropriate name would be RemoveAll.
You can make your own extension method like this :
public static class Extensions
{
public static string RemoveAll(this string source, string[] charsToRemove)
{
return charsToRemove.Aggregate(source, (current, t) => current.Replace(t, string.Empty));
}
}
If you really want to replace them with something you can do it like this :
public static string ReplaceAll(this string source, string[] charsToRemove, string[] charsToReplace)
{
string result = source;
for (var i = 0; i < charsToRemove.Length; i++)
{
result = result.Replace(charsToRemove[i], charsToReplace[i]);
}
return result;
}
Example usage :
var filter = "filter";
string[] itemsToRemove = {"'", @"""",};
query = query.Where(x =>
x.Name.RemoveAll(itemsToRemove).Contains(filter) ||
x.FullName.RemoveAll(itemsToRemove).Contains(filter));
UPDATE
LINQ to SQL would require you to call .AsEnumerable(), .ToList() or .ToArray() first before operating on strings; you might lose some performance from that, but the other way is to write your own custom query provider or stick with what you have.
A single for loop is fine too as they should be in the same order anyway, I assume .Zip will be more costly.
This won't work because it's LINQ to SQL, so the RemoveAll method won't be recognized -- that's why .Replace is being used (it translates in LINQ to SQL). This will generate something like: "Additional information: LINQ to Entities does not recognize the method 'System.String RemoveAll"
@DocHoffiday Just remove the part that makes it an extension method - wouldn't that work? I can provide an example if you need one.
@denis - it's not just extension methods, any custom method will throw the "LINQ to Entities does not recognize the method" error.
Ok, this is true, they are cool but very costly if not used properly. @DocHoffiday I guess this would require a custom query-provider to interpret the extensions or other methods. This is quite a lot of work.
@DocHoffiday can you verify if the updated answer will work for you ?
It doesn't. Throws an "LINQ to Entities does not recognize the method 'System.String Aggregate" error.
@dochoffiday would it work if you operate on list instead of string ?
It would work if you do query.AsEnumerable().Where(..) but this will get everything from the server and filter it on the client side. Probably not so optimal.
Indeed not optimal but for small amount of data it's fine.
@denis i'm sure it would work, but making it a list first will kill performance
@dochoffiday It depends on the size of the list as it will perform O(n) operation.
var filter = "filter";
query = query.Where(x =>
x.Name.Replace("'", "").Replace("\"", "").Replace("#", "").Replace("/", "").Replace("-", "").Contains(filter) ||
x.FullName.Replace("'", "").Replace("\"", "").Replace("#", "").Replace("/", "").Replace("-", "").Contains(filter));
If you need that many replacements for a simple search then I think either the data or the filter is broken.
I guess all those delimiters (?) have some meaning, usually they have and they look like they have, so try to build the filter according to the rules instead of changing the data to match the invalid filter.
You could create the following helper extension method:
private static string RemoveAll(this string text, IEnumerable<char> removals)
{
return new string(text.ToCharArray().Except(removals).ToArray());
}
Then your code will looks like:
var removals = new [] { '\'', '"', '#', '/', '-' };
query = query.Where(x =>
x.Name.RemoveAll(removals).Contains(filter) ||
x.FullName.RemoveAll(removals).Contains(filter)).ToArray();
Sample test:
string s = "1'2'3#4-5";
var removals = new [] { '\'', '"', '#', '/', '-' };
Console.WriteLine(s.RemoveAll(removals));
Output:
12345
This won't work because it's LINQ to SQL, so the RemoveAll method won't be recognized -- that's why .Replace is being used (it translates in LINQ to SQL). This will generate something like: "Additional information: LINQ to Entities does not recognize the method 'System.String RemoveAll"
You could write an extension method that combines the Replace calls and returns a new query.
static class LinqExtensions
{
public class Projection<T>
{
public T Item { get; set; }
public string Field1 { get; set; }
public string Field2 { get; set; }
}
public static IQueryable<T> ContainsEx<T>(this IQueryable<T> query,
string[] toRemove, string filter, Expression<Func<T, Projection<T>>> projection)
{
var projectionQuery = query.Select(projection);
foreach (var str in toRemove)
{
projectionQuery = projectionQuery.Select(x => new Projection<T>
{
Field1 = x.Field1.Replace(str, ""),
Field2 = x.Field2.Replace(str, ""),
Item = x.Item
});
}
return projectionQuery
.Where(x => filter.Contains(x.Field1) || filter.Contains(x.Field2))
.Select(x => x.Item);
}
}
And use it:
var removeCharacters = new[] { ",", "#", "/", "-" };
var query = context.Accounts;
var result = query.ContainsEx(removeCharacters, "filter", x => new LinqExtensions.Projection<Accounts>
{
Field1 = x.Name,
Field2 = x.FullName,
Item = x
}).ToArray();
You should change the Where condition to .Where(x => x.Field1.Contains(filter) || x.Field2.Contains(filter)) otherwise you could get different results. Nevertheless +1
Here's a generic solution I created to solve these sorts of issues, and the specifics for this particular one. It uses an Attribute class to mark methods (normally extension methods) as needing special processing for LINQ to SQL/EF and an ExpressionVisitor to re-write the queries for each marked method.
First, the Attribute class:
[AttributeUsage(AttributeTargets.Method)]
public class ExpandMethodAttribute : Attribute {
private string methodName;
public ExpandMethodAttribute(string aMethodName = null) => methodName = aMethodName;
public MethodInfo ExpandingMethod(MethodInfo mi) {
var methodType = mi.DeclaringType;
var origMethodName = mi.Name;
var argTypes = new[] { typeof(Expression) }.Concat(mi.GetParameters().Skip(1).Select(pi => pi.ParameterType)).ToArray();
var bf = BindingFlags.Public | BindingFlags.NonPublic | (mi.IsStatic ? BindingFlags.Static : BindingFlags.Instance);
var expandMethodName = methodName ?? $"{origMethodName}Expander";
var em = methodType.GetMethod(expandMethodName, bf, null, argTypes, null);
if (em == null)
throw new NullReferenceException($"Unable to find MethodInfo for {methodType.Name}.{expandMethodName}");
else
return em;
}
}
Now, an IQueryable extension to trigger the expansion:
public static class IQueryableExt {
private static object Evaluate(this Expression e) => (e is ConstantExpression c) ? c.Value : Expression.Lambda(e).Compile().DynamicInvoke();
/// <summary>
/// ExpressionVisitor to replace x.method("x..z") to methodexpander(x, "x..z")
/// </summary>
private class ExpandableMethodVisitor : ExpressionVisitor {
public override Expression Visit(Expression node) {
if (node?.NodeType == ExpressionType.Call) {
var callnode = node as MethodCallExpression;
var ema = callnode.Method.GetCustomAttribute<ExpandMethodAttribute>();
if (ema != null)
return (Expression)ema.ExpandingMethod(callnode.Method).Invoke(callnode.Object, callnode.Arguments.Select((ae, n) => n == 0 ? ae : ae.Evaluate()).ToArray());
}
return base.Visit(node);
}
}
private static T ExpandMethods<T>(this T orig) where T : Expression => (T)(new ExpandableMethodVisitor().Visit(orig));
public static IQueryable<T> Expand<T>(this IQueryable<T> q) => q.Provider.CreateQuery<T>(q.Expression.ExpandMethods());
}
Finally, the specific extension needed to filter characters from a field expression:
public static class LINQExt {
// body only for LINQ to Objects use
[ExpandMethod("CleanUp")]
public static string RemoveAll(this string src, string removeChars) => removeChars.Aggregate(src, (ans, ch) => ans.Replace(ch.ToString(), ""));
private static Expression CleanUp(this Expression dbFn, string charsToRemove) {
var toCharE = Expression.Constant(String.Empty);
var replaceMI = typeof(string).GetMethod("Replace", new[] { typeof(string), typeof(string) });
var methodBody = dbFn;
foreach (var ch in charsToRemove)
methodBody = Expression.Call(methodBody, replaceMI, Expression.Constant(ch.ToString()), toCharE);
return methodBody;
}
}
Now you can use the RemoveAll extension in a query, and process the query with Expand before instantiating it.
So, for the example:
var filter = "filter";
var removals = "'\"#/-";
query = query.Where(x =>
x.Name.RemoveAll(removals).Contains(filter) ||
x.Full.RemoveAll(removals).Contains(filter))
.Expand();
This could probably be added to LINQKit to be handled with their IQueryable/IProvider wrappers.
I think the trick here is to move the logic into SQL: build it out as a SQL function, then call the SQL function from your LINQ query.
Something like this, perhaps:
https://stackoverflow.com/questions/20131632/calling-a-sql-user-defined-function-in-a-linq-query
| common-pile/stackexchange_filtered |
How to use Crashlytics with iOS / OS X today view extensions?
Since today extensions run as a separate process, I am sure they will not log any crashes out of the box. I assume we need to initialize Crashlytics in the widget separately, e.g. in the viewDidLoad method of the TodayViewController.
Is anybody already using Crashlytics inside any iOS / OS X extensions? If so, how did you implement it?
I am also wondering if it would make sense to create a separate app in Crashlytics just for the extension.
I haven't been able to use almost anything inside the extensions (Flurry does not work, Crashlytics does not work, and even a .h file I have with some assert macros does not work...)
This is a support question for Crashlytics. Please contact them directly.
@wolffan that's disappointing so far.
@Kerni as far as I know they ask people to create issues here on stackoverflow with their tag. However, I cannot find the quote anymore … so I might be wrong.
Crashlytics support got in touch with me and provided these steps. I tested them, and it now works for my iOS 8 app.
Add the Crashlytics Run Script Build Phase to your extension's target as well (copy / paste the same you added to your main app)
Add the Crashlytics.framework to your extension's linked libraries
(e.g. simply check the extension target in its file inspector)
Add Crashlytics.startWithAPIKey("yourApiKey") to your extension's view controller's initWithCoder method. (In Apple's today extension template it is called TodayViewController by default.)
If you have no initWithCoder method yet, it should look like this afterwards:
required init(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
Crashlytics.startWithAPIKey("yourApiKey")
}
Forgot to mention: since every extension needs to have its own bundle ID, Crashlytics creates a separate app in the dashboard on its own anyway.
This fails 50% of the time for me. It may be a problem with multiple extensions (Today and Watch). Basically, I need to rebuild my project all the time because it fails due to Crashlytics not finding the dSYM about every other time.
How can it be? I do Fabric.with([Crashlytics.self]) inside init(coder aDecoder: NSCoder), it says [Fabric] [Fabric +with] called multiple times. Only the first call is honored, please pass all kits you wish to initialize
Does Crashlytics call CrashlyticsDelegate when a previous crash occurred on an app extension? https://stackoverflow.com/q/61801745/9636
Here is Twitter's own guide to implementing it:
https://twittercommunity.com/t/integrate-fabric-crashlytics-with-ios-8-extension/28905
So, copy the libraries, for instance if you're using CocoaPods you can add Fabric and Crashlytics to the Extension target:
In Podfile:
target :TodayExtension do
pod 'Fabric'
pod 'Crashlytics'
end
and run pod install. And don't forget to set Build Active Architecture Only to NO, or you may get linker errors
Then in your TodayViewController:
#import <Fabric/Fabric.h>
#import <Crashlytics/Crashlytics.h>
...
-(id)initWithCoder:(NSCoder *)aDecoder {
self = [super initWithCoder:aDecoder];
[Fabric with:@[CrashlyticsKit]];
return self;
}
and copy the Fabric Run Script in build phases to your Today Extension target, and copy the Fabric entry from the info plist from the main application into your Today Extension's info plist
this is something that i forgot "copy the Fabric Run Script in build phases to your Today Extension target"
Here is the official how-to describing how to use Crashlytics in iOS extensions:
Add this line to your viewController's initWithCoder method Fabric.with([Crashlytics.self])
Copy the "Fabric" Dictionary from your main app's Info.plist and paste into your extension's Info.plist.
Copy/paste the Run Script Build Phase from your main app's target into your extension's Run Script Build Phase.
And... you good to go!
Another pitfall: You have to enable Answers in the Fabric dashboard for the extension. Don't know if that information is part of the above linked how-to, since I am greeted with "Sorry, you don't have access to that topic!" when I follow the link.
The answer from maremmle also works if you want to add Crashlytics to share extensions on iOS 8.0+. Just remember to put [Crashlytics startWithAPIKey:@"apiKey"]; inside the init method of your first ViewController.
Thanks for all instructions, it works fine in my Share Extension.
I did notice that for my Share Extension, the Fabric Answers dashboard did not show actual data for:
Active Users
Median Total Time Spent in App per User
It does for the companion app.
So I was wondering how the Answers SDK would determine this. The most logical approach seems to be to monitor the UIApplication notifications.
Since the Lifecycle of an Extension is related to a ViewController, these UIApplication notifications are not posted. And hence Fabric doesn't know when the Extension is active.
So I implemented the following solution, which provides the above data in the Fabric Dashboard:
In 'viewDidLoad' of the Extensions main ViewController, post UIApplicationDidBecomeActiveNotification which will trigger the start for Fabric.
Prior before closing the Extension (via completeRequestReturningItems:completionHandler: or cancelRequestWithError:) post UIApplicationWillResignActiveNotification. This will trigger the stop for Fabric.
Please note there is a delay between the action on device, and when the data becomes visible in the Dashboard.
Especially for Active Users. It takes around 20-30 seconds after the Extension is presented. But when the Extension is closed, it might take up to 5 minutes before the Active Users is decremented.
WordPress plugin + Composer?
I'm making WordPress plugin that is using a few third party libraries. Is it common practice to use Composer for WordPress plugin?
If it's okay to use it, then I assume that I should provide all Composer files along with my plugin, because I don't want to make people manually run composer install.
Another question is, is it safe to use Composer's autoloading? I configured it to autoload my own classes and the libraries are of course autoloaded as well. That's convenient.
Is using Composer with WordPress plugin an overhead? Or does it have any additional issues?
Most people who install WordPress plugins have no clue what Composer is or does. If you're distributing your plugin to the average demographic, or through the official plugin repo, you should include all dependencies right in your plugin.
Sure thing. Do you think it's ok to include them all through Composer?
This is an old question, but nothing has changed in 3 years. Using Composer to require dependencies in a WordPress plugin/theme is usually a bad idea. PHP does not allow loading more than one class with the same FQN. So if two different plugins install the same library independently, classes from a random installation of the library will be loaded, which may result in really weird bugs (especially if these are different versions of the same library). To avoid such problems you should have only one composer.json instance per project, so in this case Composer should be run at the WordPress level.
In general, if you have the same package installed multiple times, you will probably get some trouble (and this will happen if every plugin maintainer uses Composer on their own). Note that this is not directly related to Composer - if you copy libraries manually you will get exactly the same problem (maybe even worse).
If you really want to use Composer in your plugin you may try tools like humbug/php-scoper which will modify namespaces of used dependencies and make them unique.
There are several tools that can be used to prefix a WordPress plugin. Beyond humbug/php-scoper, you can also use Interfacelab/namespacer, coenjacobs/mozart, and PHP-Prefixer.
I'm PHP-Prefixer's lead developer. I've written this tutorial about how to use Composer in a WordPress plugin: New Tutorial: Using PHP Composer in the WordPress Ecosystem
"why oh why" or "why, oh why"?
Is this punctuated correctly?
If happy little bluebirds fly beyond the rainbow,
Why oh why can’t I?"
Or should “oh why” be set off by commas?
If happy little bluebirds fly beyond the rainbow,
Why, oh why, can’t I?
I'm curious...seeing as that is an established lyric from the song "Over the Rainbow", is there a reason you wouldn't look up the punctuation online or did you find conflicting versions of punctuation for the song?
Because it's a song lyric and why oh why can't is four consecutive quavers to be sung as a single rising phrase.
Be that as it may, @StoneyB, it's traditionally not sung as straight quarter notes and some artistic stylization is usually applied that would make the addition of slight pauses (commas) helpful to the singer.
@KristinaLopez There's always a big ritard on it, because it's the final phrase, and any artist may perform it with any degree of rubato; but the original and still canonical performance by Judy Garland takes it as written. I think there's a deal of musical symbolism in that tag, with the straining and fluttering giving way finally to an effortless soar on the final phrase.
All true, @StoneyB, but are you advocating commas or no punctuation between the last 5 words?
@KristinaLopez If (as appears to be the case) Yip Harburg wrote it without commas, that's the way it should be printed; lyrics are meant to be sung by singers, not read by readers. (And there's really no reason but convention to point it; people don't say it that way, either.)
I'm voting to close this question as off-topic because song lyrics famously don't have 'incorrect' punctuation.
Shouldn’t you be setting off oh by commas as well, now that you’re at it? You would do that in most contexts: “Oh, why can’t I fly like the bluebirds?”. If you consider oh why a parenthetical to be set off by commas, surely it ought to be “If happy little bluebirds fly beyond the rainbow / Why, oh, why, can’t I?”. But what would that do to this bit in Buenos Aires from Evita: “Birds fly out of here, so why oh why oh why the hell can’t I?”? You’d end up with “Birds fly out of here, so why, oh, why, oh, why, the hell can’t I?”, which is just dreadful.
I think (outside of a music lyric) you can punctuate it any way you want. YOY do people obsess about such things??
I eagerly await a definitive explication of the proper punctuation of "Why oh why oh why oh why did I ever leave Ohio?" from the 1953 musical Wonderful Town.
@StoneyB - I don't know much about music or singing. Your explanation gave me goosebumps when I thought about it and Judy Garland's singing. Thanks for explaining.
I think the commas after the "why"s are rests, and are needed because Garland shifts from a 2/4 rhythm to a 3/4 rhythm beginning at the two measures I've marked with ||. (I'm not saying that's in the score, but I've just listened to the song a bunch of times, and I think that's what she does.)
If |happy |little |bluebirds |fly be|yond the |rainbow, ||why, oh ||why, can't ||I?
Afterthoughts: But maybe the first of those rests comes after "oh" instead of before it. And it should be a 4/4 rhythm, not 2/4.
If |happy little |bluebirds fly be|yond the rainbow, ||why oh, ||why, can't ||I?
The second one is correctly punctuated; however, it does not require a capital "W".
If happy little bluebirds fly beyond the rainbow, why, oh why, can't I?
I think I might have put a semi-colon rather than a comma after 'rainbow'.
With no sources or even explanations as to why you consider the second correctly punctuated (and presumably thus the first incorrectly punctuated), this seems more like just stating an opinion than giving an actual, factual answer.
How to combine two arrays in iOS
Array1 : { A , B , C }
Array2 : { 1 , 2 , 3 }
I need to combine these arrays like:
Combine array :
{
A(1),
B(2),
C(3)
}
How can I implement this?
It would be easier to see what you are trying to achieve if you showed your PHP code as an illustration.
@dasblinkenlight i need in ios
Of course you do. But it is quite unclear what exactly you need to do in ios, so PHP code would help to understand it a great deal: somebody with the knowledge of both PHP and iOS would teach you how to "translate" it.
You can use an NSDictionary:
NSDictionary *dict = [NSDictionary dictionaryWithObjects:array1
forKeys:array2];
NSString *value = dict[@"1"]; //value = @"A";
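If the goal is the combined strings themselves (`A(1)`, `B(2)`, …) rather than a key/value lookup, the operation is just a pairwise zip of the two arrays — in Objective-C that would be a loop over indexes building each entry with `stringWithFormat:`. The idea, sketched language-neutrally in Python:

```python
letters = ["A", "B", "C"]
numbers = [1, 2, 3]

# Pair the arrays element by element and format each pair as "X(n)".
combined = [f"{letter}({number})" for letter, number in zip(letters, numbers)]

assert combined == ["A(1)", "B(2)", "C(3)"]
```

The same shape works for the dictionary answer above: zip decides the pairing, the container (array of strings vs. dictionary) is a separate choice.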
write the contents of application page to a file in WP7
I want to write the entire contents of my application page (e.g. MainPage.xaml) to a file (in isolated storage). How do I do it in WP7? Are there any methods available to parse the page contents and write it to a file in Windows Phone 7?
Why would you want to do this?
I want to extract test results from Microsoft.Silverlight.Testing framework result page. So that I can write them to a file.
There is no built in way to do this.
However there are a couple of approaches you could try:
If the structure is static you could try and extract the resource containing this from the DLL. For future re-use it would be easier to load the page from the DLL again though.
If you're generating a page (or part of a page) at runtime (based on user input/preferences) and you want to be able to save/reload this then just save enough information to be able to recreate it. It's unlikely that XAML would be the best format for this though.
You could create this as you build the UI. Alternatively you could walk the visual tree to get details of all that is rendered. I'd recommend recording as you go so you can more easily keep track of non-default values in the rendered objects.
java.lang.ClassCastException: com.tibco.tibjms.naming.TibjmsFederatedQueueConnectionFactory cannot be cast to javax.jms.QueueConnectionFactory
I am encountering the below exception when trying to look up the JNDI context. A similar question was already answered on this site, identifying a missing tibjms.jar on the classpath as the root cause.
java.lang.ClassCastException: com.tibco.tibjms.naming.TibjmsFederatedQueueConnectionFactory cannot be cast to javax.jms.QueueConnectionFactory
at com.xxx.host.tibco.ConnectionHandler$JMSConnectionFactory.<init>(ConnectionHandler.java:337)
at com.xxx.host.tibco.ConnectionHandler.init(ConnectionHandler.java:94)
at com.xxx.host.tibco.ConnectionHandler.<init>(ConnectionHandler.java:84)
at com.xxx.host.tibco.ConnectionHandler.getInstance(ConnectionHandler.java:63)
at com.xxx.productOne.host.HostGetMemberBalanceRequest.doDecision(HostGetMemberBalanceRequest.java:42)
at com.audium.server.voiceElement.DecisionElementBase.service(DecisionElementBase.java:386)
at com.audium.server.controller.Controller.goToDecision(Controller.java:2857)
at com.audium.server.controller.Controller.goToElement(Controller.java:2687)
at com.audium.server.controller.Controller.continueCall(Controller.java:2511)
at com.audium.server.controller.Controller.goToElement(Controller.java:2742)
at com.audium.server.controller.Controller.continueCall(Controller.java:2511)
at com.audium.server.controller.Controller.doPost(Controller.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:647)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:879)
at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
at java.lang.Thread.run(Thread.java:662)
In addition to this, I have the same piece of code working fine on another server with exactly the same versions of the libraries in Tomcat.
Here is the code snippet of how the context is being looked up:
InitialContext iniCtx;
try {
iniCtx = new InitialContext(oProperties);
PoolableObjectFactory objectFactory = new JMSConnectionFactory(iniCtx);
this.pool = new GenericObjectPool(objectFactory);
createQueues(iniCtx);
singleton = this;
System.out.println("Connection Handler is initialized");
} catch (NamingException ne) {
ne.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
Any help in trouble-shooting is highly appreciated.
Try to replicate the problem in a small, self contained project.
Pasting 1 line of a stack trace with no code isn't going to get you much interest. Follow these guidelines: http://sscce.org/
This cannot be the case. If the same piece of code is working on one server and not on the other, then the most likely cause is misconfiguration. If everything is the same and there is no difference, then there is no reason for contrasting behaviours.
I cannot rule out any changes between these two servers; however, all I have verified is that exactly the same set of libraries was ported from a working server to the newly configured server.
Problems like this are always one class (javax.jms.QueueConnectionFactory in this case) loaded by different class loaders. Often, but not always, a different classloader means a different location from which the class was loaded. The location from which the class was loaded is, in turn, easy to figure out in a debugger:
javax.jms.QueueConnectionFactory.class.getProtectionDomain().getCodeSource().getLocation();
and
connectionFactory.getSuperclass()..
If the locations are different, in most cases the reason becomes clear immediately.
I guess the class is loaded twice by different classloaders
Yes! I just found that the jms.jar was by default available in the WEB-INF folder and I had copied it again into common/lib. Removing one of them solved the issue.
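That duplicate-jar failure mode — the same class loaded twice, so instances of one copy are not instances of the other — can be reproduced in miniature even outside the JVM. Here is a Python sketch using two independent module loads as stand-ins for two classloaders (the module name and class name are made up for illustration):

```python
import importlib.util
import os
import tempfile

# Write a tiny module to disk, then load it twice under different
# module names — analogous to two classloaders loading the same jar.
path = os.path.join(tempfile.mkdtemp(), "jmsish.py")
with open(path, "w") as f:
    f.write("class QueueConnectionFactory:\n    pass\n")

def load_as(name):
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

m1, m2 = load_as("copy_one"), load_as("copy_two")
obj = m1.QueueConnectionFactory()

# Same source, same class name — but two distinct class objects, so
# the "cast" (isinstance check) fails, just like the Java CCE.
assert isinstance(obj, m1.QueueConnectionFactory)
assert not isinstance(obj, m2.QueueConnectionFactory)
```

The fix is the same in both worlds: make sure only one copy of the class is on the load path.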
Multiplication with Perl 6 Sequence Whatever (...) operator
I have seen examples of the Perl 6 whatever (...) operator in sequences, and I have tried to find out how to do a sequence which involves multiplications.
The operator does the following, if one starts with some numbers, one can specify a sequence of the numbers following it.
@natural = 1,2 ... *;
@powersOfTwo = 1,2,4 ... *;
and so on.
One could also define a sequence using the previous numbers in the sequence as in the fibonacci numbers (shown in this question), where one does the following:
@fibonacci = 1,1, *+* ... *;
The problem is that the multiplication operator is * and the previous numbers are also represented with *.
While I can define a sequence using +, - and /, I can not seem to find a way of defining a sequence using *.
I have tried the following:
@powers = 1,2, *** ... *;
but it obviously does not work.
Does anyone know how to this?
For one thing, Perl 6 is sensitive to whitespace.
1, 2, * * * ... *
is perfectly legitimate and generates a sequence that's sort of like a multiplicative fibonacci; it's just a little bit hard to read. *** and * * * mean something different.
If the ambiguity bothers you, you can use an explicit block instead of the implicit one that using "whatever star" gives you:
1, 2, -> $a, $b { $a * $b } ... *
and
1, 2, { $^a * $^b } ... *
both produce the same sequence as 1, 2, * * * ... * does (tested in Rakudo).
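For readers more comfortable outside Perl 6: the semantics of `1, 2, * * * ... *` — a lazy sequence where each new element is computed from as many trailing elements as the generator takes arguments — can be sketched in Python. The `seq` helper below is hypothetical, not part of any library:

```python
import inspect
from itertools import islice

def seq(seed, step):
    """Lazy infinite sequence: yield the seed values, then keep applying
    `step` to the last n elements, where n is step's arity — mirroring
    how Perl 6 infers the arity from the number of * placeholders."""
    n = len(inspect.signature(step).parameters)
    window = list(seed)
    yield from window
    while True:
        window.append(step(*window[-n:]))
        yield window[-1]

mult_fib = seq([1, 2], lambda a, b: a * b)   # like 1, 2, * * * ... *
print(list(islice(mult_fib, 8)))             # [1, 2, 2, 4, 8, 32, 256, 8192]

powers_of_two = seq([1], lambda a: a * 2)    # like 1, { $^a * 2 } ... *
print(list(islice(powers_of_two, 5)))        # [1, 2, 4, 8, 16]
```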
my @powers_of_two := (1, 2, { $^a * 2 } ... *);
my $n = 6;
my @powers_of_six := (1, $n, { $^a * $n } ... *);
How can mongomock can be use with motor?
I have a server implemented with Tornado, and Motor,
and I've come across this mock of pymongo:
https://github.com/vmalloc/mongomock
I really like the idea of doing the unit tests of my code with no real call to the DB, for the sake of running them very fast.
I've tried patching motor to pass calls to the mongomock, like that:
import mock
from mock import MagicMock
import mongomock
p = mock.patch('motor.MotorClient.__delegate_class__', new=mongomock.MongoClient)
p1 = mock.patch('motor.MotorDatabase.__delegate_class__', new=MagicMock())
p.start()
p1.start()
def fin():
p.stop()
p1.stop()
request.addfinalizer(fin)
it was failing like that:
Traceback (most recent call last):
File "C:\Users\ifruchte\venv\lib\site-packages\pytest_tornado\plugin.py", line 136, in http_server
http_app = request.getfuncargvalue(request.config.option.app_fixture)
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1337, in getfuncargvalue
return self._get_active_fixturedef(argname).cached_result[0]
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1351, in _get_active_fixturedef
result = self._getfuncargvalue(fixturedef)
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1403, in _getfuncargvalue
val = fixturedef.execute(request=subrequest)
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1858, in execute
self.yieldctx)
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1784, in call_fixture_func
res = fixturefunc(**kwargs)
File "C:\Users\ifruchte\PycharmProjects\pyrecman\tests\__init__.py", line 65, in app
return get_app(db=motor_db(io_loop))
File "C:\Users\ifruchte\PycharmProjects\pyrecman\tests\__init__.py", line 27, in motor_db
return motor.MotorClient(options.mongo_url, io_loop=io_loop)[options.db_name]
File "C:\Users\ifruchte\venv\lib\site-packages\motor\__init__.py", line 1003, in __getattr__
return MotorDatabase(self, name)
File "C:\Users\ifruchte\venv\lib\site-packages\motor\__init__.py", line 1254, in __init__
delegate = Database(connection.delegate, name)
File "C:\Users\ifruchte\venv\lib\site-packages\pymongo\database.py", line 61, in __init__
**connection.write_concern)
TypeError: attribute of type 'Collection' is not callable
Does anyone know how this can be done? Or am I wasting my time here?
No need to instantiate the magicmock.
p1 = mock.patch('motor.MotorDatabase.__delegate_class__', new=MagicMock)
This does not work. Could you please provide a working scenario?
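As an aside, `new=MagicMock` (the class) and `new=MagicMock()` (a single instance) patch very different things, which may explain the differing results. A minimal stand-in shows the distinction — the `Wrapper` class below is invented for illustration; neither motor nor mongomock is involved:

```python
from unittest import mock

class Wrapper:
    # Stand-in attribute playing the role of motor's __delegate_class__.
    delegate_class = dict

# new=MagicMock swaps in the *class*, so it can still be instantiated:
with mock.patch.object(Wrapper, "delegate_class", new=mock.MagicMock):
    assert isinstance(Wrapper.delegate_class(), mock.MagicMock)

# new=MagicMock() swaps in one shared *instance*:
with mock.patch.object(Wrapper, "delegate_class", new=mock.MagicMock()):
    assert isinstance(Wrapper.delegate_class, mock.MagicMock)

# Either way, the original attribute is restored when the patch exits:
assert Wrapper.delegate_class is dict
```

So `new=mongomock.MongoClient` keeps the delegate instantiable, while `new=MagicMock()` freezes one mock object in place for every caller.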
subscribe() is not sending "finalized" object?
In my component, i.e. in componentDidMount(), I am subscribing to a state of my rxjs store, which holds an object array workingHours: WorkingHours[] that might have items for this month (for example, an amount of 8 working hours for the date 27.02.2023 as one item out of 28).
this.context.store.state$
.pipe(map((state) => state.workingHourList[this.props.employeeId] || []))
.subscribe((workingHours) => {
if (workingHours.length > 1) {
this.updateWorkingHours(
workingHours.filter(
(wl) => wl.date.getMonth() === this.props.month.getMonth()
)
);
}
});
The problem is that workingHours in subscribe((workingHours) => { ... might have zero items until it finally receives the data from the database. So, as shown above, I check if there are items in workingHours with
if (workingHours.length > 1) {
I don't want to go down the bad path of using setTimeout with a few seconds' delay (which works); I actually want to check whether the workingHours are the "finalized" workingHours and no further emission will come.
I don't know much Rxjs (at least, that's what it looks like to me) but isn't this just a matter of getting the last value?
As I see you use store$ which is updated by remote request, from here you know nothing about the request state. I guess you should add a new field to the state about the request state (started, completed, failed etc).
BehaviorSubject immediately emits the last value to new subscribers. This is not a new subscription. In my console, the first three outputs for workingHours are empty arrays, then I get a full array
You can pass the request state to that subject, and show data when the state is completed
In PostgreSQL, how can I unwrap a json string to text?
Suppose I have a value of type json, say y. One may obtain such a value through, for example, obj->'key', or any function that returns values of type json.
This value, when cast to text, includes quotation marks i.e. "y" instead of y. In cases where using json types is unavoidable, this poses a problem, especially when we wish to compare the value with literal strings e.g.
select foo(x)='bar';
The API Brainstorm page suggests a from_json function that will intelligently unwrap JSON strings, but I doubt that is available yet. In the meantime, how can one convert JSON strings to text without the quotation marks?
Does this answer your question? Postgres: How to convert a json string to text?
Text:
To extract a value as text, use #>>:
SELECT to_json('foo'::text) #>> '{}';
From: Postgres: How to convert a json string to text?
PostgreSQL doc page: https://www.postgresql.org/docs/11/functions-json.html
So it addresses your question specifically, but it doesn't work with any other types, like integer or float for example. The #> operator will not work for other types either.
Numbers:
Because JSON only has one numeric type, "number", and has no concept of int or float, there's no obvious way to cast a JSON type to a "correct" numeric type. It's best to know the schema of your JSON, extract the text and then cast to the correct type:
SELECT (('{"a":2.01}'::json)->'a'#>>'{}')::float
PostgreSQL does however have support for "arbitrary precision numbers" ("up to 131072 digits before the decimal point; up to 16383 digits after the decimal point") with its "numeric" type. JSON also supports 'e' notation for large numbers.
Try this to test them both out:
SELECT (('{"a":2e99999}'::json)->'a'#>>'{}')::numeric
Very good, thanks. Any idea on what to do with ints and floats?
@lucid_dreamer I added to my answer for you.
Oh I see. So #>>'{}' does work for other types but always casts to text, thus requiring an additional cast to numeric. PS: You're very helpful.
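As a mental model: `->` hands you a json value (still JSON-encoded, quotation marks included), while `->>` and `#>>` hand you the decoded text. The same distinction in Python terms, using the stdlib json module rather than PostgreSQL:

```python
import json

row = json.loads('{"key": "bar", "n": 2.01}')

json_encoded = json.dumps(row["key"])  # like obj->'key'  : '"bar"'
decoded_text = row["key"]              # like obj->>'key' : 'bar'

assert json_encoded == '"bar"'
assert decoded_text == "bar"
# Hence comparing the json form against a literal 'bar' fails:
assert json_encoded != decoded_text

# JSON has a single "number" type; picking int vs. float (or a SQL
# numeric type) is a cast you apply after extraction:
assert isinstance(row["n"], float)
```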
The ->> operator unwraps quotation marks correctly. In order to take advantage of that operator, we wrap up our value inside an array, and then convert that to json.
CREATE OR REPLACE FUNCTION json2text(IN from_json JSON)
RETURNS TEXT AS $$
BEGIN
RETURN to_json(ARRAY[from_json])->>0;
END; $$
LANGUAGE plpgsql;
For completeness, we provide a CAST that makes use of the function above.
CREATE CAST (json AS text) WITH FUNCTION json2text(json) AS ASSIGNMENT;
Ugh. There must be a better way. Though I recall this being a real irritation with the 9.3 json API.
@CraigRinger Hopefully someone who knows a better solution sees this!
There's a better way now - see the answer below by adjenks
How to understand the CRC Algorithm from the CAN specification?
I am trying to understand how the cyclic redundancy check (CRC) algorithm from the Controller Area Network (CAN) specification works. Here is the pseudocode.
CRC_RG = 0; // initialize shift register
REPEAT
CRCNXT = NXTBIT EXOR CRC_RG(14);
CRC_RG(14:1) = CRC_RG(13:0); // shift left by
CRC_RG(0) = 0; // 1 position
IF CRCNXT THEN
CRC_RG(14:0) = CRC_RG(14:0) EXOR (4599hex);
ENDIF
UNTIL (CRC SEQUENCE starts or there is an ERROR condition)
I do understand the standard algorithms but not the CAN algorithm. I have calculated it by hand and programmed it and it works fine. I just don't understand why/how it works.
What is NXTBIT initialised to?
@PeterTaylor NXTBIT is a function reading next bit of input data
CRC(x) is the remainder of polynomial division of x by some fixed polynomial. Here the bits of x, as well as the bits of the result, represent a polynomial with binary coefficients, e.g. 0b101 may represent the polynomial 1*x^2 + 0*x + 1.
So, the algorithm you have cited is the most straightforward one - if q(x) = p(x) mod f(x), then p(x)*x mod f(x) is either just q(x)*x (if it doesn't contain x^N, where N is the order of the polynomial f(x)), or q(x)*x - x^N + (x^N mod f(x)). As you may note, the last sum component is some fixed polynomial for a given f(x).
So, each step of this cycle do the following:
shift result by 1 bit left - equivalent to multiplication by x
if result contains x^N then xor result by x^N mod f(x)
Oh, well, there are some more details. Please read a painless guide to CRC error detection algorithms for real explanation.
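To make the register mechanics concrete, here is a direct Python transcription of the pseudocode from the question (0x4599 is the 15-bit CAN generator; NXTBIT becomes an iterable of message bits — a sketch of the CRC loop only, not of CAN framing):

```python
def can_crc15(bits):
    """CRC-15 from the CAN spec: a 15-bit shift register that is XORed
    with the generator 0x4599 whenever the bit shifted out (CRC_RG(14))
    differs from the incoming message bit (NXTBIT)."""
    crc = 0                                  # CRC_RG = 0
    for bit in bits:
        crcnxt = bit ^ ((crc >> 14) & 1)     # CRCNXT = NXTBIT EXOR CRC_RG(14)
        crc = (crc << 1) & 0x7FFF            # shift left by 1, keep 15 bits
        if crcnxt:
            crc ^= 0x4599                    # CRC_RG EXOR 4599hex
    return crc

msg = [1, 0, 1, 1, 0, 0, 1]
crc = can_crc15(msg)

# The defining CRC property: a message followed by its own CRC bits
# divides evenly by the generator, i.e. leaves a zero remainder.
assert can_crc15(msg + [(crc >> i) & 1 for i in range(14, -1, -1)]) == 0
```

The zero-remainder check at the end is exactly the polynomial-division view described above: the transmitted frame is constructed to be a multiple of f(x), so the receiver can run the same register and expect 0.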
I have read the painless guide but you explained it better. I understand it now. Thank you.
What is the fastest way to read, sort and merge multiple files in Java?
I am working on a project that deals with reading and processing huge .txt files containing various data for certain individuals.
Multiple files are to be read and sorted by the individual ID (which is present in all files) and then merged, in terms of retrieving all the entries from all the files that are assigned to the same ID. In other words, each individual can have multiple entries (i.e., lines) in every file. I need to retrieve all info that I find regarding one ID, store it and then pass to the next one.
Until now I've tried FileChannel, FileInputStream and MappedByteBuffer, but apparently the best suited for my case is FileInputStream with a BufferedReader, and to compare the entries I saw that Collections.sort() is recommended. An important issue is that I am not aware of the performance of the PCs that are going to make use of the application, and the files can be bigger than 2GB. Any help would be appreciated.
Is there a restriction against databases?
What is the expected number of lines per Id and the total number of id's across all files
@KARASZIIstván there is no restriction against databases, but the processing of the files is done in intermediate steps (at each step, probably a new sort will be needed depending on the workflow to follow and the intermediate input for other modules in the application). I wanted to keep it all coded in Java, without inserting any SQL statements or similar, as the application will be later on passed to Java developers only...so it was pretty much a request
The expected number of lines per ID will not be higher than 500 across all the files, and the number of IDs is more than 2-3 million.
If you expect to be processing more data than the target environment can fit into memory then you will either have to use some form of on-disk streaming or reparse the file multiple times.
The decision as to which option to pursue depends on the distribution of data.
If there are relatively few lines per id (ie lots of distinct ids) then reparsing will be the slowest assuming you need the collated results for all ids.
If there are relatively few ids (ie lots of lines) then reparsing may become more efficient.
My guess is that reparsing for each id will be inefficient in the general case (but if you know there are maybe <10 distinct ids then I would consider a reparse based solution)
The idea then is that you parse the file just once putting the results into a kind of map of lists...
Map<Id,List<Record>>
The problem you face is that you don't have enough memory to hold such a map...
So you will need to create an intermediary temporary on disk store to hold the lists for each id.
You have two options for the on disk store:
Roll your own
Use a database (eg derby or hsqldb or ...)
Option 1 is more work but you can optimise for your use case (namely writing by append only, and then at the end read all the records back in and sort them)
Option 2 will be easier and quicker to implement at the risk of performance as the database will be maintaining an index on the ids in case you want to randomly read the data while parsing (which you don't in this use case)...
If I had to choose, I would start with option 2 and only take on the maintenance headache of option 1 if performance turns out to be sub-optimal (avoid premature optimisation).
You will need to use a buffered reader (with a really large (64k) buffer) to avoid thrashing the disk with competing read/write operations (disk is what will kill performance).
The requirement for java only is met if you use a java based db, eg derby or hsqldb
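Language aside, the shape of option 1 (append-only sorted runs on disk, then a single merge pass yielding one ID's records at a time) is worth seeing end to end. A rough sketch in Python — the same structure ports directly to Java with buffered readers, Collections.sort and a PriorityQueue-based merge — assuming a hypothetical CSV-ish layout where the first field of each line is the ID:

```python
import heapq
import itertools
import os
import tempfile

def record_id(line):
    # Hypothetical record layout: CSV-like lines, ID in the first field.
    return line.split(",", 1)[0]

def external_group_by_id(paths, chunk_size=100_000):
    """Read each input in memory-sized chunks, sort every chunk by ID,
    spill it to a temporary run file, then k-way merge all runs and
    yield (id, records) groups — classic external sort + group-by."""
    runs = []
    for path in paths:
        with open(path) as f:
            while True:
                chunk = list(itertools.islice(f, chunk_size))
                if not chunk:
                    break
                chunk = [l if l.endswith("\n") else l + "\n" for l in chunk]
                run = tempfile.TemporaryFile("w+")
                run.writelines(sorted(chunk, key=record_id))
                run.seek(0)
                runs.append(run)
    merged = heapq.merge(*runs, key=record_id)  # streams, never loads all
    for rid, group in itertools.groupby(merged, key=record_id):
        yield rid, [line.rstrip("\n") for line in group]

# Tiny demo: two input files, chunk_size=1 to force several runs each.
workdir = tempfile.mkdtemp()
paths = [os.path.join(workdir, n) for n in ("one.txt", "two.txt")]
with open(paths[0], "w") as f:
    f.write("b,shift=late\na,hours=8\n")
with open(paths[1], "w") as f:
    f.write("a,hours=4\nc,hours=6\n")
groups = dict(external_group_by_id(paths, chunk_size=1))
print(sorted(groups))  # ['a', 'b', 'c']
```

Only one chunk plus one line per run is ever in memory at once, which is the point of the exercise when the files exceed the heap.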
I was thinking the same about reparsing the file, but I do not have enough entries per ID for it to become efficient. The available memory is an issue, that being the reason for trying to use a MappedByteBuffer, as I can reserve the necessary space as virtual memory and pass it to the heap. The idea is that I also want to avoid writing to the disk, as it will be time-consuming to switch the kernel context in order to write.
I'll try and see what I get with derby or hsqldb
If the files are large enough you will have to use an external sort, in which case a database really starts to become the most practical alternative. There are no external sort methods in the JDK.
Yeah, I know, I would have done it using a database also, but the specifications are to use Java only :/ The thing is I wanted to go line by line in the file which does not contain ID duplicates and retrieve all the info I got regarding the current ID from the other files. Maybe I do not even need to sort them if the .indexOf() method is fast enough.
Why not use something like the H2 In memory database? It can spool the tables to local storage if needed which gets you around any memory limitations. Performance might take a hit, though.
@Erik Thanks for the suggestion, I will try it in comparison with derby and see how much the performance is affected.
How to add External data source into MySQL?
I have two databases. One is a FileMaker database and the other is MySQL. I want to use the FileMaker database in MySQL. I created an ODBC database connection so that I can sync both databases, e.g. when I make changes in the MySQL database, the FileMaker database should also be updated. Is this possible in MySQL? If not, which open source database supports this?
The best way to do that with FileMaker would be to hook the MySQL database up to FileMaker via External SQL Sources (ESS).
You can create layouts in FileMaker that display the actual MySQL data, and the MySQL can be used in FileMaker scripts, calculations, etc. just like any other FileMaker data.
You can choose to simply build layouts that are based on MySQL tables, or you can create scripts in FileMaker to copy data from MySQL based layouts to actual FileMaker based layouts.
SuiteCRM API full authentication and login
Is there a way to gain full access to SuiteCRM through a background API login?
For example, we use LDAP on a separate site and I want to pass authenticated users to suitecrm in the background then have suitecrm grant access to said user without them having to log in for a second time.
I have been able to get the API to work and 'login' a user and return a session_id; however, it still directs me to the login page. If I attempt to force a redirect I get an ERR_TOO_MANY_REDIRECTS.
EDIT:
I do not believe the issue arises from the API script, but from using the returned data to complete the required session data, which I have not yet been able to generate.
Code below acquired from http://support.sugarcrm.com/Documentation/Sugar_Developer/Sugar_Developer_Guide_6.5/Application_Framework/Web_Services/Examples/REST/PHP/Logging_In/
<?php
error_reporting(E_ALL);
ini_set('display_errors',1);
require_once('./shield_secureaccess.php');
$url = "http://{site_location}/service/v4_1/rest.php";
session_start();
$username = $_SESSION['SHIELD_user'];
session_write_close();
if(isset($username)){
$password = shield_secureaccess($username);}
else{require 'shield_session.php';}
//function to make cURL request
function call($method, $parameters, $url)
{
ob_start();
$curl_request = curl_init();
curl_setopt($curl_request, CURLOPT_URL, $url);
curl_setopt($curl_request, CURLOPT_POST, 1);
curl_setopt($curl_request, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
curl_setopt($curl_request, CURLOPT_HEADER, 1);
curl_setopt($curl_request, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($curl_request, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl_request, CURLOPT_FOLLOWLOCATION, 0);
$jsonEncodedData = json_encode($parameters);
$post = array(
"method" => $method,
"input_type" => "JSON",
"response_type" => "JSON",
"rest_data" => $jsonEncodedData
);
curl_setopt($curl_request, CURLOPT_POSTFIELDS, $post);
$result = curl_exec($curl_request);
curl_close($curl_request);
$result = explode("\r\n\r\n", $result, 2);
$response = json_decode($result[1]);
ob_end_flush();
return $response;
}
//login ------------------------------
$login_parameters = array(
"user_auth" => array(
"user_name" => $username,
"password" => md5($password),
"version" => "1"
),
"application_name" => "RestTest",
"name_value_list" => array(),
);
$login_result = call("login", $login_parameters, $url);
echo "<pre>";
// print_r($_SESSION);
print_r($login_result);
// print_r($current_user);
print_r($_SESSION);
echo "</pre>";
//get session id
$session_id = $login_result->id;
// header("Location: http://{site_location}/index.php?module=Home&action=inde");
// header("Location: http://{site_location}/index.php?MSID=$session_id");
?>
post your code.
Added. However, the API doesn't appear to be the issue; it's what to do with the response afterwards.
So you are logging in via the API and then redirecting the user to SuiteCRM?
Basically. The hard authentication is done from another main site, and I am passing an authorized user into suitecrm. My goal is to bypass the login screen and give an impression of an SSO, easiest way to do this I found is generate an API call in the background to log the user into suitecrm and go directly to their home page. The login appears to be successful however the redirect fails and there is no $_SESSION data generated on the login, even though it generates an MSID.
For SSO-type functionality, this will not work. The API login only works for the API, not for the SuiteCRM web UI.
Well, nuts. I suppose now my only option is to rewrite the authentication mech which I was trying to avoid. I appreciate the help Star
Yes, you are right. To make it easier, here is a hint: look into "modules/Users/Authenticate.php", which SuiteCRM uses for user validation and then for logging a user into the app.
Moreover, it would be good if you posted your code after completing this task. That will help others with similar needs.
I can do that, though the original question will be misleading, since the API won't work and it will be a custom authentication file instead. Initially looking at this, it appears it will be easier than I thought, which is typical of always trying the hardest thing first, ha. Appreciate it, Star
You're almost there. The session id can't be used straight away to log in; you first must call seamless_login.
So the flow is:
Do API login as above.
Call seamless_login; this will return 0 or 1
Navigate the user to http://{site_location}/index.php?MSID=$session_id where $session_id is the session id from the original API login call.
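For reference, the request body that the PHP call() helper above POSTs can be sketched in Python. Note that the exact parameter name seamless_login expects ("session" here) is an assumption on my part, so check it against your SugarCRM/SuiteCRM version:

```python
import json

def build_rest_payload(method, parameters):
    """Build the form fields the v4_1 REST endpoint expects,
    mirroring the $post array in the PHP call() helper."""
    return {
        "method": method,
        "input_type": "JSON",
        "response_type": "JSON",
        "rest_data": json.dumps(parameters),
    }

# Hypothetical second call in the flow: seamless_login with the
# session id returned by the initial "login" call.
payload = build_rest_payload("seamless_login", {"session": "abc123"})
```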
| common-pile/stackexchange_filtered |
Sum of data in a dimension by comparing data in two other dimensions in dc.js
I got stuck at a point where I need to find the sum of the data in one dimension, using the data indexed by two other dimensions.
Example:
"mode_device","method","discount","time","first_time","paid","p_id","p_sku"
"Desktop","EBS",,"1344887090","1344887090","1079","8786","PPLB03571285"
"Desktop","MOBIKWIK-WALLET","89","1474371140","1474371140","591","99068","PPLB009DCBBFREE"
"AndroidApp","COD","97","1474371149","1438844849","647","72321","PPLB034601"
"Desktop","JUSPAY","60","1474371158","1474371158","398","92389","PPLB713SQ306"
"AndroidApp","COD","190","1474371247","1448993680","1261","72685","PLB0029regenerist3"
"Desktop","JUSPAY","90","1474371346","1474371346","599","86728","PPLB66719804817"
"Desktop","DEBITCARD","60","1474371366","1465733603","398","92389","PPLB713SQ306"
"AndroidApp","COD","0","1474371404","1474371404","577","106032","PPLB0335PA0990NM"
"Desktop","COD","43","1474371404","1468956726","356","13221","PPLB039605"
Here we group by mode_device and method, and we have to return the sum of the paid column.
Example:
AndroidApp and COD may repeat several times; let's say the data has something like
"Android","COD","234"
"Android","Ebs","234"
"Ios","COD","234"
"Ios","COD","234"
"Android","COD","234"
We have to return something like
Android-COD:468
Android-Ebs:234
Ios-COD:468
using dc.js graphs.
I'm not clear from your question how you want to plot this data - will it be a bar chart with bars that are labeled with both fields?
But if you want to aggregate by both mode_device and method, you can simply create a dimension which uses both values in its key:
var modeMethodDimension = cf.dimension(function(d) { return [d.mode_device, d.method].join('-'); });
var modeMethodGroup = modeMethodDimension.group().reduceSum(function(d) { return +d.paid; });
Now the group should have key value pairs like
[{key: 'Android-COD', value: 468}, {key: 'Android-Ebs', value: 234}, ...]
and if you put it into a dc.barChart those keys would be the names of the bars.
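The aggregation this composite-key group performs is just a group-and-sum. Sketched here in Python on the toy rows from the question, to show the expected key/value pairs:

```python
from collections import defaultdict

rows = [
    ("Android", "COD", 234),
    ("Android", "Ebs", 234),
    ("Ios", "COD", 234),
    ("Ios", "COD", 234),
    ("Android", "COD", 234),
]

totals = defaultdict(int)
for mode_device, method, paid in rows:
    # same composite key as the crossfilter dimension builds
    totals["-".join([mode_device, method])] += paid
```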
| common-pile/stackexchange_filtered |
Re-rendering vaporizes std:accounts-ui's <Accounts.ui.LoginForm />?
[Updated title to reflect current research on this bug.]
How to Fix this Bug?
I'm really not sure what is happening here. When I click on the "DraftJS" contentEditable - the <Accounts.ui.LoginForm /> instantly disappears. I haven't the foggiest idea why.
Demo:
To see the bug in its current state:
Go to: http://draftjsmeteor.autoschematic.com/
To duplicate the problem in your own development environment:
Clone this repository: https://github.com/JeremyIglehart/DraftJSMeteor
Create a settings-development.json file (you can leave it blank)
Run with npm start
The main question:
Does anyone know why DraftJS kills the <Accounts.ui.LoginForm />?
After looking into this problem for two days now, I suspect it is hiding somewhere in what <Accounts.ui.LoginForm /> is doing internally, perhaps in its STATES API? I'm really confused here. Any help would really be appreciated.
Problems I've ruled out:
It's not a Session Variable problem. DraftJS doesn't use them - in fact, there are no Session Variables being used at all right now as far as I can detect (using Meteor Toys) (Thanks @mattsouth)
It is not DraftJS killing the page somehow. When removing DraftJS from the equation the problem still exists with just the form and reloading React. (Thanks @wursttheke )
The problem is not located in zetoff:accounts-material-ui - I removed the package and the problem still exists.
It doesn't seem to me this problem has anything to do with Meteor, or React specifically.
Where I'm looking now to solve this:
something to do with std:accounts-ui
I have no idea where else to look. Based on the gif demo above you can see that the div with the className accounts-ui is rendering just fine - after clicking in the DraftJS contentEditable area, however, something inside this component breaks.
Issues Tracking this Bug:
I've created three issues to try and track this down:
Open
std:accounts-ui:Issue #96
(I've also mentioned this on the Meteor Forums)
Closed
accounts-material-ui:Issue #26 (removed package, bug still exists)
DraftJS's Github Issue #962 (removed DraftJS from the equation, bug persists)
Once I find a solution I'll update all of the issues, forums and questions I've posted everywhere.
This problem is most likely related to how std:accounts-ui manages state.
I have found a way to "Put this bug in a jar" perhaps - assuming this is actually a bug. Take a look at this other StackOverflow question where I have produced a way to "hack" around this issue - but, it still doesn't solve why the state is being lost upon a externally forced re-render: http://stackoverflow.com/questions/41911509/is-this-a-bug-or-is-this-react-component-behaving-normally
Put the bug in a "jar"
This doesn't answer the question, but it does at least work around the bug in a semi-acceptable way.
The question remains, is this a bug - or is this the way React components SHOULD behave?
Okay, so this is the end of a very long battle trying to understand what I think is some kind of strange behavior of this react component.
We found out later that any kind of "re-rendering" caused this React component to "lose its state" of sorts and just break.
import React, { Component } from 'react'
import { Accounts, STATES } from 'meteor/std:accounts-ui';
class MyEditor extends Component {
constructor(props) {
super(props)
}
render() {
return (
<div>
<button onClick={() => this.forceUpdate()}>Rerender</button>
<Accounts.ui.LoginForm />
</div>
)
}
}
export default MyEditor;
Okay, so re-rendering of any kind is going to cause this component to fail.
I updated the issue on the github, and went to bed having React/Meteor code nightmares (I know you've been there too if you've read this far).
In the morning I woke up to find this little gift from Sean.
Basically, to break it down: if you take the <Accounts.ui.LoginForm /> out of the component where the DraftJS component is drawn, it somehow "protects" it from getting re-rendered, essentially isolating the bug into a corner where it won't affect the application.
Here's the code:
The original "broken" code where <Accounts.ui.LoginForm /> lives next to the DraftJS <Editor />
// Imports
class MyEditor extends Component {
// Other component stuff
return (
<div className="editor-container">
<h1>DraftJS and Meteor Editor:</h1>
<IconButton onClick={this._onBoldClick.bind(this)} touch={true} tooltip="Format Bold" tooltipPosition="top-right">
<FontIcon className="material-icons" style={iconStyles}>format_bold</FontIcon>
</IconButton>
<Editor
editorState={this.state.editorState}
onChange={this.onChange}
handleKeyCommand={this.handleKeyCommand.bind(this)}
/>
<RaisedButton label="Log Editor State to Console" onClick={this.logState.bind(this)} primary={true} style={buttonStyle} />
</div>
<Accounts.ui.LoginForm />
<p>This is a test - do I stay?</p>
)
// Exports
Then Sean took the code out and separated it into respective components in a three step process:
1. Removed <Accounts.ui.LoginForm /> from <MyEditor />
// Imports
class MyEditor extends Component {
// Other component stuff
return (
<div className="editor-container">
<h1>DraftJS and Meteor Editor:</h1>
<IconButton onClick={this._onBoldClick.bind(this)} touch={true} tooltip="Format Bold" tooltipPosition="top-right">
<FontIcon className="material-icons" style={iconStyles}>format_bold</FontIcon>
</IconButton>
<Editor
editorState={this.state.editorState}
onChange={this.onChange}
handleKeyCommand={this.handleKeyCommand.bind(this)}
/>
<RaisedButton label="Log Editor State to Console" onClick={this.logState.bind(this)} primary={true} style={buttonStyle} />
</div>
)
// Exports
2. Created a <LogIn /> component:
// Imports
class LogIn extends Component {
constructor(props) {
super(props)
}
render() {
return (
<div className="login-container">
<Accounts.ui.LoginForm />
<p>This is a test - do I stay?</p>
</div>
)
}
}
// Exports
3. Then called the two from a parent <Home /> component:
class Home extends Component {
render () {
return (
<div>
<MyEditor />
<LogIn />
</div>
)
}
}
But now, I have this question... Is this actually a bug, or is this <Accounts.ui.LoginForm /> component behaving properly?
Another way of asking this question might be: is this normal behavior for a React component, or should they be built to handle arbitrary re-rendering without completely breaking down?
| common-pile/stackexchange_filtered |
Find cut-off frequency of a low-pass filter for a given output signal
Suppose we have an ideal low-pass filter $H(e^{j\theta})$ with a cut-off frequency $\theta_c$ in the range of $0\leq \theta_c \leq \pi$.
I want to know the input signal $x[n]$, as well as the cut-off frequency $\theta_c$, which produces the following output signal $y[n]$.
To solve this I started off with transforming $y[n]$ to the frequency domain by calculating the DTFT:
$$
Y(e^{j\theta})=\sum_{n=-\infty}^{\infty}y[n]e^{-j\theta n}=1+e^{-j\theta}+2e^{-j2\theta}+e^{-3j\theta}+e^{-j4\theta}
$$
Now I can calculate $X(e^{j\theta})=\frac{Y(e^{j\theta})}{H(e^{j\theta})}$. This is only possible for $|\theta| \leq \theta_c$, since outside of this region $H(e^{j\theta}) = 0$ and $X(e^{j\theta})$ would approach infinity. But inside this region $H(e^{j\theta})$ is simply 1 and when transforming $X(e^{j\theta})$ back to $x[n]$ it is just the same as $y[n]$, a behaviour that is expected of a low-pass.
The thing I am not sure about is the cut-off frequency. If I simply plot $Y(e^{j\theta})$, I see that there are frequency components until $\pi$. So I could just say alright, $\theta_c=\pi$. But is it really like that? Because the low-pass is actually just dependent on $\theta$ and not all the multiples of it, which I find in the complex exponentials of $Y(e^{j\theta})$. So how do I calculate $\theta_c$?
Something's wrong here. The ideal filter you mention has an infinite impulse response, so that the output sequence $y$ can't be as simple as you said. You can also see it in the frequency domain: as you said, this $y$ has frequency components all the way through $\pi$, so that it can't be the output of a filter that cuts away completely everything from $\theta_c$ to $\pi$.
@fonini The output signal is infinite, indicated by the dots to the left and right of the graph, it's just zero where n is outside of 0 to 4. You are right, that might have been the more appropriate forum ..
The domain is infinite, but its support is finite. The output vanishes after $n=4$, that's what matters. Anyway, you can see there's something wrong using the frequency-domain argument you mention.
Set $\theta_c = \pi$ and set $x[n] = y[n]$, and you're done. And you're not dividing by zero anywhere.
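Reading the sample values $y[n] = \{1, 1, 2, 1, 1\}$ (for $n = 0 \ldots 4$) off the DTFT in the question, a quick numerical check confirms the spectrum is nonzero all the way up to $\theta = \pi$, so no smaller cutoff can reproduce $y$:

```python
import cmath

y = [1, 1, 2, 1, 1]  # y[n] for n = 0..4; zero elsewhere

def Y(theta):
    """DTFT of y evaluated at angular frequency theta."""
    return sum(c * cmath.exp(-1j * theta * n) for n, c in enumerate(y))

# At theta = pi: 1 - 1 + 2 - 1 + 1 = 2, i.e. a nonzero component at pi.
```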
| common-pile/stackexchange_filtered |
How can I partition records vertically across machines?
I recently asked two questions: (1) how to store and connect partial records in two or more machines, with the record halves correctly associated, and (2) how to determine the increase in time/space complexity due to the use of remote databases/multiple machines. The responses didn't really address the heart of the issue, so I'd like to clarify my questions:
First, I'm not asking if I should do this but rather the best way to do it. For example, I want to store a huge family tree with info such as date of birth (DOB), place of birth, mother's maiden name, citizenship/immigration status, ethnic/religious affiliation, and so on for each member, but I want to partition that info across databases (and machines) to reduce the amount of personally identifiable information (PII) exposure that would occur through the breach of a single machine.
This security measure is not replacing the standard safeguards of encryption, etc. but rather is in addition to them. The issue is that databases with those safeguards already in place have still been hacked (recent examples are the Anthem breach, OPM breach, etc.), so, clearly, "standard" isn't enough.
Data replication is not an option; I do not want any server to store the full data from the records at any time. A proxy server may aggregate the data during a query, but at no time should any server actually store the full record. The records queried will obviously be stored at the client side, but if a client is hacked, the only records exposed are those actually queried, not the millions of others that a server might store.
So I'm looking for the best, most efficient way to partition the data found in a record while ensuring that all parts of the record are linked in a query. Example: I create personal records containing DOB, place of birth, mother's maiden name, citizenship status, ethnic/religious affiliation, and medical information, but only the DOB and citizenship status are stored in the tables contained in database 1. Then database 2 contains tables with medical information and mother's maiden name, and so on. The databases are on separate servers so that if one machine is hacked, only that part of the record is accessed.
As a client, if I pull a record, I want to acquire all information about a single entity, but as noted above, I don't want to store all of the info of any record in a single database or server.
Does anyone have suggestions for the best ways to do this? I'm also interested in how to test the increase in time/space complexity due to the use of remote databases/multiple machines.
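To make the idea described above concrete, here is a minimal sketch: plain Python dicts stand in for the two separate databases, and the particular field split and random join token are illustrative assumptions, not a recommendation:

```python
import secrets

db1 = {}  # server 1: DOB + citizenship status only
db2 = {}  # server 2: medical info + mother's maiden name only

def store_record(dob, citizenship, medical, maiden_name):
    token = secrets.token_hex(16)  # opaque join key shared by both halves
    db1[token] = {"dob": dob, "citizenship": citizenship}
    db2[token] = {"medical": medical, "maiden_name": maiden_name}
    return token

def query_record(token):
    """Proxy-side aggregation: neither store ever holds the full record."""
    return {**db1[token], **db2[token]}
```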
| common-pile/stackexchange_filtered |
Limit $a_{n+1}=\frac{(a_n)^2}{6}(n+5)\int_{0}^{3/n}{e^{-2x^2}} \mathrm{d}x$
I need to find the limit when $n$ goes to $\infty$ of
$$a_{n+1}=\frac{(a_n)^2}{6}(n+5)\int_{0}^{3/n}{e^{-2x^2}} \mathrm{d}x, \quad a_{1}=\frac{1}{4}$$
Thanks in advance!
It looks to me that the integral tends to a finite value, while the $n + 5$ factor increases, so you get something that grows quite fast.
What have you done so far?
@vonbrand The integral tends to $0$. It is indeed finite, but actually $n+5$ times the integral is bounded.
Sorry, misread the upper index. Too little coffee...
@vonbrand The upper bound is indeed not so far from being $3n$ modulo a character...
First show by induction that
$$
0<a_n\leq \frac{1}{4}
$$
for all $n$.
Then use that to show that $a_n$ is decreasing.
Since it is also bounded below, $a_n$ converges to $a\in[0,1/4]$.
Now
$$
\int_0^{3/n}e^{-2x^2}dx\sim \frac{3}{n}.
$$
So, passing to the limit in the induction formula, we get
$$
a=\frac{a^2}{2}.
$$
So the only possibility is $a=0$.
Hence
$$
\lim_{n\rightarrow +\infty} a_n=0.
$$
Let me know if you want me to expand some points.
Hey! Thank you very much, it was very clear. I have only one doubt: how do I get that $$\int_0^{3/n}e^{-2x^2}dx\sim \frac{3}{n}$$?
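To expand that point: since $e^{-2x^2} = 1 - 2x^2 + O(x^4)$ near $0$, integrating term by term over $[0, 3/n]$ gives

```latex
\int_0^{3/n} e^{-2x^2}\,dx
  = \frac{3}{n} - \frac{2}{3}\left(\frac{3}{n}\right)^{3} + O(n^{-5})
  = \frac{3}{n} + O(n^{-3}),
\qquad\text{so}\qquad
(n+5)\int_0^{3/n} e^{-2x^2}\,dx \longrightarrow 3 .
```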
Obviously $a_n > 0$ for all $n \in \mathbb{N}$. Since
$$
\int_0^{3/n} \exp(-2 x^2) \mathrm{d}x < \int_0^{3/n} \mathrm{d}x = \frac{3}{n}
$$
We have
$$
a_{n+1} < \frac{1}{2} a_n^2 \left(1 + \frac{5}{n} \right) \leqslant 3 a_n^2
$$
Consider sequence $b_n$, such that $b_1 = a_1$ and $b_{n+1} = 3 b_n^2$, then $a_n \leqslant b_n$ by induction on $n$. But $b_n$ admits a closed form solution:
$$
b_n = \frac{1}{3} \left(\frac{3}{4} \right)^{2^{n-1}}
$$
and $\lim_{n \to \infty }b_n = 0$. Thus, since $0 < a_n \leqslant b_n$, $\lim_{n \to \infty} a_n = 0$ by squeeze theorem.
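As a sanity check on both bounds, the recurrence can be iterated numerically. The integral has the closed form $\int_0^t e^{-2x^2}\,dx = \sqrt{\pi/8}\,\operatorname{erf}(\sqrt{2}\,t)$, and a few steps already drive $a_n$ far below the dominating $b_n$ (a Python sketch):

```python
import math

def integral(t):
    # closed form: int_0^t e^{-2x^2} dx = sqrt(pi/8) * erf(sqrt(2) t)
    return math.sqrt(math.pi / 8) * math.erf(math.sqrt(2) * t)

a = 0.25  # a_1
for n in range(1, 9):
    a_next = a * a / 6 * (n + 5) * integral(3 / n)
    assert a_next < a                       # the sequence is decreasing
    b_next = (3 / 4) ** (2 ** n) / 3        # b_{n+1}, the iterate of b -> 3 b^2
    assert a_next <= b_next + 1e-15         # a_n stays dominated by b_n
    a = a_next
```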
Nice approach, I don't even know why I made it more complicated, +1.
HINT: apply the mean value theorem to the integral.
| common-pile/stackexchange_filtered |
Fatal error: Call to undefined function: imagejpeg() on Heroku PHP. Fix?
So, I am getting the following error whilst trying to upload images on my Heroku site (PHP).
Fatal error: Call to undefined function: imagejpeg()
It refers to the following line in my AmazonS3Handler.php file.
//build the jpeg
imagejpeg($destinationImage);
Any ideas on how I could fix this?
Your PHP version doesn't have GD support enabled.
Oh ok, how can I correct this? It works locally, just not on Heroku
You need to go to php.ini and check whether your GD library is commented out. If yes, uncomment it.
I had the GD lib installed and got the same error. The problem was that PHP was compiled without JPEG support. Here is what you should do to make things work.
1. Install libjpeg
apt-get install libjpeg62-turbo-dev
2. Configure PHP with jpeg support
./configure \
    --with-gd \
    --with-jpeg-dir=/usr/lib64
# ... plus your other options
3. Build PHP
make clean
make
make install
Step 2 should be executed in the ~ folder? Or somewhere else? As well as step 3?
@DeesOomens step 2 and 3 should be executed in the directory with PHP source code. You can download it from GitHub https://github.com/php/php-src/releases
For building a docker image (based off php:7.2-apache-stretch), I had to use --with-jpeg-dir=/usr/include. The specific commands for the official php image for docker are a bit different (docker-php-ext-configure, docker-php-ext-enable, etc), but it's easy to map the steps above into those.
First stop the local server XAMPP or WAMP
In XAMPP
Go to the XAMPP installation folder, then into the php folder, and find the php.ini file,
and see if your GD library is commented out. If so, remove the comment.
;extension=gd (this one is commented one)
extension=gd (Remove ; to uncomment)
Example path: C:\xampp\php\php.ini
In wamp
Go to the WAMP installation folder, then into the bin folder, and then into the php folder.
Then go into the folder for your PHP version (e.g. php7.0.10 or php5.6.25) and find the php.ini file.
;extension=gd (this one is commented one)
extension=gd (Remove ; to uncomment)
Example path: C:\wamp64\bin\php\php7.0.10
If you can't find the extension=gd, add it yourself
Find "extension=" in the php.ini file and place it after any extension.
Then restart your local server, and everything should be fine.
| common-pile/stackexchange_filtered |
Migration from D7 to D8 is failing with a source plugin exception
I've installed the Migrate Plus module and used it successfully for other migrations on this project. When I try to use it with Simplenews, it fails with this error:
[error] Migration failed with source plugin exception: tid is defined as a source ID but has no value.
I installed and configured Simplenews on D8. I imported the migration yml files from the Simplenews module and ran it with 'drush migrate-import d7_simplenews_newsletter'. I've tried it with and without a custom key for the d7 database. I've tried it with and without recreating the D7 newsletter categories in D8.
I expected to see it successfully migrate the Simplenews newsletters from D7 to D8 but it's not working. I'm not defining 'tid' as a source ID in the migration, so where is it coming from and what is causing it to fail?
I don't know what the problem was, but upgrading to the latest dev version of D8 Simplenews fixed it.
| common-pile/stackexchange_filtered |
what is the best choice here
"I was about to email you why I have not received the parcel when..."
I was wondering whether the present perfect for "receive" is a good choice, since at the time of writing the parcel still has not arrived, or whether I should use the past perfect to match tenses. I don't think so, because the situation is still the same.
I believe this works fine, though with a slight modification:
"I was about to email you as to why I have not yet received the parcel when..."
The first addition is unrelated, but the focus for your question is to add a qualifier. Saying you have not yet received the package is a very concise way of saying:
"At the time of writing, I have not yet received the parcel..."
| common-pile/stackexchange_filtered |
My new little friend
A few days ago a new little friend became part of our family.
I asked him to introduce himself and he left this message for you:
Who is my new friend??
Your new friend is..
A very good boy
Explanation:
The entirely empty row at the top and the number of rows made me believe it's binary-encoded text. Parsing it gives us
01101010 01110110 01110110 01110010 01110101 00111010 00101111 00101111 01110010 01100011 01110101 01110110 01100111 01100100 01110001 01100011 01110100 01100110 00101110 01100101 01110001 00101111 01001010 01100101 01100011 01100001 00110010 00111000 01000011 00101110 01101100 01110010 01101001
Which translates to
jvvru://rcuvgdqctf.eq/Jeca28C.lri
This definitely looks like a URL, and mapping the first 5 letters to "https" requires rotating each letter by -2, which returns..
https://pasteboard.co/Hcay28A.jpg
Which brings us to the fact that your new friend is an extremely precious boy :) Make sure you give him lots of pats!
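For anyone curious, the whole decoding pipeline can be sketched in Python (digits and punctuation pass through the shift unchanged):

```python
BITS = (
    "01101010 01110110 01110110 01110010 01110101 00111010 00101111 00101111 "
    "01110010 01100011 01110101 01110110 01100111 01100100 01110001 01100011 "
    "01110100 01100110 00101110 01100101 01110001 00101111 01001010 01100101 "
    "01100011 01100001 00110010 00111000 01000011 00101110 01101100 01110010 "
    "01101001"
)

def decode_bits(bits):
    """Turn space-separated 8-bit groups into ASCII characters."""
    return "".join(chr(int(group, 2)) for group in bits.split())

def rot(text, shift):
    """Caesar-shift letters, leaving digits and punctuation alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

url = rot(decode_bits(BITS), -2)
```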
httpu:/pauteboard.co/ so maybe double-check? :-)
Close, just some letters to replace in the URL ;) (he's cute btw)
@Bass yep, definitely expected some mistakes here and there. This visualization is extremely bad for showing 0s when they are surrounded by 1s :P
the errors may be in the puzzle itself, by the looks of it, or maybe the "font" just so unreadable..
Nah, i double checked and got the solution. Fun little exercise :P
I'll give you six to one odds that this puzzle looked ok and perfectly readable, and then the image was scaled. In the original, the lines between two squares were very likely the only double-width ones, but in the scaled one, the line widths are more or less random.
@votbear It's correct!! Very good job man! :)
| common-pile/stackexchange_filtered |
Can't vote from ledger using a hardware wallet?
I tried to vote from a Ledger Kusama account on Polkadot-JS and I get "Raw data signing is not supported for hardware wallets". Is this a limitation of the Zondax Ledger app or a polkadot-js limitation?
Maybe this comment https://github.com/polkadot-js/extension/issues/1025#issuecomment-1066056797 from Jaco (back from March 2022) on a similar error msg is helpful ?
Ledger Raw data signing is not supported for hardware wallets
https://polkadot.js.org/apps/#/signing
| common-pile/stackexchange_filtered |
Qt 5.5 WinRT x64 VS2013 applications fail WACK Direct3D tests
I downloaded the latest Qt 5.5 x64 WinRT VS2013 binaries, created a basic QWidget application, and converted my Qt project to a VS project by executing "qmake -tp vc .pro "CONFIG+=windeployqt"". The VS2013 project could be compiled and launched easily, but both Windows Application Certification Kit Direct3D feature tests failed. I have also tested several Qt example projects on a Win 10 VirtualBox and a Win 8.1 PC with the same result. I tried all these things with Qt 5.5 WinRT x86 VS2013, which I built from sources, without success.
On the other hand, I installed Qt's QuickForecast application from the Windows Store and it passed all WACK tests. The only significant difference I noticed between the two packages is d3dcompiler_qt.dll in the QuickForecast package folder. This .dll is missing in Qt 5.5 x64 WinRT VS2013. There is a d3dcompiler_47.dll, but when I put it into the package I got other WACK failures connected to restricted APIs in d3dcompiler_47.dll.
Is there any way to enable Direct3D features support and passing WACK tests with Qt 5.5 WinRT x64 VS2013?
I really appreciate any help.
This is a bug in Qt 5.5. Will be fixed in Qt 5.5.1. https://codereview.qt-project.org/#/c/126680/
| common-pile/stackexchange_filtered |
Laravel - Repetitive code in store and update functions
I am working on a system and I can see myself using repetitive code that does not look right. I want to clean up my code but I don't really know many ways to do so. I have 2 methods (store and update). They both look like this
store
public function store(Request $request)
{
$user = Auth::user();
$validatedData = $request->validate([
'street' => ['required', 'string'],
'number' => ['required', 'string'],
'city' => ['required', 'string'],
'state' => ['required', 'string'],
'postal_code' => ['required', 'string'],
'country' => ['required', 'string'],
'phone' => ['required', 'string']
]);
$addresses = Address::all();
$billing = 0;
if($request->is_billing) {
$billing = $request->is_billing;
foreach($addresses as $address) {
if($address->is_billing == 1) {
$address->is_billing = 0;
$address->save();
}
}
}
$address = Address::create([
'user_id' => $user->id,
'token' => Str::random(32),
'street_name' => $request->street,
'house_number' => $request->number,
'postal_code' => $request->postal_code,
'state' => $request->state,
'city' => $request->city,
'country_id' => $request->country,
'phone' => $request->phone,
'is_billing' => $billing
]);
return redirect('/dashboard/user/' . $user->user_token . '/addresses');
}
update
public function update(Request $request, $id)
{
$addresses = Address::all();
$address = Address::where('token', $id)->firstOrFail();
$user = Auth::user();
$billing = 0;
if($request->is_billing) {
$billing = $request->is_billing;
foreach($addresses as $item) {
if($item->is_billing == 1) {
$item->is_billing = 0;
$item->save();
}
}
}
$address->user_id = $user->id;
$address->street_name = $request->street;
$address->house_number = $request->number;
$address->postal_code = $request->postal_code;
$address->state = $request->state;
$address->city = $request->city;
$address->country_id = $request->country;
$address->phone = $request->phone;
$address->is_billing = $billing;
$address->save();
return redirect('/dashboard/user/' . $user->user_token . '/addresses');
}
Currently, the code looks messy and I have a feeling it can be done much more efficiently. Can someone give me tips on how to clean this up?
No need to get all the data from the database; instead, update only the rows that need updating. Work with the database only when necessary and avoid it otherwise. Database I/O is the biggest speed consumer in web applications and in PHP applications generally. https://laravel.com/docs/master/eloquent#updates
Check this version of the store method; it should work much faster:
public function store(Request $request)
{
$user = Auth::user();
$validatedData = $request->validate([
'street' => ['required', 'string'],
'number' => ['required', 'string'],
'city' => ['required', 'string'],
'state' => ['required', 'string'],
'postal_code' => ['required', 'string'],
'country' => ['required', 'string'],
'phone' => ['required', 'string']
]);
$billing = $request->is_billing ?? 0;
if ($billing) {
Address::where(['is_billing' => 1])->update(['is_billing' => 0]);
}
$address = Address::create([
'user_id' => $user->id,
'token' => Str::random(32),
'street_name' => $request->street,
'house_number' => $request->number,
'postal_code' => $request->postal_code,
'state' => $request->state,
'city' => $request->city,
'country_id' => $request->country,
'phone' => $request->phone,
'is_billing' => $billing
]);
return redirect('/dashboard/user/' . $user->user_token . '/addresses');
}
Next, instead of the line $user = Auth::user(), you can work with policies ( https://laravel.com/docs/master/authorization ). In the docs you can see how PostPolicy is created; yours should be named AddressPolicy.
Also, you should move validation into a form request class, created with, say:
php artisan make:request AddressStoreRequest
Again, in the docs you will find how to set up the code there ( https://laravel.com/docs/master/validation#creating-form-requests ).
That is what you can do to relieve the controller method of code and put those blocks in their respective classes. That said, your code (the way I wrote it above, avoiding unnecessary DB calls) will work the same even if you don't create separate classes for form validation or authorization.
Use this code, and write something similar for the update method.
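The Eloquent call Address::where(['is_billing' => 1])->update(['is_billing' => 0]) compiles to a single SQL UPDATE, which is why it beats loading every row and saving each one back. The difference is easy to demonstrate with an in-memory SQLite table (a Python sketch of the same idea):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE addresses (id INTEGER PRIMARY KEY, is_billing INTEGER)")
con.executemany("INSERT INTO addresses (is_billing) VALUES (?)", [(1,), (0,), (1,)])

# One statement, no rows fetched into the application at all:
con.execute("UPDATE addresses SET is_billing = 0 WHERE is_billing = 1")
remaining = con.execute(
    "SELECT COUNT(*) FROM addresses WHERE is_billing = 1"
).fetchone()[0]
```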
| common-pile/stackexchange_filtered |
I want the loop to run until the numbers are: 1 2 3
I have two pieces of code that look the same to me, but they behave differently. The first one works as I want; the second doesn't. I don't understand why.
#include <iostream>
#include <stdlib.h>
#include <time.h>
using namespace std;
int main()
{
int num1, num2, num3, i=0;
srand(time(0));
do{
i++;
num1=rand()%3+1;
num2=rand()%3+1;
num3=rand()%3+1;
cout<<i<<"."<<num1<<num2<<num3<<endl;
}while(!((num1==1)&&(num2==2)&&(num3==3)));
}
This is the second one. As I understand it, this do-while loop should keep running while num1 is not equal to 1, num2 is not equal to 2, and num3 is not equal to 3.
#include <iostream>
#include <stdlib.h>
#include <time.h>
using namespace std;
int main()
{
int num1, num2, num3, i=0;
srand(time(0));
do{
i++;
num1=rand()%3+1;
num2=rand()%3+1;
num3=rand()%3+1;
cout<<i<<"."<<num1<<num2<<num3<<endl;
}while((num1!=1)&&(num2!=2)&&(num3!=3));
}
The problem is in the loop condition of the second program:
while((num1!=1)&&(num2!=2)&&(num3!=3));
With && between the inequalities, the do-while loop keeps running only while all three numbers differ from their targets, so it ends as soon as any one of them matches. Also, I highly suggest learning boolean arithmetic, especially De Morgan's laws:
!(A && B) = !A || !B
!(A || B) = !A && !B
so
!((num1==1)&&(num2==2)&&(num3==3))
will convert into
(!(num1==1)||!(num2==2)||!(num3==3))
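To see concretely that the two conditions are not equivalent, you can enumerate all 27 value triples; the forms disagree exactly when at least one number matches its target but not all three do (a quick check, written in Python for brevity since the logic is language-independent):

```python
from itertools import product

disagree = [
    (a, b, c)
    for a, b, c in product([1, 2, 3], repeat=3)
    # first program's condition         vs  second program's condition
    if (not (a == 1 and b == 2 and c == 3)) != (a != 1 and b != 2 and c != 3)
]
# 27 triples total: 1 where all match, 8 where all differ, 18 in between.
```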
The difference between the two codes is that the first code will iterate until all three variables meet their respective conditions, but the second code will continue until at least one of the variables meets its condition. That's how it comes out when evaluating using boolean logic. If you wanted to correct the second condition without using the beginning !, you could change the condition to:
while((num1 != 1) || (num2 != 2) || (num3 != 3));
The condition in the do-while statement in the first program
while(!((num1==1)&&(num2==2)&&(num3==3)));
can be equivalently converted to
while( (num1!=1) || (num2!=2) || (num3!=3) );
which means that at least one of the variables is not equal to its target value (1, 2 or 3 respectively).
The condition in the do-while statement in the second program
}while((num1!=1)&&(num2!=2)&&(num3!=3));
means that num1 is not equal to 1 and num2 is not equal to 2 and num3 is not equal to 3.
It is obvious that the conditions
while( (num1!=1) or (num2!=2) or (num3!=3) );
and
}while( (num1!=1) and (num2!=2) and (num3!=3));
are different.
Thus the programs behave differently. :)
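A tiny enumeration makes the difference concrete. This sketch (independent of the original rand()-based loops) counts how often the first program's loop condition and the second program's loop condition disagree across all eight truth combinations:

```cpp
#include <cassert>

// A, B, C stand for (num1==1), (num2==2), (num3==3).
// The first program loops while !(A && B && C); the second while (!A && !B && !C).
int count_disagreements() {
    int disagreements = 0;
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            for (int c = 0; c <= 1; ++c) {
                bool first  = !(a && b && c);  // keep looping unless all three match
                bool second = !a && !b && !c;  // keep looping only if none match
                if (first != second)
                    ++disagreements;
            }
    return disagreements;
}
// By De Morgan, !(A && B && C) == (!A || !B || !C), which differs from
// (!A && !B && !C) in the 6 mixed cases: count_disagreements() returns 6.
```

The two conditions agree only when all three sub-conditions hold or none do; in the six mixed cases the first keeps looping and the second stops.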
| common-pile/stackexchange_filtered |
CoreData - Value by ID
Is there any simple way to retrieve property value by ID?
I have used :
[request setReturnsDistinctResults:YES];
[request setResultType:NSDictionaryResultType];
I have retrieved a dictionary of unique values like this:
(
{Category = "0x6d83070 <x-coredata://04C30A5B-A2A2-4342-B5D4-DCE1AAA339DB/Category/p15>";},
{Category = "0x5cbad20 <x-coredata://04C30A5B-A2A2-4342-B5D4-DCE1AAA339DB/Category/p16>";}
)
How to use these ID's?
How to retrieve data from Category-table having these IDs?
Will objectWithID:objectID help me?
please help
Yeah, that should be an instance of NSManagedObjectID, which you can give to -[NSManagedObjectContext objectWithID:] to retrieve the actual object.
But this begs the question: why not just fetch everything as an object instead of as a dictionary? Then you can just do:
NSArray * results = [context executeFetchRequest:request error:nil];
MyManagedObject * obj = [results objectAtIndex:0];
Category * category = [obj category];
Thanks for the answer. I use setReturnsDistinctResults:YES; when you use this you have to change the result type to dictionary. I have tried many times with the object result type, but it returns the whole table.
@Cezar setReturnsDistinctResults: requires a dictionary return type? That's news to me! Are you sure that's correct, though? I looked in the documentation and couldn't find anything...
| common-pile/stackexchange_filtered |
Using Cell Arrays in Interpolants in Matlab
I have a griddedInterpolant F and some of the input variables are in a cell array form. As an example, this is how I created the interpolant F:
[x,y,z] = ndgrid(-5:1:5);
t = x+y+z;
mycell = {x,y};
F = griddedInterpolant(mycell{:},z,t);
In reality, the size of the cell array mycell changes each time I run the code, and that's why I figured I have to use a cell array as an input. Now I'd like to call this function with the same input structure. When I have a single row for each input, everything works fine as in the following example:
testcell = {1,3};
F(testcell{:},5)
ans =
9
However, when I'd like the inputs in a vector form, the interpolant doesn't work and I get the following error:
testcell = {1,3; 2, 4};
F(testcell{:,:},[5;1])
Error using griddedInterpolant/subsref
Invalid arguments specified in evaluating the interpolant.
Because I don't know the dimensions (number of columns) in my actual cell array, I cannot break testcell apart. What is the right way to use the interpolant F in this case? I could, of course, use a for loop, but that approach might be very time consuming due to the large amount of data that I have.
what are you trying to do here? not knowing the dimensions you can make a function instead of working with cell arrays....
how would I do that?
If I understood what you are trying to do here I'd try to answer. Begin with, "I want to interpolate a 3D data set of..." ....
Let me be clearer. I don't know how many variables go in to griddedInterpolant as inputs to create the interpolant F. It is a blackbox to me. I have two functions, first to create the interpolant, which I don't have any problem with, and second to call the interpolant, where my problem lies. If F is made out of 3D data, the latter function does know it is 3D data but I cannot explicitly code it as F(x1, x2, x3) because I don't know it is 3D. I only have a cell of x that contains {x1, x2, x3} where each are nx1 vector themselves. So I'd like to call F for n cases at once without a for loop.
I don't think you can do this without a loop, and I fail to see how you wouldn't even know the dimensionality of what you want to interpolate in advance... (in your example you did use x,y,z=ndgrid...)
So each time you should define new t and F.
rahnema1, yes that is what my function automatically does.
Bla, the reason why I shouldn't care about the dimensionality comes from automation. The code must work with any dimension I feed in. I gave the example of x,y,z = ngrid(...) above because the code does know what the dimensions are, and change the grids accordingly. It is a very simple logic, actually. I can't automate the process otherwise.
I got an answer to my problem in another forum. Apparently, this problem is solved just by slightly fixing how testcell is defined at the end as such:
testcell = {[1;2]; [3; 4]};
| common-pile/stackexchange_filtered |
Splitting up large requests due to payload size issues
I've just found out that gtag limits hit payloads to 8k. If your request is larger it gets rejected with a 413 error.
In my case it's sending the GA4 view_item_list event with about 50 products.
Is it possible to split up the data into multiple payloads, and still have it register as one list view?
In GA4 it's made a little worse as it bundles multiple events into the one payload, even if it would make the payload too large. Thus other small events are also lost.
Using product data uploads would not be a great option as our solution will be used on many sites where it would be hard to get them to manage those product lists.
Sending events on only visible products may work, but it would then greatly inflate the list view counts.
How did you find out that gtag limits hit 8k? I was struggling wondering why I could only fit 20 items in my ecommerce.items array.
The 8k came from gtag documentation. Since then I did some testing and found that the limit is 16k if the payload is sent via a beacon.
You can spread the products across multiple events by keeping the numbering of their positions. This is what is normally done with impressions in Enhanced Ecommerce in Universal Analytics as well, so that the information on the products actually viewed is sent when they enter the user's viewport. So for example, in the first event the first 4 products of list 'A' will have positions 1, 2, 3 and 4; the second group of products displayed will be sent in another event, again with list 'A', at positions 5, 6, 7 and 8, and so on.
Note: sending all the products of a list together in one shot, for the reason mentioned above, makes you lose the meaning of impressions, since on opening the page the product in position 80 will be seen for Analytics as the product in position 1, but most likely the one in position 80 will almost never be seen (so it should not be sent until it has actually been displayed).
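The position idea can be sketched as a small chunking helper (chunkItems, the chunk size and the product fields are illustrative; view_item_list and index come from the GA4 event reference): split the full list into fixed-size groups while stamping each item with its global 1-based position, then send one event per group.

```javascript
// Split a product list into fixed-size chunks, stamping each item with its
// global position via the GA4 `index` parameter, so impressions keep their
// original ordering across several view_item_list events.
function chunkItems(items, chunkSize) {
  const chunks = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize).map((item, j) => ({
      ...item,
      index: i + j + 1, // 1-based position within the full list
    }));
    chunks.push(chunk);
  }
  return chunks;
}

// Hypothetical usage: one gtag call per chunk.
// chunkItems(products, 10).forEach(chunk =>
//   gtag('event', 'view_item_list', { item_list_name: 'A', items: chunk }));
```

Each event then stays well under the payload limit while the index parameter preserves where each product sat in the full list.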
Great point with the positions. I'll try and also implement some detection of products coming into view to make it more realistic.
I just checked the GA4 specs for view_item_list and don't see an option for position? https://developers.google.com/gtagjs/reference/ga4-events#view_item_list
Have you not considered the index parameter? It is in the documentation.
LOL, didn't spot that
Unfortunately GA4 tends to bundle the events and still ends up sending them all at once and exceeding the payload limit.
If you only send those that are in the port view, and on scrolling when other products entering the port view you start a new event, and so on, the bundle problem shouldn't exist.
I'll do more testing, but the bundling period seems to be quite long, so a reasonable scroll rate would bundle enough events to fail.
Hey Michele Pisani, I am new to Google Analytics 4. I was wondering where would you go about to see the impressions that you mentions in GA4 dashboard.
Update: I think the GA4 bundling is more intelligent now and will not over-bundle and break the 16k limit. I'm about to test that in the real world.
| common-pile/stackexchange_filtered |
Color Overlapping Polygons in Shapely Python
I am working with a set of overlapping circles in Shapely. I am trying to figure out how to color each circle fragment in my results list.
Here's my code:
import matplotlib.pyplot as plt
from shapely.geometry import Point, LineString, Polygon, MultiPoint, MultiPolygon
from shapely.ops import unary_union, polygonize
def plot_coords(coords):
pts = list(coords)
x, y = zip(*pts)
plt.plot(x,y)
def plot_polys(polys):
for poly in polys:
plot_coords(poly.exterior.coords)
plt.fill_between(*poly.exterior.xy, alpha=.5)
points = [Point(0, 0),
Point(2,0),
Point(1,2),
Point(-1,2),
Point(-2,0),
Point(-1,-2),
Point(1,-2)]
# buffer points to create circle polygons
circles = []
for point in points:
circles.append(point.buffer(2.25))
# unary_union and polygonize to find overlaps
rings = [LineString(list(pol.exterior.coords)) for pol in circles]
union = unary_union(rings)
result = [geom for geom in polygonize(union)]
# plot resulting polygons
plot_polys(result)
plt.show()
Here's the plot:
In this example, 7 points buffered by 2.25 results in a total of 43 polygons due to all of the overlap. I want to choose the colors for each of the 43 segments. Results is a list object, so I am wondering if I can add a variable for color to each list item, or if I need to add the color in the plot_coords or plot_polys functions.
I have tried changing the "facecolor" and "linewidth" in the plt.fill_between line, from this tutorial, but it isn't working right, so I'm unsure where the instructions for color are actually coming from.
Any help would be greatly appreciated!
I don't know if this is what you tried to do, but here I assign one color to every Polygon
import matplotlib.pyplot as plt
from shapely.geometry import Point, LineString
from shapely.ops import unary_union, polygonize
from matplotlib.pyplot import cm
import numpy as np
def plot_coords(coords, color):
pts = list(coords)
x, y = zip(*pts)
print(color)
plt.plot(x,y, color=color)
plt.fill_between(x, y, facecolor=color)
def plot_polys(polys, colors):
for poly, color in zip(polys, colors):
plot_coords(poly.exterior.coords, color)
points = [Point(0, 0),
Point(2,0),
Point(1,2),
Point(-1,2),
Point(-2,0),
Point(-1,-2),
Point(1,-2)]
# buffer points to create circle polygons
circles = []
for point in points:
circles.append(point.buffer(2.25))
# unary_union and polygonize to find overlaps
rings = [LineString(list(pol.exterior.coords)) for pol in circles]
union = unary_union(rings)
result = [geom for geom in polygonize(union)]
# plot resulting polygons
colors = cm.rainbow(np.linspace(0, 1, len(result)))
plot_polys(result, colors)
plt.show()
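If you prefer not to pull the color list from matplotlib's cm, the standard-library colorsys module can produce evenly spaced colors for the polygon fragments (a sketch; the saturation and value choices are arbitrary):

```python
import colorsys

def distinct_colors(n):
    """Return n RGB tuples (components in [0, 1]) spaced evenly around the hue wheel."""
    return [colorsys.hsv_to_rgb(i / n, 0.65, 0.9) for i in range(n)]

# e.g. colors = distinct_colors(len(result)) for the 43 polygon fragments,
# then pass them to plot_polys(result, colors) in place of cm.rainbow(...)
```

Matplotlib accepts these (r, g, b) tuples directly as the color arguments in plot and fill_between.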
| common-pile/stackexchange_filtered |
UIBarButtonItem change the icon fill to white ios
Hello, I've been trying to figure out how to change the 'fill' (which I believe is the tint color) for a UIBarButtonItem, but when I try to do it using either appearance or appearanceWhenContainedIn it doesn't work and keeps rendering the icon blue:
I wish it could be white. When I customize the button itself I am able to change the tint color and it works, but I would like to do it with appearance for all my buttons. Here is the code where I do the styling. Can someone give me a hint or a tip on how to do this kind of thing?
-(void)applyStyle {
[self styleUIButtons];
UIImage *navigationBarBackground = [[UIImage imageNamed:@"base_nav_bar" ] stretchableImageWithLeftCapWidth:0 topCapHeight:0];
[[UINavigationBar appearance]setBackgroundImage:navigationBarBackground forBarMetrics:UIBarMetricsDefault];
[[UINavigationBar appearance]setTitleTextAttributes:@{NSForegroundColorAttributeName: self.mainNavigationBarTextColor}];
[[UIBarButtonItem appearanceWhenContainedIn:[UINavigationBar class], nil] setTitleTextAttributes:@{NSForegroundColorAttributeName: self.mainNavigationBarTextColor} forState:UIControlStateNormal];
[[UIButton appearanceWhenContainedIn:[UINavigationBar class], nil] setTitleColor:self.mainNavigationBarTextColor forState:UIControlStateNormal];
UIImage* barButtonImage = [self createSolidColorImageWithColor:[UIColor colorWithWhite:1.0 alpha:0.1] andSize:CGSizeMake(10, 10)];
[[UIBarButtonItem appearance] setBackgroundImage:barButtonImage forState:UIControlStateNormal barMetrics:UIBarMetricsDefault];
[[UIButton appearanceWhenContainedIn:[UINavigationBar class], nil] setBackgroundImage:barButtonImage forState:UIControlStateNormal];
[[UIBarButtonItem appearanceWhenContainedIn:[UINavigationBar class], nil] setTintColor:self.mainNavigationBarIconColor ];
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
[UIFont boldSystemFontOfSize:12], NSFontAttributeName,
[UIColor grayColor], NSForegroundColorAttributeName,
nil];
[[UISegmentedControl appearance]setTitleTextAttributes:attributes forState:UIControlStateNormal];
}
-(UIImage*)createSolidColorImageWithColor:(UIColor*)color andSize:(CGSize)size{
CGFloat scale = [[UIScreen mainScreen] scale];
UIGraphicsBeginImageContextWithOptions(size, NO, scale);
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGRect fillRect = CGRectMake(0,0,size.width,size.height);
CGContextSetFillColorWithColor(currentContext, color.CGColor);
CGContextFillRect(currentContext, fillRect);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
- (void) styleUIButtons {
UIImage *buttonNormalBg = [[UIImage imageNamed:@"button_normal" ] stretchableImageWithLeftCapWidth:0 topCapHeight:0];
UIImage *buttonSelectedBgb = [[UIImage imageNamed:@"button_selected" ] stretchableImageWithLeftCapWidth:0 topCapHeight:0];
id appearance = [UIButton appearance];
[appearance setTintColor:self.mainNavigationBarTextColor];
[appearance setBackgroundImage:buttonNormalBg forState:UIControlStateNormal];
[appearance setBackgroundImage:buttonSelectedBgb forState:UIControlStateHighlighted];
}
When you say you want it to be white, do you mean the tree icon in the barbutton?
Yes, well it is actually white but the barbutton put it in blue
Take a look at the "Customizing Your App's Appearance for iOS 7" video from WWDC 2013. I don't know the exact answer to your question, but this video speaks about Navigationbars, UIToolbars, UITabBars and UIColor quite a bit.
To keep original image color in your UIBarButtonItem on iOS 7, try to use imageWithRenderingMode: UIImageRenderingModeAlwaysOriginal and set it as in code below:
UIImage *buttonImage = [UIImage imageNamed:@"myImage"];
UIBarButtonItem *barButton = [[UIBarButtonItem alloc] initWithImage:[buttonImage imageWithRenderingMode:UIImageRenderingModeAlwaysOriginal]
style:UIBarButtonItemStylePlain
target:self
action:@selector(action:)];
self.navigationItem.rightBarButtonItem = barButton;
UIImage *image = [[UIImage imageNamed:@"myImage"] imageWithRenderingMode:UIImageRenderingModeAlwaysOriginal]; [_barbuttonItem setImage:image];
| common-pile/stackexchange_filtered |
Java NLP: Extracting Indicies When Tokenizing Text
When tokenizing a string of text, I need to extract the indexes of the tokenized words. For example, given:
"Mary didn't kiss John"
I would need something like:
[(Mary, 0), (did, 5), (n't, 8), (kiss, 12), (John, 17)]
Where 0, 5, 8, 12 and 17 correspond to the index (in the original string) where the token began. I cannot rely on just whitespace, since some words become 2 tokens. Further, I cannot just search for the token in the string, since the word likely will appear multiple times.
One giant obstacle is that I'm working with "dirty" text. Here is a real example from the corpus, and its tokenization:
String:
The child some how builds a boaty c capable of getting scrtoacross the sea, even after findingovercoming many treachrous rous obsittalcles.
Tokens:
The, child, some, how, builds, a, boaty, , , c, , capable, of, getting, scrto, , across, the, sea, ,, even, after, finding, , , , , overcoming, many, treachrous, rous, obsittalcles, .
I'm currently using OpenNLP to tokenize the text, but am ambivalent about which API to utilize for tokenization. It does need to be Java, though, so (unfortunately) Python's NLTK is out of the picture.
Any ideas would be greatly appreciated! Thanks!
OpenNLP will return the offsets using the method Tokenizer.tokenizePos(String s), see the OpenNLP API for TokenizerME as an example for one the implemented tokenizers. Each Span returned contains the start and end positions of the token.
Whether you decide to use UIMA is really a separate question, but OpenNLP does provide UIMA annotators for their tokenizers that use tokenizePos(). However, if you just want to tokenize a string, UIMA is definitely overkill...
You can use OpenNLP Tokenizer with UIMA. The token annotator in UIMA will create a type for Token which will include the start and end indices of the token. You can also attach features like part-of-speech tag, stem, lemma, etc. to the token. UIMA has Java and C++ APIs.
You can do the same with BreakIterator instead of using any external API.
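The BreakIterator route can be sketched like this (plain JDK, no external API). Note that its default word rules differ from OpenNLP's learned tokenizer: "didn't" stays a single token here instead of splitting into did and n't:

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class TokenOffsets {
    /** Returns "token@start" strings, one per word, with start offsets into the input. */
    public static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        BreakIterator it = BreakIterator.getWordInstance(Locale.ENGLISH);
        it.setText(text);
        int start = it.first();
        // Walk boundary pairs; each [start, end) slice is a word or a whitespace gap.
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            String token = text.substring(start, end);
            if (!token.trim().isEmpty()) {  // skip pure-whitespace segments
                tokens.add(token + "@" + start);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        // For "Mary didn't kiss John" this yields Mary@0, didn't@5, kiss@12, John@17.
        System.out.println(tokenize("Mary didn't kiss John"));
    }
}
```

If you need OpenNLP-style contraction splitting, tokenizePos(String) remains the better fit; BreakIterator is the zero-dependency option.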
| common-pile/stackexchange_filtered |
The remote server returned an error: (500) Internal Server Error
I am trying to call a webservice and I am getting a 500 internal error.
The webservice is running. I use the following code.
I am getting the error at this point:
WebResponse response = request.GetResponse();
Code:
string requestxml = @"C:\request.xml";
XmlDocument xmlDoc = new XmlDocument();
xmlDoc.Load(requestxml);
StringWriter sw = new StringWriter();
XmlTextWriter tx = new XmlTextWriter(sw);
xmlDoc.WriteTo(tx);
byte[] bytes = Encoding.UTF8.GetBytes(sw.ToString());
WebRequest request = WebRequest.Create("http://localhost:3993/test.asmx");
request.Method = "POST";
byte[] byteArray = Encoding.UTF8.GetBytes(sw.ToString());
request.ContentType = "application/xml";
request.ContentLength = byteArray.Length;
Stream dataStream = request.GetRequestStream();
dataStream.Write(byteArray, 0, byteArray.Length);
dataStream.Close();
WebResponse response = request.GetResponse();
stack trace
at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)\r\n at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)\r\n at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()\r\n at System.Threading.ThreadHelper.ThreadStart_Context(Object state)\r\n at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)\r\n at System.Threading.ThreadHelper.ThreadStart()
Powershell code
$TrustAll=$TAAssembly.CreateInstance("Local.ToolkitExtensions.Net.CertificatePolicy.TrustAll")
[System.Net.ServicePointManager]::CertificatePolicy=$TrustAll
$webRequest = [System.Net.WebRequest]::Create("http://localhost:3993/test.asmx");
$webRequest.Method = "POST";
$webRequest.ContentType = "text/xml";
$con = Get-Content .\Request.xml;
$bytes = [System.Text.Encoding]::UTF8.GetBytes($con);
$webRequest.ContentLength = $bytes.Length;
$ReqStream = $webRequest.GetRequestStream();
$ReqStream.Write($bytes,0,$bytes.Length);
#$ReqStream.Flush();
$ReqStream.Close();
$response = $webRequest.GetResponse();
We need to know the actual exception, and where it's actually happening within the code.
Enable debugging, and include the stack trace.
You should run the local IIS for this. I suggest you open two Visual Studio instances for the project: one is the client, the other one is the server. After you start the server [CTRL+F5], take the address of the server URL like you have done.
After that you go to the client and make the request, it should work.
@Andrew when I call from PowerShell, the webservice responds perfectly. When I call from C#, it shows this error.
Show us the PowerShell code. It's doing something different.
If you're getting a 500, then IIS should be making an entry in the event log. Check that and see what it says. I'd hazard a guess it's a data format/conversion error, but that's just a guess.
Error 500 happens because there's something messed up in the output html page. To me, that means there's something wrong with the parameters you're passing to the web service and the WS is not smart enough to catch that problem. You might try it with a canned 'sw', one that you KNOW has worked before, and then look at YOUR 'sw' to find the problem.
@Pete Wilson, I am using the same request XML that works in PowerShell.
The 500 response indicates there is a problem in the webservice. You need to debug the webservice, not the call to the service. I would first verify that your method is being called, and then work on from there.
If your webservice is not being called, then you need to verify the XML that is being sent, and the URL for the webservice.
the webservice is running perfectly
Wrong! The webservice is NOT running perfectly if you get a 500. That is, the 500 is showing up because the WS is doing something wrong. The only question is why, and what it is doing wrong. The most likely "why" is that 'sw' is messed up and the WS is not smart enough to handle it.
@ I can call the webservice from PowerShell; I am getting the response.
I had the same error and the problem was not the webservice. The problem was in the XML request that I was sending to the server.
| common-pile/stackexchange_filtered |
Better ways to make a table (htmlservice?)
I've created my first web app (it's now kinda a web page) with the google script api.
http://bit.do/WhoKilledWho
From this experience I thought "wow getting the data is easy".
UrlFetchApp.fetch()
Collating the data was not so easy, but not too bad. Lots of loops etc.
However creating my "table" for output was horrendous.
// Blue team Output
output = "<table class='gradienttable' width = '60%'><tr><th>Blue Team</th>"
for (p in players) {
if (players[p].team != 100) {
output = output + "<th width = '14%'>" + players[p].pname + "<br>(" + players[p].cname + ")</th>";}
}
for (p in players) {
//output = output + "<tr>";
i = 1;
for (k in players){
if (players[p].team == 100) {
if (i==1) { output = output + "<tr><th width = '14%'>" + players[p].pname + "<br>(" + players[p].cname + " " +players[p].kda +")</th>"; }
if (players[k].team != players[p].team) {
output = output + "<td>" + players[p].killed[k] + "</td>";}
i++;
}
}
output = output + "</tr>";
}
output = output + "</table>";
return output;
Is there no JSON.htmltablify(myobject) or something easy like that?
Have a look at d3.js.
Wow, it looks spectacular. I must still find a way to incorporate it into my document (in google scripts api), but it looks like something that will be well worth the effort to learn how to use.
Please note that d3.js is a client library, so it runs in the browser, not on the server. Depending on where you get your data from, you can either query it in the browser via ajax or - as you did right now - fetch it on server side and generate some json structure into your html that is later used by d3.js on the client.
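For the simple case there is indeed no built-in JSON.htmltablify, but a small helper gets close (a sketch, assuming an array of flat objects that all share the same keys; anything fancier is where d3 pays off):

```javascript
// Build an HTML table string from an array of flat objects.
// Column headers come from the keys of the first row.
// Note: values are not HTML-escaped; escape them for untrusted data.
function jsonToTable(rows) {
  if (!rows.length) return "<table></table>";
  const keys = Object.keys(rows[0]);
  const header = "<tr>" + keys.map(k => "<th>" + k + "</th>").join("") + "</tr>";
  const body = rows
    .map(r => "<tr>" + keys.map(k => "<td>" + r[k] + "</td>").join("") + "</tr>")
    .join("");
  return "<table>" + header + body + "</table>";
}

// jsonToTable([{ player: "A", kills: 3 }, { player: "B", kills: 1 }])
```

The crosstab-style kill matrix in the question still needs its own pivoting loop, but a helper like this at least removes the string-concatenation noise.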
| common-pile/stackexchange_filtered |
Warning: closing unused connection n
getCommentary=function(){
Commentary=readLines(file("C:\\Commentary\\com.txt"))
return(Commentary)
close(readLines)
closeAllConnections()
}
I have no idea what is wrong with this function. When I run this in R, it keeps giving me the following warning:
Warning message:
closing unused connection 5 ("C:\\Commentary\\com.txt")
readLines() is a function, you don't close() it. You want to close the connection opened by the file() function. Also, you are return()ing before you close any connections. As far as the function is concerned, the lines after the return() statement don't exist.
One option is to save the object returned by the file() call, as you shouldn't be closing all connections only those your function opens. Here is a non-function version to illustrate the idea:
R> cat("foobar\n", file = "foo.txt")
R> con <- file("foo.txt")
R> out <- readLines(con)
R> out
[1] "foobar"
R> close(con)
To write your function, however, I would probably take a slightly different tack:
getCommentary <- function(filepath) {
con <- file(filepath)
on.exit(close(con))
Commentary <-readLines(con)
Commentary
}
Which is used as follows, with the text file created above as an example file to read from
R> getCommentary("foo.txt")
[1] "foobar"
I used on.exit() so that once con is created, if the function terminates, for whatever reason, the connection will be closed. If you left this just to a close(con) statement just before the last line, e.g.:
Commentary <-readLines(con)
close(con)
Commentary
}
the function could fail on the readLines() call and terminate, so the connection would not be closed. on.exit() would arrange for the connection to be closed, even if the function terminates early.
@hadley's comment is wise (unsurprisingly): prefer base behavior that's well-constructed to manage connections over dealing with them yourself. Having said that, I voted the above answer up for it's illustration of on.exit, for the general case where that wise advice doesn't apply.
@MattTenenbaum : So if I open a file using var1 <- readLines("filename.txt",encoding="UTF-8"). I need not close it right?
| common-pile/stackexchange_filtered |
advise for debugging `tf.data.Dataset` operations in tensorflow 2.0
What is the equivalent of pandas' df.head() for tf datasets?
Following the documentation here I've constructed the following toy examples:
dset = tf.data.Dataset.from_tensor_slices((tf.constant([1.,2.,3.]), tf.constant([4.,4.,4.]), tf.constant([5.,6.,7.])))
print(dset)
outputs
<TensorSliceDataset shapes: ((), (), ()), types: (tf.float32, tf.float32, tf.float32)>
I would prefer to get back something resembling a tensor, so to get some values I'll make an iterator.
dset_iter = dset.__iter__()
print(dset_iter.next())
outputs
(<tf.Tensor: id=122, shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: id=123, shape=(), dtype=float32, numpy=4.0>,
<tf.Tensor: id=124, shape=(), dtype=float32, numpy=5.0>)
So far so good. Let's try some windowing...
windowed = dset.window(2)
print(windowed)
outputs
<WindowDataset shapes: (<tensorflow.python.data.ops.dataset_ops.DatasetStructure object at 0x1349b25c0>, <tensorflow.python.data.ops.dataset_ops.DatasetStructure object at 0x1349b27b8>, <tensorflow.python.data.ops.dataset_ops.DatasetStructure object at 0x1349b29b0>), types: (<tensorflow.python.data.ops.dataset_ops.DatasetStructure object at 0x1349b25c0>, <tensorflow.python.data.ops.dataset_ops.DatasetStructure object at 0x1349b27b8>, <tensorflow.python.data.ops.dataset_ops.DatasetStructure object at 0x1349b29b0>)>
Ok, use the iterator trick again:
windowed_iter = windowed.__iter__()
windowed_iter.next()
outputs
(<_VariantDataset shapes: (), types: tf.float32>,
<_VariantDataset shapes: (), types: tf.float32>,
<_VariantDataset shapes: (), types: tf.float32>)
What? A WindowDataset's iterator gives back a tuple of other dataset objects?
I would expect the first item in this WindowDataset to be the tensor with values [[1.,4.,5.],[2.,4.,6.]]. Maybe this is still true, but it isn't readily apparent to me from this 3-tuple of datasets.
Ok. Let's get their iterators...
vd = windowed_iter.get_next()
vd0, vd1, vd2 = vd[0], vd[1], vd[2]
vd0i, vd1i, vd2i = vd0.__iter__(), vd1.__iter__(), vd2.__iter__()
print(vd0i.next(), vd1i.next(), vd2i.next())
outputs
(<tf.Tensor: id=357, shape=(), dtype=float32, numpy=1.0>,
<tf.Tensor: id=358, shape=(), dtype=float32, numpy=4.0>,
<tf.Tensor: id=359, shape=(), dtype=float32, numpy=5.0>)
As you can see, this workflow is quickly becoming a mess. I like how Tf2.0 is attempting to make the framework more interactive and pythonic. Are there good examples of the datasets api conforming to this vision too?
I was in a similar situation. I eventually ended up using zip
train_dataset = train_dataset.window(10, shift=5)
for step_dataset in train_dataset:
for (images, labels, paths) in zip(*step_dataset):
train_step(images, labels)
| common-pile/stackexchange_filtered |
Open GNU Scientific Library Documentation within emacs?
How do I open documentation of the GNU Scientific Library within emacs?
Which platform are you on? Which format is the documentation in? Debian/Ubuntu provide a package called gsl-doc-info which packages the documentation in Info format, which is very well suited to Emacs. After installing you can then C-h i m gsl-ref RET and browse it.
For info files you just need to add the gsl doc directory to Info-directory-list. That should be available: "sphinx is able to produce info output, so we will be providing info files with GSL". You can view the html doc via eww.
@Basil unfortunately I am running OSX. @Tobias I think I will try to get the info files from Sphinx. And thanks, I did not know about Info-directory-list.
Did you figure out the answer to your question?
| common-pile/stackexchange_filtered |
JSLint Expected '===' and instead saw '=='
Recently I was running some of my code through JSLint when I came up with this error. The thing I think is funny about this error though is that it automatically assumes that all == should be ===.
Does that really make any sense? I could see a lot of instances that you would not want to compare type, and I am worried that this could actually cause problems.
The word "Expected" would imply that this should be done EVERY time.....That is what does not make sense to me.
I ran into this too with JSLint. I made an update from == to === and it actually broke the previously working code.
If your code has more than 100 lines, it will not pass jslint, really, it's impossible.
"Broke" is too strong a word. It changed your code's meaning. If you were doing a myVar == null check, yes, big changes. ;^) Crockford's argument is that it made the code's meaning more precise, and that's hard to argue.
IMO, blindly using ===, without trying to understand how type conversion works doesn't make much sense.
The primary fear about the Equals operator == is that the comparison rules depending on the types compared can make the operator non-transitive, for example, if:
A == B AND
B == C
Doesn't really guarantees that:
A == C
For example:
'0' == 0; // true
0 == ''; // true
'0' == ''; // false
The Strict Equals operator === is not really necessary when you compare values of the same type, the most common example:
if (typeof foo == "function") {
//..
}
We compare the result of the typeof operator, which is always a string, with a string literal...
Or when you know the type coercion rules, for example, to check if something is null or undefined:
if (foo == null) {
// foo is null or undefined
}
// Vs. the following non-sense version:
if (foo === null || typeof foo === "undefined") {
// foo is null or undefined
}
JSHint - which is like an extended version of jslint - has an option to not warn you "about == null" checks in particular.
I hate this rule of JSLint. I think the real problem is that people shouldn't use operators in ways that they don't understand (ironically these are often same kind of people who would blindly replace '===' with '=='). Sure, there are a few usual cases that arise when comparing the number 0 with various strings, but if you're comparing unrelated data like 0 == 'this is a string' - Your code probably has bigger problems than double equal! If you know for sure what types you're dealing with and you know how exactly how they interact with ==, then I think you should use it.
@Jon The point of the === operator is code clarity. There is no reasonable situation to use == as it will never be as clear and understandable as the identity operator. It's not about whether you understand the operators or not, it's about using the one which makes your code more easily readable at almost no expense. The only developers that are arguing against the identity operator are solo developers and people who don't work in teams. By definition, people who's code is not reviewed by enough eyes.
@Alternatex, I'm a senior developer in a team of 10+. I also manage a popular open source project on GitHub. A properly used == neither helps nor harms code quality. From my experience, === causes more bugs than == (because beginners tend to use === improperly anyway). Yes bugs caused by == are generally harder to diagnose/reproduce but they are also much less common - It's a matter of quantity vs magnitude - Ultimately I would say that the amount of lost productivity is the about the same for using either - Negligible.
I find the == null comparison to be almost essential. The issue becomes even less important if your code is well tested.
There are in fact times when using == is essential, in order to perform the required test. if foo.toString() will perform in a predictable fashion and a plain string needs to be tested against that output then writing foo == stringToTest is a lot cleaner than foo.toString() === stringToTest.
@Alternatex If the point was clarity they shouldn't have made it TRIPLE equals! No beginner understands it. At least double equals is known from other languages. Also, there is no reasonable situation is a gross misstatement. Think about (native) Javascript types Number and String. Their existence prove the Javascript authors had certain use cases in mind for ==. Do you really think that new String('hi') === 'hi' evaluating to false is very clear? Please write a code snippet that tests your function argument against 'hi' accepting both String and string and tell me that is clear.
@Jon You say "A properly used == neither helps nor harms code quality". The problem with == is that its semantics are "complicated and unmemorable" (google that phrase to find Crockford's writeup of it which explains very well), which leads to bugs, unclarity, and extra cognitive load every time I (and you, I bet) look at a ==, and yes, that harms code quality. You say "beginners tend to use === improperly anyway". Can you give an example of what you're talking about here?
JSLint is inherently more defensive than the Javascript syntax allows for.
From the JSLint documentation:
The == and != operators do type coercion before comparing. This is bad because it causes ' \t\r\n' == 0 to be true. This can mask type errors.
When comparing to any of the following values, use the === or !== operators (which do not do type coercion): 0 '' undefined null false true
If you only care that a value is truthy or falsy, then use the short form. Instead of
(foo != 0)
just say
(foo)
and instead of
(foo == 0)
say
(!foo)
The === and !== operators are preferred.
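The masking pitfall the documentation quotes above is easy to reproduce; a minimal sketch:

```javascript
// ' \t\r\n' == 0 is true: the whitespace-only string is coerced to the number 0.
const masked = (" \t\r\n" == 0);

// === compares types first, so no coercion takes place and the result is false.
const strict = (" \t\r\n" === 0);

console.log(masked, strict); // true false
```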
I have to conclude the people from JSLint work in some very high ivory tower that they never get out of. Javascript was designed to be used with the == operator. The === is a special case... JSLint tries to make it seem like using == would somehow be wrong... However, try this: var x = 4, y = new Number(4); if (x == y) {alert('Javascript depends on == just embrace it!');}. Primitive types have corresponding classes that substitute for them (Number, String) and Javascript depends on the == operator to make comparing these natural.
Keep in mind that JSLint enforces one persons idea of what good JavaScript should be. You still have to use common sense when implementing the changes it suggests.
In general, comparing type and value will make your code safer (you will not run into the unexpected behavior when type conversion doesn't do what you think it should).
Plus it cannot be as context-smart as a programmer. It's just working on the basis that most users get tripped up by auto type conversion inherent in the system (like violence - "help help I'm being repressed!")
Triple-equal is different to double-equal because in addition to checking whether the two sides are the same value, triple-equal also checks that they are the same data type.
So ("4" == 4) is true, whereas ("4" === 4) is false.
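Those two comparisons can be checked directly:

```javascript
// == coerces the string "4" to the number 4 before comparing values.
const looseEqual = ("4" == 4);

// === sees a string on one side and a number on the other, so it is false.
const strictEqual = ("4" === 4);

console.log(looseEqual, strictEqual); // true false
```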
Triple-equal also runs slightly quicker, because JavaScript doesn't have to waste time doing any type conversions prior to giving you the answer.
JSLint is deliberately aimed at making your JavaScript code as strict as possible, with the aim of reducing obscure bugs. It highlights this sort of thing to try to get you to code in a way that forces you to respect data types.
But the good thing about JSLint is that it is just a guide. As they say on the site, it will hurt your feelings, even if you're a very good JavaScript programmer. But you shouldn't feel obliged to follow its advice. If you've read what it has to say and you understand it, but you are sure your code isn't going to break, then there's no compulsion on you to change anything.
You can even tell JSLint to ignore categories of checks if you don't want to be bombarded with warnings that you're not going to do anything about.
I did not ask "What is ===", so I am not sure why you answered it.
@Metropolis: if for no other reason, then as background in case someone else read the answer who didn't know. I did try to answer your question in the paragraphs after that, though.
@Spudley + 1 for the additional and useful information
yep, it's 10-100 times faster: jsperf speed test
A quote from http://javascript.crockford.com/code.html:
=== and !== Operators.
It is almost always better to use the === and !== operators. The == and != operators do type coercion. In particular, do not use == to compare against falsy values.
JSLint is very strict, their 'webjslint.js' does not even pass their own validation.
Nice clarification. That's true, about webjslint.js not validating--though most of the errors I see right now have to do with spacing. Clearly, one must use common sense and reasonable judgment when reviewing JavaScript using JSLint.
The use of the word always automatically disqualifies this quote as wisdom. Smart programmers aren't dogmatic. They use what is best in the given situation. And they welcome and embrace any tool built in to the very core of the language, not just dismiss it with a just never touch it. Bottom line: My code is shorter (and not just from saving one = character), thus my site loads faster, at less bandwidth cost, thus my user is better served.
If you want to test for falsyness. JSLint does not allow
if (foo == null)
but does allow
if (!foo)
Use ===, which JSLint recommends.
@NarawaGames This solution is perfectly acceptable.
This answer is not good. The thing is that both of these mean something else. foo == null checks for null or undefined. !foo checks for null, undefined, 0 and empty string.
@Markos This answer is meant to be a helpful alternative to make JSLint happy and keep your code logic intact, not to be an exact equivalent. This is why I prefixed the answer with "If checking for falsyness"
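As the comment above points out, the two checks are not interchangeable; a small sketch of exactly where they diverge:

```javascript
// `foo == null` matches null and undefined only.
const isNullish = (foo) => foo == null;

// `!foo` additionally matches every other falsy value.
const isFalsy = (foo) => !foo;

// Values on which the two checks disagree:
const divergent = [0, "", false, NaN].filter((v) => isFalsy(v) && !isNullish(v));

console.log(divergent.length); // 4
```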
To help explain this question and also explain why NetBeans (from) 7.3 has started showing this warning this is an extract from the response on the NetBeans bug tracker when someone reported this as a bug:
It is good practice to use === rather than == in JavaScript.
The == and != operators do type coercion before comparing. This is bad because it causes ' \t\r\n' == 0 to be true. This can mask type errors. JSLint cannot reliably determine if == is being used correctly, so it is best to not use == and != at all and to always use the more reliable === and !== operators instead.
Reference
I found this error just now using Netbeans as well. It is strange that they would treat this with warning severity due to the bizarre example case they provided.
I mean, it is true, but there are many use cases in which the person knows that the two compared things will be of the same type, so it seems strange that this odd case, where a carriage return might be compared to the number zero, is the reason all usages of == are considered wrong. I am finding out, though, that === is faster, since no type conversions are done. I'm surprised I didn't find this out before NetBeans.
@oMiKeY Yes I see what you mean, they could have given more realistic examples!
Well it can't really cause problems, it's just giving you advice. Take it or leave it. That said, I'm not sure how clever it is. There may well be contexts in which it doesn't present it as an issue.
but why is the word "Expected" used? That makes it sound like you should always do this.
The evaluator is looking for a valid response, as it's trying to validate it. If it doesn't get a valid response, then it's not what it expected. The validator starts with the assumption that everything is fine and then points out errors as it traverses the code. It doesn't necessarily understand what is an invalid response, it just knows when it sees a non-valid one. It could also work in the reverse, deliberately searching for bad code according to rules of what is bad. Whitelisting vs blacklisting.
The question was "Does that really make any sense?". Given that question, yes, it does. Also, it was five years ago. Jesus.
You can add this on the previous line to disable these warnings.
// eslint-disable-next-line
Part of my PHP code is not working on the client's server
I am using this code to post an address to a form in an iframe. Everything works perfectly locally and on my own web server, but when I try it on the client's server, it just displays this code as text in the form field in the iframe. Any idea why it wouldn't work here? The client is running a slightly newer version of PHP than me as well.
<iframe name="iFrameName" id="iFrameName" frameborder="0" height="600px" width="700px"></iframe>
<?php
if ($_POST["FormtoCRM"] == "Login")
{
?>
<form action="http://www.mywebsite.com/iframe.cfm" method="post" target="iFrameName" id="FormtoCRMForm" style="display:none">
<input type="text" name="address" value="<?= $_POST['address'] ?>">
</form>
<script type="text/javascript">
document.getElementById("FormtoCRMForm").submit();
</script>
<?php
}
?>
Try this code
<?php echo $_POST['address']; ?>
My guess is that short open tags are not enabled in your PHP configuration.
Most likely the short_open_tag configuration option is turned off on their hosting environment. Instead of <?= $_POST['address'] ?> use <?php echo $_POST['address'] ?>.
Did people use to marry much younger during the last millennium?
I've frequently heard it stated that "people used to marry at much younger ages" historically. Recently, these kinds of statements have tended to show up in sociological discussions about young people choosing to get married later. Many such studies tend only to trace data back through the 20th century, often stopping at 1960 or so, when age at first marriage was particularly low.
In historical studies of marriage, the more common statistic seems to be the "age of consent" for marriage, which was often lower than today. But those numbers tell us little about the average age that most people married.
I've come across a few references here and there which have tried to measure actual marriage ages historically, and they often seem to come up with numbers that are perhaps surprisingly higher than we might expect. For instance, for 17th-century England, Wikipedia notes that a survey of 1000 marriage certificates showed the average age for brides was 24 years and 27 years for the grooms. I've seen a number of other sources which have indicated that first marriage ages were often in the early to mid-20s in parts of Western Europe in the past few centuries. (Obviously, this can vary significantly by region as well as time period.)
Based on data like this, my sense is that our perception of young marriage ages has been skewed by selection bias: we know more about aristocratic families, and they tended to arrange marriages at particularly young ages for political reasons.
Moreover, at least for the U.S., it seems we may have met a local minimum of median age at first marriage in the 1950s, if these statistics derived from census data are to be believed.
Data like this has made me wonder if our perception of younger marriages on average is a historical myth (or at least needs significant qualification), perhaps based only on our perception that parents and grandparents may have married younger on average than current generations. But were the mid-1900s actually historical outliers, or at least part of a more complex picture?
IN SUM: Prior to the last century or so, what evidence do we have for the statement "people used to marry at much younger ages," in general? Some historians have asserted that people even used to marry basically upon reaching puberty; is there evidence to support or debunk such claims? Obviously there are plenty of historical examples of marriage at a young age, but does that also relate directly to a similar trend in median marriage ages (or not)?
Since marriage records, legal issues, and customs in Western marriage have varied by time and region over the past 1000 years or so (i.e., since the beginning of some regulation and standardization of Western marriage), do we even have enough information to speculate on such general trends over time? How much variance is in the data? And if we can observe any broad trends, what are they?
(I realize the answers here may vary significantly by region. I'm particularly interested in Europe, as well as the U.S./Canada, but data on other regions could be interesting as well.)
EDIT: I've edited the question somewhat to try to make clear what I'm asking for here. I do NOT expect a general history of world marriage ages over the past 1000 years. (However, if anyone can point to a reputable external resource that surveys such information, that would obviously be a superior contribution toward answering this question.) I began the question with a common historical claim about an overall marriage trend, and I'm interested in whether actual historical data supports that claim in general -- OR whether there are other general trends, or whether it's all so inconsistent that we can't make any useful general observations.
+1 for own survey and research. I don't like this topic, but appreciate and wish other questions be so well written
We only have bits and pieces, but (as with the history of married couples sleeping together) it appears to be a phase that goes in and out of fashion.
http://en.wikipedia.org/wiki/Marriageable_age#History_and_social_attitudes
Not making this an answer because your question deserves better.
The thing I find interesting in those #'s is the delta age between men and women. It started large (about 4 years), briefly shrunk to almost nothing at the end of WWII, and has stayed at about 2 years ever since, even as the absolute age of both has been rising.
Too broad. I suggest you specify median age of marriage in southern England (good data sets).
@SamuelRussell - I appreciate the recommendation, and I hope you might provide further insight if you have information on that particular region. However, I would note that your comment is actually part of the question, i.e., do we have reasonable estimates? Presumably if we do, they are based on studies of particular regions, but I don't know those ahead of time. In any case, I'm interested in the broader question of how much variance there is historically (or whether ages haven't varied as much as we think), rather than the exact range of ages in region X during decade Y in century Z.
The question is too broad because it asks for an account spanning a whole 1000 years. And yes we do have estimates for many regions and periods. Marrying around 20 years old is fairly normal.
@Semaphore - I'll try to edit the question to make it more clear. I actually do NOT want "an account spanning a whole 1000 years" in any detail. While a longer answer could be possible, what I want to know could probably be handled in a paragraph or two. I'll try to clarify.
@Semaphore - frankly, I'm mostly wondering if we have sufficient evidence to debunk the "marry upon puberty" (or at least much younger) theories. But I'm also interested in overall variance, e.g., you mention our current "anomalous" average, but the Wiki passage I cited mentions average marriage ages that are more like the mid-20s in the 1600s, which might not be quite like our modern situation, but is also significantly older than, say, the 1950s generation.
@Athanasius We prefer to have focused questions that have one answer, even if related. You should put that in a separate question.
@Semaphore - thanks for your help. I'm really trying to figure out how to ask this question, which seems pretty simple. If I asked a question like "How has the population of Europe generally trended and varied over the past millennium?" the answer would be: "It mostly went up -- here's the lowest number, here's the highest number, and there were probably some dips around a few big plagues and wars." I don't see how you could answer a question about whether there's a general trend in marriage stats without noting how much variance there is, particularly given the difficulty of sampling here.
@Athanasius Whether it is true that people historically marry at puberty, is not the same question as whether there is a general trend in marriage age. One is a specific, (dis)provable claim. The other is nebulous and broad, although interesting enough that I hope is salvageable (but regardless, belongs in a separate question). Also marriage patterns (which varies significantly from region to region - even within Europe, let alone across the Atlantic) are relatively more complicated than tallying up population estimates.
As life span increases, it becomes less necessary to marry early.
Couldn't edit so: I haven't seen evidence, of course, but scientifically speaking that's the logical conclusion. If you go back far enough marriage wouldn't have been a thing, and coupling would happen when sex happened (as early as possible). As time progresses social influence would leave its mark, but I'd hazard a guess that life span is the core determinant.
@Semaphore - Thanks... sorry, I thought I had accepted the day the first answer came in. I just realized I had only upvoted it.
I think often everyone would think of them as being married because they acted like they were married and therefore disqualified themselves from marrying anyone else, but the formal paperwork was not done until the first child was about to arrive… All we have records of is the formal paperwork…
Not really.
Generally speaking, most European women married in their early to mid twenties, to men in their mid to late twenties. The age gap for commoners, i.e. the vast majority of the population, was typically not large. Unfortunately the question declined to define how much younger "much younger" is supposed to mean, but most Europeans married well after the onset of puberty.
Overall, there does seem to be an upward trend in marriage ages. However, we have little statistical evidence prior to the 17th century. The paucity of records makes claims of trends over the whole millennium rather hazardous.
Both spouses married late in Europe during the Early Middle Ages. Citing Carolingian survey data, the late David Herlihy argues[1] that prior to 1000 or so, barbarian marriage customs - marrying in late twenties to similarly aged spouse - predominated in Western Europe. From about A.D. 1,000, however, the value of women appears to have declined. Rather than receiving a bridal price from the husband, families now paid dowries to unload daughters much earlier. The age of first marriage for women thus plummeted to their late teens, but largely left that of men unaffected.
For reasons that remain unclear, the situation began to be reversed at some point during the High and Late Middle Ages. This gave rise to the curious nuptial phenomenon known as the northwestern European pattern, which has dominated Western Civilisation to this date. Proposed in his highly influential 1965 work[2] by John Hajnal, this paints a picture where both spouses married late and established their own households, independent of their parents. Another feature is that significant proportions of both men and women abstained from marriage completely. Under Hajnal's classification, this system prevailed west of an imaginary line running from Trieste to St Petersburg.
Hajnal's pattern is sometimes thought to originate from the value of retaining a daughter's labour on Late Medieval farms of Western Europe. Later on, the habit of young women and men to work in other households also delayed marriages. This contrasts with the Mediterranean situation, where domestic servants were more likely to be married and widowed. Other arguments propose that the need for financial security (due to the habit of relocating away from home upon marriage) forced delays.
Data from the Middle Ages are scarce, but the earliest statistical records from the Late Medieval and Early Modern periods demonstrate a relatively high, and increasing, age at first marriage. By the Late Middle Ages, Dijonese women were known[3] to marry at 20. This rose to 21 during the 16th century, and everywhere in France the mean age of first marriage seems to have climbed to about 25 by the 18th.
Similarly, in most German regions, women married in their twenties - averaging between 22.7 to 28.5 in one study[4]. Demographic data from the late 17th century[5] reveal that commoner women from Giessen and Heuchelheim on average first married when just over 24, although Mainz's average was much lower at 21.3.
Likewise, Medieval English couples are thought to have married in their early to mid twenties[6]. By the Early Modern period, 17th century English women were on average marrying at 25.6-26.2, to men 28.1 years of age, although this declined slightly subsequently.
In the Netherlands, by the middle of the period, the mean ages at first marriage for women were estimated to be about 20-21 in mid 16th century Leiden, and 23.5-25 in late 16th century Amsterdam. Both groups married husbands who were on average 1-2 years older. These numbers further increased after the 17th century.
Overall, the evidence is that European marriage patterns resembles that of the 20th century.
Not all of Europe followed the same pattern. Southern European women were more likely to marry young to older men, although ages were generally still around 20. A landmark study[7] of 1427 Tuscany reveals the mean age of first marriage there to be 19 for women, but 28 for men.
Subsequent studies[8] of 15th and 16th century Florence confirm that women of all stations married at 18 to 19, to men between 27.7 and 31.2. However, men with higher socioeconomic status tended to marry older, a trend not reflected in women's marriage patterns.
While the Florentine situation is often regarded as unusual, it is not unique. Another study[9] of 15th century Ragusa showed that women were on average betrothed at 18, but gave birth to their first child when 22. From this the authors surmised that Ragusan couples consummated their marriages when the women were 21 and men 36. In this case, local cultural norms seemed to be the main culprit.
Nuptial patterns in colonial North America were also different from the colonists' Western European motherland. A lack of eligible women relative to available bachelors resulted in fierce competition for potential brides[10]. This led to a reduction of women's age at first marriage in the 17th century, though it gradually caught up to European norms as the colonies grew over the following centuries.
However, few colonial couples married as young as earlier writers had once assumed[11]. In early English colonies, the average age at first marriage for women was late teens to very early twenties[12], roughly five years lower than that of England. In Massachusetts[13], women married around 19 to 20 in the early 17th century. Maryland women married even younger at 17 to 18, while for Virginians it was closer to 21[14].
The difference during the early colonial period was much smaller for men, who married in their mid to late twenties in the colonies. This was only a couple of years lower than that of English men. Mirroring developments in England, the gap in ages between spouses closed over time. Women's age at first marriage climbed back up to almost 24 by the 19th century, while men's dropped slightly to around 25-26. In both cases, the mean ages of the different colonies evened out over time.
Many cultures elsewhere in the world did have lower marriage ages than contemporary Europeans. For instance, Song China at the start of this period had legal minimum ages of marriage set at 16 for men and 14 for women. A survey[15] of tomb inscriptions found that, on average, women married when slightly over 18 to men slightly over 23. Similarly, in Japan during the early modern period, women were found[4] to have married around 16.7 to 22.7. By the late 18th and 19th centuries, especially in areas of high commercial development, women's mean age of marriage had risen to around 22-25[16].
References:
[1] Herlihy, David. Medieval Households. Harvard University Press, 1985.
[2] Hajnal, John. "European Marriage Patterns in Perspective." (1965): 101-43.
[3] Rossiaud, Jacques. "Prostitution, jeunesse et société dans les villes du Sud-Est au XVe siècle." Annales (1976): 289-325.
[4] Murayama, Satoshi. "Regional Standardization in the Age at Marriage: A Comparative Study of Pre-industrial Germany and Japan." The History of the Family 6.2 (2001): 303-324.
[5] Hurwich, Judith J. Noble Strategies: Marriage and Sexuality in the Zimmern Chronicle. Vol. 75. Truman State Univ Press, 2006.
[6] McSheffrey, Shannon. Marriage, Sex, and Civic Culture in Late Medieval London. University of Pennsylvania Press, 2006.
[7] Herlihy, David, and Christiane Klapisch-Zuber. "[Tuscans and their families: a study of the Florentine catasto of 1427]." Editions de lEcole des Hautes Etudes en Sciences Sociales Ouvrage 8 (1985).
[8] Siegmund, Stefanie Beth. The Medici state and the Ghetto of Florence: the construction of an early modern Jewish community. Stanford University Press, 2006.
[9] Rheubottom, David B. "“Sisters First”: Betrothal Order and Age At Marriage in Fifteenth-Century Ragusa." Journal of Family History 13.4 (1988): 359-376.
[10] Haines, Michael R., and Richard H. Steckel, eds. A Population History of North America. Cambridge University Press, 2000.
[11] Lancaster, Jane Beckman, and Beatrix A. Hamburg, eds. School-age Pregnancy and Parenthood: Bisocial Dimensions. Transaction Publishers, 1986.
[12] Smith, Daniel Scott. "The Demographic History of Colonial New England." The Journal of Economic History 32.01 (1972): 165-183.
[13] Demos, John. "Notes on life in Plymouth Colony." The William and Mary Quarterly: A Magazine of Early American History (1965): 264-286.
[14] Wells, Robert V. "The population of England's colonies in America: Old English or new Americans?." Population Studies 46.1 (1992): 85-102.
[15] Fang, Jianxin (方建新). "宋代婚姻禮俗考述" [A Study of Marriage Rites and Customs of the Song Dynasty]. 文史 (Wenshi) 24 (1985): 158.
[16] Saito, Osamu. "The Third Pattern of Marriage and Remarriage: Japan in Eurasian Comparative Perspectives." Marriage and the Family in Eurasia: Perspectives on the Hajnal Hypothesis (2005): 165-193.
Thanks so much for this. And thanks for the references.
Did more single man go out then single ladies in the "early colonial period", hence did the 1st generation immigrant man take the ladies that were 2nd generation immigrants?
In the early middle ages (at least) it was not uncommon for nobles and royalty to marry very young. For example, William the Atheling's wife Matilda of Anjou was no more than 12 when they were married in 1119. and Edward I's son was due to be married around his thirteenth birthday but died a few weeks before the nuptials. Eleanor of Castile was about 13 when she married Edward I, who was 15 at the time.
@LarsBosteen Matilda could not be less than 12 - most sources puts her at 14 in 1119. In any case, such marriages were mostly restricted to the high nobility and royalty due to political needs. While you can find plenty of those examples, they are not representative of the population at large, nor even of the general nobility. Further, even if the marriage was consumated early, regular cohabitation likely did not occur until their late teens. Hence, "the marked tendency of most English and French queens to bear children only in their late teens or early twenties" - Eisenbichler (2002)
@Semaphore. I should have been clearer - my comment was to point out an exception relating to nobility & royalty, not to contradict anything in the answer. Concerning Matilda of Anjou's age, my source is C. Warren Hollister 'Henry I' (Yale University Press, 2003). I would be interested in other sources relating to William the Atheling as I am currently working on a documentary and am looking to be as balanced / accurate as possible.
Wrt the northwestern European pattern: I have in the back of my mind (though no sources at hand) that around the end of the High Middle Ages / beginning of the Late Middle Ages, the population density in Europe reached the maximum that could be fed with the existing agricultural technology, and that surplus population then died in (or because of) wars and frequent famines (later on, the situation was somewhat relieved by emigration to colonies). I'm wondering whether postponing marriage or not marrying at all are also reactions to this situation.
Coming to think of it: nutritional status is known to have a considerable influence on puberty (i.e. well nourished -> early puberty). http://www.mum.org/menarage.htm lists a considerable amount of historical data, giving the average age at menarche for 19th century Europe somewhere between 14.6 and 16.6 yrs (US at the end of the 19th century and medieval Europe: 12 - 14). End of puberty would be 3 - 4 years later, so in 19th century Europe (and probably also earlier for populations that happened to be exposed to food shortage or famine) the early 20s would for the women translate to "just out of puberty"
"the value of women appears to have declined" I'm not sure you want to be spreading this around :)
Others have already provided excellent information and cites. There are a couple other things to look at.
There may actually be a proxy that you can use to fill in data that you can't directly obtain: the number of children a woman bears should be related to her marriage age. The larger the family size, the younger the marriage age. Another proxy might be the length of one generation (which would indicate the average age of the mother when having any of her children). Finally, a proxy you could use is when property (farms etc.) was passed through the generation. In many regions, they were passed on only from father to the firstborn son, which would give you a good indication of the age of the father when he had his first child, and thus indirectly of the marriage age of men (of course, daughters as first children would be a confounding factor here!)
In central Europe, you also will have a hard time going back 1000 years with your research, because the Thirty Years' War (1618-1648) destroyed most relevant records, if they were ever even collected.
The definition of marriage itself has changed multiple times over the last millennium.
Marriage wasn't always the formal recorded matter it is today, and in some cases it may not even have been one-man, one-woman.
Based on data like this, my sense is that our perception of young
marriage ages has been skewed by selection bias: we know more about
aristocratic families, and they tended to arrange marriages at
particularly young ages for political reasons.
I think you may actually be subject to another selection bias of your own: most marriages weren't recorded until, IIRC, around the 16th to 19th century, depending on the region. In medieval Europe, what we today would call "shacking up" was the very definition of marriage - you were married when one partner moved in with the other, and maybe your family or the church held festivities for the occasion.
The purpose of marriage has changed multiple times.
Marriage could be for love.
Marriage could be for political reasons (not just in the higher levels, but potentially even at the village level).
Marriage could be for procreation.
Marriage could be for social security.
Marriage could be for mutual protection.
Marriage could be for division of labor.
Different purposes would lead to different optimum marriage ages.
Biological factors play a major role.
People, and in particular women, have a limited age range when they can procreate. When maximizing procreation (whether for its own sake, or to have many children providing social security) was the goal of marriage, that would argue for an earlier marriage age.
Infant mortality would also call for women having more time for plenty of pregnancies.
Maternal mortality would probably call for higher average marriage age. Very young mothers would be at higher risk (and people would have known that).
Today, we are probably near the upper end of the age range where marriage for procreation purposes is feasible at all, and then only with very small family sizes. That would support the notion that historically, people did marry younger (although it does not say how much younger).
Marrying late is, in a way, a luxury. During times of turmoil or disease, people would have children (and thus marry) as early as possible. During times of peace, prosperity, and longevity, people could afford to wait longer. Incidentally, it seems that @Semaphore's data also follows the same general pattern: a higher average marriage age in wealthy Florence, for instance, and it seems that generally the higher marriage ages seem to correlate with peaceful periods.
Based on all of that, you will probably find the following patterns:
Average marriage age varied with conditions, both up and down.
Average marriage age among unrecorded marriages is likely to be lower than among recorded marriages (because ordinary people had less security and more need for many children).
Today's average marriage age is likely near the historic peak.
Incidentally, one way to validate this is to look at international comparisons today. Today's developing countries may have their own issues, but in many ways, especially when it comes to the basics of humanity, very much resemble medieval Europe.
Another possible factor that may postpone marriage age would be if marriage is acceptable only if the couple could more or less provide for themselves plus their offspring. This would mean that at least the husband would need to have reached a certain experience in his profession but probably also for the wife to be experienced enough to manage the household side of the business or farm (and somehow I imagine this would include servants as they couldn't risk their position too early). Again, this may be more important (or more sensible to expect) in peace time.
@cbeleites Excellent point. The age when people became self-sufficient like that has also changed dramatically. 500 years ago, you could be self-sufficient while being illiterate and, by today's standards, under age. Today, at the least, you need a high school degree, if not a college degree.
Not so sure about the age of being self-sufficient: I have in the back of my mind that "professional" hunters and gatherers of stone-age-like cultures reach their maximum productivity around age 40 (after maximum physical power at age 20) due to gain in experience. I wouldn't be too sure that a, say, couple of 15-year-olds would have been considered to have reached a level of self-sufficiency. And as for today: I'm living in a European welfare state (Germany) so this is basically a non-question at the level of a higher living standard than the upper middle class had 60 years ago. Still, that aside, ...
... if instead of attending university the couple would each learn a trade nowadays, they can feed their family in their early 20s. And at the time the urban academics graduate, such a say, nurse + HVAC couple in a rural area may be quite a bit further towards owning their house than the urban academic elite ever gets... (but that's politics, not history...)
Another possible counterfactual, from a community of European origins faced with isolation, low population and scarce resources: the consequences on women's average age of marriage were dramatic https://en.wikipedia.org/wiki/Pitcairn_sexual_assault_trial_of_2004#Historical_background
I'm surprised to hear of women marrying so late, since having children late could be an issue. But maybe they married late due to the fear of death in childbirth. I would also like to know what the differences could be in the different classes. Surely the age of marriage for the aristocracy could well be different to those of the peasant class. I don't think this topic should ignore that issue.
There are a lot of parish records in England which could be examined for this topic, and a lot of those are available online.
In my own family history research, which of course is only a small sample, I have noticed in Southern England, among the lower classes in the 18th and early 19th centuries, that marriages took place at about age 20, with male and female about the same age. Usually they married when the woman was already pregnant (this does actually appear to be very common). I do have just one marriage from the 16th century, where the man was in his late 20s and the woman in her mid teens, but I don't know the class of that marriage, and yes, it is just one marriage stat.
A lot of parish records show the marriage date to be VERY close to the date of the first child..... So maybe the records are not showing what most people think of as marriage, but are recording a marriage that happened some time before.
Marrying late is not so surprising when you realize that it's an efficient way to reduce the number of children when households had a limited ability to raise them or to provide means of living (mainly land) for each of them as adults.
Also: maybe it wasn't as late as it seems to us: the onset of puberty has become considerably earlier (e.g. http://www.mum.org/menarage.htm has graphs and references). Average age at onset of puberty for girls around 1830 in (northern) Europe was almost 17, end of puberty would then have been at about 20.
| common-pile/stackexchange_filtered |
Sed file from row number stored in array
I've an array such
echo ${arr[@]}
1 13 19 30 34
I would like to use this array to extract rows (1, 13, 19, 30 and 34) from another file with sed. I know that I can use a loop, but I would like to know if there is a more straightforward way to do this. So far I've not been able to do it.
Thanks
what would you do on those rows/lines from "another file"?
Just want to split a file in two based on the array index. Array numbers are rows.
@biorunner88, that's slightly different. How should the numbers 1 13 19 30 34 help to split the file? Can you post the file?
so line# 1, 13,19...34 in one file, the rest lines in the other file?
I've an array arr with numbers. Those numbers stored in the array are lines that I want to extract from a file. So I want to get rows 1, 13, 19, 30 and 34 from a file. Did I explain it properly now? Thanks
sed solution:
a=(1 13 19 30 34)
sed -n "$(sed 's/[^[:space:]]*/&p;/g' <<< ${a[@]})" file
This will extract 1, 13, 19, 30 and 34th rows from file
It seems to be doing what I wanted. Thanks
You can execute a single sed command on each line by appending the command and a semicolon to each line, and run the result as a sed program. This can be managed in a compact way using bash pattern replacement in variables and arrays; for example, to print the selected lines, use the p command (-n suppresses printing the unselected lines):
sed -n "${arr[*]/%/p;}" file
Works fine also with more complex commands like s/from/to/:
sed "${arr[*]/%/s/from/to/;}" file
This will perform the replacement only on the selected lines.
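For cross-checking the sed output, the same extraction is a few lines in any scripting language; here is a small Python sketch (line numbers are 1-based, matching sed's addressing):

```python
def extract_lines(path, line_numbers):
    """Return the lines of `path` whose 1-based numbers are in `line_numbers`."""
    wanted = set(line_numbers)
    with open(path) as fh:
        return [line.rstrip("\n")
                for n, line in enumerate(fh, start=1)
                if n in wanted]
```

`extract_lines("file", [1, 13, 19, 30, 34])` returns the same rows as the generated `sed -n '1p;13p;19p;30p;34p' file` above.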
awk -v rows="${arr[*]}" 'BEGIN{split(rows,tmp); for (i in tmp) nrs[tmp[i]]} NR in nrs' file
Explain your solution please.
You could use awk and the system function to run the sed command
awk '{ for (i=1;i<=NF;i++) { system("sed -n \""$i"p\" filename") } }' <<< ${arr[@]}
This can be open to command injection though and so assess the risk accordingly.
I don't downvote answers @SO, however I would say, your answer deserves one.
I did downvote it. shell calling awk to call system() to call shell to call sed is crazy.
| common-pile/stackexchange_filtered |
Prove or Disprove that $\left|\frac{e^{2i\theta} -2e^{i\theta} - 1}{e^{2i\theta} + 2e^{i\theta} -1}\right| = 1$
Prove or disprove that
$$\left|\frac{e^{2i\theta} -2e^{i\theta} - 1}{e^{2i\theta} + 2e^{i\theta} -1}\right| = 1$$
This is a step in an attempt to solve a much larger problem, thus I'm fairly sure it's true but not absolutely sure. It looks like it should be simple but it's resisted all my attempts so far.
If we may know, what is the "larger problem"?
I don't see why the larger problem matters, the only reason I mentioned it was to give the reason why I wasn't sure if it was true or not.
@Thoth, it matters because my intellectual curiosity matters to me. That is why I asked. Also, I believe it may have piqued the curiosity of others.
I'm trying to find a conformal map from the slit open unit disk to the open unit disk which takes boundary to boundary. I have my conformal map but I'm not sure if conformal maps always take boundary to boundary, thus I was trying to prove it did for my particular conformal map, which in this case is $-\frac{z-i2\sqrt{r}e^{i\frac{\theta}{2}} - 1}{z+i2\sqrt{r}e^{i\frac{\theta}{2}} - 1}$ This conformal map won't give exactly the case I'm looking at above; I tweaked it a bit, but hopefully you get the idea.
There is indeed a larger problem : $\forall \theta_1,\theta_2, \ \ \left|\frac{e^{i(\theta_1+\theta_2)}+1-2e^{i \theta_1}}{e^{i(\theta_1+\theta_2)}+1-2e^{i\theta_2}}\right|=1$
This is true.
$$|z - \frac{1}{z} -2 | = |z - \frac{1}{z} + 2|$$
where $z = e^{i\theta}$, since $\Re(z - \frac{1}{z}) = 0$ (indeed $z - \frac{1}{z} = 2i\sin\theta$).
Geometrically, $z - \frac{1}{z}$ lies on the $y$-axis (perpendicular bisector of $(2,0)$ and $(-2,0)$).
Ah nicely done, thank you.
@Thoth: You are welcome!
Divide by $e^{i\theta}$ the numerator and denominator :
$$\left|\frac{e^{2i\theta} -2e^{i\theta} - 1}{e^{2i\theta} + 2e^{i\theta} -1}\right|=\left|\frac{e^{i\theta} -2 - e^{-i\theta}}{e^{i\theta} +2 - e^{-i\theta}}\right|$$
Think of the complex conjugate of the numerator and conclude!
Seems like you were getting at the same thing as Aryabhata, +1.
@Thoth: yes. I hope that both helped, fine continuation,
Taking the squared norm of the numerator and denominator separately,
$$
\eqalign{
\left|e^{ 2i\theta}\pm 2e^{ i\theta}-1\right|^2 &=
\left(e^{ 2i\theta}\pm 2e^{ i\theta}-1\right)\cdot
\left(e^{-2i\theta}\pm 2e^{-i\theta}-1\right)\\ &
\matrix{=& 1 & \pm2e^{ i\theta} & -e^{2i\theta} \\\\
& \pm2e^{-i\theta} & +4 & \mp2e^{ i\theta} \\\\
& -e^{-2i\theta} & \mp2e^{-i\theta} & +1 }
\\\\ &= 6 - 2\cos 2\theta \pm 4\cos\theta \mp 4\cos\theta
\\\\ &= 6 - 2\cos 2\theta\,.
}
$$
Notice, however, that this no longer depends on the sign,
i.e. it is the same for the numerator and denominator.
But I admit, I like @Raymond's and @Aryabhata's answers much better!
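As a numeric sanity check (not a proof), the modulus can also be evaluated at a few sample angles; a short Python sketch:

```python
import cmath

def ratio_modulus(theta):
    """|(e^{2i*theta} - 2e^{i*theta} - 1) / (e^{2i*theta} + 2e^{i*theta} - 1)| for real theta."""
    z = cmath.exp(1j * theta)
    return abs((z * z - 2 * z - 1) / (z * z + 2 * z - 1))
```

The denominator vanishes only at $z = -1 \pm \sqrt{2}$, neither of which lies on the unit circle, so the ratio is defined for every real $\theta$.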
| common-pile/stackexchange_filtered |
fill the second combo box based on the first combo box
I have 2 comboboxes in an Excel userform and I need the first combobox to affect what is listed in the second. The table which I am getting the data from, on a sheet in Excel, looks like the image attached:
The data goes on with business names and individuals. What I would like ComboBox2 to do is: when, say, DT Limited is selected in ComboBox1, ComboBox2 will only show John and Steve for selection.
Can anyone help, I am new to this?
Private Sub UserForm_Initialize()
With ComboBox1 'Your first combobox
.Clear
.AddItem "Users"
.AddItem "Salary" ' Add main combobox Category
End With
End Sub
Private Sub ComboBox1_Change()
Dim index As Integer
index = ComboBox1.ListIndex
ComboBox2.Clear ' clear your dependent combobox
Select Case index
Case Is = 0 ' Add Subcategory Display items
With ComboBox2
.AddItem "John"
.AddItem "Angie"
.AddItem "Sam"
End With
Case Is = 1
With ComboBox2
.AddItem "20000"
.AddItem "45000"
.AddItem "80000"
End With
End Select
End Sub
The only issue with what you have provided above is that the list is continuously added to, with new businesses and individuals linked to each business. I need the comboboxes to pick this up automatically; the number of entries is not fixed.
@benjaminsolanke Just add a ComboBox2.Clear line. Please be nice with the people that are trying to help you for free.
@AK47 You should not provide code if the one who asks doesn't show any effort to develop their own code.
This code will resolve your problem.
Private Sub UserForm_Initialize()
Dim RowMax As Integer
Dim wsh As Worksheet
Dim countExit As Integer
Dim CellCombo1 As String
Dim i As Integer
Dim j As Integer
Set wsh = ThisWorkbook.Sheets("Sheet2")
RowMax = wsh.Cells(Rows.Count, "A").End(xlUp).Row
'find last row of sheet in column A
ComboBox1.Clear
'clear all value of comboBox1
With ComboBox1
For i = 2 To RowMax
'Run each row of column A
countExit = 0
CellCombo1 = wsh.Cells(i, "A").Value
For j = i To 2 Step -1
'just show value not duplicate
If CellCombo1 = wsh.Cells(j, "A").Value Then
countExit = countExit + 1
End If
Next j
If countExit = 1 Then
'add only the first occurrence (the count includes the cell itself, so it is always at least 1)
.AddItem CellCombo1
End If
Next i
End With
End Sub
Private Sub ComboBox1_Change()
Dim RowMax As Integer
Dim wsh As Worksheet
Dim i As Integer
Set wsh = ThisWorkbook.Sheets("Sheet2")
RowMax = wsh.Cells(Rows.Count, "A").End(xlUp).Row
'find last row of sheet in column A
ComboBox2.Clear
'clear all value of comboBox2
With ComboBox2
For i = 2 To RowMax
If wsh.Cells(i, "A").Value = ComboBox1.Text Then
'Just show the column B values matching the ComboBox1 selection in column A
.AddItem wsh.Cells(i, "B").Value
End If
Next i
End With
End Sub
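Stripped of the VBA plumbing, both procedures implement one idea: group the rows by column A, then list the column B values for the selected key. A language-neutral sketch of that logic (the sample rows below are made up to mirror the question's screenshot):

```python
def build_groups(rows):
    """rows: (company, person) pairs in sheet order.
    Returns {company: [people, ...]} preserving first-seen order, so
    ComboBox1 is filled from the keys and ComboBox2 from groups[selected]."""
    groups = {}  # plain dicts preserve insertion order in Python 3.7+
    for company, person in rows:
        groups.setdefault(company, []).append(person)
    return groups
```

For example, `build_groups([("DT Limited", "John"), ("DT Limited", "Steve"), ("Acme", "Angie")])` yields the de-duplicated company list and the dependent person lists in one pass, which is exactly what the two loops over column A do above.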
| common-pile/stackexchange_filtered |
Converting markdown to HTML with JavaScript - restricting supported syntax
I am currently using marked.js to convert markdown to HTML, so the users of my web app can create structured content. I am wondering if there is a way to restrict the supported syntax to just a sub-set, like
headers
italic text
bold text
lists with only 1 depth of indentation
quotes
I would like to prohibit conversion of list with multiple levels of indentation, code blocks, headers in lists ...
The reason is that my web app should let the users create content in a specific way, and if there is a possibility to create some crazily structured content (lists of headers, code in headers, lists of images ...), someone will for sure do it.
It might be easier to parse it to HTML then use some DOM queries to see if there are unwanted elements or structures using selectors, e.g. doc.querySelector('li ul') will find a nested ul, 'li ol' a nested ol, etc.
Circa 10 years ago I had a similar issue, at the time it was easier to implement our own very simple parser to only support the 3-4 tags we needed rather than try to restrict a library package. There may be better ways now.
You have a few difference options:
Marked.js uses a multi-step method to parse Markdown. It uses a lexer, which breaks the document up into tokens, a parser to convert those tokens to an abstract syntax tree (AST) and a renderer to convert the AST to HTML. You can override any of those pieces to alter the handling of various parts of the syntax.
For example, if you simply wanted to ignore lists and leave them out of the rendered HTML, replace the list function from the renderer with one which returns an empty string.
Or, if you want the parser to act as if lists are not even a supported feature of Markdown, you could remove the list and listitem methods from the parser. In that case, the list would remain in the output, but would be treated as a paragraph instead.
Or, if you want to support one level of lists, but not nested lists, then you could replace the list and/or listitem methods in the parser with your own implementation that parses lists as you desire.
Note that there are also a number advanced options, which use the above methods to alter the parser and/or render in various ways. For the most part, those options would not provide the features you are asking for, but browsing though the source code might give you some ideas of how to implement your own modifications.
However, there is the sanitize option, which will accept a sanitizer function. You could provide your own sanitizer which removed any unwanted elements from the HTML output. This would result in a similar end result to overriding the renderer, but would be implemented differently. Depending on what you want to accomplish, one or the other may be more effective.
Another possibility would be to use Commonmark.js: parse the input and then walk the parsed tree, removing all nodes with (or without) a specific type. See this example; it worked fine for images, but failed for code blocks.
A downside of this approach is that the parsed markdown source will be traversed twice: once for editing and a second time for rendering.
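A related post-processing variant, in the spirit of the DOM-query comment above: render first, then flag disallowed structures in the HTML. The sketch below uses only Python's standard html.parser, so it illustrates the idea rather than being a production sanitizer (real sanitization should use a vetted library):

```python
from html.parser import HTMLParser

class DisallowedStructureChecker(HTMLParser):
    """Flags nested lists and <pre>/<code> blocks in rendered HTML."""
    def __init__(self):
        super().__init__()
        self.list_depth = 0
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag in ("ul", "ol"):
            self.list_depth += 1
            if self.list_depth > 1:          # list inside a list
                self.violations.append("nested list")
        elif tag in ("pre", "code"):
            self.violations.append("code block")

    def handle_endtag(self, tag):
        if tag in ("ul", "ol"):
            self.list_depth -= 1

def check_html(html):
    """Return a list of rule violations found in the rendered HTML."""
    checker = DisallowedStructureChecker()
    checker.feed(html)
    return checker.violations
```

If `check_html` returns a non-empty list, you can reject the submission or strip the offending markup before saving.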
| common-pile/stackexchange_filtered |
Non power of two textures and memory consumption optimization
I read somewhere that the XNA framework upscales a texture to the nearest power-of-two size and then sends that to VRAM which, if that is really how it works, might be inefficient when loading many small (in my case 150×150) textures, since they essentially waste memory on unused texture data resulting from the upscaling.
So is there some automatic optimization, or should I make my own implementation of it, like loading all textures, figuring out where the "upscaled" space is big enough to hold some other texture and place it there, remembering sprite positions, thus using one texture instead of two (or more)?
It isn't always handy to do this manually for each texture (placing many small sprites in a single texture), because it's hard to work with later (essentially it becomes less human-oriented), and not always a sprite will be needed in some level of a game, so it would be better if sprites were in a different composition, so it should be done automatically.
There are tools available to create what are known as "sprite sheets" or "texture atlases". This XNA sample does this for you as part of a content pipeline extension.
Note that the padding of textures only happens on devices that do not support non-power-of-two textures. Windows Phone, for example. Modern GPUs won't waste the RAM. However this is still a useful optimisation to allow you to merge batches of sprites (see this answer for details).
| common-pile/stackexchange_filtered |
how to make flash swf fluid size as3
I am working on a facebook game and I want my game to have fluid width. What is the best practice to achieve it?
Should I only use a special way of embedding, or do I need to make changes in the AS3 code as well?
I tried to google it but I was not successful.
Thank you very much for any help.
http://www.republicofcode.com/tutorials/flash/as3fluidresize/
How does i google?
It is not explained there how I can make it resizable when it is embedded in a webpage!
@Riddlah When embedding the swf, you'll also need to set the width and height to a certain percentage (normally 100%)
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/StageScaleMode.html
the swf gets planted in a page, the containing element needs to have fluid height and width. The swf will update based on the containing element.
In general, you need to embed your swf with width=100% and height=100%.
Your SWF file will assume the stageWidth/stageHeight to be the size of the container it's in. Then, to detect when the window is resized, use the following listener:
stage.addEventListener(Event.RESIZE, YOUR_HANDLER);
| common-pile/stackexchange_filtered |
Using Selenium to Scrape Infinite Scroll problem
I'm not able to scrape the title of each handbag, price, and color. The website is: https://www.coach.com/shop/women-handbags
I have already tried different scrapers, as well as placing the scraping information in different parts of the while loop.
The code provided is after the while loop scrolls the entire page and then goes back to the very top.
products = driver.find_elements_by_xpath('/html/body/div[1]/div[8]/div[4]/div/div/div/div[1]/div[1]/div')
for product in products:
    bag_dict = {}
    try:
        name = product.find_element_by_tag_name('a').text
        price = product.find_element_by_xpath('.//span[@class="price-sales"]').text
        bag_dict['name'] = name
        bag_dict['price'] = price
    except:
        # the bare except swallows every error (including the NameError from the
        # original `thing` typo, fixed to `product` above); narrow it in real code
        continue
    print(bag_dict)
I get an empty dictionary or an error message that says bag_dict is not found.
WE’RE SORRY
Our site is temporarily offline for maintenance.
Thank you for your patience and check back soon.
That xpath to find products is only going to find 1 element isn't it?
I found a request made by the site that loads the handbags in sets of 24. This code loops through all the sets and then stores the price and name of each handbag in a dataframe. Selenium is not necessary to accomplish this; I used requests and BeautifulSoup.
Code
import requests
from bs4 import BeautifulSoup
import pandas as pd
import re
handbags = pd.DataFrame()
for next_set in range(0, 481, 24):
    payload = f'start={next_set}&format=page-element'
    r = requests.get('https://www.coach.com/shop/women-handbags', params=payload)
    soup = BeautifulSoup(r.text, 'html.parser')
    names = [name.meta['content'] for name in soup.find_all(class_="product-name")]
    prices = [price.find('span', {'data-sales-price': re.compile(r'\d+\.\d+')})['data-sales-price'] for price in soup.find_all(class_="product-price")]
    temp_df = pd.DataFrame({'Names': names, 'Prices': prices})
    handbags = handbags.append(temp_df).reset_index(drop=True)
    print("Appended next set")
print(handbags)
Output
Names Prices
0 TROUPE TOTE IN COLORBLOCK 695.0
1 TROUPE TOTE IN COLORBLOCK WITH SNAKESKIN DETAIL 750.0
2 TROUPE TOTE 695.0
3 TROUPE TOTE WITH KAFFE FASSETT PRINT 795.0
4 TROUPE TOTE IN SIGNATURE CANVAS WITH PATCHWORK... 895.0
5 TROUPE TOTE IN SIGNATURE CANVAS WITH KAFFE FAS... 795.0
6 TROUPE TOTE IN SIGNATURE CANVAS 695.0
7 TROUPE CARRYALL WITH CROCODILE DETAIL 1100.0
8 TROUPE CARRYALL 595.0
9 TROUPE CARRYALL IN SIGNATURE CANVAS 595.0
10 TROUPE CARRYALL 35 IN COLORBLOCK WITH SNAKESKI... 850.0
11 TROUPE CARRYALL 35 IN SIGNATURE CANVAS WITH KA... 995.0
12 TROUPE SHOULDER BAG WITH KAFFE FASSETT PRINT 550.0
13 TROUPE CROSSBODY WITH KAFFE FASSETT PRINT 595.0
14 TROUPE CROSSBODY 495.0
15 TROUPE CROSSBODY IN SIGNATURE CANVAS 495.0
16 TABBY TOP HANDLE IN COLORBLOCK SNAKESKIN 695.0
17 TABBY TOP HANDLE IN COLORBLOCK 550.0
18 TABBY TOP HANDLE IN COLORBLOCK 550.0
19 TABBY TOP HANDLE 550.0
20 TABBY TOP HANDLE IN SIGNATURE CANVAS WITH KAFF... 650.0
21 TABBY SHOULDER BAG 26 IN SIGNATURE CANVAS WITH... 450.0
22 TABBY SHOULDER BAG 26 IN SNAKESKIN 650.0
23 TABBY SHOULDER BAG 26 IN COLORBLOCK WITH SNAKE... 450.0
24 TABBY SHOULDER BAG 26 IN COLORBLOCK 350.0
25 TABBY SHOULDER BAG 26 IN COLORBLOCK WITH SNAKE... 450.0
26 TABBY SHOULDER BAG 26 350.0
27 TABBY SHOULDER BAG 26 350.0
28 TABBY SHOULDER BAG WITH KAFFE FASSETT PRINT 550.0
29 TABBY SHOULDER BAG IN SNAKESKIN 595.0
.. ... ...
439 DINKY CHAIN STRAP 35.0
440 NOVELTY STRAP 95.0
441 NOVELTY STRAP 50.0
442 STRAP IN SIGNATURE CANVAS 95.0
443 STRAP IN SNAKESKIN 150.0
444 STRAP WITH CHAIN 150.0
445 STRAP WITH WAVE PATCHWORK AND SNAKESKIN DETAIL 150.0
446 CASSIE CROSSBODY 350.0
447 NOVELTY STRAP WITH TEA ROSE AND TOOLING 150.0
448 CENTRAL TOTE WITH ZIP 295.0
449 DREAMER WRISTLET 175.0
450 DREAMER WRISTLET IN COLORBLOCK 175.0
451 DREAMER WRISTLET IN SIGNATURE CANVAS 175.0
452 DREAMER WRISTLET WITH SNAKESKIN DETAIL 225.0
453 RIVINGTON CONVERTIBLE POUCH 250.0
454 RIVINGTON CONVERTIBLE POUCH IN SIGNATURE CANVAS 250.0
455 ROGUE POUCH 325.0
456 ROGUE POUCH 325.0
457 CHARLIE POUCH 175.0
458 CHARLIE POUCH IN COLORBLOCK SIGNATURE CANVAS 175.0
459 CHARLIE POUCH WITH MEADOW PRAIRIE PRINT 195.0
460 CHARLIE POUCH WITH SCATTERED RIVETS 195.0
461 CHARLIE POUCH WITH SIGNATURE CANVAS BLOCKING 175.0
462 LARGE CHARLIE POUCH 225.0
463 LARGE CHARLIE POUCH WITH PATCHWORK STRIPES 275.0
464 LARGE CHARLIE POUCH WITH SCATTERED RIVETS 275.0
465 LARGE WRISTLET 30 IN SIGNATURE CANVAS WITH STA... 195.0
466 LARGE WRISTLET 30 WITH REXY AND CARRIAGE 195.0
467 KISSLOCK CLUTCH 225.0
468 KISSLOCK CLUTCH IN COLORBLOCK 225.0
[469 rows x 2 columns]
| common-pile/stackexchange_filtered |
vga to hdmi to mini-hdmi on a t430?
I have a Lenovo Thinkpad T430 laptop with VGA output. I want to connect the VGA to a 16:9 external monitor that accepts mini-HDMI, and then use that monitor in portrait mode. The monitor is 4k, but I'd be happy to get 1440p or even 1080p.
My plan is to use a VGA to HDMI adapter, and an HDMI to mini-HDMI adapter.
According to another post (What is the maximum display resolution on the Intel HD Graphics 4000 chipset on a Lenovo t430?), the max output for the T430 VGA port is 2048x1536@75hz. Would the fact that this is 4:3 and the monitor is 16:9 cause any issues?
Is there anything else that could cause issues?
It might be relevant that I'm using the mini-DP to power an internal 1440P screen.
The model number is 2349TMH. It has an integrated gpu (HD 4000) and an i5-3320m cpu. However, I do plan to upgrade the cpu to a quad core i7-36xx.
There are six different T430 models with varying CPU choices, and some have NVIDA Optimus graphics as well. How about the model number copied from the serial number sticker beneath, or the serial number if the model number is illegible? That would enable knowing more about the T430 video capability. Please click [edit] and put that into your question; please do not use Add Comment.
So your mini-DP port is already being used? I would have suggested that you use that output for this purpose. VGA to HDMI is the tricky part (you need an "active" adapter), HDMI to Mini HDMI is no problem at all. See K7AAY's comment above though, the video card will determine what orientation and resolution functionality these adapters will result in.
| common-pile/stackexchange_filtered |
Converting a datetime string to timestamp in Javascript
Question in brief:
What is the easiest way to convert date-month-year hour(24):minute to timestamp?
Due to the number of views, I've added the clear question at the top, so there's no need to go through the background if you need quick help.
Background :
I have a simple html table and I used jquery sorter to sort my table columns.
Everything is working fine except a date column which is having following format of data,
17-09-2013 10:08
date-month-year hour(24):minute
This column is sorting(alphabetically) but not as I expected(date wise). I tried to use a custom parser as follows,
$.tablesorter.addParser({
id: 'date_column', // my column ID
is: function(s) {
return false;
},
format: function(s) {
var timeInMillis = new Date.parse(s);
return timeInMillis;
},
type: 'numeric'
});
Problem :
it fails due to new Date.parse(s) .
Question :
what is the easiest way to convert date-month-year hour(24):minute to timestamp? then I can skip var timeInMillis = new Date.parse(s); line.
Thanks
Edited :
Sorry about the confusion about milliseconds, actually it should be the timestamp which is a number that represents the current time and date.
What's wrong with Date.parse(), how does it not work?
You can use moment.js: http://momentjs.com/ to convert time to milliseconds
What exactly do you mean convert to milliseconds? You can't just convert a date to milliseconds. A date is a reference to a specific point in time, milliseconds are a measurement of time from a specific point, if you see my meaning. You can get the number of milliseconds from a specific date, like the number of millis since 17-9-2013, but since you have an entire column of dates, I'm guessing this isn't what you want. Or you can add millis to the current time to get a more exact point, is this what you're looking for?
@Pekka error : TypeError: Date.parse is not a constructor
Look at the console when coding. Look at the documentation for Date.parse(). Is your date string in a valid format? The docs will tell you.
@BeanBagKing I need something like timestamp
@JanithChinthana http://stackoverflow.com/questions/8123878/data-parse-is-not-a-constructor
Just a suggestion: Unless your format is a requirement, I might suggest formatting as [yyyy-mm-dd hh:mm:ss] as it's easily sortable without conversion.
@epascarello my format is not in that list
@epascarello The problem is that he is using the new keyword.
@Sumurai8 even I remove that, it is not working
@Sumurai8 hence my first part of my comment to look at the console and that is half the problem. The other problem is the a valid string does not represent a RFC2822 or ISO 8601 date.
Parsing dates is a pain in JavaScript as there's no extensive native support. However you could do something like the following by relying on the Date(year, month, day [, hour, minute, second, millisecond]) constructor signature of the Date object.
var dateString = '17-09-2013 10:08',
dateTimeParts = dateString.split(' '),
timeParts = dateTimeParts[1].split(':'),
dateParts = dateTimeParts[0].split('-'),
date;
date = new Date(dateParts[2], parseInt(dateParts[1], 10) - 1, dateParts[0], timeParts[0], timeParts[1]);
console.log(date.getTime()); //1379426880000
console.log(date); //Tue Sep 17 2013 10:08:00 GMT-0400
You could also use a regular expression with capturing groups to parse the date string in one line.
var dateParts = '17-09-2013 10:08'.match(/(\d+)-(\d+)-(\d+) (\d+):(\d+)/);
console.log(dateParts); // ["17-09-2013 10:08", "17", "09", "2013", "10", "08"]
what can I do with dateParts ?
@JanithChinthana Well, the regular expression example just shows an alternative way of parsing the date string instead of using multiple split calls.
Link to MDN (not W3)! Nice. +1
@plalx, it is not working when var dateString = '17-09-2013 10:08:30'; basically it fails when there is a seconds component.
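For readers cross-checking from another language: the dd-mm-yyyy 24-hour format above maps to a one-line format-string parse elsewhere, e.g. in Python (shown only to illustrate the field order that the split/regex JavaScript reconstructs by hand):

```python
from datetime import datetime

def parse_dmy(s):
    """Parse 'dd-mm-YYYY HH:MM' (24-hour clock) into a naive datetime."""
    return datetime.strptime(s, "%d-%m-%Y %H:%M")
```

`parse_dmy('17-09-2013 10:08')` yields year 2013, month 9, day 17, the same ordering the JavaScript passes to the Date constructor.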
Date.parse() isn't a constructor, its a static method.
So, just use
var timeInMillis = Date.parse(s);
instead of
var timeInMillis = new Date.parse(s);
Date.parse() is the best solution. It works in every situation.
Not true. I have an example with NaN
So what is your example ? @GrávujMiklósHenrich
For those of us using non-ISO standard date formats, like civilian vernacular 01/01/2001 (mm/dd/YYYY), including time in a 12hour date format with am/pm marks, the following function will return a valid Date object:
function convertDate(date) {
// # valid js Date and time object format (YYYY-MM-DDTHH:MM:SS)
var dateTimeParts = date.split(' ');
// # this assumes time format has NO SPACE between time and am/pm marks.
if (dateTimeParts[1].indexOf(' ') == -1 && dateTimeParts[2] === undefined) {
var theTime = dateTimeParts[1];
// # strip out the digits and colon, leaving only the am/pm marker
var ampm = theTime.replace(/[0-9:]/g, '');
// # strip out the letters, leaving only the numeric time
var time = theTime.replace(/[a-zA-Z]/g, '');
if (ampm == 'pm') {
time = time.split(':');
// # if time is 12:00, don't add 12
if (time[0] == 12) {
time = parseInt(time[0]) + ':' + time[1] + ':00';
} else {
time = parseInt(time[0]) + 12 + ':' + time[1] + ':00';
}
} else { // if AM
time = time.split(':');
// # 12am is midnight, i.e. hour zero
if (time[0] == 12) {
time = '00:' + time[1] + ':00';
// # if AM is less than 10 o'clock, add leading zero
} else if (time[0] < 10) {
time = '0' + time[0] + ':' + time[1] + ':00';
} else {
time = time[0] + ':' + time[1] + ':00';
}
}
}
// # create a new date object from only the date part
var dateObj = new Date(dateTimeParts[0]);
// # add leading zero to date of the month if less than 10
var dayOfMonth = (dateObj.getDate() < 10 ? ("0" + dateObj.getDate()) : dateObj.getDate());
// # parse each date object part and put all parts together (zero-pad the month as well, so the string is valid ISO)
var month = (dateObj.getMonth() + 1 < 10 ? "0" + (dateObj.getMonth() + 1) : "" + (dateObj.getMonth() + 1));
var yearMoDay = dateObj.getFullYear() + '-' + month + '-' + dayOfMonth;
// # finally combine re-formatted date and re-formatted time!
var date = new Date(yearMoDay + 'T' + time);
return date;
}
Usage:
date = convertDate('11/15/2016 2:00pm');
It is just as simple:
var today = new Date(); // Thu Apr 28 2022 21:51:23 GMT+0530
var todaysTimestamp = new Date().getTime(); // milliseconds since the Unix epoch
| common-pile/stackexchange_filtered |
Add New Query Strings to Existing URL in C# and Redirect to it
Can anybody help me with the C# code to add two query strings to an existing URL? The URL typically looks like the one below. When the URL is created in C#, I want to redirect to it as well. Any help much appreciated.
http://localhost/somesite/index.php?Filename=somefile.txt&Filepath=E:\myfolder
to build up url would be
option A: (string concat)
string param1 = "Filename=somefile.txt";
string param2 = @"Filepath=E:\myfolder";
Uri YourURL = new Uri($"http://localhost/somesite/index.php?{param1}&{param2}");
Console.WriteLine(YourURL);
Option B (UriBuilder):
UriBuilder uriBuilder = new UriBuilder();
uriBuilder.Scheme = "http";
uriBuilder.Host = "localhost";
uriBuilder.Path = "somesite/index.php";
//uriBuilder.Port = 80;
var q = new Dictionary<string,string>();
q.Add("Filename","somefile.txt");
q.Add("Filepath","E:\\myfolder");
uriBuilder.Query = string.Join("&", q.Select(s => $"{s.Key}={s.Value}").ToList());
uriBuilder.ToString().Dump(); // Dump() is LINQPad-specific; use Console.WriteLine elsewhere
To redirect: I'm not sure what you are using for the UI, as the question is a bit vague, but try this:
Response.Redirect(YourURL);
Otherwise, please rewrite your question with a better explanation.
I was going to explore Option B. How do you retrieve the URL into a variable from the Option B code, and format it so it will work with Response.Redirect()? I simply want to open the new URL, with query strings, in the browser's address bar.
uriBuilder.ToString() will contain your URL, so in the controller the return would be: return Response.Redirect(uriBuilder.ToString());
Yes, I modified it as such: uriBuilder.ToString(); Uri uri = uriBuilder.Uri; and to launch the new URL, Process.Start(uri.ToString());. Thanks very much for your help, Power Mouse.
@user1234 If the answer helped you out, you can accept it for others to see.
Yes, the answer was correct. I don't have enough points to upvote.
Another option:
Call ToString() on the existing Uri with a query string and pass it to the QueryHelpers.AddQueryString method:
completed_uri = new Uri(
QueryHelpers.AddQueryString(completed_uri.ToString(), new Dictionary<string, string> {
{ "pageToken", pageToken}
}));
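The same merge-parameters-into-an-existing-URL idea exists in most standard libraries. As an illustrative aside (Python rather than C#, and the helper name is my own), the equivalent of AddQueryString can be sketched as:

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

def add_query(url, params):
    # Parse the URL, merge the new params into its existing query, and rebuild it.
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update(params)
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_query("http://localhost/somesite/index.php?a=1",
                {"Filename": "somefile.txt"}))
# http://localhost/somesite/index.php?a=1&Filename=somefile.txt
```
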
I want to display a ListView when I click a button in Monodroid. I tried this code, but it's not running. Can anyone correct this?
I want to display a listview when I click a button in Monodroid. I tried the following code, however it doesn't run. Can anyone correct this?
protected override void OnCreate (Bundle bundle)
{
base.OnCreate (bundle);
SetContentView (Resource.Layout.Main);
Button button1 = FindViewById<Button> (Resource.Id.btn);
button1.Click += delegate { listviewFunction(); };
}
public void listviewFunction()
{
ListAdapter = new ArrayAdapter<string>(this, Resource.Layout.list_item, _countries);
ListView.TextFilterEnabled = true;
ListView.ItemClick += (sender, args) => Toast.MakeText(Application, ((TextView) args.View).Text, ToastLength.Short).Show();
}
Try just:
button1.Click += ...
Oh sorry, that's my mistake in the code, but I still couldn't get the expected output.
Declare a ListView globally:
private ListView _listView;
Now either (1) create the ListView, or (2) get it from an axml file:
(1)
_listView = new ListView(this);
(2)
_listView = (ListView)View.FindViewById(Resource.Id.MyList);
Now create your adapter, then:
_listView.SetAdapter(myAdapter);
Then create your ItemClick handler:
_listView.ItemClick += (sender, args) => Toast.MakeText(Application, ((TextView) args.View).Text, ToastLength.Short).Show();
@Valid Not working @PostMapping spring boot
I am learning Spring Boot. I am adding validation for @PostMapping, but somehow it always creates the object, even with invalid values.
Hospital.java
public class Hospital {
private Integer id;
@Size(min = 2)
private String name;
@Size(min = 2)
private String city;
public Hospital(Integer id, String name, String city) {
this.id = id;
this.name = name;
this.city = city;
}
Controller
@Autowired
HospitalData data;
...
@PostMapping("/hospital")
public ResponseEntity<Hospital> addHospital(@Valid @RequestBody Hospital hospital){
Hospital newHospital = data.addHospital(hospital);
URI location = ServletUriComponentsBuilder
.fromCurrentRequest()
.path("/{id}")
.buildAndExpand(newHospital.getId()).toUri();
return ResponseEntity.created(location).build();
}
pom.xml
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
</dependency>
Previously I had added the dependency below, as I am using Spring Boot 2.3.10.RELEASE, but it didn't work, so I added the dependency above.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
With spring-boot-starter-validation, as described in this post: https://www.baeldung.com/spring-boot-bean-validation, I see no reason for this not to work.
Give us some concrete examples of which invalid values have passed without failing.
@Size(min = 2) does not mean it cannot be null. In that case you also want another annotation, @NotNull.
@Boug: I have added @NotNull and @Size(min=2), and when I pass { "name": "b", "city": "A" } it still gets 201 Created.
I created a test application reproducing the state of your code. As stated in the comments, the code you provided should definitely work. You definitely don't need to provide a BindingResult to the method. Spring Boot throws a MethodArgumentNotValidException and therefore returns a bad request HTTP status if the validation fails.
I created a project with following content:
pom.xml
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.4.5</version>
</parent>
<properties>
<java.version>11</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
</dependencies>
DemoEntity:
public class DemoEntity {
@NotNull
public String name;
@Size(min = 3)
public String greeting;
}
DemoController:
@Controller
public class DemoController {
@PostMapping("/post")
public ResponseEntity<DemoEntity> put(@Valid @RequestBody DemoEntity demoEntity) {
return ResponseEntity.ok(demoEntity);
}
}
Now, that's what happens with my requests:
Name     | Greeting | Result
---------|----------|-------
Peter    | Mr       | 400
Stephen  | Monsieur | 200
Clara    | null     | 200
Jenny    | Madamme  | 200
As you can see from the table above, when greeting is null the result is an OK status. If you want to guarantee that the string is at least min characters long and not null, you need to declare this explicitly.
That's useful, for example, if you want to validate optional fields like a mobile number: it should be n characters long and contain only digits, but you don't want to make it mandatory, like I showed with greeting above.
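The null-skipping behaviour shown in the table is standard Bean Validation semantics: @Size (like most constraints) treats null as valid, which is why @NotNull exists as a separate annotation. A language-neutral sketch of that rule (Python, purely for illustration):

```python
def size_min_ok(value, min_len=3):
    # Mirrors @Size(min=3): a null value is considered valid;
    # only non-null values have their length checked.
    return True if value is None else len(value) >= min_len

print(size_min_ok(None))        # True  -> like Clara's null greeting (200)
print(size_min_ok("Mr"))        # False -> like Peter's "Mr" (400)
print(size_min_ok("Monsieur"))  # True  -> 200
```
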
Thanks, I had some weird experience, I removed the json body that I passed in postman and I typed again the same json body and now I am getting proper validation response and proper HTTPStatus.
You need to inject the BindingResult object as the param immediately following your form object and then manually check if there were any errors.
See https://spring.io/guides/gs/validating-form-input/ Create a Web Controller for a good example of how this should be done in your controller.
Unfortunately, that's wrong. As I demonstrated in my answer, the code OP provided works. I assume the requests contained null values.
Probably, however a post controller receiving a form post still needs to use the BindingResult object anyway so it can intelligently forward to the correct page to display errors and ask for corrected input.
It depends on several aspects. If OP is just providing an API for data which should be validated, he doesn't need to provide user information. Yes, it would be good (and he can implement this in an exception handler, like I have linked), but it's not necessary, especially if he's using that entity in multiple requests.
.gitignored some files only recently; trying to merge into a previous branch, ignored files show up as conflicts
I had a branch devel from which I branched out A topic branch in the past.
devel was always intended to be the parent of A (everything devel had/hadn't should reflect in A). After a long time, I've added some files to .gitignore and updated the index of devel to reflect it.
Now I'm trying to merge devel back into A again to reflect those changes (the newly .gitignored files), but it gives me a merge conflict in those ignored files. I don't want those ignored files in A. How do I tell that to git?
screenshot if it helps...
Before merging devel into A, I would rather make sure all files ignored in devel are ignored in A as well.
The trick for that is to remove everything from the index of A, update the .gitignore content, and add everything back:
git checkout A
# update the .gitignore file from devel in A
git checkout devel -- .gitignore
# remove/add everything
git rm --cached -r .
git add .
git commit -m "new commit with devel gitignore files out"
# then
git merge devel
Thanks! But can I remove/add everything with a GLOBAL gitignore (so that it removes/adds everything from ALL branches)?
Because otherwise it's basically just doing the same thing I did with devel (remove/add everything according to the new .gitignore). Merging after that would be kind of pointless (except for preserving semantic relationships), and I'll have to do this with every branch A, B, C…
@laggingreflex I would presume merging that would not be pointless: you would merge devel into all the branches, which can have non-ignored files with evolutions of their own, hence the merge. And you should be able to script my proposed sequence of commands easily enough for each branch (see http://stackoverflow.com/a/3847586/6309).
when and how to use return & print while inter-calling the functions with if __name__ == '__main__': method
I have a question about when and how to use return and print when calling functions from the if __name__ == '__main__': block.
For example, I have quoted two example codes below. In the first one I used the return keyword in the first function, CheckUid(user), and print in the second function, CallUid(). When I run the program it works and gives output, but when userid.txt contains a user ID that doesn't exist in the LDAP database, it prints None. On the other hand, when I use the print statement inside CheckUid() itself and just call it from the second function, CallUid(), it doesn't print None.
Please suggest how and where to use the return keyword when using the if __name__ == '__main__': method.
$ cat function1.py
#!/usr/bin/python3
import subprocess
from subprocess import call
def CheckUid(user):
proc = subprocess.Popen("ldapsearch -h server1 -D 'cn=directory manager' -w pass123 -LLLb 'ou=people,o=rraka.com' 'uid=%s' managerlogin" % (user), shell=True, stdout=subprocess.PIPE)
info_str = proc.stdout.read().decode('utf8')
split_str = info_str.split()
if len(split_str) > 1:
return {'UserID': split_str[1].split(',')[0].split('=')[1], 'MangerID': split_str[-1]}
else:
split_str = 'null'
def CallUid():
with open('userid.txt', mode='rt', encoding='utf-8') as f:
for line in f.readlines():
print(CheckUid(line))
#return CheckUid(line)
if __name__ == '__main__':
CallUid()
output as below:
$ ./function1.py
None
None
{'UserID': 'aashishp', 'MangerID': 'rpudota'}
{'UserID': 'abaillie', 'MangerID': 'davem'}
{'UserID': 'abishek', 'MangerID': 'kalyang'}
While Other way around:
$ cat function2.py
#!/usr/bin/python3
import subprocess
from subprocess import call
def CheckUid(user):
proc = subprocess.Popen("ldapsearch -h server1 -D 'cn=directory manager' -w pass123 -LLLb 'ou=people,o=rraka.com' 'uid=%s' managerlogin" % (user), shell=True, stdout=subprocess.PIPE)
info_str = proc.stdout.read().decode('utf8')
split_str = info_str.split()
if len(split_str) > 1:
print({'UserID': split_str[1].split(',')[0].split('=')[1], 'MangerID': split_str[-1]})
else:
split_str = 'null'
def CallUid():
with open('hh', mode='rt', encoding='utf-8') as f:
for line in f.readlines():
CheckUid(line)
if __name__ == '__main__':
CallUid()
result output:
$ ./function2.py
{'UserID': 'aashishp', 'MangerID': 'rpudota'}
{'UserID': 'abaillie', 'MangerID': 'davem'}
{'UserID': 'abishek', 'MangerID': 'kalyang'}
Note: Please quote examples or point edition in code if you feel as
i'm just a newbie learner.
You should be intentional about what you return from a function. Instead of allowing the first function to fall off the end and return None by default, it would be better to explicitly return {} so that the return value is consistent. You could still capture the return value with retval = CheckUid() and test it: if retval:.
@MarkRansom, I appreciate your expert advice; it would be great if you could quote an example when you have time.
I'm not sure I have a concrete example, it's just a guiding principle that will make your code more consistent and easier to reason about. Many languages will enforce a consistent return strategy, such as C++ or Java.
The line split_str = 'null' doesn't do anything useful in either version of your function, since it returns immediately afterwards without doing anything with split_str. You need to decide what you want your code to do in that situation, and then write code to do it. Your current code returns None by default if it takes the else branch, but you should be explicit if that's what you want.
@Blckknght, that's true. I just want to ignore cases where the user ID is in the userid.txt file but doesn't actually exist in the LDAP database. But as you said, we can do it better.
The purpose of if __name__ == '__main__': is to allow you to either run your program from the command line and have it do something, or use it as a module without having it do anything.
With your example function2.py, if you wanted to use CheckUid from another file, it wouldn't be very useful: you could do
from function2 import CheckUid
userInfo = CheckUid('karn')
but you wouldn't get anything: userInfo would always be None, whether the user karn was found or not, and the function may have printed output that you might not want. But if you were to use CheckUid from function1.py instead, userInfo would be None only if the user wasn't found, and a dictionary if it was. So you almost always want to use the style from function1.py and return a value rather than printing it.
Since your function sometimes returns a dictionary and sometimes returns None, wherever you use it - in an if __name__ == '__main__' section or in another script - you'll need to check what you get back from it. For example, to skip printing None when a user is not found, you could modify your function1.py as follows:
if __name__ == '__main__':
with open('hh', mode='rt', encoding='utf-8') as f:
for line in f:
result = CheckUid(line.strip())
if result:
print(result)
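Following the earlier comment about explicitly returning {} instead of falling off the end of the function, here is a simplified, LDAP-free sketch of that consistent-return style (the function body is hypothetical and just mimics the shape of the real data):

```python
def check_uid(split_str):
    """Simplified stand-in for CheckUid: always returns a dict, {} when not found."""
    if len(split_str) > 1:
        return {"UserID": split_str[0], "MangerID": split_str[-1]}
    return {}  # explicit and consistent: callers always get a dict back

for record in (["aashishp", "rpudota"], []):
    result = check_uid(record)
    if result:  # an empty dict is falsy, so misses are skipped silently
        print(result)  # {'UserID': 'aashishp', 'MangerID': 'rpudota'}
```
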
Nathan, you raised a valid point about importing the function into another file; that's very important. Thanks for your detailed answer, much appreciated. Though I have used a slightly different approach with the code, getting the results tabulated here: https://stackoverflow.com/a/48423452/4795853
Database table Creation MySQL
I need to create an online car rental system using MySQL and PHP that allows customers to reserve a car by entering search criteria such as pickup date, return date, car category, etc.
I have a car table that includes the following columns: car_id, car_name, car_make, car_model.
I have a reservation table that includes the following columns: reservation_id, reservation_date, customer_id, total_price.
I have a reservedCar table that includes the following columns: reservation_id, car_id, pickup_date, return_date.
Question: I am required to add a column called quantity, as there are multiple cars with the exact same features. For example, if there are 5 Toyota cars and the customer wishes to reserve 2 of them, two similar records should be inserted in the reservedCar table with two different car_id values. How should this be done?
Are you asking how to add the column or how to reserve the car? Some code would be also helpful for a base understanding of how you did things.
You should add a flag column to the car table; it can be BOOLEAN or SMALLINT and will hold a true/false value. Whenever a user reserves a car, that particular row's flag is updated to true, and on the return date the flag is set back to false. Finally, using a SELECT COUNT query you can display the number of reserved cars.
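A minimal sketch of that flag approach, using SQLite from Python purely for illustration (the schema is simplified from the question's tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE car (
        car_id    INTEGER PRIMARY KEY,
        car_model TEXT,
        reserved  INTEGER DEFAULT 0   -- the boolean "flag" column
    );
    INSERT INTO car (car_model) VALUES
        ('Toyota'), ('Toyota'), ('Toyota'), ('Toyota'), ('Toyota');
""")

# The customer wants 2 of the 5 identical Toyotas: pick any 2 free car_ids...
free_ids = [row[0] for row in conn.execute(
    "SELECT car_id FROM car WHERE car_model = ? AND reserved = 0 LIMIT 2",
    ("Toyota",))]

# ...and flag them as reserved (flip back to 0 on the return date).
conn.executemany("UPDATE car SET reserved = 1 WHERE car_id = ?",
                 [(car_id,) for car_id in free_ids])

reserved_count = conn.execute(
    "SELECT COUNT(*) FROM car WHERE reserved = 1").fetchone()[0]
print(reserved_count)  # 2
```
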
If you have database design questions about how to model this, you might want to look into Fully Communication Oriented Information Modeling (FCO-IM), a method for building conceptual information models. Such models can then be automatically transformed into entity-relationship models (ERM) with software: https://en.m.wikipedia.org/wiki/FCO-IM
SQLCLR stored procedures with input parameter
I am quite new to SQLCLR stored procedures. In my example I have two stored procedures, one without a parameter and one with an input parameter. Both target the same tables.
The one without the parameter is working fine and returns all rows in the result. But the one with an input parameter targeting the same tables is not returning any rows, even though I am not receiving any errors. The input parameter in the .NET code is declared as SqlString and in the database as NVARCHAR(50).
This is how my C# code looks:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
public partial class StoredProcedures
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void AirlineSqlStoredProcedure (SqlString strAirline)
{
SqlConnection conn = new SqlConnection();
conn.ConnectionString = "Context Connection=true";
SqlCommand cmd = new SqlCommand();
cmd.Connection = conn;
conn.Open();
cmd.CommandText = "SELECT dbo.tblAirline.AirlineName, dbo.tblAircraft.AircraftUnits, dbo.tblAircraft.Manufacturer, dbo.tblAircraft.AircraftModel FROM dbo.tblAircraft INNER JOIN dbo.tblAirline ON dbo.tblAircraft.AirlineID = dbo.tblAirline.AirlineID WHERE AirlineName = '@strAirline' ORDER BY dbo.tblAircraft.AircraftUnits DESC";
SqlParameter paramAge = new SqlParameter();
paramAge.Value = strAirline;
paramAge.Direction = ParameterDirection.Input;
paramAge.SqlDbType = SqlDbType.NVarChar;
paramAge.ParameterName = "@strAirline";
cmd.Parameters.Add(paramAge);
SqlDataReader sqldr = cmd.ExecuteReader();
SqlContext.Pipe.Send(sqldr);
sqldr.Close();
conn.Close();
}
[Microsoft.SqlServer.Server.SqlProcedure]
public static void AirlineAircraftStoredProcedure()
{
//It returns rows from Roles table
SqlConnection conn = new SqlConnection();
conn.ConnectionString = "Context Connection=true";
SqlCommand cmd = new SqlCommand();
cmd.Connection = conn;
cmd.CommandText = "SELECT dbo.tblAirline.AirlineName, dbo.tblAircraft.AircraftUnits, dbo.tblAircraft.Manufacturer, dbo.tblAircraft.AircraftModel FROM dbo.tblAircraft INNER JOIN dbo.tblAirline ON dbo.tblAircraft.AirlineID = dbo.tblAirline.AirlineID ORDER BY dbo.tblAircraft.AircraftUnits DESC";
conn.Open();
SqlDataReader sqldr = cmd.ExecuteReader();
SqlContext.Pipe.Send(sqldr);
sqldr.Close();
conn.Close();
}
}
And when I execute the stored procedure I get empty rows:
USE [TravelSight]
GO
DECLARE @return_value Int
EXEC @return_value = [dbo].[AirlineSqlStoredProcedure]
@strAirline = N'American Airlines'
SELECT @return_value as 'Return Value'
GO
(0 row(s) affected)
(1 row(s) affected)
Also, for the input parameter I put an N before the string.
When running the stored procedure AirlineAircraftStoredProcedure targeting the same tables, I am getting all the rows back:
USE [TravelSight]
GO
DECLARE @return_value Int
EXEC @return_value = [dbo].[AirlineAircraftStoredProcedure]
SELECT @return_value as 'Return Value'
GO
(8 row(s) affected)
(1 row(s) affected)
What have I done wrong here?
Maybe these are just examples, but in case they're not, why are these CLR stored procedures? It doesn't look like you're doing anything that the CLR is particularly better at than native T-SQL procs. So you're incurring the overhead for none of the benefit.
You are right, this was just an exercise to learn the proper syntax.
Two (maybe 3) problems:
paramAge.Value = strAirline; should be:
paramAge.Value = strAirline.Value;
Notice the use of the .Value property.
WHERE AirlineName = '@strAirline' (within cmd.CommandText = "... ) should be:
WHERE AirlineName = @strAirline
Notice that the single-quotes were removed in the query text. You only use single-quotes for literals and not parameters / variables.
Replace the following 5 lines:
SqlParameter paramAge = new SqlParameter();
paramAge.Value = strAirline;
paramAge.Direction = ParameterDirection.Input;
paramAge.SqlDbType = SqlDbType.NVarChar;
paramAge.ParameterName = "@strAirline";
with:
SqlParameter paramAge = new SqlParameter("@strAirline", SqlDbType.NVarChar, 50);
paramAge.Direction = ParameterDirection.Input; // optional as it is the default
paramAge.Value = strAirline.Value;
Please note that the "size" parameter was set in the call to new SqlParameter(). It is important to always specify max string lengths.
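The quoting mistake in particular is easy to reproduce in any parameterized database API. A quick illustration with Python's sqlite3 (not SQLCLR, but the same principle applies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE airline (name TEXT)")
conn.execute("INSERT INTO airline VALUES ('American Airlines')")

# Correct: a bare placeholder with the value bound as a parameter -> 1 row.
rows = conn.execute("SELECT * FROM airline WHERE name = ?",
                    ("American Airlines",)).fetchall()
print(len(rows))  # 1

# Wrong: quoting the placeholder turns it into the literal string '?' -> 0 rows,
# exactly like WHERE AirlineName = '@strAirline' in the question.
rows = conn.execute("SELECT * FROM airline WHERE name = '?'").fetchall()
print(len(rows))  # 0
```
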
With the technical problem out of the way, there are two larger issues to address:
Why is this being done in SQLCLR in the first place? Nothing specific to .NET is being done. Based solely on the code posted in the Question, this would be much better off as a regular T-SQL Stored Procedure.
If it must remain in SQLCLR, then you really need to wrap the disposable objects in using() constructs, namely: SqlConnection, SqlCommand, and SqlDataReader. For example:
using (SqlConnection conn = new SqlConnection("Context Connection=true"))
{
using (SqlCommand cmd = conn.CreateCommand())
{
...
}
}
and then you do not need the following two lines:
sqldr.Close();
conn.Close();
as they will be called implicitly by the call to each of their Dispose() methods.
Thanks man, that helped out :) True, this is not a proper use of a CLR stored procedure; I'm just trying this out as an exercise. I will proceed later on and add some business logic to it.
Thanks once more :)
setting Cygwin's $HOME to Windows profile directory
Are there any drawbacks to having Cygwin and Windows share the same $HOME directory, in this case the Windows profile directory?
Merging them will work fine.
Cygwin proper doesn't store anything in your HOME directory. On first running Cygwin with a fresh home directory, default versions of .bash_profile and such get put there, but again, there is no conflict with things that already get put there.
I, too, find it frequently convenient to be able to use Cygwin on things that live under your Windows profile directory. However, I don't want the two to be the same[*], so I just make a symlink to it in my home directory. I'm never farther from my Windows profile directory than a cd ~/WinHome.
[*] So many programs feel privileged to throw random junk in the Windows profile directory that it would annoy me to see it every time I say ls in my home directory. I prefer to keep that mess at arm's length. I feel my home directory should be mine. I'm happy to let ~/WinHome be a midden.
Thanks for your answer! I agree with your comment on the Windows profile directory, however I'd say it's not really that different from *nixes. My main objective is to get rid of the hassle of managing two home directories, and to simplify management of config files for software that I use both in Cygwin and native Windows, eg. gVim.
It is simple: go to your .bashrc file and set an alias to your desired folder:
alias someName="cd path/to/your/home/directory"
Save it, and wherever you are in Cygwin, just type someName and you will be there in a blink. I have used this trick to make links to my folders, and it makes navigation super fast.
Set values to ranges in a different spreadsheet
Can anybody offer suggestions about how to update or set values in a range in a different spreadsheet? I know the importRange function will bring values into the active sheet, but I want to "push" a single row of data from the active sheet to a row at the bottom of a sheet called FinalData, which is in a different spreadsheet.
The data I want to "push" is populated by other code, resulting in a single row of data in a sheet called TempData. The data exists in range TempData!A2:U2.
My goal is to append data from TempData!A2:U2 to a new row at the bottom of a table called "DataFinal", which is in a completely separate google spreadsheet (but on the same "google drive".)
Here's what I tried so far:
// Row to FinalData
var ss = SpreadsheetApp.getActiveSpreadsheet();
var startSheet = ss.getSheetByName("TempData");
var sourceRange = ss.getRange("TempData!A2:U");
var target = SpreadsheetApp.openById("1bWKS_Z1JwLSCO5WSq1iNP1LLQpVXnspA4WkzdyxYDNY");
var targetSheet = target.getSheetByName("DataFinal");
var lastRow = targetSheet.getLastRow();
targetSheet.insertRowAfter(lastRow);
sourceRange.copyTo(targetSheet.getRange(lastRow + 1,1), {contentsOnly: true});
When I run it I get an error that says "Target range and source range must be on the same spreadsheet.". There must be a way to do this-- any suggestions would be welcome.
copyTo() can only be used within the same spreadsheet.
From your script, I think that you can achieve it using getValues() and setValues(), because you use contentsOnly: true. So how about this modification?
From :
sourceRange.copyTo(targetSheet.getRange(lastRow + 1,1), {contentsOnly: true})
To :
var sourceValues = sourceRange.getValues();
targetSheet.getRange(lastRow + 1, 1, sourceValues.length, sourceValues[0].length).setValues(sourceValues);
Note :
If you want to use copyTo(), this thread might be useful for your situation.
References :
copyTo()
getValues()
setValues()
If this was not what you want, please tell me. I would like to modify it.
Oh my gosh, it worked! Thank you so much!!! I have spent hours slogging through the internet trying to figure out how to do this. You made it look easy! I SO MUCH appreciate it! Thank you very much!!!!
@Carolyn I'm glad your issue was solved. Thank you, too!
Is there a fast and easy way to calculate high powers mentally, e.g. $ 67^{81} $?
I have done research into vedic mathematics and I was wondering if it's possible that a faster method exists than the one I already know which involves Pascal's triangle.
How fast and easy do you want your method to be? $67^{81}$ has $148$ digits, and regardless of which method you use (unless you use a computer), it will be very slow and painful.
As pointed out in the comments, I'm assuming you don't actually need all the digits, just an approximation. For this you can use $x^y = 10^{y \log_{10} x}$, and then break $x$ down around its nearest power of ten; this works well when $x$ is large. In your example:
$$67^{81}=10^{81 \log_{10} 67}$$
$$\log_{10} 67=\log_{10} (0.67\times100)=2+\log_{10} (0.67) =2+\frac{\ln 0.67}{\ln 10}$$
Using the Taylor series, you can calculate:
$$\ln 0.67=\ln(1-0.33)\approx-0.33-\frac{0.33^2}{2}-\frac{0.33^3}{3}\approx-0.396$$
And now using $\ln 10\approx 2.3$, we get $\log_{10} 0.67 \approx -0.396/2.3 \approx -0.172$, so:
$$81\log_{10} 67 \approx 81\times(2-0.172) = 81\times 1.828 \approx 148$$
Giving you $10^{148}$, which is really close to the correct $\approx8.2\times 10^{147}$.
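The estimate is easy to sanity-check numerically (Python shown here), since the digit count of $67^{81}$ is $\lfloor 81\log_{10}67\rfloor + 1$:

```python
import math

exponent = 81 * math.log10(67)
print(round(exponent, 2))   # 147.91, i.e. 67**81 is about 10**147.91, about 8.2e147
print(len(str(67 ** 81)))   # 148 digits, matching the mental estimate
```
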
Translate all magento1.9.2 emails in French language
We are running a Magento 1.9.2 website which contains two stores, English and French.
Currently all the emails sent to users are only in English, but if the user is in the French store then the email should be sent in French.
Kindly guide me on how I can translate the Magento emails into French.
Thank you
Create the email template that you want to send in French in app/locale/fr_FR, and the email will be sent in French if the store is set to French.
Changing the domain name in WP multisite - multidomain portal
I changed the domain of a site on my WordPress 3.7 multisite (subdirectory), multidomain installation.
Everything went fine, but I can no longer log in as super admin to the site whose domain changed.
Also, if I go from the main site to the domain-changed site, the portal asks for credentials again and doesn't recognise me.
On the main site I changed the settings in the following areas: domain setting, domain mapping, sites.
Any tips?
Thanks
Maybe try to delete/regenerate the .htaccess?
SOLVED!
I changed the site back to the original domain name; later, under Admin -> Domain, I added the new domain using the same site ID and set it as the primary site.
You can update your database to point to the new domain. And make the following changes in your wp-config.php
define('DOMAIN_CURRENT_SITE', 'www.newdomain.com');
define('PATH_CURRENT_SITE', '/');
In MySQL
update wp_options set option_value = "http://www.newsite.com/" where option_id = 1;
update wp_options set option_value = "http://www.newsite.com/" where option_id = 2;
update wp_blogs set domain = "www.newsite.com" where site_id = 1;
update wp_sitemeta set meta_value = "http://www.newsite.com/" where meta_id = 14;
Note: These settings in MySQL might change a little bit, so just confirm by doing a select on the table you're updating once. Mostly the structure should remain the same.
Intellij suddenly throwing ClassNotFoundException
I'm at a complete loss here. I have a project on an external hard drive called LenseProject. Inside LenseProject, I have .idea, lib, Natives and SRC folders. I also have a number of text files for reading.
When I left work last night, this all worked fine. Coming in this morning, I'm met with:
Exception in thread "main" java.lang.ClassNotFoundException: QuadTest
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:188)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:113)
QuadTest being the main class.
Information that I think might be helpful:
If I comment out the imports, I get the appropriate error messages.
(Cannot find symbol, etc).
I have 9 dependencies, located in lib\lwjgl-2.9.0\jar that are set up
in Project Structure -> Libraries.
The module has LenseProject as the content root, and SRC as the
source folder.
The language level is 7.0
I have the dependencies set to export in Project Structure -> Modules
-> Dependencies.
In Compiler Output, I have it set to Use Module Compile Output Path,
as Inherit Project Compile Output Path gave me "Cannot start
compilation: The output path is not specified for module
"LenseProject". Specify the output path in Configure Project.
I have VM option -Djava.library.path=Natives.
I can compile and run the program through command prompt no problem.
I was missing my configure options when I started it up this morning. I had to set the SDK again, and libraries.
You need to download dependencies first then add it to the project classpath.
I'm not sure what you mean. I have the dependencies in the lib folder.
Have you tried invalidating caches? It often helps with sudden errors:
File -> Invalidate Caches
Have you had found a solution for this issue? I'm facing the same but three years later :)
In 2020: a potential remedy, if you're using Gradle, is to edit your build.gradle and rebuild it.
In IntelliJ IDEA, open File -> Project Structure -> Modules, choose the problematic module, and in the "Paths" tab select "Use module compile output path".
I ran into a similar issue while writing unit tests. Everything would work at the command line but failed in IntelliJ. IntelliJ would successfully compile but not run the test.
Then I ran across a post on IntelliJ's blog: http://blog.jetbrains.com/idea/2014/03/intellij-idea-13-1-released/
anet says:
March 21, 2014 at 12:20 pm
You may remove the existing junit dependency and allow IDEA to add JUnit library for your from scratch.
New junit doesn’t bundle hamcrest anymore but still depends on it on runtime.
Thanks,
Anna
See more at: http://blog.jetbrains.com/idea/2014/03/intellij-idea-13-1-released/#sthash.2KNQuwZ5.dpuf
I removed JUnit from my project settings and let IntelliJ add it back. Then things worked fine.
I had similar issue. A new dependency was not being found when running tomcat. The problem was it wasn't being deployed to /WEB-INF/lib. After half a day banging my head on the desk I found this YouTube video that fixed it.
Essentially I needed to add the dependencies from Module Settings -> Artifact -> MyWar -> Output Layout tab. Under Available Elements, open your project's folder. If there are Maven dependencies listed there, select them, then right-click -> Put into /WEB-INF/lib.
In my case, the problem was I reused the "out" directory for program output.
I solved it by redirect the output to another folder.
I suggest confirming your Run Configuration as follows:
Toolbar: Run->Edit Configurations..., confirm your Configuration is correct.
Hope it helps.
I know it's a bit of an old post, but for me it helped to go to Run --> Edit Configurations --> in Application, select your main code file --> Configuration on the right panel. I checked "Use alternative JRE:" and selected the JDK folder called jre (.../Java/jdk.x.x.x_xx/jre), then Apply and OK. I am really new to IntelliJ, but that solved my problem; hope it helps someone.
My problem was that, after all this renaming of the project structure, my Maven dependencies didn't match.
After changing the Maven dependencies in the pom file, it worked.
File -> Project Structure -> Project.
Change the Project Language level from "SDK default" to the actual version that you are using.
This is weird, but solves the problem.
Check your mainClassName
mainClassName = "com.xxx.xxxApplicationKt"
After bashing my head into it multiple times, the issue was resolved by downgrading the JUnit version.
At the time of writing the latest JUnit version is 5.8.2. However, after downgrading the version to 5.7.2 in the pom.xml, the tests run successfully.
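For illustration, the corresponding pom.xml change might look like the fragment below (a sketch only; the junit-jupiter aggregate artifact is an assumption, and your project may declare JUnit through a different groupId/artifactId):

```xml
<dependency>
    <!-- downgraded from 5.8.2, per the workaround above -->
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>5.7.2</version>
    <scope>test</scope>
</dependency>
```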
Resolution 1: Update Java. If you're using Java 8, check the Java version with java -version (if Java is added to the Path on Windows); from the Start Menu, in the Java folder, check for updates.
Resolution 2: Downgrade junit in pom.xml.
Resolution 3: Install the proper latest JDK version from the Oracle link for Windows, not from the Google-suggested link.
Try going to Preferences -> Compiler and select Eclipse, rather than using javac.
So why are you telling IntelliJ to select the Eclipse compilers? The chances are, they won't be installed. And is it even an option to select them?
@StephenC yes, it is an option. Check it out.
OK ... so assuming that the Eclipse compilers are installed (a stretch!) ... >>why<< would changing the compilers fix what is essentially a runtime classpath problem?
| common-pile/stackexchange_filtered |
How to read in graphml file into networkx with weird characters?
I am trying to read a graphml file of my Facebook network into NetworkX. However, because some of my friends' names contain unusual characters, such as accents, their names cannot be read into NetworkX.
I ran the command:
g = nx.read_graphml("/Users/juliehui/Desktop/MyGraph.graphml")
I then get the error:
TypeError: int() argument must be a string or a number, not 'NoneType'
I looked at the graphml file in Sublime Text, and it seems to have trouble with names, such as Andrés
I then looked at the graphml file in Gephi to see what it looked like. The name, Andrés, in Gephi looks like:
AndrÃ©s
When I export the data without making any edits into a separate graphml file, and try to read that file in, I get the error:
UnicodeEncodeError: 'ascii' codec can't encode characters in position 7-8: ordinal not in range(128)
When I delete the problem names in Gephi, then the file reads fine.
I am not sure if there is some way to edit my original graphml file to fix the names with unusual characters.
I have looked at this page: Graphml parse error
But I could not figure out whether my graphml file is in UTF-8, needs to be in UTF-8, or needs to be in ASCII.
I have also tried:
data="/Users/juliehui/Desktop/MyGraph.graphml"
udata=data.decode("utf-8")
asciidata=udata.encode("ascii","ignore")
g = nx.read_graphml(asciidata)
But, this gave the error:
UnicodeEncodeError: 'ascii' codec can't encode characters in position 8-19: ordinal not in range(128)
How do I resolve this error?
This worked for me in Python 2.7. You have to specify the node type as unicode.
nx.read_graphml('/path/to/my/file.graphml', unicode)
Nice and neat answer!
I would suggest to use unidecode to remove all non ASCII character in the file:
from unidecode import unidecode

data_in = "/Users/juliehui/Desktop/MyGraph.graphml"
data_ascii = "/Users/juliehui/Desktop/MyGraph_ASCII.graphml"

# unidecode expects text, so read the file as UTF-8 and write ASCII out
with open(data_in, encoding="utf-8") as f_in, open(data_ascii, "w") as f_out:
    for line in f_in:
        f_out.write(unidecode(line))
Then you can hopefully use:
g = nx.read_graphml(data_ascii)
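If installing unidecode is not an option, the standard library can strip accents from Latin-script names (a sketch; unlike unidecode, characters with no ASCII decomposition are simply dropped rather than transliterated):

```python
import unicodedata

def to_ascii(text):
    # NFKD splits "é" into "e" plus a combining accent; encoding with
    # "ignore" then drops the combining mark (and any other non-ASCII).
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")

print(to_ascii("Andrés"))  # -> Andres
```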
celery tasks: update state after try & except block
I have celery 4.1.0, django 1.11.11, rabbitMQ and Redis for results.
@shared_task(bind=True)
def one_task(self):
try:
...
some db stuff here
...
except BaseException as error:
self.update_state(state='FAILURE',
meta={'notes': 'some notes'})
logger.error('Error Message ', exc_info=True,
extra={'error': error})
So, when my code runs into the except block, self.update_state does not work but logger works...
Actually, I'm not sure if
@shared_task(bind=True)
is right...
What I want to do it's catch exceptions(through try & except blocks) of my python code, change states and terminate the tasks manually.
So, any advise/help?
What do you mean by saying that it is not working?
Well, when I look in my flower dashboard, my task finish with success state :/
Celery will set a success status on every task that finishes without throwing an exception, and you're catching that exception without re-throwing it.
Unfortunately, re-throwing it won't help, because Celery will put the task into the FAILURE state with its own error message.
The only solution to that problem is to set the ignore_result=True option on this task, so Celery won't manage the state of this task, but the Celery documentation suggests that it may have other side effects.
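In current Celery versions there is also another option: raise celery.exceptions.Ignore right after update_state, which tells the worker to keep the custom FAILURE state instead of overwriting it with SUCCESS. Below is a minimal, broker-free sketch of that control flow; FakeTask and the local Ignore class are hypothetical stand-ins for the real Celery task instance and celery.exceptions.Ignore:

```python
class FakeTask:
    """Stand-in for a bound Celery task (real code: @shared_task(bind=True))."""
    def __init__(self):
        self.state, self.meta = "PENDING", None

    def update_state(self, state, meta=None):
        self.state, self.meta = state, meta

class Ignore(Exception):
    """Stand-in for celery.exceptions.Ignore."""

def one_task(self):
    try:
        raise ValueError("some db stuff failed")  # simulate the failing db code
    except Exception:
        self.update_state(state="FAILURE", meta={"notes": "some notes"})
        raise Ignore()  # the worker then leaves the state we just set untouched

task = FakeTask()
try:
    one_task(task)
except Ignore:
    pass
print(task.state)  # -> FAILURE
```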
Trying to Scanner from delimited txt file into a String Array
I am trying to read a tab delimited txt file and put the data into two columns of a String array.
package mailsender;
import java.io.*;
import java.util.Scanner;
public class MailSenderList {
static String address=null;
static String name=null;
static String[][] mailer;
// @SuppressWarnings("empty-statement")
public static void main(String[] args) throws IOException {
try {
Scanner s = new Scanner(new BufferedReader(new FileReader("/home/fotis/Documents/MailRecipients.txt"),'\t')); //This is the path and the name of my file
for(int i=0;i>=30;i++){
for(int j=0;j>=2;j++){
if (s.hasNext());{
mailer[i][j]=s.next(); //here i am trying to put 1st+2 word into first column and 2nd+2 into second column.
}
}
}
for(int ii=0;ii>=30;ii++){
System.out.println("Line : ");
for(int ji=0;ji>=2;ji++){
System.out.print(" " + mailer[ii][ji]);
//trying to print and check the array
}
}
}
catch (java.io.FileNotFoundException e) {
System.out.println("Error opening file, ending program");
//System.exit(1);}
}
}
class mail{
mail(){
}
}
}
The file builds successfully but there is no result in System.out. In the debugger, it seems it never gets past the first for loop.
The code will never enter either for loop. You are saying i=0, and while i is greater than or equal to 30, which it is not, so it will exit the loop.
Indeed. The for condition was wrong. But I was dizzy.
Nevertheless, confusing "<" and ">" was not the real problem.
The code worked with while.
Here is the whole class.
import java.io.File;
import java.util.Scanner;
public class ReadFile {
public static void main(String[] args) {
try {
File file = new File("/home/fotis/Documents/Mailers.txt"); //this a the path there
try (Scanner input = new Scanner(file).useDelimiter("\\t")) {
String line[] = new String[150000];
int i=0;
while (input.hasNextLine()) {
line[i] = input.next();
System.out.println(line[i]);
i++;
}
}
} catch (Exception ex) {
ex.printStackTrace();
}
Well, yes, this works better. But your original problem was that you mixed up greater than and less than.
You probably made a mistake between < and >. Try switching i >= 30 in both for loops to i <= 30. Same with the j loops.
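As an aside, a common way to avoid both the reversed loop condition and the fixed-size array is to read whole lines and split each one on the tab character. A rough sketch (the class name and sample rows are made up for illustration; in real code the lines would come from the file, e.g. via Files.readAllLines):

```java
import java.util.ArrayList;
import java.util.List;

public class TabSplit {

    // Splits tab-delimited lines into a [rows][2] array: name, then address.
    static String[][] toColumns(List<String> lines) {
        String[][] mailer = new String[lines.size()][2];
        for (int i = 0; i < lines.size(); i++) {          // note: <, not >=
            String[] parts = lines.get(i).split("\t", 2); // at most 2 columns
            mailer[i][0] = parts[0];
            mailer[i][1] = parts.length > 1 ? parts[1] : "";
        }
        return mailer;
    }

    public static void main(String[] args) {
        List<String> lines = new ArrayList<>();
        lines.add("John\tjohn@example.com");   // hypothetical sample rows
        lines.add("Fotis\tfotis@example.com");
        String[][] mailer = toColumns(lines);
        System.out.println(mailer[1][0]);      // -> Fotis
    }
}
```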
How to implement elastic scrolling using jquery mobile?
I am developing an ebook reader app. When I swipe the pages, elastic scrolling has to happen. Can somebody please let me know how to do this using jQuery Mobile?
Django ORM fails to recognise concrete inheritance in nested ON statement
Defining a custom Django user combined with django-taggit, I have run into an ORM issue; I also have this issue in the Django admin filters.
NOTE: I am using this snippet: https://djangosnippets.org/snippets/1034/
# User
id | first_name
---------------------------------
1 | John
2 | Jane
# MyUser
user_ptr_id | subscription
---------------------------------
1 | 'A'
2 | 'B'
Now when I use the django ORM to filter on certain tags for MyUser, e.g.
MyUser.objects.filter(tags__in=tags)
I get the following error:
(1054, "Unknown column 'myapp_user.id' in 'on clause'")
The printed raw query:
SELECT `myproject_user`.`id`, `myproject_user`.`first_name`, `myapp_user`.`user_ptr_id`, `myapp_user`.`subscription`
FROM `myapp_user` INNER JOIN `myproject_user`
ON ( `myapp_user`.`user_ptr_id` = `myproject_user`.`id` )
INNER JOIN `taggit_taggedtag`
ON ( `myapp_user`.`id` = `taggit_taggedtag`.`object_id`
AND (`taggit_taggedtag`.`content_type_id` = 31))
WHERE (`taggit_taggedtag`.`tag_id`)
IN (SELECT `taggit_tag`.`id` FROM `taggit_tag` WHERE `taggit_tag`.`id` IN (1, 3)))
Changing 'id' to 'user_ptr_id' in the second ON part makes the query work. Is there any way to force this with the Django ORM?
Can you add code of models?
You need to provide more information. If we could see how your models are defined, that would help.
there is no way to answer this with the info provided. you have to provide some more code.
The issue is that you can't look for an ID in a list of Tags; you need to look for the ID in a list of IDs. To fix this, construct a values_list of all of the IDs you want to filter by, and then pass that list off to your original query instead.
id_list = Tag.objects.all().values_list("id")
MyUser.objects.filter(tags__in=id_list)
If you have a many-to-many relationship between MyUser and Tag, you can also just use the many-to-many manager in place of the whole thing:
MyUser.tags.all()