"""This module is intended for solving recurrences, or in other words, difference equations. Currently supported are linear, inhomogeneous equations with polynomial or rational coefficients. The solutions are obtained among polynomials, rational functions, hypergeometric terms, or combinations of hypergeometric terms which are pairwise dissimilar. The rsolve_X functions were meant as a low-level interface for rsolve(), which uses Mathematica-like syntax.

Given a recurrence relation:

    a_{k}(n) y(n+k) + a_{k-1}(n) y(n+k-1) + ... + a_{0}(n) y(n) = f(n)

where k > 0 and the a_{i}(n) are polynomials in n, to use rsolve_X we need to put all coefficients into a list L of k+1 elements in the following way:

    L = [ a_{0}(n), ..., a_{k-1}(n), a_{k}(n) ]

where L[i], for i = 0..k, maps to a_{i}(n) y(n+i) (the y(n+i) is implicit). For example, if we would like to compute the m-th Bernoulli polynomial up to a constant (example taken from the rsolve_poly docstring), then we would use the recurrence b(n+1) - b(n) == m*n**(m-1), which has the solution b(n) = B_m + C.
Then L = [-1, 1] and f(n) = m*n**(m-1), and finally, for m=4:

    >>> from sympy import Symbol, bernoulli, rsolve_poly
    >>> n = Symbol('n', integer=True)
    >>> rsolve_poly([-1, 1], 4*n**3, n)
    C0 + n**4 - 2*n**3 + n**2
    >>> bernoulli(4, n)
    n**4 - 2*n**3 + n**2 - 1/30

For the sake of completeness, f(n) can be:

    [1] a polynomial              -> rsolve_poly
    [2] a rational function       -> rsolve_ratio
    [3] a hypergeometric function -> rsolve_hyper

"""
from collections import defaultdict

from sympy.core.singleton import S
from sympy.core.numbers import Rational
from sympy.core.symbol import Symbol, Wild, Dummy
from sympy.core.relational import Equality
from sympy.core.add import Add
from sympy.core.mul import Mul
from sympy.core import sympify

from sympy.simplify import simplify, hypersimp, hypersimilar
from sympy.solvers import solve, solve_undetermined_coeffs
from sympy.polys import Poly, quo, gcd, lcm, roots, resultant
from sympy.functions import binomial, factorial, FallingFactorial, RisingFactorial
from sympy.matrices import Matrix, casoratian
from sympy.concrete import product
from sympy.core.compatibility import default_sort_key
from sympy.utilities.iterables import numbered_symbols

def rsolve_poly(coeffs, f, n, **hints):
    """Given a linear recurrence operator L of order 'k' with polynomial
    coefficients and the inhomogeneous equation Ly = f, where 'f' is a
    polynomial, we seek all polynomial solutions over a field K of
    characteristic zero.

    The algorithm performs two basic steps:

        (1) Compute degree N of the general polynomial solution.
        (2) Find all polynomial solutions of degree N or less.

    [1] S. A. Abramov, M. Bronstein and M. Petkovsek, On polynomial
        solutions of linear operator equations, in: T. Levelt, ed.,
        Proc. ISSAC '95, ACM Press, New York, 1995, 290-296.

    [2] M. Petkovsek, Hypergeometric solutions of linear recurrences
        with polynomial coefficients, J. Symbolic Computation,
        14 (1992), 243-264.

    [3] M. Petkovsek, H. S. Wilf, D. Zeilberger, A = B, 1996.
""" f = sympify(f) if not f.is_polynomial(n): return None homogeneous = f.is_zero r = len(coeffs) - 1 coeffs = [ Poly(coeff, n) for coeff in coeffs ] polys = [ Poly(0, n) ] * (r + 1) terms = [ (S.Zero, S.NegativeInfinity) ] *(r + 1) for i in xrange(0, r + 1): for j in xrange(i, r + 1): polys[i] += coeffs[j]*binomial(j, i) if not polys[i].is_zero: (exp,), coeff = polys[i].LT() terms[i] = (coeff, exp) d = b = terms[0][1] for i in xrange(1, r + 1): if terms[i][1] > d: d = terms[i][1] if terms[i][1] - i > b: b = terms[i][1] - i d, b = int(d), int(b) x = Dummy('x') degree_poly = S.Zero for i in xrange(0, r + 1): if terms[i][1] - i == b: degree_poly += terms[i][0]*FallingFactorial(x, i) nni_roots = roots(degree_poly, x, filter='Z', predicate=lambda r: r >= 0).keys() if nni_roots: N = [max(nni_roots)] else: N = [] if homogeneous: N += [-b - 1] else: N += [f.as_poly(n).degree() - b, -b - 1] N = int(max(N)) if N < 0: if homogeneous: if hints.get('symbols', False): return (S.Zero, []) else: return S.Zero else: return None if N <= r: C = [] y = E = S.Zero for i in xrange(0, N + 1): C.append(Symbol('C' + str(i))) y += C[i] * n**i for i in xrange(0, r + 1): E += coeffs[i].as_expr()*y.subs(n, n + i) solutions = solve_undetermined_coeffs(E - f, C, n) if solutions is not None: C = [ c for c in C if (c not in solutions) ] result = y.subs(solutions) else: return None # TBD else: A = r U = N + A + b + 1 nni_roots = roots(polys[r], filter='Z', predicate=lambda r: r >= 0).keys() if nni_roots != []: a = max(nni_roots) + 1 else: a = S.Zero def _zero_vector(k): return [S.Zero] * k def _one_vector(k): return [S.One] * k def _delta(p, k): B = S.One D = p.subs(n, a + k) for i in xrange(1, k + 1): B *= -Rational(k - i + 1, i) D += B * p.subs(n, a + k - i) return D alpha = {} for i in xrange(-A, d + 1): I = _one_vector(d + 1) for k in xrange(1, d + 1): I[k] = I[k - 1] * (x + i - k + 1)/k alpha[i] = S.Zero for j in xrange(0, A + 1): for k in xrange(0, d + 1): B = binomial(k, i + j) D = 
_delta(polys[j].as_expr(), k) alpha[i] += I[k]*B*D V = Matrix(U, A, lambda i, j: int(i == j)) if homogeneous: for i in xrange(A, U): v = _zero_vector(A) for k in xrange(1, A + b + 1): if i - k < 0: break B = alpha[k - A].subs(x, i - k) for j in xrange(0, A): v[j] += B * V[i - k, j] denom = alpha[-A].subs(x, i) for j in xrange(0, A): V[i, j] = -v[j] / denom else: G = _zero_vector(U) for i in xrange(A, U): v = _zero_vector(A) g = S.Zero for k in xrange(1, A + b + 1): if i - k < 0: break B = alpha[k - A].subs(x, i - k) for j in xrange(0, A): v[j] += B * V[i - k, j] g += B * G[i - k] denom = alpha[-A].subs(x, i) for j in xrange(0, A): V[i, j] = -v[j] / denom G[i] = (_delta(f, i - A) - g) / denom P, Q = _one_vector(U), _zero_vector(A) for i in xrange(1, U): P[i] = (P[i - 1] * (n - a - i + 1)/i).expand() for i in xrange(0, A): Q[i] = Add(*[ (v*p).expand() for v, p in zip(V[:, i], P) ]) if not homogeneous: h = Add(*[ (g*p).expand() for g, p in zip(G, P) ]) C = [ Symbol('C' + str(i)) for i in xrange(0, A) ] g = lambda i: Add(*[ c*_delta(q, i) for c, q in zip(C, Q) ]) if homogeneous: E = [ g(i) for i in xrange(N + 1, U) ] else: E = [ g(i) + _delta(h, i) for i in xrange(N + 1, U) ] if E != []: solutions = solve(E, *C) if not solutions: if homogeneous: if hints.get('symbols', False): return (S.Zero, []) else: return S.Zero else: return None else: solutions = {} if homogeneous: result = S.Zero else: result = h for c, q in list(zip(C, Q)): if c in solutions: s = solutions[c]*q C.remove(c) else: s = c*q result += s.expand() if hints.get('symbols', False): return (result, C) else: return result[docs]def rsolve_ratio(coeffs, f, n, **hints): """Given linear recurrence operator L of order 'k' with polynomial coefficients and inhomogeneous equation Ly = f, where 'f': (1) Compute polynomial v(n) which can be used as universal denominator of any rational solution of equation Ly = f. : [1] S. A. 
Abramov, Rational solutions of linear difference and q-difference equations with polynomial coefficients, in: T. Levelt, ed., Proc. ISSAC '95, ACM Press, New York, 1995, 285-289)) """ f = sympify(f) if not f.is_polynomial(n): return None coeffs = map(sympify, coeffs) r = len(coeffs) - 1 A, B = coeffs[r], coeffs[0] A = A.subs(n, n - r).expand() h = Dummy('h') res = resultant(A, B.subs(n, n + h), n) if not res.is_polynomial(h): p, q = res.as_numer_denom() res = quo(p, q, h) nni_roots = roots(res, h, filter='Z', predicate=lambda r: r >= 0).keys() if not nni_roots: return rsolve_poly(coeffs, f, n, **hints) else: C, numers = S.One, [S.Zero]*(r + 1) for i in xrange(int(max(nni_roots)), -1, -1): d = gcd(A, B.subs(n, n + i), n) A = quo(A, d, n) B = quo(B, d.subs(n, n - i), n) C *= Mul(*[ d.subs(n, n - j) for j in xrange(0, i + 1) ]) denoms = [ C.subs(n, n + i) for i in range(0, r + 1) ] for i in range(0, r + 1): g = gcd(coeffs[i], denoms[i], n) numers[i] = quo(coeffs[i], g, n) denoms[i] = quo(denoms[i], g, n) for i in xrange(0, r + 1): numers[i] *= Mul(*(denoms[:i] + denoms[i + 1:])) result = rsolve_poly(numers, f * Mul(*denoms), n, **hints) if result is not None: if hints.get('symbols', False): return (simplify(result[0] / C), result[1]) else: return simplify(result / C) else: return None[docs]def rsolve_hyper(coeffs, f, n, **hints): """Given linear recurrence operator L of order 'k': (1) Group together similar hypergeometric terms in the inhomogeneous part of Ly = f, and find particular solution using Abramov's algorithm. (2) Compute generating set of L and find basis in it, so that all solutions are linearly independent. (3): [1] M. Petkovsek, Hypergeometric solutions of linear recurrences with polynomial coefficients, J. Symbolic Computation, 14 (1992), 243-264. 
""" coeffs = map(sympify, coeffs) f = sympify(f) r, kernel = len(coeffs) - 1, [] if not f.is_zero: if f.is_Add: similar = {} for g in f.expand().args: if not g.is_hypergeometric(n): return None for h in similar.iterkeys(): if hypersimilar(g, h, n): similar[h] += g break else: similar[g] = S.Zero inhomogeneous = [] for g, h in similar.iteritems(): inhomogeneous.append(g + h) elif f.is_hypergeometric(n): inhomogeneous = [f] else: return None for i, g in enumerate(inhomogeneous): coeff, polys = S.One, coeffs[:] denoms = [ S.One ] * (r + 1) s = hypersimp(g, n) for j in xrange(1, r + 1): coeff *= s.subs(n, n + j - 1) p, q = coeff.as_numer_denom() polys[j] *= p denoms[j] = q for j in xrange(0, r + 1): polys[j] *= Mul(*(denoms[:j] + denoms[j + 1:])) R = rsolve_poly(polys, Mul(*denoms), n) if not (R is None or R is S.Zero): inhomogeneous[i] *= R else: return None result = Add(*inhomogeneous) else: result = S.Zero Z = Dummy('Z') p, q = coeffs[0], coeffs[r].subs(n, n - r + 1) p_factors = [ z for z in roots(p, n).iterkeys() ] q_factors = [ z for z in roots(q, n).iterkeys() ] factors = [ (S.One, S.One) ] for p in p_factors: for q in q_factors: if p.is_integer and q.is_integer and p <= q: continue else: factors += [(n - p, n - q)] p = [ (n - p, S.One) for p in p_factors ] q = [ (S.One, n - q) for q in q_factors ] factors = p + factors + q for A, B in factors: polys, degrees = [], [] D = A*B.subs(n, n + r - 1) for i in xrange(0, r + 1): a = Mul(*[ A.subs(n, n + j) for j in xrange(0, i) ]) b = Mul(*[ B.subs(n, n + j) for j in xrange(i, r) ]) poly = quo(coeffs[i]*a*b, D, n) polys.append(poly.as_poly(n)) if not poly.is_zero: degrees.append(polys[i].degree()) d, poly = max(degrees), S.Zero for i in xrange(0, r + 1): coeff = polys[i].nth(d) if coeff is not S.Zero: poly += coeff * Z**i for z in roots(poly, Z).iterkeys(): if z.is_zero: continue C = rsolve_poly([ polys[i]*z**i for i in xrange(r + 1) ], 0, n) if C is not None and C is not S.Zero: ratio = z * A * C.subs(n, n + 1) / B / C 
ratio = simplify(ratio) # If there is a nonnegative root in the denominator of the ratio, # this indicates that the term y(n_root) is zero, and one should # start the product with the term y(n_root + 1). n0 = 0 for n_root in roots(ratio.as_numer_denom()[1], n).keys(): n0 = max(n0, n_root + 1) K = product(ratio, (n, n0, n - 1)) if K.has(factorial, FallingFactorial, RisingFactorial): K = simplify(K) if casoratian(kernel + [K], n, zero=False) != 0: kernel.append(K) symbols = numbered_symbols('C') kernel.sort(key=default_sort_key) sk = zip(symbols, kernel) for C, ker in sk: result += C * ker if hints.get('symbols', False): return (result, [s for s, k in sk]) else: return result[docs]def rsolve(f, y, init=None): """! """ if isinstance(f, Equality): f = f.lhs - f.rhs n = y.args[0] k = Wild('k', exclude=(n,)) h_part = defaultdict(lambda: S.Zero) i_part = S.Zero for g in Add.make_args(f): coeff = S.One kspec = None for h in Mul.make_args(g): if h.is_Function: if h.func == y.func: result = h.args[0].match(n + k) if result is not None: kspec = int(result[k]) else: raise ValueError( "'%s(%s+k)' expected, got '%s'" % (y.func, n, h)) else: raise ValueError( "'%s' expected, got '%s'" % (y.func, h.func)) else: coeff *= h if kspec is not None: h_part[kspec] += coeff else: i_part += coeff for k, coeff in h_part.iteritems(): h_part[k] = simplify(coeff) common = S.One for coeff in h_part.itervalues(): if coeff.is_rational_function(n): if not coeff.is_polynomial(n): common = lcm(common, coeff.as_numer_denom()[1], n) else: raise ValueError( "Polynomial or rational function expected, got '%s'" % coeff) i_numer, i_denom = i_part.as_numer_denom() if i_denom.is_polynomial(n): common = lcm(common, i_denom, n) if common is not S.One: for k, coeff in h_part.iteritems(): numer, denom = coeff.as_numer_denom() h_part[k] = numer*quo(common, denom, n) i_part = i_numer*quo(common, i_denom, n) K_min = min(h_part.keys()) if K_min < 0: K = abs(K_min) H_part = defaultdict(lambda: S.Zero) i_part = 
i_part.subs(n, n + K).expand() common = common.subs(n, n + K).expand() for k, coeff in h_part.iteritems(): H_part[k + K] = coeff.subs(n, n + K).expand() else: H_part = h_part K_max = max(H_part.iterkeys()) coeffs = [H_part[i] for i in xrange(K_max + 1)] result = rsolve_hyper(coeffs, -i_part, n, symbols=True) if result is None: return None solution, symbols = result if init == {} or init == []: init = None if symbols and init is not None: if type(init) is list: init = dict([(i, init[i]) for i in xrange(len(init))]) equations = [] for k, v in init.iteritems(): try: i = int(k) except TypeError: if k.is_Function and k.func == y.func: i = int(k.args[0]) else: raise ValueError("Integer or term expected, got '%s'" % k) try: eq = solution.limit(n, i) - v except NotImplementedError: eq = solution.subs(n, i) - v equations.append(eq) result = solve(equations, *symbols) if not result: return None else: for k, v in result.iteritems(): solution = solution.subs(k, v) return solution

http://docs.sympy.org/dev/_modules/sympy/solvers/recurr.html
This article tells you how to merge any two files, whether text or video, irrespective of the content type and size of the files.

Introduction: I have used the FileStream class for merging two files. The first file is opened for appending, and the second file is opened and read into a byte array. This byte array is then written onto the first file's stream, and both file streams are closed. The first file has now been appended with the contents of the second file.

Uses: If you are left with two parts of a video or audio file, you have no way to play either part on its own, because the player may not be able to read partial files. Merging is useful for joining same-format files that were split due to size restrictions.

Code:
private void cmdMerge_Click(object sender, EventArgs e)
{
    string sFile1 = txtFile1.Text;
    string sFile2 = txtFile2.Text;
    FileStream fs1 = null;
    FileStream fs2 = null;
    try
    {
        // Open the first file for appending and the second for reading.
        fs1 = File.Open(sFile1, FileMode.Append);
        fs2 = File.Open(sFile2, FileMode.Open);

        // Read the whole second file into a byte array, then append it
        // to the end of the first file.
        byte[] fs2Content = new byte[fs2.Length];
        fs2.Read(fs2Content, 0, (int)fs2.Length);
        fs1.Write(fs2Content, 0, (int)fs2.Length);
        MessageBox.Show("Done!");
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message + " : " + ex.StackTrace);
    }
    finally
    {
        // Close both streams even if an error occurred.
        if (fs1 != null) fs1.Close();
        if (fs2 != null) fs2.Close();
    }
}
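For readers working in Java rather than C#, the same append-based merge can be sketched as follows (class, method, and file names here are my own, not from the article; the buffered loop also avoids loading the whole second file into memory at once):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MergeFiles {
    // Appends the bytes of file2 onto the end of file1.
    public static void merge(String file1, String file2) throws IOException {
        try (FileOutputStream out = new FileOutputStream(file1, true); // true = append mode
             FileInputStream in = new FileInputStream(file2)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);   // write only the bytes actually read
            }
        } // try-with-resources closes both streams, even on error
    }

    public static void main(String[] args) throws IOException {
        Files.write(Paths.get("a.bin"), new byte[]{1, 2});
        Files.write(Paths.get("b.bin"), new byte[]{3, 4});
        merge("a.bin", "b.bin");
        System.out.println(Files.readAllBytes(Paths.get("a.bin")).length); // 4
    }
}
```

As with the C# version, the merged result is only playable if the two parts were split from one file of the same format.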
Issues
ZF-12224: Zend_Json::fromXml creates empty json string
Description
I have extracted an xml (xmp) string from a jpeg photo and successfully loaded it to create a DOMDocument in php. When I export the xml string from the DOMDocument (using $xmlstring = $xmldocument->saveXML()) and try to create a json string using Zend_Json::fromXml($xmlstring), I get the following "empty" json string: {"RDF":""}
Same thing happens when I use the xml string directly (ie without loading to a DOMDocument first). There aren't many recursion levels in the string (2 or 3) and I'm able to manipulate the DOMDocument in php without any issues.
Posted by Frank Brückner (frosch) on 2012-05-22T08:56:31.000+0000
Can you post the XML string here?
Posted by lindon (lindon) on 2012-05-24T03:54:50.000+0000
Here's the xml string:"> 123000000/100003000000/10000
Posted by Frank Brückner (frosch) on 2012-05-24T08:12:25.000+0000
This is a {{SimpleXML}} problem.
Posted by lindon (lindon) on 2012-05-29T01:14:20.000+0000
Thanks for looking at this. Not too sure about the answer you found though. When I step through the code, the problem seems to be in the _processXml function in Zend/Json.php. In that function the $children variable doesn't work for the example above (and perhaps anytime namespaces are used?).
When namespaces are used, I think you would need to start with one of the following:

    // Either get all namespaces used at once.
    // For the above XML this returns an array of 8 namespaces:
    // rdf, tiff, xap, exif, xapMM, dc, xml, photoshop
    $ns = $simpleXmlElementObject->getNamespaces(true);

    // Or start with just the top-level namespace.
    // For the above XML this returns an array with a single namespace: rdf
    $ns = $simpleXmlElementObject->getNamespaces();

    // Then get children by iterating through the namespaces.
    // For the above XML, this returns 6 children for the rdf namespace.
    foreach ($ns as $namespace => $url) {
        $children = $simpleXmlElementObject->children($namespace, true);
    }
With the existing code, $children is 0, which seems to be the reason an empty string is returned.
The answer you linked to suggested the colons in the xml tags caused a problem in SimpleXML. Not sure what problem is being referred to - the colons signify the use of namespaces and SimpleXML can easily return the part before the colon (the namespace), or the part after (the name). Thanks again for your help. lindon
Posted by Tamlyn Rhodes (tamlyn) on 2012-08-07T13:17:39.000+0000
I don't think this is a problem with SimpleXML. The comp.lang.php discussion linked above is very old, possibly from before SimpleXML supported namespaced elements which it definitely does do now as explained by @lindon.
It would be very useful if Zend_Json could handle namespaces. | http://framework.zend.com/issues/browse/ZF-12224?focusedCommentId=50745&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-27 | refinedweb | 468 | 59.84 |
-- | -- ( SearchQueue(..), SearchView(..), LIFO(..), FIFO(..), parallelTreeSearch ) where import Control.Monad.SearchTree import Control.Concurrent import qualified Data.Sequence as Seq -- | -- Search queues store multiple search trees. -- class SearchQueue q where -- | Constructs an empty search queue. emptyQ :: q a -- | Adds a search tree to asearch queue. addQ :: SearchTree a -> q a -> q a -- | creates a view on a search queue for pattern matching. viewQ :: q a -> SearchView q a -- | -- Checks whether the given search queue is empty. -- isEmptyQ :: SearchQueue q => q a -> Bool {-# SPECIALISE INLINE isEmptyQ :: LIFO a -> Bool #-} {-# SPECIALISE INLINE isEmptyQ :: FIFO a -> Bool #-} isEmptyQ q = case viewQ q of EmptyQ -> True; _ -> False -- | -- A @SearchView@ is used for pattern matching a search queue. -- data SearchView q a = EmptyQ | SearchTree a :~ q a -- | -- LIFO search queues can be used to implement parallel depth-first -- search. -- newtype LIFO a = LIFO [SearchTree a] instance SearchQueue LIFO where {-# SPECIALISE instance SearchQueue LIFO #-} emptyQ = LIFO [] addQ t (LIFO q) = LIFO (t:q) viewQ (LIFO []) = EmptyQ viewQ (LIFO (x:xs)) = x :~ LIFO xs -- | -- FIFO search queues can be used to implement parallel breadth-first -- search. -- newtype FIFO a = FIFO (Seq.Seq (SearchTree a)) instance SearchQueue FIFO where {-# SPECIALISE instance SearchQueue FIFO #-} emptyQ = FIFO Seq.empty addQ t (FIFO q) = FIFO (q Seq.|> t) viewQ (FIFO q) = case Seq.viewl q of Seq.EmptyL -> EmptyQ x Seq.:< xs -> x :~ FIFO xs -- | -- This function enumerates the results stored in the queue of -- @SearchTree@s in parallel. It is parameterised by the maximum -- number of threads to use and the maximum amount of work to perform -- by each thread before communicating the results. 
-- parallelTreeSearch :: SearchQueue q => Int -- ^ thread limit -> Int -- ^ work limit -> q a -- ^ queue with search trees -> IO [a] parallelTreeSearch tl wl q = do counter <- newMVar 1 channel <- newChan let env = SearchEnv tl wl counter channel forkIO (parSearch env [] q)] } parSearch :: SearchQueue q => SearchEnv a -> [a] -> q a -> IO () parSearch env xs q | isEmptyQ q = do writeResults env xs finaliseResults env | otherwise = do noMoreThreads <- threadLimitReached env if noMoreThreads then let (ys,q') = search (workLimit env) xs (viewQ q) in do writeResults env ys parSearch env [] q' else do (ys,q') <- process env [] (viewQ q) parSearch env ys q' -- forks a new thread for the first entry of the given queue that is a -- choice. -- process :: SearchQueue q => SearchEnv a -> [a] -> SearchView q a -> IO ([a], q a) process _ xs EmptyQ = return (xs,emptyQ) process env xs (None :~ q) = process env xs (viewQ q) process env xs (One x :~ q) = process env (x:xs) (viewQ q) process env xs (Choice s t :~ q) = do incThreadCounter env forkIO (parSearch env xs (addQ s (emptyQ `withTypeOf` q))) return ([], addQ t q) withTypeOf :: a -> a -> a withTypeOf = const -- :: SearchQueue q => Int -> [a] -> SearchView q a -> ([a],q a) search _ xs EmptyQ = (xs,emptyQ) search 0 xs (t :~ q) = (xs,addQ t q) search n xs (None :~ q) = search (n-1) xs (viewQ q) search n xs (One x :~ q) = search (n-1) (x:xs) (viewQ q) search n xs (Choice s t :~ q) = search (n-1) xs (viewQ (addQ s (addQ t q))) | http://hackage.haskell.org/package/parallel-tree-search-0.2/docs/src/Control-Concurrent-ParallelTreeSearch.html | CC-MAIN-2016-30 | refinedweb | 513 | 73.21 |
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
Core Java Interview Questions Page2
...
than an interface ?
Ans : A Java interface is an abstract data type like...
we cannot create objects of an interface. Typically, an interface in java
Inter Thread Communication
is ended or there is no thread running inside the critical region. Then the1
... be assigned in the constructor. As per the specification
declared in java document... of the arguments passed,
that is sufficient for the java interpreter
Thread
Java throw and throws
Whenever we want to raise an exception explicitly, we use throw... a possible exception then we use the throws keyword. The point to note here is that the Java compiler knows very well which exceptions are thrown by some methods, so it insists
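The distinction above can be sketched as follows (class and message text are mine; note that a throws clause is only mandatory for checked exceptions, while this example uses an unchecked one for brevity):

```java
public class ThrowVsThrows {
    // 'throws' in the signature declares that this method may raise an exception.
    // (For an unchecked exception like this one, the clause is optional.)
    static void check(int age) throws IllegalArgumentException {
        if (age < 0) {
            // 'throw' actually raises the exception at runtime.
            throw new IllegalArgumentException("negative age");
        }
    }

    public static void main(String[] args) {
        try {
            check(-1);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());   // negative age
        }
    }
}
```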
corejava - Java Beginners
for more information:... design patterns are there in core java?
which are useful in threads?what r....
-----------------------
This is code of using thread
public class ThreadTest
corejava - Java Beginners
/java/thread/deadlocks.shtmlThanks...Deadlock Core Java What is Deadlock in Core Java? ... at the same time . To avoid this problem java has a concept called synchronization
CoreJava Project
CoreJava Project Hi Sir,
I need a simple project (using core Java, Swing, JDBC) on core Java... If you have one, please send it to my account
Java thread
Java thread Why do threads block on I/O? When a thread... and in that time some other thread which is not waiting for that I/O gets a chance to execute. If any input is not available to the thread which got suspended for I/O
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
Core Java Interview Questions Page3
... ?
Ans :
Generally Java sandbox does not allow to obtain this reference so... and retrieve information. For example, the
term data store is used in Enterprise Java
Some additions for my previous question - Java Beginners
Some additions for my previous question This is the question I posted several hours ago:
"I'm trying to write a program that has a text field for input, two buttons and the output text area. A user is asked to type in a number
Thread
Thread Explain two ways of creating thread in java. Explain at three methods of thread class.
Java Create Thread
There are two main ways of creating a thread. The first is to extend the Thread class and the second
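The snippet above is truncated in this listing; both standard ways can be sketched as follows (class names are mine):

```java
public class CreateThreads {
    // 1) Extend Thread and override run().
    static class CounterThread extends Thread {
        int sum = 0;
        public void run() {
            for (int i = 1; i <= 10; i++) sum += i;   // 55
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CounterThread t1 = new CounterThread();
        t1.start();

        // 2) Implement Runnable and hand it to a Thread.
        final int[] sum2 = new int[1];
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                for (int i = 11; i <= 20; i++) sum2[0] += i;   // 155
            }
        });
        t2.start();

        t1.join();   // wait for both before reading results
        t2.join();
        System.out.println(t1.sum + " " + sum2[0]); // 55 155
    }
}
```

The Runnable form is usually preferred, since it leaves the class free to extend something else.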
Thread
Thread My question is why it is needed to pass this keyword to the thread constructor eventhough we had created only one thread and if you say we have added to point to the current thread then why we have not added
Thread
Thread What is multi-threading? Explain different states of a thread.
Java Multithreading
Multithreading is a technique that allows... processor system.
States of Thread:
New state - After the creation of a Thread
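The list of states above is truncated in this listing; since Java 5 the states can be observed directly with Thread.getState() (class name below is mine):

```java
public class ThreadStates {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try { Thread.sleep(200); } catch (InterruptedException e) { }
            }
        });
        System.out.println(t.getState());  // NEW: created but not yet started
        t.start();
        Thread.sleep(50);
        System.out.println(t.getState());  // very likely TIMED_WAITING (inside sleep)
        t.join();
        System.out.println(t.getState());  // TERMINATED: run() has finished
    }
}
```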
question
question Sir,
Please help me to develop a simple search engine model in java , send me some codes
Thread
attaching class object to thread..?
There are some compilation errors... some code like this:
class A extends Thread {
public void run...Thread will this code work..?
class A extends Thread
{
public
question
question i need to select data from database using mysql+java script+html
Please specify some more details Sir ,
i have a starting trouble to start my search engine project , please help me to start my project , please send me some relevant java codes
Thread
Thread Write a Java program to create three threads. Each thread should produce the sum of 1 to 10, 11 to 20 and 21 to 30 respectively. Main thread....
Java Thread Example
class ThreadExample{
static int
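The example above is cut off in this listing; a self-contained sketch for the stated exercise (three threads summing 1-10, 11-20, and 21-30; class and field names are mine):

```java
public class RangeSum {
    static class Summer extends Thread {
        final int from, to;
        int sum = 0;
        Summer(int from, int to) { this.from = from; this.to = to; }
        public void run() {
            for (int i = from; i <= to; i++) sum += i;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Summer a = new Summer(1, 10), b = new Summer(11, 20), c = new Summer(21, 30);
        a.start(); b.start(); c.start();
        a.join(); b.join(); c.join();   // main thread waits for all three
        System.out.println(a.sum + " " + b.sum + " " + c.sum); // 55 155 255
    }
}
```

The join() calls give the main thread a happens-before edge, so reading each sum afterwards is safe.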
Toby's PL/SQL Editor
(with some colour choice available)
PL/SQL Code auto...Toby's PL/SQL Editor
What is the PL/SQL Editor?
The PL/SQL editor is a plugin... easily develop and test PL/SQL code.
Download
You can download the plugin
corejava - Java Beginners
corejava pass by value semantics Example of pass by value semantics in Core Java. Hi friend,Java passes parameters to methods using pass... with respect to Java's references to objects and pass by reference calling semantics
Java Thread
Java Thread
In this tutorial we will discuss Java threads.
Java Thread :
A thread is a lightweight Java program. The JVM permits you to have multiple... into the running state.
Waiting : Sometimes it is necessary to put some thread
Core Java Interview Question, Interview Question
Core Java Interview Question Page 9
... myThread(); new Thread(my1).start(); new Thread(my2).start(); }
Question: How can i tell what state a thread is in ?
Answer: Prior to Java 5, isAlive() was commonly
corejava - Java Interview Questions
Core Java vs Advance Java Hi, I am new to Java programming and confuse around core and advance java
CoreJava - Java Beginners
core java an integrated approach I need helpful reference in Core Java an integrated approach
JAVA THREAD - Java Beginners
JAVA THREAD hii
i wrote a pgm to print the numbers from 0 to 9 in 2...);
}
public void fgh(int i,int p)
{
int sum;
new Thread(public void run... this may be you got some idea
package com.pack1;
class TestTwo extends
Core Java Interview Question, Interview Question
Core Java Interview Question Page 31
Threads
Question: Where does java thread... language and virtual machine
Question: What is the difference between Thread
Thread - Java Beginners
Thread Can i ask a thread method that will input two names using... = JOptionPane.showInputDialog(null, "Enter some text : ", "Roseindia.net", 1...);
String str1 = JOptionPane.showInputDialog(null, "Enter some text
Core Java Interview Question, Interview Question
Core Java Interview Question Page 6
Question: How can i tell what state a thread... with the release of Tiger (Java 5) you can now get what state a thread
Java :Thread Methods
Java :Thread Methods
This section explains the methods of the Thread class.
Thread Methods :
The Thread class provides many methods to handle different threads... and allow other
threads to run in place of the temporarily executing thread object.
Some other methods
Overview of Thread
when the main() method completes its execution. The main thread
creates some other... with this program.
Thread
A thread is a lightweight process
which exists within... a thread is referred to as a single-threaded process,
while a process
CoreJava
corejava
Core Java Interview Question, Interview Question
Core Java Interview Question Page 2
... the event-dispatching thread.
Question: How can a subclass call a method....
Question: What
comes to mind when you hear about a young generation in Java
Core Java Interview Question, Interview Question
Core Java Interview Question Page 5
... } } );
Question: What are the uses of Serialization?
Answer: In some types... enterprise java beans.
To send objects between the servers in a cluster.
Question
Java interview question
Java interview question Hello i had an online test in some company and i didn't succeed in doing it(they gave me a timeline) this was the test... name,Last name are not the same for each
line)
They gave an eclipse environment
Core Java Interview Question, Interview Question
Core Java Interview Question Page 10
Question: What is the difference between notify... thread waiting on the same object to be notified (i.e., the object that calls notify
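A minimal wait()/notify() handshake illustrating the mechanism the question refers to (class and field names are mine):

```java
public class WaitNotify {
    static final Object lock = new Object();
    static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(new Runnable() {
            public void run() {
                synchronized (lock) {
                    while (!ready) {   // loop guards against spurious wakeups
                        try { lock.wait(); } catch (InterruptedException e) { return; }
                    }
                    System.out.println("got signal");
                }
            }
        });
        waiter.start();
        Thread.sleep(100);
        synchronized (lock) {
            ready = true;
            lock.notify();   // wakes one thread waiting on lock's monitor
        }
        waiter.join();
    }
}
```

notifyAll() would instead wake every thread waiting on the same monitor, which is the safer default when several threads may be waiting for different conditions.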
Java :Thread dumpStack
Java :Thread dumpStack
In this tutorial you will learn about Java's Thread dumpStack().
Thread dumpStack :
The JVM gives the concept of a Thread Dump which..., some of them about
to die or in a waiting stage. Functionality of Thread stack
Java Thread setName() Example
Java Thread setName() Example
In this section we are going to describe the setName() method with an example of a Java thread.
Thread setName() :
Suppose you want to change the name of your thread; for that, the Java Thread class
provides
Java Thread Interrupted
Java Thread Interrupted
In this tutorial, you will learn how to interrupt a thread, with an example in
Java.
Thread Interrupt :
Java threads allow you to interrupt any thread. Interrupting a thread
means stopping the running thread
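A small sketch of interrupting a sleeping thread (class names are mine): sleep() aborts early with an InterruptedException when interrupt() is called.

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(10_000);           // would block for 10 seconds
                } catch (InterruptedException e) {
                    System.out.println("interrupted"); // sleep was cut short
                }
            }
        });
        sleeper.start();
        Thread.sleep(100);
        sleeper.interrupt();   // requests the interruption
        sleeper.join();
    }
}
```

If the target thread is running (not blocked), interrupt() only sets its interrupted flag; the thread must check the flag itself.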
Java Thread Priority
Java Threads run with some priority
There are Three types of Java Thread...() method.
Java Thread Priority Example
public class priority implements...++)
System.out.println(x + " This is thread "
+ Thread.currentThread
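The priority example above is truncated in this listing; a hedged sketch of the three priority constants and setPriority() (class name is mine; priorities are only hints to the scheduler, not guarantees):

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                System.out.println(Thread.currentThread().getPriority());
            }
        });
        System.out.println(Thread.MIN_PRIORITY + " "
                + Thread.NORM_PRIORITY + " "
                + Thread.MAX_PRIORITY);          // 1 5 10
        t.setPriority(Thread.MAX_PRIORITY);       // must be set before/while alive
        t.start();                                // the thread prints 10
    }
}
```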
Core Java Interview Question, Interview Question
Core Java Interview Question Page 29
Flow Control and exception
Question: What... at the end of the body
Question: When do you use continue and when do you use
java thread problem - Java Beginners
java thread problem Hi Friends,
My problem is related with java.util.concurrent.ThreadPoolExecutor
I have a thread pool which using LinkedBlockingQueue to send some runnable object .
Samples Code :
ThreadPoolExecutor
sleep method in thread java program
sleep method in thread java program How can we use sleep method in thread ?
public class test {
public static void main(String... some interval to the sleep method .After that interval thread will awake
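The example above is cut off in this listing; a minimal, self-contained sleep() sketch (class name is mine): the current thread pauses for the given interval and then resumes.

```java
public class SleepDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(300);                       // pause the current thread ~300 ms
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(elapsed >= 300);      // true: we slept at least that long
    }
}
```

sleep() never releases any monitors the thread holds, which distinguishes it from wait().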
What Is Thread In Java?
What Is Thread In Java?
In this section we will read about thread in Java... execution and the
description of the example.
Before defining a Thread in Java..., a process with multiple
threads is called a multi-threaded process.
Thread In Java
Get Current Thread
. Many threads run
concurrently with a program. Some threads are multithreaded.... The thread in Java
is created and controlled by the java.lang.Thread class.
Two...
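A small sketch of Thread.currentThread(), which returns the Thread object of whichever thread is executing the call (class name is mine):

```java
public class CurrentThreadDemo {
    public static void main(String[] args) {
        Thread main = Thread.currentThread();
        System.out.println(main.getName());   // "main" by default
        main.setName("my-main");              // the returned object is live
        System.out.println(main.getName());   // my-main
    }
}
```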
Get Current Thread
java Thread
java Thread what is purpose of Thread
java apptitude question and answers - Java Beginners
java apptitude question and answers i want java appititude question and answers Hi Friend,
Please visit the following link:
Thanks
Thread in java
Thread in java which method will defined in thread class
CREATE AND WRITE FILE THREAD JAVA
CREATE AND WRITE FILE THREAD JAVA Hi guys I was wondering how can I make this program in java with threads.
I need to create a file and write in it (i know how to do that) but by listening in some port all the data that is being
Java thread
Java thread What are the high-level thread states
Java thread
Java thread What are the ways in which you can instantiate a thread
Java thread
Java thread What invokes a thread's run() method
Java thread
Java thread What is the difference between process and thread
Java thread
Java thread What's the difference between a thread's start() and run() methods
Java Thread HoldsLock
Java Thread HoldsLock
In this tutorial, you will learn how to lock a thread with example in
Java.
Thread HoldsLock :
It is easy to use synchronized code for locking, but there are some limitations
too.
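A tiny sketch of Thread.holdsLock (the names are mine): it reports whether the current thread owns a given object's monitor.

```java
public class HoldsLockDemo {
    // Thread.holdsLock(obj) is true only while the current thread owns
    // the monitor of obj, i.e. inside a synchronized (obj) block.
    static boolean[] probe() {
        Object lock = new Object();
        boolean before = Thread.holdsLock(lock);
        boolean inside;
        synchronized (lock) {
            inside = Thread.holdsLock(lock);
        }
        return new boolean[] { before, inside };
    }

    public static void main(String[] args) {
        boolean[] result = probe();
        System.out.println("before synchronized: " + result[0]);  // false
        System.out.println("inside synchronized: " + result[1]);  // true
    }
}
```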
Thread priority in java
Thread priority in java
A thread is a part or entity of a process...
concurrently. In Java each and every thread has a priority; the priority determines which....
The Thread class defines some priority constants as follows:
MIN_PRIORITY = 1
NORM_PRIORITY = 5
MAX_PRIORITY = 10
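The constants can be checked with a short sketch (the class name is mine); note that a thread's priority is only a hint to the scheduler, not a guarantee:

```java
public class PriorityDemo {
    public static void main(String[] args) {
        // The three constants defined on java.lang.Thread:
        System.out.println("MIN_PRIORITY  = " + Thread.MIN_PRIORITY);   // 1
        System.out.println("NORM_PRIORITY = " + Thread.NORM_PRIORITY);  // 5
        System.out.println("MAX_PRIORITY  = " + Thread.MAX_PRIORITY);   // 10

        // Priority is only a scheduling hint.
        Thread worker = new Thread(() -> { });
        worker.setPriority(Thread.MAX_PRIORITY);
        System.out.println("worker priority = " + worker.getPriority());
    }
}
```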
java question
java question comparator and comparable
Differences..., but some other class's instances.
For more information, visit the following links:
http
java question
java question what r the Comparator vs Comparable
... instances, but some other class's instances.
For more information, visit the following links:
http
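A compact sketch of the difference asked about above (the names are mine): Comparable puts a natural order inside the class itself, while a Comparator defines an external order without touching the class.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class CompareDemo {
    // Comparable: the class itself defines its natural order (here: by age).
    static class Person implements Comparable<Person> {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
        @Override public int compareTo(Person other) {
            return Integer.compare(this.age, other.age);
        }
        @Override public String toString() { return name; }
    }

    static List<Person> samplePeople() {
        List<Person> people = new ArrayList<>();
        people.add(new Person("Carol", 35));
        people.add(new Person("Alice", 30));
        people.add(new Person("Bob", 25));
        return people;
    }

    public static void main(String[] args) {
        List<Person> people = samplePeople();

        Collections.sort(people);                  // natural order via compareTo
        System.out.println("By age:  " + people);  // [Bob, Alice, Carol]

        // Comparator: an external ordering, defined without changing Person.
        people.sort(Comparator.comparing(p -> p.name));
        System.out.println("By name: " + people);  // [Alice, Bob, Carol]
    }
}
```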
interview question - Java Interview Questions
interview question  hello, I want technical interview questions for the current year.
Hi Friend,
Please visit the following links:
destroy thread when server stopped - Java Beginners
destroy thread when server stopped  Hi,
I have written a thread... the thread will run again but the previous thread will not destroy on the server close... execution.
Please give me some information: how can I stop the thread when...?
Java question
Java question Craps is a gambling game based on repeatedly rolling 2 dice. You win or lose depending on the sum of the 2 dice and the occurrence... the craps game. Here are some example rounds:
1st roll
Rest of the game.
[4,3]
[5,6
question
question  Sir, please tell me what I should give in the title box. I just want a Java program for the question typed in this area.
Question
Question  When there is an exception in my program, how does the Java runtime system handle it?
question
question  Dear sir,
I had some typing mistakes in my previous question,
so it is my humble request to let me know the steps to start Tomcat 6 under the Tomcat directory.
question
question  Dear sir/madam,
my question is how to compare two text formats in Java. We are Java beginners, so we need the complete source code for the above-mentioned question... we have to compare each and every word.
question
question i am using this code in my project
response.sendRedirect("LoginForm1.jsp?msg=Username or Password is incorrect!");
but I want to hide this message from the URL. Please help me.
you gave me the response Hi Jamsiya
Give me some java Forum like Rose India.net
Give me some java Forum like Rose India.net Friends...
Please suggest some forums like RoseIndia.net where we can ask questions like here.
Thanks
Java thread
Java thread What is the use of serializable:
Java thread
Java thread What method must be implemented by all threads
Java thread
Java thread What is the difference between wait() and sleep method
Thread in Nutshell
Thread in Nutshell Hi,
I am confused about what a thread in a program actually is. There are many answers to this question on the web... Please go through the following link:
Thread Tutorials Related Question
...only one Java thread may execute an object's synchronized method at a time; the concept lies... Java synchronized method  Dear Friend,
I have written the below program for a synchronized method in Java, but it's not working; please help me.
program
Java Related Question
...ensures that only one Java thread executes an object's synchronized method at a time... Java synchronized method  Hi,
I have written the below program for a synchronized method in Java, but it is not working; please help me.
program
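Since both threads above ask about synchronized methods, here is a minimal, self-contained sketch (the names are mine, not from the posted programs) showing why the keyword matters: with synchronized, the two workers can never interleave inside increment(), so no update is lost.

```java
public class SyncCounter {
    private int count = 0;

    // synchronized: only one thread at a time may run this method on a
    // given SyncCounter instance, so the read-modify-write is never lost.
    synchronized void increment() { count++; }

    int value() { return count; }

    static int runTwoThreads(int perThread) throws InterruptedException {
        SyncCounter counter = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) counter.increment();
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        return counter.value();
    }

    public static void main(String[] args) throws InterruptedException {
        // Without synchronized this would often print less than 20000.
        System.out.println("Final count: " + runTwoThreads(10000));
    }
}
```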
This program is awesome!
This program is awesome!
If you just want to change the year, you can do this:
import time
t = time.localtime(1222988400.0)
t = (t[0] - 30,) + t[1:]
unix_time = time.mktime(t)
print unix_time
Is this too crude?
Why do you need this? Are you building a time machine?
I'd say that PyS60 is around 10 times faster and easier to develop with than Symbian C++. But I can't ever seem to sign and install anything. (So in that respect I might as well just write stories about...
Hey Neil,
Just out of curiosity, what did you find? I think I remember seeing a pure Python implementation and a PYD module, but didn't have any luck when I tried to find them for you.
You should probably try to stick to one or two threads and give them titles that indicate what they're about. "Editor" isn't too bad but "Plz plz plz" could mean anything. When someone runs a...
Thank you Croozeus. That's what I was doing for a long time but still haven't had any luck. :(
Does anyone have an example of creating a PYD installer? Mine is just like this:
python...
I guess you guys are mostly using "Open Signed Offline" for testing? If the key and certificate files are supplied for Ensymble, this is what it must be, right?
"Open Signed Online" sure doesn't...
Haha, I remember that. I was a big fan of all those games as a kid. Also Microsoft Word skipped from 2 to 5 in order to keep up with WordPerfect. But I still wonder why Nokia skipped 4. I guess...
Yeah, I'm building a UREL version for 3rd Ed. FP1.
I haven't frozen anything because I don't know how. But right now there's just one function for testing that multiplies 2 numbers. :)
I've been using WTFWMSFI for months now, and it's very helpful.
But in the case of my PYD's SIS, it says there's nothing wrong. Now I'm getting "Unable to install" messages. That's about as...
Version 5? What ever happened to 4? :confused:
You can also return tuples, arrays or dictionaries if you want multiple outputs:
def swap(x, y):
return y, x
(x, y) = swap(2, 1)
Thank you for asking, but no, I haven't had any success yet.
I'm not even trying to install Python or my app now. I'm just trying to install the PYD by itself in the c:\sys\bin folder. But still,...
MOD music never dies! It just loops back around to the beginning.
Thanks, that looks pretty useful. I was hoping for some way to integrate extra fields right into the calendar database, but I wasn't really expecting it.
I wonder why in that FAQ about e32dbm...
Thanks Bogdan. Some of those posts were made to help me. :)
Hi,
I need to update the device's calendar (to-do list, appointments, anniversaries, etc.) with entries from my company's system. After that, my program needs to know how to synchronize, so I...
It looks interesting but I'm not able to download Bazaar.
I can confirm it's a lot of fun on the N95. You can pretend you're Stephen Hawking and sound smart.
To create a SIS, I just use Ensymble's simplesis option and apply the UID via the command line (using a batch file). This is only intended for now to install a PYD for testing.
Thanks, I'll try...
I'd be glad to show you but no .pkg file seems to exist here.
I tried just compiling an existing project made by someone else, pys60usb, using the command line, packing into a simple SIS with Ensymble, then signing it online.
But I get the message "Unable...
It used to work for me, but at some point stopped working.
I'm currently trying to write an extension that will monitor incoming and outgoing MMS. My main problem is that when I create a DLL (PYD), and try to import it, I get a corruption error.
I'm trying to set bounds for an array that will later be printed in the console and summed. The lower bound ($a) should be less than 50 and I wrote this code to evaluate for that, but I want it to re-prompt for a number if a higher number is typed. So far, Google and experimentation have failed me.
def num_a
print "Pick a number from 1 to 50: "
$a = Integer(gets.chomp)
until $a < 50
puts "Um, try again please."
# need something here to prompt for another response
# until $a is less than 50
end
end
You could restructure the loop so that the prompt and call to
gets are both inside it:
def num_a
  # start with a number that doesn't meet the condition
  a = 50
  # check if the number meets the condition yet
  until a < 50
    # ask the user to enter a number
    print "Pick a number from 1 to 50: "
    a = Integer(gets.chomp)
    # ask to try again if the number isn't under 50
    puts "Um, try again please." unless a < 50
  end
  # return the entered value to the caller
  a
end
Also, as I've shown in the example, I would recommend avoiding the use of global variables (
$a in this case).
Detecting Code Indentation
The Firefox developer tools use CodeMirror as the source editor in many places (including the new WebIDE). CodeMirror provides auto-indentation when starting a new line and can convert tab presses into indents. Both of these features are frustrating however if the editor is set to insert the wrong indentation. That’s why many text editors will detect the indentation used in a file when it’s loaded into the editor. I set about to add this indentation detection to the devtools.
The first order of business was getting a good default value for new files. Every codebase and programmer has a different preferred indentation. Some use 2 spaces per indent, some use 4, and some even use tabs. I went through a rebellious 3-space indentation phase myself, which unfortunately was one of the most prolific open source months of my life (or did 3-space indents just increase my productivity??).
It can get kind of contentious, to be honest. So I thought I might back the decision up with some data. Using the GitHub gist API, I downloaded random code files from several different languages at different times throughout the day until I got 100 files of each language. Then I manually classified each file. Here’s the breakdown per language based on this limited sample size:
So, there you go. 2-spaces edges out others in Web languages, while Python is all for 4-space indents and Ruby is all for 2-space indents. At least for these 100 files. The 2 vs. 4 difference isn’t statistically significant for JavaScript with that small sample size, but is for the other languages.
As for detecting the indentation in a file, there were a few different algorithms out there. I looked at the two popular ones: greatest common divisor and minimum width, as well as two other experiments I came up with: comparing lines and neural network.
Indentation detection isn’t completely straightforward. A file that a human would classify as 2-spaces will have indent widths of all sizes as indents get nested: 4, 6, 8, 10, 12. An example of a not-straightforward one would be a 2-space file that mainly consists of a class. So function definitions would be indented by 2 spaces, but function bodies would be indented by 4. You might only have a tiny portion of the file indented by 2 spaces, and a majority by 4.
Another problem is outliers: too-long lines chopped off and indented by say 37 spaces to line up with something on the previous line. Multi-line comments royally throw things off.
These problems are easier to solve if you can parse the file, but I wanted a language-agnostic approach.
All these algorithms were focused on determining the indentation if you were using spaces to indent. But tab indents are possible, so in all algorithms I classified a file as “tabs” if there were more tabs than space indents. All the algorithms discounted all-whitespace lines. Additionally, in the gcd and min-width algorithms, I threw out indent widths that were less than a certain percent of the file. I won’t show these parts as they were common to several of the algorithms.
The algorithms
Greatest common divisor
The greatest common divisor (gcd) is a math concept. The gcd of [4, 6, 8, 10] is 2. A file with indent widths of [4, 6, 8, 10] would also clearly be a 2-space indented file. Things go haywire when you get any outlier indents of say, 37 though. The gcd of [4, 6, 8, 37] is 1. Multi-line comments really throw this one off, so for practicality you have to throw out odd numbers. Here’s the JavaScript code for this:
We’ll see how this performs later.
Minimum width
The other common algorithm I saw was a simple one: just take the smallest indentation width you see in the file. This can also trip up a bit on multi-line comments. 1 is a common minimum indent in that case, so we have to chuck that:
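Again, the original snippet is missing from the extracted text; this is a sketch of the described logic with names of my own choosing.

```javascript
// Sketch of the minimum-width approach: the answer is simply the
// smallest indentation seen, ignoring blank lines and 1-space indents
// (which usually come from block comment bodies).
function detectByMinimum(lines) {
  let minWidth = Infinity;
  for (const line of lines) {
    const match = line.match(/^( +)\S/);
    if (match) {
      const width = match[1].length;
      if (width > 1 && width < minWidth) {  // chuck width 1
        minWidth = width;
      }
    }
  }
  return minWidth === Infinity ? null : minWidth;
}
```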
Comparing lines
I thought about what I did when I manually classified a file. I noticed that when I opened a file, I would focus on a random line and scan until I hit the next indent, then I would make note of that, and do this for a few other random lines in the file. So I coded up something that mimics this procedure.
This method compares the indentation of each line with the previous line, and adds the difference to a tally. So if a line is indented by 10 spaces, and the previous by 8, one more vote would be added for 2-space indentation. The benefit to this is that a block comment could be any number of lines, but its indentation would only count twice. This means we don’t have to throw out odd-numbered widths (3-spacers rejoice!):
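The snippet is lost here as well; this sketch reconstructs the described tallying logic (function name is mine).

```javascript
// Sketch of the compare-lines approach: tally the absolute difference
// in indentation between each line and the previous one, then pick
// the most popular difference. Odd widths can stay.
function detectByComparing(lines) {
  const votes = {};
  let previous = 0;
  for (const line of lines) {
    if (line.trim() === "") continue;        // skip all-whitespace lines
    const width = line.match(/^ */)[0].length;
    const delta = Math.abs(width - previous);
    if (delta > 0) {
      votes[delta] = (votes[delta] || 0) + 1;
    }
    previous = width;
  }
  let best = null;
  for (const delta in votes) {
    if (best === null || votes[delta] > votes[best]) best = delta;
  }
  return best === null ? null : Number(best);
}
```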
Neural network
I had to see how machine learning would stack up. I wish I could feed the raw data in — the indent widths of the first n lines. But I couldn’t figure out how to turn inputs like [4, 8, 10] into the continuous (non-discrete) signals that the network required (update: immediately upon writing this I thought of a way, but it didn’t perform as well). So I fed it the popularity of each width instead. Unlike the other algorithms, I didn’t throw out anything. Outlier widths and odd widths went in with the rest of them.
I trained the network on about 100 hand-classified files, separate from the test files. Here’s the classification function:
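The classification function was also lost in extraction. This sketch only reconstructs the described interface, not the trained network itself: the popularity of each indent width goes in, and a network (represented here by a stand-in net object with a run method) scores each class.

```javascript
// Reconstruction sketch: `net` stands in for the trained network and
// is NOT part of the original article. Input: popularity of each indent
// width; output: the best-scoring class label.
function classify(net, lines) {
  const popularity = new Array(9).fill(0);   // widths 0..8, arbitrary cap
  for (const line of lines) {
    const width = line.match(/^ */)[0].length;
    if (width < popularity.length) popularity[width] += 1;
  }
  const scores = net.run(popularity);        // e.g. {two: 0.9, four: 0.1}
  let best = null;
  for (const label in scores) {
    if (best === null || scores[label] > scores[best]) best = label;
  }
  return best;
}
```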
The results
I ran all the algorithms on the set of 500 total code files in JavaScript, HTML, CSS, Python, and Ruby. Here’s what percent of the files each algorithm detected correctly:
Unpatched (not removing outliers or odd widths):
% files correctly detected, out of 500 (unpatched):
gcd: 78.0%
minimum: 87.6%
compare: 95.4%
neuralnet: 94.8%
Patched (removing outliers and odd widths, where applicable):
% files correctly detected, out of 500 (patched):
gcd: 94.4%
minimum: 92.8%
compare: 96.8%
neuralnet: 94.8%
Out of this, the only statistically significant result is that the compare-lines algorithm is better than the minimum-width algorithm. That conclusion comes from a two-tailed Z test with p < 0.05. More differences could solidify if more gists were tested, however.
My summary is that once you patch the weaknesses of a particular one of these algorithms, it performs about as well as the other. At least on random files from the wild, where edge cases are few.
The neural network performed well, and could have performed better if I’d given it more samples. 100 training samples is not a lot. What’s nice about the neural network approach is you don’t have to figure out the outliers and special cases yourself. Look at this code I came up with for finding outliers, after several guess-and-check rounds:
function isOutlier(total, count) {
return (total > 40 && count < (total / 20))
|| (total > 10 && count < 3)
|| (total > 4 && count < 2);
}
Even after playing around with a bunch of different values, I haven’t found the best way to detect an outlier. The network, however, will probably figure out the best way for the data it’s trained on. It’ll learn that a single indentation of width 37 shouldn’t have much sway in the final decision. In that way, it doesn’t have the weaknesses the other algorithms have (me).
You can "see" (hopefully not even notice) the compare-lines algorithm in action in the latest Firefox release.
Hi,
I have created a feature collection layer using a static feature collection. I am working in a completely disconnected environment. How can I export my feature collection layer to shapefile/GeoJSON/CSV format? Please provide a solution.
The runtime doesn't provide any feature for that.
Is there any alternative for this ?
Hi,
You have several options for this type of functionality. If you're building an app for Windows desktops using WPF, then you can use the ArcGIS Runtime Local Server component to create Shapefiles (defining the geometry type, schema, etc) and then add features to the Shapefile. Exporting features to the Shapefile could be done within the same Python script that runs within the Local Server, or alternatively once the script has created the Shapefile then you can use the ArcGIS Runtime API directly to add features to the Shapefile. Exporting features to GeoJSON is also possible using ArcGIS Runtime Local Server and the 'Features To JSON' tool. Exporting to CSV is relatively straightforward, although note you need to take care over decimal separators (',' versus '.' depending on the culture). For more information on using the Local Server component see:
Local Server—ArcGIS Runtime SDK for .NET (WPF) | ArcGIS for Developers
Local Server geoprocessing tools support—ArcGIS Runtime SDK for .NET (WPF) | ArcGIS for Developers
License your app—ArcGIS Runtime SDK for .NET (WPF) | ArcGIS for Developers
Cheers
Mike
Hi Michael,
I am using a WPF app in a completely disconnected environment. Can you help me with code samples, so I can understand how to create a shapefile, and how and when I should run the Python script?
Thanks in advance
Hi,
Here are some links to documentation topics that will help you:
Python in ArcGIS Pro—ArcPy Get Started | ArcGIS Desktop
A quick tour of creating tools with Python—Geoprocessing and Python | ArcGIS Desktop
Create Feature Class—Data Management toolbox | ArcGIS Desktop
Package Result—Data Management toolbox | ArcGIS Desktop
Local Server—ArcGIS Runtime SDK for .NET (WPF) | ArcGIS for Developers
Cheers
Mike
Hi Michael,
I tried running the scripts, but it gave me an error.
import arcpy
import os
arcpy.env.workspace = "c:/data"
arcpy.FeaturesToJSON_conversion(os.path.join("outgdb.gdb", "myfeatures"), "myjsonfeatures.json")
In the following code, I am unable to understand what "myfeatures" is. I assumed it to be the name of the feature dataset, but my script did not work. Can you help me with it?
I also have one other small doubt: once a geopackage is created to use this tool, will I be able to convert the features of a FeatureCollectionLayer (a layer created in the runtime)?
If yes, then how?
Waiting for a reply
#include <rpcsvc/ypclnt.h>
int yp_get_default_domain(
char **outdomain);
int yp_bind(
char *indomain);
void yp_unbind(
char *indomain);
int yp_match(
char *indomain,
char *inmap,
char *inkey,
int inkeylen,
char **outval,
int *outvallen);
int yp_order(
char *indomain,
char *inmap,
int *outorder);
int yp_master(
char *indomain,
char *inmap,
char **outname);
char *yperr_string(
int incode);
int ypprot_err(
unsigned int incode);
This package of functions provides an interface to the Network Information Service (NIS) data base lookup service. The package can be loaded from the standard library, /lib/libc.a. Refer to ypfiles(4) and ypserv(8) for an overview of NIS, including the definitions of "map" and "domain", and for a description of the servers, data bases, and commands that constitute the NIS application.
All input parameter names begin with "in". Output parameters begin with "out". Output parameters of type char ** should be addresses of uninitialized character pointers. The NIS client package allocates memory using malloc(3). This memory can be freed if the user code has no continuing need for it. The yp_get_default_domain function, however, returns a pointer to thread-specific data. Therefore, the memory cannot be freed by user code. (The contents of that memory are updated with the current default domain on each call.) Pointers cannot be null, but can point to null strings, with the count parameter indicating this. Counted strings need not be null-terminated.
All functions of type int return 0 if they succeed, or a failure code (YPERR_xxxx) if they do not succeed. Failure codes are described in the ERRORS section of this reference page.
The NIS lookup calls require a map name and a domain name. It is assumed that the client thread knows the name of the map of interest. Client threads fetch the node's default domain by calling yp_get_default_domain, and use the returned outdomain as the indomain parameter to successive NIS calls.
To use NIS services, the client thread must be bound to a NIS server that serves the appropriate domain. The binding is accomplished with yp_bind. Binding need not be done explicitly by user code; it is done automatically whenever a NIS lookup function is called. The yp_bind function can be called directly for processes that make use of a backup strategy in cases when NIS services are not available.
Each binding allocates one client process socket descriptor; each bound domain in each thread requires one socket descriptor. Multiple requests to the same domain from the same thread use that same descriptor. The yp_unbind function is available at the client interface for threads that explicitly manage their socket descriptors while accessing multiple domains. The call to yp_unbind makes the domain unbound, and frees all per-thread and per-node resources used to bind it.
If an RPC failure results upon use of a binding, that domain will be unbound automatically for the thread that encountered the error. At that point, the ypclnt layer will retry forever or until the operation succeeds. This action occurs provided that ypbind is running, and either the client thread cannot bind a server for the proper domain, or RPC requests to the server fail.
The ypbind -S flag allows the system administrator to lock ypbind to a particular domain and set of servers. Up to four servers can be specified. An example of the -S flag follows: /usr/sbin/ypbind -S domain,server1[,server2,server3,server4]
The ypclnt layer will return control to the user code, either with an error code, or with a success code and any results under certain circumstances. For example, control will be returned to the user code when an error is not RPC-related and also when the ypbind function is not running. An additional situation that will cause the return of control is when a bound ypserv process returns any answer (success or failure).
The yp_match function returns the value associated with a passed key. This key must be exact; no pattern matching is available.
The yp_first function returns the first key-value pair from the named map in the named domain.
The yp_next function returns the next key-value pair from the named map. The concept of next is particular to the structure of the NIS map being processed; there is no relation in retrieval order to the lexical order within any original (non-NIS) data base, and entries are simply returned in whatever order the enumeration yields them. Enumerating all entries in a map is accomplished with the yp_all function.
The yp_all function provides a way to transfer an entire map from server to client in a single request using TCP (rather than UDP as with other functions in this package). The entire transaction takes place as a single RPC request and response. The yp_all function can be used like any other NIS procedure, to identify the map in the normal manner, and to supply the name of a function that will be called to process each key-value pair within the map. Return from the call to yp_all occurs only when the transaction is completed (successfully or unsuccessfully), or when the foreach function decides that it does not want to see any more key-value pairs. The foreach function is declared as follows:
int foreach(
int instatus,
char *inkey,
int inkeylen,
char *inval,
int invallen,
char *indata);
The instatus parameter will hold one of the return status values defined in the rpcsvc/yp_prot.h header file - either YP_TRUE or an error code. (See the discussion of ypprot_err for a function that converts a NIS protocol error code to a ypclnt layer error code.)
The key and value parameters are somewhat different from those defined in the syntax.
The foreach function returns a Boolean value. It should return zero to indicate that it wants to be called again for further received key-value pairs, or nonzero to stop the flow of key-value pairs. If foreach returns a nonzero value, it is not called again; the functional value of yp_all is then 0.
The yp_order function returns the order number for a map.
The yp_master function returns the machine name of the master NIS server for a map.
The yperr_string function returns a pointer to an error message string that is null-terminated but contains no period or new line.
The ypprot_err function takes a NIS protocol error code as input and returns a ypclnt layer error code, which may be used in turn as an input to yperr_string.
All integer functions return 0 if the requested operation is successful, or a failure code (YPERR_xxxx) if it does not succeed.
rpcsvc/ypclnt.h
Header file containing ypclnt definitions.
rpcsvc/yp_prot.h
Header file defining return status values.
ypfiles(4), ypserv(8)
Flask+nginx+uwsgi+Ubuntu tutorial
My hopes are lost and so is my temper. I have followed the tutorial found here -
I have reinstalled and started all over 10 times and still same PROBLEM. Is there another tutorial available because this only works to 80%.
All works fine up to a certain point but I always get a 502 gateway problem.
I created a new user belonging to the sudo group.
SSHed to the server with this new user - all good.
Also set up the domain, and if I ping domainname.com the correct droplet IP replies - all good.
python myproject.py (to start flask) works perfect
uwsgi --socket 0.0.0.0:5000 --protocol=http -w wsgi:app
Works perfect and site is accessible with IP
Content of wsgi.py-
from myproject import app
if __name__ == "__main__":
    app.run()
Content of myproject.ini looks like -
[uwsgi]
module = wsgi:app
master = true
processes = 5
socket = myproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true
Content of /etc/systemd/system/myproject.service:
[Unit]
Description=uWSGI instance to serve myproject
After=network.target
[Service]
User = blizzard
Group = www-data
WorkingDirectory = /home/blizzard/myproject
Environment = “PATH=/home/blizzard/myproject/myprojectenv/bin” #IS THIS LINE WRONG??
ExecStart=/home/blizzard/myprojectenv/bin/uwsgi --ini myproject.ini
[Install]
WantedBy = multi-user.target
Content of /etc/nginx/available-sites/myproject
server {
listen 80;
server_name billigaflygtilllondon.com;
location / { include uwsgi_params; uwsgi_pass unix:///home/blizzard/myproject/myproject.sock; }
}
If this tutorial is wrong, why don't staff delete it, or at least the author update it, since I am not the ONLY ONE having problems? If there is a helpful soul who could guide me I would be more than happy.
Could it be a permission error, since www-data has no permission inside the home folder?
Could this be the problem?
Or is it a domain issue?
Anyone ?
Think I have found the problem, or at least one of them.
Did
sudo systemctl status myproject.service
And it returned FAILED....and now to the $10000000 question why?
Thank you very much, it helped me a lot
Greetings from Peru
Red Hat Bugzilla – Bug 246723
backport optimistic DAD patch from upstream
Last modified: 2008-05-21 10:45:28 EDT
Description of problem:
ODAD backport as part of ipv6.
As it turns out, this patch breaks xen... See bug 423791 for details.
in 2.6.18-60.el5
You can download this test kernel from
The patch contained in this bugzilla breaks the kernel->userland rtnetlink API:
Created attachment 291177 [details]
Quick and dirty fix. Not tested.
Alternatively, providing /usr/include/linux/if_addr.h in the kernel-headers RPM
and possibly doing this in rtnetlink.h may also retain compatibility:
#ifndef __KERNEL__
#include <linux/if_addr.h>
#endif
This is being fixed in bz 428143. Re-closing
confirmed fix is in the -88.
Table of Contents
- What this is about
- The reference code base
- 1. Fetch and print values within Session.run
- 2. Use the tf.Print operation
- 3. Use Tensorboard visualization for monitoring
- a) clean the graph with proper names and name scopes
- b) Add tf.summaries
- c) Add a tf.summary.FileWriter to create log files
- d) Start the tensorboard server from your terminal
- 4. Use the Tensorboard debugger
- 5. Use the TensorFlow debugger
- Conclusion
What this is about
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. — BRIAN W. KERNIGHAN
Debugging in general can be a tedious and challenging task. Nevertheless, you must be comfortable going through the written code and identifying problems. Normally there are many guides, and the process of debugging is often well documented for many languages and frameworks.
When it comes to TensorFlow, however, some new challenges arise because of the way it works.
As the official documentation states:
A TensorFlow Core program consists of two discrete sections:
- Building the computational graph (a tf.Graph).
- Running the computational graph (using a tf.Session).
The actual computation is done with
session.run(), which means that we need to find a way to inspect values inside this function.
The reference code base
As a reference, I will provide my Github repository with the corresponding code here.
We will use a basic neural network to classify handwritten digits from the MNIST dataset, using:
- tf.nn.softmax_cross_entropy_with_logits_v2 as the TF classification operation for defining the loss
- tf.train.GradientDescentOptimizer for minimizing the loss
Running this small neural network shows that it can already achieve an accuracy of ~92%:
The process of debugging
Now for debugging, there are basically 5 (pragmatic) ways to achieve this.
As a side note: It is often useful to assert shapes to ensure that everything works together as intended.
1. Fetch and print values within Session.run
This is probably the fastest and easiest way to get the information you need.
+ easy and fast
+ any evaluation can be fetched from everywhere
- it's necessary to hold the reference to the tensor, which is bad in complex models
In essence, you run the session in a print statement and feed it the dictionary, like so:
print( f"The bias parameter is: {sess.run(b, feed_dict={x: mnist.test.images, y_: mnist.test.labels})}" )
If the code gets more complex, the partial_run execution of a session could be used. But since this is an experimental feature I will not implement this for demonstration.
Additionally, don’t forget the
.eval() method for evaluating tensors specifically.
See the full code here on Github.
2. Use the tf.Print operation
The tf.Print method comes in handy during run-time evaluation when we don’t want to explicitly fetch the code with session.run(). It is an identity op that prints data when evaluating.
+ it allows us to see the development of values during evaluation
- it has limited configuration and therefore can easily clog the terminal
Yufeng G created a fantastic video and article about how to use the tf.Print statement. And as he points out, it is vital to structure the print node the way that it is used further. As he says:
It is vitally important that you actually use this returned node, because if you don’t, it will be dangling.
In my code, I added a print statement that fetches the values within the session to illustrate how both methods perform differently in execution.
With runtime evaluation comes the possibility of runtime assertion with
tf.Assert .
3. Use Tensorboard visualization for monitoring
Before diving into this debugging method, be aware that there is the Tensorboard and the Tensorboard debugger!
The TF website offers a great tutorial for implementing and using the board.
A key for the usage is the serializing of the data. TensorFlow provides the summary operations, which allow you to export condensed information about the model. They are like anchors telling the visualization board what to plot.
a) Clean the graph with proper names and name scopes
First we need to organize all the variables and operations with the
scope methods that TF provides.
with tf.name_scope("variables_scope"):
x = tf.placeholder(tf.float32, shape=[None, 784], name="x_placeholder")
y_ = tf.placeholder(tf.float32, shape=[None, 10], name="y_placeholder")
b) Add tf.summaries
For example:
with tf.name_scope("weights_scope"):
W = tf.Variable(tf.zeros([784, 10]), name="weights_variable")
tf.summary.histogram("weight_histogram", W)
c) Add a tf.summary.FileWriter to create log files
Tip: Make sure to create sub folders for each log to avoid accumulation of graphs.
d) Start the tensorboard server from your terminal
For example:
tensorboard --logdir=./tfb_logs/ --port=8090 --host=127.0.0.1
Navigating to the tensorboard server (in this case) shows the following:
Now the full power and use of tensorboard becomes clear. It allows you very easily to spot errors in your machine learning model. My code example is a very simple one. Imagine a model with multiple layers and more variables and operations!
See full code here on Github.
4. Use the Tensorboard debugger
As the Tensorboard Github repository states:
This dashboard is in its alpha release. Some features are not yet fully functional.
However, it can still be used and provides cool debugging features. Please check out the Github repository to get an adequate overview. Also, see their video to get a deeper understanding. They have done a great job.
To accomplish this, there are 3 things to add to our previous example:
- Import
from tensorflow.python import debug as tf_debug
- Add your session with
tf_debug.TensorBoardDebugWrapsperSession
- Add to your tensorboard server the
debugger_port
Now you have the option to debug the whole visualized model like with any other debugger, but with a beautiful map. You are able to select certain nodes and inspect them, control execution with the “step” and “continue” buttons, and visualize tensors and their values.
There is much more to talk about regarding this unique feature of Tensorflow, but I will probably dedicate another article to that.
See my full code here on Github.
5. Use the TensorFlow debugger
The last method, but also very powerful, is the CLI TensorFlow debugger.
This debugger focuses on the command-line interface (CLI) of tfdbg, as opposed to the graphical user interface (GUI) of tfdbg, that is the TensorBoard Debugger Plugin.
You simply wrap the session with
tf_debug.LocalCLIDebugWrapperSession(sess) and then you start the debugging with executing the file (maybe it's necessary to add the
--debug flag).
It basically allows you to run and step through the execution of your model, while providing evaluation metrics.
I think the official documention could be improved, but they have also created a video which introduces the feature in a good way.
So the key features here are the commands
invoke_stepper and then pressing
s to step through each operation. It is the basic debugger functionality of a debugger but in the CLI. It looks like this:
See the full code here on Github.
Conclusion
As shown, there are many ways to debug a TensorFlow application. Each method has its own strengths and weaknesses. I didn’t mention the Python debugger, because it is not TensorFlow specific, but keep in mind that the simple Python debugger already provides some good insights!
There is a great presentation by Wookayin who talks about these concepts as well but also goes over some general debugging advise. That advice is:
- name tensors properly
- check and sanitize input
- logging
- assertions
- proper use of exceptions
- failing fast -> immediately abort if something is wrong
- don’t repeat yourself
- organize your modules and code
I am really excited for all the features that TensorFlow has to offer for people who are building machine learning systems. They are doing a great job! Looking forward to further developments! :)
Thanks for reading my article! Feel free to leave any feedback! | https://medium.com/free-code-camp/debugging-tensorflow-a-starter-e6668ce72617?source=post_recirc---------2------------------ | CC-MAIN-2020-16 | refinedweb | 1,334 | 57.67 |
A CRTP base class for scoring schemes. More...
#include <seqan3/alignment/scoring/scoring_scheme_base.hpp>
A CRTP base class for scoring schemes.
This type is never used directly, instead use seqan3::nucleotide_scoring_scheme or seqan3::aminoacid_scoring_scheme.
This class is only implementation detail and not required for most users. Types that model seqan3::scoring_scheme can (but don't need to!) inherit from it.
Constructor for the simple scheme (delegates to set_simple_scheme()).
Constructor for a custom scheme (delegates to set_custom_matrix()).
Score two letters (either two nucleotids or two amino acids).
Score two letters (either two nucleotids or two amino acids).
Set a custom scheme by passing a full matrix with arbitrary content.
Set the simple scheme (everything is either match or mismatch). | https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1scoring__scheme__base.html | CC-MAIN-2021-39 | refinedweb | 118 | 62.14 |
Give an algorithm (or straight Python code) that yields all partitions of a collection of N items into K bins such that each bin has at least one item. I need this in both the case where order matters and where order does not matter.
Example where order matters
>>> list(partition_n_in_k_bins_ordered((1,2,3,4), 2))
[([1], [2,3,4]), ([1,2], [3,4]), ([1,2,3], [4])]
>>> list(partition_n_in_k_bins_ordered((1,2,3,4), 3))
[([1], [2], [3,4]), ([1], [2,3], [4]), ([1,2], [3], [4])]
>>> list(partition_n_in_k_bins_ordered((1,2,3,4), 4))
[([1], [2], [3], [4])]
>>> list(partition_n_in_k_bins_unordered({1,2,3,4}, 2))
[{{1}, {2,3,4}}, {{2}, {1,3,4}}, {{3}, {1,2,4}}, {{4}, {1,2,3}},
{{1,2}, {3,4}}, {{1,3}, {2,4}}, {{1,4}, {2,3}}]
itertools
Enrico's algorithm, Knuth's, and only my glue are needed to paste together something that returns the list of lists or set of sets (returned as lists of lists in case elements are not hashable).
def kbin(l, k, ordered=True): """ Return sequence ``l`` partitioned into ``k`` bins. Examples ======== The default is to give the items in the same order, but grouped into k partitions: >>> for p in kbin(range(5), 2): ... print p ... [[0], [1, 2, 3, 4]] [[0, 1], [2, 3, 4]] [[0, 1, 2], [3, 4]] [[0, 1, 2, 3], [4]] Setting ``ordered`` to None means that the order of the elements in the bins is irrelevant and the order of the bins is irrelevant. Though they are returned in a canonical order as lists of lists, all lists can be thought of as sets. >>> for p in kbin(range(3), 2, ordered=None): ... print p ... [[0, 1], [2]] [[0], [1, 2]] [[0, 2], [1]] """ from sympy.utilities.iterables import ( permutations, multiset_partitions,: for p in partition(l, k): yield p else: for p in multiset_partitions(l, k): yield p | https://codedump.io/share/KvvW4ibDWn0f/1/partition-n-items-into-k-bins-in-python-lazily | CC-MAIN-2017-30 | refinedweb | 317 | 61.9 |
.7, also released on November 20.
It includes a fair number of fixes, including one with a CVE number
attached.
Kernel development news
Quotes of the week
+/*
+ * "Define 'is'", Bill Clinton
+ * "Define 'if'", Steven Rostedt
+ */
+#define if(cond) if (__builtin_constant_p((cond)) ? !!(cond) : \
+ ({ \
+ int ______r; \
+ static struct ftrace_branch_data \
+ __attribute__((__aligned__(4))) \
+ __attribute__((section("_ftrace_branch"))) \
+ ______f = { \
+ .func = __func__, \
+ .file = __FILE__, \
+ .line = __LINE__, \
+ }; \
+ ______r = !!(cond); \
+ if (______r) \
+ ______f.hit++; \
+ else \
+ ______f.miss++; \
+ ______r; \
+ }))
Ksplice and kreplace
The core idea behind Ksplice remains the same: when given a source tree and
a patch, it builds the kernel both with and without the patch and looks at
the differences. To that end, the compilation procedure is modified to
put every function and data structure into its own executable section.
That makes life a little harder for the compiler and the linker, but
developers are notably insensitive to the difficulties faced by those
tools. With things split up this way, it is relatively easy to identify a
minimal set of changes in the binary kernel image which result from the
patch. Ksplice can then, with some care, patch the new code into the
running kernel. Once this work is done, the old kernel is running the new
code without ever having been rebooted.
This technique works well for code changes, but different challenges come
with changes to data structures. Back in April, Ksplice could not handle
that kind of change. Even so, the project's developers claimed to be able
to apply the bulk of the kernel's security updates using ksplice. Since
then, though, the developers have applied some energy to this problem.
With the addition of a couple of new techniques - which require extra
effort on the part of the person preparing the patch for Ksplice - it is
now possible to apply 100% of the 65 non-DOS security patches released for
the kernel since 2005.
In some cases, a kernel patch will simply require that a data structure be
initialized differently. The way to handle this change in an update
through Ksplice is to modify the relevant data structures on the fly. To
effect such changes, a patch can be modified to include code like the following:
#include <ksplice-patch.h>
ksplice_apply(void (*func)());
While Ksplice is applying the changes - and while the rest of the system is
still stopped - the given func will be called. It can then go
rooting through the kernel's data structures, changing things as needed.
For example, CVE-2008-0007
came about as a result of a failure by some drivers to set the
VM_DONTEXPAND flag on certain vm_area_struct structures.
Ksplice is able to apply the fix to the drivers without trouble, but that
is not helpful for any incorrectly-initialized VMAs present on the running
system. So the
modifications to the patch add some functions which set
VM_DONTEXPAND on existing VMAs, then use ksplice_apply()
to cause those functions to be executed. The result is a fully-fixed
system.
Changes to data structure definitions are harder. If a structure field is
removed, the Ksplice version of the patch can just leave it in place. But
the addition of a new field requires more complicated measures. Simply
replacing the allocated structures on the fly seems impractical; finding
and fixing all pointers to those structures would be difficult at best. So
something else is needed.
For Ksplice, that something else is a "shadow" mechanism which allocates a
separate structure to hold the new fields. Using shadow structures is a
fair amount of additional work; the original patch must be changed in a
number of places. Code which allocates the affected structure must be
modified to allocate the shadow as well, and code which frees the structure
must be changed in similar ways. Any reference to the new field(s) must,
instead, look up the shadow structure and use that version of the field.
All told, it looks like a tiresome procedure which has a significant chance
of introducing new bugs. There is also the potential for performance
issues caused by the linear linked list search performed to find the shadow
structures. The good news is that it is only rarely necessary to modify a
patch in this way.
The Ksplice developers do not appear to be done yet; from the latest patch
posting:
This is an ambitious goal; a single stable series can add up to hundreds of
changes, some of which can be reasonably large. It will be interesting to
see how many users are really interested in this particular sort of update;
sites running critical systems tend to have older "enterprise" kernels
which are no longer receiving stable tree updates. But a Ksplice which is
flexible enough to handle that kind of update stream should also be useful
for distributors wanting to provide no-reboot patches to their customers.
Meanwhile, Nikanth Karthikesan has posted a facility called kreplace. On the surface, it
looks similar to Ksplice, but the goal is a little different: its purpose
is to allow a developer to quickly try out a change on a running kernel.
Kreplace works by simply patching out and replacing one or more functions
in the kernel. Kreplace may have its value, but the initial reaction has
not been greatly enthusiastic. Among other things, it has been pointed out that Ksplice also has a facility
to allow for quick experimentation with changes - though it will be quick
only if the developer is already set up to use Ksplice with the running
kernel.
A final concern with either of these solutions is that they are, for all
practical purposes, employing rootkit techniques. A mechanism which can be
used by distributors to patch running systems can also be (mis)used by others.
Vendors of binary-only modules could, for example, use Ksplice or kreplace
to get around GPL-only exports and other inconvenient features of
contemporary kernels. Crackers could also use it, of course, but they
already have their own rootkit tools and gain no real benefit from an
officially-supported runtime patching mechanism. Whether this aspect of
Ksplice is of concern to the development community may be seen in the
coming months as this code gets closer to mainline inclusion.
Character devices in user space
There is a lot of functionality—things like filesystems and device
drivers—that are normally considered to be kernel tasks, but have,
over time, been allowed to move into user space. The UIO user space driver framework
came along in 2.6.23, while filesystems in user space (FUSE) have been
around since 2.6.14. Tejun Heo would like to see this idea broadened even
further with the character
devices in user space (CUSE) patches.
At first blush, the uses for a character device implemented in user space
are not obvious. Looking a bit deeper, though, one finds numerous
programs—both open and closed source—that rely on legacy
character drivers. Those drivers are currently in the kernel, but need not
be if there were a way to implement them in user space. In addition,
older, deprecated interfaces, such as Open Sound System (OSS) can be better
supported without constantly fiddling with the in-kernel emulation.
Providing better OSS support is one of the prime motivators for CUSE as
Heo announced in a linux-kernel posting
introducing the OSS
proxy. The proxy uses CUSE to implement the /dev/dsp,
/dev/adsp, and /dev/mixer devices that programs using OSS
expect. Adrian Bunk didn't necessarily see
this as a good thing:
The
application you list on your webpage is UML host sound support, and I'm
wondering why you don't fix that instead of working on a better OSS
emulation?
But Heo sees the current state of OSS emulation as a rather complicated
mess that, for better or worse, needs cleaning
up:
But there are other uses for CUSE too. Greg Kroah-Hartman notes that legacy
software for talking to Palm Pilots, much of which is binary-only, expects
to talk to a /dev/pilot serial port. The kernel carries around a
driver, but "a libusb userspace program can handle all of the data to
the USB device instead". So CUSE could be used to eventually remove
another crufty driver from the kernel, while still maintaining
compatibility with old user space code.
CUSE is implemented on top of FUSE as there is a fair amount of overlap
between them. Character devices and filesystems implement many of the same
file operations—things like open(), close(),
read(), and write()—which makes them a good match.
Heo has a separate patchset for
FUSE that implements additional operations for filesystems some of
which will be used by CUSE.
The additional FUSE operations include an implementation of
ioctl() that is necessarily rather ugly. Because an
ioctl implementation can access memory in unpredictable
ways—and those data structures can be arbitrarily deep—there
needs to be a mechanism for user-space CUSE devices to read and write that
memory. The CUSE server does not have direct access to the caller's
memory, so a multi-step
ioctl() with retries must be implemented. This particular bit of
ugliness is only allowed for in-kernel use, so that CUSE (or other
things like it) can allow "unrestricted" ioctl() implementations.
All FUSE filesystems are still required to have "restricted"
ioctls where the kernel can determine the direction and amount of
data that is transferred.
poll() support has also been added to FUSE, which, in turn,
requires a separate patch that allows poll() callbacks to sleep
(described in this article).
Once the FUSE changes are in place, the actual implementation of CUSE is
relatively small, weighing in around 1000 lines plus some housekeeping to
rename and export FUSE symbols. At its core, it collects up a FUSE-mounted
filesystem that connects to the user-space implemented device along with
the kernel-exported character device, binding the two together. FUSE
handles the interaction with the user-space code, in the same way that it
does for a filesystem.
CUSE creates a device for commands, /dev/cuse, which is opened by
a program that wants to implement a particular character device. CUSE
queries the opener to determine which device it is implementing and then
creates the device node. For most operations, CUSE just hands off to FUSE,
but for open() it, instead, opens a file from the FUSE mount,
storing the file handle for use by later operations.
In many ways, CUSE is a kind of impedance matching layer that creates
something that acts like a character device, but has no hardware directly
behind it. This allows CUSE to ignore things like hardware interrupts;
those would need to be handled by something else, typically a downstream
driver—the soundcard driver in the OSS proxy case. This is one of
the big differences between UIO and CUSE. UIO is much more like a regular
kernel device driver that requires kernel code to handle interrupts. CUSE
drivers, on the other hand, can be created without ever touching kernel
space.
The only objection so far seems to be Bunk's complaint about supporting
OSS when it has been deprecated for so long. As Heo points out, though,
there are still many applications that only support OSS. In addition, all
of the code that has been submitted is "way smaller than the
in-kernel ALSA OSS emulation which is somewhat painful to use these
days", Heo says. Since there are
other potential users of CUSE, not just the OSS proxy, it would seem that,
absent any major objections, CUSE could make it into 2.6.29.
Driver API: sleeping poll(), exclusive I/O memory, and DMA API debugging
Most of the functions in the file_operations structure are
concerned with I/O. So it is not surprising that these functions are
allowed to sleep. Except that, as it turns out, one of them -
poll() - cannot. There is nothing inherent in the poll()
or select() system calls which would require the driver
poll() callback to be nonblocking; this requirement is, instead, a
result of the implementation. In essence, the core poll()
implementation looks like this:
for (;;)
set_current_state(TASK_INTERRUPTIBLE)
for each fd to poll
ask driver if I/O can happen
add current process to driver wait queue
if one or more fds are ready
break
schedule_timeout_range(...)
The problem is relatively straightforward: if a specific driver chooses to
sleep in its poll() callback, the current task state will get set
back to TASK_RUNNING and schedule_timeout_range() will return
immediately. So a sleeping driver turns the main loop into a busy-wait.
The solution, as developed by
Tejun Heo, is also straightforward. His patch causes
sys_poll() to define a custom wakeup function which, in turn, sets
a new triggered flag when called. That eliminates the need to put
the process into TASK_INTERRUPTIBLE for the duration of the main
loop; that can be done, instead, right before actually sleeping.
Most driver writers can remain unaware of this change, which looks highly
likely to be merged for 2.6.29. But, for those who need it, there will be
one more degree of flexibility in the implementation of poll()
callbacks.
For a while, developers involved in the hunt for the e1000e corruption
bug thought that the X server might be the problem. The real bug
turned out to be elsewhere, but the suspicion cast upon X led to the
development of a new API designed to make it harder for user-space programs
to interfere with the operation of an in-kernel driver.
In particular, it seemed sensible to prevent user space from manipulating
I/O memory which has been allocated by device drivers. This can be
achieved by not allowing an mmap() call on /dev/mem to
map regions already given to drivers. If the STRICT_DEVMEM
configuration option is set, the kernel will protect its own memory from
mapping by user space; protecting I/O memory is really just a matter of
extending that mechanism.
Arjan van de Ven has implemented that feature in his MMIO exclusivity patch. He
chose, however, not to make this protection the default. Instead, drivers
which want exclusive access to an I/O memory region should call one of
these new functions:
int pci_request_region_exclusive(struct pci_dev *pdev, int bar,
const char *res_name);
int pci_request_regions_exclusive(struct pci_dev *pdev,
const char *res_name);
int pci_request_selected_regions_exclusive(struct pci_dev *pdev,
int bars,
const char *res_name);
There is also a new, low-level allocation macro:
request_mem_region_exclusive(start, n, name);
In each case, these functions are equivalent to their non-exclusive
cousins, except for the changed name and the resulting exclusive
allocation.
There may be cases where a developer wants to be able to map a region from
user space on a development system, regardless of what the driver thinks.
For such situations, there is a new iomem=relaxed boot parameter.
When relaxed is selected, exclusive allocations are not enforced.
Clearly this is not an option which one would want to set on a production
system, but it may be useful in development environments.
The last topic is not actually an API change, but it's worth a look
anyway. The kernel provides a nice API for setting up DMA operations. In
many cases, the associated functions do little or no work; the system they
are running on does not require any additional effort. The result is that
a lot of "tested" driver code may, in fact, have serious errors in its use
of the DMA API. When those drivers are run on a different system - one
with an I/O memory management unit (IOMMU) in particular - those errors
could lead to no end of unpleasant behavior.
Kernel developers like the idea of finding bugs before they bite users on
remote systems. To help make that happen with the DMA API, Joerg Roedel
has posted a new DMA API
debugging facility. This feature, when built into the kernel, should
make it possible to find a number of previously-hidden bugs in device
drivers. It has, in fact, already turned up a few problems with in-tree
drivers, mostly in the networking subsystem.
Use of this facility simply requires enabling a configuration option; the
API itself does not change. Once it's enabled, this code will check for a
number of problems, including freeing DMA buffers with a different size
than was given at allocation time, freeing buffers which were never
allocated at all, mixing coherent and non-coherent functions on the same
buffer, confusion over I/O directions, and more. Each of these problems
might slip by on a developer's test system, but might create havoc where an
IOMMU is being used. When a problem is found, a warning and stack
traceback are logged.
The response to this API has been positive. The biggest complaint seems to
be about the fact that this API is implemented as an x86-specific feature.
So it will probably have to be made generic before merging - after all,
developers on other platforms are entirely capable of introducing
DMA-related bugs too. Once it goes in, this feature should probably be
enabled on any system used for driver development.
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Networking
Architecture-specific
Security-related
Benchmarks and bugs
Miscellaneous
Page editor: Jake Edge
Next page: Distributions>>
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/307993/ | crawl-002 | refinedweb | 2,892 | 58.62 |
So this project started with a need - or, not really a need, but an annoyance I realized would be a good opportunity to strengthen my Haskell, even if the solution probably wasn't worth it in the end.
There's a blog I follow (Fake Nous) that uses Wordpress, meaning its comment section mechanics and account system are as convoluted and nightmarish as Haskell's package management. In particular I wanted to see if I could do away with relying on kludgy Wordpress notifications that only seem to work occasionally and write a web scraper that'd fetch the page, find the recent comments element and see if a new comment had been posted.
I've done the brunt of the job now - I wrote a Haskell script that outputs the "Name on Post" string of the most recent comment. And I thought it'd be interesting to compare the Haskell solution to Python and Go solutions.
```haskell
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TupleSections #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE MultiWayIf #-}
{-# LANGUAGE ViewPatterns #-}

import Network.HTTP.Req
import qualified Text.HTML.DOM as DOM
import qualified Text.XML.Cursor as Cursor
import qualified Text.XML.Selector as Selector
import qualified Data.XML.Types as Types
import qualified Text.XML as XML
import Data.Text (Text, unpack)
import Control.Monad

main = do
  resp <- runReq defaultHttpConfig $ req GET (https "fakenous.net") NoReqBody lbsResponse mempty
  let dom = Cursor.fromDocument $ DOM.parseLBS $ responseBody resp
      recentComments = XML.toXMLNode $ Cursor.node $ head $ Selector.query "#recentcomments" $ dom
      newest = head $ Types.nodeChildren recentComments
  putStrLn $ getCommentText newest

getCommentText commentElem =
  let children = Types.nodeChildren commentElem
  in foldl (++) "" $ unwrap <$> children

unwrap :: Types.Node -> String
unwrap (Types.NodeContent (Types.ContentText s)) = unpack s
unwrap e = unwrap $ head $ Types.nodeChildren e
```
My Haskell clocs in at 25 lines, although if you remove unused language extensions, it comes down to 21 (The other four in there just because they're "go to" extensions for me). So 21 is a fairer count. If you don't count imports as lines of code, it can be 13.
Writing this was actually not terribly difficult; of the 5 or so hours I probably put into it in the end, 90% of that time was spent struggling with package management (the worst aspect of Haskell). In the end I finally resorted to Stack even though this is a single-file script that should be able to compile with just `ghc`.
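For what it's worth, the dependency list itself is short. A minimal `.cabal` stanza along these lines — package names are just the ones my imports come from; the `scraper` name and the omitted version bounds are placeholders — should be enough for a plain `cabal build`:

```cabal
executable scraper
  main-is:          Main.hs
  build-depends:    base
                  , req
                  , html-conduit
                  , dom-selector
                  , xml-conduit
                  , xml-types
                  , text
  default-language: Haskell2010
```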
I'm proud of my work though, and thought it reflected fairly well on a language to do this so concisely. My enthusiasm dropped a bit when I wrote a Python solution:
```python
import requests
from bs4 import BeautifulSoup

file = requests.get("https://fakenous.net/").text
dom = BeautifulSoup(file, features='html.parser')
recentcomments = dom.find(id = 'recentcomments')
print(''.join(list(recentcomments.children)[0].strings))
```
6 lines to Haskell's 21, or 4 to 13. Damn. I'm becoming more and more convinced nothing will ever displace my love for Python.
Course you can attribute some of Haskell's relative size to having an inferior library, but still.
Here's a Go solution:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/andybalholm/cascadia"
	"golang.org/x/net/html"
)

func main() {
	resp, err := http.Get("https://fakenous.net")
	must(err)
	defer resp.Body.Close()
	doc, err := html.Parse(resp.Body)
	must(err)
	for _, elem := range cascadia.MustCompile("#recentcomments li").MatchAll(doc) {
		var name = elem.FirstChild
		var on = name.NextSibling
		fmt.Printf("%s%s%s\n", unwrap(name), unwrap(on), unwrap(on.NextSibling))
	}
}

func unwrap(node *html.Node) string {
	if node.Type == html.TextNode {
		return node.Data
	}
	return unwrap(node.FirstChild)
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```
32 lines, including imports. So at least Haskell came in shorter than Go. I'm proud of you, Has- oh nevermind, that's not a very high bar to clear.
It would be reasonable to object that the Python solution is so brief because it doesn't need a main function, but in real Python applications you generally still want that. But even if I modify it:
```python
import requests
from bs4 import BeautifulSoup

def main():
    file = requests.get("https://fakenous.net/").text
    dom = BeautifulSoup(file, features='html.parser')
    recentcomments = dom.find(id = 'recentcomments')
    return ''.join(list(recentcomments.children)[0].strings)

if __name__ == '__main__':
    main()
```
It only clocs in at 8 lines, including imports.
An alternate version of the Go solution that doesn't hardcode the number of nodes (since the Python and Haskell ones don't):

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/andybalholm/cascadia"
	"golang.org/x/net/html"
)

func main() {
	resp, err := http.Get("https://fakenous.net")
	must(err)
	defer resp.Body.Close()
	doc, err := html.Parse(resp.Body)
	must(err)
	for _, elem := range cascadia.MustCompile("#recentcomments li").MatchAll(doc) {
		fmt.Printf("%s\n", textOfNode(elem))
	}
}

func textOfNode(node *html.Node) string {
	var total string
	var elem = node.FirstChild
	for elem != nil {
		total += unwrap(elem)
		elem = elem.NextSibling
	}
	return total
}

func unwrap(node *html.Node) string {
	if node.Type == html.TextNode {
		return node.Data
	}
	return unwrap(node.FirstChild)
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```
Though it ends up being 39 lines.
Maybe Python's lead would decrease if I implemented the second half, having the scripts save the last comment they found in a file, read it on startup, and update if it's different and notify me somehow (email could be an interesting test). I doubt it, but if people like this post I'll finish them.
Edit: I finished them.
Discussion (20)
A Haskell one-liner:
Can you give some context for this? When I plug it in, even with all the imports I used, almost everything in there is undefined.
I'm ready with the full version that does the saving and emailing me for all three languages, but I'm holding off on posting now because I don't want to finalize if the Haskell can be improved by that much.
Apologies, I should have thought of this earlier. Anyway, adding more details:
The dependencies can be put in a `dev-to.cabal` file:
Doing the saving and emailing you would be a simpler addition.
You can clone my repo from github.com/smunix/dev-to
Ah. Still, that doesn't seem to be a complete solution. I ran it with `cabal run` and the output is the object:
Instead of the text.
I also wouldn't consider that one line. If I were to really use that code, I'd certainly break it into 2-4. Still, it is an impressive improvement! I'll have to look more into those libraries.
One may also argue that your python code uses the beautifulsoup library which has already done the hard work of parsing the html/xml for you!
(though in fairness, I don't know much about haskell or go to comment on how "bare metal" those pieces of code are).
True, beautiful soup seems much high-level than the other libraries. Though I am using at-least two non-standard libraries for all languages (Go has great high-level HTTP in the stdlib but needed 2 HTML/traversal libraries just to get there, for Haskell I'm using 5 libraries: req, html-conduit, dom-selector, xml-conduit and xml-types (might be a way to cut down on those but I really couldn't find it cause some of those libraries are just like 'provides HTML helpers for XML types' or something)).
I would recommend Colly (github.com/gocolly/colly) to get a better comparison since you are using BeautifulSoup for Python. Both scraper libraries have superb APIs.
Wow! I didn't know about that library. That does much more for me here than even BeautifulSoup! New&Improved Go version:
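A minimal version along these lines — assuming the same `#recentcomments` selector and URL as the earlier examples; Colly handles the fetch, the parse, and the text extraction in one callback:

```go
package main

import (
	"fmt"

	"github.com/gocolly/colly"
)

func main() {
	c := colly.NewCollector()
	// Colly fires this callback once per matching element, and
	// e.Text already concatenates the descendant text nodes.
	c.OnHTML("#recentcomments li", func(e *colly.HTMLElement) {
		fmt.Println(e.Text)
	})
	if err := c.Visit("https://fakenous.net"); err != nil {
		panic(err)
	}
}
```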
That gets it down to about same number of "meaningful" lines as Python. Technically can drop 2 more lines by putting the function inline, but I wouldn't do that IRL.
Great post, haven't tried Haskell yet, looks interesting. Can you do a performance test on each version? The LOC is surely a factor but knowing the performance would be even better.
Python seems to average about 1.6 seconds. The first run was 3 seconds which is probs because of filesystem caching or TLS resumption. Go is averaging about 1.25 and Haskell about 1.35.
I don't think performance really means much here though, because on such a short program, factors like the time to start the interpreter and parse source code, write to the console, etc, are much more significant than they should be. The Haskell binary dynamically loads 11 system libraries while the Go binary only loads 2 dynamically, and that might account for the speed difference there. I've heard dynamic linking increases startup costs.
Compiling the go binary with cgo disabled will reduce startup time even more (zero dynamic libraries), but you're right in that it's a terrible measure of the general strength of a programming language/runtime.
As a FaaS function, the startup/first request time is important. As a long-running service, that time doesn't matter.
LOC is an even worse measurement (worst? possibly) since I think we've all seen some of the abominable obfuscated C or golfed python solutions.
Putting all of the language battles aside, I'd say the python/bs4 and go/colly solutions are the most easily understandable and implementation-ready options. That says way more about the libraries and their API designs than the programming languages in which they're implemented.
Another point for API design over implementation language.
I know LOC as a blind measurement is problematic but I didn't golf these implementations. I believe LOC of an idiomatic implementation is a reasonable measure. Still flawed of course, but so is every metric.
I appreciate this post, but I still tend to agree with @Providence Salumu that Haskell can also do this in one or two lines depending on what package you might use.
Now, it's a good thing they don't give downvotes here because I'm about to anger many people. I'm not going to comment on Google Go, but while it is true Python has a lot of neat packages written for it, Python is (In My Opinion) a scripting language that is "broken by design"...
No matter how hard you try, Python will never be able to do certain things that Haskell can do (like purity). On the other hand, Haskell can likely be modified to do just about anything Python can.
I continue to be frustrated by coworkers that insist Haskell is difficult to learn and is obscure just because Python got better marketing over the course of the last 20 or so years. Python was adopted by the masses (In My Opinion) because people were convinced that it was a "good" language when in reality, it wasn't all that great. Yes, Python is easy to learn, but that doesn't make it a good language...
Lol, nothing to fear from me. Spicy opinions are fun. I've got a few myself that I haven't posted here mainly for fear of the reception. I'm gonna have to disagree with this one though...
I think this is really moot if not backward. Functional purity (and really all language features) isn't a goal, but a tool for reaching our goals, so it's wrong to describe it as something "Python will never be able to do no matter how hard you try". Lacking that feature doesn't reduce the domain of problems Python can solve. And it surely has features Haskell can never replicate, like
breakpoint, default arguments, or proper struct inheritance.
Not to mention, isn't it a tiny minority of languages that support language-enforced functional purity? Even among other compiled languages?
There are use cases Python can't do that Haskell can, like compiling to a shared library to be called from another language. But that applies to all scripting languages. Do you think all scripting languages are broken by design?
To be fair, if you do, I don't find that totally unreasonable. I prefer compiled languages and abhor not having type checking. My opinion of Python is more that it has bad core design in a couple areas, but is so much more practical than languages that try to be perfect and fail. Basically "the best that the wrong way of doing things can provide", while most other languages are "the worst that the right way of doing things can provide".
About that... something I started thinking recently was that Haskell is the only languages I know that tries to be perfect. The others don't seem like their designers ever intended to make something that would revolutionize programming. Go is the epitome of this, as its designers have said something like "Go isn't meant to advance programming theory, it's meant to advance programming practice" (translation: we don't want a good design we just want to special case the common stuff).
It's true that being easy to learn doesn't make it a good language, but I think it does count toward it. After all, tools exist to make work more efficient, so a tool that takes more time to learn is, all other things the same, a worse tool. And there is no way Haskell's learning curve is all due to inadequate tutorials and documentation. It's conceptually arcane.
I don't necessarily agree with you totally, but your goal of objectivity is at least refreshing so I liked your post. Thanks.
I was expecting performance write up.
Lol
Yes, I would tend to think "reasonably well written" Haskell would outperform Python, but I think this is too small of an example to really make a determination. I can't comment on Google Go. By "reasonably well written" I mean Haskell that is written without too many "rookie" mistakes like I might make. While Haskell (in general) likely has good performance, I think newbies such as myself can sometimes wind up doing things that are logically correct but inappropriate from an efficiency perspective.
Me too...
Would love to see more posts like this! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/yujiri8/comparing-the-same-web-scraper-in-haskell-python-go-387a | CC-MAIN-2021-49 | refinedweb | 2,265 | 65.93 |
years, 2 months ago.
Making a LED pulse for a desk lamp
Hello: I am currently doing an experiment to check the alertness in people working with certain light scenarios. I need to add to a convenitonal desk lamp a BLUE LED light pulsing at:
Rate of 1milisecond ON and 9miliseconds OFF With a power of 12micro watts / cm2. At 1KHZ.
My first test LED light will be a Optosupply 6W MCPCB (OSTCXBCBCIE) LED.
Later I want to upgrade the output LED to a conventional LED desktop lamp connected to a typical plug to the wall.
Right now my power supply is 9V.
I have the mbed LPC 1768 and a CL6807 LED driver kit, but as I am NOT an electronics guy AT ALL, I just dont understand how to program this and how to wire the mbed to the driver kit. Can anyone help me?
I am sorry if I sound to ignorant, but yes, I am, but I need to learn that for my experiment.
Thanks in advance for your time reading my post. If needed I can provide external email contact.
Carlos
2 Answers
5 years, 2 months ago.
From a software point of view this is very simple:
#include "mbed.h" DigitalOut ledControl(p18); main () { while (true) { ledControl = 1; // turn on wait_ms(1); ledControl = 0; // turn off wait_ms(9); } }
There are lots of other more elegant ways to get the same result but unless you need the LPC to also do some other task at the same time you may as well keep it simple and it doesn't get much simpler than the code above.
On the electronics side: Do you have a CL6807 demo board with the LED on or just the chip? If it's just the chip then you need to add external parts (it looks like a capacitor, a resistor, an inductor and a diode are needed as a bare minimum), if it's a demo board set up for the LED then that's one less thing to worry about. If it's a demo board for the chip but not correctly setup for the LED then it should just be case of setting the LED current resistor to the correct value (about 0.2 Ohms or larger for your LED from what google tells me).
If you need more help in connecting the LED to the driver then let us know exactly what you have and we'll go from there.
Connect the 9V supply to the Power in on the LED board and also to the Vin pin (pin 2) on the mbed. Connect the ground pins of the power supply, the LED board and the mbed (pin 1) together. Connect the control signal (p18) to the ADJ pin on the LED board.
That should be it.
Note: any IO pin would work fine for the code above. The reason for using p18 is that that pin can also be used as an analog output. The LED driver allows two different ways to control the brightness, you can use a PWM signal to control the brightness (a digital signal that constantly turns on and off quickly, the percentage of time it's on sets the brightness) or you can give it an analog voltage between 0.25 and 2.75V. If you wanted to use PWM then you'd use a PWM output capable pin on the LPC1768(p21-p26) however analog is probably a little more intuitive to use.
To use the analog out to control brightness the code becomes:
#include "mbed.h" AnalogOut ledControl(p18); main () { while (true) { ledControl = 0.5; // turn on ~50% brightness wait_ms(1); ledControl = 0; // turn off wait_ms(9); } }
Given the range limits on the driver IC anything below 0.085 will be off and over 0.915 will be on full. Also keep in mind that the relationship between LED power and the human eyes perception of brightness is not linear, 0.9 will not look twice as bright as 0.45.
By the way, I have the CL6807 demo board withOUT the LED on. I have a separate LED which is LED light will be a Optosupply 6W MCPCB (OSTCXBCBCIE). I will work on connecting everything and let you know. Thanks for the advice.posted by 02 Oct 2014
Hello again, well I follow all of your kind recommendations but now my LED lights up, only the RED colour, if I connect the BLUE or the Green they just dont lit up. I dont know if this is because of some missing lines on the programming or because my LED needs to be wired differently or maybe the resistor I am using is not the propper one for the mbed output signal.
Also by reading your explanation I think I should go th PWM way as I need the LED to be constantly flashing at the rate I showed you for at least 10 minutes as I will use it in a homework task as a desk lamp, and this also brings me the problem of how to assign the BLUE colour to which PWM output pin.
The LED i am using is a "6W powerfull colour RGB" it looks like a star (OSTCXBCBC1E). I hope you can help out again, I am sorry for the inconvenience this may cause. Thanks
BTW The resistor I am using are 5.1 ohm / 5 W and 5.0 ohm / 10 W for what I understood in an internet calculator for resistors.
Thanks again. Carlosposted by 16 Oct 2014
How have you connected the red, green and blue LEDs up? If you just connect them all in parallel to the same drive signal you'll only get red because red uses a lower voltage. If you connect them all in series and your driver can supply at least 9V then they will all light up. But since they all have the same current the end result will probably look green since your eyes are most sensitive to green.
To get white then you need to control the brightness of each color independently. The blue and possibly green you will need to be able to drive up to at least 3.2V.
Beyond that it's hard to be more specific without knowing exactly what you have and how it's connected.posted by 16 Oct 2014
Thanks Andy, I wired the mbed as you showed me and I have been connecting only one LED at a time, testing if the signal lits up each one separately. The LED i have has the same mA values for the red and the blue and the green, DC Fwd of 600mA and Pulse Fwd of 800mA with a Reverse Voltage of 5. I wonder if the LED is to big for the mbed output. So far i have been using only the Analog output on Pin 18 as you instructed me with only the red one lighting up, (and really bright, as the LED has a power of 6Watts). Would like to send you pictures of my arrangement but how can that be done? I dont want to blow the LED, i just want it to pulse while is on. Thanks for your time. Carlosposted by 17 Oct 2014
What voltage is the power supply to the LED driver chip? Red needs a lower voltage than green or blue so if your power supply isn't high enough voltage then only the red will work.
Pictures can help, a schematic would be better. If you don't have that sort of software handy then draw what your circuit is on a piece of paper, take a picture and email it. I'll message you with my email address.posted by 17 Oct 2014
5 years, 2 months ago.
<<quote>Rate of 1milisecond ON and 9miliseconds OFF With a power of 12micro watts / cm2. At 1KHZ.<</quote>>
1ms on and 9ms off gives 10ms that's 100Hz.
It is just higher than the persitence of vision. For the common people this will just dim the LED level.
Good point, I will review it as I think I wrote it wrong about the KHZ, thanks a lot for your time. BUt in did, the idea is that the person does not percieve the flickering, but, the pupil and the brain does, and it is actually the reason of the experiment to review the pupil reaction and the brain reaction. But I will double check on that number to avoid problems affecting others. REALLY THANKS FOR THE OBSERVATION!!!.posted by 02 Oct 2014
You need to log in to post a question | https://os.mbed.com/questions/4749/Making-a-LED-pulse-for-a-desk-lamp/ | CC-MAIN-2019-51 | refinedweb | 1,444 | 78.18 |
Liquidat has posted a nice overview of the technology known as NEPOMUK, a part of KDE 4. An excerpt reads: "Nepomuk-KDE is the basis for the semantic technologies we will see in KDE 4. Sebastian Trüg, the main developer behind Nepomuk-KDE, provided me with some up2date information about the current state and future plans".
To me, the hard problem here always was how to share and transfer the metadata between users and machines. For instance, suppose I am using more than one machine; would the Nepomuk information created on one machine even make sense on the other? Would it be possible to sync the databases between machines? Is it possible to distill out a meaningful, privacy-filtered subset of the data and send that to a friend? What happens when I reorganise all of my data using non Nepomuk aware tools?
I see in the article that there are some plans to address these issues, but it seems they have not been solved so far. Any thoughts?
Just a KDE user here wanting to say that is a good question
I hadn't thought about it but the problem you bring up should be thought about. Though I don't know, I assume that any non-nepomuk enabled desktop won't support tags made with a nepomuk desktop. Perhaps if a nepomuk is brought to all of the free desktops than atleast we can be assured that it would work on all the free operating systems.
In a way, KDE is the first largescale test. If it works out, perhaps it'll be adopted elsewhere (gnome, xfce,... perhaps even a proprietary OS like OSX?).
Actually, as far as this goes, I was somewhat certain that OSX already had a framework in place for the support of arbitrary metadata. I'm not sure on details, but a friend of mine was talking about it at length one night
For Mac OS X, Apple's Spotlight system is very similar, but not exactly the same. You can write metadata importer plugins which can add, index, and search arbitrary metadata.
However, programs like Google Desktop for Mac OS X show that it's possible in some ways to integrate other systems with Apple's Spotlight metadata system....
i'm not a semantic web (SW) expert, but i had to study it for school (uni). but for all i know the SW technologies are build with exactly what both you guys ask for: distributed and sharing oriented.
SW basically allows computers to understand that something means, and how that relates to other information. it's all about describing (annotating) data in standard, computer readable way.
That sounds like pure and plain xml.
RDF/XML is one format for it. Internally, it's probably going to look more like the Notation3 (N3 format) -- just a list of "triples": lines like "uri1 relationship uri2". For instances, you might declare relationships like " photographer_of", or "googleearth://postcode location_of ipinfo://yourserver") or " manufacturer_of companyservers://missioncriticalserver1".
As an ancestor post said, this is pretty much perfect for exporting/importing/otherwise sharing info. You can easily create queries based on this data, like "? photographer_of ?", to get a list of all photographers, or "? photographer_of*" to get a list of all photos published by your company. Then, you just need to provide that list to others in some way. Depending on how its implemented, it might also be possible to mark certain namespaces as private, but make the rest available, so that anything referring to objects such as "myborrowedmp3collection://*" or "topsecretprojects://*" or just "smb://" gets filtered, but everything else is made available. Likewise, and probably more safely, the opposite could be true, with only public namespaces made available.
Interestingly, let's say you have a kde io plugin that understands URIs with unique hashes, and deferences those to the appropriate files: something like "md5://number". By publishing this on some shared site (say nepomuk_repository.kde.org), then every KDE user with that file could automatically gain all the (non-filtered, public) tags of information that any other participating KDE user contributes. So, some KDE user in taiwan might mark set a song attribute such as "amarok://performed_by amarok://artist/Sarah McLachlan", and everyone else's desktop would suddenly know this.
For general queries, let's assume Wikipedia will take up the (already very functional) Semantic MediaWiki Extension at some point. Then, it'll be possible for your desktop to ask Wikipedia for all sorts of complicated information, like "countries with a population of more than 1,000,000, but less than three internet providers", or, for a more basic Unix utility, "languages that include the characters X, Y, Z, but not A". Or, for a person in need of medical help, they might consult a national medical database, along with a blog site, asking for "doctors within coordinates A,B and C,D who specialise in earache and who no one called a sadist". Within an organisation, lots of useful queries, like "people working on project X, who work over lunch" would be possible.
No one's saying the file format (be it XML/RDF, N3, CSV, or something else) is revolutionary (although, in the relative simplicity of N3/RDF, they do make some advances, I suppose). The trick is in taking all these information sources, combining them into a huge database of triples that performs well, and designing the right queries, the right interfaces, the right amount of sharing, and the right security features, so that your desktop "knows" more than it used to, and can work with other systems that know more than they used to, without being bogged down by the terabytes of new data we're soon going to be using for this.
Of course, this all depends on your own/others' ability to organise information, but it's all coming together, from other projects online. This WILL take off, and it will almost certainly be the REAL Web 2.0, that people actually notice, like they noticed Web 1.0. KDE *must* be part of that, and I'm very glad to see it's going to be there.
I DO hope KDE's/NEPOMUK's not going to be limited to simple things like tagging and searching files though, much as I want to see KDE have those features. At the very least, I'm hoping to see what GNOME's (now abandoned, for some insane reason) hint-based system did: let applications actually share knowledge in real time, like "user is working with a document that has subject X" and "Oh, I have files related to subject X". It's unclear whether NEPOMUK will actually allow the kind of things described above. The technology certainly does, though, and Nepomuk is claiming to advance it, as I understand things.
Neopomuk cooperates with freedesktop.org, however I don't know how good this cooperation works, and what standards will be defined there or if those standards will address those issues.
On the Mandriva Club there is also an interview with nepomuk-kde developer Sebastien Trüg:
On Liqudiat's blog there is quite a lot of support for using xattributes, yet it seems like this option won't be used. Can anyone explain why?
- not all filesystems support it
- you'll need a database anyway to be able to search through it
I dunno what the performance is, either...
- not all filesystems support it.
Most of the well used ones support it.
- you'll need a database anyway to be able to search through it
Well yes, but xattributes is about making sure the tags move with the file. Not searching.
metadata you can't search is completely useless.... that's the point of nepomuk, isn't it? finding and connecting stuff through metadata. therefor you need a central index. an index scattered through the whole filesystem is useless. that's why everyone is working on something like strigi...
Sure, but you could have some kind of minimal metadata attached to the file to differentiate it from other files, and the full metadata in the index. Otherwise how would you cope with:
echo Hello > ~/myfile
# I go to Dolphin and tag myfile
cp ~/myfile ~/myfile.bak
mv ~/myotherfile ~/myfile
mv myfile.bak ~/myfile
Unless you have some way of disambiguating files *other* than their name, you're going to have issues confusing the metadata of these two different files.
Now, there are various ways you could handle this. xattrs would be the obvious one to me, but I guess you could also have others.
that only works if you want to find the metadata of a file. whats with the other way around? i want to find every file i got per mail.
its quite simple: if you move the file you break the index. the index needs to be updated everytime a file is moved.
>that only works if you want to find the metadata of a file. whats with the other way around? i want to find every file i got per mail.
That's why you have the same data stored in the file and in the database. Having the metadata in every file means that all applications automatically keep the metadata intact without modification. Having the metadata in a database allows fast searching.
>its quite simple: if you move the file you break the index. the index needs to be updated everytime a file is moved.
but the database is broken every time you move a file even if no metadata is stored in the file.
The database will have to include the location of every file so when you search for files based on metadata you probably want strigi to tell you where to find the file, this means the location of the file has to be in the database and updated every time a file moved.
Why have "files" in the first place? Why "copy/move them around"? They are just
sequences of bytes. Why should I have a file manager? Isn't that what we want to
replace? The only reason is different physical computers on a network. But we could imagine even that to be irrelevant in some, not so far, point in the future.
You have the meta-data in the file and in a database.
It goes in a file to ensure that the file keeps the correct metadata even after mv, cp, dd or being E-mailed. Etc.
It goes in a database to be searched.
> or being E-mailed ?
than the metadata needs to be stored inside the _file_. The filesystem is of no help here... or do you want to email your filesystem?
but this doesn't solve the whole problem. if you move a file you still have to change the index, otherwise you could only find the oldlocation of the file. so you don't gain much.
so if you have to change the index everytime a file moves anyway, there is no real gain from storing anything with the file.
also, you don't need a filename or an id to track files. look at modern version controll systems like monotone or git. the identity of a file isn't an id, or a name. its the content - so use a hash. that would automatically solve all copy problems.
the only remaining problem would be tools that alter the file somehow. that should be solved by nepomuk integration into all applications. for legacy apps you could store the location of a file too. so if you overwrite a file, the index should automaticaly "transfer" the metadata, if not told otherwise through the nepomuk api.
so with this in place, the only scenario that could break the data-metadata relationship would be legacy applications (apps without nepomuk support) which create "copies" of files with new content (like converting images).
but that's a case you can't do anything about.
You also have to remember that most apps move the old file to a backup file (eg. xx~) and write a new file. Unless the app knows to copy the meta data it will be lost at this point.
The filesystem _is_ a database. A metadata supporting filesystem can maintain its own indexes. Why put the file and metadata relationship on such a high level if you don't need to? There might be considerable overhead space-wise, but with 1TB harddisks getting mainstream soon this should not be a big problem.
If you put this indexing responsibility on filesystem level you get automatic, default nepomuk support for low level commands like cp and mv.
If you want this information to 'cross over' non-metadata filesystems you can use higher level tools. I could see a project like BasKet fit such a role for example.
Regarding hashes to bind relationships, I think this is not so useful on a filesystem. reading the full content of a couple of ISO files or a large mp3 collection just to get the hashes seems a little inefficient to me. And a hash still isn't as uniquely identifying as a URI.
I just started a FAQ page for Nepomuk-KDE. The first question I answer there is the xattributes one.
You can find the FAQ at:
Thanks :)
Thank you, please could you use this for the second question:
"Will my file lose the metadata associated with it if I use generic rather than Nepomuk specific tools to move files around (FTP, mv, cp, a non-kde file manager, firefox upload. etc)
I'll be curious to see how it turns out. WinFS was supposed to have a metadata based filesystem, but WinFS is vaporware at this point. If KDE beats Microsoft to the relational/metadata/integrated-search desktop, I think a lot of businesses might suddenly become interested. I haven't gotten a chance to try Nepomuk, but I really like Strigi - it's freaking fast (compared to Beagle, which I tried previously) and it doesn't have security holes like Google Desktop Search (which I haven't tried, because of the constant "A new zero-day hole has been found in Google Desktop!" stories on Slashdot).
The file system argument is a good one: Isn't is all about filesystems? I think distributors need to think more about filesystems. For our home partition a crypto file system should be standard. I don't know whether user space solutions make much sense.
I posted an italian translation of this article:... | http://dot.kde.org/comment/50627 | CC-MAIN-2014-10 | refinedweb | 2,415 | 71.85 |
Copyright © 2007 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply. First Public Working Draft. Please send your comments.
Based on the RIF Use Cases and Requirements, this document develops the RIF Core (the core of the Rule Interchange Format).
As mentioned, making RIF a Web language entails support for URIs. However, URIs are constants in just one of the primitive sorts of RIF. Integers, strings, dates, times, and others are treated as other primitive sorts. Thus, the single mechanism of a polymorphic multisorted logic provides support for the use of URIs.
The RIF Condition Language is intended to be a common basis for several dialects of RIF. First of all, it is used by RIF Core, as described in this document. The other future dialects or groups of dialects where the condition language or its extensions might be used include:
Rule bodies and queries in declarative logic programming dialects (LP)
Rule bodies in first-order dialects (FO)
Conditions in the rule bodies of production rule dialects (PR)
The event and condition parts of the rule bodies in reactive rules dialects (RR)
Integrity constraints (IC)
It should be noted, however, that apart from RIF Core no decision has been made regarding which dialects will ultimately be part of RIF.
The RIF condition language is intended to be used only in rule bodies and queries, not in rule heads. The various RIF dialects diverge in the way they specify, interpret, or use rule heads and other components of the rules. By focusing on the condition part of the rule bodies we achieve maximum syntactic and a great deal of semantic reuse among the dialects. This document describes Positive Conditions.
The sort rif:uri is used for URIs.
For didactic reasons, we start with a single sort, which includes all constants, function symbols, and predicate names. The multisorted RIF logic is described in Section Multisorted RIF Logic.
RIF uses multisorted logic to account for primitive data types and the use of URIs.
Sorts are drawn from a fixed collection of sorts PS1, ..., PSn. These sorts are intended to model primitive data types. For instance, RIF Core supports the sorts xsd:long, xsd:string, xsd:dateTime, and rif:uri. An arrow signature is a statement of the form s1 × ... × sk → s, where s1, ..., sk, and s are names of sorts. For instance, xsd:dateTime × xsd:long → xsd:dateTime could be an arrow signature for a function. A Boolean signature is a statement of the form s1 × ... × sk, where, again, s1, ..., sk are names of sorts. For instance, the predicate HappenedBefore may have a Boolean signature xsd:dateTime × xsd:dateTime. The arrow and Boolean sorts, along with the primitive sorts, are used in the definition of well-formed terms and formulas. This notion is defined in Section Formalization of the Multisorted RIF Logic. (See end note on using sorts to represent FOL, RDF, and other logics.)
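To make the signature machinery concrete, here is a minimal sketch of signature checking. It is not part of the specification; the function name shift and all sort assignments are invented for the example.

```python
# Illustrative model of RIF arrow and Boolean signatures.
# An arrow signature s1 x ... x sk -> s is modeled as (arg_sorts, result_sort);
# a Boolean signature s1 x ... x sk is just the tuple of argument sorts.
ARROW_SIGS = {
    "shift": (("xsd:dateTime", "xsd:long"), "xsd:dateTime"),  # hypothetical function
}
BOOLEAN_SIGS = {
    "HappenedBefore": ("xsd:dateTime", "xsd:dateTime"),
}

def check_function_application(fname, arg_sorts):
    """Return the result sort if the application respects the
    function's arrow signature; otherwise raise a sort error."""
    expected_args, result = ARROW_SIGS[fname]
    if tuple(arg_sorts) != expected_args:
        raise TypeError(f"{fname}: expected {expected_args}, got {tuple(arg_sorts)}")
    return result

def check_atom(pname, arg_sorts):
    """An atomic formula is well-formed if the argument sorts
    match the predicate's Boolean signature."""
    return tuple(arg_sorts) == BOOLEAN_SIGS[pname]
```

For instance, applying shift to a dateTime and a long yields a term of sort xsd:dateTime, while HappenedBefore applied to anything other than two dateTime terms is rejected.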
In addition to the syntax for the sorted literals, RIF Core needs the following extensions. First, we introduce the following new functions:
Sort: Const → Primitive_Sorts
ASignature: Const → Arrow_Sorts
BSignature: Const → Boolean_Sorts

A variable, ?v, is a well-formed term of sort Sort(?v), if Sort(?v) is defined. If Sort(?v) is not defined then ?v is a well-formed term for every sort.

The sort rif:uri is a distinguished primitive sort whose symbols denote URIs. Symbols of this sort have the form "XYZ"^^rif:uri, where XYZ is a URI as specified in RFC 3986. The sort rif:uri is intended to be used in a way similar to RDFS resources. The domain of the sort rif:uri can be any set. Its concrete structure depends on the application. This domain is not the same as the value space of the XML primitive type anyURI. The exact semantics of the sort rif:uri will be elaborated upon in future drafts. The precise meaning of rif:uri is currently still an open issue for the group.
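The well-formedness rules for variables and constants above can be sketched as follows. This is illustrative only; the variable and constant names are invented.

```python
# Illustrative sketch of the Sort function on variables and constants.
# A variable with a declared sort is well-formed only for that sort;
# an undeclared variable is well-formed for every sort.
VAR_SORTS = {"?t": "xsd:dateTime"}  # declared variables (example)
CONST_SORTS = {'"2007-01-01T00:00:00"^^xsd:dateTime': "xsd:dateTime"}

def is_well_formed(term, sort):
    if term in CONST_SORTS:            # constants carry their own sort
        return CONST_SORTS[term] == sort
    declared = VAR_SORTS.get(term)     # variables: consult the declaration, if any
    return declared is None or declared == sort
```

So ?t is well-formed only as an xsd:dateTime term, while an undeclared variable such as ?x is well-formed for every sort.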
Specifying Arrow and Boolean Sorts. Arrow sorts are declared as follows. Note that multiple sorts can be declared for the same symbol.
:- signature NAME s1 * ... * sn → s , r1 * ... * rk → r , ...
Boolean sorts are declared similarly:
:- signature NAME s1 * ... * sn , r1 * ... * rk , ...
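A single alternative of either declaration form above could be recognized by a sketch like the following. It is hypothetical: it uses the ASCII arrow -> in place of →, and handles one alternative rather than the full comma-separated list of alternatives.

```python
# Illustrative parser for one alternative of a signature declaration:
#   s1 * ... * sn -> s   (arrow sort)   or   s1 * ... * sn   (Boolean sort)
def parse_signature_alternative(text):
    """Return (arg_sorts, result_sort) for an arrow sort,
    or (arg_sorts, None) for a Boolean sort."""
    if "->" in text:
        args, result = text.split("->")
    else:
        args, result = text, None
    arg_sorts = tuple(s.strip() for s in args.split("*"))
    return arg_sorts, (result.strip() if result else None)
```

For example, the arrow signature from the earlier HappenedBefore/dateTime discussion parses into its argument sorts and result sort, while a Boolean signature yields None as the result.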
Note that NAME is a symbol from Const whose syntax was specified in Section Multisorted Syntax. In most cases functions and predicates are denoted by URIs, so such a symbol will have the syntax "..."^^rif:uri. In many cases sorts will be XML Schema data types referred to by strings that syntactically look like URIs (in the sense of RFC 3986), but they should not be confused with RIF symbols of the sort rif:uri. For instance, in "1"^^"http://www.w3.org/2001/XMLSchema#long", the sort specification is a URI in the sense of RFC 3986, but it is not a rif:uri symbol, because it does not have the form "..."^^rif:uri.

Only the mappings IC, IV, and IF require some changes. These changes are expressed as additional constraints as indicated below.
IC from Const to elements of D.
New constraint: If symbol c has sort s then IC(c) ∈ Ds, the domain of the sort s.
This section develops a Core RIF Rule Language by extending the RIF Condition Language, where conditions become rule bodies. RIF Phase I covers only Horn Rules and a number of extensions that do not increase the expressive power of the language. The envisioned RIF dialects will extend the core rule language by generalizing the positive RIF conditions and by other means.
This section defines Horn rules for RIF Phase 1. The syntax and semantics incorporates RIF Positive Conditions defined in Section Positive Conditions.
The classes Var, CONDITION, and ATOMIC were specified in the abstract syntax of Positive Conditions.
The combined RIF Core abstract syntax and its visualization are given in Appendix: Specification.
The compatibility of RIF Core is currently focussed on the Semantic Web standards OWL and RDF. Sections RIF-OWL Compatibility and RIF-RDF Compatibility currently provide starting points only.
This version of the specification of RIF Core does not detail compatibility with OWL; however, this is planned for future versions. There are several expressive features of OWL that are not expressible using a standard rule language such as RIF Core. These features include OWL's (classical) negation as well as its disjunction and existential quantification (outside of rule antecedents). The RIF Working Group will investigate different approaches to achieving interoperability between OWL and RIF Core. The approaches to be investigated include, but are not limited to, (a) translating a subset of OWL to RIF Core, (b) defining a query mechanism to allow the querying of OWL knowledge bases from RIF Core rules, and (c) using a hybrid formalism that combines OWL ontologies and RIF Core rule sets, but limits their interaction in order to attain certain desirable computational features.
This version of the specification of RIF Core does not detail compatibility with RDF; however, this is planned for future versions. There are several issues that arise when combining RDF with RIF Core, which will need to be addressed by the RIF Working Group. Among these issues are the blank nodes in RDF, which correspond to existentially quantified variables (beyond those allowed in RIF Core), the meta-modeling features of RDF (e.g., using classes as instances), and the possibility to use the RDFS ontology vocabulary in arbitrary locations, thereby changing its semantics. The RIF Working Group will investigate different approaches to interoperation, including, but not limited to, (a) integrating (a subset of) the RDF semantics with RIF Core, and (b) defining a query mechanism that allows the querying of RDF knowledge bases from RIF Core rules, possibly using a SPARQL query interface. Two specific considerations are: (i) whether to treat a triple s p o as a binary predicate p(s,o) or as the arguments of a special _triple predicate _triple(s,p,o), and (ii) whether to treat blank nodes as existentially quantified variables or as "skolem" constants.
These issues and considerations also need to be taken into account when considering interoperation with OWL Full.
A syntactic specification of RIF Core is given here as the combination of the RIF Condition and RIF Rule syntaxes.
The default namespace of RIF is.
The abstract syntax of RIF Core is specified in asn06 as follows:
It is visualized by a UML diagram as follows:
Automatic asn06-to-UML transformation has been employed for this, using the available tool.
The concrete syntax, in particular an XML Schema, will be derived from this (to follow in a later Working Draft).
Anton Pevtsov wrote:
> The attached file contains the test updated according to your notes.
Thanks!
>
> Martin Sebor wrote:
>
>> Second, in the test function, I would like to use rw_match() instead
>
>
>> of Traits::compare() to verify that the contents of the buffer match
>> the
>> expected result. This will let us avoid widening the expected
>> result and enable us to display the offset of the first mismatched
>> character (if any) in the rw_assert() diagnostic message.
>
>
> Ok, but I have a note: rw_match compares symbols only, so it is possible
> that two UserChar's with different .f and equal .c parts will be
> interpreted
> as equal. Currently this affects nothing, but it might become a problem.
Yes. The long double UserChar member is there only to verify that
the library doesn't make any assumptions about the alignment of
characters (long double usually has the strictest alignment
requirement of all types). The member isn't currently being used
for anything else (except to check it's zero for eos()).
In the future we might want to make use of the long double member
to encode characters outside the extended ASCII range. If and when
we do we will also have to change rw_match() and probably also
rw_widen().
>
> Martin Sebor wrote:
>
>> Oh, and one minor nit :) If you could move the indented #ifndef
>
>
>> preprocessor directives to the left margin that would be great :)
>> I am not sure that I understand what exact did you mean under the "left
>> margin" here - 0 or 4 spaces left?
The pound sign should always be in column 0 to make directives easy
to see. The formatting convention used throughout the library is to
indent the name of the directive two spaces for each level of nesting,
like this:
#if LEVEL_0 // 0 spaces between '#' and "LEVEL_0"
# if LEVEL_1 // 2 spaces between '#' and "LEVEL_1"
# if LEVEL_2 // 4 spaces between '#' and "LEVEL_2"
# include <file> // 6 spaces
# error "text" // 6 spaces
# pragma foobar // 6 spaces
# else // back to 4 spaces
...
# endif // LEVEL_2
# endif // LEVEL_1
#endif // LEVEL_0
Code within the blocks controlled by the directives is indented
according to the usual rules (i.e., not affected by the nesting
of preprocessor directives).
Martin | http://mail-archives.apache.org/mod_mbox/stdcxx-dev/200603.mbox/%3C44283823.2040402@roguewave.com%3E | CC-MAIN-2017-13 | refinedweb | 362 | 57.5 |
On 11/23, Pekka Enberg wrote:
> (Adding some CC's.)
>
> On Sat, Nov 21, 2009 at 2:16 PM, André Goddard Rosa
> <andre.goddard@gmail.com> wrote:
> > Avoid calling kfree() under pidmap spinlock, calling it afterwards.
> >
> > Normally kfree() is very fast, but sometimes it can be slow, so avoid
> > calling it under the spinlock if we can.

kfree() is called when we race with another process which also
finds map->page == NULL, allocs the new page and takes pidmap_lock
before us. This is extremely unlikely case, right?

> > @@ -141,11 +141,12 @@ static int alloc_pidmap(struct pid_namespace *pid_ns)
> >		 * installing it:
> >		 */
> >		spin_lock_irq(&pidmap_lock);
> > -		if (map->page)
> > -			kfree(page);
> > -		else
> > +		if (!map->page) {
> >			map->page = page;
> > +			page = NULL;
> > +		}
> >		spin_unlock_irq(&pidmap_lock);
> > +		kfree(page);

And this change pessimizes (a little bit) the likely case, when
the race doesn't happen. And imho this change doesn't make the
code more readable.

But this is subjective, and technically the patch is correct
afaics.

> >		if (unlikely(!map->page))

Hmm. Off-topic, but why alloc_pidmap() does not do this right
after kzalloc() ?

Oleg.
FCNTL(2) System Calls Manual FCNTL(2)
NAME
     fcntl - file control
SYNOPSIS
     #include <fcntl.h>

     int
     fcntl(int fd, int cmd, ...);
DESCRIPTION
     fcntl() provides for control over descriptors. The argument fd is a
     descriptor to be operated on by cmd as described below. The third
     parameter is interpreted as an int by some commands, a pointer to a
     struct flock by others (see below), and ignored by the rest.

     The commands are:

     F_DUPFD          Return a new descriptor as follows:

                      o   Lowest numbered available descriptor greater than
                          or equal to arg.
                      o   The close-on-exec flag associated with the new file
                          descriptor is set to remain open across execve(2)
                          calls.

     F_DUPFD_CLOEXEC  Like F_DUPFD, but the FD_CLOEXEC flag associated with
                      the new file descriptor is set, so the file descriptor
                      is closed when execve(2) is called.

     F_GETFD          Get the close-on-exec flag associated with the file
                      descriptor fd (arg is ignored).

     F_SETFD          Set the close-on-exec flag associated with fd to arg,
                      where arg (interpreted as an int) is either 0 or
                      FD_CLOEXEC, as described above.

     F_GETFL          Get file status flags associated with the file
                      descriptor fd, as described below (arg is ignored).

     F_SETFL          Set file status flags associated with the file
                      descriptor fd to arg (interpreted as an int).

     F_SETOWN         Set the process or process group to receive SIGIO
                      signals; process groups are specified by supplying arg
                      as negative, otherwise arg is taken as a process ID.

     The flags for the F_GETFL and F_SETFL commands are as follows:

     O_ASYNC   Enable the SIGIO signal to be sent to the process group when
               I/O is possible, e.g., upon availability of data to be read.

     O_SYNC    Cause writes to be synchronous. Data will be written to the
               physical device instead of just being stored in the buffer
               cache; corresponds to the O_SYNC flag.
RETURN VALUES
     Upon successful completion, the value returned depends on cmd as
     follows:

     F_DUPFD          A new file descriptor.

     F_DUPFD_CLOEXEC  A new file descriptor.
ERRORS
     fcntl() will fail if:

     [EBADF]   fd is not a valid open file descriptor.

     [EINTR]   The argument cmd is F_SETLKW, and the function was interrupted
               by a signal.

     [EINVAL]  fd refers to a file that does not support locking.

     [EMFILE]  The argument cmd is F_DUPFD and the maximum number of open
               file descriptors permitted for the process are already in
               use.

OpenBSD 6.4                   December 16, 2014                   OpenBSD 6.4
Move last element to front of a Linked-List in C++
In this tutorial, we will learn how to move the last element to the front of a linked list in C++.
For Example:
- If the given linked list is 1->2->3->4->5
- Then, the output is 5->1->2->3->4
Algorithm:
- Firstly, traverse the linked list till the last node.
- Secondly, we will use two pointers here.
- Now, use one pointer to store the address of the last node and second pointer to store the address of the second last node.
- Do the following after the end of the loop.
- Now make the second last node as the last node.
- Then set next of the last node as head of the list.
- Finally, make the last node as the head of the list.
- The time complexity of the solution is O(s) where s is the total number of nodes in the list.
Hence, the implementation is here.
Code:
#include <iostream>
#include <bits/stdc++.h>
using namespace std;

// node class
class Node {
public:
    int dat;
    Node *nex;
};

void mvefront(Node **head)
{
    if (*head == NULL || (*head)->nex == NULL)
        return;

    Node *seclast = NULL;
    Node *last = *head;

    while (last->nex != NULL) {
        seclast = last;
        last = last->nex;
    }

    seclast->nex = NULL;
    last->nex = *head;
    *head = last;
}

void push(Node **head, int newdata)
{
    Node *newnode = new Node();
    newnode->dat = newdata;
    newnode->nex = (*head);
    (*head) = newnode;
}

// print
void printl(Node *node)
{
    while (node != NULL) {
        cout << node->dat << " ";
        node = node->nex;
    }
}

int main()
{
    Node *strt = NULL;
    push(&strt, 5);
    push(&strt, 4);
    push(&strt, 3);
    push(&strt, 2);
    push(&strt, 1);

    printl(strt);
    cout << endl;

    mvefront(&strt);
    printl(strt);

    return 0;
}
OUTPUT EXPLANATION
INPUT: 1 2 3 4 5
OUTPUT: 5 1 2 3 4
In this article we'll measure the performance of an example React app with both the Profiler tab in React DevTools, and the Profiler component.
You’ve just created a brand new React app, but you want to understand its performance characteristics before shipping it to your customers. While you can use the browser’s User Timing API to measure the rendering times of your components, there’s a better alternative created by the React team: the Profiler API, and a Profiler tab in React DevTools.

The Profiler API is the recommended way of measuring the rendering times of our components, because it’s fully compatible with features such as time-slicing and Suspense.
If we are working on our React app in development mode, we can use the Profiler tab in React DevTools to record parts of its execution, and then analyze all the updates that React made. (If we want to use the Profiler tab on a production app, we need to make some changes to our config.)
To profile our app, we just need to switch to the Profiler tab, and press the Record button to start profiling:
We’ll then perform actions on our app, and press the Record button again to stop profiling. The DevTools will show us each of the updates that happened while we were recording, using a fancy flame chart:
If you are not familiar with this way of representing performance data, you may be wondering what all these colored bars mean. Let’s break it down.
Every time any of our components render, React compares the resulting tree of components with the current one. If there are changes, React will take care of applying them to the DOM in a phase called commit.
The colored bars we’re seeing at the top are commits that happened while we were recording. The yellow/orange bars are the ones with higher rendering times, so we should probably pay extra attention to them:
If we click on one of those commits, the flame chart below will be updated, showing the components that changed in that commit as horizontal bars. The longer the bar, the more time it took for that component to render:
The chart shows the root component at the top, with its children sitting below in hierarchical order. The number shown inside each bar represents the time it took to render the component and its children. When we see something like RangeButtons (0.2ms of 1.8ms), it means that
RangeButtons took 0.2ms to render, while
RangeButtons plus its only child
ButtonGroup took 1.8ms. That means
ButtonGroup must have taken ~1.6ms to render, which is confirmed when we look at the bar below that says ButtonGroup (0.3ms of 1.6ms).
Another cool thing we can do here is click on the bar for a certain component. Not only will the flame chart focus on the selected component, but the pane on the right will also show us how many times it has rendered for the lifetime of the app:
The Profiler tab in React DevTools is a great way of inspecting how our app is performing without needing to change our code. Just by recording key interactions, we’ll be able to know where rendering time is going, and identify bottlenecks that make our app sluggish.
The Profiler Component

If we want to have programmatic access to the performance measurements of a specific component, we can use the Profiler component. It wraps part or all of our app tree, and gives us metrics on how long it took for that tree to render.
The first thing we have to do to use the Profiler component is to import it:
import React, { Profiler } from "react";
The Profiler component can then be used to wrap any part of our tree of components:
// CustomStockChart.js

const CustomStockChart = props => {
  // ...

  return (
    <Profiler id="StockChart" onRender={logTimes}>
      <StockChart>
        {/* ... */}
      </StockChart>
    </Profiler>
  );
};

const logTimes = (id, phase, actualTime, baseTime, startTime, commitTime) => {
  console.log(`${id}'s ${phase} phase:`);
  console.log(`Actual time: ${actualTime}`);
  console.log(`Base time: ${baseTime}`);
  console.log(`Start time: ${startTime}`);
  console.log(`Commit time: ${commitTime}`);
};

export default CustomStockChart;
When CustomStockChart renders, the Profiler's onRender callback will be invoked with a bunch of useful information. In our example, it'll print something like this to the console:
StockChart's mount phase:
Actual time: 7.499999995867256
Base time: 7.1249999981955625
Start time: 384888.51500000054
Commit time: 384897.5449999998
StockChart's update phase:
Actual time: 0.3500000038766302
Base time: 7.075000001175795
Start time: 385115.2050000001
Commit time: 385116.22499999974
The meaning of each of these arguments is explained in the documentation for the Profiler API. In the real world, instead of logging them to the console, you would probably be sending them to your backend in order to get useful aggregate charts.
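For instance, a minimal client-side aggregator might look like the sketch below (recordRender and the Map-based store are hypothetical names of our own, not part of React's API; the argument names follow the logTimes example above):

```javascript
// Accumulate per-component render stats from Profiler onRender calls.
const totals = new Map();

function recordRender(id, phase, actualTime) {
  const entry = totals.get(id) ?? { count: 0, total: 0 };
  entry.count += 1;
  entry.total += actualTime;
  totals.set(id, entry);
}

// Two samples, as in the console output above:
recordRender("StockChart", "mount", 7.5);
recordRender("StockChart", "update", 0.35);

const stats = totals.get("StockChart");
console.log(`renders: ${stats.count}, mean: ${(stats.total / stats.count).toFixed(3)}ms`);
```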
Anyways, be sure to spend time understanding these two new tools in your arsenal, as they’ll prove invaluable when trying to identify performance issues in your React apps!
Blanca is a full-stack software developer, currently focused on JavaScript and modern frontend technologies such as React. Blanca can be reached at her blog or @blanca_mendi on Twitter.
I hate this change :-( The code generated for something like this today:
def f():
    if 0:
        x = 1
    elif 0:
        x = 2
    elif 1:
        x = 3
    elif 0:
        x = 4
    else:
        x = 5
    print(x)
is the same as for:
def f():
    x = 3
    print(x)
No tests or jumps at all. That made the optimization an extremely efficient, and convenient, way to write code with the _possibility_ of using different algorithms by merely flipping a 0 and 1 or two in the source code, with no runtime costs at all (no cycles consumed, no bytecode bloat, no useless unreferenced co_consts members, ...). Also a zero-runtime-cost way to effectively comment out code blocks (e.g., I've often put expensive debug checking under an "if 1:" block).
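For readers who want to try the idiom being discussed, here is a runnable sketch (whether the dead branches are actually folded out of the compiled body depends on your interpreter version; dis.dis(f) will show you what was kept):

```python
import dis


def f():
    # Flip these constants to switch implementations; a constant-folding
    # compiler can drop the dead branches from the compiled body entirely.
    if 0:
        x = 1
    elif 0:
        x = 2
    elif 1:
        x = 3
    elif 0:
        x = 4
    else:
        x = 5
    return x


assert f() == 3

# Inspect the generated bytecode to see what your interpreter kept:
dis.dis(f)
```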
TTYNAME(3) OpenBSD Programmer's Manual TTYNAME(3)
NAME
ttyname, ttyname_r, isatty, ttyslot - get name of associated terminal
(tty) from file descriptor
SYNOPSIS
#include <unistd.h>
char *
ttyname(int fd);
int
ttyname_r(int fd, char *name, size_t namesize);
int
isatty(int fd);
int
ttyslot(void);
DESCRIPTION
These functions operate on the system file descriptors for terminal type
devices. These descriptors are not related to the standard I/O FILE type- null.
RETURN VALUES
The ttyname() and ttyname_r() functions return the null-terminated name
if the device is found and isatty() is true; otherwise a null pointer is
returned and errno is set to indicate the error.
The isatty() function returns 1 if fd is associated with a terminal de-
vice; otherwise it returns 0 and errno is set to indicate the error.
The ttyslot() function returns the unit number of the device file if
found; otherwise the value zero is returned.
ERRORS.
FILES
/dev/*
/etc/ttys
SEE ALSO
ioctl(2), ttys(5), dev_mkdb(8)
HISTORY
The isatty(), ttyname(), and ttyslot() functions appeared in Version 7
AT&T UNIX. The ttyname_r() function appeared in the POSIX Threads Exten-
sion (1003.1c-1995).
BUGS
The ttyname() function leaves its result in an internal static object and
returns a pointer to that object. Subsequent calls to ttyname() will mod-
ify the same object.
OpenBSD 2.6 June 4, 1993 2
Quick settings is undoubtedly one of the most popular feature in Android devices. It provides a convenient way to quickly change the device settings or key actions directly from the notification panel.
Tiles should be used for actions that are either urgently required or frequently used, and should not be used as shortcuts to launching an app.
Android Quick Settings Tiles can only have icons of a single colour. It can be tinted to either white or grey. You can handle click, double click action on the tiles but the long press is restricted only to few system apps.
Android Quick Settings API Example
In this tutorial we will see how to use the Quick Settings Tile API in Android N to register a custom tile action. The sample application will add a custom action to quick settings and when clicked it will launch an activity.
You need the following prerequisites to run this code sample.
- Android Studio version 2.0+
- A test device or emulator configuration with Android version 6.0+
Adding a tile to quick settings involves two steps. First you need to declare the Quick Settings Tile service in the AndroidManifest.xml file. Add the following snippet inside the <application> element.
<service
    android:name=".MyAppTileService"
    android:permission="android.permission.BIND_QUICK_SETTINGS_TILE">
    <intent-filter>
        <action android:name="android.service.quicksettings.action.QS_TILE" />
    </intent-filter>
</service>
Let us now extend the
TileService class to respond the Tile events and handle click action to launch an activity when user clicking on the Quick Settings Tile. You can optionally override the following methods to perform different actions when the Tile state changes.
MyAppTileService.java
import android.content.Intent;
import android.service.quicksettings.TileService;

public class MyAppTileService extends TileService {

    @Override
    public void onDestroy() {
        super.onDestroy();
    }

    @Override
    public void onTileAdded() {
        super.onTileAdded();
    }

    @Override
    public void onTileRemoved() {
        super.onTileRemoved();
    }

    @Override
    public void onStartListening() {
        super.onStartListening();
    }

    @Override
    public void onStopListening() {
        super.onStopListening();
    }

    @Override
    public void onClick() {
        super.onClick();
        // Start the main activity. FLAG_ACTIVITY_NEW_TASK is required when
        // starting an activity from a service context.
        Intent intent = new Intent(this, MainActivity.class);
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        startActivity(intent);
    }
}
Insightful example for beginners. However, it is missing other important aspects of the tile API, such as listening to tile state and updating the tile dynamically.
Adding that to this tutorial would make it precious.
Thanks Pankaj
I am trying to make a custom ListView that can track where the list is in the view.
To do this I need to set the delegate of the control, but if I do this I loose original functionality (Like ItemSelected).
I guess that Xamarin have made a delegate for the ListView and I am setting my CustomListView to a new delegate, but I can't "get" a delegate from control.delegate.
Should I override Xamarins delegate or how can I do this?
My CustomListView Renderer
public class CustomListViewRenderer : ListViewRenderer, IUITableViewDelegate, IUIScrollViewDelegate
{
    CustomListView list;

    protected override void OnElementChanged(ElementChangedEventArgs<ListView> e)
    {
        base.OnElementChanged(e);

        if (list == null)
            list = this.Element as CustomListView;

        Control.WeakDelegate = this;
    }

    [Export("scrollViewDidScroll:")]
    public void Scrolled(UIKit.UIScrollView scrollView)
    {
        System.Diagnostics.Debug.WriteLine(scrollView.ContentOffset.Y.ToString());
    }
}
Answers
@ChrisNielsen,
@JohnMiller Thank you for the quick answer
Just to be sure, is this what you mean?
And I need to do this for every method that ListView uses (At least the ones I need)?
Another thing.
The variable that is going to hold the existing delegate have to be public.
When I try to get the type of the existing delegate I get this "Xamarin.Forms.Platform.iOS.ListViewRenderer+ListViewDataSource"
I have never seen a plus operator in a type name, what is this?
@ChrisNielsen,
Yes, that is what I was thinking. The
+sign is the way reflection denotes a nested type.
@JohnMiller I can't figure out what type to use for the variable that is going to hold the existing delegate?
I can't say var oldDelegate = Control.WeakDelegate; because I need the variable to be public, and I don't know how to allocate it with the type of "Xamarin.Forms.Platform.iOS.ListViewRenderer+ListViewDataSource"
"The class ListViewDataSource is internal"
If I use "UITableViewSource" as a type for my public variable and cast the existing delegate to "UITableViewSource" I am getting the error "Exception of type 'Foundation.You_Should_Not_Call_base_In_This_Method' was thrown." when I use it in WillSelectRow
Bummer, I didn't realize it was internal.
I don't think this is possible then, unfortunately. What do you want to customize / add?
@JohnMiller I just realized that WillSelectRow is not a part of the Xamarin assembly.
It's working like a dream now
Thank you for your help!
If someone is interested in what the final code is looking like.
Oops, I didn't comprehend the error message you mentioned at first.
Glad you figured it out though!
Thanks @ChrisNielsen
By setting the Control.WeakDelegate = this you lose a lot of built in functionality. For instance if that was a grouped listiview you would notice that grouping no longer works. You technically need to wrap all the methods used by the internal datasource.
So everything that's "overridden" in ListViewDataSource needs to be exported and pointed back to the oldDelegate.
If Page1.aspx opens Page2.aspx in a window, how can I have Page1.aspx
refresh once Page2.aspx is closed?
I have a page with data on
it and I have a LinkButton set up so the user can edit that data. The
LinkButton launches another windowed page with some text fields and a
"Save" & "Cancel" button. Once one of those clicks I execute a save
and close the window OR just disregard t
I've got two aspx pages: Default.aspx and with_Image.aspx. From
Default.aspx, I tried to show a jpg image that is in with_Image.aspx. with_Image.aspx displays the jpg image without problems. But Default.aspx
doesn't display any image, only displays an "X".
On
Default.aspx:
<body> <form id="form1"
runat="server"> <div>
I wrote a test page that does a bunch of busy work in a method called at
page load. This process as I have it now takes around 12 seconds.
If I try to load another page while the first long running page is
loading, this second page doing nothing except writing out a hello world,
it doesn't load until the first long running page is finished.
Why is this the case? I would
I can't find a way to pass a variable declared in my code-behind aspx.js
file to the corresponding code-behind.aspx markup file. The error I keep
getting is this:
Parser Error Message: Code blocks are not
allowed in this file.
My Code-Behind.aspx.js looks
like this:
import System;package Test { class
CodeBehind extends System.Web
Has anyone ever run a problem where creating a new STS project in Visual
Studio via the "Add STS Reference ..." menu item yields a project that does
not contain the necessary Default.aspx and Login.aspx files? This was
working earlier and now I'm baffled.
I have been looking for a solution for hours but I can't manage to
customize my list forms. Although it is very easy to modify the form under
SPDesigner, I can't find a way to do the same in VS2008.
Does
anyone know the way to customize a form programmatically ?
Thank you very much
Hopefully the title made at least some sense.
I've been
fiddling with ASP.NET and C# for the first time today - following a
tutorial. I have comfortably got as far as this; however, when I try to
implement:
<%@ Reference
Control="~/UserInfoBoxControl.ascx" %>
and
UserInfoBoxControl userInfoBoxControl = (UserInfoBoxControl)Loa
I have created 3 partial classes and i want all of them to be grouped
under same page. Like Registration -> Registration.aspx.cs,
Registration_db.cs, Registration_validation.cs.
Is it possible
to group all these files under same category i.e registration under tree
view of solution explorer.
<%@ Page Language="C#"
MasterPageFile="~/Master Pages/LeftMenu.maste
I have to give the user the option to upload his own aspx and aspx.cs
files on to the server,and adjusting the hyperlink to point to a page
which would do the following.
Display the aspx and aspx.cs
files code onto the page without actually rendering the code.
The browser should not understand anything andwhile reading the
files to display them the method be
just moment ago I discovered strange behaviour of my visual studio
environment. I'm using ASP.net web application. When the webform is added,
and i'm trying to place button on webform, instead of declaring
button_Click event in code-behind file (WebForm.aspx.cs) this declaration
is placed in aspx.file as follows:
<script
runat="server">protected void Button2_Click(obj | http://bighow.org/tags/ASPX/1 | CC-MAIN-2017-04 | refinedweb | 606 | 67.86 |
Hi,
I'm trying to make a very simple program that declares a new type of class, that contains a vector. The idea is that I will be able to make objects of this class for different chat bots, that are just a name, and then a list of numbers.
Anyway, after scouring numerous forums I've been able to figure out how to declare the vector. But I can't figure out how to do anything with it. Based on my code (below), can someone give me some example code for how to:
1. put stuff into the vector inside the class.
2. access that stuff?
3. if I used a list of these objects, would I access them the same way?
None of the tutorials I've seen deal with using vectors inside a class, so I would really appreciate any help. Thanks.
Code:#include <iostream> #include <vector> using namespace std; class cretin { public: cretin() : d_v(3) {}; private: std::vector<int> d_v; vector<int>::iterator Iter; }; int main() { return 0; } | https://cboard.cprogramming.com/cplusplus-programming/96464-using-vector-members-class.html | CC-MAIN-2017-22 | refinedweb | 172 | 71.95 |
Using a map, I want to display a message based on the selected id [both are strings].
i do no
Hi Friend,
Try the following code:
import java.util.*;

class MapExample {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("Message1", "Hello");
        map.put("Message2", "Hello World");
        map.put("Message3", "All glitters are not gold");
        map.put("Message4", "Where there is will, there is a way");

        // Look up the message for the selected id:
        String message = map.get("Message1");
        System.out.println(message);
    }
}
Thanks
Inline::Python - Write Perl subs and classes in Python.
print "9 + 16 = ", add(9, 16), "\n"; print "9 - 16 = ", subtract(9, 16), "\n"; use Inline Python => <<'END_OF_PYTHON_CODE'; def add(x,y): return x + y def subtract(x,y): return x - y END_OF_PYTHON_CODE.
Version 0.21 provides the ability to bind to 'new-style' classes (as defined by the python PEP's 252 and 253.) See "New-Style Classes" for details.
Using Inline::Python will seem very similar to using another Inline language, thanks to Inline's consistent look and feel.
This section will explain the different ways to use Inline::Python. For more details on
Inline, see 'perld'();(source code)
Unlike Python, Perl has no exec() -- the eval() function always returns the result of the code it evaluated. eval() takes exactly one argument, the perl source code, and returns the result of the evaluation....
Calls the Perl subroutine in a void context. Guarantees that no results will be returned. If any are returned, Perl deletes them.
Calls the Perl subroutine in a scalar context. Ensures that only one element is returned from the sub. If the sub returns a list, only the last element is actually saved.
Calls the Perl subroutine in a list context. Ensures that any items returned from the subroutine are returned. This is the default for PerlSub objects.
If you are not interested in the return values, you can optimize slightly by telling Perl, and it will discard all returned values for you.
If you are not passing any arguments, you can optimize the call so that Perl doesn't bother setting up the stack for parameters.
It is possible for the Perl sub to fail, either by calling die() explicitly or by calling a non-existent sub. By default, the process will terminate immediately. To avoid this happening, you can trap the exception using the G_EVAL flag..
Accepts only expressions. Complete statements yield a syntax error. An expression is anything that can appear to the right of an '=' sign. Returns the value of the expression.
The default. Accepts arbitrarily long input, which may be any valid Python code. Always returns
undef.
Accepts exactly one statement, and prints the result to STDOUT. This is how Python works in interactive mode. Always returns
undef.
Unlike in previous releases of Inline::Python, eval_python() can now return the result of the code. As before, eval_python() is overloaded:
Evaluate the code using py_eval().
Run the given function and return the results using py_call_function().
Invoke the given method on the object using py_call_method() and return the results.. instructions.
Here are some things to watch out for:
Example:
use Inline Python => <<'END'; import mymodule class A: class B: pass END
The namespace imported into perl is ONLY that related to
A. Nothing related to
mymodule or
B is imported, unless some Python code explicitly copies variables from the mymodule namespace into the global namespace before Perl binds to it.!
All Rights Reserved. This module is free software. It may be used, redistributed and/or modified under the same terms as Perl itself.
(see) | http://search.cpan.org/~neilw/Inline-Python/Python.pod | crawl-002 | refinedweb | 511 | 67.45 |
import java.awt.Color ;23 import javax.swing.Icon ;24 25 /** A class which can provide rendering data for the tree portion an Outline,26 * such as converting values to text, providing tooltip text and icons.27 * Makes it possible to provide most of the interesting data that affects28 * display without needing to provide a custom cell renderer. An Outline29 * will use its RenderDataProvider to fetch data for <strong>all</strong>30 * its columns, so it is possible to affect the display of both property31 * columns and the tree column via this interface.32 *33 * @author Tim Boudreau34 */35 public interface RenderDataProvider {36 /** Convert an object in the tree to the string that should be used to37 * display its node */38 public String getDisplayName (Object o);39 /** Returns true of the display name for this object should use HTML 40 * rendering (future support for integration of the lightweight HTML41 * renderer into NetBeans). */42 public boolean isHtmlDisplayName (Object o);43 /** Get the background color to be used for rendering this node. Return44 * null if the standard table background or selected color should be used.45 */46 public Color getBackground (Object o);47 /** Get the foreground color to be used for rendering this node. Return48 * null if the standard table foreground or selected foreground should be49 * used. */50 public Color getForeground (Object o);51 /** Get a description for this object suitable for use in a tooltip. Return52 * null if no tooltip is desired. */53 public String getTooltipText (Object o);54 /** Get an icon to be used for this object. Return null if the look and 55 * feel's default tree folder/leaf icons should be used as appropriate. */56 public Icon getIcon (Object o);57 }58
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/netbeans/swing/outline/RenderDataProvider.java.htm | CC-MAIN-2016-44 | refinedweb | 299 | 60.95 |
How to Set up Borland's Free C Compiler for Windows
The Borland C++ compiler is a free command line compiler which allows conversion of C or C++ applications into runnable computer programs (*.exe). This compiler is extremely well used and versatile. It is also a great value for its price of zero dollars.
[edit] Steps
- Go to Borland.com.
- Click on the Downloads link and then C++ Builder under IDE Products.
- Click on Compiler, a login screen will appear.
- Login to Borland's website. If you haven't registered with Borland before, create a new username and password.
- Click on freecommandLinetools.exe and chose the save as option to save the file to your computer. Click on the OK button. The download will begin.
- Run the executable you’ve just downloaded. Specify a directory in which to store the files that the installer will install. Keep a note of this directory.
- Wait for everything to install. Once this is done you, will need to make some changes of your system environment to make the compiler work.
- Click on the start menu then click on the run item. In the text box that pops up, type NOTEPAD C:\AUTOEXEC.BAT
- Go to the last line of the text file that opens. Add the following two lines of text to the bottom of the document:
-
#set classpath=%classpath%;
#PATH=C:\BORLAND\BCC55\BIN;%PATH%#
- Look at the directory you wrote down earlier. If this is anything other than C:\BORLAND\BCC55, change the last line of the file to directory\BIN;%PATH% where directory is the directory you wrote down.
- Click on the start menu then click on the run item and type NOTEPAD.
- Add the following lines of text to the file:
-
#-IC:\BORLAND\BCC55\INCLUDE
#-LC:\BORLAND\BCC55\LIB
#
- Click on the File menu then the Save As item. Under the File Type drop box, select the All Files item, and for the file name, type in BCC32.CFG. Save this to the BIN folder of the directory where you installed the files.
- Click on the File menu then the New item
- Add the following lines of text to the new file that just opened:
-
#-LC:\BORLAND\BCC55\LIB
#
- Click on File then Save As. Under “File Type” select “all Files” and for the file name type in ILINK32.CFG. Save this to the BIN folder of the directory where you installed the files.
- Restart your computer.
- Make a simple test program to test the compiler. A simple C++ test program is a hello world program. Open up a text editor and save the following program as hello.cpp.
-
#include <iostream.h>
#int main() #{ #cout<<”Hello World”<<endl; #return 0; #}
#
- Click on the Start menu then click on the Run menu item. Type CMD into the text box and click OK. An MS-DOS prompt will pop up.
- Type CD followed by the directory where you saved hello.cpp.
- Type bcc32 hello.cpp and hit return. This will compile the program.
- When the compiler is finished a prompt will pop up. Now type in hello.exe. The program should display Hello World on the screen if all worked well.
[edit] Tips
- If typing in BCC32 when you're trying to compile the program gives you an error, it is most likely due to an error in the text files you had to create. Check AUTOEXEC.BAT, BCC32.CFG and ILINK32.CFG and make sure that the directories you entered in these files are the same as the directories where you installed the program.
[edit] Warnings
- When editing AUTOEXEC.BAT make sure you don't modify anything already in the file. Also make sure that what you put into the text file is correct. If it is not, it can cause windows to load improperly. If this happens the only solution is to reinstall windows again.
- Be aware that this is proprietary software, so if you want to change it at all, you won't be able to. If you want something you can change, you should probably go in the direction of gcc or another open source compiler | http://www.wikihow.com/Set-up-Borland%27s-Free-C-Compiler-for-Windows | crawl-002 | refinedweb | 689 | 76.01 |
On Tue, 2002-01-08 at 15:52, Andrew Morton wrote:> naah. preempt() means preempt. But the implementation> is in fact maybe_preempt(), or preempt_if_needed().Agreed. preempt has me envision various things, none of which are whatwe want. What is the difference between schedule vs preempt? Confusing.What we are calling preempt here is the same as schedule, but we checkif it is needed. So I suggest conditional_schedule, which has thebenefit of being widely used in at least three patches. schedule_if_needed, sched_if_needed, etc. both fit. Why introduce thenamespace preempt when we already have sched?sched_conditional() and sched_needed() ? Robert Love-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2002/1/8/214 | CC-MAIN-2015-11 | refinedweb | 126 | 61.73 |
Hive for Flutter — fast local storage database made with Dart
About a month ago, talking with one application developer on Flutter, there was a problem of braking the processing of small (in tens of thousands) data arrays on the user’s phone.
Many applications require data processing on the phone and, further, their synchronization with the backend. For example to-do lists, lists of any data (analyzes, notes, etc.).
It’s not at all cool when the list of just a few thousand items, when deleting one of them and then writing to the cache or searching the cache, starts to slow down.
There is a solution! Hive — NoSQL database is written in pure Dart, very fast. In addition, the advantages of Hive:
* Cross-platform — since there are no native dependencies on pure Dart — mobile, desktop, browser.
* High performance.
* Built-in strong encryption.
In the article, we will look at how to use Hive and make a simple ToDo application, which in the next article will complement authorization and synchronization with the cloud.
Example
Here we will write such an application at the end of the article, the code is posted on Github.
One of the advantages of Hive is the very good documentation — in theory, everything is there. In the article, I will simply briefly describe how to work with what and make an example.
So, to connect hive to the project, add to pubspec.yaml
dependencies:
hive: ^1.4.1+1
hive_flutter: ^0.3.0+2dev_dependencies:
hive_generator: ^0.7.0+2
build_runner: ^1.8.1
Next, init it, usually when the app starts.
await Hive.initFlutter();
In case we are writing an application for different platforms, we can do initialization with a condition. Hive_flutter package:^0.3.0+2 is just a service wrapper making working with Flutter easier.
Data types
Out of the box, Hive supports the List, Map, DateTime, BigInt, and Uint8List data types.
Also, you can create your own types and work with them using the TypeAdapters adapter. Adapters can be done by yourself or use the built-in generator (we connect it with this line hive_generator:^0.7.0+2).
For our application, we need a class for storing
class Todo {
bool complete;
String id;
String note;
String task;
}
Modify it for generator and give id number typeId: 0
import 'package:hive/hive.dart';
part 'todo.g.dart';
@HiveType(typeId: 0)
class Todo {
@HiveField(0)
bool complete;
@HiveField(1)
String id;
@HiveField(2)
String note;
@HiveField(3)
String task;
}
If in the future we need to expand the class, it is important not to disturb the order, but to add new properties further by numbers. Unique property numbers are used to identify them in the Hive binary format.
Now run the command
flutter packages pub run build_runner build --delete-conflicting-outputs
and get the generated class for our data type todo.g.dart
Now we can use this type to write and get objects of this type to \ from Hive.
HiveObject — a class to simplify object management We can add convenient methods to our Todo data type by simply inheriting it from the built-in HiveObject class
class Todo extends HiveObject {
It gives us access for two methods save() and delete(), that sometimes very convenient.
Organising data — Box
In Hive, data is stored in the boxes. This is very convenient since we can make different boxes for user settings, user interface, etc.
The Box is identified by a string. To use it, you must first open it asynchronously (this is a very fast operation). In our version
var todoBox = await Hive.openBox<Todo>('box_for_todo');
and then we can synchronously read / write data from this box.
Identification of data in the box is possible either by key or by serial number:
For example, open a box with string data and write data by key and with auto-increment
var stringBox = await Hive.openBox<String>('name_of_the_box');
By key
stringBox.put('key1', 'value1');
print(stringBox.get('key1')); // value1
stringBox.put('key2', 'value2');
print(stringBox.get('key2')); // value2
Autoincrement when save + access data by index
To record many objects, it is convenient to use boxing similarly to List. All objects in the box have an index of type autoincrement.
The methods are designed to work with this: getAt(), putAt() and deleteAt()
For the record, we simply use add () without an index.
stringBox.put('value1');
stringBox.put('value2');
stringBox.put('value3');
print(stringBox.getAt(1)); //value2
Why is it possible to work synchronously? Developers write that this is one of the strengths of Hive. When you request a recording, all Listeners are notified immediately, and recording takes place in the background. If a recording fails (which is very unlikely and in theory, you can not handle this), then Listeners are notified again. You can also use await for work.
When the box is already open, then anywhere in the application we call it
var stringBox = await Hive.box<String>('name_of_the_box');
from which it immediately becomes clear that when creating a box, the package creates singleton.
Therefore, if we do not know whether boxing is already open or not, and to check laziness, then we can use it in doubtful places
var stringBox = await Hive.openBox<String>('name_of_the_box');
- if the box is already open, this call will simply return the instance of the already open box.
If you look at the source code of the package, you can see several helper methods:
/// Returns a previously opened box.
Box<E> box<E>(String name);
/// Returns a previously opened lazy box.
LazyBox<E> lazyBox<E>(String name);
/// Checks if a specific box is currently open.
bool isBoxOpen(String name);
/// Closes all open boxes.
Future<void> close();
Devhack In general, since Flutter is a complete open source, you can climb into any methods and packages, which is often faster and more understandable than reading the documentation.
LazyBox — for big data sets
When we create a regular box, all its contents are stored in memory. This gives high performance. In such boxes, it is convenient to store user settings, some small data.
If there is a lot of data, it is better to create boxes lazily
var lazyBox = await Hive.openLazyBox('myLazyBox');
var value = await lazyBox.get('lazyVal');
When it is opened, Hive reads the keys and stores them in memory. When we request data, Hive knows the position of the data on the disk and quickly reads it.
Box Encryption
Hive supports AES-256 box data encryption.
To create a 256-bit key, we can use the built-in function
var key = Hive.generateSecureKey();
which creates the key using the Fortuna random number generator.
After key created we create the box
var encryptedBox = await Hive.openBox('vaultBox', encryptionKey: key);
encryptedBox.put('secret', 'Hive is cool');
print(encryptedBox.get('secret'));
Features:
- Only values are encrypted, keys are stored as plaintext.
- When you close the application, you can store the key using the flutter_secure_storage package or use your own methods.
- There is no built-in key validation check, therefore, in the case of a wrong key, we must program the application behavior ourselves.
Box Compression
As usual, if we delete or change data, they are written incrementally at the end of the box.
We can do compression, for example, when closing the box when exiting the application
var box = Hive.box('myBox');
await box.compact();
await box.close();
The app todo_hive_example
Ok, that’s all, we’ll write an application that we will expand later to work with the backend.
We already have a data model, we’ll make the interface simple.
Screens:
- Home screen — to-do list + all functionality
- Add Case Screen
Actions:
- + Button adds business
- Clicking on the checkmark — switching done / not done
- Swipe Anyway —
Creating an Application Step 1 — Add
We create a new application, delete comments, leave the entire structure (we need a + button.
We put the data model in a separate folder, run the command to create the generator.
Create a list of common tasks.
To build a to-do list, we use the built-in extension (yes, we added extensions (extensions) to Dart a couple of months ago), which is located here /hive_flutter-0.3.0+2/lib/src/box_extensions.dart
/// Flutter extensions for boxes.
extension BoxX<T> on Box<T> {
/// Returns a [ValueListenable] which notifies its listeners when an entry
/// in the box changes.
///
/// If [keys] filter is provided, only changes to entries with the
/// specified keys notify the listeners.
ValueListenable<Box<T>> listenable({List<dynamic> keys}) =>
_BoxListenable(this, keys?.toSet());
}
So, we create a to-do list, which itself will be updated when the box changes
body: ValueListenableBuilder(
valueListenable: Hive.box<Todo>(HiveBoxes.todo).listenable(),
builder: (context, Box<Todo> box, _) {
if (box.values.isEmpty)
return Center(
child: Text("Todo list is empty"),
);
return ListView.builder(
itemCount: box.values.length,
itemBuilder: (context, index) {
Todo res = box.getAt(index);
return ListTile(
title: Text(res.task),
subtitle: Text(res.note),
);
},
);
},
),
While the list is empty. Now when you press the + button, we’ll add the case.
To do this, create a screen with a form onto which we throw over when you press the + button.
On this screen, when you click the Add button, we call the code that adds the record to the box and throws it back to the main screen.
void _onFormSubmit() {
Box<Todo> contactsBox = Hive.box<Todo>(HiveBoxes.todo);
contactsBox.add(Todo(task: task, note: note));
Navigator.of(context).pop();
}
Everything, the first part is ready, this application is already able to add Todo. When you restart the application, all data is saved in Hive.
The link to the commit on the Github applications at this point.
Application Creation Step 2 — Switching done / not done
It’s all very simple. We use the save method that we inherited from our class Todo extends HiveObject
Fill two properties and you’re done
leading: res.complete
? Icon(Icons.check_box)
: Icon(Icons.check_box_outline_blank),
onTap: () {
res.complete = !res.complete;
res.save();
});
The link to the commit on the Github applications at this point.
Creating an application Step 3 — swipe left — delete
Here, too, everything is simple. We wrap the widget in which the case is stored in dismissable and again use the service removal method.
background: Container(color: Colors.red),
key: Key(res.id),
onDismissed: (direction) {
res.delete();
},
That’s all, we got a fully working application that stores data in a local database.
Runnable app code on the Github — | https://awaik.medium.com/hive-for-flutter-fast-local-storage-database-made-with-dart-167ad63e2d1 | CC-MAIN-2021-43 | refinedweb | 1,731 | 66.13 |
Neil Jerram <address@hidden> writes: > > an example? (c-lazy-catch #t (lambda () (mucho hairy data download using http, including continuations to suspend)) (lambda args (print-message "%s and %s went wrong" ...) ;; continue on connection or http protocol problems (including ;; http timeout), throw full error otherwise (if (not (or (eq? 'http (first args)) ;; my errors (gethost-error-try-again? args) ;; gethost errors (system-error-econn? args))) ;; ECONNREFUSED (apply throw args)))) The idea is the handler does some cleanup (print a message in this case) and then makes a decision about continuing or dying. If it's fatal then re-throw, and in that throw I'm wanting a full backtrace. If this was a plain `catch' then the re-throw loses the backtrace, and if it was a lazy-catch then you're not allowed to return, hence my c-lazy-catch which is a combination. The implementation isn't super-efficient, ;; lazy-catch, but with HANDLER allowed to return (define-public (c-lazy-catch key thunk handler) (catch 'c-lazy-catch (lambda () (lazy-catch key thunk (lambda args (throw 'c-lazy-catch (apply handler args))))) (lambda (key val) val))) I'm not sure how typical this is. It seems a reasonable desire, but maybe there's a better way to do it. I've fiddled about a bit with my overall error trap structure, from trapping each call to now a higher level overall catch. > I think it's non-negotiable that if someone has coded a (throw ...) > or an (error ...), execution cannot continue normally past that > throw or error Yes, that'd create havoc, I only meant continue after the `catch' / `lazy-catch' form. | http://lists.gnu.org/archive/html/guile-devel/2006-01/msg00074.html | CC-MAIN-2016-26 | refinedweb | 275 | 62.58 |
TlsLiteCherry?
TlsLiteCherry? is a little module that enables SSL for CherryPy. It uses TLS Lite, a pure Python implementation of the SSL/TLS stuff.
A little comment from Daniel McNair:
To be quite honest, the only reason I did this is because M2Crypto didn't compile on my Python 2.4-based Gentoo box. Ergo, I had to find another way to skin the same cat, and essentially copied Tim Evans' SslCherry? module, translated the M2Crypto-dependent bit to use TLS Lite instead, and voila.
Usage
from cherrypy import cpg import tlslite_cherry ... cpg.root = Root() tlslite_cherry.start(configFile='foo.conf')
Your config file needs to contain at least this:
[server] sslKeyFile=privkey.pem sslCertFile=cert.pem
Here, sslKeyFile is your private key, which needs to be unencrypted (i.e., no passphrase). The sslCertFile is your public key.
Security
The weak link here is TLS Lite. I have no idea how secure it is. I may be using it in an insecure way. Hard to say. Here's the take-home message: the security ramifications of using this module have not been analyzed. Use at your own risk.
Bugs
About the same as SslCherry?, actually:
- The URLs given in cpg.request.base and cpg.request.browserUrl still start with "http://" instead of the correct "https://".
- If your user forgets and types "http://" instead of "https://" the server will crash. It happens somewhere in TLS Lite when it figures out that the connection isn't encrypted. A SyntaxError is thrown, which is appropriate; there is some SyntaxError in the stream. However, SyntaxError usually means you have a problem in your Python code (as a parse-time error, not a runtime error) so I'm loathe to catch it. It should be relatively easy to catch that error and return an unencrypted socket instead, however, which could be used to refer the browser to the "https://" version of the page...
- Unix domain sockets are probably not supported. I don't know this for sure, but I haven't tried it and since SslCherry? doesn't do it, I don't expect I will either.
- SSL error handling is done very poorly if at all.
Attachments
- tlslite_cherry.py (4.1 kB) -
The module of which I speak., added by d.mc@myrealbox.com on 03/17/07 20:15:05. | http://tools.cherrypy.org/wiki/TlsLite | CC-MAIN-2014-42 | refinedweb | 386 | 69.18 |
Heads up! To view this whole video, sign in with your Courses account or enroll in your free 7-day trial. Sign In Enroll
Deployment Options8:26 with James Churchill
Regardless of your specific development, testing, and deployment workflow, you'll need a way to apply pending migrations to databases hosted in shared environments. Let's review the deployment options that Code First Migrations makes available to us..4 -b deployment-options
Finished Project Code
Here's how to get the code for the finished project:
git clone git@github.com:treehouse-projects/dotnet-ef-migrations.git cd dotnet-ef-migrations
Additional Project Ideas
Now that you've completed this course, it's a great time to reinforce what you've learned by working on a project.
Idea #1: Update the Comic Book Library Manager Console App
Update the Comic Book Library Manager console app—from the Entity Framework Basics course—to use Code First Migrations.
Here's the link to download the completed Comic Book Library Manager project files.
Idea #2: Design Your Own Entity Data Model
Design your own entity data model, enable migrations, then practice making model changes and adding migrations. You could purposely start with an overly simplistic model, so that you'll have plenty of model changes to make once you've enabled migrations.
Additional Treehouse Entity Framework Courses
Treehouse has two more EF courses (one that's available now and another that'll be available soon).
Additional Resources
Treehouse Community
If you ever have a question about EF or get stuck on something, check out the Treehouse Community. You’ll find there other students and a great team of moderators who can help.
An easy option for applying, 0:00 any pending migrations to databases hosted in shared environments is to configure 0:01 EF to use the migrate database to latest version, database initializer. 0:06 We're currently setting the database initializer to know, in order to disable 0:13 database initialization in favor of using the code first migrations, 0:18 update database command to update our database. 0:22 Let's start. 0:26 With adding a new database initializer then, we'll add preprocessor directives so 0:27 we can configure a different database initializer per build configuration. 0:32 Allow the new database initializer below our current one. 0:39 Database.SetInitializer (new 0:44 MigrateDatabaseToLatestVersion) Of type <Context,. 0:46 In addition to the context class type, the migrate database to latest version, 0:53 generic class also requires us to specify the code first 0:58 migrations configuration class type, which in our case is Configuration. 1:02 Don't forget to add the using directive, 1:10 for the ComicBookGalleryModel.Migrations namespace. 1:13 Now, let's add our preprocessor directives. 1:21 Right before our first database initializer, let's add an if directive 1:26 that checks for the presence of the debug symbol, #if DEBUG. 1:31 Then, in between the two database initializers, add an else directive. 1:36 #else. 1:42 And, just after the second database initializer, add an end if directive, 1:46 #endif. 1:51 Let's test our new database initializer by downgrading our database to 1:54 the previous migration in running our app using the release build configuration 1:58 to see if the database will be upgraded to the latest migration. 2:03 First, let's downgrade our database, 2:11 update-database -targetmigration AddBioPropertyToArtist. 2:15 Using the SQL Server Object Explorer, 2:22 we can verify that the comic book average rating 2:27 table that is created by our latest migration isn't in the list of tables. 
2:32 Now, let's change our build configuration to release and 2:41 start the application without debugging by pressing Ctrl F5. 2:44 Here's our list of comic books. 2:56 Press Enter twice to continue execution. 2:58 Refresh the Tables folder, And, 3:03 here's our ComicBookAverageRating table. 3:11 If we view the data in the migration history table, 3:15 we can confirm that all three of our migrations have been applied. 3:18 By using the database initializer to migrate the database to the latest 3:26 migration is easy to do and works well with automated workflows, 3:30 it's not always possible to use. 3:34 In some situations, developers don't have direct access to test and 3:37 production environment databases or the servers that they're hosted on. 3:42 Instead, developers have to coordinate any updates to those databases 3:47 through a database administrator or DBA. 3:52 Luckily, there's a workaround. 3:56 We can use the Update-Database command to generate a SQL script which we can 3:58 hand off to our DBA, who can then review and apply it to the database. 4:03 To start, let's downgrade our database to the previous migration. 4:09 I'll press the up arrow key to recall the previously executed command. 4:13 Then, we can run the update database command with the script flag. 4:20 update-database -script. 4:24 When the command completes, it'll open the generated script into a new tab for 4:33 us to review. 4:37 Here's to create table SQL statement to create the ComicBookAverageRating table. 4:43 Our SQL statement, to migrate the AverageRating data from the comic book 4:49 table to the ComicBookAverageRating table. 4:52 To ALTER TABLE statements, one to drop the ComicBookAverageRating column and 5:00 another to add the comic book ID foreign key to the ComicBookAverageRating table. 5:05 Then lastly, 5:13 an answer statement to add this migration to the MigrationHistory table. 
5:14 In addition to generating a script to apply the latest migration to 5:19 the database, we can also generate an item potent script 5:23 that can upgrade a database currently at any version. 5:27 To the latest version with an item potent script, 5:31 we can safely executed against any version of the database. 5:34 As a contains logic to determine which migrations have been applied and 5:38 which haven't. 5:42 To generate an item potent script, we specify the source migration 5:44 flag along with the dollar sign initial database migration name. 5:49 Update database -script 5:54 -sourcemigration $initialDatabase. 6:00 In the generated script, we can see that it includes a query to get the current 6:15 migration from the MigrationHistory table. 6:19 And, the conditionals to only apply migrations that 6:25 haven't been previously applied. 6:28 Regardless of the approach that you end up using to apply migrations to databases 6:31 in shared environments, it's important to take the necessary time to review and 6:36 test migrations before they are applied to production databases. 6:41 Even if you are able to recover from a failed migration, the resulting 6:46 application downtime can be costly and a frustrating experience for your users. 6:50 Let's recap this section. 6:55 We made a change to our model and took a closer look at using the Add-Migration 6:58 command to create migrations. 7:02 We saw how to use the update-database command to downgrade the database and 7:04 an example of modifying a migration. 7:09 We discussed workflows and environments, and learned about the deployment options 7:12 available to us for applying migrations to databases in shared environments. 7:17 Thanks for hanging out with me and 7:22 learning about Entity Framework Code First Migrations. 7:24 Now, it's a great time to reinforce what you've learned 7:27 by using Code First Migrations on a practice project. 
7:30 For example, you can take the comic book library manager console application 7:33 from an earlier Treehouse course on EF and update it to use migrations. 7:38 Or, you could design and create your own entity data model and 7:43 add migrations to your project as you evolve your model. 7:47 If you haven't already, 7:51 be sure to check out Treehouse's other courses on entity framework. 7:52 See the teachers notes for links to courses that cover the basics of EF and 7:56 how to use EF within in ASP.NET MVC application. 8:01 There are also other great online and offline EF resources available. 8:05 Again, see the teacher's notes for a list of these resources and 8:09 don't forget if you ever have a question about EF or 8:13 get stuck on something, check out the Treehouse community. 8:17 You'll find, there are other students and 8:20 a great team of moderators who can help, see you next time. 8:22 | https://teamtreehouse.com/library/deployment-options | CC-MAIN-2020-45 | refinedweb | 1,507 | 51.89 |
0 Members and 2 Guests are viewing this topic.
def TWI_SLAVE_enable_sub_20568(...):LINK 0x0; # R0=TWI SLAVE ADDRB[FP + 0x8] = R0;R1 = 0x7f (X); # R1=0x7fR0 = R0 & R1; # discriminate lower 7 bitsP1 = 0x1404 (X); # P1=0x1404P1.H = 0xffc0; # P1=0xffc01404 -> TWI_CONTROLR1 = W[P1] (Z);R0 = R0 | R1;W[P1] = R0.L;R0 = W[P1] (Z);BITSET (R0, 0x7); # bit 7W[P1] = R0.L; # set TWI slave address bit 7 is *always* highP1 = 0x1410 (X); # P1=0x1410P1.H = 0xffc0; # P1=0xffc03950 -> TWI_SLAVE_ADDRR0 = W[P1] (Z);R1 = 0x51 (X); # R1=0x51R0 = R0 | R1;W[P1] = R0.L; # set TWI slave to R0 / 0x51 (101001b)P1 = 0x1428 (X); # P1=0x1428P1.H = 0xffc0; # P1=0xffc01428 -> TWI_FIFO_CTLR0.L = 0x1; # R0=0x10001W[P1] = R0.L; # flush receiver FIFOR0.L = 0x0; # R0=0x10000W[P1] = R0.L; # flush transmitter FIFOP1 = 0x1424 (X); # P1=0x1424P1.H = 0xffc0; # P1=0xffc01424 -> TWI_INT_MASKR0 = W[P1] (Z);R1 = 0xc2 (X); # R1=0xc2R0 = R0 | R1;W[P1] = R0.L; # set TWI interrupt mask to: # -> Receive Fifo SIM (RCVSERVM) # -> Transmit Fifo SIM (XMTSERV) # -> Slave Transfer Complere SIM (SCOMPM)P1 = 0x1408 (X); # P1=0x1408P1.H = 0xffc0; # P1=0xffc01408 -> TWI_SLAVE_CTLR0 = W[P1] (Z);BITSET (R0, 0x2); # bit 2W[P1] = R0.L; # STDVAL - enable Slave Transmit Data ValueUNLINK;RTS; # end function
SDRAM:2162C R0 = 0x64 (X); # R0=0x64SDRAM:21630 CALL TWI_SLAVE_Setup_Int_CallBack_sub_2076E;
I copied the FRAM. I have used an Arduino UNO with level shifter. The trial period is probably stored at 0x7F6 and 0x7F7. There seems to be no checksum.
and the serial determines the bandwidth limit ...
what option makes u believe that ? i assume same fw is used for a ds2072 as for a ds2202 - looking at what seems likes options (TRIGGER, MEMORY .. ) i didnt spot something that looks like fw upgrade option.
The minutes remaining from data value is -0.179x + 2630.9.
I entered a code to renew the trial options. Now my start screen looks like in the picture. It really seems to be bandwidth options in the firmware.
when you write the values in FRAM 0x7f6 and 0x7f7 do you have than the corresponding times for the trial options?
If you don't have the 2ns time base setting and the 100MHz BW limit per channel, it's unlikely you have the real 200MHz BW - regardless of whatever the option screen says.
QuoteIf you don't have the 2ns time base setting and the 100MHz BW limit per channel, it's unlikely you have the real 200MHz BW - regardless of whatever the option screen says.Right. I have a rise time of 3.2ns measured. Which corresponds to approximately 109MHz.
you have confirmed that all of the options (except BW) are turned on and working (as opposed to just displayed minutes changed), right?
does counter go to ffff or 0000 after expire?
Quoteyou have confirmed that all of the options (except BW) are turned on and working (as opposed to just displayed minutes changed), right?Yes. | https://www.eevblog.com/forum/testgear/sniffing-the-rigol_s-internal-i2c-bus/25/ | CC-MAIN-2019-43 | refinedweb | 482 | 76.42 |
I have an aplication that I have been working on in flash CS4, it is an air application. When I close the application using the main windows close button it appears to close but is still running when I look at processes.
If I close it by calling exit from the main menu it closes properly. I guess I am probably missing something obvious but I am not sure what.
I set up a listener for the application
NativeApplication.nativeApplication.addEventListener(Event.EXITING,onExiting);
and I setup a loop in the onExiting function to dispatch a closing event to application windows check to see if it's prevented and if not close the window.
However on testing the onExiting function is not being called when I close the application from the main window clsoe button, only form when I use the menu.
To try and see what is causing the problem I did a trace on the loop for open application windows. there are 2 one is the main program and the other is untitled both are normal windows. Since only use one window in my application I don't understand why there are 2. But I guess it's the unclosed window that is keeping the program from closing.
any help will be appreaciated.
Ok this is starting to look like a bug in air or flash cs4.
I have stripped the program down to a window that opens , an exiting event listener and an onExiting function.
The onExiting function is called intermittently I have not been able to work out when it will occur and when it wont, it seems to occur about 1 in 10.
here is the output when run
[SWF] wizard.swf - 2040 bytes after decompression
setting listener
Test Movie terminated.
when it works
[SWF] wizard.swf - 2040 bytes after decompression
setting listener
exit called for
Test Movie terminated
I have attached the files can someone else test them?
Ok this is really really frustrating. This blo*dy forum system is cr*p I have lost my message three times and if this gets through its a miracle.
anyway my message
It is beginning to look like a but in AIR or flash CS4, I have stripped the program I was working on down to a window, an event listener and a function to handle the event.
here is the contents of the wizard.as file
package {
import flash.display.*;
import flash.events.*;
import flash.display.NativeWindow;
import flash.desktop.NativeApplication;
public class wizard extends Sprite {
public function wizard():void
{
NativeApplication.nativeApplication.autoExit = true;
NativeApplication.nativeApplication.addEventListener(Event.EXITING, onExiting);
trace("setting listener");
}
function onExiting(e:Event):void
{
trace("exit called for ");
}
}
}
I will try to attach the files but it has not work 3 times so far.
When I close the wizard via the close button I get intermittent operation of the event, it works about 1 in 10 times I run the program. I have not been able to tell when it will or will not work.
here is the output
when it works
[SWF] wizard.swf - 2040 bytes after decompression
setting listener
exit called for
Test Movie terminated.
when it doesn't
[SWF] wizard.swf - 2040 bytes after decompression
setting listener
Test Movie terminated. | https://forums.adobe.com/thread/488271 | CC-MAIN-2017-39 | refinedweb | 542 | 64.81 |
FWIDE(3P) POSIX Programmer's Manual FWIDE(3P)
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
fwide — set stream orientation
#include <stdio.h> #include <wchar.h> int fwide(FILE *stream, int mode);
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2017 defers to the ISO C standard. The fwide() function shall determine the orientation of the stream pointed to by stream. If mode is greater than zero, the function first attempts to make the stream wide-oriented. If mode is less than zero, the function first attempts to make the stream byte-oriented. Otherwise, mode is zero and the function does not alter the orientation of the stream. If the orientation of the stream has already been determined, fwide() shall not change it. The fwide() function shall not change the setting of errno if successful. Since no return value is reserved to indicate an error, an application wishing to check for error situations should set errno to 0, then call fwide(), then check errno, and if it is non-zero, assume an error has occurred.
The fwide() function shall return. The following sections are informative.
None.
A call to fwide() with mode set to zero can be used to determine the current orientation of a stream.
None.
None.WIDE(3P)
Pages that refer to this page: wchar.h(0p) | https://www.man7.org/linux/man-pages/man3/fwide.3p.html | CC-MAIN-2022-27 | refinedweb | 275 | 66.13 |
Raw OpenGL bindingsPosted Friday, 19 March, 2010 - 13:49 by todd in
Hello,
I've been enjoying playing with OpenTK: the enum's for input parameters are particularly sweet.
HOWEVER... I am trying to port some moderately gnarly Java/JOGL code into C# with minimal changes. JOGL exposes a fairly raw-looking interface: input parameters take the form GL.GL_CONSTANT_NAME instead of using cool enum's. Is there some way to gain access to the raw OpenGL bindings while using OpenTK? Is there a different, more basic library I should consider for OpenGL access?
It looks like Tao might do the trick... but their website's down and Wikipedia reports their last update as being two years ago!
Thanks for any advice.
--Todd
Re: Raw OpenGL bindings
Did you look at OpenTK.Graphics.OpenGL.All ?
Re: Raw OpenGL bindings
Nice. Thanks, kvark.
Is there a place where I can make the OpenGL calls and pass in the entries from OpenGL.All?
E.g., instead of "GL.MatrixMode(MatrixMode.Projection)" I'd prefer to use "GL.MatrixMode(All.Projection)". Picky, I know.
Re: Raw OpenGL bindings
The final word is up to Fiddler, but I don't see it's possible to call standard OpenTK methods with raw enum binding.
You'll need to cast it to proper enums all the way...
Re: Raw OpenGL bindings
OpenTK is using strongly-typed enums by design, in fact this is one of its main features.
What you can is reference OpenTK.Compatibilty.dll and use the Tao.OpenGl namespace, which uses raw constants (Gl.GL_NAMED_CONSTANT). Chances are you will still need to port some things over from Java, but the process should be much less painful.
Having ported Tao applications to OpenTK before, I can say that this is not easy - which is why OpenTK now includes an intact copy of the Tao.OpenGl namespace.
Re: Raw OpenGL bindings
Fabulous. That's exactly what I was looking for.
Yeah, I knew the enum's were one of the primary features of OpenTK. And they're pretty nice when writing your own stuff from scratch. But the Tao bindings are making my porting project much more pleasant.
Basically, at the top of each file that uses OpenGL, I include:
using GL = Tao.OpenGl.Gl;
And the real hackish beauty comes when the code I'm porting goes to set a color based on a C#
Color. Instead of extracting values, converting them to floats, and worrying about the order, I just call OpenTK:
OpenTK.Graphics.OpenGL.GL.Color4(color);
Thank you!
For what it's worth, a colleague and I ended up porting javax.vecmath, too. vecmath and OpenTK seem to hold their matrices a little differently: in the end I decided it was simplest to give my legacy code access to the math library it'd been using all along. vecmath is GPL2 + classpath (thank you, Sun!), so I'd be happy to pass along my (rather wooden and almost completely untested) translation. | http://www.opentk.com/node/1655 | CC-MAIN-2015-40 | refinedweb | 500 | 75.5 |
Question: What are the potential faults in using the IRR as
What are the potential faults in using the IRR as a capital budgeting technique? Given these faults, why is this technique so popular among corporate managers?
Relevant QuestionsWhy is the NPV considered to be theoretically superior to all other capital budgeting techniques? Reconcile this result with the prevalence of the use of IRR in practice. How would you respond to your CFO if she instructed ...Kenneth Gould is the general manager at a small-town newspaper that is part of a national media chain. He is seeking approval from corporate headquarters (HQ) to spend $20,000 to buy some Macintosh computers and a laser ...For each of the projects shown in the following table, calculate the internal rate of return (IRR). Butler Products has prepared the following estimates for an investment it is considering. The initial cash outflow is $20,000, and the project is expected to yield cash inflows of $4,400 per year for seven years. The firm ...Does depreciation affect cash flow in a positive or negative manner? From a net present value perspective, why is accelerated depreciation preferable? Is it acceptable to utilize one depreciation method for tax purposes and ...
Post your question | http://www.solutioninn.com/what-are-the-potential-faults-in-using-the-irr-as | CC-MAIN-2017-39 | refinedweb | 208 | 56.66 |
"Mark D. Baushke" <address@hidden> writes: > 1) bugfix. readlink() on AIX 4.3 returns a > negative link_length and sets errno == ERANGE > when the length of the link is greater than > buf_size. Shouldn't this incompatibility be fixed in readlink.c rather than xreadlink? I would expect other users of the readlink module to be affected by it. Perhaps Bruno can comment, since he did readlink.c. > 2) enhancement. The size passed to xreadlink > could be the maximum value and adding one > could wrap it to zero which would be a bad > idea. > > 3) enhancement. Allow for at least one attempt > at the maximum allowed buffer size if > doubling the current buf_size pushes over the > limit. These are both good suggestions, but there's a problem with that patch: it assumes SSIZE_MAX < SIZE_MAX, but POSIX does not require this. I installed the following patch instead. 2004-11-02 Paul Eggert <address@hidden> * xreadlink.c (MAXSIZE): New macro. (xreadlink): Use it instead of SSIZE_MAX. Ensure initial buffer size does not exceed MAXSIZE. Avoid cast. As suggested by Mark D. Baushke in <>, if readlink fails with buffer size just under MAXSIZE, try again with MAXSIZE. Index: xreadlink.c =================================================================== RCS file: /cvsroot/gnulib/gnulib/lib/xreadlink.c,v retrieving revision 1.15 retrieving revision 1.16 diff -p -u -r1.15 -r1.16 --- xreadlink.c 7 Aug 2004 00:09:39 -0000 1.15 +++ xreadlink.c 2 Nov 2004 20:17:37 -0000 1.16 @@ -41,6 +41,8 @@ # define SSIZE_MAX ((ssize_t) (SIZE_MAX / 2)) #endif +#define MAXSIZE (SIZE_MAX < SSIZE_MAX ? SIZE_MAX : SSIZE_MAX) + #include "xalloc.h" /* Call readlink to get the symbolic link value of FILENAME. @@ -56,14 +58,15 @@ xreadlink (char const *filename, size_t { /* The initial buffer size for the link value. A power of 2 detects arithmetic overflow earlier, but is not required. */ - size_t buf_size = size + 1; + size_t buf_size = size < MAXSIZE ? 
size + 1 : MAXSIZE; while (1) { char *buffer = xmalloc (buf_size); - ssize_t link_length = readlink (filename, buffer, buf_size); + ssize_t r = readlink (filename, buffer, buf_size); + size_t link_length = r; - if (link_length < 0) + if (r < 0) { int saved_errno = errno; free (buffer); @@ -71,15 +74,18 @@ xreadlink (char const *filename, size_t return NULL; } - if ((size_t) link_length < buf_size) + if (link_length < buf_size) { buffer[link_length] = 0; return buffer; } free (buffer); - buf_size *= 2; - if (! (0 < buf_size && buf_size <= SSIZE_MAX)) + if (buf_size <= MAXSIZE / 2) + buf_size *= 2; + else if (buf_size < MAXSIZE) + buf_size = MAXSIZE; + else xalloc_die (); } } | http://lists.gnu.org/archive/html/bug-gnulib/2004-11/msg00015.html | CC-MAIN-2013-20 | refinedweb | 388 | 68.47 |
![if gte IE 9]><![endif]><![if gte IE 9]><![endif]><![if gte IE 9]><![endif]><![if gte IE 9]><![endif]>
I am having some issues when I implement a watch dog timer in my application on top of ZStack-1.4.3-1.2.1 in CC2430 when I enable power saving.
To give a brief overview on how i implement my algorithm, during the application's initialization (i.e SampleApp_Init) phase i created a periodic timer event using osal_start_timerEx that will fire every 750ms. After doing so I call the fundtion WatchdogTimerEnable setting it to the maximum setting of 1second.
I expect that before the watch dog timer expires my periodic event will first be triggered. And when that event is triggered it will just reset the watch dog timer and restart the periodic timer once again. This sound simple and should work properly. I have confirm this working when i disabled power saving. However, when power saving is enabled I can see that the device will be detected by the coordinator but after sometime it will just reset and loops this way forever.
I would then like to seek any advice from those who have implemented a watch dog timer on their application on top of Zstack-1.4.3-1.2.1 on how you are able to prevent the watch dog from preventing to reset the system on a non-fault scenario.
Cheers,
Grant
Grant Hatamosa
Senior Software Engineer
What Power Mode are you setting the device to in the CC2430?
Please note that if you are in PM3, none of the clocks in the device will be enabled. This is indicated in Table 38 in the CC2430 Datasheet available on the CC2430 Product Folder.
Brandon
In reply to BrandonAzbell:?
Are there extra steps that are needed when waking up for the watchdog timer (other than the periodic watchdog clear timer)?
In reply to GrantHatamosa:
Grant?
To my knowledge, implementing what is indicated in Section 13.13.3 of the CC2430 datasheet should be sufficient. As this section indicates, when the device is in PM2 or PM3, the watchdog timer is not running. However, it is return to the same state as it was when going from a PM0 or PM1 state to a PM2 or PM3 state, but start counting from zero.
Grant
Are there extra steps that are needed when waking up for the watchdog timer (other than the periodic watchdog clear timer)?
Not that I am aware of.
If that is the case, has it ever been tried to enable the wathcdog timer with the ZStack-1.43-1.2.1 and have POWER_SAVING enabled?
If so, is there a sample application to show how it is done?
I have reconfigured my algorithm and placed the code to enable and clear the watchdog timer in osal_start_system.
Here's my modified osal_start_system:
void osal_start_system( void ){#if !defined ( ZBIT ) for(;;) // Forever Loop#endif { uint8 idx = 0;
// Enable the watchdog timer before entering the main processing loop WatchDogEnable(WDTIMX);
Hal_ProcessPoll(); // This replaces MT_SerialPoll() and osal_check_timer().);
events = (tasksArr[idx])( idx, events ); // Reset the watchdog timer after every event processed WatchDogEnable(WDTIMX);
HAL_ENTER_CRITICAL_SECTION(intState); tasksEvents[idx] |= events; // Add back unprocessed events to the current task. HAL_EXIT_CRITICAL_SECTION(intState); }#if defined( POWER_SAVING ) else // Complete pass through all task events with no activity? { // Reset the watchdog timer before going to sleep WatchDogEnable(WDTIMX);
osal_pwrmgr_powerconserve(); // Put the processor/system into sleep
// Reset the watchdog timer after waking up from sleep WatchDogEnable(WDTIMX); }#endif }}
Grant
If that is the case, has it ever been tried to enable the wathcdog timer with the ZStack-1.43-1.2.1 and have POWER_SAVING enabled?
I'm not personally aware, but isn't to say it hasn't been done. Based on the sample code you provided, it looks reasonable.
One thing you can add to verify if the watchdog timer is actually resetting the device is to add some checks early on in your application code to read the SLEEP Mode Control register. Bits[4:3] provide status bits which indicate the cause of the last reset. If the SLEEP[4:3] = 10b, it indicates a watchdog timer reset.
Grant,
I'm curious. Are you still setting a timer (osal_start_timerEx) in your AppInit with your modified osal_start_system? Or are you just effectively configuring and clearing the Watchdog with your 2 calls to WatchDogEnable in your osal_start_system?
Thanks!
wazilian - King of Wazil
Thanks ! Grant
I have tried your solution to the watchdog, it works also on the Z-stack Zstack-cc2530-2.4.0-1.40.
Even in the PM1 state, though the 32.768KHZ oscillator is on, cc2530 seems staying in PM1 longer
than 1s without reset. For the init&joining stages, these two phase is time-critical, occasionally 1s is
not enough, the cc2530 resets repeatedly at 1s. | http://e2e.ti.com/support/wireless_connectivity/proprietary_sub_1_ghz_simpliciti/f/156/t/16543 | CC-MAIN-2016-22 | refinedweb | 804 | 65.22 |
alphaKValphaKV
A simple key-value database, fast and lightweight. Support Linux/Mac/IOS/Android
How to UseHow to Use
#include "alphakv.hpp" USING_NS_HIVE; AlphaKV* pKey = new AlphaKV(); bool result = pKey->openDB("mydb"); result = pKey->set(key, keyLength, value, valueLength); uint32 valueLength; char* value = pKey->get(key, keyLength, &valueLength);
MoreMore
You can redefine the ALPHAKV_HASH_SLOT to get a suitable hash size, which is define in alphakv.hpp file
#define ALPHAKV_HASH_SLOT 65536
You can change the minimum size of storage cost for every value saving, which is define in file.hpp file
#define BLOCK_SIZE 64
If you want to know more, read the source code 233 | http://www.ctolib.com/alphaKV.html | CC-MAIN-2017-43 | refinedweb | 104 | 53 |
Tagged Content List
Blog Post:
64-Bit Team Build Fails on Silverlight Projects (and How to Fix It)
Jimmy Lewis
If...
on
22 Jan 2010
Blog Post:
MSBuilding Web Site Projects doesn’t Copy Silverlight XAP (and How to Fix It)
Jimmy Lewis
If...
on
2 Nov 2009
Blog Post:
Refactoring Root namespace Breaks Silverlight Applications (and How to Fix It)
Jimmy Lewis
One scenario that happens fairly often for new apps is to create a new project and then part way through decide you want to rename it, or at least the default namespace. With Silverlight apps, when you do this and try to run your project, you’ll see an error: Line: 56 Error: Unhandled Error in...
on
28 Oct 2009
Blog Post:
Silverlight 3 Support in VS2010 Beta 2
Jimmy Lewis
This. ...
on
27 Oct Tip: How to customize the default Hosting Web Project Choices for Silverlight Applications
Jimmy Lewis
Currently...
on
15 Aug 2008
Blog Post:
Silverlight Tip: Reflecting Objects Horizontally or Vertically using ScaleTransform
Jimmy Lewis
While...
on
1 Aug 2008
Page 1 of 1 (7 items)
© 2013 Microsoft Corporation.
Trademarks
Privacy & Cookies
Report Abuse
5.6.426.415 | http://blogs.msdn.com/b/jamlew/archive/tags/silverlight/ | CC-MAIN-2013-20 | refinedweb | 194 | 67.18 |
KJALB/PodSimplify-0.04 - 16 Aug 1996 19:14:10 GMT
Converts files from pod format (see perlpod) to HTML format. It can automatically generate indexes and cross-references, and it keeps a cache of things it knows how to cross-reference....SHAY/perl-5.26.1 - 22 Sep 2017 21:30:18 GMT
Converts files from pod format (see perlpod) to HTML format. It can automatically generate indexes and cross-references, and it keeps a cache of things it knows how to cross-reference....XSAWYERX/perl-5.28.0 - 23 Jun 2018 02:05:28 GMT
Converts files from pod format (see perlpod) to HTML format. Its API is fully compatible with Pod::Html. If input files support Pod::L10N::Format extended format, Pod::L10N::Html do some more works to print translated text pretty well....ARGRATH/Pod-L10N-1.03 - 23 Jan 2015 19:58:21 GMT
PETDANCE/Apache-Pod-0.22 - 17 Sep 2005 03:55:19 GMT
"Pod::Tree::HTML" reads a POD and translates it to HTML. The source and destination are fixed when the object is created. Options are provided for controlling details of the translation. The "translate" method does the actual translation. For conveni...MANWAR/Pod-Tree-1.29 - 02 Dec 2018 10:33:42 GMT
THIS IS PRELIMINARY SOFTWARE! The "Marek::" namespace is strictly preliminary until a regular place in CPAN is found. Marek::Pod::HTML converts one or more Pod documents into individual HTML files. This is meant to be a successor of Tom Christiansen'...MAREKR/MarekPodHtml-0.49 - 16 Jan 2003 20:53:56 GMT t...KHW/Pod-Simple-3.35 - 29 Nov 2016 23:27:21 GMT
MARKLE/Apache2-Pod-0.27 - 17 Mar 2009 18:39:58 GMT
This module does same as Pod::Html module but make html tree. Read Pod::Html document for more detail. You may want to look at Pod::ProjectDocs before using this module which may be more fun to you....TOMYHERO/Pod-Html-HtmlTree-0.92 - 01 Apr 2007 10:01:36 GMT
This class is a formatter that takes PseudoPod and renders it as wrapped html. This is a subclass of Pod::PseudoPod and inherits all its methods....CHROMATIC/Pod-PseudoPod-0.18 - 09 Aug 2011 23:18:27 GMT
HTML view of a Pod Object Model....NEILB/Pod-POM-2.01 - 07 Nov 2015 21:05:42 GMT
HTML can be embedded in POD, using: =for HTML <b>some html</b> or: =begin HTML <b>some html</b> <i>some more html</i> ... =end HTML This is explained in perlpod. text snippet 1. foo bar +-----------------+----------------+---------------------+------...PERLANCAR/Perl-Examples-0.094 - 29 Nov 2018 02:45:20 GMT ...SBURKE/Pod-HTML2Pod-4.05 - 30 Dec 2004 07:49:03 GMT - 17 Aug 2002 02:07:57 GMT
So you've just created a great new Perl module distribution including several *.pm files? You've added nice POD documentation to each of them and now you'd like to view it nicely formatted in a web browser? And you'd also like to navigate between all...MSCHILLI/Pod-HtmlTree-0.97 - 25 Apr 2004 19:26:11 GMT
This module generates small and clean HTML from POD file. Unlike other pod2html modules, this module enables you to get section based html snippets. For example, you can get an html for SYNOPSIS section by following code: my $html = $pod->section('SY...TYPESTER/Pod-HTMLEmbed-0.04 - 05 Sep 2011 07:57:03 GMT
MANWAR/Pod-Tree-1.29 - 02 Dec 2018 10:33:42 GMT | https://metacpan.org/search?q=module%3APod%3A%3AHTML | CC-MAIN-2018-51 | refinedweb | 592 | 67.25 |
Reverse first ‘K’ items in a Queue in C++
Welcome to this tutorial where you are going to learn “How to reverse first ‘K’ items in a Queue in C++”. So without wasting any time let’s begin this tutorial with a quick revision of “Queue” in C++.
- So Queue is a linear structure of data where items can be inserted and deleted in the FIFO fashion. Here the term FIFO stands for First In First Out.
- In other words, we can say that the first element to be inserted in a queue will be the first element to be deleted from the queue.
- In a queue, we can only insert elements from the rear end of the queue. On the other hand, deletions will only take place from the front of the queue.
Now let’s understand how our program should work with an example,
(Click on the above link for viewing the example)
- Now you should have understood what we are trying to achieve. So before discussing the approach for this problem, I request you to write a scratch code on a paper.
Approach for the above problem
- Let’s now talk about the solution to the problem. Since we need to reverse 1st ‘K’ items we are going to use a “Stack” of size ‘K’.
- Then we will push first ‘K’ items of the queue into the stack. At this point, the queue will contain the remaining elements.
- Now if we try to pop these ‘K’ items from the stack we will get the elements in reverse order than they are inserted.
- So we will pop the ‘K’ items from the stack and insert it into the queue. Finally, we need to rearrange the queue by deleting “N-K” elements from the queue and inserting it again in the queue. (where ‘N’ is the size of the queue)
Click on the link below to understand the approach diagrammatically,
Now let’s move towards the actual code for this problem…
C++ Program for reversing first ‘K’ elements of a Queue
// C++ program to reverse // first 'K' items of a Queue #include <bits/stdc++.h> using namespace std; void reverse_Que(queue<int>* Q , int k) { // If any of these conditions is true then return if(Q->empty() == true || k > Q->size() || k <= 0) return; stack<int> st; //Stack container for storing first 'K' items // Push first 'K' into the stack and pop from queue for(int i=0 ; i<k ; i++) st.push(Q->front()), Q->pop(); // Now pop all items of stack and push it into the queue while(!st.empty()) Q->push(st.top()), st.pop(); // Finally, rearrange the queue by // pushing "N-K" elements into the queue // and simultaneously pop from the front of queue for(int j=0 ; j<(Q->size()-k) ; j++) Q->push(Q->front()), Q->pop(); } int main(void) { queue<int> Q; int K = 3; // Reverse the first 3 elements of the queue 'Q' // Insert the elements of queue Q.push(1); Q.push(2); Q.push(3); Q.push(40); Q.push(50); // Call the function reverse_Que // with parameters as reference of queue 'Q' // and integer value 'K' reverse_Que(&Q , K); cout<<"Q = { "; // Display the updated queue 'Q' while(!Q.empty()) cout<<Q.front()<<" ", Q.pop(); cout<<"}"; return 0; }
So the output that you will get after running the above program is,
Q = { 3 2 1 40 50 }
So that’s all for this tutorial. I hope you have understood the approach and program for this problem.
Thanks for going through this tutorial.
Comment down for any queries.
Also read: Reverse a string using stack in C++ | https://www.codespeedy.com/reverse-first-k-items-in-a-queue-in-cpp/ | CC-MAIN-2022-27 | refinedweb | 606 | 75.34 |
import "github.com/juju/juju/charmstore"
charmid.go client.go info.go jar.go latest.go
MacaroonURI is use when register new Juju checkers with the bakery.
var MacaroonNamespace = checkers.NewNamespace(map[string]string{MacaroonURI: ""})
MacaroonNamespace is the namespace Juju uses for managing macaroons.
type CharmID struct { // URL is the url of the charm. URL *charm.URL // Channel is the channel in which the charm was published. Channel csparams.Channel // Metadata is optional extra information about a particular model's // "in-theatre" use use of the charm. Metadata map[string]string }
CharmID encapsulates data for identifying a unique charm from the charm store.
type CharmInfo struct { // OriginalURL is charm URL, including its revision, for which we // queried the charm store. OriginalURL *charm.URL // Timestamp indicates when the info came from the charm store. Timestamp time.Time // LatestRevision identifies the most recent revision of the charm // that is available in the charm store. LatestRevision int // LatestResources is the list of resource info for each of the // charm's resources. This list is accurate as of the time that the // charm store handled the request for the charm info. LatestResources []resource.Resource }
CharmInfo holds the information about a charm from the charm store. The info relates to the charm at a particular revision at the time the charm store handled the request. The resource revisions associated with the charm at that revision may change at any time. Note, however, that the set of resource names remains fixed for any given charm revision.
LatestURL returns the charm URL for the latest revision of the charm.
type CharmInfoResult struct { CharmInfo // Error indicates a problem retrieving or processing the info // for this charm. Error error }
CharmInfoResult holds the result of a charm store request for info about a charm.
func LatestCharmInfo(client Client, charms []CharmID, metadata map[string]string) ([]CharmInfoResult, error)
LatestCharmInfo returns the most up-to-date information about each of the identified charms at their latest revision. The revisions in the provided URLs are ignored. The returned map indicates charm URLs where the macaroon has been updated. This updated macaroon should be stored for use in any further requests. Note that this map may be non-empty even if this method returns an error (and the macaroons should be stored).
type CharmRevision struct { // Revision is newest revision for the charm. Revision int // Err holds any error that occurred while making the request. Err error }
CharmRevision holds the data returned from the charmstore about the latest revision of a charm. Note that this may be different per channel.
Client wraps charmrepo/csclient (the charm store's API client library) in a higher level API.
func NewCachingClient(cache MacaroonCache, server string) (Client, error)
NewCachingClient returns a Juju charm store client that stores and retrieves macaroons for calls in the given cache. The client will use server as the charmstore url.
NewCustomClientAtURL returns a juju charmstore client that relies on the passed-in httpbakery.Client to store and retrieve macaroons. If not nil, the client will use server as the charmstore url, otherwise it will default to the standard juju charmstore url.
func (c Client) GetResource(req ResourceRequest) (data ResourceData, err error)
GetResource returns the data (bytes) and metadata for a resource from the charmstore.
func (c Client) LatestRevisions(charms []CharmID, modelMetadata map[string]string) ([]CharmRevision, error)
LatestRevisions returns the latest revisions of the given charms, using the given metadata.
ListResources returns a list of resources for each of the given charms.
ResourceInfo returns the metadata for the given resource from the charmstore.
type MacaroonCache interface { Set(*charm.URL, macaroon.Slice) error Get(*charm.URL) (macaroon.Slice, error) }
MacaroonCache represents a value that can store and retrieve macaroons for charms. It is used when we are requesting data from the charmstore for private charms.
type ResourceData struct { // ReadCloser holds the bytes for the resource. io.ReadCloser // Resource holds the metadata for the resource. Resource resource.Resource }
ResourceData represents the response from the charmstore about a request for resource bytes.
type ResourceRequest struct { // Charm is the URL of the charm for which we're requesting a resource. Charm *charm.URL // Channel is the channel from which to request the resource info. Channel csparams.Channel // Name is the name of the resource we're asking about. Name string // Revision is the specific revision of the resource we're asking about. Revision int }
ResourceRequest is the data needed to request a resource from the charmstore.
Package charmstore imports 17 packages (graph) and is imported by 255 packages. Updated 2020-01-18. Refresh now. Tools for package owners. | https://godoc.org/github.com/juju/juju/charmstore | CC-MAIN-2020-10 | refinedweb | 758 | 57.77 |
This is an Entrance chart on an iPhone. It was generated by a Groovy script using the Entrance Java API to plot data from a RAWS weather station.
The script takes advantage of certain Apple iPhone CSS extensions and Joe Hewitt’s iUI framework to make the page behave a lot like a native iPhone application. If you have an iPhone, you can see the result here.
Groovy and the Entrance API
Groovy is a scripting language for the Java platform. To use the Entrance Java API from Groovy scripts, drop two jars, ‘entrance.jar’ and ‘cotta.jar’, into the Groovy ‘lib’ directory. If you are running Groovy from the command line, put them in:
(your home dir)/.groovy/lib
To use the Entrance API with the Groovy servlet, put them in:
(your webapps)/groovy/WEB-INF/lib
Scripts you write using the Entrance desktop application can also be run from the Entrance API using EntrancePlot. These two methods take the path to an Entrance script file as an argument:
public static String generatePNG( Connection con,
String pngFile, String scriptFile,
boolean deleteIfExists)
public static String generateTempPNG(Connection con,
String scriptFile)
and this method takes the script itself as a String argument:
public static String generatePNG(Connection con,
String pngFile,
String script);
Here’s the usual pattern for drawing Entrance charts with Groovy:
import com.dbentrance.entrance.EntrancePlot
import groovy.sql.Sql
sql = Sql.newInstance(
"jdbc:mysql://localhost/test", (user),
(password), "com.mysql.jdbc.Driver")
script = """
... a PLOT script ...
"""
EntrancePlot.generatePNG(sql.getConnection(),
"/usr/local/tomcat/webapps/groovy/bndc1_files/wind.png",
script);
... then refer to the PNG file in HTML output ...
Note that this technique will work for any database with a JDBC driver. The complete groovy script for the example above is here.
Sizing the charts
Use PAGE and FRAME in an Entrance script to size charts for mobile devices like the iPhone:
PAGE 0 0 (width) (height)
FRAME (x0) (y0) (x1) (y1)
Both take pixel coordinates. In pixel coordinates the upper left hand corner of the display is (0,0). Coordinates increase going down and to the right.
Use PAGE to set the width and height of the output bitmap, and use FRAME to locate the chart on the page.
The first two coordinates of FRAME specify the upper left hand corner of the chart. The last two specify its lower right hand corner.
Once you have a PAGE and FRAME combination you like, you can copy and paste it into other scripts. These sizes worked well for fitting two charts at a time in iUI on an iPhone screen:
PAGE 0 0 275 150
FRAME 46 5 245 125
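Since FRAME takes opposite corners in pixel coordinates, the chart area it describes is simple to compute. The helper below is purely illustrative (frame_size is not part of the Entrance API):

```python
def frame_size(x0, y0, x1, y1):
    """Width and height, in pixels, of the chart area described by FRAME."""
    return (x1 - x0, y1 - y0)

# The "two charts per screen" FRAME above: 199 x 120 pixels inside a 275 x 150 page.
print(frame_size(46, 5, 245, 125))  # -> (199, 120)
```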
To completely fill the iPhone display in landscape mode without iUI, you could use:
PAGE 0 0 320 356
Fonts
Generally, the default font sizes should work for both the screen and iPhone output. If you find you need to tune font sizes, use FONT and TITLEFONT:
FONT (font family) (style) (size)
TITLEFONT (font family) (style) (size)
Both take a font family, which can be one of the platform-independent names: Serif, SansSerif, Monospaced, Dialog, and DialogInput, or another font family name supported by your platform. ‘Style’ can be PLAIN, BOLD, or ITALIC, and the font ’size’ is given in points. Generally speaking, a font size that is legible on the display screen will also be legible on the iPhone.
Black Backgrounds
Use BACKGROUND, FOREGROUND and GRIDLINES to change colors:
BACKGROUND BLACK
FOREGROUND GRAY
GRIDLINES GRAY
A Note About the iPad
You can also use Entrance API to generate charts for the iPad. The iPad display is 1024 x 768 and 132 pixels per inch. I haven’t tested it, but I think you will want to double Entrance font sizes that look good on the desktop display when sending them to iPad. More to come on the iPad.
The latest version of Entrance can be downloaded from the main page.
Types and Type Declarations
One of the design principles of Deno is no non-standard module resolution. When
TypeScript is type checking a file, it only cares about the types for the file,
and the
tsc compiler has a lot of logic to try to resolve those types. By
default, it expects ambiguous module specifiers with an extension, and will
attempt to look for the file under the
.ts specifier, then
.d.ts, and
finally
.js (plus a whole other set of logic when the module resolution is set
to
"node"). Deno deals with explicit specifiers.
This can cause a couple problems though. For example, let's say I want to
consume a TypeScript file that has already been transpiled to JavaScript along
with a type definition file. So I have
mod.js and
mod.d.ts. If I try to
import
mod.js into Deno, it will only do what I ask it to do, and import
mod.js, but that means my code won't be as well type checked as if TypeScript
was considering the
mod.d.ts file in place of the
mod.js file.
To support this, Deno has two solutions (plus a variation that enhances them), covering the two main situations you will come across:
- As the importer of a JavaScript module, I know what types should be applied to the module.
- As the supplier of the JavaScript module, I know what types should be applied to the module.
The latter case is the better one: when you, as the provider or host of the module, supply the types, everyone can consume the module without having to figure out how to resolve its types. But when consuming modules that you may not have direct control over, the ability to do the former is also required.
Providing types when importing
If you are consuming a JavaScript module and you have either created types (a
.d.ts file) or have otherwise obtained the types you want to use, you can
instruct Deno to use that file when type checking instead of the JavaScript file
using the
@deno-types compiler hint.
@deno-types needs to be a single-line double-slash comment; when used, it affects the next import or re-export statement.
For example, if I have a JavaScript module
coolLib.js and I had a separate
coolLib.d.ts file that I wanted to use, I would import it like this:
// @deno-types="./coolLib.d.ts"
import * as coolLib from "./coolLib.js";
When type checking
coolLib and your usage of it in the file, the
coolLib.d.ts types will be used instead of looking at the JavaScript file.
The pattern matching for the compiler hint is somewhat forgiving: it accepts quoted and non-quoted values for the specifier, as well as whitespace before and after the equals sign.
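As a rough model of that forgiveness, a matcher accepting optional quotes and optional whitespace around the equals sign might look like the following sketch (an illustration only, not Deno's actual parser):

```python
import re

# Accepts: optional spaces after //, spaces around '=', and single/double/no quotes.
HINT = re.compile(r'^//\s*@deno-types\s*=\s*["\']?([^"\'\s]+)["\']?\s*$')

examples = [
    '// @deno-types="./coolLib.d.ts"',
    "//@deno-types = './coolLib.d.ts'",
    '// @deno-types=./coolLib.d.ts',
]
specifiers = [HINT.match(line).group(1) for line in examples]
print(specifiers)  # -> ['./coolLib.d.ts', './coolLib.d.ts', './coolLib.d.ts']
```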
Providing types when hosting
If you are in control of the source code of the module, or you are in control of how the file is hosted on a web server, there are two ways to inform Deno of the types for a given module, without requiring the importer to do anything special.
Using the triple-slash reference directive
Deno supports using the triple-slash reference
types directive, which adopts
the reference comment used by TypeScript in TypeScript files to include other
files and applies it only to JavaScript files.
For example, if I had created
coolLib.js and alongside it I had created my
type definitions for my library in
coolLib.d.ts I could do the following in
the
coolLib.js file:
/// <reference types="./coolLib.d.ts" />
// ... the rest of the JavaScript ...
When Deno encounters this directive, it would resolve the
./coolLib.d.ts file
and use that instead of the JavaScript file when TypeScript was type checking
the file, but still load the JavaScript file when running the program.
ℹ️ Note this is a repurposed directive for TypeScript that only applies to JavaScript files. Using the triple-slash reference directive of
types in a TypeScript file works under Deno as well, but has essentially the same behavior as the path directive.
Using X-TypeScript-Types header
Similar to the triple-slash directive, Deno supports a header for remote modules
that instructs Deno where to locate the types for a given module. For example, a
response for
might look something like this:
HTTP/1.1 200 OK
Content-Type: application/javascript; charset=UTF-8
Content-Length: 648
X-TypeScript-Types: ./coolLib.d.ts
When seeing this header, Deno would attempt to retrieve
and use that when type checking the original
module.
Using ambient or global types
Overall it is better to use module/UMD type definitions with Deno, where a
module expressly imports the types it depends upon. Modular type definitions can
express
augmentation of the global scope
via the
declare global in the type definition. For example:
declare global {
  var AGlobalString: string;
}
This would make
AGlobalString available in the global namespace when importing
the type definition.
In some cases though, when relying on other existing type libraries, it may not be possible to use modular type definitions. Therefore there are ways to include arbitrary type definitions when type checking programs.
Using a triple-slash directive
This option couples the type definitions to the code itself. By adding a
triple-slash
types directive near the type of a module, type checking the file
will include the type definition. For example:
/// <reference types="./types.d.ts" />
The specifier provided is resolved just like any other specifier in Deno, which means it requires an extension, and is relative to the module referencing it. It can be a fully qualified URL as well:
/// <reference types=" />
Using a configuration file
Another option is to use a configuration file that is configured to include the
type definitions, by supplying a
"types" value to the
"compilerOptions". For
example:
{ "compilerOptions": { "types": [ "./types.d.ts", " "/Users/me/pkg/types.d.ts" ] } }
Like the triple-slash reference above, the specifier supplied in the
"types"
array will be resolved like other specifiers in Deno. In the case of relative
specifiers, it will be resolved relative to the path to the config file. Make
sure to tell Deno to use this file by specifying
--config=path/to/file flag.
Type Checking Web Workers
When Deno loads a TypeScript module in a web worker, it will automatically type
check the module and its dependencies against the Deno web worker library. This
can present a challenge in other contexts like
deno cache,
deno bundle, or
in editors. There are a couple of ways to instruct Deno to use the worker
libraries instead of the standard Deno libraries.
Using triple-slash directives
This option couples the library settings with the code itself. By adding the following triple-slash directives near the top of the entry point file for the worker script, Deno will now type check it as a Deno worker script, irrespective of how the module is analyzed:
/// <reference no- /// <reference lib="deno.worker" />
The first directive ensures that no other default libraries are used. If this is
omitted, you will get some conflicting type definitions, because Deno will try
to apply the standard Deno library as well. The second instructs Deno to apply
the built-in Deno worker type definitions plus dependent libraries (like
"esnext").
When you run a
deno cache or
deno bundle command or use an IDE which uses
the Deno language server, Deno should automatically detect these directives and
apply the correct libraries when type checking.
The one disadvantage of this is that it makes the code less portable to other
non-Deno platforms like
tsc, as it is only Deno which has the
"deno.worker"
library built into it.
Using a configuration file
Another option is to use a configuration file that is configured to apply the library files. A minimal file that would work would look something like this:
{ "compilerOptions": { "target": "esnext", "lib": ["deno.worker"] } }
Then when running a command on the command line, you would need to pass the
--config path/to/file argument, or if you are using an IDE which leverages the
Deno language server, set the
deno.config setting.
If you also have non-worker scripts, you will either need to omit the
--config
argument, or have one that is configured to meet the needs of your non-worker
scripts.
Important points
Type declaration semantics
Type declaration files (
.d.ts files) follow the same semantics as other files
in Deno. This means that declaration files are assumed to be module declarations
(UMD declarations) and not ambient/global declarations. It is unpredictable
how Deno will handle ambient/global declarations.
In addition, if a type declaration imports something else, like another
.d.ts
file, its resolution follows the normal import rules of Deno. Many of the .d.ts files that are generated and available on the web may not be compatible with Deno.
To overcome this problem, some solution providers, like the Skypack CDN, will automatically bundle type declarations just like they provide bundles of JavaScript as ESM.
Deno Friendly CDNs
There are CDNs which host JavaScript modules that integrate well with Deno.
Skypack.dev is a CDN which provides type declarations (via the
X-TypeScript-Types header) when you append ?dts as a query string to your remote module import statements. For example:
import React from "
Behavior of JavaScript when type checking
If you import JavaScript into TypeScript in Deno and there are no types, even if
you have
checkJs set to
false (the default for Deno), the TypeScript
compiler will still access the JavaScript module and attempt to do some static
analysis on it, to at least try to determine the shape of the exports of that
module to validate the import in the TypeScript file.
This is usually never a problem when importing a "regular" ES module, but in some cases if the module has special packaging, or is a global UMD module, TypeScript's analysis of the module can fail and cause misleading errors. The best thing to do in this situation is to provide some form of types using one of the methods mentioned above.
Internals
While it isn't required to understand how Deno works internally to be able to leverage TypeScript with Deno well, it can help to understand how it works.
Before any code is executed or compiled, Deno generates a module graph by parsing the root module, and then detecting all of its dependencies, and then retrieving and parsing those modules, recursively, until all the dependencies are retrieved.
For each dependency, there are two potential "slots" that are used. There is the
code slot and the type slot. As the module graph is filled out, if the module is
something that is or can be emitted to JavaScript, it fills the code slot, and
type only dependencies, like
.d.ts files fill the type slot.
When the module graph is built, and there is a need to type check the graph, Deno starts up the TypeScript compiler and feeds it the names of the modules that need to be potentially emitted as JavaScript. During that process, the TypeScript compiler will request additional modules, and Deno will look at the slots for the dependency, offering it the type slot if it is filled before offering it the code slot.
This means when you import a
.d.ts module, or you use one of the solutions
above to provide alternative type modules for JavaScript code, that is what is
provided to TypeScript instead when resolving the module.
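The code-slot/type-slot preference described above can be sketched as a tiny resolver (a conceptual model in Python; Deno's real implementation is part of its module graph, written in Rust):

```python
def resolve(dep):
    """Offer the type slot if it is filled, otherwise fall back to the code slot."""
    return dep.get("type") or dep["code"]

# Hypothetical module graph: one dependency with both slots, one with code only.
graph = {
    "./coolLib.js": {"code": "./coolLib.js", "type": "./coolLib.d.ts"},
    "./plain.js": {"code": "./plain.js"},
}

print(resolve(graph["./coolLib.js"]))  # -> ./coolLib.d.ts (the type slot wins)
print(resolve(graph["./plain.js"]))    # -> ./plain.js
```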
Uria Levi (Courses Plus Student, 1,760 Points)
Python OOP help (solved)
Hello everyone,
I made some code inspired by Kenneth (almost the same code as he does). When I try to run the code I get an error but I can't understand the problem.
I don't know how to copy as VSC mode so I will just copy-paste:
File #1 "character.py"
class Character:
    def __init__(self, name="", *args, **kwargs):
        if not name:
            raise ValueError("'Name' is required!")
        self.name = name
        for key, value in kwargs.items():
            setattr(self, key, value)
---------------------------------------------------------------------
File #2"newbie.py"
import random
from attributes_newbie import Fast, Agile
from characters import Character

class Newbie(Character, Agile, Fast):
    def fall(self):
        return self.fast and random.randint(0, 1)
---------------------------------------------------------------------
File #3"attributes_newbie.py"
import random
class Fast:
    fast = True

    def __init__(self, fast=True, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fast = fast

    def fastest(self, speed):
        return self.fast and speed > 5

class Agile:
    def __init__(self, agile=True, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.agile = agile

    def block(self):
        return self.agile and random.randint(0, 1)
---------------------------------------------------------------------
file #4 "play.py"
from newbie import Newbie
Me = Newbie(name="Me")
print(Me.agile)
print(me.fast)
------------------------------------------------------------------------------------------------------------------------------------------
The error I get:
File "Game/play.py", line 1, in <module>
from newbie import Newbie
File "/home/treehouse/workspace/Game/newbie.py", line 3, in <module>
from attributes_newbie import Fast, Agile
File "/home/treehouse/workspace/Game/attributes_newbie.py", line 19
return self.agile and random.randint(0,1)
Edit:
I forgot to add bool() at line 19 of file 3 and line 8 of file 2.
I receive now a tab error:
Traceback (most recent call last):
  File "c:/Users/ulevi/Desktop/Py Game/play.py", line 1, in <module>
    from newbie import Newbie
  File "c:\Users\ulevi\Desktop\Py Game\newbie.py", line 3, in <module>
    from attributes_newbie import Fast, Agile
  File "c:\Users\ulevi\Desktop\Py Game\attributes_newbie.py", line 19
    return self.agile and bool(random.randint(0,1))
    ^
TabError: inconsistent use of tabs and spaces in indentation
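For reference, this error can be reproduced with any block whose indentation mixes tabs and spaces; Python 3 rejects it at compile time:

```python
# One line of the body is indented with a tab, the next with spaces.
src = "def f():\n\tx = 1\n        return x\n"

try:
    compile(src, "<example>", "exec")
    caught = None
except TabError as e:
    caught = type(e).__name__

print(caught)  # -> TabError
```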
Edit 2:
I've found a few typos I made in the code because I changed it a lot during coding, and I've found a solution: I had to change some code and now it's running smoothly.
PEP: Python3 and UnicodeDecodeError
This is a PEP describing the behaviour of Python3 on UnicodeDecodeError. It's a draft, don't hesitate to comment it. This document suppose that my patch to allow bytes filenames is accepted which is not the case today.
While I was writing this document I found potential problems in Python3. So here is a TODO list (things to be checked):
- FIXME: When bytearray is accepted or not?
- FIXME: Allow bytes/str mix for shutil.copy*()? The ignore callback will get bytes or unicode?
Can anyone write a section about bytes encoding in Unicode using escape sequences?
What is the best tool to work on a PEP? I hate email threads, and I would prefer SVN / Mercurial / anything else.
Python3 and UnicodeDecodeError for the command line, environment variables and filenames
Introduction
Python3 does its best to give you texts encoded as a valid unicode characters strings. When it hits an invalid bytes sequence (according to the used charset), it has two choices: drops the value or raises an UnicodeDecodeError. This document present the behaviour of Python3 for the command line, environment variables and filenames.
Example of an invalid bytes sequence: ::
    >>> str(b'\xff', 'utf8')
    UnicodeDecodeError: 'utf8' codec can't decode byte 0xff (...)
whereas the same byte sequence is valid in another charset like ISO-8859-1: ::
    >>> str(b'\xff', 'iso-8859-1')
    'ÿ'
Default encoding
Python uses "UTF-8" as the default Unicode encoding. You can read the default charset using sys.getdefaultencoding(). The "default encoding" is used by PyUnicode_FromStringAndSize().
A function sys.setdefaultencoding() exists, but it raises a ValueError for charset different than UTF-8 since the charset is hardcoded in PyUnicode_FromStringAndSize().
Command line
Python creates a nice unicode table for sys.argv using mbstowcs(): ::
    $ ./python -c 'import sys; print(sys.argv)' 'Ho hé !'
    ['-c', 'Ho hé !']
On Linux, mbstowcs() uses the LC_CTYPE environment variable to choose the encoding. On an invalid bytes sequence, Python quits directly with exit code 1. Example with a UTF-8 locale:
    $ python3.0 $(echo -e 'invalid:\xff')
    Could not convert argument 1 to string
Environment variables
Python uses "_wenviron" on Windows, which contains unicode (UTF-16-LE) strings. On other OSes, it uses the "environ" variable and the UTF-8 charset. It drops a variable if its key or value is not convertible to unicode. Example:
    env -i HOME=/home/my PATH=$(echo -e "\xff") python
    >>> import os; list(os.environ.items())
    [('HOME', '/home/my')]
Both key and values are unicode strings. Empty key and/or value are allowed.
Python ignores invalid variables, but values still exist in memory. If you run a child process (eg. using os.system()), the "invalid" variables will also be copied.
Filenames
Introduction
Python2 uses byte filenames everywhere, but it was also possible to use unicode filenames. Examples:
- os.getcwd() gives bytes whereas os.getcwdu() always returns unicode
os.listdir(unicode) creates bytes or unicode filenames (fallback to bytes on UnicodeDecodeError), os.readlink() has the same behaviour
- glob.glob() converts the unicode pattern to bytes, and so create bytes filenames
- open() supports bytes and unicode
Since listdir() mix bytes and unicode, you are not able to manipulate easily filenames:
    >>> path = u'.'
    >>> for name in os.listdir(path):
    ...     print repr(name)
    ...     print repr(os.path.join(path, name))
    ...
    u'valid'
    u'./valid'
    'invalid\xff'
    Traceback (most recent call last):
      ...
      File "/usr/lib/python2.5/posixpath.py", line 65, in join
        path += '/' + b
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xff (...)
Python3 supports both types, bytes and unicode, but disallow mixing them. If you ask for unicode, you will always get unicode or an exception is raised.
You should only use unicode filenames, except if you are writing a program fixing file system encoding, a backup tool, or your users are unable to fix their broken system.
Windows
Microsoft Windows since Windows 95 only uses Unicode (UTF-16-LE) filenames. So you should only use unicode filenames.
Non Windows (POSIX)
POSIX OSes like Linux use bytes for historical reasons. In the best case, all filenames will be encoded as valid UTF-8 strings and Python creates valid unicode strings. But since system calls use bytes, the file system may return an invalid filename, or a program can create a file with an invalid filename.
An invalid filename is a string which can not be decoded to unicode using the default file system encoding (which is UTF-8 most of the time).
A robust program will have to use only the bytes type to make sure that it can open / copy / remove any file or directory.
Filename encoding
Python use:
- "mbcs" on Windows
- or "utf-8" on Mac OS X
- or nl_langinfo(CODESET) on OS supporting this function
- or UTF-8 by default
"mbcs" is not a valid charset name, it's an internal charset saying that Python will use the function MultiByteToWideChar() to decode bytes to unicode. This function uses the current codepage to decode bytes string.
You can read the charset using sys.getfilesystemencoding(). The function may return None if Python is unable to determine the default encoding.
PyUnicode_DecodeFSDefaultAndSize() uses the default file system encoding, or UTF-8 if it is not set.
On UNIX (and other operating systems), it's possible to mount different file systems using different charsets. sys.getfilesystemencoding() will be the same for the different file systems, since this encoding is only used between Python and the kernel, not between the kernel and the file system, which may use a different charset.
Display a filename
Example of a function formatting a filename to display it to human eyes: ::
    from sys import getfilesystemencoding

    def format_filename(filename):
        return str(filename, getfilesystemencoding(), 'replace')
Example: format_filename(b'r\xffport.doc') gives 'r�port.doc' with the UTF-8 encoding.
Functions producing filenames
Policy: for unicode arguments: drop invalid bytes filenames; for bytes arguments: return bytes
- os.listdir()
- glob.glob()
This behaviour (silently dropping invalid filenames) is motivated by the fact that if a directory of 1000 files contains only one invalid filename, listdir() would otherwise fail for the whole directory. Or if your directory contains 1000 python scripts (.py) and just one other document with an invalid filename (eg. r�port.doc), glob.glob('*.py') would fail even though all the .py scripts have valid filenames.
Policy: for a unicode argument: raise UnicodeDecodeError on an invalid filename; for a bytes argument: return bytes
- os.readlink()
Policy: return the current directory as unicode, or raise UnicodeDecodeError
- os.getcwd()
Policy: always returns bytes
- os.getcwdb()
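The split between the two variants is easy to verify (standard Python 3 behavior):

```python
import os

cwd = os.getcwd()    # always str (unicode)
cwdb = os.getcwdb()  # always bytes

print(type(cwd).__name__, type(cwdb).__name__)  # -> str bytes
```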
Functions for filename manipulation
Policy: raise TypeError on bytes/str mix
- os.path.*(), eg. os.path.join()
- fnmatch.*()
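This policy can be observed directly in Python 3; mixing str and bytes path components raises TypeError:

```python
import os.path

try:
    os.path.join('backup', b'r\xffport.doc')  # str first arg, bytes second
    error = None
except TypeError as e:
    error = type(e).__name__

print(error)  # -> TypeError
```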
Functions accessing files
Policy: accept both bytes and str
- io.open()
- os.open()
- os.chdir()
- os.stat(), os.lstat()
- os.rename()
- os.unlink()
- shutil.*()
os.rename(), shutil.copy*(), and shutil.move() allow using bytes for one argument and unicode for the other.
bytearray
In most cases, bytearray() can be used as bytes for a filename.
Unicode normalisation
Unicode characters can be normalized in 4 forms: NFC, NFD, NFKC or NFKD. Python never normalizes strings (nor filenames). No operating system normalizes filenames. So users using different norms would be unable to retrieve their files. Don't panic! All users use the same norm.
Use unicodedata.normalize() to normalize a unicode string.
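For example, 'é' exists both as a single composed code point (NFC) and as 'e' plus a combining accent (NFD); the two render identically but compare unequal until normalized:

```python
import unicodedata

composed = '\u00e9'     # é, one code point (NFC form)
decomposed = 'e\u0301'  # 'e' + combining acute accent (NFD form)

print(composed == decomposed)                                # -> False
print(unicodedata.normalize('NFC', decomposed) == composed)  # -> True
print(unicodedata.normalize('NFD', composed) == decomposed)  # -> True
```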
Operating systems, development tools, and professional services
for connected embedded systems
mq_getattr()
Get a message queue's attributes
Synopsis:
#include <mqueue.h> int mq_getattr( mqd_t mqdes, struct mq_attr* mqstat );
Arguments:
- mqdes
- The message-queue descriptor, returned by mq_open(), of the message queue that you want to get the attributes of.
- mqstat
- A pointer to a mq_attr structure where the function can store the attributes of the message queue. For more information, see below.

Description:

The mq_getattr() function determines the current attributes of the queue referenced by mqdes. These attributes are stored in the location pointed to by mqstat.
The fields of the mq_attr structure are as follows:
- long mq_flags
- The options set for this open message-queue description (i.e. these options are for the given mqdes, not the queue as a whole). This field may have been changed by call to mq_setattr() since you opened the queue.
- O_NONBLOCK -- no call to mq_receive() or mq_send() will ever block on this queue. If the queue is in such a condition that the given operation can't be performed without blocking, then an error is returned, and errno is set to EAGAIN.
- long mq_maxmsg
- The maximum number of messages that can be stored on the queue. This value was set when the queue was created.
- long mq_msgsize
- The maximum size of each message on the given message queue. This value was also set when the queue was created.
- long mq_curmsgs
- The number of messages currently on the given queue.
- long mq_sendwait
- The number of threads currently waiting to send a message. This field was eliminated from the POSIX standard after draft 9, but has been kept as a QNX Neutrino extension. A nonzero value in this field implies that the queue is full.
- long mq_recvwait
- The number of threads currently waiting to receive a message. Like mq_sendwait, this field has been kept as a QNX Neutrino extension. A nonzero value in this field implies that the queue is empty.
Returns:
-1 if an error occurred (errno is set). Any other value indicates success.
Errors:
- EBADF
- Invalid message queue mqdes.
Classification:
See also:
mq_close(), mq_open(), mq_receive(), mq_send(), mq_setattr()
mq, mqueue in the Utilities Reference
semaphore & timedsem
#include <pasync.h>

class semaphore {
    semaphore(int initvalue);
    void wait();
    void post();
    void signal();  // alias for post()
}

class timedsem {
    timedsem(int initvalue);
    bool wait( [ int milliseconds ] );
    void post();
    void signal();
}
Semaphore is a special helper object with very simple logic which is typically used to synchronize the execution of concurrent threads. A semaphore object can be considered as an integer value which has one additional feature: if its value is 0, an attempt to decrement this value will cause the calling thread to "hang" until some other thread increments it. "Hanging" on the semaphore means entering effective wait state and consuming no or little CPU time, depending on the operating system.
One example showing the use of semaphores is when one thread needs to send data (e.g. through a buffer) to another thread. In multithreading environments there is no guarantee in which order two threads will come to the point where the first thread is filling the data buffer and the other thread is reading it. Therefore, these two threads need to synchronize execution at the exchange point. Semaphore's logic for this case is fairly simple: the reader thread calls wait() before reading the buffer and "hangs" if the semaphore is not yet signaled. The writer thread calls post() after filling the buffer with data and thus signals the reader thread that the data buffer is ready. This schema ensures that the data buffer will be read by the second thread only when the data is actually ready.
If the data exchange cycle is iterative, you will also have to make sure that the buffer is not filled twice before the reading thread takes the first data chunk. In this situation another semaphore should be created with reverse logic: the semaphore is set to the signaled state when the reading thread has taken the first data chunk and is ready to take the second chunk. The writing thread, in its turn, waits on this semaphore to make sure the buffer is ready for the successive data chunk.
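The two-semaphore handshake described above maps directly onto Python's threading primitives (an analogy to the PTypes API, not PTypes itself; wait() corresponds to acquire() and post() to release()):

```python
import threading

buf = []
data_ready = threading.Semaphore(0)   # posted by the writer after filling buf
buffer_free = threading.Semaphore(1)  # posted by the reader after draining buf

def writer():
    for chunk in ("first", "second"):
        buffer_free.acquire()  # wait() until the buffer may be (re)filled
        buf.append(chunk)
        data_ready.release()   # post(): data is ready

received = []
t = threading.Thread(target=writer)
t.start()
for _ in range(2):
    data_ready.acquire()       # wait() until data is ready
    received.append(buf.pop())
    buffer_free.release()      # post(): buffer may be reused
t.join()

print(received)  # -> ['first', 'second']
```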
In more complex applications when many threads need to exchange data with each other or with the main application thread, message queues can be used instead of semaphores. The message queue object itself is another example of using semaphores (please see pmsgq.cxx source file).
You can use semaphores when your application needs to limit the number of concurrently running threads of the same type. A typical web robot application, for example, creates a new thread for each download process. To limit the number of threads the application creates a semaphore with the initial value equal to the maximum allowed number of threads. Each new thread decrements the semaphore by calling wait() and then increments it with post() upon termination. If the maximum allowed number of threads is reached, the next thread calling wait() will "hang" until one of the running threads terminates and calls post().
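The thread-limiting pattern looks like this with Python's counting semaphore (an illustrative sketch; the worker body and thread counts are made up):

```python
import threading

MAX_WORKERS = 3
gate = threading.Semaphore(MAX_WORKERS)  # initial value = allowed thread count
lock = threading.Lock()
active = 0
peak = 0

def download(url):
    global active, peak
    gate.acquire()  # wait(): blocks once MAX_WORKERS downloads are running
    try:
        with lock:
            active += 1
            peak = max(peak, active)
        # ... perform the download here ...
        with lock:
            active -= 1
    finally:
        gate.release()  # post(): lets the next waiting thread proceed

threads = [threading.Thread(target=download, args=(f"job-{i}",)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(1 <= peak <= MAX_WORKERS)  # -> True: never more than MAX_WORKERS at once
```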
PTypes' semaphore object encapsulates either Windows semaphore or an implementation based on POSIX synchronization primitives. This object implements the minimal set of features common to both Windows and POSIX semaphores. Besides, semaphores can not be shared between processes on some operating systems, thus limiting the use of PTypes' semaphores to one process.
PTypes' timedsem adds timed waiting feature to the simple semaphore. This class has an interface compatible with the simple semaphore with one additional function - wait(int) with timer. The reason this feature is implemented in a separate class is that not all platforms support timed waiting. Wherever possible, PTypes uses the system's native sync objects, or otherwise uses its own implementation based on other primitives. Note that timedsem can be used both for infinitely waiting and timed waiting; it is, however, recommended to use simple semaphore if you are not going to use timed waiting.
As an example of using timedsem, see the implementation of the thread 'relaxing' mechanism in include/pasync.h and src/pthread.cxx.
semaphore::semaphore(int initvalue) constructs a semaphore object with the initial value initvalue.
void semaphore::wait() decrements the semaphore's value by 1. wait() can enter an effective wait state if the value becomes -1, in which case the thread will "hang" until some other thread increments the value by calling post().
void semaphore::post() increments the semaphore's value by 1. post() can release some other thread waiting for the same semaphore if its value was -1.
void semaphore::signal() is an alias for post().
timedsem::timedsem(int initvalue) constructs a semaphore object with an interface fully compatible with (but not inherited from) semaphore. This class has one additional method for timed waiting (see below).
bool timedsem::wait( [ int milliseconds ] ) decrements the semaphore's value by 1 and enters effective wait state if the value becomes -1. Unlike simple wait() this function will 'wake' and return if the time specified in milliseconds has elapsed, in which case the function returns false. If the semaphore was signaled with post() or signal() before the time elapses this function returns true. If milliseconds is omitted or is -1 the function will wait infinitely, like simple wait().
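The true/false semantics of timed waiting can be illustrated with Java's Semaphore.tryAcquire(timeout, unit), which behaves like timedsem::wait(int): it returns false when the time elapses and true when the semaphore was signaled first. The 50 ms timeouts below are arbitrary:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class TimedWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore sem = new Semaphore(0);  // starts unsignaled, like timedsem(0)

        // Nobody has posted yet, so this call times out and returns false
        boolean got = sem.tryAcquire(50, TimeUnit.MILLISECONDS);
        System.out.println("first wait: " + got);   // first wait: false

        sem.release();                     // like post()/signal()

        // The semaphore was signaled, so this call returns true right away
        got = sem.tryAcquire(50, TimeUnit.MILLISECONDS);
        System.out.println("second wait: " + got);  // second wait: true
    }
}
```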
See also: thread, mutex, rwlock, trigger, msgqueue, Examples | http://www.melikyan.com/ptypes/doc/async.semaphore.html | crawl-001 | refinedweb | 857 | 52.7 |
Here I begin what will be a massive Java Video Tutorial. I start with all you need to start writing Java programs.
I then cover just about anything you'd want to know about Java's primitive data types. I cover declaration statements, expression statements, all the types, and how to convert between them.
This tutorial starts out slow, but by the end you will be a Java expert! Make sure you print out the code below to help you learn more easily.
I promise watching this Java video tutorial from beginning to end will teach you everything there is to know about Java. I hope you enjoy it.
To make me extra happy, feel free to share it
Code From Video
// Here I'm defining a new class (Blueprint) named HelloWorld
// public tells Java that this should be available to all other classes
// classes are blueprints used to design objects that contain attributes (variables) and methods (functions)
// HelloWorld is just what you named your program. That allows you to reference it later
// { is the opening brace that surrounds the code used by HelloWorld

public class HelloWorld {

    // public allows other classes to use this function
    // static means that only a class can call for this function to execute
    // void states that this function doesn't return any values after it is done executing
    // main is required in every Java program. This function is always executed first
    // Every main function must accept an array of string objects

    // Class variables must start with static if you want to access them with any other methods in the class
    static String randomString = "String to print";

    // Constant variables are defined with the word final
    static final double PINUM = 3.1415929;

    public static void main(String[] args) {

        // System.out is an object that outputs information
        // println is a function that prints to the screen whatever you provide between braces
        // "Hello World" is a string of characters. Strings must be surrounded with quotes
        // Every statement ends with a semicolon ;
        System.out.println("Hello World");

        // Variable names are case sensitive. Age is not the same as age.
        // Variables must begin with a letter and contain numbers, _, or $
        // You must declare all variables before you can use them with a data type

        /* You can use any variable name except for
         * Java's reserved keywords */

        // This is a declaration statement
        // integerOne is a local variable to the main function. It can only be accessed in main
        int integerOne = 22;

        int integerTwo = integerOne + 1;

        // This is an expression statement
        // White space has no meaning in Java, aside from variables and keywords
        integerTwo = integerOne + 3;

        System.out.println(integerTwo);

        // Java's Primitive Types

        byte bigByte = 127; // Minimum value -128 Maximum value 127

        short bigShort = 32767; // Minimum value -32768 Maximum value 32767

        int bigInt = 2147483647; // Minimum value -2147483648 Maximum value 2147483647

        long bigLong = 9223372036854775807L; // Minimum value -9223372036854775808L

        float bigFloat = 3.14F; // You must end a float with an F

        double bigDouble = 3.1234567890D; // The D is not required with doubles

        System.out.println(Float.MAX_VALUE); // Float is precise to 6 decimal places
        System.out.println(Double.MAX_VALUE); // Double is precise to 15 decimal places

        boolean trueOrFalse = true; // Booleans are True or False, not 1 or 0

        // A char will accept a number or a character surrounded by apostrophes
        char randomChar = 65; // Character Code for A is 65. Minimum value 0 Maximum value 65535
        char anotherChar = 'A';

        System.out.println(randomChar);

        // chars can also contain escaped characters
        char backSpace = '\b';
        char formFeed = '\f';
        char lineFeed = '\n';
        char carriageReturn = '\r';
        char horizontalTab = '\t';
        char doubleQuote = '\"';
        char singleQuote = '\'';
        char backSlash = '\\';

        // A string contains a series of characters
        String randomString = "I'm just a random";
        String anotherString = "string";

        // You combine strings with a +
        String combinedString = randomString + ' ' + anotherString;

        System.out.println(combinedString);

        // How to convert any other type to a string
        String byteString = Byte.toString(bigByte);
        String shortString = Short.toString(bigShort);
        String intString = Integer.toString(bigInt);
        String longString = Long.toString(bigLong);
        String floatString = Float.toString(bigFloat);
        String doubleString = Double.toString(bigDouble);
        String booleanString = Boolean.toString(trueOrFalse);
        String charString = Character.toString(randomChar); // You don't need to do this

        System.out.println(charString);

        // Can't do this because char is a primitive data type
        // System.out.println(randomChar.getClass());

        // You can do this because String is an object
        System.out.println(charString.getClass());

        // You use casting to convert from one primitive type to another
        // If you convert from a number that is too big the largest possible value will be
        // used instead
        double aDoubleValue = 3.1456789;
        int doubleToInt = (int) aDoubleValue;

        System.out.println(doubleToInt);

        /* To cast to other primitive types just proceed with the conversion to type
         * ie (byte) (short) (long) (double)
         * (float) & (boolean) & (char) don't work.
         * (char) stays as a number instead of a character */

        // Use parseInt to convert a string into an integer
        int stringToInt = Integer.parseInt(intString);

        /* Other parse functions
         * parseShort, parseLong, parseByte, parseFloat, parseDouble, parseBoolean
         * There is no reason to parse a Character */

    } // You must provide a closing brace } so Java knows when the function has ended

}
awesome tutorial, please provide more videos as soon as possible..
Next week I plan to make a video a day while on vacation. It should be fun, or at least educational 🙂
Your tutorials are terrific and most inspiring. It would be most beneficial to follow the videos with either subtitles or a transcript of the accompanying lecture (audio input from you) as I am severely hearing impaired in both ears. I hope you don’t mind this humble request as your comments in the code for each lesson are usable during revision.
Thank you very much 🙂 I have captioning on YouTube, but I don’t know how well it works. I’ve recently been asked to translate the videos into other languages, but I’m not sure how to do that or how to make the captioning better. I’ll look into whether YouTube allows me to change that. Thank you for the request and I’ll see what I can do
Thank you for your helpful & continued advice. You are a gem to look into this request further. I’ve tried the caption option on YouTube, unfortunately it doesn’t work at all for me. Apart from resorting to your on line help with understanding the code in the brilliant lessons, would it be possible to increase & expand on your commented explanations in the comments in the code for each lesson. I remain greatful for your most kind consideration of my request. Thank you very much.
I could put a translator on my site if you think that would help? The only reason why I haven’t is because I thought Google automatically translates sites into any language. I’m not sure if they do that in your country though. I’m sorry, but I’m not very well traveled and know very little about other countries. If there are any tools I can add to help you I definitely will add them
I’m from the London, UK and can only use English, but would much appreciate any help with computerspeak, as it can be as daunting as ancient Greek! On a tangent, may I please request your learned advice and help about availability of Video tutorials for computer hardware? I look forward to hearing from you. Thank you very much, once again.
Are you looking for a tutorial on simulating speech? I’d love to do something on that topic. Image recognition would be more exciting yet and I plan on covering that in the future when I explore AI. I have been planning a tutorial on electronics for over a year. I was going to start of with basic components and work my way up into a functioning machine with an OS and everything. I just haven’t been able to fit it in. It doesn’t seem like many people are interested in that for some reason? I have all of the equipment and hope to cover that soon
I wish to thank you for your continuing and enthusiatic help, encouragement and inspiration with regards to my requests for help with image recognition and simulating speech. It is so wonderful that this is associated with your own exploration of AI. I must say that I am really looking forward to your planned tutorial on electronics, which would work all the way up into a functioning machine with an OS. I do hope you get the required time and resources to fit it in to your demanding schedule. I hope that this endeavour appeals and more people express their interest. In the meanwhile, thanks for all your well intended help.
Thank you for the kind words 🙂 I really want to cover electronics and will get to it soon. Sometimes I get a bit overwhelmed by requests. Always know that I’m doing my best to fulfill everyones requests.
Thank You Sir!!!! for such a g8 tutorial!!!!! terrific!!!
You’re very welcome 🙂 I’m very happy that you enjoyed it!
Thank you very much Derek.
This new series came at the right time.
After covering almost all the web programming languages, you made the right choice to move to Java.
So thank you Derek, see you in the next tutorial.
Yes I’ve covered most everything in regards to web specific languages. I do want to go back and cover Ruby. I’ll probably do that after I’m done with Java. Then I’m going to start covering other areas of knowledge like I mentioned before. I hope you like the new direction
Hello, your tutorials are as always interesting. I don't know whether my question is related to Java or not, but I just wanted to ask whether you know how to create something like the website dollar2rupee.net; it's a dollar-to-rupee exchange rate website. Can you help out? Thanks
Thank you 🙂 A tool like that is easy to make with JavaScript. I teach you most of what you need in this JavaScript Tutorial You may have to look at the others that proceed it depending on how good you are at JavaScript. I hope that helps
Thank you for your response. If possible, try to make a tutorial series for advanced Java also; I never found a tutorial series for this anywhere.
What do you want to see me cover in regards to advanced java? If you give me a list I’ll incorporate it into the tutorial. Thanks
Hello, sorry for answering in an old topic. I would like to see tutorials about Hibernate/Spring. But first of all about JUnit, and testing.
Thanks for being my teacher ! Have a nice day.
Those tutorials are in the works. I will definitely cover them as soon as I finish up Android.
Awesome, I am really looking forward to them 😀 Regards from Poland.
Hi,
I installed Eclipse as shown in yor tutorial. When I want to create a new file.java I cannot save it. I do not see any path as in your tutorial. Do I have to create a project first or did I do something wrong with the settings? Thanks.
Did you create a new workspace when you first started Eclipse? You also have to make sure Java is highlighted in the upper right hand corner in Eclipse. I have everything saved in the Package Explorer – src – default package folder. I hope that helps
Thanks for your reply. It actually worked when I made a Java Project first. With this I was able to create and save the java file. I did create a workspace.
Great! I figured it was just something silly
Great tutorial on java! Can you make a tutorial on else, if, for while,arrays and loops.
Thanks
Absolutely I’ll cover conditional statements, loops, OOP, Arrays, Strings, Collections, functions, Threads, Networking, regex, Swing, Streams, Databases, Servlets, etc. It is going to be a huge tutorial 🙂
I am taking Java programming in school and I want to learn the basic concepts before I start taking my classes. Looking forward to the videos.
Thanks!
I’ll cover more than just the basics. I plan to cover conditional statements, loops, OOP, Arrays, Strings, Collections, functions, Threads, Networking, regex, Swing, Streams, Databases, Servlets, etc.
I hope you find them useful. They start off kind of boring, but get interesting very quick
if possible please provide tutorial on servlet and jsp.
I will cover servlets and a bunch of other things in regards to Java. I always do my best to create complete tutorials
Hey,
These tutorials are very helpful.. I will appreciate if you could upload tutorials for JSP, Servlets, Spring and Hibernate…
Thanks !!!
I plan on making many more java videos. I’m glad you like them. Java is kind of a hard language to teach. I’m doing my best
i have a question, final keyword is used to make something constant, ie.(we cannot change it), but what is this mean that “we cannot change it” if i change the value of the final variable, in the program/code, i can do that, so what is “we cannot change the value of final variable”.
You can’t change a final variable if it is a primitive data type. You could however change the value of a non-primitive. The only thing that’s final about a non-primitive is its reference in memory. I haven’t really gotten to advanced stuff like this, so if this doesn’t make sense stay tuned for the future videos
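A quick sketch of that difference (the variable names below are just for illustration): a final primitive can never be reassigned, while a final reference can still have its object modified:

```java
import java.util.ArrayList;
import java.util.List;

public class FinalDemo {
    public static void main(String[] args) {
        final int answer = 42;
        // answer = 43;  // won't compile: a final primitive can never be reassigned

        final List<String> names = new ArrayList<>();
        names.add("Sue");   // fine: the object a final reference points to can change
        names.add("Bob");
        // names = new ArrayList<>();  // won't compile: the reference itself is final

        System.out.println(answer); // 42
        System.out.println(names);  // [Sue, Bob]
    }
}
```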
okay…thanks…
It would be great if you could set some small assignments in Java so people can test their skills.
Thanks
Java beginner
The best advice I can give you is to study the video and then try to recreate everything out of your head. I’ll look into making assignments because I’ve received that request many times.
Please admin, I am studying Java
and just asking about applications which contain big numbers
that could not be defined in an int, long or double.
Is there a BigInteger in Java, or how can I handle them?
Thank you very much for your help
You are looking for BigInteger and BigDecimal. Click the links for more information on them. I hope that helps
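A quick sketch of BigInteger in action (the values below are arbitrary); it grows past the limits of the primitive types:

```java
import java.math.BigInteger;

public class BigNumbers {
    public static void main(String[] args) {
        // long stops at 9223372036854775807, but BigInteger grows as needed
        BigInteger big = BigInteger.valueOf(Long.MAX_VALUE);
        BigInteger bigger = big.add(BigInteger.ONE);
        System.out.println(bigger);  // 9223372036854775808

        // 50! is far beyond the range of any primitive type
        BigInteger factorial = BigInteger.ONE;
        for (int i = 2; i <= 50; i++) {
            factorial = factorial.multiply(BigInteger.valueOf(i));
        }
        System.out.println(factorial);
    }
}
```

BigDecimal works the same way for arbitrary-precision decimal values.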
Great videos! I enjoyed your SQL videos; they helped me get a B+ in my last class. Now I am watching your Java series to supplement my reading.
Great I’m glad you like them. I’m making a pretty neat Java tutorial that shows you how to load SQL query results into a JTable. It is going up today
i love it
The best tutorial channel on YouTube.
I have a question though. My heart and soul is broken with Eclipse.
I’m trying to run the HelloWorld code in Eclipse and all I get is a “Launch Error – Editor does not contain a main type”.
Please help me before I throw my laptop out the window!
Gary.
Thank you 🙂 I made a tutorial on how to install Eclipse and Java here Install Java 1.7 Eclipse
As per your error, you probably saved the program in the wrong folder. You need to have the Package Explorer panel open in Eclipse. If you don’t see it do this: Click Window – Show View – Package Explorer
Now that you have it on your eclipse screen click on the folder it has in it. You’ll see another folder named src. Inside of src, you’ll see default package. Save your java code in default package.
If you don’t see anything in Package Explorer, right click in Package Explorer and Click New Project.
That should do it. If not tell me
Derek
Hi Derek excellent explanation.I was not having an idea in programming lang.after wacthing your series i am pretty much aware of the lang.great work derek.
Thank you. Always feel free to ask any questions and to leave tutorial requests
Hello team, everything can't be spoon-fed. You all must start practicing what Derek has taught. His tutorials are sufficient for a person to become a good programmer, but you have to put in your own effort after watching his video series.
I am new to this, so I don't know which program to download?
I use Eclipse Classic to write Java programs in this tutorial. It works and looks the same on every OS.
thanks
You’re very welcome
Great Videos! I watched all of the java videos you posted. Could you please post some videos for replacing token values in the file1.properties and file2.properties? i.e
HOST_NAME = ;
DB =
I cover buffered reader and writer in Java Video Tutorial 32 and binary streams in Java Video Tutorial 33.
I’ll be covering J2EE and Java networking very soon in their own tutorials. Thanks for the request 🙂
Another question: I created a class and later deleted it along with the folder in Windows Explorer. Now it gives me an error whenever I try to create that class. It says class type error when I try to give the name to that class. How do I fix this problem? I'm using Eclipse.
Also, in my last comment the less-than and greater-than symbols removed the information that I typed for $Host, i.e. "replace this token" between a less-than and a greater-than symbol;
You have to replace the class to get your program to work. Maybe it is still in the recycle bin?
Thanks for the tutorials. I have watched some of your other videos, but i have just started watching this series. Still these seem to be some of the most well made video tutorials out there. thnx for all the work i’m sure u put into these.
You’re very welcome 🙂 I guess after making almost 500 videos I’m getting better. I wish i would have incorporated testing into the videos from the beginning like the khan academy does. I’m always working to make them better and I hope they continue to live up to your kind words. Thank you for the support
After about two weeks of web surfing for concise to-the-point tutorials I discovered your videos on YouTube. I watched a majority of your Python 2.7 tutorials and can say that 50% of my knowledge was from those videos (the other 50% acquired over time and through my job). The speed and clarity of your videos stuck in my mind, and when I decided a few days ago I’d like to learn Java, I didn’t think twice about where to begin. Nothing else compares to these videos. You are the Master Tutor!
Thank you so much for taking the time to offer these free videos. I have found that I absolutely love programming, and am excited to learn more and more, most of which will be with your videos. Don’t stop!
Hi Justin, I’m very happy that you enjoy my videos. Thank you for taking the time to say you like them. I have many more in the works. I recently stopped because I have a terrible cold, but I’ll start making them again next week. Always feel free to ask questions.
Derek
To cast to other primitive types just proceed with the conversion to type
ie (byte) (short) (long) (double)
(float) & (boolean) & (char) don’t work.
(char) stays as a number instead of a character
What do you mean by float, boolean and char not working?
Do you mean that casting double to float, boolean and char is not working, or that casting any type to float, boolean and char is also not working?
I have tested it in the following way and it shows that float, boolean and char can be cast. Would you mind explaining further?
double aDoubleValue = 3.1456789;
float doubleToInt = (float)aDoubleValue;
System.out.println(doubleToInt);
// It displays 3.5679
double aDoubleValue = 3.1456789;
int doubleToInt = (int)aDoubleValue;
System.out.println(doubleToInt);
// It displays 3
double aDoubleValue = 3.1456789;
char doubleToInt = (char)aDoubleValue;
System.out.println(doubleToInt);
// It displays a question mark inside a box
Also, what do you mean by "(char) stays as a number instead of a character"? Can you provide an example?
Got a question: I was searching for the solution to the problem “(Counting occurrence of numbers) Write a program that reads the integers
between 1 and 100 and counts the occurrences of each. Assume the input ends
with 0.”
GOT this solution from a site … forgot the address
import java.util.Scanner;

public class OccurenceOfNumbers {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        int[] counts = new int[100];
        System.out.print("Enter the integers between 1 and 100: ");
        System.out.print("\nEnter 0 to terminate the program and count numbers: ");
        int number = input.nextInt();
        while (number != 0) {
            counts[(number - 1)] += 1;
            number = input.nextInt();
        }
        for (int i = 0; i < 100; i++)
            if (counts[i] > 0)
                System.out.println(i + 1 + " occurs " + counts[i] + (
                        counts[i] == 1 ? " time" : " times"));
    }
}
Please Sir can u explain me this part
int number = input.nextInt();
while (number != 0) {
counts[(number - 1)] += 1;
number = input.nextInt();
}
Why number = input.nextInt(); is done twice …. HELP!!!
nextInt() gets the integer entered on the keyboard.
while that number isn’t equal to 0
In the count array, the index with a value 1 less than the number entered at the keyboard has its value incremented by 1
Ask for the next integer entered in the keyboard.
Does that help?
Thank you Sir ….. U are always ready to help …. Meet u when i again get stuck …. 😀
You’re very welcome 🙂
Hi – I need some help.
I have filled an array with random numbers, found the minimum value and know what position it is in. Now I need to switch the number in that position with the number in position 0, left sentinal. I don’t have my code with me to include here but is there anyway you can help me by explaining how I would go about this? Thanks – love your tutorial!
Here I cover Java sorting algorithms. I think this is the code you want:
int temp = theArray[indexOne];
theArray[indexOne] = theArray[indexTwo];
theArray[indexTwo] = temp;
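Putting it together, a minimal sketch (assuming an int array; the names are illustrative) that finds the minimum and swaps it into position 0 using that three-line swap with indexOne = 0 and indexTwo = the minimum's position:

```java
import java.util.Arrays;

public class SwapMin {

    // Finds the smallest element and swaps it into position 0
    static void moveMinToFront(int[] theArray) {
        int minIndex = 0;
        for (int i = 1; i < theArray.length; i++) {
            if (theArray[i] < theArray[minIndex]) {
                minIndex = i;  // remember where the smallest value lives
            }
        }
        // the three-line swap
        int temp = theArray[0];
        theArray[0] = theArray[minIndex];
        theArray[minIndex] = temp;
    }

    public static void main(String[] args) {
        int[] nums = {7, 3, 9, 1, 5};
        moveMinToFront(nums);
        System.out.println(Arrays.toString(nums)); // [1, 3, 9, 7, 5]
    }
}
```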
How do you recommend watching your videos and practicing to really understand, remember, and apply all of the stuff from the Java tutorial through Android, which you are doing now?
For best results
1. Print the code
2. Take notes on the code as you watch the videos
3. Make programs of your own using what you learn in each video
4. Send me any questions you have
5. Have Fun 🙂
Your videos are awesome. especially the way u cover the subject with in a given time is nice
Thank you 🙂
Thank you Sir, I'm from India, a B.Tech student. I'm watching all your videos and I'm very, very thankful to you. Your videos are so helpful; it's much, much easier to learn Java through your videos. Excellent teaching, sir. Really awesome videos and code. Thanks again sir 🙂
Thank you very much for the nice message 🙂 I’m very happy to be able to help you. I’m more popular in India then I am in my own country. That was very surprising. Always feel free to ask questions. I will do my best to help – Derek
First of all thank you so much for this wonderful job !!
I have a global request about all the java tutorials : Could you please make a post where I (and others) would be able to download an archive which would contains all the .java code ? ( Because yes I’m a little bored to copy-paste all files from the differents posts into notepad and save it as .java :p )
Thank you in advance !
PS : little advice : I looked for a guest book page where I could thank you but I didn’t find it, so maybe you could create one 😉
PPS : Sorry for my worst english !
bad* english :p
You’re very welcome 🙂 How would you want the download to be structured? Do you want every tutorial available in one download? Understand that I sometimes won’t have everything so heavily commented in the code I have on my computer because I never planned on doing that.
Your English is fabulous! As a person that grew up in a home where many languages were spoken, I'm aware of how hard it is to learn English. I'll see what I can do about the guest book
Derek
Yes, that’s what I was thinking about ! Every classes files in a .zip or .rar 🙂 But if you don’t have the online code as files on your computer don’t worry, I’ll continue my copy-paste…
Thank you for my english 🙂
I'll see if I can dig up all that code. Sorry, I never expected to make so many videos on just Java. Your English is perfect 🙂
Ok thank you. Do not apologize, I’m happy you made so many videos 🙂
Hi bro, your tutorials are fabulous. I have seen many tutorials online from premium faculties, but none as impressive; your tutorials are excellent and brilliant. I don't have words to praise your style.
A humble request to upload web services (SOAP, REST) in your Java tutorials.
Thanks so much
Thank you very much 🙂 I’m glad you are enjoying them. I will cover restful services in java after my Android tutorial, but I have been thinking about covering them either using Ruby, or PHP right now in the next few weeks. I need them for the Android tutorial. Tell me if you are in any way interested in that? Thank you
Which book would you recommend for Java? Thanks in advance. And yeah, I love your videos; they're easy to grasp.
Head First books are my favorite
Hey, your tutorials are amazingly good. Can I start the Android development videos now, or do I need more Java? Basically, how much do I need to know to begin Android development? I am a computer engineering student and have programming experience
in C, JavaScript, PHP, and Python.
Thank you 🙂 I’d say give the Android tutorials a try and if you find that you need help I have a Java video tutorial that can help.
If you know all the other languages you shouldn’t have any trouble. Always feel free to ask questions
Which basic languages do I need to know for Android development?
Please reply
You basically just need Java for most everything. My Java tutorial will teach you everything you need. Parts 1 through 19, minus 8 and 10 will teach you all you need. Feel free to ask questions
Hi Derek,
First of all your tutorial is great I really like to appreciate
your effort you put in.
Derek, can you provide us all a tutorial on collections and mapping in detail, and how they can be used to carry database and class objects?
Thanks,
Shafin
Your video is awesome. I really appreciate you put your effort on it.
Can you provide a tutorial on collections and mapping, how they can be used with database objects and class objects, and multithreading in detail?
Thank you very much 🙂 I cover most of these topics in this Java tutorial and also in my algorithms tutorials. I’m planning an advanced algorithms tutorial that should be out soon
Nice videos,
keep moving forward.
But please, what theme did you use for the frames?
I use netbeans
Thank you very much 🙂 I’m using Eclipse in this series, but you can definitely use NetBeans
Derek Sir,
Great fan of yours. You teach better than any teacher I have come across. I want to make my fundamentals strong and clear Java certification by oracle. I believe watching your Java series and of-course advanced topic I can shape up my skills and take career to new height.
Any other MANTRA of yours is there then please teach us.
Great People Leave Great Stories Behind.
Thanks A LOT
Sameer
Thank you very much Sameer 🙂 You’re very kind. I’ll continue to do my best to cover these topics to the best of my ability. Thank you for stopping by my website.
Thank you very much for your tutorial ,really thank you
You are very welcome 🙂 Thanks for taking the time to tell me you liked it
Hi, your tutorials are splendid, but I have a little problem: after some time the video fades and it's hard to see what you are doing. I don't know why. My appeal is for you to correct that, thanks. Big up. I give you my thumbs up.
Thank you 🙂 It may help if you print out the code and take notes while pausing the video. That seems to help many people. Also since the notes are in your own words they tend to make you remember concepts later. I hope that helps.
Hi admin,
Thank you very much for all your videos, it is so informative for the beginners. I been watching your java,html,css and mysql video and lot that I can learn from your videos. I would also like to raise a request to please cover advance java video tutorials as well. I hope you would consider request in spite of your busy schedule.
You’re very welcome 🙂 What topics are you looking for in regards to advanced java tutorials? I have covered Design Patterns, Object Oriented Design, Refactoring, Java Algorithms and now Android with Java. I’m always interested in any other ideas for tutorials. Sorry if you missed these. My site isn’t that organized.
Thank you
Derek
Hi Derek,
I think the statement you have mentioned in this page in “Code from Video” section needs to be changed because this is correct only for static methods.
“// Class variables must start with static if you want to access them with any other methods in the class”
changed to
“// Class variables must start with static if you want to access them with static methods in the class”
Thank you for pointing that out. I’ll try to reword that
Derek,
I have been learning Java with heavy fire the past two months in aspirations to be a Java developer. I’ve read MIT courseware, several books on Java, and even downloaded a few “Learn Java” apps for quick reference while I am on my “Thinking Seat”! But no one else has broken down the language so eloquently as you have. I’m only on the 15th video so far, but you have explained everything perfectly. I have a greater understanding for Java now and hope to be through the videos by the end of the month!
YOU ARE AMAZING SIR!
David,
Thank you very much 🙂 I’m happy to hear that I was able to help so much. I continue covering java in my Design Patterns, Object Oriented Design, Refactoring, Java Algorithms and Android tutorials if you are interested in learning more. Always feel free to ask questions. I do my best to help.
Derek
Oh I will learn however much I can gobble up! I was a little stumped in regular expressions (especially the phone number part) but I managed to work my way through it. Will you be offering any tutoring in C++ as well? Would it be an easy jump after I’ve learned Java?
Yes I will definitely cover C++ after I finish covering C
Can u please do a tutorial on the seperate things you need to learn to make a minecraft mod??If it is not already there??
Here is a tutorial on setting everything up Minecraft Modding Guide.
This tutorial is pretty good after you learn the basics of Java.
How about Scala, Ruby on Rails, OpenGL and systems programming (Linux commands, bash programming etc.)? Is there any chance you'll make tutorials for these?
I’m thinking about covering Ruby and OpenGL very soon. Thanks for the requests 🙂
Hi Derek,
I just wanted to know the difference between single quotes ('') and double quotes ("") and how the JVM handles them;
Thank you
Hi,
Use single quotes for characters and double quotes for Strings
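A small example of the difference (the values are arbitrary): a char is really a number under the hood, so arithmetic and concatenation behave differently:

```java
public class QuoteDemo {
    public static void main(String[] args) {
        char letter = 'A';   // single quotes: exactly one character (a char)
        String word = "A";   // double quotes: a String, even with one character

        // A char is a number, so 'A' + 1 is arithmetic on the character code 65
        System.out.println(letter + 1);  // 66

        // A String concatenates instead
        System.out.println(word + 1);    // A1
    }
}
```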
A zillion thanks for your great tutorial; it is Time Saver + Money Saver = Make Life Easy! But I have a small request… I am just wondering, can you make all that code available in PDF format so we don't need to print, which saves money and trees 🙂 and, more importantly, so we can upload it to an iPad and read it while we watch the videos. Thanks once again. God bless you!
A Zillion you’re welcomes 🙂 I’m glad you enjoyed it.
Hello.
Can I have codes for all of your java videos, if its possible??
All the videos and the code is available on this one page Java Video Tutorial. I hope they help.
Hello Derek,
I have a question on servlet.
In the below web.xml content, Why / is used to mention login.jsp and loginpage. What is the meaning?
/login.jsp
/loginpage
Sham
/login.jsp
/loginpage
I’m sorry, but I’m not sure which specific tutorial you are referring to.
Derek,
I finished your Java videos back in October, and thanks to you I obtained my OCAJP Certification in November and looking for more I can do in the programming world. Thank you, sir!
Cheers,
Dave
Hi Dave,
Congratulations on getting the certification. I’m very happy if I was able to help 🙂
Derek
i want to learn java as i am beginner .. basic syntax how to make logics and how to use oop in java please help me..recommend me good book.so that i start learning fast & easily.
Q:;you make videos on books lesson?
The best book is Heads First Java. Feel free to ask questions. i do my best to help.
I want to learn java . tell me the book from which as a beginner i learn java,. Also oop in java …. i want to learn how to use 2 classes (1 with main method and other is simple)
The book everyone seems to like is Heads First Java. It is fun.
what can i do after watching your video…. i want to learn fast .
Is this possible that we can sort the CHAR in String?
I’m not sure what you are trying to do. Please provide an example and i’ll try to help
Sorry i want to combine two CHAR in one String Like this:
String randowString = "I'm a Developer";
String anotherString = "and name is bee ground";
String andanotherString = randowString + ' ' + anotherString;
System.out.println(andanotherString);
This is a Strings Variable. so i want to asked can we combine the CHAR like STEING
Check out my String Builder tutorial. I think that is what you need.
Mr Banas,
I am a freelance developer and I read your work just to learn something new or to refresh or for the fun. I give you a 10 🙂 and I like very much your philosophy of live. You have a big fan…
(I have just listened to you q&a session that I liked very much)
Could you notice the following ?
1)
// A char will except a number or a character surrounded by apostrophes
should be replaced by
// A char will accept a number or a character surrounded by apostrophes
2)
// A string conatins a series of characters
should be replaced by
// A string contains a series of characters
3)
// Every main function must except an array of string objects
should be replaced by
// Every main function must accept an array of string objects
See you next time ! Rudy (from Belgium)
Thank you 🙂 I corrected the errors. Now you know why I have never made a video on grammar.
By the way if you ever go in Belgium please tell me and I will buy you a drink 🙂 or something you like to reward you for the good work ! You deserve it !
Thank you 🙂 I’d love to go to Belgium some day. My wife would love it if I left the house every once in a while as well.
Can you recommend any programming books?also can you make any video to solve programming exercises?because mostly programmers
have no idea on problem solving.
The Heads First Java book is very popular. I made a tutorial on how to turn a problem into finished code. It is called Object Oriented Design. My UML tutorial might be very useful as well for that. I hope they help
I have a question, I have been watching your tutorials through and I have been deleting all the code when I went to the next video and realized that this code would be great for reference with the comments in the future because i’m new to java, so I was wondering if there was any way to download all of the files into eclipse and put them into a project so that when I am coding in the future I will have a big reference for the code.
Thanks – Bobby.
Yes I structured them so that they could be used as references. There really isn’t any way for me to set it up though so you can download everything directly into Eclipse. Sorry about that
Thank you, very clear and useful way of explaining the fundamentals!!
Thank you 🙂
Hi Derek Banas,Thanks a LOT for the videos and for the Great Explanation.You are our Life and Career Saver.Please keep up the Good Work.Please continue to work on these projects which help people like us.May GOD BLESS YOU… 🙂
Thank you for the kind message 🙂 May God bless you as well.
Hi I love your videos. But I have a question. If I learn Java, will it help me in making Android Apps? If not, what programming language is used to make Android Apps? Which software do you use to film your videos? Lastly, what programming language do you need to know to make IOS apps?
Yes Java is used to make Android apps. iOS is normally programmed using Objective C.
your java video tutorials are very good for newbie’s…can you please make some for java serial communication???
Thnx in advance….
Thank you 🙂 I’ll be making some new java tutorials soon.
Awesome tutorial, how do you know a lot about a lot?
Thank you 🙂 I actually don’t know a lot about many things. Almost everything I know has something to do with programming. I know almost nothing about pop culture, geography, history, etc. My wife laughs at me because I don’t know things like who the mayor or governor is 🙂
Hi! I am trying to build a web browser with additional fuctionality like progress bar , bookmarks, back, forward, menu,save, print … is it possible to build it with only java programming or not ? I have the ideas but unable to implement them … Can you help me or suggest me anything?
I covered how to make a basic browser using JEditorPane.
You may like JxBrowser. It allows you to embed a browser.
These are the best. I would like to ask about it but, it seems to literally cover anything I can think of…
Thank you 🙂 Always feel free to ask questions.
Derek? On those java tutorials that you uploaded, does it covered everything about java?
It covers a vast majority of Java. I even make a pretty complex game by the end. You can see the whole thing here Java video tutorial. I don’t know how I did it, but it is the most popular Java video tutorial online as far as I know.
Java Video Tutorials. You wrote in Caption:
“This tutorial starts out slow, but by the end you will be a Java expert!”
I think your tutorial is fast and superb and I hope as you have said to become an expert even before I reached last video.
Thanks a lot for such a nice effort and Please carry on this good work…
Thank you 🙂 I did my best to teach everything about Java. I also have tutorials on object oriented design, algorithms, design patterns, refactoring, UML, etc. to help you become an expert.
Hi Derek I’m just starting out and have no idea what’s going on in video’s,books I just can’t grasp it, so I want to ask, I see you recommend Head first java, is this a good book for someone who has never programmed in their live
Simply the best, turns out it’s like really easy. Your tutorials are by far the best out there, you talented man really talented well done!
Thank you 🙂 I’m glad you find them useful.
Yes the head first books are very good. Have fun 🙂 | http://www.newthinktank.com/2011/12/java-video-tutorial/ | CC-MAIN-2021-31 | refinedweb | 6,823 | 73.68 |
A thread that processes pending crash reports in a CrashReportDatabase by uploading them or marking them as completed without upload, as desired. More...
#include "handler/crash_report_upload_thread.h"
A thread that processes pending crash reports in a CrashReportDatabase by uploading them or marking them as completed without upload, as desired.
A producer of crash reports should notify an object of this class that a new report has been added to the database by calling ReportPending().
Independently of being triggered by ReportPending(), objects of this class can periodically examine the database for pending reports. This allows failed upload attempts for reports left in the pending state to be retried. It also catches reports that are added without a ReportPending() signal being caught. This may happen if crash reports are added to the database by other processes.
Constructs a new object.
Informs the upload thread that a new pending report has been added to the database.
This method may be called from any thread.
Starts a dedicated upload thread, which executes ThreadMain().
This method may only be be called on a newly-constructed object or after a call to Stop().
Implements crashpad::Stoppable.
Stops the upload thread.
The upload thread will terminate after completing whatever task it is performing. If it is not performing any task, it will terminate immediately. This method blocks while waiting for the upload thread to terminate.
This method must only be called after Start(). If Start() has been called, this method must be called before destroying an object of this class.
This method may be called from any thread other than the upload thread. It is expected to only be called from the same thread that called Start().
Implements crashpad::Stoppable. | https://crashpad.chromium.org/doxygen/classcrashpad_1_1CrashReportUploadThread.html | CC-MAIN-2019-13 | refinedweb | 283 | 65.83 |
When you using Vue.js functional components you have nothing except render function and it's context with some parameters.
I always prefer to use functional component instead of a stateful component in my shared components library at work and it works fine when I don't need state. But sometimes I need
mounted and
beforeDestroy hooks in a stateless component.
The problem
Let's look at the example. We need a simple modal component that renders some overlay and block with passed children. Something like this:
export default { functional: true, render (h, context) { return ( <div class="modal"> <div class="modal__overlay" /> <div class="modal__content">{context.children}</div> </div> ); }, };
I didn't provide any styles but it should look like bootstrap modal. Now if the window has y scroll opened modal will be moving with page scroll. To create better UX we should disable scroll when modal is opened and enable it again when modal is closed. When using usual components you can do it in
mounted and
befoDestroy hooks:
export default { // ... mounted () { document.body.style.overflow = 'hidden'; }, beforeDestroy () { document.body.style.overflow = null; }, // ... };
But how to implement this logic without hooks? The answer is: using
<transition> component with
appear prop!
The solution
<transition> component has its own hooks for entering and leaving, so we can just wrap all our component in it and define hooks. The
appear prop guarantees that our "mounted" hook will be fired when component mounts.
const mounted = () => { document.body.style.overflow = 'hidden'; }; const beforeDestroy = () => { document.body.style.overflow = null; }; export default { functional: true, render (h, context) { return ( <transition appear <div class="modal__overlay" /> <div class="modal__content">{context.children}</div> </div> </transition> ); }, };
That's it! Now we have some hooks in a functional component.
You can also improve your UI by implementing transition animations.
Photo by Suad Kamardeen on Unsplash
Discussion (0) | https://dev.to/denisinvader/mounted-and-beforedestroy-hooks-in-vuejs-functional-components-7bi | CC-MAIN-2021-43 | refinedweb | 304 | 59.3 |
How do I add color to my svg image in react
react svg tutorial
react-svg style tag
react svg sprite
react svg not rendering
change svg color css
react svg hover
referencing svg in react
I have a list of icons. I want to change the icons colors to white. By default my icons are black. Any suggestions guys?
I normally use
'fill: white' in my css but now that I am doing this in React... it's not working!
import React from 'react' import menuIcon from '../img/menu.svg'; import homeIcon from '../img/home.svg'; <ul> <li> <a href="/" className="sidebar__link"> <img src={menuIcon} </a> </li> <li> <a href="/" className="sidebar__link"> <img src={homeIcon} </a> </li> </ul> .sidebar__icon { fill: #FFFFF; width: 3.2rem; height: 3.2rem; }
use your svg as a component, then all the svg goodness is accessable:
const MenuIcon = (props) =>( <svg xmlns="" fill={props.fill} className={props.class}></svg> )
and in your render
<li> <a href="/" className="sidebar__link"> <MenuIcon fill="white"/> </a> </li>
Import svg and edit fill color : reactjs, and I want to change its fill color. path/to/img.svg I dont really need the extra money but should i freelance react development outside of work and maybe I’ve recently made a gallery for ManyPixels displaying SVG illustrations available to help people with their projects, kinda similar to Undraw in its behaviour.. Using it is pretty simple: you browse the gallery, pick color that match your brand, click it, download it as PNG or SVG (with updated colors!) and voilà!
You can change css of svg by accessing g or path. Starting from create-react-app version 2.0
import React from 'react' import {ReactComponent as Icon} from './home.svg'; export const Home = () => { return ( <div className='home'> <Icon className='home__icon'/> </div> ); };
.home__icon g { fill: #FFFFF; } .home__icon path { stroke: #e5e5e5; stroke-width: 10px; }
How to work with SVG on react, We can change the fill color while adding hover effect and so much more. In this tutorial, we will be making a component that will render SVG react-native-svg (Rendered Image color change black) Avaz Aliyev. Loading Unsubscribe from Avaz Aliyev? Sign in to add this video to a playlist. Sign in. Report
If you are using
create-react-app you can use as below
import { ReactComponent as Logo } from './logo.svg'; const App = () => ( <div> {/* Logo is an actual React component */} <Logo /> </div> );
the most important part not forgetting importing them as
ReactComponent as Logo
Create React app documentation
Creating an SVG Icon System with React, SVG has the ability to add title and ARIA tags, which provide a huge boon to They're an image that you're positioning with font styles. 'Nuff said SVGs offer a navigable DOM to animate parts of an icon, or colorize sections. On the native side, React Native ART translates paths to Core Graphics on iOS and Canvas on Android. But it has only one class called Path and it can be a bit tricky to convert all your svg icons. We recommend to use react-native-svg package. It was branched from React Native ART and provides an HTML SVG-like interface.
Is there any example how to style svg icon (fill path color)? · Issue , Hello, I have following svg icon: import React from 'react' import { Svg, Path } As you can see I tried a lot of variations how to fill the svg path. I want to override default icon color and also be able to change fill color on hover.
In my case, I needed to delete this part of my svg file to make it work:
<style type="text/css"> .st0{fill:#8AC6F4;} </style>
an then this works
import { ReactComponent as Logo } from './logo.svg'; const App = () => ( <div> {/* Logo is an actual React component */} <Logo /> </div> );
Flexible Icons with React and SVG - NYT Open, If it's an image file, it is typically a PNG with transparency. If you live in a It also means we can change the color(s) of the logo without exporting a new file. However We use React to render SVG (since it's HTML!). All of this Where does React come into all this? When using SVG in a web document, you have two options. Either render the SVG document as is, or use it as source in the img tag. The preferable option is to use it as is, since SVG in the image tag is rendered as an image and cannot be manipulated beyond the css styles for the image tag.
Using SVG Icons Components in React, So changing colors and strokes on SVG can be done all via CSS. Either render the SVG document as is, or use it as source in the img tag. I've detailed a few ways to create React components to manipulate SVG images.
How to use SVGs in React, SVG is a vector graphics image format based on XML. It is like rendering text compared to rendering pixels and colors for other image formats. First, we install the file-loader library $ npm install file-loader --save-dev , this.
How to use SVG in React? The styled components way., Does our hero: Create separate SVG's for each needed color and put a bunch of img src width height combos all over the place? Use one SVG.
- Did you try to use the style attribute ?
<img src={homeIcon} style={{fill:"#FFFFFF"}}
- No i did not use...I am using sass inn an external file
- Anyway you can use online style attribute
- Thanks, but do i have to create component for all the my icons? I have about 5 icons to include.
- That's the way to manipulate the SVG DOM, yes. The only other option is to use css filters on the <img>
- Anyway, it would be kind of you and maybe helpful for others with the same question in mind if you marked this question as answered.
- Hey Bibi... I worked, I am new to react so it took a while to get it...thanks again!
- did you forget the src attribute?
- Does the component get props? Like the
fillcolor?
- @user2078023 if you want to add props on svg's you need to create a component and pass as props there.
- @user2078023 yes, but do not forget to set
currentColorin the svg file like <svg fill="currentColor">
- @user2078023 The answer is yes, But make sure your SVG tag has handled the color with default fill props. if the color is handle with any inner element in the svg then there is no use.
- This example doesn't show how to add color to the SVG, only how to import it as a React component
- A final, but big con is accessibility - this approach removes all text in an svg that is readable (tooltip, description, and title), thereby making it useless in readers for users with accessibility challenges.
- Thanks for the point, I am always trying to keep a custom title whenever it's required. | https://thetopsites.net/article/54519654.shtml | CC-MAIN-2021-25 | refinedweb | 1,166 | 71.95 |
The downfall of HTML Imports is upon us (to me)
Meghan Denny
・2 min read
[Deprecation] Styling master document from stylesheets defined in HTML Imports is deprecated, and is planned to be removed in M65, around March 2018.
I just read this in my console today after my Chrome browser just updated to M61. And it's the saddest news all I've read all day. The next step in the downfall of HTML Imports. And I can't believe it's happening because it is the perfect delivery method for CSS/JS libraries, frameworks, and of course, Custom Elements.
I first noticed the beginning of the end when I saw this:
HTML Modules #645
Now that JavaScript modules are on the verge of widespread browser support, we should think about an HTML module system that plays well with it. HTML Imports doesn't have a mechanism for exporting symbols, nor can they be imported by JavaScript, however its loading behavior is quite compatible with JavaScript modules.
@dglazkov sketched out a proposal for HTML modules here:
The main points are that, using the JavaScript modules plumbing, you can import HTML.
Either in HTML:
<script type="module" url="foo.html">
or JavaScript:
import * as foo from "foo.html";
But within the scope of that sketch there are still several questions about the specifics of exporting symbols from HTML and importing to JavaScript.
It's a proposal to make an amendment to HTML Imports to add the functionality through Javascript instead of through
<link rel="import">. While I'm not totally against the idea of being able to import
<template> elements and such inside JS, I hate the idea of it replacing the HTML way.
I love the idea of Custom Elements and it's honestly my favorite feature I've seen added since I started web dev. I have a repository dedicated to custom elements where I make a bunch. The most notable section of which is a folder with a bunch of Fluent Design inspired elements.
And the whole project can be used in one line.
<link rel="import" href="">
That one file sets some basic CSS, and imports all the other elements. However, Chrome is the only browser that has native support. Everyone else has to use a bodged polyfill because every other browser isn't even interested in implementing it for some reason.
In the end, I hope this HTML based feature stays supported in HTML.
Great writeup! If you love custom elements i'm really excited for your opinion on my new project. Requires no node, no webpack, no babel, no bower, no dependencies. Just a
<script>tag.
github.com/devpunks/snuggsi
If you'd like to visit the project I mentioned above you can do so here :D It's a WIP but I work on it all the time and use it in my own projects | https://dev.to/nektro/the-downfall-of-html-imports-is-upon-us-to-me | CC-MAIN-2019-43 | refinedweb | 481 | 69.82 |
Hello everyone,
Here is my simple bonus assignment program. The issue I met with is, the ResultTotalReceived is larger then ResultTotalSent, which violates corporation policy and exception is thrown.
The program works in this way,
1. At source side, calculate and assign the bonus according to each worker's factor (100F for worker1 and 300F for worker2 in my sample). All figures are float type.
2. Convert the float type to string and sent to another destination to store the bonus into storage;
3. The detination side will perform basic checking rules before storing the data, e.g. the total bonus assigned should not exceed the total bonus available at source side.
In my code below,
The value of ResultTotalSent is 199.321, and the value of ResultTotalReceived is 199.321045, which is larger than ResultTotalSent.
My questions is,
1. If I want to solve this issue at source side, what is the elegant way to solve this issue? Currently, my temp solution is using ToString("F2"). Any issues with this solution?
2. Why there is such issue? It is the issue of ToString of Float class -- I have this suspecision is because ResultTotalSent is precise but after ToString conversion the result and conversion back at detination side, it is not precise?
Code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace TestFloat
{
class Program
{
static void Main(string[] args)
{
float TotalBonus = 199.321F;
float Worker1 = 100F;
float Worker2 = 300F;
float Result1 = TotalBonus * Worker1 / (Worker1 + Worker2);
float Result2 = TotalBonus * Worker2 / (Worker1 + Worker2);
float ResultTotalSent = Result1 + Result2;
string Result1String = Result1.ToString();
string Result2String = Result2.ToString();
// sending to another computer using string
// received from another computer using string
string ReceivedString1 = Result1String;
string ReceivedString2 = Result2String;
float Received1 = float.Parse(ReceivedString1);
float Received2 = float.Parse(ReceivedString2);
float ResultTotalReceived = Received1 + Received2;
// sanity checking failed, since ResultTotalReceived > TotalBonus
return;
}
}
}
thanks in advance,
George | http://cboard.cprogramming.com/csharp-programming/103472-float-calculation-issue-printable-thread.html | CC-MAIN-2016-18 | refinedweb | 311 | 51.55 |
Objectives
- We keep on playing with DC motors and the Adafruit Motor Shield V1.
- We will see the foundations of making a rover turn without using a steering wheel.
- We will program the basic movements of the robot.
Bill of materials
Controlling the turn
In previous chapters we have seen the foundations of DC motors and how to use a Motor Shield to handle multiple motors simultaneously. But we have not talked about a basic question that some people often find difficult to understand: the turn.
As automobiles are part of our daily life, we easily understand the principle they use to turn via the steering wheel, and we have an intuitive idea of how they deflect the tracing direction of the wheels with respect to the direction of movement of the car. This way, the front wheels always trace the tangent to the curve of movement.
- But perhaps it is not so obvious that the two front wheels of modern cars can not turn at the same speed in the curves, if we want to maintain the stability and governance of the vehicle.
It is quite easy, and if you think about it, it is the same principle used by tractors in the field, that can make unthinkable turns for a car, or tanks and excavators that are moved by metal tracks (without steering wheel).
As cars have a single combustion engine, we have to assemble a complicated mechanical system, which distributes traction between the wheels, but in electric cars in general, and in our 4×4 Rover in particular, we have one motor per wheel, so we can control the speed of each one independently.
- This is true even in commercial hybrid vehicles, which use a combustion engine to recharge the central batteries, but they have an electric motor per wheel which allows us, apart from 4×4 traction, to do some unusual things in a traditional car.
By controlling the speed of each wheel we can avoid the need for a mechanical steering and we can make movements that would surprise a car driver quite a lot, including turning without touching the steering wheel.
The trick is precisely to spin each wheel at a different speed, so as to force the vehicle to rotate in the desired direction and with the desired radius.
If the wheels on the right side turn twice as fast as those on the left side, our Rover must turn to the left and vice versa, making a continuous turn. The smoother the turn, the less speed difference between the turns of the two sides.
The turning radius will be more or less closed, depending on the relationship between the speed of both sides. Not only we can turn the Rover but also control how close is the turn. If we stop one side completely and hold the other, the turn will be very sharp.
With four independently driven wheels we can even turn the wheels on one side in the opposite direction to the other side, and in this case the Rover will turn around its center of gravity without moving.
To govern our four-wheel drive Rover it will suffice to control the speed of rotation of the wheels, and we already know how to do this from previous chapters. So let’s put our hands to work with the sketch.
The Rover’s control program
Let’s start with some declarations, as always:
#include <AFMotor.h> AF_DCMotor Motor1(1); AF_DCMotor Motor2(2); AF_DCMotor Motor3(3); AF_DCMotor Motor4(4); int Speed = 255 ; // Defines the base speed of the Rover float P = 0.25 ; // Proportion of the turn giro
We include the Adafruit Motor library and instantiate four motor objects corresponding to the 4 motors. Then we create the Speed variable, that will contain the speed at which we want to move the Rover. This way, by varying the value of Speed, we will vary the overall speed of the vehicle. Finally we define the P variable, that sets the speed ratio between one side and the other when turning.
As we are going to make the rover turn by modifying the speed ratio of the motors, we should start by defining a function to program the speed of each wheel independently:
void SetSpeed(int s1, int s2, int s3, int s4) { Motor1.setSpeed(s1); Motor2.setSpeed(s2); Motor3.setSpeed(s3); Motor4.setSpeed(s4); }
Bear in mind that these commands only set the speed of each motors, they don’t make them spin yet. In order for the motors to turn we will have to define some functions corresponding to basic movements, such as Forward, Reverse and Stop:
void Forward() { SetVel(Speed,Speed,Speed,Speed); // Same speed to the 4 wheels Motor1.run(FORWARD) ; Motor2.run(FORWARD); Motor3.run(FORWARD); Motor4.run(FORWARD); } void Reverse() { SetVel(Speed,Speed,Speed,Speed); // Same speed to the 4 wheels Motor1.run(BACKWARD) ; Motor2.run(BACKWARD); Motor3.run(BACKWARD); Motor4.run(BACKWARD); } void Stop() { Motor1.run(RELEASE); Motor2.run(RELEASE); Motor3.run(RELEASE); Motor4.run(RELEASE); }
We have already discussed that in order to make the rover turn, it will suffice to move the wheels of each side of the Rover at different speed. We can test it by making one of the sides turn, for example, at 1/3 of the speed on the other side, using the variable P defined at the beginning.
- In a real Rover, we can modify the turning radius by reading a potentiometer, like a lever, so that as we move the lever to the right, for example, it reduces proportionally the speed of the wheels on the right.
- I leave it as an exercise as we do not think you will find it difficult. Later on we will talk about it when we see how to use a remote control to move the Rover.); }
And we just need to do a test in real motion. Let’s program the movement inside the sketch, (because we still do not have a remote control).
void loop() { Forward(); delay(2000); Reverse(); delay(2000); TurnLeft(); delay(3000); TurnRight(); delay(3000); }
Easy, isn’t it? It moves fordward, backwards, turns left and then right. Upload the sketch to the board and test how it moves.
Testing the movement of the 4×4 Rover
To move the Rover the first intention is, usually, to use the USB cable and leave it plugged to the Rover when moving, but although it seems to be a good idea, we regret to say it will not work.
- The USB connection can drive the motors without load, but you can barely move the Rover on a carpet, as the motors have the bad habit of consuming a lot of intensity and a typical USB connection can not supply more than half an Ampere, maximum.
- You will have to use batteries. You’d better use 6 batteries instead of 4, because we need at least 7V to supply 5V guaranteed (and 4 batteries x 1,5V = 6V, which is insufficient)
- In addition and to make things worse, cheap motors are usually very inefficient in terms of energy and these, believe me, are very cheap.
To test the movement we will need an battery holder and some bateries. Something like this:
The connector on the image above, fits perfectly in your Arduino (that connector that nobody uses) and avoid us to carry a hanging cable, which always ends up getting entangled due to the movement.
I can help showing you what happens if we turn the wheels on each side in opposite directions. Let’s see the sketch:
void TurnLeft() { SetSpeed( Speed, Speed, Speed, Speed) ; Motor1.run(BACKWARD) ; Motor2.run(FORWARD); Motor3.run(BACKWARD); Motor4.run(FORWARD); } void TurnRight() { SetVel( Speed, Speed, Speed, Speed) ; Motor1.run(FORWARD) ; Motor2.run(BACKWARD); Motor3.run(FORWARD); Motor4.run(BACKWARD); }
This kind of turn is a bit counter intuitive, because cars can not do it, but it works very well and causes the rover to turn around its own geometric center.
And there is only one thing left, to define a way to control the Rover 4×4 by means of a remote control system. In the next chapter we will see which options we have and which to choose.
Summary
- We keep on using the Adafruit MotorShield v1, and there is not much left to do with it to control DC motors.
- We have seen the foundations of how to make a four-wheel drive rover turn by applying different speeds to the wheels of each side.
- We have seen some basic sketches to control the turn of the robot, including turning around its center. | http://prometec.org/controlling-a-4x4-rover-robot/ | CC-MAIN-2019-13 | refinedweb | 1,441 | 64.24 |
EARLY RELEASE
Note that this is a very early-stage release, with no unit tests. Be careful if using it in production at this point.
dart_up
For a managed/hosted solution (truly "serverless"), check out! (this service does not yet exist)
Web application container for Dart servers, akin to PM/2 (Node.js).
Runs applications in isolates in the same VM, with
.packages files
used to provide dependencies. Also supports
lambdas, which are lightweight
executables run on-demand (and kept alive for a period of time), instead of
long-lived daemons.
dart_up is completely open-source, and aims to make Dart server deployment
much simpler. It's essentially a Dart-specific "serverless" container (except,
if you're self-hosting, you'll obviously need to provision a server).
Installation
To install the standalone server:
```shell
$ pub global activate dart_up
```
Usage
After installing
dart_up on your server, deploying a Dart app
is as simple as running
dart_up push <file>, which creates an
app snapshot, and pushes that, along with your
pubspec.yaml, to the
dart_up daemon. Your application will be spawned in a new isolate,
and auto-restarted on crashes/errors.
Create a Lambda
For example, consider the following
hello_lambda.dart:
import 'dart:isolate'; import 'package:dart_up/lambda.dart'; main(_, SendPort sp) { return runLambda( sp, (req) => Response.text('Hello, lambda world!')); }
Assuming you have a running
dart_up daemon, all you need to
do is
push an application snapshot:
$ dart_up push --name hello --lambda example/hello_lambda.dart Building .dart_tool/dart_up/example/hello_lambda.dill... 2.8s • hello - dead
Lambdas are
dead by default. To trigger a lambda, visit
/:name:
$ curl; echo Hello, lambda world!
Create a Daemon
If the
--lambda flag is not passed, then a daemon will created.
The given application will be started immediately, and also
started whenever
dart_up is rebooted. By default, when the
application exits (either with success, or an error), it will
be re-spawned. This functionality can be disabled by passing the
--no-auto-restart flag.
Consider this example:
import 'package:angel_framework/angel_framework.dart'; import 'package:angel_framework/http.dart'; main() async { var app = Angel(), http = AngelHttp(app); app.fallback((req, res) => 'Hello from dart_up!'); await http.startServer('127.0.0.1', 3001); print('dart_up_example listening at ${http.uri}'); }
Application Management
The following commands are available for
dart_up management
(Note: this document may not be up-to-date, especially if new
commands are added in the codebase):
$ dart_up --help Dart Web application container. Usage: dart_up <command> [arguments] Global options: -h, --help Print this usage information. Available commands: help Display help information for dart_up. kill Kills a running application. list Lists the status of all active applications within the dart_up instance. push Builds an app snapshot, and pushes it to a dart_up server. remove Kills, and removes an application from the list. serve Launch an HTTP server that manages other Dart applications. start Restarts a dead/inactive process. Run "dart_up help <command>" for more information about a command.
dart_up push example/my_server.dart will cause
dart_up to
manage an instance of this application, restarting it if it
ever crashes.
For convenience, you can override the default
--url option with the
DART_UP_URL environment variable:
$ DART_UP_URL= dart_up list
Password Authentication
dart_up supports
bcrypt-hashed passwords, and uses
Basic authentication
to ensure that external clients have access to the daemon.
dart_up can also
be configured to even required passwords for requests from
localhost.
# Set a password. Obviously, be smart about file permissions. Even though # the passwords are strongly-hashed, the *usernames* are plain text. $ dart_up password my_username ✔ Password [hidden] ‥ *********** Successfully edited file .dart_tool/dart_up/passwords. # Always require Basic authentication, even for localhost. # Otherwise, it'll only be required for external clients. $ dart_up serve --require-password # Disregard `x-forwarded-for` header, i.e. if you're not using nginx `proxy_pass`. $ dart_up serve --require-password --no-x-forwarded-for # Any so-called "client command," like `list`, `push`, etc., takes a `--basic-auth`/`-B` option. # This way, you'll be prompted for a username and password. $ dart_up list -B ✔ Username ‥ my_username ✔ Password [hidden] ‥ ***** • hello - dead
Deploying
dart_up
You more than likely don't want the
dart_up daemon to face
the Web. In fact, you might not even want it to be accessible
to other processes on the server (in which case you should
configure it for password authentication).
That being said, if the
dart_up daemon goes down, then logically,
all of the applications it's running will become inaccessible.
Therefore, you should be sure that in the case
dart_up dies,
it is immediately restarted. On Ubuntu, using
systemd is the best
way to do this.
These instructions are pretty abstract, though, because how you
deploy
dart_up is up to you. The simplest way is to just
have a single daemon, and trust all applications running in the
VM (i.e. if only your organization is using the server). In a
multi-tenant situation, though, running all clients' programs in
the same memory space is a recipe for disaster. A better solution
is to create users for each client, give each client one
separate
dart_up process, and use Unix permissions to enforce
access control and security. | https://pub.dev/documentation/up/latest/ | CC-MAIN-2020-45 | refinedweb | 849 | 58.99 |
Distances between objects.
More...
#include <hwloc.h>
relative_depth
NULL
Distances between objects.
One object may contain a distance structure describing distances between all its descendants at a given relative depth. If the containing object is the root object of the topology, then the distances are available for all objects in the machine.
The distance may be a memory latency, as defined by the ACPI SLIT specification. If so, the latency pointer will not be NULL and the pointed array will contain non-zero values.
latency
In the future, some other types of distances may be considered. In these cases, latency will of the considered objects below the object containing this distance information. | https://www.open-mpi.org/projects/hwloc/doc/v1.2.2/structhwloc__distances__s.php | CC-MAIN-2017-17 | refinedweb | 112 | 57.47 |
YOLO, or You Only Look Once, is one of the most widely used deep learning based object detection algorithms out there. In this tutorial, we will go over how to train one of its latest variants, YOLOv5, on a custom dataset. More precisely, we will train the YOLO v5 detector on a road sign dataset. By the end of this post, you shall have yourself an object detector that can localize and classify road signs.
You can also run this code on a free GPU using the Gradient Notebook for this post.
Before we begin, let me acknowledge that YOLOv5 attracted quite a bit of controversy when it was released over whether it's right to call it v5. I've addressed this a bit at the end of this article. For now, I'd simply say that I'm referring to the algorithm as YOLOv5 since it is what the name of the code repository is.
My decision to go with YOLOv5 over other variants is due to the fact that it's the most actively maintained Python port of YOLO. Other variants like YOLO v4 are written in C, which might not be as accessible to the typical deep learning practitioner as Python.
With that said, let's get started.
This post is structured as follows.
- Set up the Code
- Download the Data
- Convert the Annotations into the YOLO v5 Format
- YOLO v5 Annotation Format
- Testing the annotations
- Partition the Dataset
- Training Options
- Data Config File
- Hyper-parameter Config File
- Custom Network Architecture
- Train the Model
- Inference
- Computing the mAP on test dataset
- Conclusion... and a bit about the naming saga
Bring this project to life
Set up the code
We begin by cloning the YOLO v5 repository and setting up the dependencies required to run YOLO v5. You might need
sudo rights to install some of the packages.
In a terminal, type:
git clone
I recommend you create a new
conda or a
virtualenv environment to run your YOLO v5 experiments as to not mess up dependencies of any existing project.
Once you have activated the new environment, install the dependencies using pip. Make sure that the pip you are using is that of the new environment. You can do so by typing in terminal.
which pip
For me, it shows something like this.
/home/ayoosh/miniconda3/envs/yolov5/bin/pip
It tells me that the pip I'm using is of the new environment called
yolov5 that I just created. If you are using a pip belonging to a different environment, your python would be installed to that different library and not to the one you created.
With that sorted, let us go ahead with the installation.
pip install -r yolov5/requirements.txt
With the dependencies installed, let us now import the required modules to conclude setting up the code.
import torch from IPython.display import Image # for displaying images import os import random import shutil from sklearn.model_selection import train_test_split import xml.etree.ElementTree as ET from xml.dom import minidom from tqdm import tqdm from PIL import Image, ImageDraw import numpy as np import matplotlib.pyplot as plt random.seed(108)
Download the Data
For this tutorial, we are going to use an object detection dataset of road signs from MakeML.
It is a dataset that contains road signs belonging to 4 classes:
- Traffic Light
- Stop
- Speed Limit
- Crosswalk
The dataset is a small one, containing only 877 images in total. While you may want to train with a larger dataset (like the LISA Dataset) to fully realize the capabilities of YOLO, we use a small dataset in this tutorial to facilitate quick prototyping. Typical training takes less than half an hour and this would allow you to quickly iterate with experiments involving different hyperparamters.
We create a directory called
Road_Sign_Dataset to keep our dataset now. This directory needs to be in the same folder as the
yolov5 repository folder we just cloned.
mkdir Road_Sign_Dataset cd Road_Sign_Dataset
Download the dataset.
wget -O RoadSignDetectionDataset.zip
Unzip the dataset.
unzip RoadSignDetectionDataset.zip
Delete the unneeded files.
rm -r __MACOSX RoadSignDetectionDataset.zip
Convert the Annotations into the YOLO v5 Format
In this part, we convert annotations into the format expected by YOLO v5. There are a variety of formats when it comes to annotations for object detection datasets.
Annotations for the dataset we downloaded follow the PASCAL VOC XML format, which is a very popular format. Since this a popular format, you can find online conversion tools. Nevertheless, we are going to write the code for it to give you some idea of how to convert lesser popular formats as well (for which you may not find popular tools).
The PASCAL VOC format stores its annotation in XML files where various attributes are described by tags. Let us look at one such annotation file.
# Assuming you're in the data folder cat annotations/road4.xml
The output looks like the following.
<annotation> <folder>images</folder> <filename>road4.png</filename> <size> <width>267</width> <height>400</height> <depth>3</depth> </size> <segmented>0</segmented> <object> <name>trafficlight</name> <pose>Unspecified</pose> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>20</xmin> <ymin>109</ymin> <xmax>81</xmax> <ymax>237</ymax> </bndbox> </object> <object> <name>trafficlight</name> <pose>Unspecified</pose> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>116</xmin> <ymin>162</ymin> <xmax>163</xmax> <ymax>272</ymax> </bndbox> </object> <object> <name>trafficlight</name> <pose>Unspecified</pose> <truncated>0</truncated> <occluded>0</occluded> <difficult>0</difficult> <bndbox> <xmin>189</xmin> <ymin>189</ymin> <xmax>233</xmax> <ymax>295</ymax> </bndbox> </object> </annotation>
The above annotation file describes a file named
road4.jpg that has a dimensions of
267 x 400 x 3. It has 3
object tags which represent 3 bounding boxes. The class is specified by the
name tag, whereas the details of the bounding box are represented by the
bndbox tag. A bounding box is described by the coordinates of its top-left (
x_min,
y_min) corner and its bottom-right (
xmax,
ymax) corner.
YOLO v5 Annotation Format
YOLO v5 expects annotations for each image in form of a
.txt file where each line of the text file describes a bounding box. Consider the following image.
The annotation file for the image above looks like the following:
There are 3 objects in total (2
persons and one
tie). Each line represents one of these objects. The specification for each line is as follows.
- One row per object
- Each row is
class
x_center
y_center
width
heightformat.
- Box coordinates must be normalized by the dimensions of the image (i.e. have values between 0 and 1)
- Class numbers are zero-indexed (start from 0).
We now write a function that will take the annotations in VOC format and convert them to a format where information about the bounding boxes are stored in a dictionary.
# Function to get the data from XML Annotation def extract_info_from_xml(xml_file): root = ET.parse(xml_file).getroot() # Initialise the info dict info_dict = {} info_dict['bboxes'] = [] # Parse the XML Tree for elem in root: # Get the file name if elem.tag == "filename": info_dict['filename'] = elem.text # Get the image size elif elem.tag == "size": image_size = [] for subelem in elem: image_size.append(int(subelem.text)) info_dict['image_size'] = tuple(image_size) # Get details of the bounding box elif elem.tag == "object": bbox = {} for subelem in elem: if subelem.tag == "name": bbox["class"] = subelem.text elif subelem.tag == "bndbox": for subsubelem in subelem: bbox[subsubelem.tag] = int(subsubelem.text) info_dict['bboxes'].append(bbox) return info_dict
Let us try this function on an annotation file.
print(extract_info_from_xml('annotations/road4.xml'))
This outputs:
{'bboxes': [{'class': 'trafficlight', 'xmin': 20, 'ymin': 109, 'xmax': 81, 'ymax': 237}, {'class': 'trafficlight', 'xmin': 116, 'ymin': 162, 'xmax': 163, 'ymax': 272}, {'class': 'trafficlight', 'xmin': 189, 'ymin': 189, 'xmax': 233, 'ymax': 295}], 'filename': 'road4.png', 'image_size': (267, 400, 3)}
We now write a function to convert information contained in
info_dict to YOLO v5 style annotations and write them to a
txt file. In case your annotations are different than PASCAL VOC ones, you can write a function to convert them to the
info_dict format and use the function below to convert them to YOLO v5 style annotations.
# Dictionary that maps class names to IDs class_name_to_id_mapping = {"trafficlight": 0, "stop": 1, "speedlimit": 2, "crosswalk": 3} # Convert the info dict to the required yolo format and write it to disk def convert_to_yolov5(info_dict): print_buffer = [] # For each bounding box for b in info_dict["bboxes"]: try: class_id = class_name_to_id_mapping[b["class"]] except KeyError: print("Invalid Class. Must be one from ", class_name_to_id_mapping.keys()) # Transform the bbox co-ordinates as per the format required by YOLO v5 b_center_x = (b["xmin"] + b["xmax"]) / 2 b_center_y = (b["ymin"] + b["ymax"]) / 2 b_width = (b["xmax"] - b["xmin"]) b_height = (b["ymax"] - b["ymin"]) # Normalise the co-ordinates by the dimensions of the image image_w, image_h, image_c = info_dict["image_size"] b_center_x /= image_w b_center_y /= image_h b_width /= image_w b_height /= image_h #Write the bbox details to the file print_buffer.append("{} {:.3f} {:.3f} {:.3f} {:.3f}".format(class_id, b_center_x, b_center_y, b_width, b_height)) # Name of the file which we have to save save_file_name = os.path.join("annotations", info_dict["filename"].replace("png", "txt")) # Save the annotation to disk print("\n".join(print_buffer), file= open(save_file_name, "w"))
Now we convert all the
xml annotations into YOLO style
txt ones.
# Get the annotations annotations = [os.path.join('annotations', x) for x in os.listdir('annotations') if x[-3:] == "xml"] annotations.sort() # Convert and save the annotations for ann in tqdm(annotations): info_dict = extract_info_from_xml(ann) convert_to_yolov5(info_dict) annotations = [os.path.join('annotations', x) for x in os.listdir('annotations') if x[-3:] == "txt"]
Testing the annotations
Just for a sanity check, let us now test some of these transformed annotations. We randomly load one of the annotations and plot boxes using the transformed annotations, and visually inspect it to see whether our code has worked as intended.
Run the next cell multiple times. Every time, a random annotation is sampled.
random.seed(0) class_id_to_name_mapping = dict(zip(class_name_to_id_mapping.values(), class_name_to_id_mapping.keys())) def plot_bounding_box(image, annotation_list): annotations = np.array(annotation_list) w, h = image.size plotted_image = ImageDraw.Draw(image) transformed_annotations = np.copy(annotations) transformed_annotations[:,[1,3]] = annotations[:,[1,3]] * w transformed_annotations[:,[2,4]] = annotations[:,[2,4]] * h transformed_annotations[:,1] = transformed_annotations[:,1] - (transformed_annotations[:,3] / 2) transformed_annotations[:,2] = transformed_annotations[:,2] - (transformed_annotations[:,4] / 2) transformed_annotations[:,3] = transformed_annotations[:,1] + transformed_annotations[:,3] transformed_annotations[:,4] = transformed_annotations[:,2] + transformed_annotations[:,4] for ann in transformed_annotations: obj_cls, x0, y0, x1, y1 = ann plotted_image.rectangle(((x0,y0), (x1,y1))) plotted_image.text((x0, y0 - 10), class_id_to_name_mapping[(int(obj_cls))]) plt.imshow(np.array(image)) plt.show() # Get any random annotation file annotation_file = random.choice(annotations) with open(annotation_file, "r") as file: annotation_list = file.read().split("\n")[:-1] annotation_list = [x.split(" ") for x in annotation_list] annotation_list = [[float(y) for y in x ] for x in annotation_list] #Get the corresponding image file image_file = annotation_file.replace("annotations", "images").replace("txt", "png") assert os.path.exists(image_file) #Load the image image = Image.open(image_file) #Plot the Bounding Box plot_bounding_box(image, annotation_list)
Great, we are able to recover the correct annotation from the YOLO v5 format. This means we have implemented the conversion function properly.
Partition the Dataset
Next we partition the dataset into train, validation, and test sets containing 80%, 10%, and 10% of the data, respectively. You can change the split values according to your convenience.
# Read images and annotations images = [os.path.join('images', x) for x in os.listdir('images')] annotations = [os.path.join('annotations', x) for x in os.listdir('annotations') if x[-3:] == "txt"] images.sort() annotations.sort() # Split the dataset into train-valid-test splits train_images, val_images, train_annotations, val_annotations = train_test_split(images, annotations, test_size = 0.2, random_state = 1) val_images, test_images, val_annotations, test_annotations = train_test_split(val_images, val_annotations, test_size = 0.5, random_state = 1)
Create the folders to keep the splits.
!mkdir images/train images/val images/test annotations/train annotations/val annotations/test
Move the files to their respective folders.
#Utility function to move images def move_files_to_folder(list_of_files, destination_folder): for f in list_of_files: try: shutil.move(f, destination_folder) except: print(f) assert False # Move the splits into their folders move_files_to_folder(train_images, 'images/train') move_files_to_folder(val_images, 'images/val/') move_files_to_folder(test_images, 'images/test/') move_files_to_folder(train_annotations, 'annotations/train/') move_files_to_folder(val_annotations, 'annotations/val/') move_files_to_folder(test_annotations, 'annotations/test/')
Rename the
annotations folder to
labels, as this is where YOLO v5 expects the annotations to be located in.
mv annotations labels cd ../yolov5
Training Options
Now, we train the network. We use various flags to set options regarding training.
img: Size of image. The image is a square one. The original image is resized while maintaining the aspect ratio. The longer side of the image is resized to this number. The shorter side is padded with grey color.
batch: The batch size
epochs: Number of epochs to train for
data: Data YAML file that contains information about the dataset (path of images, labels)
workers: Number of CPU workers
cfg: Model architecture. There are 4 choices available:
yolo5s.yaml,
yolov5m.yaml,
yolov5l.yaml,
yolov5x.yaml. The size and complexity of these models increases in the ascending order and you can choose a model which suits the complexity of your object detection task. In case you want to work with a custom architecture, you will have to define a
YAMLfile in the
modelsfolder specifying the network architecture.
weights: Pretrained weights you want to start training from. If you want to train from scratch, use
--weights ' '
name: Various things about training such as train logs. Training weights would be stored in a folder named
runs/train/name
hyp: YAML file that describes hyperparameter choices. For examples of how to define hyperparameters, see
data/hyp.scratch.yaml. If unspecified, the file
data/hyp.scratch.yamlis used.
Data Config File
Details for the dataset you want to train your model on are defined by the data config
YAML file. The following parameters have to be defined in a data config file:
train,
test, and
val: Locations of train, test, and validation images.
nc: Number of classes in the dataset.
names: Names of the classes in the dataset. The index of the classes in this list would be used as an identifier for the class names in the code.
Create a new file called
road_sign_data.yaml and place it in the
yolov5/data folder. Then populate it with the following.
train: ../Road_Sign_Dataset/images/train/ val: ../Road_Sign_Dataset/images/val/ test: ../Road_Sign_Dataset/images/test/ # number of classes nc: 4 # class names names: ["trafficlight","stop", "speedlimit","crosswalk"]
YOLO v5 expects to find the training labels for the images in the folder whose name can be derived by replacing
images with
labels in the path to dataset images. For example, in the example above, YOLO v5 will look for train labels in
../Road_Sign_Dataset/labels/train/.
Or you can simply download the file.
!wget -P data/
Hyperparameter Config File
The hyperparameter config file helps us define the hyperparameters for our neural network. We are going to use the default one,
data/hyp.scratch.yaml. This is what it looks like.
# Hyperparameters for COCO training from scratch # python train.py --batch 40 --cfg yolov5m.yaml --weights '' --data coco.yaml --img 640 --epochs 300 # See tutorials for hyperparameter evolution lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3) lrf: 0.2 # final OneCycleLR learning rate (lr0 * lrf) momentum: 0.937 # SGD momentum/Adam beta1 weight_decay: 0.0005 # optimizer weight decay 5e-4 warmup_epochs: 3.0 # warmup epochs (fractions ok) warmup_momentum: 0.8 # warmup initial momentum warmup_bias_lr: 0.1 # warmup initial bias lr box: 0.05 # box loss gain cls: 0.5 # cls loss gain cls_pw: 1.0 # cls BCELoss positive_weight obj: 1.0 # obj loss gain (scale with pixels) obj_pw: 1.0 # obj BCELoss positive_weight iou_t: 0.20 # IoU training threshold anchor_t: 4.0 # anchor-multiple threshold # anchors: 3 # anchors per output layer (0 to ignore) fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5) hsv_h: 0.015 # image HSV-Hue augmentation (fraction) hsv_s: 0.7 # image HSV-Saturation augmentation (fraction) hsv_v: 0.4 # image HSV-Value augmentation (fraction) degrees: 0.0 # image rotation (+/- deg) translate: 0.1 # image translation (+/- fraction) scale: 0.5 # image scale (+/- gain) shear: 0.0 # image shear (+/- deg) perspective: 0.0 # image perspective (+/- fraction), range 0-0.001 flipud: 0.0 # image flip up-down (probability) fliplr: 0.5 # image flip left-right (probability) mosaic: 1.0 # image mosaic (probability) mixup: 0.0 # image mixup (probability)
You can edit this file, save a new file, and specify it as an argument to the train script.
Custom Network Architecture
YOLO v5 also allows you to define your own custom architecture and anchors if one of the pre-defined networks doesn't fit the bill for you. For this you will have to define a custom weights config file. For this example, we use the the
yolov5s.yaml. This is what it looks like.
# parameters nc: 80 #, C3, [128]], [-1, 1, Conv, [256, 3, 2]], # 3-P3/8 [-1, 9, C3, [256]], [-1, 1, Conv, [512, 3, 2]], # 5-P4/16 [-1, 9, C3, [512]], [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32 [-1, 1, SPP, [1024, [5, 9, 13]]], [-1, 3, C3, [1024, False]], # 9 ] # YOLOv5 head head: [[-1, 1, Conv, [512, 1, 1]], [-1, 1, nn.Upsample, [None, 2, 'nearest']], [[-1, 6], 1, Concat, [1]], # cat backbone P4 [-1, 3, C3, [512, False]], # 13 [-1, 1, Conv, [256, 1, 1]], [-1, 1, nn.Upsample, [None, 2, 'nearest']], [[-1, 4], 1, Concat, [1]], # cat backbone P3 [-1, 3, C3, [256, False]], # 17 (P3/8-small) [-1, 1, Conv, [256, 3, 2]], [[-1, 14], 1, Concat, [1]], # cat head P4 [-1, 3, C3, [512, False]], # 20 (P4/16-medium) [-1, 1, Conv, [512, 3, 2]], [[-1, 10], 1, Concat, [1]], # cat head P5 [-1, 3, C3, [1024, False]], # 23 (P5/32-large) [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5) ]
To use a custom network, create a new file and specify it at run time using the
cfg flag.
Train the Model
We define the location of
train,
val and
test, the number of classes (
nc) and the names of the classes. Since the dataset is small, and we don't have many objects per image, we start with the smallest of pretrained models
yolo5s to keep things simple and avoid overfitting. We keep a batch size of
32, image size of
640, and train for 100 epochs. If you have issues fitting the model into the memory:
- Use a smaller batch size
- Use a smaller network
- Use a smaller image size
Of course, all of the above might impact the performance. The compromise is a design decision you have to make. You might want to go for a bigger GPU instance as well, depending on the situation.
We use the name
yolo_road_det for our training. The tensorboard training logs can be found at
runs/train/yolo_road_det. If you can't access tensorboard logs, you can setup a
wandb account so that the logs are plotted over on your wandb account.
Finally, run the training:
!python train.py --img 640 --cfg yolov5s.yaml --hyp hyp.scratch.yaml --batch 32 --epochs 100 --data road_sign_data.yaml --weights yolov5s.pt --workers 24 --name yolo_road_det
This might take up to 30 minutes to train, depending on your hardware.
Inference
There are many ways to run inference using the
detect.py file.
The
source flag defines the source of our detector, which can be:
- A single image
- A folder of images
- Video
- Webcam
...and various other formats. We want to run it over our test images so we set the
source flag to
../Road_Sign_Dataset/images/test/.
- The
weightsflag defines the path of the model which we want to run our detector with.
confflag is the thresholding objectness confidence.
nameflag defines where the detections are stored. We set this flag to
yolo_road_det; therefore, the detections would be stored in
runs/detect/yolo_road_det/.
With all options decided, let us run inference over our test dataset.
!python detect.py --source ../Road_Sign_Dataset/images/test/ --weights runs/train/yolo_road_det/weights/best.pt --conf 0.25 --name yolo_road_det
best.pt contains the best-performing weights saved during training.
We can now randomly plot one of the detections.
detections_dir = "runs/detect/yolo_road_det/" detection_images = [os.path.join(detections_dir, x) for x in os.listdir(detections_dir)] random_detection_image = Image.open(random.choice(detection_images)) plt.imshow(np.array(random_detection_image))
Apart from a folder of images, there are other sources we can use for our detector as well. The command syntax for doing so is described by the following.
python detect.py --source 0 # webcam file.jpg # image file.mp4 # video path/ # directory path/*.jpg # glob rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa # rtsp stream rtmp://192.168.1.105/live/test # rtmp stream # http stream
Computing the mAP on the test dataset
We can use the
test file to compute the mAP on our test set. To perform the evaluation on our test set, we set the
task flag to
test. We set the name to
yolo_det. Things like plots of various curves (F1, AP, Precision curves etc) can be found in the folder
runs/test/yolo_road_det. The script calculates for us the Average Precision for each class, as well as mean Average Precision.
!python test.py --weights runs/train/yolo_road_det/weights/best.pt --data road_sign_data.yaml --task test --name yolo_det
The output of looks like the following:
Fusing layers... Model Summary: 224 layers, 7062001 parameters, 0 gradients, 16.4 GFLOPS test: Scanning '../Road_Sign_Dataset/labels/test' for images and labels... 88 fo test: New cache created: ../Road_Sign_Dataset/labels/test.cache test: Scanning '../Road_Sign_Dataset/labels/test.cache' for images and labels... Class Images Targets P R mAP@.5 all 88 126 0.961 0.932 0.944 0.8 trafficlight 88 20 0.969 0.75 0.799 0.543 stop 88 7 1 0.98 0.995 0.909 speedlimit 88 76 0.989 1 0.997 0.906 crosswalk 88 23 0.885 1 0.983 0.842 Speed: 1.4/0.7/2.0 ms inference/NMS/total per 640x640 image at batch-size 32 Results saved to runs/test/yolo_det2
And that's pretty much it for this tutorial. In this tutorial, we trained YOLO v5 on a custom dataset of road signs. If you want to play around with the hyperparameters, or if you want to train on a different dataset, you can grab the Gradient Notebook for this tutorial as a starting point.
Conclusion... and a bit about the naming saga.
As promised earlier, I want to conclude my article with giving my two cents about the naming controversy YOLO v5 created.
YOLO's original developer abandoned its development owing to concerns about his research being used for military purposes. After that, multiple sets of people have come up with improvements to YOLO.
Afterwards, YOLO v4 was released in April 2020 by Alexey Bochkovskiy and others. Alexey was perhaps the most suitable person to do a sequel to YOLO, since he had been the long-time maintainer of the second most popular YOLO repo, which unlike the original version, also worked on Windows.
YOLO v4 brought a host of improvements, which helped it greatly outperform YOLO v3. But then Glenn Jocher, maintainer of the Ultralytics YOLO v3 repo (the most popular python port of YOLO) released YOLO v5, the naming of which drew reservations from a lot of members of the computer vision community.
Why? Because in a traditional sense, YOLO v5 doesn't bring any novel architectures / losses / techniques to the table. There is yet to be a research paper released for YOLO v5.
It, however, provides massive improvements in terms of how quickly people can integrate YOLO into their existing pipelines. The foremost thing about YOLO v5 is that it's written in PyTorch / Python, unlike the original versions from v1-v4, which are in C. This alone makes it much more accessible to people and companies working in the deep learning space.
Moreover, it introduces a clean way of defining experiments using modular config files, mixed precision training, fast inference, better data augmentation techniques, etc. In a way, it would be fine to call it v5 if we viewed YOLO v5 as a piece of software, rather than an algorithm. Maybe that's what Glenn Jocher had in mind when he named it v5. Nevertheless, many folks from the community, including Alexey, have vehemently disagreed and pointed out that it's wrong to call it YOLO v5 since performance-wise, it is still inferior to YOLO v4.
Here is a post that gives you a more detailed account of the controversy.
What's your take on this? Do you think YOLO v5 should be called so? Do let us know in the comments or simply tweet at @hellopaperspace.
Add speed and simplicity to your Machine Learning workflow today | https://blog.paperspace.com/train-yolov5-custom-data/ | CC-MAIN-2022-27 | refinedweb | 4,136 | 58.38 |
Cookies
When a client visits a Web site, the server for that Web site may write a cookie to the client’s machine. This cookie can be accessed by servers within the Web site’s domain at a later time. Cookies are usually small text files used to maintain state information for a particular client. State information may contain a username, password or specific information that might be helpful when a user returns to a Web site. Many Web sites use cookies to store a client’s postal zip code. The zip code is used when the client requests a Web page from the server. The server may send the current weather information or news updates for the client’s region. The scripts in this section write cookie values to the client and retrieve the values for display in the browser.
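Before looking at the figures, here is a minimal sketch of how a server-side script can read a cookie sent by the browser. In the CGI protocol, the client's cookies arrive in the HTTP_COOKIE environment variable; the sketch simulates that variable, and it uses http.cookies, the Python 3 name of the Cookie module used later in this section:

```python
import os
from http.cookies import SimpleCookie  # module is named "Cookie" in Python 2

# In a real CGI request, the Web server sets HTTP_COOKIE from the
# client's Cookie header; we simulate such a request here.
os.environ[ "HTTP_COOKIE" ] = "zipcode=01810; username=jsmith"

# Parse the raw cookie string into individual name/value morsels
cookie = SimpleCookie( os.environ[ "HTTP_COOKIE" ] )

print( cookie[ "zipcode" ].value )   # 01810
print( cookie[ "username" ].value )  # jsmith
```

A script could then use the recovered zip code exactly as described above, for example to select region-specific weather content.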
Figure 28.19 is an XHTML form that asks the user to enter three values. These values are passed to the fig28_20.py script, which writes the values in a client-side cookie.
Figure 28.20 is the script that retrieves the form values from fig28_19.html and stores those values in a client-side cookie. Line 6 imports the Cookie module. This module provides capabilities for reading and writing client-side cookies.
Lines 9–15 define function printContent, which prints the content header and XHTML DOCTYPE string to the browser. Line 17 retrieves the form values by using class FieldStorage from module cgi. We handle the form values with a try/except/ else block. The try block (lines 19–22) attempts to retrieve the form values. If the user has not completed one or more of the form fields, the code in this block raises a KeyError exception. The exception is caught in the except block (lines 23–28), and the program calls function printContent, then outputs an appropriate message to the browser.
<!DOCTYPE html PUBLIC
"-//W3C//DTD XHTML 1.0 Transitional//EN"
"DTD/xhtml1-transitional.dtd">
<!-- Fig. 28.19: fig28_19.html -->
<html xmlns = "http://www.w3.org/1999/xhtml" xml:lang = "en">
<head>
<title>Writing a cookie to the client computer</title>
</head>
<body style = "background-image: images/back.gif;
font-family: Arial,sans-serif; font-size: 11pt" >
<span style = "font-size: 15pt; font-weight: bold">
Click Write Cookie to save your cookie data.
</span><br />
<form method = "post" action = "/cgi-bin/fig28_20.py">
<span style = "font-weight: bold">Name:</span><br />
<input type = "text" name = "name" /><br />
<span style = "font-weight: bold">Height:</span><br />
<input type = "text" name = "height" /><br />
<span style = "font-weight: bold">Favorite Color</span><br />
<input type = "text" name = "color" /><br />
<input type = "submit" value = "Write Cookie" />
</form>
</body>
</html>
Fig. 28.19 XHTML form to get cookie values from user
The code in the else block (lines 29–68) executes after the program successfully retrieves all the form values. Line 32 specifies the format for the expiration value of the cookie. The format characters in this string are defined by the time module. For a complete list of time tokens and their meanings, consult the time module’s documentation.
#!C:\Python\python.exe
# Fig. 28.20: fig28_20.py
# Writing a cookie to a client's machine
import cgi
import Cookie
import time
def printContent():
print "Content-type: text/html"
print """
<html xmlns = "http://www.w3.org/1999/xhtml" xml:lang = "en">
<head><title>Cookie values</title></head>"""
form = cgi.FieldStorage() # get form information
try: # extract form values
name = form[ "name" ].value
height = form[ "height" ].value
color = form[ "color" ].value
except KeyError:
printContent()
print """<body><h3>You have not filled in all fields.
<span style = "color: blue"> Click the Back button,
fill out the form and resubmit.<br /><br />
Thank You. </span></h3>"""
else:
# construct cookie expiration date and path
expirationFormat = "%A, %d-%b-%y %X %Z"
expirationTime = time.localtime( time.time() + 300 )
expirationDate = time.strftime( expirationFormat,
expirationTime )
path = "/"
# construct cookie contents
cookie = Cookie.Cookie()
cookie[ "Name" ] = name
cookie[ "Name" ][ "expires" ] = expirationDate
cookie[ "Name" ][ "path" ] = path
cookie[ "Height" ] = height
cookie[ "Height" ][ "expires" ] = expirationDate
cookie[ "Height" ][ "path" ] = path
cookie[ "Color" ] = color
cookie[ "Color" ][ "expires" ] = expirationDate
cookie[ "Color" ][ "path" ] = path
# print cookie to user and page to browser
print cookie
printContent()
print """<body style = "background-image: /images/back.gif;
font-family: Arial,sans-serif; font-size: 11pt">
The cookie has been set with the following data: <br /><br />
<span style = "color: blue">Name:</span> %s<br />
<span style = "color: blue">Height:</span> %s<br />
<span style = "color: blue">Favorite Color:</span>
<span style = "color: %s"> %s</span><br />""" \
% ( name, height, color, color )
print """<br /><a href= "fig28_21.py">
Read cookie values</a>"""
print """</body></html>"""
Fig. 28.20 Writing a cookie to a client’s machine
The time function (line 33) of module time returns a floating-point value that is the number of seconds since the epoch (i.e., January 1, 1970). We add 300 seconds to this value to set the expirationTime for the cookie. We then format the time using the local-time function. This function converts the time in seconds to a nine-element tuple that rep-resents the time in local terms (i.e., according to the time zone of the machine on which the script is running). Lines 34–35 call the strftime function to format a time tuple into a string. This line effectively formats tuple expirationTime as a string that follows the format specified in expirationFormat.
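The same expiration-date computation still works in current Python (with `print` as a function); a minimal standalone sketch:

```python
import time

# Format string from the listing above: weekday, day-month-year, time, timezone
expiration_format = "%A, %d-%b-%y %X %Z"

# 300 seconds (5 minutes) from now, as a nine-element local-time tuple
expiration_time = time.localtime(time.time() + 300)

# Render the tuple as a cookie-style expiration string
expiration_date = time.strftime(expiration_format, expiration_time)

print(expiration_date)
```

The output is a string such as "Friday, 12-Jan-24 10:35:02 UTC", depending on your locale and time zone.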
Line 39 creates an instance of class Cookie. An object of class Cookie acts like a dictionary, so values can be set and retrieved using familiar dictionary syntax. Lines 41–51 set the values for the cookie, based on the user-entered values retrieved from the XHTML form.
Line 54 writes the cookie to the browser (assuming the user’s browser has enabled cookies) by using the print statement. The cookie must be written before we write the content type (line 56) to the browser. Lines 57–65 display the cookie’s values in the browser. We then conclude the else block by creating a link to a Python script that retrieves the stored cookie values (lines 67–68).
Figure 28.21 is the CGI script that retrieves cookie values from the client and displays the values in the browser. Line 18 creates an instance of class Cookie. Line 19 retrieves the cookie values from the client. Cookies are stored as a string in the environment variable HTTP_COOKIE. The load method of class Cookie extracts cookie values from a string. If no cookie value exists, then the program raises a KeyError exception. We catch the exception in lines 20–22 and print an appropriate message in the browser.
If the program successfully retrieves the cookie values, the code in lines 23–37 dis-plays the values in the browser. Because cookies act like dictionaries, we can use the keys method (line 31) to retrieve the names of all the values in the cookie. Lines 32–35 print these names and their corresponding values in a table.
#!C:\Python\python.exe
# Fig. 28.21: fig28_21.py
# Program that retrieves and displays client-side cookie values
import Cookie
import os
print "Content-type: text/html"
print """
<html xmlns = "http://www.w3.org/1999/xhtml" xml:lang = "en">
<head><title>Cookie values</title></head>
<body style = "background-image: /images/back.gif;
font-family: Arial, sans-serif; font-size: 11pt">"""
try:
cookie = Cookie.Cookie()
cookie.load( os.environ[ "HTTP_COOKIE" ] )
except KeyError:
print """<span style = "font-weight: bold">Error reading cookies
</span>"""
else:
print """<span style = "font-weight: bold">
The following data is saved in a cookie on your computer.
</span><br /><br />"""
print """<table style = "border-width: 5; border-spacing: 0;
padding: 10">"""
for item in cookie.keys():
print """<tr>
<td style = "background-color: white">%s</td>
<td style = "background-color: white">%s</td>
</tr>""" % ( item, cookie[ item ].value )
print """</table>"""
print """</body></html>"""
Fig. 28.21 CGI script that retrieves and displays client-side cookie values.
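As a side note, in Python 3 the Cookie module used in these listings was renamed http.cookies; its SimpleCookie class offers the same dictionary-like interface. A rough equivalent of the write and read steps, reusing the field names from the listings (the sample values here are made up):

```python
from http.cookies import SimpleCookie

# Writing: build a cookie the way fig28_20.py does
cookie = SimpleCookie()
cookie["Name"] = "Alice"
cookie["Name"]["path"] = "/"
cookie["Height"] = "170cm"
cookie["Color"] = "blue"

# cookie.output() renders the Set-Cookie headers that must be sent
# to the browser before the content type
header_text = cookie.output()

# Reading: fig28_21.py does this with the HTTP_COOKIE environment variable
incoming = SimpleCookie()
incoming.load("Name=Alice; Height=170cm; Color=blue")

for key in incoming.keys():
    print(key, incoming[key].value)
```

The keys method and the value attribute behave just like in the Python 2 listings above.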
fakemail in python not working on windows (os.fork)
hi guys,
having failed to install the perl version of fakemail (thanks anyway Kevin!) I downloaded the python version and it works fine if I manually invoke it from the command line or a .bat file.
I cannot however get the following test case to run:
The problem (seems to be) with the --background parameter sent to the script:
Code Python:
fakemail.py --host=localhost --port=10025 --path=. --background
Because if I run that from the command line, it shows the error: AttributeError: object has no attribute 'fork'
A quick Google search shows that python on Windows does not use os.fork() (among other os functions it seems).
If I use fakemail without the background parameter then the manual scripts work, but the test case just hangs (since I presume PHP is waiting for fakemail to terminate before running the rest of the script).
I'm stuck!! Any help appreciated, I've been wrestling with this for a few days now..
The problem code is here apparently:
Code Python:
def become_daemon():
    # See "Python Standard Library", pg. 29, O'Reilly, for more
    # info on the following.
    pid = os.fork()
    if pid:  # we're the parent if pid is set
        os._exit(0)
    os.setpgrp()
    os.umask(0)
According to
I can use spawnv to replicate this process but I don't know python at all so I'm not sure how.
Any help or suggestions to run fakemail.py alternatively would be much appreciated!!
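For what it's worth, a modern cross-platform alternative to os.fork() is to have the test harness (or a small wrapper) start fakemail through the subprocess module, which can detach the child on Windows. A sketch (the DETACHED_PROCESS flag is Windows-only, hence the guard):

```python
import os
import subprocess
import sys

def spawn_background(*args):
    """Start `python <args>` without blocking the caller."""
    creationflags = 0
    if os.name == "nt":
        # DETACHED_PROCESS: the child gets no console and outlives the parent
        creationflags = 0x00000008
    return subprocess.Popen(
        [sys.executable] + list(args),
        creationflags=creationflags,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )

# e.g. spawn_background("fakemail.py", "--host=localhost",
#                       "--port=10025", "--path=.")
proc = spawn_background("-c", "pass")  # trivial child, just to show the call returns
```

Popen returns immediately, so the PHP test case would no longer hang waiting for fakemail to terminate.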
You have been contributing to open source by hosting your projects on Github for some time now. That’s great! 👍 But my question is, do you take some time to describe what your project does?🤔 Or even how to run or install the project to guide anyone who is interested in the project? No! I have been there before.🤫 I almost always ignored the readme file if present or forgot to add it at all to the projects I built on the side.
It is important to note that writing a readme for each project helps:
If you have ever wondered how to build the production or release version of your React Native app, then you have come to the right place. Before proceeding, this tutorial is targeted only at the android version of our React Native app.
You would not be here if you do not have an app already. You can feel free and jump to topic three(3) of the table content. Due to educational purposes, I will start from scratch. …
npx react-native-cli rn-firebase
Install the React Native Firebase “app” module to the root of your React Native project with NPM or Yarn (I will be using NPM throughout, hence find the Yarn equivalent to installing these libraries):
npm install --save @react-native-firebase/app
Note: The @react-native-firebase/app module must be installed before using any other Firebase service.
To allow the Android app to securely connect to your Firebase project, a configuration file must be downloaded and added to your project. …
Install the react-native-dotenv library/package by running the command
npm i react-native-dotenv
cd /ios
pod install
Configure your babel.config.js to allow you to inject your environment variables into your react-native environment using dotenv for multiple environments.
Inside your babel.config.js file, add the code below:
module.exports = {
  "plugins": [
    ["module:react-native-dotenv", {
      "moduleName": "@env",
      "path": ".env",
      "blacklist": null,
      "whitelist": null,
      "safe": false,
      "allowUndefined": true
    }]
  ]
}
Create a .env file in the root of your project folder and add your environment variables.
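With that configuration in place, anything declared in .env can be imported from the '@env' module name set above. API_URL here is just a made-up variable for illustration:

```javascript
// .env (project root):
//   API_URL=https://api.example.com

// In any component — '@env' is the moduleName configured in babel.config.js:
import { API_URL } from '@env';

console.log('Connecting to', API_URL);
```

Remember to restart the bundler with a cleared cache after editing .env, since the values are inlined at build time by the babel plugin.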
Installing vector icons in a bare React Native application requires some few steps to follow in order to set it up correctly for both android and ios platforms.
If you’re using Expo, react-native-vector-icons is installed by default on the template project that you get through expo init. It is part of the Expo project.
For a bare React Native project, some additional steps need to be followed after running the command npm install react-native-vector-icons, depending on the platform you are running.
If you want to properly install react-native-vector-icons on an android platform, follow the steps below:
…
Modals are one of the most widely used components in mobile apps. React Native modals allows you to present content above an enclosing view.
Find out more about modals in React Native from their official documentation
Surprisingly, using modals in React Native is quite easy and straight forward. Maybe it might be due to how well the documentation is written. To use modals in a react-native project, simply import the Modal component from the react-native library:
import {Modal} from 'react-native'
The next most important step is to trigger the modal to either open or close when an event happens. …
Still exploring and experimenting with React Natives’ components as of day 3. I added an important feature that is very common with lists on mobile apps, the RefreshControl component.
You can read more about this component from the official documentation
The RefreshControl component allows users to fetch new data on their screen when they pull the list down. In simple terms, it adds pull to refresh functionality inside of a ScrollView or ListView(FlatList and SectionList).
As I mentioned on day 2, I had some challenges styling the course card whiles using the FlatList component and had to fall back on…
On day 2, I explored React native ListView components. The ListView component renders a list of scrollable elements on the screen. Below is a list of ListView components that are provided out of the box by the framework.
ScrollView is a generic component that renders all its child elements at once. Hence, it is not advisable to use the ScrollView component for a long list of data or dynamic data.
FlatList is a component for rendering a performant scrollable list.
The SectionList works the same as the FlatList. …
A Software Engineer who loves to share his knowledge through writing | https://victorbruce82.medium.com/?source=post_internal_links---------0---------------------------- | CC-MAIN-2021-25 | refinedweb | 752 | 62.27 |
The write function is not updating while raising an error
@api.model
def create(self, vals):
    record = self.search([('create_uid', '=', self.env.user.id), ('purchase_order_type', '=', 'po_type_local'), ('state', 'not in', ['sent', 'draft'])])
    for rec in record:
        d1 = datetime.strptime(rec.date_order, '%Y-%m-%d %H:%M:%S')
        d2 = datetime.strptime(fields.Date.today(), '%Y-%m-%d')
        d3 = fields.Date.today()
        daysDiff = str((d2 - d1).days + 1)
        if self.env.user.lpo_extend < d3:
            if int(daysDiff) > 2:
                if rec.invoice_count == 0:
                    self.env.user.write({'lpo_block': True})
                    raise UserError("You cannot create this order! Previous LPO is not processed yet!")
        count = 0
        if self.env.user.lpo_extend < d3:
            for line in rec.invoice_ids:
                if line.state != 'draft':
                    count += 1
            if int(daysDiff) > 2:
                if count == 0:
                    self.env.user.write({'lpo_block': True})
                    raise UserError("You cannot create this order! Previous LPO is not processed yet!")
    return super(PurchaseOrder, self).create(vals)
Hi Jithin
That is expected behavior in Odoo and many other transactional systems, where a transaction is rolled back when an exception occurs to prevent data inconsistencies. If you really want the write committed, you have 2 choices:
1- Commit the cursor yourself, which could lead to previous changes in the data model being persisted even when Odoo performs a rollback because of a detected error
self.env.cr.commit()
2- Use another cursor for that specific write, for example (easier with the old api style):
with self.pool.cursor() as new_cr:
    self.pool.get('res.users').write(new_cr, [self.env.user.id], {'lpo_block': True})
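The effect of option 2 can be illustrated outside Odoo with plain sqlite3: work committed on a second, independent connection survives a rollback of the first. That is why the lpo_block flag can persist even though the UserError aborts the creating transaction. This is only a sketch of the idea, not Odoo code:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "demo.db")

# "Main" transaction: stands in for the cursor running create()
main = sqlite3.connect(db_path)
main.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, lpo_block INTEGER)")
main.execute("INSERT INTO users VALUES (1, 0)")
main.commit()

# Second, independent connection: stands in for self.pool.cursor()
side = sqlite3.connect(db_path)
side.execute("UPDATE users SET lpo_block = 1 WHERE id = 1")
side.commit()   # committed on its own, immediately
side.close()

# Back in the main transaction: do some work, then "raise", i.e. rollback
main.execute("INSERT INTO users VALUES (2, 0)")
main.rollback()  # the insert of user 2 is discarded...
main.close()

check = sqlite3.connect(db_path)
rows = check.execute("SELECT id, lpo_block FROM users").fetchall()
print(rows)  # [(1, 1)] -- ...but the flag written on the side connection stuck
check.close()
```

The same isolation applies to Odoo's cursors: each one is its own database transaction.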
Hope that helps
Could you be more specific? Can you show any related modules?
Mail me at k.jithin20@gmail.com
I updated the answer.
#include <mysql/components/service.h>
#include "my_inttypes.h"
Converts a character buffer to string of specified charset to a string object.
The caller provides the destination string object, which content will be modified.
Converts the mysql_string to a given character set.
Find a CHARSET_INFO by name.
Get the "utf8mb4" CHARSET_INFO.
The string functions are exposed as a service by the mysql_server component, so by default this service is available to all the components registered with the server.
Lookup available character sets.
Status: Active.
Service to get a byte in String and number of bytes in string.
Service for String case conversions, to lower case and to upper case.
Service to get a character in String and number of characters in string.
Service for conversions, string to buffer and buffer to string.
Status: Active.
Service for conversions, string to buffer and buffer to string.
Status: Deprecated, use mysql_string_charset_converter instead.
Service for String c_type.
Service for String create and destroy.
Service for listing Strings by iterator.
Append a string.
Compare two strings.
Access the string raw data.
The raw data returned is usable only while the string object is valid.
Reset a string to the empty string.
In the first part of this post, we talked about a PAL algorithm that works with some relations represented in graphs. This is the Link Prediction algorithm. Now, we’re going to visualize the results. This visualization is presented through a Web interface, combining the SAP UI5 library and D3js in order to use a more “advanced” representation.
You may want to read the first part if you want to have the SQLScript procedure, and the result set that is going to be represented.
The creation of the project
In the SAP HANA Development perspective, with the contextual menu, we need to create a new SAP UI5 project.
You can read more information about how to use SAP UI5 in the SAP HANA Studio in this link.
Then, we need to share it in the server, so we can visualize it in the SAP HANA XS server. Once you deployed that files to the server, this won’t be an application till you put some files.
These are the required files that you need to put in the same directory structure (the .xsapp file is an empty file):
.xsaccess
{ "exposed" : true, "authentication" : [ { "method" : "Basic" } ] }
.xsapp
In this case, this is my directory structure. I can see the web page in http://<host>:80<instance>/mexbalia/buyers/WebContent/index.html
It may change on your own project.
The customization and visualization
In the index.html file, we need to add some sap-ui-libs:
<script src="/sap/ui5/1/resources/sap-ui-core.js" id="sap-ui-bootstrap" data-sap-ui-libs="sap.ui.commons"> </script>
The sap-ui.core.js is usually located in <host>:80<instance>/sap/ui5/1/resources/sap-ui-core.js, do the math.
To complete the preparation, put a copy of D3 in your server and add a script tag to reference it in the index.html file.
<script type="text/javascript" src='js/d3.min.js'></script>
If you want to put a header in the webpage, we can use the next code:
<script type="text/javascript">
    var oAppHeader = new sap.ui.commons.ApplicationHeader("appHeader");
    oAppHeader.setLogoSrc("");
    oAppHeader.setLogoText("SAP - Link Prediction PoC");
    oAppHeader.setDisplayWelcome(true);
    oAppHeader.setUserName("Carlos Mendez");
    oAppHeader.setDisplayLogoff(true);
    oAppHeader.placeAt("header");
</script>
And, of course, you need a DIV with the ID "header" inside the BODY tag to hold that component.
It starts looking like this:
For the representation of the original data, i.e., the relations between all the buyers and sellers, we will use a dependency wheel. There is a library built on D3, and you can read about it in this link. The dependency wheel will be located in a SAP UI5 HTML panel, inside index.html.
You can expose some tables or views using the generation of WebServices with OData; and you can find a lot of related information in the SAP HANA documentation.
This is the file services/buyers.xsodata
service namespace "mexbalia.hana.services" {
    "CARLOS"."BUYERS" as "BUYERS" keys generate local "Id";
}
And, this is the BUYERS view
select id_from "BUYER", id_to "SELLER", month_in_year "MONTH", count(*) as TOT from "CARLOS"."LOG_TRANS" where "TYPE" = 'B' and MONTH_IN_YEAR = 9 group by id_from, id_to, MONTH_IN_YEAR
Generating that web service, the dependency wheel of the September transactions looks like the next figure:
That dependency wheel allows you to put the pointer in one vertex, and just its relations are going to be marked. For an example, you can see the next figure:
In a traditional graph, the same original data would look like this:
I don’t personally think that this is a good way to represent such a big graph. Maybe one with just a few relations, but this is not the case.
Actually, we do have a small graph to be represented. This is the Link Prediction results’ graph. If we make a new view from the PAL_LP_RESULT_TBL table (running the algorithm for september also), and we select only those predictions that has more than 40% probability to occur, we will have a very small graph.
PREDICTION view
select NODE1 "BUYER", NODE2 "SELLER", count(*) as TOT from PAL_LP_RESULT_TBL where SCORE >= 0.4 group by NODE1, NODE2
It’s just necessary to expose the new view with a web service, just like we did before.
With that exposure, this is the visualization of the Link Prediction algorithm results:
So, as you can see, we have presented a complete cycle of a HANA PAL algorithm application. We prepared the data, ran the algorithm, and finally saw the results. This is just an example. I would like to read more about your own applications.
You can download all the necessary code and files from the github repository.
When I go to localhost:3000 I get:
Uncaught ReferenceError: LocaleProvider is not defined at app.js:281 at maybeReady (meteor.js:809) at HTMLDocument.loadingCompleted (meteor.js:821)
But this makes no sense because it worked fine before the update. The import is:
import { LocaleProvider } from 'antd';
Now here’s the really strange part. If I type require('antd') in the browser console, I get this:
Uncaught Error: Cannot find module 'antd' at require (modules-runtime.js?hash=2b888cb…:133) at <anonymous>:1:1
However, I revert to Meteor 1.4.3.2, then require('antd') and lo and behold:
I’m guessing some changes to the build system (which fixed yarn compatibility) might’ve broken something else?
Therefore, when a system is distributed across a network, it also needs a cache that is running on the network. Nowadays, there are plenty of network servers that offer caching capability—for example, Redis.
As you’re going to see in this tutorial, memcached is another great option for caching. After a quick introduction to basic memcached usage, you’ll learn about advanced patterns such as “cache and set” and using fallback caches to avoid cold cache performance issues.
Installing memcached
Memcached is available for many platforms:
- If you run Linux, you can install it using
apt-get install memcachedor
yum install memcached. This will install memcached from a pre-built package but you can alse build memcached from source, as explained here.
- For macOS, using Homebrew is the simplest option. Just run
brew install memcachedafter you’ve installed the Homebrew package manager.
- On Windows, you would have to compile memcached yourself or find pre-compiled binaries.
Once installed, memcached can simply be launched by calling the
memcached command:
$ memcached
Before you can interact with memcached from Python-land you’ll need to install a memcached client library. You’ll see how to do this in the next section, along with some basic cache access operations.
Storing and Retrieving Cached Values Using Python
If you never used memcached, it is pretty easy to understand. It basically provides a giant network-available dictionary. This dictionary has a few properties that are different from a classical Python dictionnary, mainly:
- Keys and values have to be bytes
- Keys and values are automatically deleted after an expiration time
Therefore, the two basic operations for interacting with memcached are
set and
get. As you might have guessed, they’re used to assign a value to a key or to get a value from a key, respectively.
My preferred Python library for interacting with memcached is
pymemcache—I recommend using it. You can simply install it using pip:
$ pip install pymemcache
The following code shows how you can connect to memcached and use it as a network cache in your Python applications:
>>> from pymemcache.client import base

# Don't forget to run `memcached' before running this next line:
>>> client = base.Client(('localhost', 11211))

# Once the client is instantiated, you can access the cache:
>>> client.set('some_key', 'some value')

# Retrieve previously set data again:
>>> client.get('some_key')
'some value'
The memcached network protocol is really simple and its implementation extremely fast, which makes it useful to store data that would otherwise be slow to retrieve from the canonical source of data or to compute again.
While straightforward enough, this example allows storing key/value tuples across the network and accessing them through multiple, distributed, running copies of your application. This is simplistic, yet powerful. And it’s a great first step towards optimizing your application.
Automatically Expiring Cached Data
When storing data into memcached, you can set an expiration time—a maximum number of seconds for memcached to keep the key and value around. After that delay, memcached automatically removes the key from its cache.
What should you set this cache time to? There is no magic number for this delay, and it will entirely depend on the type of data and application that you are working with. It could be a few seconds, or it might be a few hours.
Cache invalidation, which defines when to remove the cache because it is out of sync with the current data, is also something that your application will have to handle. Especially if presenting data that is too old or stale is to be avoided.
Here again, there is no magical recipe; it depends on the type of application you are building. However, there are several outlying cases that should be handled—which we haven’t yet covered in the above example.
A caching server cannot grow infinitely—memory is a finite resource. Therefore, keys will be flushed out by the caching server as soon as it needs more space to store other things.
Some keys might also be expired because they reached their expiration time (also sometimes called the “time-to-live” or TTL.) In those cases the data is lost, and the canonical data source must be queried again.
This sounds more complicated than it really is. You can generally work with the following pattern when working with memcached in Python:
from pymemcache.client import base


def do_some_query():
    # Replace with actual querying code to a database,
    # a remote REST API, etc.
    return 42


# Don't forget to run `memcached' before running this code
client = base.Client(('localhost', 11211))
result = client.get('some_key')

if result is None:
    # The cache is empty, need to get the value
    # from the canonical source:
    result = do_some_query()

    # Cache the result for next time:
    client.set('some_key', result)

# Whether we needed to update the cache or not,
# at this point you can work with the data
# stored in the `result` variable:
print(result)
Note: Handling missing keys is mandatory because of normal flush-out operations. It is also obligatory to handle the cold cache scenario, i.e. when memcached has just been started. In that case, the cache will be entirely empty and the cache needs to be fully repopulated, one request at a time.
This means you should view any cached data as ephemeral. And you should never expect the cache to contain a value you previously wrote to it.
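This get-check-set dance repeats for every cached value, so it is worth wrapping once. The sketch below uses a small dict-backed stand-in for the client so it runs without a server; with a real pymemcache client the helper body is identical (pymemcache's set accepts the expiration time as a third argument):

```python
def get_or_set(client, key, compute, expire=60):
    """Return the cached value for `key`, computing and caching it on a miss."""
    value = client.get(key)
    if value is None:
        value = compute()
        client.set(key, value, expire)
    return value


class DictCache:
    """In-memory stand-in for a memcached client (ignores expiration)."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value, expire=0):
        self._data[key] = value


calls = []

def expensive_query():
    calls.append(1)          # record how often the canonical source is hit
    return 42

cache = DictCache()
first = get_or_set(cache, "some_key", expensive_query)
second = get_or_set(cache, "some_key", expensive_query)
print(first, second, len(calls))  # 42 42 1 -- computed once, then served from cache
```

Note that this helper inherits the same caveat as the pattern above: a flushed or expired key simply triggers a recomputation.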
Warming Up a Cold Cache
Some of the cold cache scenarios cannot be prevented, for example a memcached crash. But some can, for example migrating to a new memcached server.
When it is possible to predict that a cold cache scenario will happen, it is better to avoid it. A cache that needs to be refilled means that all of the sudden, the canonical storage of the cached data will be massively hit by all cache users who lack a cache data (also known as the thundering herd problem.)
pymemcache provides a class named FallbackClient that helps in implementing this scenario, as demonstrated here:
from pymemcache.client import base
from pymemcache import fallback


def do_some_query():
    # Replace with actual querying code to a database,
    # a remote REST API, etc.
    return 42


# Set `ignore_exc=True` so it is possible to shut down
# the old cache before removing its usage from
# the program, if ever necessary.
old_cache = base.Client(('localhost', 11211), ignore_exc=True)
new_cache = base.Client(('localhost', 11212))

client = fallback.FallbackClient((new_cache, old_cache))

result = client.get('some_key')

if result is None:
    # The cache is empty, need to get the value
    # from the canonical source:
    result = do_some_query()

    # Cache the result for next time:
    client.set('some_key', result)

print(result)
The FallbackClient queries the caches passed to its constructor, respecting their order. In this case, the new cache server will always be queried first, and in case of a cache miss, the old one will be queried—avoiding a possible return-trip to the primary source of data.
If any key is set, it will only be set to the new cache. After some time, the old cache can be decommissioned and the FallbackClient can be replaced directly with the new_cache client.
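The lookup order is the whole trick, and it is easy to see with two stub clients standing in for the servers. The classes below are illustrative stand-ins, not the real pymemcache implementation:

```python
class StubCache:
    """Dict-backed stand-in for one memcached server."""

    def __init__(self, data=None):
        self.data = dict(data or {})

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value


class SimpleFallback:
    """Illustrates the FallbackClient idea: read through the list, write to the first."""

    def __init__(self, clients):
        self.clients = clients  # ordered: (new_cache, old_cache)

    def get(self, key):
        for client in self.clients:
            value = client.get(key)
            if value is not None:
                return value
        return None

    def set(self, key, value):
        self.clients[0].set(key, value)


old_cache = StubCache({"some_key": 42})   # warm
new_cache = StubCache()                   # cold
client = SimpleFallback([new_cache, old_cache])

hit = client.get("some_key")   # miss on new_cache, served by old_cache
client.set("some_key", hit)    # repopulates only the new cache
print(hit, new_cache.data)
```

One request at a time, the new cache warms up while reads keep being served, which is exactly what avoids the thundering herd on the canonical data source.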
Check And Set
When communicating with a remote cache, the usual concurrency problem comes back: there might be several clients trying to access the same key at the same time. memcached provides a check and set operation, shortened to CAS, which helps to solve this problem.
The simplest example is an application that wants to count the number of users it has. Each time a visitor connects, a counter is incremented by 1. Using memcached, a simple implementation would be:
def on_visit(client):
    result = client.get('visitors')
    if result is None:
        result = 1
    else:
        result += 1
    client.set('visitors', result)
However, what happens if two instances of the application try to update this counter at the same time?
The first call client.get('visitors') will return the same number of visitors for both of them, let's say it's 42. Then both will add 1, compute 43, and set the number of visitors to 43. That number is wrong, and the result should be 44, i.e. 42 + 1 + 1.
To solve this concurrency issue, the CAS operation of memcached is handy. The following snippet implements a correct solution:
def on_visit(client):
    while True:
        result, cas = client.gets('visitors')
        if result is None:
            result = 1
        else:
            result += 1
        if client.cas('visitors', result, cas):
            break
The gets method returns the value, just like the get method, but it also returns a CAS value.
What is in this value is not relevant, but it is used for the next cas method call. This method is equivalent to the set operation, except that it fails if the value has changed since the gets operation. In case of success, the loop is broken. Otherwise, the operation is restarted from the beginning.
In the scenario where two instances of the application try to update the counter at the same time, only one succeeds in moving the counter from 42 to 43. The second instance gets a False value returned by the client.cas call, and has to retry the loop. It will retrieve 43 as the value this time, increment it to 44, and its cas call will succeed, thus solving our problem.
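To see the retry in action without a live memcached server, here is a toy store with gets/cas semantics, plus a second writer sneaking in between one client's gets and its cas:

```python
import itertools


class FakeCasStore:
    """Toy in-memory model of memcached's gets/cas (illustration only)."""

    def __init__(self):
        self._data = {}                  # key -> (value, cas_token)
        self._tokens = itertools.count(1)

    def set(self, key, value):
        self._data[key] = (value, next(self._tokens))

    def gets(self, key):
        return self._data.get(key, (None, None))

    def cas(self, key, value, token):
        _, current = self._data.get(key, (None, None))
        if current != token:             # someone wrote since our gets(): fail
            return False
        self._data[key] = (value, next(self._tokens))
        return True


def on_visit(store):
    while True:
        result, token = store.gets("visitors")
        result = 1 if result is None else result + 1
        if store.cas("visitors", result, token):
            return result


store = FakeCasStore()
store.set("visitors", 42)

# Client A reads 42...
stale_value, stale_token = store.gets("visitors")
# ...but client B increments first:
on_visit(store)                                   # counter is now 43
# Client A's cas with the stale token fails, so its loop retries:
assert store.cas("visitors", stale_value + 1, stale_token) is False
print(on_visit(store))                            # 44 -- no lost update
```

The failed cas is what turns the lost-update race into a harmless retry.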
Incrementing a counter makes a good example to explain how CAS works because it is simple. However, memcached also provides the incr and decr methods to increment or decrement an integer in a single request, rather than doing multiple gets/cas calls. In real-world applications, gets and cas are used for more complex data types or operations.
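For the counter case, a single atomic incr removes the need for a retry loop altogether. A minimal in-memory stand-in sketches the semantics (the FakeCounterClient class is made up for illustration; a real client such as pymemcache performs the increment in one server-side request):

```python
import threading

class FakeCounterClient:
    """In-memory stand-in for a memcached client's atomic incr."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def incr(self, key, delta=1):
        # The server applies the increment atomically, so concurrent
        # callers can never lose an update.
        with self._lock:
            self._data[key] = self._data.get(key, 0) + delta
            return self._data[key]

client = FakeCounterClient()
for _ in range(41):
    client.incr('visitors')
print(client.incr('visitors'))  # 42
```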
Most remote caching servers and data stores provide such a mechanism to prevent concurrency issues. Being aware of these cases is critical to making proper use of their features.
Beyond Caching
The simple techniques illustrated in this article showed you how easy it is to leverage memcached to speed up the performance of your Python application.
Just by using the two basic "set" and "get" operations you can often accelerate data retrieval or avoid recomputing results over and over again. With memcached you can share the cache across a large number of distributed nodes.
Other, more advanced patterns you saw in this tutorial, like the Check And Set (CAS) operation, allow you to update data stored in the cache concurrently across multiple Python threads or processes while avoiding data corruption.
If you are interested in learning more about advanced techniques for writing faster and more scalable Python applications, check out Scaling Python. It covers many advanced topics such as network distribution, queuing systems, distributed hashing, and code profiling.
When you started your Ext JS 4.* project with Sencha Cmd you know how easy…
Make a native build with Ext JS 5, Sencha Cmd 5, and Phonegap/Cordova with plugins:
- Let's generate a new Ext JS 5 app
Browse with your terminal to your Ext JS 5 SDK folder. Run the following command:
sencha generate app MyApp ../phonegapdemo

Here we generate an Ext JS 5 demo app in the phonegapdemo folder, which sits next to the downloaded Ext JS 5 SDK folder.
- Open app.json
Add the following code block:
keyword. Note also the id property, which expects an id in reversed-domain style, and name, which should be the name of the Sencha app namespace. In case I want to build via the Phonegap cloud web service, I should set the remote property to true. Then you will also need to create a local.properties file in the phonegapdemo folder with the login details for build.phonegap.com:
phonegap.remote.username=my@email.com
phonegap.remote.password=mypassword
- Create a native build
Back to your terminal, navigate to the phonegapdemo folder, and run the following command:
sencha app build ios
- Enable the JS Phonegap/Cordova API
Although you could build and run your application on a device by now, it might be handy to enable the Phonegap/Cordova device API, for example in case you need to install plugins, such as the inappbrowser plugin. array in app.json. Not sure if I did it wrong, but I was running into Sencha build errors because of that. Mind you, the cordova JavaScript file will be created while building the app, so it is not available in the project root.
- Let's build it (again)!
sencha app build ios:
- Edit config.xml
You can find it here: phonegapdemo/phonegap/config.xml Now add the following line, (if not already available):
<gap:plugin name="org.apache.cordova.inappbrowser" />
- Install the plugin:
Run from the command-line the following command, from the phonegapdemo/phonegap folder:
phonegap plugin add org.apache.cordova.inappbrowser

Again, Mac OS X users will need to have admin rights, so prefix the command with sudo. This command will add the plugin into the phonegapdemo/phonegap/plugins/ folder.
- How to open URLs
Edit the demo app and create a button which will open an external URL in a separate browser. For example:
Python internal metric unit conversion function
I'm trying to build a function to do internal metric unit conversion for a wavelength-to-frequency conversion program and have been having a hard time getting it to behave properly. It is super slow and will not assign the correct labels to the output. If anyone can help with either a different method of computing this or a reason why this is happening and any fixes that I could do, that would be amazing!
def convert_SI_l(n):
    if n in range(int(1e-12), int(9e-11)):
        return n / 0.000000000001, 'pm'
    elif n in range(int(1e-10), int(9e-8)):
        return n / 0.000000001, 'nm'
    elif n in range(int(1e-7), int(9e-5)):
        return n / 0.000001, 'um'
    elif n in range(int(1e-4), int(9e-3)):
        return n / 0.001, 'mm'
    elif n in range(int(0.01), int(0.99)):
        return n / 0.01, 'cm'
    elif n in range(1, 999):
        return n / 1000, 'm'
    elif n in range(1000, 299792459):
        return n / 1000, 'km'
    else:
        return n, 'm'

def convert_SI_f(n):
    if n in range(1, 999):
        return n, 'Hz'
    elif n in range(1000, 999999):
        return n / 1000, 'kHz'
    elif n in range(int(1e6), 999999999):
        return n / 1e6, 'MHz'
    elif n in range(int(1e9), int(1e13)):
        return n / 1e9, 'GHz'
    else:
        return n, 'Hz'

c = 299792458

i = input("Are we starting with a frequency or a wavelength? ( F / L ): ")

# Error statements
if i.lower() == "f":
    True
elif not i.lower() == "l":
    print("Error invalid input")

# Cases
if i.lower() == "f":
    f = float(input("Please input frequency (in Hz): "))
    size_l = c / f
    print(convert_SI_l(size_l))
if i.lower() == "l":
    l = float(input("Please input wavelength (in meters): "))
    size_f = l / c
    print(convert_SI_f(size_f))
You are using range() in a way that is close to how it is used in natural language, to express a contiguous segment of the real number line, as in "in the range 4.5 to 5.25". But range() doesn't mean that in Python: it means a bunch of integers. So your floating-point values, even if they are in the range you specify, will not occur in the bunch of integers that the range() function generates.
Your first test is

if n in range( int(1e-12),int(9e-11)):

and I am guessing you wrote it like this because what you actually wanted was range(1e-12, 9e-11), but you got TypeError: 'float' object cannot be interpreted as an integer.
But if you do this at the interpreter prompt
>>> range(int(1e-12), int(9e-11))
range(0, 0)
>>> list(range(int(1e-12), int(9e-11)))
[]
you will see it means something quite different to what you obviously expect.
To test if a floating-point number falls in a given range, use a chained comparison:

if lower_bound <= mynumber <= upper_bound:
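Putting that together, here is a sketch of the asker's convert_SI_l rewritten with chained comparisons instead of range() (the threshold values mirror the question's code; the function name is mine):

```python
def convert_si_length(n):
    """Pick an SI prefix for a length given in metres.

    Chained comparisons work for floats, unlike membership tests
    against range(), which only contains integers.
    """
    if 1e-12 <= n < 1e-10:
        return n / 1e-12, 'pm'
    elif 1e-10 <= n < 1e-7:
        return n / 1e-9, 'nm'
    elif 1e-7 <= n < 1e-4:
        return n / 1e-6, 'um'
    elif 1e-4 <= n < 1e-2:
        return n / 1e-3, 'mm'
    elif 1e-2 <= n < 1:
        return n / 1e-2, 'cm'
    elif 1 <= n < 1000:
        return n, 'm'
    else:
        return n / 1000, 'km'

print(convert_si_length(0.0035))  # approximately (3.5, 'mm')
print(convert_si_length(5.3e-7))  # approximately (0.53, 'um')
```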
You don't need ranges and your logic will be more robust if you base it on fixed threshold points that delimit the unit magnitude. This would typically be a unit of one in the given scale.
Here's a generalized approach to all unit scale determination:
SI_Length = [
    (1/1000000000000, "pm"),
    (1/1000000000, "nm"),
    (1/1000000, "um"),
    (1/1000, "mm"),
    (1/100, "cm"),
    (1, "m"),
    (1000, "km"),
]

SI_Frequency = [(1, "Hz"), (1000, "kHz"), (1000000, "MHz"), (1000000000, "GHz")]

def convert(n, units):
    useFactor, useName = units[0]
    for factor, name in units:
        if n >= factor:
            useFactor, useName = factor, name
    return (n / useFactor, useName)

print(convert(0.0035, SI_Length))       # 3.5 mm
print(convert(12332.55, SI_Frequency))  # 12.33255 kHz
Each unit array must be in order of smallest to largest multiplier.
EDIT: Actually, range is a function which is generally used in iteration to generate numbers. So when you write if n in range(min_value, max_value), this function generates all integers until it finds a match or reaches max_value.
The range type represents an immutable sequence of numbers and is commonly used for looping a specific number of times in for loops.
Instead of writing:
if n in range(int(1e-10), int(9e-8)):
    return n / 0.000000001, 'nm'
you should write:
if 1e-10 <= n < 9e-8:
    return n / 0.000000001, 'nm'
Also keep in mind that range only works on integers, not floats.
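A quick interpreter check makes the distinction concrete: membership tests against range() check integer membership, not interval membership, so floats that are not whole numbers are never "in" a range.

```python
# range() contains integers only; equality decides membership.
print(2 in range(1, 10))     # True
print(2.0 in range(1, 10))   # True: 2.0 == 2, an int in the range
print(2.5 in range(1, 10))   # False: no integer equals 2.5
print(1e-11 in range(0, 1))  # False: range(0, 1) contains only 0
print(1e-11 >= 1e-12)        # True: chained comparisons handle floats
```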
More EDIT:
For your specific use case, you can define a dictionary of (value, symbol) pairs, like below:

import collections

symbols = collections.OrderedDict(
    [(1e-12, u'p'),
     (1e-9, u'n'),
     (1e-6, u'μ'),
     (1e-3, u'm'),
     (1e-2, u'c'),
     (1e-1, u'd'),
     (1e0, u''),
     (1e1, u'da'),
     (1e2, u'h'),
     (1e3, u'k'),
     (1e6, u'M'),
     (1e9, u'G'),
     (1e12, u'T')])
Then use the bisect.bisect function to find the "insertion" point of your value in that ordered collection. This insertion point can be used to get the simplified value and the SI symbol to use.
For instance:
import bisect

def convert_to_si(value):
    if value < 0:
        value, symbol = convert_to_si(-value)
        return -value, symbol
    elif value > 0:
        orders = list(symbols.keys())
        order_index = bisect.bisect(orders, value / 10.0)
        order = orders[min(order_index, len(orders) - 1)]
        return value / order, symbols[order]
    else:
        return value, u""
Demonstration:
for value in [1e-12, 3.14e-11, 0, 2, 20, 3e+9]:
    print(*convert_to_si(value), sep="")
You get:
1.0p
0.0314n
0
2.0
2.0da
3.0G
You can adapt this function to your needs…
Represents an application instance for a single session. More...
#include <Wt/WApplication>
Represents an application instance for a single session.
Each user session of your application has a corresponding WApplication instance. You need to create a new instance and return it as the result of the callback function passed to WRun(). The instance is the main entry point to session information, and holds a reference to the root() of the widget tree.
The recipe for a Wt web application, which allocates new WApplication instances for every user visiting the application is thus:
WApplication *createApplication(const WEnvironment& env)
{
  //
  // Optionally, check the environment and redirect to an error page.
  //
  bool valid = ...;

  WApplication *app;
  if (!valid) {
    app = new WApplication(env);
    app->redirect("error.html");
    app->quit();
  } else {
    // usually you will specialize your application class
    app = new WApplication(env);

    //
    // Add widgets to app->root() and return the application object.
    //
  }

  return app;
}
Throughout the session, the instance is available through WApplication::instance() (or through #wApp). The application may be quit either by using the method quit(), or because of a timeout after the user has closed the window (but not because the user does not interact: keep-alive messages in the background will keep the session around as long as the user has the page open). In either case, the application object is deleted, allowing for cleanup of the entire widget tree and any other resources.
The WApplication object provides access to session-wide settings, including:
Enumeration that indicates the method for dynamic (AJAX-alike) updates (deprecated).
Creates a new application instance.
The environment provides information on the initial request, user agent, and deployment-related information.
Adds JavaScript statements that should be run continuously.
This is an internal method.
It is used by for example layout managers to adjust the layout whenever the DOM tree is manipulated.
Adds an HTML meta header.
A meta header can only be added in the following situations:
When a header was previously set for the same name, its contents are replaced.

These situations coincide with WEnvironment::ajax() returning false (see environment()).
Adds an HTML meta header.
This overloaded method allows to define both "name" meta headers, relating to document properties as well as "http-equiv" meta headers, which define HTTP headers.
Adds an HTML meta link.
When a link was previously set for the same href, its contents are replaced. When an empty string is used for the arguments media, hreflang, type or sizes, they will be ignored.
Returns the Ajax communication method (deprecated).
Returns the appRoot special property.
This returns the "appRoot" property, with a trailing slash added to the end if it was not yet present.
The property "appRoot" was introduced as a generalization of the working directory for the location of files that do not need to be served over http to the client, but are required by the program to run properly. Typically, these are message resource bundles (xml), CSV files, database files (e.g. SQLite files for Wt::Dbo), ...
Some connectors do not allow you to control what the current working directory (CWD) is set to (fcgi, isapi). Instead of referring to files assuming a sensible CWD, it is therefore better to refer to them relative to the application root.
The appRoot property is special in the sense that it can be set implicitly by the connector (see the connector documentation for more info). If it was not set by the connector, it can be set as a normal property in the configuration file (the default wt_config.xml describes how to set properties). If the property is not set at all, it is assumed that the appRoot is CWD and this function will return an empty string.
Usage example:
messageResourceBundle().use(appRoot() + "text"); messageResourceBundle().use(appRoot() + "charts"); Wt::Dbo::backend::Sqlite3 sqlite3_(appRoot() + "planner.db");
Attach an auxiliary thread to this application.
In a multi-threaded environment, WApplication::instance() uses thread-local data to retrieve the application object that corresponds to the session currently being handled by the thread. This is set automatically by the library whenever an event is delivered to the application, or when you use the UpdateLock to modify the application from an auxiliary thread outside the normal event loop.
When you want to manipulate the widget tree inside the main event loop, but from within an auxiliary thread, then you cannot use the UpdateLock since.
Protects a function against deletion of the target object.
When posting an event using WServer::post(), it is convenient to bind an object method to be called. However, while WServer::post() guarantees that the application to which an event is posted is still alive, it may be that the targeted widget (or WObject in general) has been deleted.
This function wraps such an object method with a protection layer which guarantees that the method is only called when the targeted object is alive, in the same way as how a signal automatically disconnects slots from objects that are being deleted.
You typically will bind the function immediately within the event loop where you register the "callback", and pass this bound function to (typically an external thread) which calls post() later on. What you cannot do is bind the function only later on, since at that time the target object may already have been destroyed.
As with the signal/slot connection tracking mechanism, this requires that the object is a WObject.
Binds a top-level widget for a WidgetSet deployment.
This method binds a widget to an existing element with DOM id domId on the page. The element type should correspond with the widget type (e.g. it should be a <div> for a WContainerWidget, or a <table> for a WTable).
Returns the style class set for the entire page <body>.
Returns a bookmarkable URL for the current internal path.
Is equivalent to bookmarkUrl(internalPath()); see bookmarkUrl(const std::string&) const.

To obtain a URL that refers to the current session of the application, use url() instead.
Returns a bookmarkable URL for a given internal path.
See also 10.1 Session management (wt_config.xml) for configuring the session-tracking method.
You can use bookmark.
The internal path should be UTF8 encoded (we may fix the API to use WString in the future).
Changes the session id.
To mitigate session ID fixation attacks, you should use this method to change the session ID to a new random value after a user has authenticated himself.
Returns the close message.
Declares an application-wide JavaScript function.
The function is stored in WApplication::javaScriptClass().
The next code snippet declares and invokes function foo:
Defers rendering of the current event response.
This method defers the rendering of the current event response until resumeRendering() is called. This may be used if you do not want to actively block the current thread while waiting for an event which is needed to complete the current event response. Note that this effectively freezes the user interface, and thus you should only do this if you know that the event you are waiting for will arrive shortly, or there is really nothing more useful for the user to do than wait for the action to complete.
A typical use case is in conjunction with the Http::Client, to defer the rendering while waiting for the Http::Client to complete.
The function may be called multiple times and the number of deferral requests is counted. The current response is deferred until as many calls to resumeRendering() have been performed.
Returns the server document root.
This returns the filesystem path that corresponds to the document root of the webserver...
Enables server-initiated updates.
By default, updates to the user interface are possible only at startup, during any event (in a slot), or at regular time points using WTimer. This is the normal Wt event loop.

When enabled is true, this enables "server push" (what is called 'comet' in AJAX terminology). Widgets may then be modified, created or deleted outside of the event loop (e.g. in response to execution of another thread), and these changes are propagated by calling triggerUpdate().
There are two ways for safely manipulating a session's UI, with respect to thread-safety and application life-time (the library can decide to terminate an application if it lost connectivity with the browser).
The easiest and less error-prone solution is to post an event, represented by a function/method call, to a session using WServer::post().
The method is non-blocking: it returns immediately, avoiding dead-lock scenarios. The function is called from within a thread of the server's thread pool, and not if the session has been or is being terminated. The function is called in the context of the targeted application session, and with exclusive access to the session.
A more direct approach is to grab the application's update lock and manipulate the application's state directly from another thread.
At any time, the application may be deleted (e.g. because of a time out or because the user closes the application window). You should thus make sure you no longer reference an application after it has been deleted. When Wt decides to delete an application, it first runs WApplication::finalize() and then invokes the destructor. While doing this, any other thread trying to grab the update lock will unblock, but the lock will return false. You should therefore always check whether the lock is valid.
An example of how to modify the widget tree outside the event loop and propagate changes is:
// You need to have a reference to the application whose state
// you are about to manipulate.
// You should prevent the application from being deleted somehow,
// before you could grab the application lock.
Wt::WApplication *app = ...;

{
  // Grab the application lock. It is a scoped lock.
  Wt::WApplication::UpdateLock lock(app);

  if (lock) {
    // We now have exclusive access to the application:
    // we can safely modify the widget tree for example.
    app->root()->addWidget(new Wt::WText("Something happened!"));

    // Push the changes to the browser
    app->triggerUpdate();
  }
}
Wt applies this encoding automatically to the HTML it generates (e.g. WImage and WAnchor), but you may want to use this function to encode URLs which you use in WTemplate texts.
Returns the environment information.
This method returns the environment object that was used when constructing the application. The environment provides information on the initial request, user agent, and deployment-related information.
Finalizes the application, pre-destruction.
This method is invoked by the Wt library before destruction of a new application. You may reimplement this method to do additional finalization that is not possible from the destructor (e.g. which uses virtual methods).
Finds a widget by name.
This finds a widget in the application's widget hierarchy. It does not only consider widgets in the root(), but also widgets that are placed outside this root, such as in dialogs, or other "roots" such as all the bound widgets in a widgetset application.
Grabs and returns the lock for manipulating widgets outside the event loop (deprecated).
You need to keep this lock in scope while manipulating widgets outside of the event loop. In normal cases, inside the Wt event loop, you do not need to care about it.
Event signal emitted when enter was pressed.
The application receives key events when no widget currently has focus. Otherwise, key events are handled by the widget in focus, and its ancestors.
Event signal emitted when escape was pressed.
The application receives key events when no widget currently has focus. Otherwise, key events are handled by the widget in focus, and its ancestors.
Event signal emitted when a "character" was entered.
The application receives key events when no widget currently has focus. Otherwise, key events are handled by the widget in focus, and its ancestors.
Event signal emitted when a keyboard key is pushed down.
The application receives key events when no widget currently has focus. Otherwise, key events are handled by the widget in focus, and its ancestors.
Event signal emitted when a keyboard key is released.
The application receives key events when no widget currently has focus. Otherwise, key events are handled by the widget in focus, and its ancestors.
Returns the style class set for the entire page <html>.
Initializes the application, post-construction.
This method is invoked by the Wt library after construction of a new application. You may reimplement this method to do additional initialization that is not possible from the constructor (e.g. which uses virtual methods).
Returns the current application instance.
This is the same as the global define #wApp. In a multi-threaded server, this method uses thread-specific storage to fetch the current session.
Returns the current internal path.
When the application is just created, this is equal to WEnvironment::internalPath().
The returned path is UTF8 encoded (we may fix the API to use WString in the future).
Signal which indicates that the user changes the internal path.
This signal indicates a change to the internal path, which is usually triggered by the user using the browser back/forward buttons.
The argument contains the new internal path.
The internal path is UTF8 encoded (we may fix the API to use WString in the future).
Returns whether an internal path is valid by default.
Checks if the internal path matches a given path.
Returns whether the current internalPath() starts with path (or is equal to path). You will typically use this method within a slot connected to the internalPathChanged() signal, to check that an internal path change affects the widget. It may also be useful before changing path using setInternalPath() if you do not intend to remove sub paths when the current internal path already matches path.

The path must start with a '/'.

The internal path is UTF8 encoded (we may fix the API to use WString in the future).
Returns a part of the current internal path.
This is a convenience method which returns the next folder in the internal path, after the given path.

For example, when the current internal path is "/project/z3cbc/details", this method returns "details" when called with "/project/z3cbc/" as path argument.

The path must start with a '/', and internalPathMatches() should evaluate to true for the given path. If not, an empty string is returned and an error message is logged.

The internal path is UTF8 encoded (we may fix the API to use WString in the future).
Returns whether the current internal path is valid.
Returns whether a widget is exposed in the interface.
The default implementation simply returns true, unless a modal dialog is active, in which case it returns true only for widgets that are inside the dialog.
You may want to reimplement this method if you wish to disallow events from certain widgets even when they are inserted in the widget hierachy.
Returns whether the application has quit. (deprecated)
Returns the name of the application JavaScript class.
This JavaScript class encapsulates all JavaScript methods specific to this application instance. The method is foreseen to allow multiple applications to run simultaneously on the same page in Wt::WidgtSet mode, without interfering.
Returns the layout direction.
Returns the loading indicator.
Returns the current locale.
Returns the resource object that provides localized strings.
The default value is a WMessageResourceBundle instance, which uses XML files to resolve localized strings, but you can set a custom class using setLocalizedStrings().
WString::tr() is used to create localized strings, whose localized translation is looked up through this object, using a key.
Adds an entry to the application log.
Starts a new log entry of the given type in the Wt application log file. This method returns a stream-like object to which the message may be streamed.
A typical usage would be:
wApp->log("notice") << "User " << userName << " logged in successfully.";
This would create a log entry that looks like:
[2008-Jul-13 14:01:17.817348] 16879 [/app.wt Z2gCmSxIGjLHD73L] [notice] "User bart logged in successfully."
Makes an absolute URL.
Returns an absolute URL for a given (relative url) by including the schema, hostname, and deployment path.
If url is "", then the absolute base URL is returned. This is the absolute URL at which the application is deployed, up to the last '/'.
Returns the current maximum size of a request to the application.
The returned value is the maximum request size in bytes.
The maximum request size is configured in the configuration file, see 10.2 General application settings (wt_config.xml).
Returns the message resource bundle.
The message resource bundle defines the list of external XML files that are used to lookup localized strings.
The default localizedStrings() is a WMessageResourceBundle object, and this method returns localizedStrings() upcasted to this type.
Notifies an event to the application.
This method is called by the event loop for propagating an event to the application. It provides a single point of entry for events to the application, besides the application constructor.
You may want to reimplement this method for two reasons:
In either case, you will need to call the base class implementation of notify(), as otherwise no events will be delivered to your application.
The following shows a generic template for reimplementhing this method for both managing request resources and generic exception handling.
MyApplication::notify(const WEvent& event)
{
  // Grab resources for during request handling

  try {
    WApplication::notify(event);
  } catch (MyException& exception) {
    // handle this exception in a central place
  }

  // Free resources used during request handling
}
Note that any uncaught exception throw during event handling terminates the session.
Processes UI events.
You may call this method during a long operation to:
This method starts a recursive event loop, blocking the current thread, and resumes when all pending user interface events have been processed.
Because a thread is blocked, this may affect your application scalability.
Pushes a (modal) widget onto the expose stack.
This defines a new context of widgets that are currently visible.
You might want to make sure no more events can be received from the user, by not having anything clickable, for example by displaying only text. Even better is to redirect() the user to another, static, page in conjunction with quit().
Reads a configuration property.
Tries to read a configured value for the property name. The method returns whether a value is defined for the property, and sets it to value.
Redirects the application to another location.
The client will be redirected to a new location identified by url. Use this in conjunction with quit() if you want the application to be terminated as well.
Calling redirect() does not imply quit() since it may be useful to switch between a non-secure and secure (SSL) transport connection.
Refreshes the application.
This lets the application to refresh its data, including strings from message-resource bundles. This done by propagating WWidget::refresh() through the widget hierarchy.
This method is also called when the user hits the refresh (or reload) button, if this can be caught within the current session.
The reload button may only be caught when Wt is configured so that reload should not spawn a new session. When URL rewriting is used for session tracking, this will cause an ugly session ID to be added to the URL. See 10.1 Session management (wt_config.xml) for configuring the reload behavior ("<reload-is-new-session>").
Returns the URL at which the resources are deployed.
This returns the value of the 'resources' property set in the configuration file, and may thus be a URL relative to the deployment path.
Removes a cookie.
Removes one or all meta headers.
Removes the meta header with given type and name (if it is present). If name is empty, all meta headers of the given type are removed.
Removes the HTML meta link.
Signal which indicates that too large a request was received.
The integer parameter is the request size that was received in bytes.
Loads a JavaScript library.
Loads a JavaScript library located at the URL
url. Wt keeps track of libraries (with the same URL) that already have been loaded, and will load a library only once. In addition, you may provide a
symbol which if already defined will also indicate that the library was already loaded (possibly outside of Wt when in WidgetSet mode).
This method returns
true only when the library is loaded for the first time.
JavaScript libraries may be loaded at any point in time. Any JavaScript code is deferred until the library is loaded, except for JavaScript that was defined to load before, passing
false as second parameter to doJavaScript().
Although Wt includes an off-the-shelf JQuery version (which can also be used by your own JavaScript code), you can override the one used by Wt and load another JQuery version instead, but this needs to be done using requireJQuery().
Loads a custom JQuery library.
Wt ships with a rather old version of JQuery (1.4.1) which is sufficient for its needs and is many times smaller than more recent JQuery releases (about 50% smaller).
Using this function, you can replace Wt's JQuery version with another version of JQuery.
requireJQuery("jquery/jquery-1.7.2.min.js");
"Resolves" a relative URL, taking into account internal paths. When passed a relative URL, Wt will simply prepend a sequence of "../" path elements to correct for the internal path. When passed an absolute URL (i.e. starting with '/'), the url is returned unchanged.
For URLs passed to the Wt API (of which the library knows it represents a URL) this method is called internally by the library. But it may be useful for URLs which are set e.g. inside a WTemplate.
Returns the URL at which the resources are deployed.
Returns resolveRelativeUrl(relativeResourcesUrl())
Resumes rendering of a deferred event response.
Returns the root container.
This is the top-level widget container of the application, and corresponds to entire browser window. The user interface of your application is represented by the content of this container.
The root() widget is only defined when the application manages the entire window. When deployed as a WidgetSet application, there is no root() container, and
0 is returned. Instead, use bindWidget() to bind one or more root widgets to existing HTML <div> (or other) elements on the page.
Returns the unique identifier for the current session.
The session id is a string that uniquely identifies the current session. Note that the actual contents has no particular meaning and client applications should in no way try to interpret its value.
Sets the Ajax communication method (deprecated).
This method has no effect.
Since Wt 3.1.8, a communication method that works is detected at run time. For widget set mode, cross-domain Ajax is chosen if available.
Sets a style class to the entire page <body>. The style class is added to the page's <body> element.
Sets a cookie. The cookie is made available for URLs within the application deployment path (WApplication::deploymentPath()) in the current domain. To set a proper value for domain, see also RFC 2109.
Sets a CSS theme.
This sets a WCssTheme.
Sets a style class to the entire page <html>.
Changes the internal path.
Note: since Wt 3.1.9, the actual form of the URL no longer affects relative URL resolution, since Wt now includes an HTML meta base tag.
When emitChange is true, the internalPathChanged() signal is also emitted by setting the path.
A url that includes the internal path may be obtained using bookmarkUrl().
The
internalPath must start with a '/'. In this way, you can still use normal anchors in your HTML. Internal path changes initiated in the browser to paths that do not start with a '/' are ignored.
path should be UTF-8 encoded (we may fix the API to use WString in the future).
Sets the name of the application JavaScript class.
This should be called right after construction of the application, and changing the JavaScript class is only supported for WidgetSet mode applications. The
className should be a valid JavaScript identifier, and should also be unique in a single page.
Sets the layout direction.
The default direction is LeftToRight.
This sets the language text direction, which by itself sets the default text alignment and reverses the column order of <table> elements.
In addition, Wt will take this setting into account in WTextEdit, WTableView and WTreeView (so that columns are reversed), and swap the behaviour of WWidget::setFloatSide() and WWidget::setOffsets() for RightToLeft languages.
For example:
body        .sidebar { float: right; }
body.Wt-rtl .sidebar { float: left; }
Sets the loading indicator.
The loading indicator is shown to indicate that a response from the server is pending or JavaScript is being evaluated.
The default loading indicator is a WDefaultLoadingIndicator.
When setting a new loading indicator, the previous one is deleted.
Changes the locale. The default locale is copied from the environment (WEnvironment::locale()), and this is the locale that was configured by the user in his browser preferences, and passed using an HTTP request header.
Sets the resource object that provides localized strings.
The
translator resolves localized strings within the current application locale.
The previous resource is deleted, and ownership of the new resource passes to the application.
Sets the theme.
The theme provides the look and feel of several built-in widgets, using CSS style rules. Rules for each theme are defined in the resources/themes/<theme>/ folder.
The default theme is the "default" CSS theme.
Changes the threshold for two-phase rendering.
This changes the threshold for the size of a response above which the response is rendered in two phases.
The initial value is read from the configuration file, see 10.2 General application settings (wt_config.xml).
Returns the window title.
Handles a browser unload event.
The browser unloads the application when the user navigates away or when he closes the window or tab.
When reload-is-new-session is configured, this terminates the session (since Wt 3.1.6).
Returns whether server-initiated updates are enabled.
Adds an external style sheet.
The
link is a link to a stylesheet.
The
media indicates the CSS media to which this stylesheet applies. This may be a comma separated list of media. The default value is "all" indicating all media.
This is an overloaded method for convenience, equivalent to:
useStyleSheet(Wt::WCssStyleSheet(link, media))
Conditionally adds an external style sheet.
This is an overloaded method for convenience, equivalent to:
useStyleSheet(Wt::WCssStyleSheet(link, media), condition).
If not empty,
condition is a string that is used to apply the stylesheet to specific versions of IE. Only a limited subset of the IE conditional comments syntax is supported (since these are in fact interpreted server-side instead of client-side). Examples are:
You have a Collection but you need a Java language array.
Use the Collection method toArray( ) .
If you have an ArrayList or other Collection and you need a Java language array, you can get it just by calling the Collection's toArray( ) method. With no arguments, you get an array whose type is Object[]. You can optionally provide an array argument, which is used for two purposes:
The type of the array argument determines the type of array returned.
If the array is big enough (and you can ensure that it is by allocating the array based on the Collection's size( ) method), then this array is filled and returned. If the array is not big enough, a new array is allocated instead. If you provide an array and objects in the Collection cannot be casted to this type, you get an ArrayStoreException.
Example 7-1 shows code for converting an ArrayList to an array of type Object.
import java.util.*;

/** ArrayList to array */
public class ToArray {
    public static void main(String[] args) {
        ArrayList al = new ArrayList();
        al.add("Blobbo");
        al.add("Cracked");
        al.add("Dumbo");
        // al.add(new Date());   // Don't mix and match!

        // Convert a collection to Object[], which can store objects
        // of any type.
        Object[] ol = al.toArray();
        System.out.println("Array of Object has length " + ol.length);

        // This would throw an ArrayStoreException if the line
        // "al.add(new Date())" above were uncommented.
        String[] sl = (String[]) al.toArray(new String[0]);
        System.out.println("Array of String has length " + sl.length);
    }
}
0
OK, as you can probably tell from my code, I am trying to design a program that will keep requesting a password until the right password is entered (using a Do While Loop with char instead of int).
this is my code: (this is one of my first solo pieces of coding, so go easy on me if I have done something completely wrong, as I am quite knew to this.)
#include<iostream.h>
using namespace std;

int main()
{
    char pass[6];
    pass[0] = 'p';
    pass[1] = 'a';
    pass[2] = 's';
    pass[3] = 'q';
    pass[4] = 'w';
    pass[5] = 'e';
    do {
        cout << "Enter Password:" << endl;
        cin >> pass;
    } while(pass != pass[0], pass[1], pass[2], pass[3], pass[4], pass[5]);
    return 0;
}
I am not to sure about the:
(pass!=pass[0],pass[1],pass[2],pass[3],pass[4],pass[5]);
as this was just me fiddling around with arrays etc.
When I try and compile it as it is, I get the error message:
Error E2034 cannot convert 'char' to 'char*' in function main ()
Any suggestions? :confused:
Thank you in advance :)
Let's compile the following code with ECJ compiler from Eclipse Mars.2 bundle:
import java.util.stream.*;
public class Test {
String test(Stream<?> s) {
return s.collect(Collector.of(() -> "", (a, t) -> {}, (a1, a2) -> a1));
}
}
$ java -jar org.eclipse.jdt.core_3.11.2.v20160128-0629.jar -8 -g Test.java
javap -v -p Test.class
(a, t) -> {}
private static void lambda$1(java.lang.String, java.lang.Object);
descriptor: (Ljava/lang/String;Ljava/lang/Object;)V
flags: ACC_PRIVATE, ACC_STATIC, ACC_SYNTHETIC
Code:
stack=0, locals=2, args_size=2
0: return
LineNumberTable:
line 5: 0
LocalVariableTable:
Start Length Slot Name Signature
0 1 0 a Ljava/lang/String;
0 1 1 t Ljava/lang/Object;
LocalVariableTypeTable:
Start Length Slot Name Signature
0 1 1 t !*
Note the "!*" entry in the LocalVariableTypeTable. The JVMS says: "The constant_pool entry at that index must contain a CONSTANT_Utf8_info structure (§4.4.7) representing a field signature which encodes the type of a local variable in the source program (§4.7.9.1)."
Is "!*" a valid signature for a LocalVariableTypeTable entry?
The token ! is used by ecj to encode a capture type in generic signatures. Hence !* signifies a capture of an unbounded wildcard.
Internally, ecj uses two flavours of CaptureBinding, one to implement what JLS 18.4 calls "fresh type variables", the other to implement captures a la JLS 5.1.10 (which uses the same lingo of "free type variables"). Both produce a signature using !. At a closer look, in this example we have an "old-style" capture: t has type capture#1-of ?, capturing the <T> in Stream<T>.
The problem is: JVMS 4.7.9.1. doesn't seem to define an encoding for such fresh type variables (which among other properties have no correspondence in source code and hence no name).
I couldn't get javac to emit any LocalVariableTypeTable for the lambda, so they might simply avoid answering this question.
Given that both compilers agree on inferring t to a capture, why does one compiler generate a LVTT entry while the other does not? JVMS 4.7.14 says:
This difference is only significant for variables whose type uses a type variable or parameterized type.
According to JLS, captures are fresh type variables, so an LVTT entry is significant, and it is an omission in JVMS not to specify a format for this type.
The above only describes and explains the status quo, demonstrating that no specification tells a compiler to behave differently from current status. Obviously, this is not an entirely desirable situation.
On 8/18/05, Guido van Rossum <gvanrossum at gmail. In practice, it causes much confusion if you ever use a local variable that has the same name as the built-in namespace. If you intend to use id as a variable, it leads to confusing messages when a typo or editing error accidentally removes the definition, because the name will still be defined for you. It also leads to confusion when you later want to use the builtin in the same module or function (or in the debugger). If Python defines the name, I don't want to provide a redefinition. Jeremy
I'm just trying to open a file.
I have done it 100 times, and then I sent a SIGCHLD signal to other processes, and I think that right after that I couldn't open that file anymore.
#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#define FLAGS IPC_CREAT | 0644
int main() {
int res =open("results.txt",FLAGS);
if(res== -1) { printf("error!!")} //prints it every time
return 0;}
You're doing something strange with the flags. I think your intention is as per below code:
#define FLAGS O_CREAT
#define MODE 0644

int main()
{
    int res = open("results.txt", FLAGS, MODE);
    if (res == -1) {
        printf("error!!");  // prints it every time
    }
    return 0;
}
When we developers work on a Web application, we focus mostly on the server side: the N-tier layering in its architecture, Model View Controller in its presentation layer, Data Transfer Objects across layers, various design patterns, database organization, etc. Then we focus on the client or browser side where the presentation layer is rendered: CSS, HTML, JavaScript, etc. In this Web application development process, the thrust is primarily on the server side, and occasionally we do some tweaking in JavaScript or CSS to achieve some presentation behavior on the browser screen. Nowadays Microsoft has helped us in doing most of the development on the server side itself and within the Visual Studio IDE, and this is especially true if we use server controls in an ASP.NET page.
But sometimes it takes pretty long to detect some issues, as detection requires insight into the overall programming model in the context of a Web application; this includes the World Wide Web infrastructure. I call it infrastructure because the Web application always assumes this WWW backbone. So what is this overall programming model in the context of a modern Web application? We will find an answer in this article.
Once I started thinking on this issue in the past, I came across some wonderful articles in the Web development series by Sean Ewington (Beginner's Walk – Web Development in Code Project), and clearly in this series there is a concerted approach to demystify what is going on inside a Web application by studying its several ingredients of development, for example JavaScript, HTML, CSS, and ASP.NET with its associated state management techniques. Now that we are talking about this Web application development, things are getting a little more complicated, as user expectations are increasing and the technology landscape is not stationary; it is evolving based on demand. For example, today AJAX has appeared in this Web application domain and broadband communication technology is prevalent. So I thought it would be good to combine all these technologies together and present the whole picture: how the whole thing works, what constraints we need to satisfy, and finally an inside dip into an ASP.NET development environment.
In order to understand a Web application better, we need to differentiate it from a Desktop application. This differentiation will help us in understanding the constraints in a Web application.
Desktop application is a stand alone application, say for example Microsoft Word. We need to install it on our computer. Occasionally we might use the internet to download for updates, but the code that runs these applications resides entirely on our Desktop PC.
A Web application, on the other hand, runs on a Web server somewhere and we access the application with our Web browser over the internet. It is always updated on the server, and when we use it in the client browser we always get the updated version, as for example in Google Mail.
In a Web application there are waits (although things are changing with the power of internet bandwidth and asynchronous requests made through AJAX, which is described later): waiting for the server to respond, waiting for a request to come back with a response through the internet, and waiting for the screen to refresh with the page data on the browser. This wait is called the latency between a request and a response.
Desktop applications do not have to depend on something like HTTP (described later), so application state can easily be managed. Additionally, a stand-alone Desktop application usually maintains a connection to the database server, which can be on the same Desktop PC or on a database server connected through a LAN (Local Area Network).
On the other hand, in a Web application, the users fill out form fields and click a 'Submit' button. Then, the entire form is sent to the Web server, the server delegates processing to an engine based on the extension of the page, and when the processing is done, it sends back a completely new page. The new page might be HTML with a new form with some data filled in or it might be verification with its results or possibly a page with certain options selected based on data entered in the original form. Of course, while the script or program on the server is processing and returning a new form, the users have to wait. Our screen will go blank and then be redrawn as data comes back from the server. Here lies the difference in experience. Here, the users don't get instant feedback as is usually observed in a Desktop application.
Ajax attempts to minimize the gap between the interactivity (which is called as user experience) of a Desktop application and the Web application.
Another distinct difference should be noted from the security standpoint: a Web application running in a browser should not get direct hardware access or direct OS access without the required permissions and specific plug-ins like the Adobe Flash player. This is called Sandbox security in a browser; it keeps HTML rendering and JavaScript execution within the browser in isolation from the client OS. Sandboxing is a generic security term referring to a limited-privilege application execution environment. Additionally, again for security reasons, browsers do not allow scripted calls to URLs located outside the domain of the current page, so there are some restrictions.
The following deployment diagram shows a Web application deployed on a Web server, with the database on a separate server connected through a LAN. Clients access this Web application through browsers on laptops, desktops, mobiles, etc. In this simplistic diagram, note the differences in technologies on the client platform as well as within the WWW, including the communication media. Here the WWW is not just the components outside the client and server; it also standardizes what flows through the wire, at least at the application level protocol (in this case HTTP) between client and server.
Now let us have a look at the WWW infrastructure. When we talk about the World Wide Web (WWW) infrastructure for a Web application, the interesting model that comes out on top is REST. This is explained in detail, in his great PhD thesis, by none other than Roy T. Fielding (dissertation), where he gives an architectural model for the World Wide Web.
In his thesis, he mentions that the World Wide Web is really vast and its scale is beyond imagination. It has pervaded and penetrated every corner of the world (I think it is only next to the Ether of Physics, "the imponderable elastic medium" of radio communication). This full potential has been achieved only because of a universal method of electronic communication and a standard naming system. The WWW also has several constraints to ensure scalability. Fielding translated these requirements into an architectural style known as Representational State Transfer (REST).
In REST, the most important element is the Resource. A Resource in REST is identified by a URL (which he refers to as a URI) and can be an application like BookCrossing, a Word document, a PDF document, an HTML document, an ASP.NET page, an image, etc.
The key characteristic of the REST model is loose coupling between its member components. REST defines stateless collaboration among those components. Fielding explained that this constraint is a must: when communicating over a large electronic network, we should not take certain things for granted. Often, the amount of time between request and response is relatively long, servers may get restarted, and clients may drop out in the middle of a conversation. For a server to keep track of all these states is extremely difficult and definitely not scalable when we consider the magnitude of internet connections on the World Wide Web.
So as per this standardization, each request in the REST model is expected to be self-contained. Thus, servers do not need to know where it came from, who the client is, or what his/her previous request was. All they do is respond to requests as they come, without keeping any continuity of state.
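A toy sketch of that constraint (the request fields items, page, and pageSize are invented for illustration): because every request carries all the state the server needs, the answer depends only on the request itself, never on earlier traffic, which is what lets any server replica, or a freshly restarted server, handle it.

```javascript
// A stateless handler: everything it needs arrives in the request itself.
// (Hypothetical field names -- any self-describing request format works.)
function handleRequest(request) {
  // No session lookup, no memory of earlier requests:
  // the page number and page size travel with every call.
  const { items, page, pageSize } = request;
  const start = page * pageSize;
  return { status: 200, body: items.slice(start, start + pageSize) };
}

const catalog = ["a", "b", "c", "d", "e"];

// The same request gives the same answer no matter what happened before.
const r1 = handleRequest({ items: catalog, page: 1, pageSize: 2 });
const r2 = handleRequest({ items: catalog, page: 1, pageSize: 2 });
console.log(r1.body); // ["c", "d"]
```

This is only a model of the idea, not server code; in a real Web application the "state in the request" typically travels as URL parameters, cookies, or hidden form fields.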
Another feature of World Wide Web (as it is consisting of large electronic networks for example, wireless communication, fiber optics and the old dial-ups to name a few) is that technology changes rapidly so it is important that the WWW is able to work independent of the underlying details.
REST is layered. In a layered system, each component layer behavior is such that each layer component cannot "see" beyond the immediate layer with which it is interacting. In REST, layering promotes independence through encapsulation.
In terms of REST, a well-designed Web application running on WWW should behave as a network of web pages (a virtual state-machine), where the user progresses through the application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use.
As explained in REST, on the World Wide Web the component interface has been designed to be efficient for large-grain hypermedia data transfer; it is simple but not necessarily optimal, especially when we apply it to an interactive Web application.
In REST, an architect uses the option by sending raw data to the recipient along with metadata that describes the data type, so that the recipient (in Web application usually the browser) can choose her/his own rendering engine.
Another constraint set in the REST model comes from the Code-on-Demand style. REST allows client functionality to be extended by downloading and executing code in the form of scripts. Since the script (JavaScript) is in text form, visibility is not impaired (think of different client platforms, for example Windows, Linux, Mac, etc.; think of firewalls for security). Also note that this part, i.e. text/HTML/JavaScript, is interpreted and not compiled before executing in the browser environment.
In REST architecture, the primary connector types are client and server. The essential difference between the two is that a client initiates communication by making a request, whereas a server listens for connection requests and responds by opening its port in order to supply access to its services.
A user agent uses a client connector to initiate a request and becomes the ultimate recipient of the response. The most common example is a Web browser, which provides access to information services provided by servers and renders service responses according to the application needs.
Web server uses a server connector to service a requested resource.
The following figure (taken from the PhD thesis) demonstrates the process view architecture of a REST-based WWW with a Web application running on top.
There are three different scenarios: a, b, and c.
In all of them, client requests were not satisfied by the user agent’s client connector cache, so each request has been routed to the resource origin according to the properties of each resource identifier and the configuration of the client connector.
Request (a) has been sent to a local proxy, which in turn accesses a caching gateway found by DNS lookup, which forwards the request on to be satisfied by an origin Server.
Request (b) is sent directly to an origin server, which is able to satisfy the request from its own cache.
Request (c) is sent to a proxy that is capable of directly accessing WAIS, an information service that is separate from the Web architecture, and translating the WAIS response into a format recognized by the generic connector interface.
For network-based applications, system performance is dominated by network communication. For a distributed hypermedia system (which was the original target application on WWW), component interactions consist of large-grain data transfers rather than computation-intensive tasks. The REST model of WWW is developed in response to those needs. Its focus upon the generic connector interface of resources and representations has enabled intermediate processing, caching, and substitutability of components. This in turn has allowed Web-based applications on WWW to scale from 100,000 requests per day in 1994 to 600,000,000 requests per day in 1999 and still growing (can anyone say where it stands today?).
Now let us take a look at ASP.NET in the context of the Web application model just discussed. With Microsoft ASP.NET technology, ASP.NET files are just text files placed on the Web server (Microsoft IIS), and a request URL (a resource link entered or keyed into the browser) points to those files. They are like any other Resource file as depicted in REST. When a request comes in for an ASP.NET page, the server will locate the requested file and ask the ASP.NET engine to serve the request. The ASP.NET engine will process the server tags, generate HTML for the page, and return it to the client.
Now take a look at what AJAX has brought in terms of changes to the architectural model. The following diagram shows the difference between a classic Web application and an AJAX application. Please note that the XML data shown in the diagram could be any text-based data, like JSON.
An AJAX application uses a lot of small requests for data, rather than re-requesting the whole page. This allows AJAX applications to be more responsive, as only the new additional delta data need to be transferred between client and server. People who criticize AJAX say it requires more bandwidth, but it does not. Let me explain: remember the Just In Time design principle. When a page is opened, the user may not be interested in the whole page data; rather, his interest is localized, and the user interface has been designed keeping this in mind (a specific tree node, a panel within a window, etc.). So in this case we can populate the page initially with optimum information and then let the user demand more.
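The delta idea can be sketched as follows (the page-model fields here are invented for illustration): the AJAX client merges a small delta into the model it already holds, ending in the same state a full re-send would produce, at a fraction of the payload.

```javascript
// Client-side page model as last rendered.
let pageModel = { title: "Inbox", unread: 12, selectedFolder: "inbox" };

// Classic model: the server re-sends the entire page model on every interaction.
function fullResponse(model) { return { ...model }; }

// Ajax model: the server sends only the delta; the client merges it.
function applyDelta(model, delta) { return { ...model, ...delta }; }

const delta = { unread: 11 };                 // one message was read
const merged = applyDelta(pageModel, delta);

// Both paths end in the same UI state...
const full = fullResponse({ ...pageModel, unread: 11 });
console.log(JSON.stringify(merged) === JSON.stringify(full)); // true

// ...but the delta payload is much smaller than the full page model.
console.log(JSON.stringify(delta).length < JSON.stringify(full).length); // true
```

The same trade-off holds whether the wire format is XML, JSON, or HTML fragments: the saving comes from transferring only what changed.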
(Note – This diagram is taken from ‘Making Dizzy shine with Ajax’ project report by Mike Arthur and the diagram is copyright to Adaptive Path, LLC 2005.)
As shown in the diagram, theoretically there can be any number of connections between client and server working asynchronously which may be overlapped but there are limitations imposed by browsers. For example, Internet Explorer allows up to 2 connections to work simultaneously with any server.
As per Microsoft, Internet Explorer follows RFC2616 which states: “Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.”
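A common client-side answer to this two-connection budget is a small request queue that never starts more than two requests at once. The sketch below models only the bookkeeping, with no real networking; request names and the API are made up for illustration.

```javascript
// Queue that never lets more than `limit` requests be "in flight" at once --
// the same per-server budget RFC 2616 suggests.
function makeRequestQueue(limit) {
  const pending = [];
  let inFlight = 0;
  const started = [];            // order in which requests actually start

  function tryStart() {
    while (inFlight < limit && pending.length > 0) {
      const name = pending.shift();
      inFlight++;
      started.push(name);
    }
  }
  return {
    enqueue(name) { pending.push(name); tryStart(); },
    complete()    { inFlight--; tryStart(); },   // a response came back
    started,
    get inFlight() { return inFlight; },
  };
}

const q = makeRequestQueue(2);
["a", "b", "c", "d"].forEach(n => q.enqueue(n));
console.log(q.started);   // ["a", "b"] -- c and d wait for a free slot
q.complete();             // "a" finishes, freeing a connection
console.log(q.started);   // ["a", "b", "c"]
```

In a real AJAX library the `complete()` call would be wired to the response callback of each request.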
In this simplistic view (without going into details like the IIS architecture, the Internet Explorer (browser) architecture, the TCP communication stack, etc.), the left side is the server world and the right side is the client world, connected through the internet and governed by WWW infrastructure standards; HTTP is the application protocol between this client and server. The figure above illustrates both classic ASP.NET and ASP.NET AJAX with partial rendering. We can extend our visualization further as if both the left and right worlds were object oriented worlds as per the object oriented paradigm (OOP): objects are serialized and de-serialized during each client-server interaction, and only text/HTML/JavaScript streams flow through the internet conduit. Alternatively, if we are more comfortable with messaging-based interactions, we can imagine that client and server are connected over the HTTP wire with JSON/XML as the messages being exchanged.
Here both client and server provide event driven programming environment. Developer needs to intercept server side and/or client side events to do custom coding for custom behaviors or actions.
Let us have a look at the client side first. When we consider a Web application on a browser, we come across two types of object models:
As a result, the DOM, which represents the HTML document, is dynamic in nature; it keeps changing during web page round trips between client and server. There are content changes, color changes, and other attribute changes like selection changes; some controls may disappear and some new controls get added, etc., based on user interactions with the page. When we combine these behavior changes with server trips, say during post-backs (the request and response happen on the same page), we can imagine DOM objects being serialized as text streams and passing through the Internet pipe between client and server in both directions. On receiving the response, the client browser de-serializes again and reconstructs the DOM for new presentation and further actions.
The browser programming environment is JavaScript: for historical reasons it started with the Netscape browser, and now, as a standard, all browsers have a built-in JavaScript interpreter.
JavaScript is primarily used to embed functions in HTML pages that glue together a user's actions with DOM elements in the page. We need to be careful when using client side events while accessing DOM elements because of the dynamic nature of the DOM. Unless we are careful, we may come across an 'Object expected' error in JavaScript in some event handler, say 'onload', whereas the same DOM element works fine and is valid in an 'onclick' event handler. Because of the top-down nature of interpretation, it is also important where we place the JavaScript call during load: at the beginning of the form (when the DOM is not yet built) or at the end of the form (when the DOM is already built).
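A stripped-down model of that timing issue, using a plain object as a stand-in for the real browser document: an element lookup that runs before the parser has reached the element finds nothing, which is exactly the 'Object expected' situation in a script placed too early in the page.

```javascript
// Tiny stand-in for the browser document while the parser works top-down.
const doc = {
  elements: {},
  getElementById(id) { return this.elements[id] || null; },
};

// A script placed at the top of the page runs before the DOM below it exists:
const early = doc.getElementById("loginForm");   // null -> later use fails

// ...the parser then reaches <form id="loginForm"> and registers it:
doc.elements["loginForm"] = { id: "loginForm", tag: "form" };

// A script placed at the end of the body (or run from onload) sees the element:
const late = doc.getElementById("loginForm");
console.log(early);    // null
console.log(late.id);  // "loginForm"
```

The element id "loginForm" is hypothetical; the point is only the ordering between parsing and script execution.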
There is no direct interface like remote procedure call (RPC) between server and client side applications as per the constraints imposed by WWW requirements and defined in REST model – the only common thing here is a stream of text which flows through Internet. This text stream contains both data (in the form of HTML) and program (i.e. JavaScript).
Using JavaScript, we can programmatically rename, edit, add, or delete elements in the displayed document and handle any events fired by such elements essentially using Document Object Model (DOM). In addition, we can perform any browser-specific actions, such as opening or popping up a new window or—what makes AJAX so special—invoke the popular XMLHttpRequest object and place asynchronous calls to a remote URL on Web server.
XMLHttpRequest
While JavaScript is a powerful language, putting complex application logic on the client can take a lot of time and effort. ASP.NET AJAX allows us to move some parts of the application processing to the client while using partial page rendering we can use server side event model and affect changes in the page behavior on client browser.
When a request arrives to the web server for an ASP.NET page, the runtime creates an instance of the page's code-behind class and invokes its ProcessRequest method, which starts the server side ASP.NET page lifecycle and, ultimately, generates the page's content, which is returned to the client in text/HTML/JavaScript form.
ProcessRequest
When we combine AJAX with this ASP.NET programming model, then we find there are two variations and they are partial rendering and remote service calls.
Partial rendering with AJAX follows model like a classic ASP.NET 2.0 application. On top of this, it provides a set of new server-side controls like update panel that we can use to implement flicker-free page updates.
Remote services, on the other hand, involve a service-oriented approach where the backend service is invoked by an AJAX front end. This latter approach provides more complete smooth user experience than the first one i.e. partial rendering model.
From an architectural viewpoint, partial rendering doesn't add anything new. It enhances existing ASP.NET applications with some AJAX capabilities—the most important of which is flicker-free page updates.
A partial rendering request is often referred to as an AJAX post-back. An AJAX post-back is like a classic ASP.NET post-back, except that page URL is called by a piece of script code defined in the client-side ASP.NET AJAX library (AJAX engine as shown in figure earlier).
Once on the server, the request goes through the typical lifecycle of post-back requests and raises such events as Init, Load, and Pre-Render. On the server, an AJAX post-back differs from a classic ASP.NET post-back only in the algorithm it uses to render the final markup. This different algorithm is the key to the improved performance and absence of page flickering.
Please note that origin of flickering is latency between request and response. The human eye can detect changes if this delay is more than 20 msec. AJAX eliminates this latency by using asynchronous request and response without the user being aware of this. Additionally since only a portion of the document is being changed each time, even the local processing involved during each interaction on client is significantly reduced and it results in smoother screen transition and rich user experience.
ASP.NET takes a set of files that contain code and markup and generates a Page class that is then compiled and cached.
Page
For each request to the page, the class is instantiated and a complete page lifecycle is followed and a set of events are executed on the server. Some of these events are usually overridden by us in the generated page class through our coding to have customized set of actions and behaviors. Controls in the page also participate in the lifecycle, data-binding to backend databases, reacting to user input, and dealing with changes to their state from the user’s previous view.
For example, the button control exposes a click event. When using it, we don't need to write code to examine all form variables on a page to know if the button was clicked. Instead, we just provide code for this ‘button_Click’ event handler. The event handler code can then update the HTML for the page or the properties and data of other controls on the page.
button_Click
An ASP.NET AJAX page must include one instance of the ScriptManager control. This control is the real nerve center of ASP.NET AJAX pages. It takes care of linking the page to any required framework script files and coordinates partial rendering if it detects that an AJAX post-back is occurring. The ScriptManager control checks an HTTP header in the request to determine whether the request is an AJAX post-back. When we talk about Microsoft AJAX library – there are two main files involved: MicrosoftAjax.js and MicrosoftAjaxWebForms.js.
ScriptManager
MicrosoftAjax.js defines language extensions supported by the Microsoft AJAX Library including namespaces, interfaces, enums, and inheritance. MicrosoftAjaxWebForms.js defines the partial rendering engine and the whole communication network stack.
In an ASP.NET AJAX application, we don't need to reference or embed any of these two files directly. This is one of the tasks for the ScriptManager which manages the download of proper JavaScript files and client-side data, including the AJAX library, proxy classes for remote services, localized version of script files, and globalization data.
Hyper Text Transfer Protocol (HTTP) is an application level protocol which is used in the "World Wide Web (WWW)". It is a request/response style protocol. Clients (Browsers) will request to a Server (Web Server) and the Server responds to these requests. HTTP uses TCP/IP.
In the response message HTTP contains status codes), etc.
In HTTP, requests are directed to resources using a generic interface with standard semantics that can be interpreted by intermediaries in WWW as well as by the machines that originate services. The result is an infrastructure that allows for layers of transformation and indirection that are independent of the information origin and this helps an Internet-scale, multi-vendor, scalable information system. To understand HTTP, let us try to translate into HTTP a real life scenario. Consider the following (taken from ‘An Overview of REST’ by Alan Trick):
[Contents of the milk]
Resources in HTTP are nouns; the verb in HTTP is a method. Two common methods are GET and POST.
GET
There were two pieces of meta-data in the example above.. The response contains a statement that it is sending back 1 percent milk.
The ‘content’ on the web is often an HTML page or another electronic format like a GIF image. It may also contain links to other resources on the Web.
Although early HTTP based single request/response per connection behavior made for simple implementations, it resulted in inefficient use of the underlying TCP transport due to the overhead of per-interaction set-up costs.
To handle this, the Web architects adopted a form of persistent connections, which uses length-delimited messages in order to send multiple HTTP messages on a single connection. For HTTP/1.0, this was done using the "keep-alive" directive within the connection header field. HTTP/1.1 eventually settled on making persistent connections the default, thus signaling its presence via the HTTP-version value, and only using the connection-directive ‘close’ to change the default.
As shown in the REST diagram, proxy lies between client and server. A client connects to the proxy, requesting some service, such as a file, connection, web page, or other resource, available from a different server. The proxy evaluates the request according to its filtering rules. If the request is validated by the filter, the proxy provides the resource by connecting to the relevant server and requesting the service on behalf of the client. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost, while significantly increasing performance..
On an IP network, clients should automatically send IP packets with a destination outside a given subnet mask to a network gateway. A subnet mask defines the IP range of a network. For example, if a network has a base IP address of 192.168.0.0 and has a subnet mask of 255.255.255.0, then any data going to an IP address outside of 192.168.0.X will be sent to that network's gateway. On a Windows computer, this gateway feature is achieved by sharing the internet connection on that desktop..
Ajax is shorthand for Asynchronous JavaScript and XML. AJAX as has been discussed in this article has more emphasis on user experience, i.e. responsiveness of application with user actions and it consists of HTML, JavaScript technology, DHTML, and DOM.
JavaScript Object Notation (JSON) in its simplest form allows us to transform a set of data represented in a JavaScript object into a string. It is more compact in notation than XML.
string
Postback is a mechanism of communication between client-side (browser) and server-side (IIS) of a Web application. Through postback, all contents of page/form(s) sent to the server from client for processing and after following page life cycle all server side contents get rendered and client (browser) displays that content. It is also termed as round-trips for the page.
Understanding architectural principles of underlying WWW infrastructure using REST model helps us in understanding the constraints on ASP.NET Web application with or without AJAX. Combine this with Microsoft Event models both in client as well as in server side and we get the complete picture as what is going on at least conceptually. Note that ASP.NET AJAX based Web application by itself may not be RESTful as such and now various efforts are underway on this but these discussions are not the subject matter of this article.
Based on your feedback for this article I am planning to write a sequel where I would demonstrate with an example ASP.NET project, how the changes on the server side code based on some event goes through wire and affects changes in the behavior on the client side. There I will use a helper tool named ‘Fiddler’ which is available free to download from Microsoft. Thank you for reading this article. Please send. | http://www.codeproject.com/Articles/37504/A-Note-on-Web-application-with-Reference-to-ASP-NE?msg=3168975 | CC-MAIN-2014-42 | refinedweb | 4,762 | 51.99 |
Login is not possible
Bug Description
* Impact:
connecting to raring vsftpd servers doesn't work
* Test Case:
- install vsftpd on raring, configure the server, try to connect to it
* Regression potential:
the server was failing to accept connections before so should only be better
---
I'm using Ubuntu 13.04 dev with vsftpd 3.0.2-1ubuntu1. local_enable and write_enable are set to YES but I'm not able to login:
sworddragon@
Connected to localhost.
220 (vsFTPd 3.0.2)
Name (localhost:
331 Please specify the password.
530 Login incorrect.
Login failed.
/var/log/vsftpd.log contains:
Thu Mar 21 09:00:33 2013 [pid 2] CONNECT: Client "127.0.0.1"
Thu Mar 21 09:00:48 2013 [pid 1] [sworddragon] FAIL LOGIN: Client "127.0.0.1"
/var/log/auth.log has created a line for vsftpd too:
Mar 21 12:18:29 localhost vsftpd: PAM audit_log_
Related branches
- Sebastien Bacher: Approve on 2013-05-16
- Ubuntu branches: Pending requested 2013-05-08
- Diff: 64 lines (+44/-0)3 files modifieddebian/changelog (+8/-0)
debian/patches/13-disable-clone-newpid.patch (+35/-0)
debian/patches/series (+1/-0)
P.S. I'm using 12.3 released version, 64 bit. This is no longer a development version issue.
(In reply to comment #30)
> A Linux server with no working FTP server is a real black eye!
Until this is fixed an easy workaround for this "black-eye" is to use pure-ftpd instead which works just fine and is functional equivalent in (almost) all practical sense to vsftpd
changed summary to match the current problem
I am facing the same problem with OpenSuSE 12.3 64bit, network install.
Pure-ftpd is reported (OpenSuSE forums) to work only if pam athentication is disabled (and local authentication enabled) in the pure-ftpd configuration.
(In reply to comment #35)
> Pure-ftpd is reported (OpenSuSE forums) to work only if pam athentication is
> disabled (and local authentication enabled) in the pure-ftpd configuration.
Strange, I'm using pure-ftpd (SuSE 12.3) with configuration
PAMAuthentication yes
and this works just fine (but vsftpd does not).
When I tried it personally, it refused to start. I will check one more time and repost.
Status changed to 'Confirmed' because the bug affects multiple users.
After upgrade from quantal to current raring I have the same problem too.
Ubuntu bug on this also: https:/
The issue is occurring because it seems vsftp has changed it's pid namespace.
Probably from sysdeputil.
"syscall(
There is a specific prohibition in the kernel on this:
-------
commit 34e36d8ecbd958b
Author: Eric W. Biederman <email address hidden>
Date: Mon Sep 10 23:20:20 2012 -0700
audit: Limit audit requests to processes in the initial pid and user namespaces.
This allows the code to safely make the assumption that all of the
uids gids and pids that need to be send in audit messages are in the
initial namespaces.
If someone cares we may lift this restriction someday but start with
limiting access so at least the code is always correct.
-------
Regarding audit=0. I imagine it would solve the issue, rather extreme. Also if I boot with audit=0 then client side ftp fails with "500 OOPS: priv_sock_get_cmd" (seccomp_sandbox=NO in /etc/vsftpd.conf).
Can you verify if the above vsftp codepath is indeed being executed and see what happens if VSF_SYSDEP_
vsftpd calls CLONE_NEWPID on SUSE - it is visible in #comment11 (see vsftpd[1]).
> Also if I boot with audit=0 then client side ftp fails with "500 OOPS:
> priv_sock_get_cmd" (seccomp_sandbox=NO in /etc/vsftpd.conf).
This does not makes any sense to me. This bug is related to enabled seccomp sanbox, but it was fixed before 12.3 release. I'll test that.
> Can you verify if the above vsftp codepath is indeed being executed and see
> what happens if VSF_SYSDEP_
With a traditional fork pam session can be opened, however next test - an attempt to download the file dies on a seccomp sanbox. The same apply for a clone w/o NEW_PID, where an audit error is different. I will track this in an another bug to not pollute this one with third issue.
lowering a priority of this issue, patch is in home:mvyskocil:
https:/
https:/
Well, I have a question now.
Will the system be updated to run VSFTPD correctly or I have to apply the patch manually?
(In reply to comment #41)
> Well, I have a question now.
>
> Will the system be updated to run VSFTPD correctly or I have to apply the patch
> manually?
There will be a maintenance update, once all issues will be resolved.
A pal spotted this bug report and suggests "[this] is caused by vsftp switching pid namespaces (audit kernel code prohibits)". Hope this helps.
This is an autogenerated message for OBS integration:
This bug (786024) was mentioned in
https:/
This is an autogenerated message for OBS integration:
This bug (786024) was mentioned in
https:/
Sent an update to 12.3 via 162608
@maintenance, please open a new maintenance incident
accepted
Hi all,
I see that the update is accepted but not yet released.
Is there an ETA on the update?
Perhaps a testing repo for the update to see if it works?
Cheers,
Angelos
Thanks Markus,
I installed the test-update repository and vsftp from there.
I get the following error:
Any ideas?
Update:
I flushed everything from my server, even the yast-ftp module.
Then I installed vsftp from test-update and it works.
Now I am having issue with Extended Passive Mode that seems to be enabled by default.
I reinstalled yast-ftp module and I get the 500 error as above.
I have the same problem too. Both anonymous user and local user are unable to login.
Update2:
I flushed again everything but did not manage to get it working again.
The log message when I run "service vsftpd status" shows login success, but the client reports error 500 and closes connection.
?
(In reply to comment #52)
> ?
Hello Angelos :)
Yes I tried again, it needs to start through xinetd or it will not start on its own (standalone). I can't say I like it, but I will live until we get the official update for vsftpd through official repos, which I am waiting for very patiantly...
Let's hope it doesn't take forever..
Guys the limitations of open source are showing in this case.. I know it's unfair, but the reaction I am gettinig in my enterprise is surprise and dissappointment. We are definately not winning over any business people like that.
Personally, I am keeping a low profile till this is resolved.
openSUSE-
Category: recommended (moderate)
Bug References: 786024,812406
CVE References:
Sources used:
openSUSE 12.3 (src): vsftpd-3.0.2-4.5.1
Unfortunately the update did not work for me.
I still get the "500 OOPS: priv_sock_get_cmd" error.
Disabling seccomp sandbox is not working for me either...
Same problem. Anonymous works though! Reinstalled entire system twice (quantal) and upgraded (do-release-upgrade -d) to raring. Bug occured both times.
(In reply to comment #55)
> Unfortunately the update did not work for me.
> I still get the "500 OOPS: priv_sock_get_cmd" error.
> Disabling seccomp sandbox is not working for me either...
Well, without a providing any more information I cannot help you much. Would you be so kind to open a new bug?
I would need to explain
what are you try to do - do you see that with (non)-anonymous download? How your vsftpd.conf look like? Does grep 'vsftpd' /var/log/messages says anything usefull?
BTW: the output of strace -tt -s 512 of vsftpd daemon.
Created an attachment (id=535776)
configuration file that fails
# grep 'vsftpd' /var/log/messages
Apr 18 12:38:49 aiolos xinetd[23286]: Reading included configuration file: /etc/xinetd.
Apr 18 12:39:03 aiolos xinetd[23660]: Reading included configuration file: /etc/xinetd.
Thanks,
Angelos
And the strace:
# strace -p 23677 -tt -s 512
Process 23677 attached
12:51:03.048164 accept(3, {sa_family=AF_INET, sin_port=
12:51:12.678545 clone(child_
12:51:12.678783 close(4) = 0
12:51:12.678855 accept(3, 0x7fffba89a3a0, [28]) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
12:51:16.044845 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=23929, si_status=2, si_utime=0, si_stime=0} ---
12:51:16.044914 alarm(1) = 0
12:51:16.044968 rt_sigreturn() = -1 EINTR (Interrupted system call)
12:51:16.045047 alarm(0) = 1
12:51:16.045095 wait4(-1, NULL, WNOHANG, NULL) = 23929
12:51:16.045173 wait4(-1, NULL, WNOHANG, NULL) = -1 ECHILD (No child processes)
12:51:16.045224 accept(3, {sa_family=AF_INET, sin_port=
12:51:16.083371 clone(child_
12:51:16.083620 close(4) = 0
12:51:16.083690 accept(3, 0x7fffba89a3a0, [28]) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
12:51:25.264770 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=23936, si_status=2, si_utime=0, si_stime=0} ---
12:51:25.264834 alarm(1) = 0
12:51:25.264882 rt_sigreturn() = -1 EINTR (Interrupted system call)
12:51:25.264936 alarm(0) = 1
12:51:25.264977 wait4(-1, NULL, WNOHANG, NULL) = 23936
12:51:25.265053 wait4(-1, NULL, WNOHANG, NULL) = -1 ECHILD (No child processes)
12:51:25.265099 accept(3, {sa_family=AF_INET, sin_port=
12:51:25.302455 clone(child_
12:51:25.302684 close(4) = 0
12:51:25.302754 accept(3, ^CProcess 23677 detached
<detached ...>
(In reply to comment #58)
>
Add
allow_
to the bottom of your /etc/vsftpd.conf file.
Thanks, it is working locally now.
I still cannot access from remote location (error while changing to /home/user)
Looking into it.
Thanks,
Angelos
My story:
I've done several installs of 12.3. My latest, I tried when installed to start vsftpd from YaST. It would not start, as usual, with the message that for run levels 3, 5, network-remotefs had to be installed (we all know by now there is no run lever 3 or 5 with systemd ??) I tried again a couple of days ago...same thing. I keep installing all the updates so decided last night to attemp to start vsftpd again from YaST only to discover it was running! I was able to connect from another machine! I don't know which fix did it but it seems to have healed itself in some of the updates that have been released.
Many thanks to the team working on this (and other) issues. If we get these basic things working 12.3 has potential to be the best since 11.4. KDE4.10.2 is VERY nice! Awesome!
Same here. Please fix!
I am also affected by this bug after upgrading to 13.04 :(
Me too, 13.04 upgrade has caused vsftp to stop working with precisely the same symptoms:
auth.log:
Apr 26 10:36:29 ftpserv vsftpd: PAM audit_log_
Same here :(
I have serious problem because of this bug!
PAM unable to dlopen(
PAM adding faulty module: pam_ecryptfs.so
pam_unix(
pam_unix(
pam_winbind(
pam_winbind(
PAM audit_log_
Same here, seems to be a kernel issue. It still works with 3.5.x kernel.
SuSE's fix is here https:/
I just rebuilt 3.0.2-1ubuntu1 with their patch, vsftpd works fine now.
I began experiencing this problem after upgrading to Kubuntu 13.04 (from 12.10) yesterday. For now, I have removed vsftpd and installed pure-ftpd. That is working fine for my needs at the moment.
Exact same issue after upgrading from 12.10 to 13.04, vsftpd is now unusable.
Can confirm what Jürgen Kreileder (jk) said in comment #18.
Building vsftpd 3.0.2-1ubuntu1 with the changes in vsftpd-
I basically used this guide if anyone else want to try: http://
And I tried with a fresh install so it isn't just upgrades that are affected (ref comment #1).
Same issue. Made school very difficult today when my paper was due and I could log in. hahaha. Please fix!!
the open suse link refered to above:
https:/
links to these:
https:/
https:/
Hi all,
I compiled the vsftpd package
Here is the patched vsftpd version in 32bits arch.
Hi all,
Previous messages were sent too fast and I didn't find a way to remove them.
I posted the patched version of vsftpd in both 34 and 32 bits arch : please feel free to download.
Don't forget to remove the previous installed version on your system or dpkg will tell you that the package is already installed :
sudo apt-get remove vsftpd
sudo dpkg -i vsftpd_patched.deb
That's all, and it doesn't remove config files.
If you prefer to compile your own version, here is the procedure :
mkdir vsftpd-patched
cd vsftpd-patched
sudo apt-get build-dep vsftpd
sudo apt-get install fakeroot
apt-get source vsftpd
--> Go on https:/
patch -p0 < vsftpd-
cd vsftpd-3.0.2/
dpkg-buildpackage -us -uc -nc
cd ../
You'll get the compiled .deb in the directory.
Remove previous installed version of vsftpd on your system and install the brand new patched one.
sudo apt-get remove vsftpd
sudo dpkg -i vsftpd_patched.deb
You can remove the directory where you built the package after installation.
Note : you need to build on a 64 bits arch to get a 64bits version of the package and a 32 bits arch for 32bits one.
I used VM for this.
I've tested Vincents 64bit patch. Confirmed fixed.
Same here on a 64bits server install. Merci Vincent
Thanks that patch worked for me too!
Muchas gracias Vincent, fuciono para mi, me salvaste la vida :D
---
Thank you very much Vincent, worked for me, saved my life :D
Test 64 patch.
Also confirming that Vincent DAVY patched vsftpd package fixes the issue.
We need a newer vsftpd on the repository as soon as possible, who knows how many people is having the same problem but haven't found this bug report, I struggled until I found this.
I just lost hair over non responsive vsftpd on freshly updated 1304 server till I came here too.
I was having trouble with my ssh setup so I thought I'd do a quick install of ftp to transfer some keys .. LOL how wrong I was...
Probably everyone who had the unfortunate idea to upgrade to ubuntu 13.04 in the recent days can't use vsftpd anymore.
First I get the error "ubuntu vsftpd: PAM unable to dlopen(
and then "ubuntu vsftpd: PAM audit_log_
details here: http://
Excuseme, how can i use the patch (#18)?
how can i compile it?
Thanks for response.
Confirm that the version in #25 is working for me too, many thanks!
Confirmed that version in #26 is working in Lubuntu 13.04.
Thanks Vincent!
Ok, I've sponsored the proposed fix to saucy and raring and updated the bug a bit to be SRU compliant (https:/
This bug was fixed in the package vsftpd - 3.0.2-1ubuntu2
---------------
vsftpd (3.0.2-1ubuntu2) saucy; urgency=low
* debian/
- patch to remove CLONE_NEWPID syscall
see: https:/
Fixes LP: #1160372
-- Daniel Llewellyn (Bang Communications) <email address hidden> Wed, 08 May 2013 14:08:53 +0100
Hi, I am using Opensue 12.3 64 Bit. Freshly installed and updated to the latest packages from the update repository.
In my opinion the problems regarding the present version 3.0.2-4.5.1 of vsftp are far from resolved. As other related bugs as
https:/
were marked as duplicates of this one I post my findings here.
Bug 1
******
I still need
seccomp_sandbox=NO
to connect, when TLS is enabled. With this option set to NO everything works as expected.
However, if seccomp_sandbox=YES I get the following messages in Filezilla when trying too connect from a remote system which also runs under OS 12.3:
Status: TLS/SSL-Verbindung hergestellt.
Antwort: 331 Please specify the password.
Befehl: PASS *******
Antwort: 230 Login successful.
Befehl: SYST
Antwort: 215 UNIX Type: L8
Befehl: FEAT
Antwort: 211-Features:
Antwort: AUTH TLS
Antwort: EPRT
Antwort: EPSV
Antwort: MDTM
Antwort: PASV
Antwort: PBSZ
Antwort: PROT
Antwort: REST STREAM
Antwort: SIZE
Antwort: TVFS
Antwort: UTF8
Antwort: 211 End
Befehl: OPTS UTF8 ON
Antwort: 200 Always in UTF8 mode.
Befehl: PBSZ 0
Antwort: 200 PBSZ set to 0.
Befehl: PROT P
Antwort: 200 PROT now Private.
Status: Verbunden
Status: Empfange Verzeichnisinha
Befehl: CWD /
Antwort: 250 Directory successfully changed.
Befehl: PWD
Antwort: 257 "/"
Befehl: TYPE I
Antwort: 200 Switching to Binary mode.
Befehl: PASV
Fehler: GnuTLS error -15: Ein unerwartetes TLS-Paket wurde empfangen.
Fehler: Verbindung zum Server getrennt: ECONNABORTED - Connection aborted
Fehler: Verzeichnisinhalt konnte nicht empfangen werden
Bug 2 (maybe related)
******
2) Even with "seccomp_
syslog_enable=YES
I get the following message in filezilla:
Status: Connecting to 192.168.0.37:21...
Status: Connection established, waiting for welcome message...
Response: 500 OOPS: priv_sock_get_cmd
Error: Critical error
Error: Could not connect to server
Bug 3:
******
From some OS 12.3 remote systems I cannot connect in case the following option is not set to NO:
require_
So all in all vsftp still shows major deficiencies on Opensuse 12.3 which were not present in OS 12.2.
Any ideas what I could do ?
(In reply to comment #63)
> From some OS 12.3 remote systems I cannot connect in case the following option
> is not set to NO:
>
> require_
>
I have seen that the OS 12.3-systems for which the setting "require_
is required all had the original Filezilla version 3.5.3 form the OS 12.3 OSS repository installed.
After installing Filezilla version 3.7.0.1 from the network repository
http://
this problem, which is obviously client related, disappears and the setting
require_
works.
The other problems described in comment #63, however, remain.
guys, a fresh install of the vsftp will still show this problem, we had to use the workaround provided. If a configuration setting has changed, ie "require_
@abonilla, @rm: hi, please open a **new** report. It's quite hard to follow the discussion in this one. And please attach the vsftpd.conf and an output of strace -f -tt
You might copy the vsftpd.service to /etc/systemd/
change the ExecStart line to
ExecStart=
and issuse systemctl daemon-reload && systemctl restart vsftpd.service
Dec 1 07:26:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND=
Dec 1 07:26:22 watcher-U56E sudo: pam_unix(
Dec 1 07:26:28 watcher-U56E sudo: pam_unix(
Dec 1 07:30:01 watcher-U56E CRON[2648]: pam_unix(
Dec 1 07:30:01 watcher-U56E CRON[2648]: pam_unix(
Dec 1 07:41:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND=
Dec 1 07:41:22 watcher-U56E sudo: pam_unix(
Dec 1 07:41:29 watcher-U56E sudo: pam_unix(
Dec 1 07:56:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND=
Dec 1 07:56:22 watcher-U56E sudo: pam_unix(
Dec 1 07:56:30 watcher-U56E sudo: pam_unix(
Dec 1 08:11:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND=
Dec 1 08:11:22 watcher-U56E sudo: pam_unix(
Dec 1 08:11:28 watcher-U56E sudo: pam_unix(
Dec 1 08:17:01 watcher-U56E CRON[2784]: pam_unix(
Dec 1 08:17:01 watcher-U56E CRON[2784]: pam_unix(
Dec 1 08:26:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND=
Dec 1 08:26:22 watcher-U56E sudo: pam_unix(
Dec 1 08:26:28 watcher-U56E sudo: pam_unix(
Dec 1 08:31:25 watcher-U56E mdm[1596]: pam_unix(
Dec 1 08:31:25 watcher-U56E mdm[1596]: pam_ck_
Dec 1 08:31:27 watcher-U56E dbus[1143]: [system] Rejected send message, 7 matched rules; type="method_
Dec 1 08:31:31 watcher-U56E polkitd(
Dec 1 08:31:34 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND=
Dec 1 08:31:34 watcher-U56E sudo: pam_unix(
Dec 1 08:31:42 watcher-U56E sudo: pam_unix...
I've just stumbled into this bug on 14.04.1. Worked around by commenting out "auth required pam_shells.so" in /etc/pam.d/vsftpd and restarting vsftpd as mentioned in 869684.
my server environment install vsftpd version 2.3.xx and i want config my vsftpd to jail user directory.
but when insert parameter file vsftpd and uncomment 'allow_
one again when insert 'seccomp_
maybe can help for this case :)
thank's before.
SUSE-RU-
Category: recommended (moderate)
Bug References: 786024,
CVE References:
Sources used:
SUSE Linux Enterprise Server 12-SP1 (src): vsftpd-3.0.2-31.1
SUSE Linux Enterprise Server 12 (src): vsftpd-3.0.2-31.1
openSUSE-
Category: recommended (moderate)
Bug References: 786024,
CVE References:
Sources used:
openSUSE Leap 42.1 (src): vsftpd-3.0.2-17.1
Thanks for reporting this bug. I can't reproduce this on a new raring system. Could you please paste your entire /etc/vsftpd.conf and your /etc/pam.d/vsftpd file and any files it @includes? | https://bugs.launchpad.net/ubuntu/+source/vsftpd/+bug/1160372 | CC-MAIN-2016-44 | refinedweb | 3,508 | 66.94 |
hey there,
I might be missing a crucial part of my maya learning, but is there a way to auto load a project?
Say you have a project for each shot in a pipeline, and when opening an animation file Maya automatically sets the project.
Could you provide more details?
What is not working?
I might still be missing something about setting projects in Maya, but it doesn't seem to automatically set a project when you have a workspace file in the same directory as your Maya file.
So I came up with this little code for doing just that.
import maya.cmds as mc

path = mc.file(q=True, sn=True)
fileName = mc.file(q=True, sn=True, shn=True)

if fileName != '':
    temp = path.partition(fileName)
    mc.workspace(temp[0], openWorkspace=True)
    print 'File Inherit Project Set'
else:
    mc.workspace(mc.optionVar(q='ProjectsDir'), openWorkspace=True)
    print 'Default Project Set'
Hi everyone,
I hope you are all fine and live happy and safely. This is the first time to post here in the forum, and I feel that there is a nice and variety developers community here. I really appreciate and willing to share with you some of my experience and knowledge. By the way, my English is second language, so if you misunderstand me please let me know.
Currently, I am working on designing and implementing a Filter-and-Pipe architecture for a Word document. I am stuck with implementing the Pipe class. My classes look like this:
Filter class
Pipe class
Class A extends Filter implements Runnable
Class B extends Filter implements Runnable
Class C extends Filter implements Runnable
Class D extends Filter implements Runnable
Class A: read a txt file line by line and SEND it to Class B.
Class B: read the line coming from A, processing it and SEND it to Class C.
Class C: read the line coming from B, processing it and SEND it to Class D.
Class D: read the line coming from C, processing it and print it out.
Each of these classes should run in its own thread. Each Filter should wait if there is no data ready; once data is ready, the sender should notify the receiver.
I have implemented the classes and am stuck with the pipe. I don't know how to send the processed line to the next Filter.
Here is what I have done:
The Filter and Pipe classes:

Code:
import java.util.LinkedList;
import java.util.Queue;

public class Filter {
    Pipe _dataINPipe;
    Pipe _dataOUTPipe;

    public String getData() {
        return _dataINPipe.dataOUT();
    }

    public void sendData(String tempData) {
        _dataOUTPipe.dataIN(tempData);
    }
}

class Pipe {
    Queue<String> _inData = new LinkedList<String>();

    public void dataIN(String in) {
        _inData.add(in);
    }

    public String dataOUT() {
        return _inData.poll();
    }
}

The main class creates all the objects and starts the threads:

Code:
Pipe p1 = new Pipe();
Pipe p2 = new Pipe();
Pipe p3 = new Pipe();
Pipe p4 = new Pipe();

// A a1 = new A(null, p1): null refers to the inPipe, and p1 to the outPipe
A a1 = new A(null, p1);
B b1 = new B(p1, p2);
C c1 = new C(p2, p3);
D d1 = new D(p3, null);

Thread th1 = new Thread(a1);
Thread th2 = new Thread(b1);
Thread th3 = new Thread(c1);
Thread th4 = new Thread(d1);

th1.start();
th2.start();
th3.start();
th4.start();

When I run the project, I notice that Class D runs and stops before it gets all the data to print out. This is because Class D checks whether the queue is empty; if it is, it stops processing and stops the thread.
For example, suppose the txt file has 10 lines. Class D will stop after line 4 because Class C takes time to process line 3 and cannot pass data to the queue for Class D in time.
I hope you understand the issue. I need your help if you have any idea. | http://forums.devshed.com/java-help-9/pipe-filter-architecture-txt-file-931278.html | CC-MAIN-2015-11 | refinedweb | 527 | 80.11 |
I need help; I am not sure if I am on the right track.
I have to write a program to calculate the number of nights someone has to stay and what size bed they want. Then I have to give a total cost plus 6% sales tax. Here is what I have so far. I am getting a compile error at the start of the first "if" statement.
please help
#include <iostream> // required to perform C++ stream I/O
#include <string> //requied to perform string varibles
#include <iomanip> //requied to perform parameterized stream manipulators
using namespace std; // for accessing C++ Standard Library members

int main() //requied for every program;should return integer result code
{
    char nights; // store number of nights
    char size; // store room type
    char response='y'; //store response

    while (response!='n' && response!='N') //start of while statement does not run if n/N is typed
    {
        cout<< "Please enter the numbers of nights you would like to stay:";//propomt for number of nights to stay
        cin>>nights; //store the number of nights in variable nights

        if (nights<='30') && (nights!='0') // response can not be more than 30 or less than 0
        {
            cout<<"please enter K for king size bed or Q for queen size bed:";// prompt for bed size
            cin>>size;// store the size k or q
        }
        if (size=='Q')// if Q than true
        {
            cout<<"n/Total: $" <<nights * 100.00; // output the number of nights times $100.00
        }
        else if (size=='K')// if K than true
        {
            cout<<"n/Total: $" <<nights * 150.00;// output the number of nights times $150.00
        }
        else (size!='K') && (size!='Q')// if k and q are false than continue
        {
            cout<<"number of nights must between 1-30";// output error
        }
        cout<<"Would you like to enter additional nights (y = yes, n = no)?";// prompt for additional nights
        cin>>response;// output response
        cout<<endl;//add space to make easier to read
    }//end while

    cout << "\n"; // insert newline for readability
    return 0; // indicate that program ended successfully
} // end function main
The errors produced by PHP are useful when developing scripts, but aren't sufficient for deployment in a web database application. Errors should inform users without confusing them, not expose secure internal information, report details to administrators, and have a look and feel consistent with the application. This section shows you how to add a professional error handler to your application, and also how to improve the internal PHP error handler to produce even more information during development.
If you're not keen to develop a custom handler (or don't want to use ours!), you'll find an excellent class that includes one at.
To begin, we show you how to implement a simple custom handler. The set_error_handler( ) function allows you to define a custom error handler that replaces the internal PHP handler for non-critical errors:
The function takes one parameter, a user-defined error_handler function that is called whenever an error occurs. On success, the function returns the previously defined error handler function name, which can be saved and restored later with another call to set_error_handler( ). The function returns false on failure.
The custom error handler is not called for the following errors: E_ERROR, E_PARSE, E_CORE_ERROR, E_CORE_WARNING, E_COMPILE_ERROR, and E_COMPILE_WARNING. For these, the PHP internal error handler is always used.
For example, to set up a new error handler that's defined in the function customHandler( ), you can register it with:
set_error_handler("customHandler");
The function name is passed as a quoted string, and doesn't include the brackets. After the new handler is defined, the error_reporting level in php.ini or defined in the script with error_reporting( ) has no effect: all errors are either passed to the custom handler or, if they're critical, to the PHP internal default handler. We discuss this more later.
A custom error handler function must accept at least two parameters: an integer error number and a descriptive error string. Three additional optional parameters can also be used: a string representing the filename of the script that caused the error; an integer line number indicating the line in that file where the error was noticed; and an array of additional variable context information.
Our initial implementation of the customHandler( ) function is shown in Example 12-1. It supports all five parameters, and uses them to construct an error string that displays more information than the default PHP internal handler. It handles only E_NOTICE and E_WARNING errors, and ignores all others.
After running the example, the handler outputs the following:
<hr><font color="red">
<b>Custom Error Handler -- Warning/Notice<b>
<br>An error has occurred on 38 line in the /usr/local/apache2/htdocs/example.12-1.php file.
<br>The error is a "Missing argument 1 for double( )" (error #2).
<br>Here's some context information:<br>
<pre>
Array
(
    [number] =>
)
</pre></font>
<hr>
The useful additional information is the output of a call to print_r( ), which dumps the state of all variables in the current context. In this case, there's only one variable, and it doesn't have a value: that's not surprising, because the warning is generated because the parameter is missing!
The context information is extracted from the fifth, array parameter to the customHandler( ) function. It contains as elements all of the variables that are in the current scope when the error occurred. In our Example 12-1, only one variable was in scope within the function, $number. If the customHandler( ) function is called from outside of all functions (in the main body of the program), it shows the contents of all global variables including the superglobals $_GET, $_POST, and $_SESSION.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
<html>
<head>
<meta http-
<title>Error</title>
<body>
<h1>Two times!</h1>
<?php
function customHandler($number, $string, $file, $line, $context)
{
   switch ($number)
   {
      case E_WARNING:
      case E_NOTICE:
         print "<hr><font color=\"red\">\n";
         print "<b>Custom Error Handler -- Warning/Notice<b>\n";
         print "<br>An error has occurred on {$line} line in the {$file} file.\n";
         print "<br>The error is a \"{$string}\" (error #{$number}).\n ";
         print "<br>Here's some context information:<br>\n<pre>\n";
         print_r($context);
         print "\n</pre></font>\n<hr>\n";
         break;
      default:
         // Do nothing
   }
}

function double($number)
{
   return $number*2;
}

set_error_handler("customHandler");

// Generates a warning for a missing parameter
print "Two times ten is: " . double( );
?>
</body>
</html>
As we stated earlier, the customHandler( ) function isn't called for the critical error types. For example, if we omit the semi-colon from the end of the first print statement:
print "Two times ten is: " . double( )
then the parse error that's output is the PHP default:
Parse error: parse error, unexpected T_PRINT in /usr/local/apache2/htdocs/example.12-1.php on line 46
You can't change this behavior.[1] Custom handlers work only for the E_WARNING and E_NOTICE errors, and for the entire USER class. The techniques to generate USER class errors are discussed in the next section.
[1] This isn't strictly true. It isn't possible to change the behavior within your scripts or in the php.ini file. However, it is possible to force all output produced by your script through a function, and to catch them after they've been output; this has a significant performance penalty. See for detailed information.
The custom handler we've shown here deliberately doesn't support USER class errors. If, for example, an E_USER_ERROR is generated, the handler is called, but nothing is output and the script doesn't stop. It's the responsibility of the programmer to deal with all error types, and to stop or continue the execution as appropriate. We develop a handler for all errors in the next section.
The simple custom error handler in the previous section has several disadvantages:
The handler offers only slightly more information than the PHP internal handler. Ideally, it should also include a backtrace, showing which function called the one containing the error, and so on back to the beginning of the script.
It shows technical information to the user, which is both confusing and a security risk. It should explain to the user that there's a problem with their request, and then log or send the technical information to someone who can fix it.
It can't handle programmer-generated errors. For example, in Chapter 6, we've used the showerror( ) function to handle database server errors. These errors should be integrated with our custom handler.
Our handler doesn't stop script execution, and doesn't leave the application in a known state. For example, if a session is open or the database is locked, the error handler doesn't clean these up.
In this section, we improve our custom handler to address these problems.
Example 12-2 shows an improved error handler that reports more information about how and where the error occurred. For example, if an E_WARNING error is generated by the fragment:
// Generates a warning for a missing parameter print "Two times ten is: " . double( );
then the handler outputs:
[PHP Error 20030616104153]E_WARNING on line 67 in bug.php.
[PHP Error 20030616104153]Error: "Missing argument 1 for double( )" (error #2).
[PHP Error 20030616104153]Backtrace:
[PHP Error 20030616104153] 0: double (line 67 in bug.php)
[PHP Error 20030616104153] 1: double (line 75 in bug.php)
[PHP Error 20030616104153]Variables in double ( ):
[PHP Error 20030616104153] number is NULL
[PHP Error 20030616104153]Client IP: 192.168.1.1
The backTrace( ) function uses the PHP library function debug_backtrace( ) to show a call graph, that is, the hierarchy of functions that were called to reach the function containing the bug. In this example, call #1 was from the main part of the script (though this is shown as a call from double( ), which is the function name that was called; this is a bug in debug_backtrace( )) and call #0 was the double( ) function that caused the error.
The debug_backtrace( ) function stores more details than the function name, but they are in a multidimensional array. If you're interested in using the function directly, try adding the following to your code:
var_dump(debug_backtrace( ));
Our custom handler also includes the following fragment:
$prepend = "\n[PHP Error " . date("YmdHis") . "]"; $error = ereg_replace("\n", $prepend, $error);
This replaces the carriage return at the beginning of each error line with a fragment that includes the date and time. Later in this section, we write this information to an error log file.
<?php
function backTrace($context)
{
   // Get a backtrace of the function calls
   $trace = debug_backtrace( );
   $calls = "\nBacktrace:";

   // Start at 2 -- ignore this function (0) and the customHandler( ) (1)
   for($x=2; $x < count($trace); $x++)
   {
      $callNo = $x - 2;
      $calls .= "\n {$callNo}: {$trace[$x]["function"]} ";
      $calls .= "(line {$trace[$x]["line"]} in {$trace[$x]["file"]})";
   }
   $calls .= "\nVariables in {$trace[2]["function"]} ( ):";

   // Use the $context to get variable information for the function
   // with the error
   foreach($context as $name => $value)
   {
      if (!empty($value))
         $calls .= "\n {$name} is {$value}";
      else
         $calls .= "\n {$name} is NULL";
   }

   return ($calls);
}

function customHandler($number, $string, $file, $line, $context)
{
   $error = "";

   switch ($number)
   {
      case E_WARNING:
         $error .= "\nE_WARNING on line {$line} in {$file}.\n";
         break;
      case E_NOTICE:
         $error .= "\nE_NOTICE on line {$line} in {$file}.\n";
         break;
      default:
         $error .= "UNHANDLED ERROR on line {$line} in {$file}.\n";
   }

   $error .= "Error: \"{$string}\" (error #{$number}).";
   $error .= backTrace($context);
   $error .= "\nClient IP: {$_SERVER["REMOTE_ADDR"]}";

   $prepend = "\n[PHP Error " . date("YmdHis") . "]";
   $error = ereg_replace("\n", $prepend, $error);

   // Output the error as pre-formatted text
   print "<pre>{$error}</pre>";

   // Log to a user-defined filename
   // error_log($error, 3, "/home/hugh/php_error_log");
}
Output of errors to the user agent (usually a web browser) is useful for debugging during development but shouldn't be used in a production application. Instead, you can use the PHP library error_log( ) function to log to an email address or a file. Also, you should alert the user of actions they can take, without providing them with unnecessary technical information.
The error_log( ) function has the following prototype:
The string message is the error message to be logged. The message_type can be 0, 1, or 3. A setting of 0 sends the message to the PHP system's error logger, which is configured using the error_log directive in the php.ini file. A setting of 1 sends an email to the destination email address with any additional email extra_headers that are provided. A setting of 3 appends the message to the file destination. A setting of 2 isn't available.
In practice, you should choose between logging to an email address or to a user-defined file; it's unlikely that the web server process will have permissions to write to the system error logger. To log to a file using our customHandler( ) in Example 12-2, uncomment the statement:
error_log($error, 3, "/home/hugh/php_error_log");
This will log to whatever is set as the logging destination by the third parameter; in this example, we're writing into a file in the administrator's home directory. You could use the directory C:\Windows\temp on a Microsoft Windows platform. If you'd prefer that errors arrive in email, replace the error_log( ) call with:
// Use a real email address! error_log($error, 1, "hugh@asdfgh.com");
In practice, we recommend logging to a file and monitoring the file. Receiving emails might sound like a good idea, but in practice if the DBMS is unavailable or another serious problem occurs, you're likely to receive hundreds of emails in a short time.
When the application goes into production, we also recommend removing the print statement that outputs messages to the browser. Instead, you should add a generic message that alerts the user to a problem and asks them contact the system administrator. You might also follow these statements with a call to die( ) to stop the program execution; remember, it's up to you whether you stop the program when an error occurs.
A better approach than adding print statements to show the error to the user is to create a template with the same look and feel as your application, and include the error messages there; we use this approach in our online winestore in later chapters. This approach also has the additional advantage that it prevents the problem we describe next.
An additional problem with printing errors without a template is that they can still appear anywhere in a partial page. This can lead to user confusion, produce non-compliant HTML, and look unattractive. If you use a template, you can choose whether to output the page or not: nothing is output until you call the show( ) method. However, even without a template, it's possible to prevent this happening by using the PHP library output buffering library.
The output buffering approach works as shown in the simplified error handler in Example 12-3. The call to ob_start( ) at the beginning of the script forces all output to be held in a buffer. When an error occurs, the ob_end_clean( ) function in the customHandler( ) function throws away whatever is in the buffer, and then outputs only the error message and stops the script. If no errors occur, the script runs as normal and the ob_end_flush( ) function outputs the document by flushing the buffer. With this approach, partial pages can't occur.
<?php
// start buffering
ob_start( );
?>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "">
<html>
<head>
<meta http-
<title>Error</title>
<body>
<?php
function customHandler($number, $string, $file, $line, $context)
{
   // Throw away the current buffer
   ob_end_clean( );
   print "An error occurred!";
   die( );
}

set_error_handler("customHandler");

// Generates an E_NOTICE
print $a;

// Output the buffer
ob_end_flush( );
?>
</body>
</html>
In Chapter 6, we triggered our own errors by calling the showerror( ) function, which outputs MySQL error messages. We added our own calls to die( ) to handle PEAR DB errors in Chapter 7. However, these approaches aren't consistent with using the custom error handler we've built in this chapter. Now that we have an error handler, it would be useful to be able to trigger its use through programmer-generated errors. This is where the USER class of errors and the PHP library function trigger_error( ) are useful:
The function triggers a programmer-defined error using two parameters: an error_message and an optional error_type that's set to one of E_USER_ERROR, E_USER_WARNING, or E_USER_NOTICE. The function calls the current error handler, and provides the same five parameters as other PHP error types.
Example 12-4 is a modified handler that processes errors generated by trigger_error( ). In addition, it stops the script when WARNING or ERROR class errors occur.
function customHandler($number, $string, $file, $line, $context)
{
   $error = "";

   switch ($number)
   {
      case E_USER_ERROR:
         $error .= "\nERROR on line {$line} in {$file}.\n";
         $stop = true;
         break;
      case E_WARNING:
      case E_USER_WARNING:
         $error .= "\nWARNING on line {$line} in {$file}.\n";
         $stop = true;
         break;
      case E_NOTICE:
      case E_USER_NOTICE:
         $error .= "\nNOTICE on line {$line} in {$file}.\n";
         $stop = false;
         break;
      default:
         $error .= "UNHANDLED ERROR on line {$line} in {$file}.\n";
         $stop = false;
   }

   $error .= "Error: \"{$string}\" (error #{$number}).";
   $error .= backTrace($context);
   $error .= "\nClient IP: {$_SERVER["REMOTE_ADDR"]}";

   $prepend = "\n[PHP Error " . date("YmdHis") . "]";
   $error = ereg_replace("\n", $prepend, $error);

   // Throw away the buffer
   ob_end_clean( );

   print "<pre>{$error}</pre>";

   // Log to a user-defined filename
   // error_log($error, 3, "/home/hugh/php_error_log");

   if ($stop == true)
      die( );
}
You can use this handler for several different purposes. For example, if a MySQL connection fails, you can report an error and halt the script:
// Connect to the MySQL server if (!($connection = @ mysql_connect($hostname, $username, $password))) trigger_error("Could not connect to DBMS", E_USER_ERROR);
You can also send error codes and messages through to the handler that are reported as the error string:
if (!(mysql_select_db($databaseName, $connection))) trigger_error(mysql_errno( ) . " : " . mysql_error( ), E_USER_ERROR);
You could even use this to log security or other problems. For example, if the user fails to log in with the correct password, you could store a NOTICE:
if ($password != $storedPassword) trigger_error("Incorrect login attempt by {$username}", E_USER_NOTICE);
We use trigger_error( ) extensively for error reporting in the online winestore in Chapter 16 through Chapter 20.
An advantage of a custom error handler is that you can add additional features to gracefully stop the application when an error occurs. For example, you might delete session variables, close database connections, unlock a database table, and log out the user. What actions are carried out is dependent on the application requirements, and we don't discuss this in detail here. However, our online winestore error handler in Chapter 16 carries out selected cleanup actions based on the state of session variables, and leaves the application in a known state. | http://etutorials.org/Programming/PHP+MySQL.+Building+web+database+applications/Chapter+12.+Errors+Debugging+and+Deployment/12.3+Custom+Error+Handlers/ | CC-MAIN-2022-05 | refinedweb | 2,751 | 54.12 |
Introduction
The State Pattern is an interesting design pattern in that it allows us to separate out portions of code into individual related modules, or states. This pattern is particularly useful for applications which need to retain state information, such as the current phase a program is in. While you can typically maintain this information using a basic integer variable, the State pattern helps abstract the specific state logic, reduces code complexity, and greatly increases code readability. This can make the difference between an application maintenance nightmare and a work of art.
This article describes how to use the State Pattern in C# to create a simple, console-based role-playing game (RPG). You'll see exactly how the State Pattern fits into the flow of the game and how it cleans up what might otherwise be a confusing tangle of integer values and if-then statements.
If Then Else What?
The core problem that the State pattern solves is reducing the complexity of if-then statements, which we ultimately need when managing state information. Since we're going to be creating a role playing game, knowing if the character is in an exploratory or battle state is important.
For our simple role playing game, we'll consider 2 states that our character may be in and 2 commands that he can perform. He can be Exploring or he can be in Battle and he can choose to Look Around or to Attack. Since we have 2 states and 2 command possibilities, we'll naturally have 4 situations to check for. They are:
State: Exploring
- Look
- Attack

State: Battle
- Look
- Attack
Without design patterns, you might normally design a role playing game similar to the following:
void Main()
{
    int state = 0;
    string command = "";

    while (command != "Quit")
    {
        // Read the next command from the player.
        command = Console.ReadLine();

        if (command == "Look" && state == 0)
        {
            // Explore the dungeon.
            DoExploreDungeon();
        }
        else if (command == "Attack" && state == 0)
        {
            // There's nothing to attack when you're just exploring!
            DoNothingToAttack();
        }
        else if (command == "Look" && state == 1)
        {
            // You can't explore when a monster is attacking you!
            DoMonsterFreeHit();
        }
        else if (command == "Attack" && state == 1)
        {
            // Attack the monster.
            DoAttackEvilMonster();
        }
    }
}
You would then define the function for each situation listed above. While this would certainly work and might not look like much code, think about what would happen if you added another command, such as "Inventory". You would have to add an additional two if-then statements. If you wanted to add another state, such as "InStore", you would need to create even more if-then statements. You can see how the above code could end up becoming complex and more difficult to maintain. This is where the State Pattern comes in!
Cleaning House with the State Pattern
Let's clean up the above code by implementing the State Pattern. Since we have two possible states, we'll have two state classes which manage the details of each state, but the main loop will be reduced to the following:
void Main()
{
    RPGController rpg = new RPGController();
    string command = "";

    while (command != "Quit")
    {
        // Read the next command from the player.
        command = Console.ReadLine();

        if (command == "Look")
        {
            rpg.Explore();
        }
        else if (command == "Attack")
        {
            rpg.Battle();
        }
    }
}
In the above C# code, using the State Pattern, we've already cut the number of if-then statements in half. In fact, we only need to check which command the user selected and call the proper function for that command. The State Pattern itself takes care of determining which specific state should act upon the command. As you can see, this greatly reduces code complexity. Let's take a look at how we put the State Pattern together.
There's Always an Interface
We start by constructing an interface for the State Pattern controller to use. All states will implement this interface so that they have the same list of functions which the controller class can call. Depending on which state is currently active, the controller will execute one of the interface functions. The functions in the interface can be considered the possible commands the user can choose. The concrete states we construct from this interface can be considered the possible states the character may be in.
As you can tell, we'll need to define the function bodies for each possible function in the state interface. In the case of impossible combinations, such as being in the Battle state and trying to Explore, we can throw an exception, leave the function body empty, or display an entertaining message to the user. In any case, the State Pattern still helps us manage the complexity.
Since we only have two commands, our interface for the State Pattern is defined as follows:
public interface IState
{
    int Explore();
    int Battle(int level);
}
Exploring the State of the Explore State
We have two states to create implementations for: Explore and Battle. We'll start with the Explore state. This will also be the initial state that the character will reside in. Since the character can select two commands (Look and Attack), and since we defined two functions in the interface (Explore and Battle), we can fill in the bodies for the user commands as follows:
public class ExploreState : IState
{
    private RPGController context;

    public ExploreState(RPGController context)
    {
        this.context = context;
    }

    #region IState Members

    public int Explore()
    {
        Console.WriteLine("You look around for something to kill.");

        int ran = RandomGenerator.GetRandomNumber(5);
        if (ran == 0)
        {
            Console.WriteLine("A monster approaches! Prepare for battle!");
            context.SetState(context.GetBattleState());
        }
        else if (ran == 1)
        {
            Console.WriteLine("You find a golden jewel behind a tree!");
            return 2;
        }

        return 0;
    }

    public int Battle(int level)
    {
        Console.WriteLine("You find nothing to attack.");
        return 0;
    }

    #endregion
}
First, notice that the state implements IState. This means it has an Explore and a Battle function, which are the two commands the user can perform. When in the Explore state, upon exploring, the user either finds a magical item that increases his experience or encounters a monster that he must fight. In the code, we simply pick a random number and, depending on the value, let the user know he found a monster or a magical item. If he found a monster, the user's state is changed to the Battle state. This allows us to call the same interface functions (Explore and Battle) but from a new state, the Battle state. On the other hand, if he found a magical item, his state remains the Explore state. Notice that we also define the Battle function even though this is within the Explore state. Since the user can't attack a tree, we simply tell him there is nothing to attack. You could also throw an exception in the Battle() function if you wish, as long as the user interface prevents the user from choosing the Attack command while in the Explore state. In our case, the commands never change for the user, so we'll define the bodies for all functions. At this point, you could easily draw a simple state diagram that illustrates our game, but we'll move on with the code.
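The RandomGenerator helper called by the states is never defined in the article. A minimal sketch of what it might look like (the class name and method signature are taken from the calls above, but the implementation itself is an assumption) could be:

```csharp
using System;

// Hypothetical helper -- the article calls RandomGenerator.GetRandomNumber()
// but never shows its definition.
public static class RandomGenerator
{
    // A single shared Random instance avoids the identical-seed problem
    // that occurs when many Random objects are created in quick succession.
    private static readonly Random random = new Random();

    // Returns a random integer in the range [0, maxValue).
    public static int GetRandomNumber(int maxValue)
    {
        return random.Next(maxValue);
    }
}
```

With this in place, `RandomGenerator.GetRandomNumber(5)` in ExploreState returns a value from 0 to 4, giving a one-in-five chance of meeting a monster.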
Battling the Battle State
Just as we did with the Explore state, the Battle state will define the same two interface functions Explore and Battle. The difference is that since we're now in a battle state, executing the Explore command should throw an exception (or display a message), and the Battle command will actually perform an attack. We define this state as follows:
public class BattleState : IState
{
    private RPGController context;
    private int rounds = 0;

    public BattleState(RPGController context)
    {
        this.context = context;
    }

    public int Explore()
    {
        Console.WriteLine("You'd love to, but see, there's this big ugly monster in front of you!");
        return 0;
    }

    public int Battle(int level)
    {
        Console.Write("You try to slay the monster.. ");
        rounds++;

        System.Threading.Thread.Sleep(1000);

        int maxRan = 10 - level;
        if (maxRan < 1)
        {
            maxRan = 1;
        }

        int ran = RandomGenerator.GetRandomNumber(maxRan);
        if (ran == 0)
        {
            Console.WriteLine("he's dead!");
            context.SetState(context.GetExploreState());
            int tempRounds = rounds;
            rounds = 0;

            return tempRounds;
        }
        else
        {
            Console.WriteLine("but fail.");
        }

        if (rounds >= 9)
        {
            Console.WriteLine("You panic and run away in fear.");
            context.SetState(context.GetExploreState());

            rounds = 0;
        }

        return 0;
    }
}
The beauty of the State Pattern is in how easy it is to add new states and commands to our application. Since the class implements the same interface (i.e., the same commands), we can easily define this new Battle state. The only interesting part of this class is the contents of the Battle function, which simply picks a random number to determine whether the user killed the monster. For each round the user fails to kill the monster (in one blow), the user loses potential experience. The faster a monster is killed, the more experience he gains.
It's also important to note that in the Battle function we include a transition back to the Explore state (just as the Explore state's Explore() function transitions to the Battle state when the user finds a monster). Notice that we include a reference to the state controller's context. This is just a pointer to the state controller, which ultimately tracks which state we're in and calls the desired functions. Let's move on to describing how the controller works.
The Master Pattern Controller
As you noticed in the state definitions above, we have a reference to a "context" variable, which points to the state controller. The controller manages and manipulates the current state. Our main program will call the controller, not the individual states.
The controller can be defined as follows:
public class RPGController { private IState exploreState; private IState battleState; private IState state; private int level = 1;
public RPGController() { exploreState = new ExploreState(this); battleState = new BattleState(this);
state = exploreState; }
public int Explore() { return state.Explore(); }
public int Battle() { return state.Battle(level); }
public void SetState(IState state) { this.state = state; }
public void SetLevel(int level) { this.level = level; }
public int GetLevel() { return level; }
public IState GetExploreState() { return exploreState; }
public IState GetBattleState() { return battleState; } }
The first point to note is that we define an instance of each state within the controller. We define exploreState and battleState. We instantiate these variables to their designated concrete state (Explore and Battle). We also define a state pointer, which holds our current state. This is similar to defining the integer variable in the original example above.
Next, we define functions for each command that the user can select (or just copy the functions we used in the interface which coorespond to the same thing): Explore and Battle. For the body to these functions, we simply execute the current state's method for that command. So for Explore(), we simply call state.Explore() and for Battle, we simply call state.Battle(). Remember, state is our current state and will point to either exploreState or battleState. Regardless of which concrete state it points to, the functions it can call are the same.
We also define a few helper functions, such as the SetState() function, which lets us change our active state, and the GetExploreState() and GetBattleState() functions, which return a concrete instance of the particular state. These are used during the state transitions when we reference the "context" variable to change states.
Putting it All Together, I Think We've Got a Game
You might be surprised at this point to realize we pretty much have a fully functional state-aware game. The state pattern hides and abstracts much of the complexity that we normally have to manage in the main code. Since the states are tucked away in individual modules managed by the controller, we only have to use the controller to handle our game states. We can define the main program loop as follows:
class Program { static RPGController rpg = new RPGController(); static int score = 0; static int nextLevel = 10;
static void Main(string[] args) { ConsoleKeyInfo key;
Console.WriteLine("-=-=- Simple Battle Adventure v1.0 -=-=-");
do { Console.WriteLine(); Console.WriteLine("L = Look Around, A = Attack, Q = Quit"); Console.Write("Score [" + score + "] Level [" + rpg.GetLevel() + "] Action [L,A,Q]: ");
key = Console.ReadKey();
Console.WriteLine();
DoAction(key.Key); } while (key.Key != ConsoleKey.Q && rpg.GetLevel() < 10);
if (rpg.GetLevel() >= 10) { Console.WriteLine(); Console.WriteLine("You WIN! Final score [" + score + "]"); }
Console.ReadKey(); }
static void DoAction(ConsoleKey key) { if (key == ConsoleKey.L) { int points = rpg.Explore(); if (points > 0) { Console.WriteLine("You gain " + points + " heroic points!"); score += points; } } else if (key == ConsoleKey.A) { int rounds = rpg.Battle(); if (rounds > 0) { int points = 10 - rounds; if (points < 0) { points = 0; }
score += points;
Console.WriteLine("You gain " + points + " heroic points!"); } }
if (score >= nextLevel) { rpg.SetLevel(rpg.GetLevel() + 1); nextLevel = rpg.GetLevel() * 10;
Console.WriteLine("Your wonderous experience has gained you a level! Level " + rpg.GetLevel()); } } }
The first thing we do is include a variable for the RPG controller. This is our key to using the state pattern controller. Next, we display a basic menu to the user and allow him to select a command on the keyboard. Our state pattern comes into play when we process the command in the DoAction() function. Depending on the command selected, we call the controller's designated function for that command (Explore or Battle). The controller (and its concrete states) handle the actual details about what to do based upon the user's input. So, when the user chooses to Look, we just tell the controller to execute a Look and it tells the current state to execute a "Look".
There is one additional utility class left out from the above code, which manages selecting random numbers. This class is defined as follows:
public static class RandomGenerator { private static Random random = new Random();
public static int GetRandomNumber(int maxValue) { return random.Next(maxValue); } }
Output
If you'd like to try the game yourself, you can download a copy of the compiled game by clicking here. Of course, you'll need to have the .NET framework on your PC. Once you put together the above code and run the program, the output of our rpg game using the State Pattern, will look like the following:
-=-=- Simple Battle Adventure v1.0 -=-=-
L = Look Around, A = Attack, Q = QuitScore [0] Level [1] Action [L,A,Q]: LYou look around for something to kill.You find a golden jewel behind a tree!You gain 2 heroic points!
L = Look Around, A = Attack, Q = QuitScore [2] Level [1] Action [L,A,Q]: LYou look around for something to kill.A monster approaches! Prepare for battle!
L = Look Around, A = Attack, Q = QuitScore [2] Level [1] Action [L,A,Q]: AYou try to slay the monster.. but fail.
L = Look Around, A = Attack, Q = QuitScore [2] Level [1] Action [L,A,Q]: AYou try to slay the monster.. he's dead!You gain 5 heroic points!
L = Look Around, A = Attack, Q = QuitScore [7] Level [1] Action [L,A,Q]: AYou find nothing to attack.
L = Look Around, A = Attack, Q = QuitScore [7] Level [1] Action [L,A,Q]: LYou look around for something to kill.You find a golden jewel behind a tree!You gain 2 heroic points!
L = Look Around, A = Attack, Q = QuitScore [9] Level [1] Action [L,A,Q]: LYou look around for something to kill.
L = Look Around, A = Attack, Q = QuitScore [9] Level [1] Action [L,A,Q]: LYou look around for something to kill.A monster approaches! Prepare for battle!
In the above output, you can guess which state the user was in when he performed each command and which action was called. For example, after slaying the monster, the user was returned to the Explore state, but selected the Attack command anyway. The response was "You find nothing to attack", indicating the state transitioned from Battle to Explore successfully.
Conclusion
Managing state in an application typically requires utilizing an integer index variable with a series of if-then or case statements, which can often add to code complexity and maintenance issues. The State Pattern in C# helps us organize our states into individual modules with a single controller and helps us abstract away the details behind the state implementation. This allows us to reduce code complexity, increase readability, and ultimately reduce maintenance in the future. The State Pattern is an easy addition to your design patterns toolkit and can be used in any stateful object oriented application.
About the Author
This article was written by Kory Becker, founder and chief developer of Primary Objects, a software and web application development company. You can contact Primary Objects regarding your software development needs at | http://www.primaryobjects.com/CMS/Article94.aspx | crawl-002 | refinedweb | 2,746 | 64.71 |
Elastic.
While its general interface is pretty natural, I must confess I’ve sometimes struggled to find my way around ElasticSearch’s powerful, but also quite complex, query system and the associated JSON-based “query DSL” (domain specific language).
This post therefore provides a simple introduction and guide to querying ElasticSearch that provides a short overview of how it all works together with a good set of examples of some of the most standard queries.
Table of Contents
- Terminology and URLs
- Quickstart
- Querying
- Query Language
- Appendix
Terminology and URLs
Throughout
{endpoint} refers to the ElasticSearch index type (aka
table). Note that ElasticSearch often let’s you run the same queries on both
“indexes” (aka database) and types.
If you were just using ElasticSearch standalone an example of an endpoint would be:.
Key urls:
Query:
{endpoint}/_search(in ElasticSearch < 0.19 this will return an error if visited without a query parameter)
- Query example:
{endpoint}/_search?size=5&pretty=true
Schema (Mapping):
{endpoint}/_mapping
Quickstart
cURL (or Browser)
The following examples utilize the cURL command line utility. If you prefer, you you can just open the relevant urls in your browser:
# query for documents / rows with title field containing 'jones' # added pretty=true to get the json results pretty printed curl {endpoint}/_search?q=title:jones&size=5&pretty=true
Adding some data:
# Data (argument to -d) should be a JSON document curl -X POST {endpoint} -d '{ "title": "jones", "amount": 5.7 }'
Javascript
A simple ajax (JSONP) request to the data API using jQuery:
var data = { size: 5 // get 5 results q: 'title:jones' // query on the title field for 'jones' }; $.ajax({ url: {endpoint}/_search, dataType: 'jsonp', success: function(data) { alert('Total results found: ' + data.hits.total) } });
Note: we’ve written a simple JS library for ElasticSearch which makes working with ElasticSearch much easier. Here’s a sample:
// Your ElasticSearch instance is running at // We are using index 'twitter' and type (table) 'tweet' var endpoint = ''; // Table = an ElasticSearch Type (aka Table) // var table = ES.Table(endpoint); // Create some data table.upsert({ id: '123', title: 'My new tweet' }).done(function() { // now get it table.get('123').done(function(doc) { console.log(doc); }); }); // Query for data // Queries follow Recline Query spec - // // (very similar to ES) table.query({ q: 'hello' filters: [ { term: { 'owner': 'jones' } } ] }).done(function(out) { console.log(out); });
Python
import urllib2 import json # ================================= # Store some data url = '{endpoint}' data = { 'title': 'jones', 'amount': 5.7 } # have to send the data as JSON data = json.dumps(data) req = urllib2.Request(url, data, headers) out = urllib2.urlopen(req) print out.read() # ================================= # Query the resulting "table" url = '{endpoint}/_search?q=title:jones&size=5' req = urllib2.Request(url) out = urllib2.urlopen(req) data = out.read() print data # returned data is JSON data = json.loads(data) # total number of results print data['hits']['total']
Querying
Basic Queries Using Only the Query String
Basic queries can be done using only query string parameters in the URL. For example, the following searches for text ‘hello’ in any field in any document and returns at most 5 results:
{endpoint}/_search?q=hello&size=5
Basic queries like this have the advantage that they only involve accessing a URL and thus, for example, can be performed just using any web browser. However, this method is limited and does not give you access to most of the more powerful query features..
Full Query API
More powerful and complex queries, including those that involve faceting and statistical operations, should use the full ElasticSearch query language and API.
In the query language queries are written as a JSON structure and is then sent to the query endpoint (details of the query langague below). There are two options for how a query is sent to the search endpoint:
Either as the value of a source query parameter e.g.:
{endpoint}/_search?source={Query-as-JSON}
Or in the request body, e.g.:
curl -XGET {endpoint}/_search -d 'Query-as-JSON'
For example:
curl -XGET {endpoint}/_search -d '{ "query" : { "term" : { "user": "kimchy" } } }'
Query Language
Queries are JSON objects with the following structure (each of the main sections has more detail below):
{ size: # number of results to return (defaults to 10) from: # offset into results (defaults to 0) fields: # list of document fields that should be returned - sort: # define sort order - see query: { # "query" object following the Query DSL: # details below }, facets: { # facets specifications # Facets provide summary information about a particular field or fields in the data } # special case for situations where you want to apply filter/query to results but *not* to facets filter: { # filter objects # a filter is a simple "filter" (query) on a specific field. # Simple means e.g. checking against a specific value or range of values }, }
Query results look like:
{ # some info about the query (which shards it used, how long it took etc) ... # the results hits: { total: # total number of matching documents hits: [ # list of "hits" returned { _id: # id of document score: # the search index score _source: { # document 'source' (i.e. the original JSON document you sent to the index } } ] } # facets if these were requested facets: { ... } }
Query DSL: Overview
Query objects are built up of sub-components. These sub-components are either basic or compound. Compound sub-components may contains other sub-components while basic may not. Example:
{ "query": { # compound component "bool": { # compound component "must": { # basic component "term": { "user": "jones" } } # compound component "must_not": { # basic component "range" : { "age" : { "from" : 10, "to" :.
Examples, of filters are (full list on RHS at the bottom of the query-dsl page):
- term: filter on a value for a field
- range: filter for a field having a range of values (>=, <= etc)
- geo_bbox: geo bounding box
- geo_distance: geo distance
Rather than attempting to set out all the constraints and options of the query-dsl we now offer a variety of examples.
Examples
Match all / Find Everything
{ "query": { "match_all": {} } }
Classic Search-Box Style Full-Text Query
This will perform a full-text style query across all fields. The query string
supports the Lucene query parser syntax and hence filters on specific fields
(e.g.
fieldname:value), wildcards (e.g.
abc*) as well as a variety of
options. For full details see the query-string documentation.
{ "query": { "query_string": { "query": {query string} } } }
Filter on One Field
{ "query": { "term": { {field-name}: {value} } } }
High performance equivalent using filters:
{ "query": { "constant_score": { "filter": { "term": { # note that value should be *lower-cased* {field-name}: {value} } } } }
Find all documents with value in a range
This can be used both for text ranges (e.g. A to Z), numeric ranges (10-20) and for dates (ElasticSearch will converts dates to ISO 8601 format so you can search as 1900-01-01 to 1920-02-03).
{ "query": { "constant_score": { "filter": { "range": { {field-name}: { "from": {lower-value} "to": {upper-value} } } } } } }
For more details see range filters.
Full-Text Query plus Filter on a Field
{ "query": { "query_string": { "query": {query string} }, "term": { {field}: {value} } } }
Filter on two fields
Note that you cannot, unfortunately, have a simple
and query by adding two
filters inside the query element. Instead you need an ‘and’ clause in a filter
(which in turn requires nesting in ‘filtered’). You could also achieve the same
result here using a bool query.
{ "query": { "filtered": { "query": { "match_all": {} }, "filter": { "and": [ { "range" : { "b" : { "from" : 4, "to" : "8" } }, }, { "term": { "a": "john" } } ] } } } }
Geospatial Query to find results near a given point
This uses the Geo Distance filter. It requires that indexed documents have a field of geo point type.
Source data (a point in San Francisco!):
# This should be in lat,lon order { ... "Location": "37.7809035011582, -122.412119695795" }
There are alternative formats to provide lon/lat locations e.g. (see ElasticSearch documentation for more):
# Note this must have lon,lat order (opposite of previous example!) { "Location":[-122.414753390488, 37.7762147914147] } # or ... { "Location": { "lon": -122.414753390488, "lat": 37.7762147914147 } }
We also need a mapping to specify that Location field is of type geo_point as this will not usually get guessed from the data (see below for more on mappings):
"properties": { "Location": { "type": "geo_point" } ... }
Now the actual query:
{ "query": { "filtered" : { "query" : { "match_all" : {} }, "filter" : { "geo_distance" : { "distance" : "20km", "Location" : { "lat" : 37.776, "lon" : -122.41 } } } } } }
Note that you can specify the query using specific lat, lon attributes even though original data did not have this structure (you can also use a query similar to the original structure if you wish - see Geo distance filter for more information).
Facets
Facets provide a way to get summary information about then data in an elasticsearch table, for example counts of distinct values.
ElasticSearch (and hence the Data API) provides rich faceting capabilities. The ES facet docs go a great job of listing of the various kinds of facets available and their structure so I won’t repeat it all here. Here is a list of some of the most important (full list on the facets page):
- Terms - counts by distinct terms (values) in a field
- Range - counts for a given set of ranges in a field
- Histogram and Date Histogram - counts by constant interval ranges
- Statistical - statistical summary of a field (mean, sum etc)
- Terms Stats - statistical summary on one field (stats field) for distinct terms in another field. For example, spending stats per department or per region.
- Geo Distance: counts by distance ranges from a given point
Note that you can apply multiple facets per query.
Appendix
Adding, Updating and Deleting Data
ElasticSeach, and hence the Data API, have a standard RESTful API. Thus:
POST {endpoint} : INSERT PUT/POST {endpoint}/ : UPDATE (or INSERT) DELETE {endpoint}/ : DELETE
For more on INSERT and UPDATE see the Index API documentation.
There is also support bulk insert and updates via the Bulk API.
Schema Mapping
As the ElasticSearch documentation states:
Mapping.
Relevant docs:.
JSONP support
JSONP support is available on any request via a simple callback query string parameter:
?callback=my_callback_name | http://okfnlabs.org/blog/2013/07/01/elasticsearch-query-tutorial.html | CC-MAIN-2016-44 | refinedweb | 1,625 | 61.26 |
ive doing a homework which we need to provide buttons (GUI) for a vending machine.
the vending machine takes dollars, quaters, dimes, and nickels, and the vending machine has drinks (60 cents) and snacks (45 cents).
we done a code were it does work through "SCANNER", but now we have to create a GUI for it.
/** * VendingMachineCUI: A program that emulates a vending machine. It uses a * VendingMachine object to represent a vending machine, and interacts with * the user through the console. */ import java.util.*; // for Scanner class public class VendingMachineCUI { public static void main (String[] args) { char action; // Store user's input characters String strDrinkAvailable, // whether drink is sold out strSnackAvailable; // whether snack is sold out Change change; // Used to keep a change // Create a Scanner object for input Scanner keyboard = new Scanner (System.in); // Create a vending machine object VendingMachine vm = new VendingMachine (); while (true) { if (vm.isItemAvailable (VendingMachine.ITEM_TYPE_DRINK)) strDrinkAvailable = ""; else strDrinkAvailable = " (sold out)"; if (vm.isItemAvailable (VendingMachine.ITEM_TYPE_SNACK)) strSnackAvailable = ""; else strSnackAvailable = " (sold out)"; // Display the menu System.out.println ( "A - Drink" + strDrinkAvailable + "\n" + "B - Snack" + strSnackAvailable + "\n\n" + "D - Enter a dollar\n" + "Q - Enter a quarter\n" + "M - Enter a dime\n" + "N - Enter a nickel\n\n" + "R - Return change\n\n"); // Display the amount entered in the machine System.out.println ("Amount available: " + vm.getAmountAvailable () + "\n\n"); // Get user's action System.out.print ("Your action: "); action = keyboard.next ().charAt (0); action = Character.toUpperCase (action); // Process switch (action) { case 'A': if (! vm.isItemAvailable (VendingMachine.ITEM_TYPE_DRINK)) System.out.println ("Sorry, it's sold out"); else if (! vm.isMoneyEnough (VendingMachine.ITEM_TYPE_DRINK)) System.out.println ("Sorry, not enough money for the purchase"); else if (vm.buyItem (VendingMachine.ITEM_TYPE_DRINK)) System.out.println ("Here is your drink"); break; case 'B': if (! vm.isItemAvailable (VendingMachine.ITEM_TYPE_SNACK)) System.out.println ("Sorry, it's sold out"); else if (! 
vm.isMoneyEnough (VendingMachine.ITEM_TYPE_SNACK)) System.out.println ("Sorry, not enough money for the purchase"); else if (vm.buyItem (VendingMachine.ITEM_TYPE_SNACK)) System.out.println ("Here is your snack"); break; case 'D': vm.inputDollar (); break; case 'Q': vm.inputQuarter (); break; case 'M': vm.inputDime (); break; case 'N': vm.inputNickel (); break; case 'R': change = vm.makeChange (); // Display the coins to the user System.out.println ( "# of quarters: " + change.getQuarters () + "\n" + "# of dimes: " + change.getDimes () + "\n" + "# of nickles: " + change.getNickels () + "\n"); break; default: System.out.println ("Sorry, " + action + " is an invalid option"); break; // simply do nothing for a invalid input } // end of switch System.out.println ("\n"); // get a vertical space } // end of while } }
thats the scanner one, it does work (since i have other programs which displays the change, and everyything else).
but i dont know where to start...
ive done one program with GUI, and was still strugglin about it.
i know the basics, such as...
inputing
import java.awt.*; import java.awt.event.*;
to activate the whole "event" thing through the eAction..
any help will be appreciateed...
thanks. | https://www.daniweb.com/programming/software-development/threads/125961/gui-vending-machine | CC-MAIN-2022-21 | refinedweb | 484 | 51.95 |
USAGE:
import com.greensock.TweenLite; import com.greensock.plugins.TweenPlugin; import com.greensock.plugins.FrameLabelPlugin; TweenPlugin.activate([FrameLabelPlugin]); //activation is permanent in the SWF, so this line only needs to be run once. TweenLite.to(mc, 1, {frameLabel:"myLabel"});
Note: When tweening the frames of a MovieClip, any audio that is embedded on the MovieClip's timeline (as "stream") will not be played. Doing so would be impossible because the tween might speed up or slow down the MovieClip to any degree.Copyright 2011, GreenSock. All rights reserved. This work is subject to the terms in or for Club GreenSock members, the software agreement that was issued with the membership. | http://www.greensock.com/asdocs/com/greensock/plugins/FrameLabelPlugin.html | CC-MAIN-2022-40 | refinedweb | 110 | 52.26 |
Hi all,
I know there have been several threads about getting card images into a GUI, but I'm flustered trying to understand why some things work and some things don't. Here's the code:
import random from Tkinter import * import Image, ImageTk class Card(object): suits = ["C","D","H","S"] values = ['A','2','3','4','5','6','7','8','9','T','J','Q','K'] down_img = PhotoImage(file="Deck1.gif") cards = [x+y for x in suits for y in values] pics = {} for x in cards: pics[x] = PhotoImage(file=x+'.gif') def __init__(self, suit=None, value=None, up=True): if suit in Card.suits: self.suit = suit else: self.suit = random.choice(Card.suits) if value in Card.values: self.value = value else: self.value = random.choice(Card.values) self.up = up def __str__(self): return self.value+self.suit def picURL(self): return Card.pics[self.suit+self.value] def display(self, x,y,angle,canvas): if self.up: canvas.create_image(x,y,Card.pics[self.suit+self.value]) else: canvas.create_image(x,y,Card.down_image) class Deck(Card): deck = Card.cards[:] def __init__(self, suit=None, value=None, up=True): card = str(suit)+str(value) if card not in Deck.deck: if Deck.deck == []: raise ValueError, "Deck empty!" card = random.choice(Deck.deck) suit=card[0] value=card[1:] Deck.deck.remove(card) self.suit = suit self.value = value self.up = up @staticmethod def shuffle(): Deck.deck = Card.cards class BridgeRound(Frame): # display_data contains display info for each hand: #(startx, starty, dx, dy, angle of rotation) display_data = [(30,0,10,0,0), \ (200, 30, 0, 10, 270), \ (170, 200, -10, 0, 180), \ (0, 170, 0, -10, 90)] def __init__(self, master): Frame.__init__(self,master) self.canvas = Canvas(self) self.canvas.grid() self.grid() self.deal() self.display() def deal(self): Deck.shuffle() self.hands = [[Deck() for i in range(13)] for j in range(4)] def display(self): for i in range(4): hand = self.hands[i] x, y, dx, dy, angle = BridgeRound.display_data[i] for j in hand: print j, j.display(self.canvas,x,y,angle) x += dx y += dy mainw = Tk() mainw.f = BridgeRound(mainw) mainw.mainloop()
You may ask, why does he create the images in the Card class? Isn't that wasteful of memory? Well, yes. BUT...for some reason, if I don't maintain a static reference to the card images, they don't display. Thus, a line like
canvas.create_image(PhotoImage(file="H6.gif"))
results in a blank image. So if I want to display all the cards at once, then I have to maintain them all in a data structure.
Anyways, when I run the code, I get
>>>
Traceback (most recent call last):
File "C:\Python24\src\cards\cards.py", line 5, in -toplevel-
class Card(object):
File "C:\Python24\src\cards\cards.py", line 14, in Card
pics[x] = PhotoImage(file=x+'.gif')
File "C:\Python24\lib\lib-tk\Tkinter.py", line 3203, in __init__
Image.__init__(self, 'photo', name, cnf, master, **kw)
File "C:\Python24\lib\lib-tk\Tkinter.py", line 3144, in __init__
raise RuntimeError, 'Too early to create image'
RuntimeError: Too early to create image
to which I say ?!@#*^$#^?!
So when's the best time to create an image?! I'm sure I can tweak the code to make it run ... but what's the underlying theory about Tkinter images here?
Thanks,
Jeff | https://www.daniweb.com/programming/software-development/threads/65594/trying-to-get-card-images-into-a-gui | CC-MAIN-2017-26 | refinedweb | 571 | 53.68 |
Azure’s new Event Hub
November 18, 2014
Over the past few months, I had the good fortune to be accepted to present at ThatConference in Wisconsin and CloudDevelop in Ohio. I count myself even more fortunate because at the time I submitted my session for both these events, it was about a new Azure solution that hadn’t even been announced yet, the Event Hub.
Whenever possible, I like to put demos into a real-world context. For this one, I reached out to two colleagues who were also presenting at ThatConference, and collectively we came up with the idea of a conference attendee-tracking solution. For my part of the series, I was going to cover using Event Hub to ingest event messages from various sources: social media, mobile apps, and proximity sensors. I also wrote some code so that the other sessions could consume the messages.
Event Hub vs. Topics/Queues
The first question to get out of the way is that Event Hub is NOT just a new variation on Topics/Queues. For this, I’ve found a simple visual example works best.
The key differentiator between the two is scale. A courier can pick up a selection of packages, and ensure they are delivered. But if you need to move hundreds of thousands of packages, you can do that with a lot of couriers, or you could build a distribution center capable of handling that kind of volume more quickly. Event Hub is that distribution center. But it’s been built as a managed service so you don’t have to build your own expensive facility. You can just leverage the one we’ve built for you.
In Service Bus, topics and queues are about the transportation and delivery of a specific payload (the brokered message) from point A to point B. They come with specific features (guaranteed delivery, visibility controls, etc.) that also limit the scale at which a single queue can operate. Event Hub was built to solve the challenge of scaled ingestion of messages, and did so by trading away those types of atomic operations. The easiest way to think of Event Hub is as a giant buffer into which you place messages, where they are automatically retained for a given period of time. You then have the ability to read those messages much as you would read a file stream from disk. You can even rewind all the way back to the beginning of the stream and process everything again.
And as you might expect given the different focus of the two solutions, the programming models are also different. So it’s also important to understand that switching from one to the other isn’t simply a matter of switching the SDK.
What is the Event Hub?
If you think back to Topics/Queues, you had the option of enabling partitions via the EnablePartitioning property. This would cause the topic or queue to switch from a single messaging broker (the service-side edge compute node) to 16 brokers, increasing the total throughput of the queue by 16 times. We call this approach partitioning, and it is exactly what Event Hub does.
When you create an Event Hub, you determine the number of partitions that you want (from 16, the default, up to 1024). This allows you to scale out the processes that need to consume events. Partitions are also used to help direct messages. When you send an event to the hub, you can assign a partition key, which is in turn hashed by the Event Hub brokers so that the event lands in a given partition. This hash ensures that as long as the same partition key is used, the events will be placed into the same partition and in turn will be picked up by the same receiver. If you don't specify a partition key, the events will be distributed randomly.
When it comes to throughput, partitions aren't the end of the story: an Event Hub's overall capacity is also governed by the number of throughput units you provision, each of which supports a fixed amount of ingress and egress per second.
So what we have is a service that can scale to handle massive ingestion of events, combined with a huge buffer just in case the back end, which also features scalable consumption, can't keep up with the rate at which messages are being sent. This gives us scale on multiple facets, as a managed, hosted service.
So about that presentation…
So the next obvious question is, “how does it work?” This is where my demos came in. I wanted to show using event hug to consume events from multiple sources: a social media feed, a mobile app used by vendors to scan attendee badges, and proximity sensors scattered around the conference to help measure session attendance.
I started by realizing that when I consume event, I needed to know what type they were (aka how to deserialize them). To make this easy, I started by defining my own customer, .NET message types. I selected twitter for the media feed and for the messages, the type class declaration looks like this:
So we have who tweeted, the text of the message, and when it was created. I decorated the class with various data attributes to aid in serialization.
When a tweet is found, we’ll need a client to send the event…
This creates an EventHubClient object, using a connection string from the configuration file, and a configuration setting that defines the name of the hub I want to send to.
Next up, we need to create my event object, and serialize it.
I opted to use Newtonsoft’s .NET JSON serializer. It was already brought in by the Event Hub nuget package. JSON is lighter weight then XML, and since Event Hub is based on total throughput, not # of messages, it made sense to keep the payloads as small as was convenient.
Finally, I have to actually send the message:
I create an instance of the EventData object using the serialized event, and assign a partition key to it. Furthermore, I also add a custom property to it that my event processors can then use to determine how to deserialize the event. Finally, I call the EventHubClient method, Send, handing my event as a property. The default way for the .NET client to do all this is to use AMQP 1.0 under the covers. But you can also do this via REST from just about any language you want. Here’s an example using Javascript…
This example comes from another part of the presentation’s demo, where I use a web page with imbedded javascript to simulate the vendor mobile device app for scanning attendee badges. In this case, the Partition key is in the URI and is set to “vendor”. While I’m still sending a JSON payload, this one uses a UTF-8 encoding instead of Unicode. Another reason it could be important that we have an event type when we’re consuming events.
Now you'll recall I explained that the partition key is used to help distribute the events so that we end up with a fairly even distribution among the consuming processes. So why would I choose to bind each of my examples to a single partition? In my case, I knew that volumes would be low, so there wasn't much risk of overloading my consuming processes. But you can also use this approach if you want to ensure that the same consuming process always gets the events from the same source. That can be really handy if the consuming process is using the events to maintain an in-memory state model of some type.
So what about consuming the events?
Events are consumed via "consumer groups". Each group can track its position within the overall event hub "stream" separately, allowing for parallel consumption of events. A default group is created when the event hub is created, but we can create our own. Consuming processes in turn create receivers, which connect to the individual partitions to actually consume the events. This would normally require you to code up some rather complicated logic to ensure that if the process that owns a given set of receivers becomes unavailable, another process can pick up the slack. Fortunately, the Event Hub team thought of this already and created another NuGet package called the EventProcessorHost.
Simply put, this is a handy .NET-based approach to handling resiliency and fault tolerance for event consumers/receivers. It uses Azure Storage blobs to track which receivers are attached to a given partition in an event hub. If you add or remove consuming processes, it will redistribute the receivers accordingly. I used this approach in my presentation to create a simple console app that displays the events coming into the hub. There are really just three parts to the solution: the program itself, a receiver class, and an event processor class.
The console program is the simplest bit of code…
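A sketch of what that console program does; the custom Receiver class comes with the sample, and the configuration keys and group name here are placeholders:

```csharp
string eventHubName = "defaulthub";
string consumerGroupName = "displaygroup";

// Create the consumer group if it doesn't already exist
var namespaceManager = NamespaceManager.CreateFromConnectionString(
    ConfigurationManager.AppSettings["ServiceBusConnectionString"]);
namespaceManager.CreateConsumerGroupIfNotExists(eventHubName, consumerGroupName);

// The Receiver class from the sample spreads threads across the partitions
var receiver = new Receiver(eventHubName, consumerGroupName);
receiver.MessageProcessingWithPartitionDistribution();

Console.WriteLine("Receiving. Press enter to stop.");
Console.ReadLine();
```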
We use the namespace manager to create a consumer group if the one we want doesn't already exist. Then we instantiate a Receiver object and tell it to start processing events, distributing threads across the various partitions in the event hub. The NuGet package comes with its own Receiver class, so there's not much you really need to do. The core of the receiver is in the MessageProcessingWithPartitionDistribution method.
You'll note that this may actually be a bit different than the version that arrives with the NuGet package. This is because I've modified it to use a consumer group name I specify, instead of just the default name. Otherwise, it's the same example. I get the Azure Storage connection string (where the blobs that will control our leases will go), and then use that to create an EventProcessorHost object. We then tell the host to start asynchronous event processing (via RegisterEventProcessorAsync). This registration actually points to our third class, which implements the IEventProcessor interface. Again, a template is provided as part of the NuGet package, so we don't have to write the entire thing ourselves. But if you look at the ProcessEventsAsync method, we see the heart of it…
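A sketch of that heart (the member names and checkpoint bookkeeping are my approximation of the sample, not its exact code):

```csharp
public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> events)
{
    foreach (EventData eventData in events)
    {
        string eventType = (string)eventData.Properties["Type"];
        byte[] body = eventData.GetBytes();

        if (eventType == "TwitterEvent")
        {
            // Tweets were serialized as Unicode (UTF-16) JSON
            var tweet = JsonConvert.DeserializeObject<TwitterEvent>(
                Encoding.Unicode.GetString(body));
            Console.WriteLine("{0}: {1}", tweet.Author, tweet.Text);
        }
        else
        {
            // e.g. the vendor badge scans, which arrived as UTF-8 JSON
            Console.WriteLine(Encoding.UTF8.GetString(body));
        }
    }

    // Checkpoint about once a minute so a restart resumes from here
    if (DateTime.UtcNow - this.lastCheckpoint > TimeSpan.FromMinutes(1))
    {
        await context.CheckpointAsync();
        this.lastCheckpoint = DateTime.UtcNow;
    }
}
```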
What's happening behind the scenes is that a thread is spun up for each partition in the Event Hub. This thread uses a blob lease to take ownership of a partition, then attaches a receiver and begins consuming the events. Each time it pulls events (by default, it will pull 10 at a time), the method shown above is called. This method just loops through the collection of events, and every minute tells the EventProcessorHost to checkpoint (record where we are in the stream) our progress. Inside the foreach loop is the code that looks at the events, deserializes them appropriately, and displays them on the program's console.
You can see we're checking the event's "Type" property, and then deserializing it back into an object using the proper encoding. It's a simple example, but I think it drives the point home.
We can see some of what's going on under the covers of the processor by looking at the blob storage account we've associated with it. First up, the EventProcessorHost creates a container in the storage account that is named the same as our event hub (so if you have multiple hubs with the same name in different namespaces, be sure to use different storage accounts). Within that container is a blob named "eventhub.info" which contains a JSON payload that describes the container and the hub.
{"EventHubPath":"defaulthub","CreatedAt":"2014-10-16T20:45:16.36","PartitionCount":16}
This shows the location of the hub, when this container/file was created, and the number of partitions in the hub. Getting the number of partitions is why you must use a connection string or SAS for the hub that has manage permissions. Also within this container is one blob (zero indexed) for each partition in the hub. These blobs also contain JSON payloads.
{"PartitionId":"0","Owner":"singleworker","Token":"642cbd55-6723-47ca-85ee-401bf9b2bbea","Epoch":1,"Offset":"-1"}
We have the partition this file is for, the owner (aka the EventProcessorHost name we gave it), a token (presumably for the lease), an epoch (not sure what this is for YET), and an offset. This last value is our position in the stream. When you call the CheckPointAsync method of our SimpleEventProcessor, it updates the offset so we don't read old messages again.
Now if we spin up two copies of this application, after a minute or so you'd see the two change ownership of various partitions. Messages start appearing in both, and provided you're spreading your messages over enough partitions, you'll even be able to see the partition keys at work as different clients get messages from specific partitions.
Ok, but what about the presentations?
Now, when I started this post, I mentioned that there was a presentation and a set of demos to go along with it. I've uploaded both for you to take away and use as you see fit. So enjoy!
Annotated Event Hub PowerPoint Presentation Deck && Event Hub Visual Studio 2013 Demo Solution (contains 3 demos and 5 projects)
Until next time!
Thanks for the article. I have been playing with Event Hub recently; I am trying to replicate the Event Hub .NET class in Java. I have managed to send events individually, but I'm just wondering how you send multiple events in a batch using the REST API?
Yes, you can. It's done similarly to doing batch sends for topics and queues. A writeup on doing this can be found at:
Brent, thanks for this great article. I am still, however, unsure about Event Hubs vs Topics. I would love it if you could elaborate on this topic. I have pretty good experience with Topics, and now I am judging this new kid on the block called Event Hubs. How are Event Hubs any better than Topics? It seems like both can accomplish exactly the same scenarios if implemented with enough thought (thinking of subscriptions), or can they?
Thanks in advance,
Lucas
Not better, or worse. Just different. You can accomplish similar solutions, but it's a matter of the amount of overhead/complexity involved. One key difference is that Event Hub is more of a buffer, so you have the capability to rewind the "stream" back to older events and replay them. However, with that comes the need to manage where you are in the stream, which you don't have to do with queues/topics.
Thanks for these explanations. I better understand the difference from queues now. 🙂
Thanks. After this, I'm wondering if I should move from my IoT => blob => BI scheme to an IoT => EventHub => Stream Analytics one. Remaining question: will the Event Hub client (e.g. on IoT-Core) buffer my input stream for a while (how long/how much) if my connection is flaky? This is the reason I chose my initial scheme in the first place.
Our ArduStation MEGA boards arrived today and I am truly excited to see how they work. Now starts the small assembly job for them, and then we can start testing them.
Big thanks to Colin for his original ASM work; it's great to see what great minds can create.
The new boards look just awesome. A few connectors are missing, but those are quickly soldered. These boards are from the first batch and we now have a small quantity in stock. If there are any changes that need to be made, we will make them, and then the first production batch will come.
Technical specs:
- MCU: Atmel ATmega2560 with an ATmega8U2 USB "adapter"
- XBee footprint for telemetry with external pins
- IO: Buzzer, Analog, I2C, GPS
- SDCard holder for storage needs
- 128 x 64 pixel Graphics LCD
- 3 x TTL Serial output pins
- 2 x Servo outputs with internal/external power feed
- Encoder port for menus etc use
- Connection for I2C Keyboard
- 4 x LEDs for showing different statuses
Hi PPL, does anyone know how to get the uBlox Neo 6 to work with the ArduStation Mega? I have checked the baud rate on the GPS (it is 38400 baud), but the ArduStation says no GPS. I have seen something about updating the ArduStation Mega to ArdustationMega_v0.6_ublox.hex via Mission Planner; how do I do that?
Thanks
Hi PPL.
What is the max "safe" voltage on the battery wires?
Hello all,
I have connected an EM406 GPS module to the GPS port; the ArduStation Mega refuses to connect to it in any mode, be it _AUTO, _NMEA, or _SIRF.
Can anyone help me please?
Thanks,
Madhu.
Thanks Colin,
Seems obvious after you pointed it out! Thanks also for the quick response and for dealing with my problem.
I look forward to giving it a go!
Alex
Open up hardware.h and on line 15 change
#define GPS_PROTOCOL GPS_PROTOCOL_UBLOX
to
#define GPS_PROTOCOL GPS_PROTOCOL_MTK19
you can also use _AUTO, but I found it was less reliable.
Colin
Hi All,
I have been trying to get my ASM working with a Mediatek GPS. The 0.06 binary downloads and works fine, but when I compile in Arduino I get "NOGPS" on the "ASM Status" page. When I use the precompiled Mediatek binary I get "GPS Fix", but it seems that not all of the pages are there.
All other parameters work fine and it connects to my APM plane with no other (obvious) problems.
I'm not sure how to go about compiling the code for Mediatek instead of uBlox.
Any help would be much appreciated.
Thanks, Alex
Hi Jean-Marie, thanks for your reply.
First I wanted to know what type of GPS should be connected to the ASM V1.5 (I saw in this thread that someone is using a Mediatek; I'm used to using uBlox units).
Actually, it's the connection type that bothers me a bit: the 6-wire connection; is there a standard to use?
Otherwise, it works: I hooked up the telemetry and it communicates fine with the APM2.6.
Would you have any documentation or the user manual?
I got my ASM here :...
but the documentation :...
doesn't work; I don't recognize this .dpbs file extension.
Have you found anything on your side? Thanks! :-)
Hi Christophe,
Welcome! I'm French too.
Normally, the latest 0.6 .hex from Colin works with uBlox, but in the Arduino sketch it seems to have:
#include "AP_GPS_NMEA.h"
#include "AP_GPS_SIRF.h"
//#include "AP_GPS_406.h"
#include "AP_GPS_UBLOX.h"
#include "AP_GPS_MTK.h"
#include "AP_GPS_MTK19.h"
#include "AP_GPS_None.h"
#include "AP_GPS_Auto.h"
Normally, if your ASM is near your plane, you could set up the UAV position as the ASM=GCS position, but for me it seems to be only partially working at the moment.
The tracker page is very young and, honestly, despite Colin's marvellous job, I'm not sure it really works, because nobody here seems to have plugged in servos to have a real 'tracker antenna'.
I've had my ASM for 2 weeks, but I haven't spent a lot of time on it except on weekends, and I prefer flying to trying to get this working!
Christophe, feel free to ask me questions in French if you want!
Hello all, I'm new and happy to be here. I'm French, and interested in UAVs, with a Skywalker 1900 + APM2.6 system and an ASM 1.5 for the ground station.
My first question is: must a GPS be plugged into the ASM for it to work?
I've seen earlier that the Mediatek GPS module is good?
Thanks. ;-)
Thanks guys. Tom, those are some good links.
I'll probably go with the original then, as I already have one. :)
From what I'm reading, having an extra radio listening in might work... or it might not. It's slightly complicated. Only thing to do is test it. | https://diydrones.com/profiles/blogs/ardustation-mega-v1-0b-arrived?commentId=705844%3AComment%3A1235982&xg_source=msg_com_blogpost | CC-MAIN-2022-40 | refinedweb | 818 | 75.2 |
Installation instructions
Introduction
OpenPTV consists of these main parts:
The Python bindings allow easy access to the C library. There are two packages built around the Python bindings:
- Python 2.7 with PyQt4 GUIs and command line scripts from Yosef Meller called The Particle Bureau of Investigation or pbi
- Python 2.7 with Enthought based GUI (using TraitsUI and Chaco) called PyPTV or openptv-python
The C library
liboptv uses the Check framework for the unit tests and a CMake project for the build. We recommend installing both software packages; however, it is not obligatory, and you may skip the relevant parts if you're not going to develop or test the library.
One has to install liboptv and the Python bindings first, and after that pbi or pyptv (or both).
liboptv - a library of the OpenPTV algorithms
This is a library - you can build it and link to it in your own project, e.g. calling functions from your own GUI or command-line software. When the package is installed correctly, you can reference it in your code by including files from the optv directory under the standard include path. For example:
#include <optv/tracking_frame_buf.h>
To build your program, you also link it with liboptv. With gcc, one adds the flag -loptv to the command line. Other compilers and IDEs have their own instructions for adding libraries; consult your IDE/compiler manual for the details.
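For example, a link line might look like this (the source file name is a placeholder):

```shell
gcc -o my_tracker my_tracker.c -loptv
```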
Below are instructions for building it and making it ready for use, either from a source distribution (a tarball with all necessary files) or directly from a Git repository.
Installing on Linux/Mac OS
Create the directory that will contain everything:
mkdir ~/openptv && cd ~/openptv
Install Check Unit testing framework:
sudo apt-get install check
Install Cmake:
sudo apt-get install cmake
Install missing compilers:
sudo apt-get install build-essential
Install git:
sudo apt-get install git
Install Anaconda (read their instructions; it should be straightforward) or the Canopy Python distribution (Python 2.7)
bash canopy_xxx.xxx.sh
Run Canopy (either ~/Canopy/canopy or Applications -> Canopy)
Add scikit-image package through Canopy Package Manager
Open Canopy Terminal through Canopy -> Tools -> Canopy Terminal
Get OpenPTV library liboptv, compile it using cmake and test it:
cd ~/openptv
git clone git://github.com/OpenPTV/openptv.git
cd openptv/liboptv
mkdir build && cd build
cmake ../ -G "Unix Makefiles"
make
make verify
see that all tests passed and then:
sudo make install
check that it’s installed under /usr/local/include and /usr/local/lib
Install the Python bindings for the library (this installs the optv module):
cd ~/openptv/openptv/py_bind
python setup.py build_ext --inplace -I/usr/local/include -L/usr/local/lib
python setup.py install
cd tests
nosetests
56 tests should be running and passing
Get pbi or pyptv; the example here is for pyptv:
cd ~/openptv
git clone git://github.com/alexlib/pyptv.git
cd pyptv/pyptv_gui
python setup.py install
Get the 3D-PTV test folder and create the res/ folder if it is missing
git clone
Run the software using the test_cavity folder; if all works fine, that's it:
python pyptv_gui.py ~/openptv/test_cavity
If you encounter an error like this while trying to run openptv-python:

python pyptv_gui.py ../../test_cavity
ImportError: liboptv.so: cannot open shared object file: No such file or directory
Then it’s suggested to try:
PATH=/usr/local/lib:$PATH python pyptv_gui.py ~/ptv_test_folder/
or
LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH python pyptv_gui.py ~/ptv_test_folder/
Whichever works, it is then useful to update the local ~/.bashrc file to define this every time one opens a shell.
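For example, one way to make the setting permanent (assuming the library was installed to /usr/local/lib):

```shell
# Append the export to ~/.bashrc so every new shell picks it up
echo 'export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH' >> ~/.bashrc

# Verify the line is there
grep 'LD_LIBRARY_PATH' ~/.bashrc
```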
Building from source/Installing on Mac OS X
Basically follow the Linux instructions; wherever the apt-get package manager is used, one can use the MacPorts or Homebrew package managers. Alternatively, one can simply download and install from source a) Check, b) CMake, c) liboptv, and d) openptv-python, and install Anaconda using the Mac OS package.
Installing directly from the source tree in Git is fast and simple, using CMake. Before installation, we first need to make sure that the dependencies are installed. Slightly more detailed instructions for Ubuntu users are given below.
The tests of liboptv depend on the Check framework. You can either get it from your package manager (e.g. on Ubuntu, sudo apt-get install check), or get the source package from <> and install it according to the instructions in the Check package. Typically you would need to run the following commands in a shell:
./configure
make
make install
The build instructions for liboptv are processed with CMake. Again, this is available either from your package manager on Linux, or from <>.
After installing CMake, installing liboptv is a simple matter of running the following commands in the liboptv/ directory:
cmake
make
make verify
make install
The install process will put the header files and libraries in default system paths (on Linux, /usr/local/include and /usr/local/lib respectively), so that you can use the optv library in your code without further modifications. You can run cmake with parameters that change the install locations (see below on the Windows install process). If you do so, remember to make sure that your compiler knows the path to the installed location.
Installing on Windows - outdated
Unlike Linux and MacOS, which both implement POSIX standards (the Unix standards base) and contain a default C build environment, Windows has no such environment. Choices range from the bare-bones Windows SDK, whose compiler is based on an outdated version of C, to Visual Studio, a commercial product with its own IDE and build toolchain. Aside from being costly and proprietary, building with these compilers introduces compatibility problems with other programs. For example, building Python modules with VC 2010 from the Windows SDK fails because Python was built with VC 2008.
All this preamble is here to justify the fact that the build instructions here are for the MinGW compiler and the MSYS package of Unix tools for Windows. After a hard process of trial and error, this was found to be the easiest, most compatible solution.
The MSYS package provides the GCC compiler (MinGW), a Bash command-line shell and Unix build tools for Windows. It can be found here: <>
Use the mingw-get-setup method of installation. During the installation you will be asked to choose subpackages. If you don't know what you're doing, choose everything.
After installing MSYS/MinGW according to the instructions on the MSYS site, you will have a MinGW shell or MSYS shell in your Start menu. Future instructions assume that this shell is used. The installation instructions on the MSYS page given above list some more steps you can and should do, so follow that page carefully. In particular, don't forget to create the fstab file as instructed there.
Installing Check
The tests of liboptv depend on the Check framework. You can get it from <>
Some versions have problems with Windows. Version 0.9.8 is known to work. You can get version 0.9.14 to work by editing lib/libcompat.h and commenting out or removing lines 147-151.
Installing Check is done roughly in the same way as on Linux, in the MSYS shell:
./configure --prefix /usr
make
make install
However, it is important to note where the install actually lands so that we can help CMake find it. The Check library will be installed under the MSYS tree which was set up when installing MSYS. The above installation is in what MSYS refers to as /usr. If your MSYS is installed in C:\MinGW, then Check would be in C:\MinGW\msys\1.0\lib and C:\MinGW\msys\1.0\include or a similar path. Make sure to verify this.
Installing liboptv
Now that Check is installed, installing liboptv is relatively straightforward. Since you are reading this file, you already have the package. Enter the liboptv/ subdirectory, create a directory under it called build/, and change into it.
To process the build instructions, install CMake from cmake.org.
Now, in the build/ directory, initialize cmake with the following command:
cmake ../ -G "Unix Makefiles" -i
CMake will then ask you some questions about your system (accepting the defaults is usually OK). Now, and at any future step, you can erase the contents of the build/ directory and start over. You can also regenerate makefiles with a simple cmake ../ in a working build directory, since CMake caches values you set before.
Now that CMake is initialized, a command to generate Makefiles with all paths specified in advance would be:
cmake ../ -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_PREFIX_PATH=/c/MinGW/msys/1.0/
Note that the path where Check was installed is specified; be sure to adjust it if it is a different path on your system.
Now to build and install liboptv, type:
make make install
This would install liboptv in what MSYS refers to as /usr, which is C:\MinGW\msys\1.0\ on my system. Any further program that is built using MSYS looks for this path by default, so no further adjustment is necessary for using liboptv in your program, other than adding the include and link directives specified above.
However, at run time it appears that the pyd file we just installed looks for the accompanying DLL that was installed alongside it. Windows wants this DLL to be in the PATH it searches for executables. So the last step of installing on Windows is to modify the PATH environment variable so that it lists the place where the liboptv DLL is installed (in our example, this would be C:\MinGW\msys\1.0\lib). This can be done by right-clicking Computer in the Start menu, choosing Properties -> Advanced system settings -> Environment Variables, then editing the PATH variable in the bottom list and adding the DLL's location, separated by a semicolon (;) character from the directories already listed.
Installing Python environment
We recommend using Enthought Canopy Distribution or Anaconda to get all the necessary packages. The main ones are:
- Numpy
- Scipy
- Cython
- ETS from Enthought (including Traits, Chaco and Pyface - most difficult to build yourself)
- PyQt
For Windows 7 (8.1) there is an additional option to use pre-compiled binaries as explained here: Installation on Windows 7 using MinGW
If nothing works, where can I get help?
Send your build logs, a description of the problem, and details of the operating system, Python version, etc. to our Google group or forum: <>
Is there a virtual machine?
Yes, follow these instructions
Is there a Docker?
Yes, follow this and | http://openptv-python.readthedocs.io/en/latest/installation_instruction.html | CC-MAIN-2018-30 | refinedweb | 1,756 | 62.68 |
09 July 2009 18:09 [Source: ICIS news]
TORONTO (ICIS news)--Shell may sell or close its 130,000 bbl/day refinery in Montreal in Canada’s Quebec province, a spokeswoman said on Thursday, confirming media reports.
The refinery employs about 500 full-time workers.
“We began a strategic review for the refinery and have informed staff,” Calgary-based Shell spokeswoman Jana Masters told ICIS news.
The review would take several months, she said, adding that Shell did not yet have a timeline for when it would make a definite decision.
The move comes as Shell is reviewing downstream operations.
Including the
In March, Shell’s Canadian affiliate PTT Poly
Last year, the company cancelled plans for a possible grassroots refinery at | http://www.icis.com/Articles/2009/07/09/9231556/shell-could-sell-or-close-130000-bblday-montreal-refinery.html | CC-MAIN-2014-42 | refinedweb | 125 | 60.35 |
I've got an XML file that looks like this:
<routingTable>
    <router>
        <id>1</id>
        <ip>1.1.1.1</ip>
        <nexthop>1.1.1.2</nexthop>
    </router>
    <router>
        <id>2</id>
        <ip>2.2.2.1</ip>
        <nexthop>2.2.2.2</nexthop>
    </router>
</routingTable>
There could be more than two router entries as time goes on. What I would like to do is take each router instance and throw it into a dictionary. This way, I can query each one by ID, IP, or nexthop. I've been playing with xml.etree.ElementTree but no luck so far. I tried different things with for loops which will print each of the tag values, but I can't really reference them later, e.g.:
from xml.etree import ElementTree as ET

routetable = ET.parse('RoutingTable.xml')
for route in routetable.iter():
    print route.text
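One way to build that lookup is a dict of dicts keyed by router id (a sketch; the XML from the question is inlined here so the example stands alone, and you could key by ip or nexthop the same way):

```python
import xml.etree.ElementTree as ET

xml_data = """<routingTable>
  <router><id>1</id><ip>1.1.1.1</ip><nexthop>1.1.1.2</nexthop></router>
  <router><id>2</id><ip>2.2.2.1</ip><nexthop>2.2.2.2</nexthop></router>
</routingTable>"""

root = ET.fromstring(xml_data)

# One dict per router element, keyed by its id
routers = {}
for router in root.findall('router'):
    routers[router.find('id').text] = {
        'ip': router.find('ip').text,
        'nexthop': router.find('nexthop').text,
    }

print(routers['1']['ip'])       # 1.1.1.1
print(routers['2']['nexthop'])  # 2.2.2.2
```

For a file on disk, `ET.parse('RoutingTable.xml').getroot()` gives you the same root element to loop over.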