text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
> [Guido van Rossum] > >>> Actually I was attempting to find a solution not just for > >>> properties but for other situations as well. E.g. someone might > >>> want to define capabilities, or event handlers, or ... > > [Greg Ewing] > >> But anyway, here's another idea: > >> > >> def foo as property: > >> def __get__(self): > >> ... > >> def __set__(self, x): > >> ... > > [Guido van Rossum] > > I don't like things that reuse 'def', unless the existing 'def' > > is a special case and not just an alternative branch in the grammar. > > I think Greg's idea ("as" or "is") could satisfy these conditions well > (generic solution, and unadorned "def" as special case). > > A standard function definition (simple, unadorned "def") is > equivalent to: > > def f as function: > suite In this syntax, where does the list of formal parameters go? Also, this requires that the scope rules for the suite are exactly as they currently are for functions (including references to non-locals). That's all fine and dandy, but then I don't understand how the poor implementation of property is going to extract __get__ etc. from the local variables of the function body after executing it. > A standard method definition is as above, or if a distinction is > required or useful it could be equivalent to: > > def m(self) as method: > suite OK, I can buy that. I've got a little bit of philosophy on the use of keywords vs. punctuation. While punctuation is concise, and often traditional for things like arithmetic operations, giving punctuation too much power can lead to loss of readability. (So yes, I regret significant trailing commas in print and tuples -- just a little bit.) So I can see the attraction of placing "as foo, bar" instead of "[foo, bar]" between the argument list and the colon. > This syntax could be used to remove a recent wart, making generators > explicit: > > def g as generator: > suite > > Of course, it would be a syntax error if the generator suite didn't > contain a "yield" statement. But then 'generator' would have to be recognized by the parse as a magic keyword. I thought that the idea was that there could be a list of arbitrary expressions after the 'as' (or inside the square brackets). If it has to be keywords, you lose lots of flexibility. > This syntax has aesthetic appeal. I know nothing about its > implementability. That's too bad, because implementabilty makes or breaks a language feature. > What namespace these new modifier terms would live in, I also don't > know. The syntax proposal reads well and seems like it could be a > good general-purpose solution. The devil is in the namespace details. Unless we can sort those out, all proposals are created equal. --Guido van Rossum (home page:) | https://mail.python.org/pipermail/python-dev/2003-January/032649.html | CC-MAIN-2017-39 | refinedweb | 452 | 63.49 |
Hide Forgot
It seems like the recomended way for an application to find out weather it
is ok to output UTF-8 encoded chars to use the nl_langinfo(CODESET)
call. If a user wants to communicate that her setup interprets UTF-8 data
correctly (for example, xterm -u8 is used) the logical thing to do would be
to set LANG to for example sv_SE.UTF-8. In theory that is. It turns out
that glibc seems to discard the codeset information if the given locale is
not encoded in that locale. By default
a redhat-7 system returns ISO-8859-1 when nl_langinfo(CODESET) is called
even if LANG is set to sv_SE.UTF-8.
A workaround is to use create a private locale with the right
'codesetlocaledef -v -c -i sv_SE -f UTF-8 $HOME/locales/sv_SE.UTF-8'
and then set LOCPATH for example like this:
[noa@nestor locales]$ LOCPATH=$HOME/locales LANG=sv_SE.UTF-8 ~/c/cs_test
codeset: UTF-8
Conclusion: the codeset part of LANG or LC_CTYPE/LC_ALL should reflect to
what nl_langinfo(CODESET) returned even if the locale in question is not
encoded in that codeset.
Actually it is not.
nl_langinfo(CODESET) informs you in which charset are all other nl_langinfo
values encoded. Ulrich Drepper said he'll actually change glibc today so that
it does not accept locale setting like sv_SE.UTF-8 unless sv_SE locale is
encoded in UTF-8 character set (the other way around this would be translating
the locales on the fly, but that's expensive).
What you should basically do is first check the $OUTPUT_CHARSET variable
and if it is not given, try nl_langinfo(CODESET).
If the currently set locale uses the UTF-8 character encoding, then all standard
input/output/error communication, all file names, and all data in plaintext
files and pipes for which no other encoding is specified explicitely etc. should
be in UTF-8.
It is correct that the recomended way for an application to find out whether the
current locale uses UTF-8 is to use a test like
strcmp(nl_langinfo(CODESET), "UTF-8") == 0
For details, please read
To test on the command line, which encoding you have set, use
locale charmap
If you set LANG=sv_SE.UTF-8 on a system where no "sv_SE.UTF-8" locale is
installed, then glibc will just silently stay in the "C" locale, unless the
program checked the return value of setlocale() properly. This is what happened
to you.
For instance SuSE Linux 7.1 installs by default the precompiled UTF-8 locales
de_DE.UTF-8 el_GR.UTF-8 en_GB.UTF-8 en_US.UTF-8 fa_IR.UTF-8 fr_FR.UTF-8
hi_IN.UTF-8 ja_JP.UTF-8 ko_KR.UTF-8 mr_IN.UTF-8 ru_RU.UTF-8 vi_VN.UTF-8
zh_CN.UTF-8 zh_TW.UTF-8
in /usr/share/locale/. Use one of these.
If your favourite locale it not among these (or your distribution still lacks
preinstalled UTF-8 locales), no problem:
Any non-root user can easily use for instance
localedef -v -c -i da_DK -f UTF-8 $HOME/local/locale/da_DK.UTF-8
export LOCPATH=$HOME/local/locale
export LANG=da_DK.UTF-8
to generate and activate for instance a Danish UTF-8 locale. The root user can
easily add a new UTF-8 locale for all users via
localedef -v -c -i da_DK -f UTF-8 /usr/share/locale/da_DK.UTF-8
and if root wants to make da_DK.UTF-8 the system-wide default locale for every
user, then adding the line
export LANG=da_DK.UTF-8
into /etc/profile will do the trick.
If you start xterm (XFree86 4.0.2 or newer) after setting LANG to a UTF-8
locale, it will go automatically into UTF-8 mode.
If you have further questions on this matter, please consult the experts on the
linux-utf8 mailing list:
The bug report speaks about behaviour when the requested locale is not
present in the system, and at so far glibc does not fall back to
"C" when it cannot find proper charset:
#include <locale.h>
#include <langinfo.h>
int main(void)
{
char *l = setlocale(LC_ALL, "cs_CZ.UTF-8");
printf ("%s %s\n", l, nl_langinfo(CODESET));
}
gives cs_CZ.UTF-8 ISO-8859-2
on glibc 2.2.2 and
cs_CZ ISO-8859-2
on glibc 2.1.3.
It has been changed in glibc a few minutes ago. | https://bugzilla.redhat.com/show_bug.cgi?id=34176 | CC-MAIN-2019-26 | refinedweb | 734 | 55.74 |
- Type:
New Feature
- Status: Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels:None
The Xiaomi storage team has developed a new feature called HFR(HDFS Federation Rename) that enables us to do balance/rename across federation namespaces. The idea is to first move the meta to the dst NameNode and then link all the replicas. It has been working in our largest production cluster for 2 months. We use it to balance the namespaces. It turns out HFR is fast and flexible. The detail could be found in the design doc.
Looking forward to a lively discussion.
Attachments
- relates to
HDFS-15294 RBF: Balance data across federation namespaces with DistCp and snapshot diff
- Patch Available
HDFS-13123 RBF: Add a balancer tool to move data across subcluster
- Patch Available | https://issues.apache.org/jira/browse/HDFS-15087 | CC-MAIN-2020-24 | refinedweb | 138 | 54.93 |
ISO/IEC JTC1/SC22/WG21/N0799 X3J16/95-0199 *** This is a translation of above document into plain ASCII text. The original *** document is available in PostScript and PDF formats. COLLECTED COMMENTS FROM THE C++ CD BALLOT Sam Harbison October 10, 1995 6:38 pm This document was produced by reformatting the CD ballot comments so as to produce a list of technical issues, sequentially numbered. The National Body (NB) comments are included verbatim; no corrections were made to spelling, etc. All countries except the UK indicated that satisfactory disposition of the indicated comments would change their vote to "approve." (This is a check-box on the CD ballot.) The UK believed that the number of changes would be so great as to require them to look at another CD before changing their vote. Only countries who voted on either the CD or the CD Registration ballots are listed. AUSTRALIA Approved without comments. [John Skaller reports that due to an administrative glitch Australia did not include any comments from him or Fergus.] BELGIUM Did not vote. BRAZIL Approved without comments. CANADA Did not vote. [Possibly the vote came in too late.] CHINA Approved without comments CZECH REPUBLIC Approved without comments. DENMARK Approved without comments. FINLAND Approved without comments. FRANCE Disapproved with comments. AFNOR's comments, already made at the registration stage are still relevant. In particular AFNOR proposal on character sets (in the spirit of Ada standard) has not been taken in account, this is at least one major reason to not change our vote to approval." [I have therefore repeated the original comments from AFNOR on the CD Registration Ballot. -sph] First, France supports the standardization of the C++ programming language and wants this standardization to take effect AS SOON AS POSSIBLE. The C++ reference manual is not enough stable and is still a moving target; as an example, document WG21 N0582 summarizes a non exhaustive list of unresolved issues concerning templates. R-1 France considers that it is time not to STOP defining additional extensions that cannot realistically be reviewed correctly and in a timely planning; the process is too much delayed. A lot of such extensions have been defined during this process and France considers that the very first objective of the standardization IS NOT MET in document N0545, that is, a clear, non-ambiguous, mature and simple definition of the language. R-2 The document is not conformant to ISO directives of drafting and presentation of International Standards. A non exhaustive list of examples is: R-2.a a table of content is missing, R-2.b even though not required by ISO, a glossary shall be included given the context, which is, a complex language with a lot of concepts, R-2.c ISO terminology (and definitions) presented by ISO/SC1 shall be used instead of ANSI/X3-TR-1-82:1982 R-2.d Though not required by ISO, a Rationale is required at this stage of the process. It is unrealistic to make effective review of the proposed standard without a Rationale. R-3 The structure of the proposed reference manual shall be revised. We think that: - The core language shall be separated from the libraries. The core language is what is handled by the compiler. - Language support libraries should stay in the core language. - Other libraries shall be in required (non-optional) normative annexes. R-4 The general structure of the libraries shall be revised. In particular, it is necessary to DECOUPLE libraries. In the current proposal, there are unacceptable cross and forward references between libraries (and sections). The order of introduction of libraries is not adequate. R-5 The character set defined by the reference manual is not acceptable. In fact, programmers from France (and other nations) cannot, given the current definition, use local characters in identifiers, strings and comments. Modern ISO standard programming languages (as ISO8652-95 Ada) provide definition for characters. We recommend that such a definition shall be used in C++. In particular, use of ISO 10646 shall be accepted (see attached document). R-6 We support the proposal in document SC22 N0582 concerning templates. That is, we think that a model of compilation shall be included in the language. R-7 In the same way, we support the introduction of keyword "typename" instead of overloading "typedef". Other important technical comments (on lvalues, void, relation with environment, templates, exceptions, temporaries,...) are not introduced in this list of objections because we think they can be handled during the next stage and also because we need a C++ Rationale. [Here was attached in the original CD Registration Ballot comments pages 9 through 14 of RM9X:5.0, an Ada95 draft (1 June 1994)] GERMANY The German DIN NI-22 has voted on CD 14882 as follows: x Disapproval of the draft for reasons below x Acceptance of these reasons and appropriate changes in the text will change our vote to approval. R-8 One Definition Rule (ODR) The ODR is one of the important requirements for the CD-Ballot already mentioned as R-31 in the Disposition of comments on CD Registration (Doc No: ISO/IEC JTC1/SC22/WG21/N0669) by the German delegation. The ODR is not described in the Draft which is base for this CD-Ballot (although we know that it was defined and voted in for the next issue of the Draft). R-9 Template Compilation Model\x13We doubt that the template model as specified is defined clear enough so that it can be used in a portable way. It is also not clear whether context merging problems keep the user from determining the cause of errors since the context for some specialization has been synthesized by the compiler in a non-obvious and complex way. The context merge might also force implementors to store/retrieve lots of information (persistent symbol table) which can lead to a less efficient implementation. R-10 Locale Library The locale's functionality shall be in accordance with the C locale defined by POSIX and X/OPEN. It is not clear whether this has in principle been accepted for the C++ locale design. The C++ locale should be efficiently implementable on top of the C locale. R-11 Input/Output Library This library seems far from being complete; there is still a lot of open work. We think unless more effort is spent on this part of the library it will fail the schedule. R-12 Other issues The are a lot of outstanding issues listed in the German public review comments (X3J16/95-0132, WG21/N0732) which are not addressed by this Draft. (Table\x111 on page\x114.) Table 1: Germany's "other issues" R-no.; DIN no.; Comment; WP chap.; Related doc(s); Recommendation for ISO/ANSI R-12.a; 1; Predicate throwing; 15; X3J16/95-0093 WG21/N0693; yes R-12.b ; 2; Rethrowing pending exceptions; 15; X3J16/95-0092 WG21/N0692; yes R-12.c ; 3; Incorporate the long long integral data type in C++; 2,3,4,5,7,17,18,22,27; X3J16/95-0115, WG21/N0715; yes, ext-reflector R-12.d ; 4; The default destructor should be virtual automatically; 12.4; Email: 16.6.95; no, need a proposal before R-12.e ; 5; There should be a provision in the language to inquire an objects identity; 9,10; Email 16.6.95; no, use dynamic_cast<void*> R-12.f ; 6; const Pointers to non-const Objects; 8; Proposal 19.6.95; no R-12.g ; 7; Pointers to const Objects; 8; Proposal 19.6.95; no already allowed R-12.h ; 8; Construction and Destruction of const or volatile Objects; 12; Proposal 19.6.95; no R-12.i ; 9; Derived Classes comments; 10; Email: 26.6.95, X3J16: edit-561; yes, 10.3 last sentence is editorial R-12.j ; 10; Numerics Library; 26; Email:26.6.95, X3J16: edit-564; yes, editorial R-12.k ; 11; Missing preprocessing numbers in Chapter 2.3; 2; --, X3J16: edit-562; yes, already a core issue R-12.l ; 12; The return-type of default operator= should be const T&; 12.8; Email: 28.6.95; no R-12.m ; 13; Avoid copy ctor when binding an exception object by reference to catch declaration; 15; via phone; no, bug in some compilers R-12.n ; 14; Why does C++ this archaic header file mechanism?; 16; Email: 29.6.95, DIN #502; no, a modul system was requested R-12.o ; 15; class auto_ptr: get->m; what is m?; 20.4.5, page 20-17; Email: 21.5.95, X3J16: edit-563; yes, edit-reflector R-12.p ; 16; class auto_ptr: the copy ctor should be private to forbid parameter passing of auto_ptr per value.; 20.4.5; Email: 21.5.95, X3J16: edit-566; yes, but need discussion on lib-reflector R-12.q ; 17; The operational semantic for back() is wrong.; 23.1, Table 53, p. 23-5; Email: 21.5.95, X3J16: edit-565; yes, edit-reflector R-12.r ; 18; The meaning of comparing things is not consistent.; 23.1, , p. 23-6; Email: 21.5.95, X3J16: edit-567; yes, edit-reflector R-12.s ; 19; What is "m" ?; 23.3.1, p. 23-32; Email: 21.5.95, X3J16: edit-568; yes, edit-reflector R-12.t ; 20; Logic error: replace !(*i > *j) with !(*i < *j); 25.3.2, p. 25-20; Email:21.5.95, X3J16:lib-3822; yes, lib-reflector R-12.u ; 21; Just titles no contents; 23,24,25; Email: 21.5.95, X3J16: edit-570; yes, edit-reflector; R-12.v ; 22; Findings in chapter 20,23,24,25; 20; Fax: 30.6.95, X3J16: edit-579, lib-3829; yes, edit-reflector, lib-reflector R-12.w ; 24; POD structures; 9; X3J16: edit-575; yes, edit-reflector R-12.x ; 25; Layout compatible types; 3.9; X3J16: edit-576; yes, edit-reflector R-12.y ; 26; Explicit initialization of globals before main; 8; X3J16/95-0093, WG21/N0693; yes, ext-reflector JAPAN [Disapproved with the following comments.] In addition to technical comments, we would like to raise a procedural issue on the current CD ballot. After the draft for voting was sent to each national body, significant amount of modifications to the draft were discussed and agreed on before the voting deadline, i.e., in the Monterey meeting held in July. Since official voting is on the April draft, we find the situation unnatural. Such a situation should be avoided for future voting. In any case, our vote is on April draft, and our comments are on its contents; progress made in Monterey meeting is not reflected in our comments at all. We conducted a review by experts that consists of current working group members, people with experience with other standard activities such as internationalization, and recognized experts in C++ in academic and industrial communities. We are very much aware that the standard as proposed has considerable amount of complexity and the quality of the document remains much to be improved. The national body, however, felt that the current situation reflects the complexity and diversity of programming for modern system development, large and growing popularity of the language, and the effort that majority of people could afford to make under various constraints. When we consider practical alternatives, we believe that the current draft is a reasonable step toward a potential standard, and more effort is needed to stabilize and refine the draft before it is considered for a standard. Among the various comments and issues discussed, we present the following three with the condition mentioned in the first paragraph. Most of other comments are of editorial nature, and they will be provided directly to the editor or to each subcommittee. R-13 A) Implementation Dependent Extensions The current C standard permits implementation dependent extension with the introduction of the notion of conforming implementation as follows. "A confirming implementation may have extensions (including additional library functions), provided they do not alter behavior of any strictly conforming program." (In section 1.7 compliance) This is a very useful rule as one can build an implementation that permits extended set of characters such as Kanji for identifiers within the current framework of the C standard. We propose that the C++ standard should permit this. R-14 B) String We have provided a large number of comments on the draft on the subject. Taka Adachi has already communicated with other members of library subcommittee; some of issues raised are being in agreement. Among them, there are two outstanding issues due to potential of large changes. 1. The dependence of each member function on the traits should be made explicit. 2. Possible elimination of use of charT() as default argument in seven members. - potential of having eos() different from char() and explicit indication of dependence on traits Taka will communicate with library members and work actively to try to reach agreement. R-15 C) IO streams A number of issues and comments were collected and most of them being communicated separately. We will describe one outstanding issue here. The problem is the lack of dependence of type definitions such as int_type, pos_type, off_type and state_type on charT in the definition of templated of ios_trais. For example, considering 'char' specialization, we might define the following. template <class charT> struct ios_traits { . . . . typedef charT char_type; typedef int int_type; . . . . }; We would have to accept int_type as a constant definition in all of the specialized traits, not only in ios_trait<char>, but in ios_traits<wchar_t> and in ios_traits<ultrachar> as well. It would lead to the restriction upon implementations in that all of the charT have to be converted in 'int' range. This may be too restrictive for future wide character types and user-defined character types and user-defined character types. Therefore, consider adopting namespace std( template <class charT> struct ios_traits<charT> {} struct ios_traits<char> { typedef char char_type; typedef int int_type; typedef streampos pos_type; typedef streamoff off_type; typedef mbstate_t state_type; // 27.4.2.2 values: static char_type eos(); static int_type eof(); static int carr* src, size_t n); static state_type get_state(pos_type); static pos_type get_pos(streampos fpos, state_type state); }; struct ios_traits<wchar_t> { typedef wchar_t char_type; typedef wint_t- int_type; typedef wstreampos pos_type; typedef wstreamoff off_type; typedef mbstate_t state_type; // 27.4.2.2 values: static char_type eos(); static int_type eof(); static char char* src, size_t n); static state_type get_state(pos_type); static pos_type get_pos(streampos fpos, state_type state); }; } As a result of the separation of the two specializations, we have to change the descriptions in [lib.streams.types], as follows; *** 27.4.1 Types typedef OFF_T streamoff; The type streamoff is an implementation-defined type that satisfies the requirements of type OFF_T. typedef WOFF_T wstreamoff; The type wstreamoff is an implementation-defined type that satisfies the requirements of type WOFF_T. typedef POS_T streampos; The type streampos is an implementation-defined type that satisfies the requirements of type POS_T. typedef WPOS_T wstreampos; The type wstreampos is an implementation-defined type that satisfies the requirements of type WOPS_T. typedef SIZE_T streamsize; The type streamsize is a synonym for one of the signed basic integral types. It is used to represent the number of characters transferred in an I/O operations, or the size of I/O buffers. The above approach can be found in "defining nothing in the template version of traits and defining everything in each specializations" by N. Kumagai (X3J16/94-0083). We regret that mistakes in the document for Austin (X1J16/95-0064) caused to introduce such inappropriate definitions to the current WP. We should not put any definitions (static member functions or typedefs) related to int_type, off_type, pos_type and/or state_type in the template definition of the traits. The reason is that these three types depend on the template parameter class 'charT' for variety of environments (ASCII, stateless encoding for double byte characters, and UniCode). For example, charT char wchar_t int_type int wint_t off_type streamoff wstreamoff pos_type streampos wstreampos state_type mbstate_t mbstate_t Note that the two of the above types, 'wint_t' 'mbstate_t' are defined in C Amendment 1 (or MSE). We cannot assume that two implementation-defined types, streampos and wstreampos have the same definitions because under some shift encoding, wstreampos has to keep an additional information, the shift state, as well as the file position. We should represent them with two different symbols. POS_T and WPOS_T so as to give a chance to provide separate definitions for these two specifications. For pos_type in both of specialized traits, 'mbstate_t' is introduced from C Amendment 1 (or former MSE) and is an implementation-defined type to represent any of shift states in file encoding. The type, INT_T is not suitable for the definition of streamsize because INT_T represents another character type whose meaning is to specify the definitions of streampos. NETHERLANDS [Disapproved with the following comments.] R-16 1. The document has a normative reference to the C standard (ISO/IEC 9899). However, C++ deviates from C in many places. Still, nowhere in the C++ standard these deviations are made explicit (other than in an informative annex which strictly speaking is no part of the standard). The compliance section should make clear what happens when a program makes use of these deviating features. As it stands, it is not written anywhere that the 'new' (C++) specification takes precedence over the 'old' (C) specification. R-17 2. The document is virtually un-readable for non-experts. Obviously, the document is not meant to teach a person C++. However, for the document to be useful, it should be reasonably accessible (i.e., readable) for any person with a reasonable experience in C++. It is clearly possible to write a standard that way (e.g. the C standard). Furthermore, it is clearly possible to write a definition of C++ that way (e.g., the books of B. Stroustrup). R-18 3. The document is not very clear about what can be expected of a C++ implementation. A more precise definition of what is required of an implementation and what freedom the implementor has is needed. As an example, the C standard can be looked at, with sections on constraints (strict) and semantics (less strict). R-19 4. The standard library contains far too many classes and features. It is clear that C++ (like C) needs a standard library. The reason for this is that the language itself has no method for communication with the system (I/O, date/time and other system dependencies). A standard library for these system-dependent features is necessary to allow portable code to be written. The C standard was focussed upon these issues, however the proposed C++ standard library introduces many classes and features (in particular standard data structures and algorithms) that do not add to the portability of the language. This is not to say that it would not be nice to have standard definitions for these features, but it should not be part of the C++ language standard. Adding these features to the C++ standard complicates both the implementation (e.g. for embedded systems) and the learning of the language. Furthermore, it is not clear at the moment whether the features included in the library are really what is needed. In fact, if one considers the currently available commercial libraries as an indication, there is much more need for a standard library for handling GUI's. This is understandable, since GUI interfaces are generally system dependent and a standard library for that would greatly improve portability. This cannot be said about e.g. a list data structure. A telling sign about the need for the features of the current library is that currently it is not supported by any of the major C++ development environments. R-20 5. NNI objects to the fact that the complete library has been defined in terms of templates. This makes it more difficult to use the library without a complete and in-depth knowledge of the language. A grow-path from C to C++ is also not facilitated by the use of templates in the library. Furthermore, it makes it impossible to implement the library on older systems (where templates might not be supported yet) and on smaller (e.g. embedded) systems (where templates are not implemented to save space). R-21 6. NNI requires that all outstanding issues on open issue lists (like the one in SC22/N1885) are dealt with in a satisfactory manner. Minimal changes needed to change our vote to a (reluctant) yes are: - Solving the issues mentioned in item 6 - Deleting the library from the standard (except for the language support libraries and the C standard library). (This would solve items 4+5). NEW ZEALAND Did not vote. ROMANIA Approved without comments. SLOVENIA Approved without comments. SWEDEN [Disapproved with the following comments.] Note: document numbers of the for 95-0151/N0751 refer to SC22/WG21 documents. R-22 Progress of SC22/WG21 We are generally pleased with the current rate of progress of SC22/WG21. However, as a large number of minor issues remain to be considered and resolved, we favour submitting a second Committee Draft for balloting. We are convinced that the document will be ready for the DIS stage after the second CD ballot. The delay caused by submitting a second CD is unfortunate, but we nevertheless support Schedule Scenario #2 of document 95-0151/N0751. [Scenario 2 is the "second CD with fast turnaround." The schedule we have given SC22 is for "second CD with 2-meeting (slow) turnaround." -sph] R-23 Issues lists The most important work in the near future is to resolve all known outstanding problems, as documented by several "issues lists " distributed in SC22/WG21. R-24 Clarification of templates [temp] We believe that the template source model, the template compilation model, and the practical implications of both, need to be further clarified. In particular, there are several issues of great practical importance that cannot be discussed and analyzed because the necessary framework is missing. For example, R-24.a Building libraries that are distributed in binary form. R-24.b Distributed development, involving the combination of several libraries that may reference other common libraries. R-24.c Shared libraries containing templates. R-24.d Support for large projects, for example, if template instantiation can be implemented with linear algorithms. R-24.e Portability of the source model. R-24.f Early diagnostics. R-25 Copying semantics of auto_ptr [lib.auto.ptr] The current semantics of the copy-constructor and the assignment operator of auto_ptr are error-prone, without providing great utility. We would instead prefer a distinct member function for transferring ownership from one auto_ptr object to another. In essence we prefer the semantics described in document 94-0202/N0589 over the semantics described in the revised version 94-0202R1/N0589R1. R-26 Division of negative integers [expr.mul] Paragraph 4: The value returned by the integer division and remainder operations shall be defined by the standard, and not be implementation defined. The rounding should be towards minus infinity. E.g., the value of the C expression (-7)/2 should be defined to be -4, not implementation defined. This way the following useful equalities hold (when there is no overflow, nor "division by zero "): (i+m*n)/n == (i/n) + m for all integer values m (i+m*n)%n == (i%n) for all integer values m These useful equalities do not hold when rounding is towards zero. If towards 0 is desired, it can easily be defined in terms of the round towards minus infinity variety, whereas the other way around is trickier and much more error-prone. R-27 ISO Latin-1 in identifiers [lex.name] We support the French CDR ballot comment suggesting that C++ allow letters of ISO 8859-1 (Latin-1) in identifiers. [See R-5.] R-28 Strengthening of bool datatype [conv.bool] The original proposal for a Boolean datatype (called bool) provided some additional type-safety at little cost. SC22/WG21 changed the proposal to allow implicit conversion from int to bool, thereby reducing type-safety and error detectability. The implicit conversion from int to bool shall be deprecated, as described in document 93- 0143/N0350. As a future work-item, the implicit conversion should be removed. R-29 Definition of inline functions [dcl.fct.spec] In paragraph 3 it is stated that "A call to an inline function shall not precede its definition. " One consequence of this restriction is that adding an inline keyword (which is only a recommendation, just like register) can make a program ill-formed. We suggest that this restriction is removed. R-30 Declaration of ios_base::iword [lib.ios.base.storage] Function iword() returns a long& while the text seems to imply that the array is an array of type int. We believe the return value shall be int&. R-31 Values of bool type [basic.fundamental] Footnote 26 is confusing and shall be removed. Footnotes are not normative. R-32 Language Independent Arithmetic We generally support a binding to LIA-1 (ISO/IEC 10967-1) in C and C++. Future work should track this standard and subsequent standards, e.g., LIA-2. If the current wording of the standard meets LIA-1, in particular the specification of the numeric_limits class, the standard should explicitly reference document ISO/IEC 10967-1. R-33 Unique type introduced by typedef One of the Swedish comments on the CD suggested a new typedef that would introduce a distinct type. We think this issue should be further investigated at some point in the future. The proposed syntax is: typedef explicit int Height; typedef explicit int Length; SWITZERLAND Approved without comments. UKRAINE Approved without comments. UNITED KINGDOM The UK votes NO to the CD ballot. The technical issues that the UK wants to see addressed are in the accompanying document 95/0013. The UK feels that so many substantive changes need to be made to the Draft Standard that it would be impossible for the vote to be changed to YES without first seeing the revised document, even if all our technical issues were accepted unchanged. It should also be noted that it is our opinion that the document submitted was not stable enough to warrant a CD ballot, and did not include the editorial boxes, some of which dealt with contentious issues. WG21 has been actively pursuing changes, some of which are contrary to the CD document, during the ballot process. ... R-34 ... The UK submits that it is necessary to have a period of stability prior to any future ballots: standards have a long term impact and rushing them into print for short term needs often results in very costly post-publication maintenance and public confusion. R-35 [Document 95-0013 contains on the order of 300 issues, some of which are editorial.] UNITED STATES Abstain. [The US TAG was unable to develop an official position that they could transmit to the ANSI.] [End of collected comments] | http://www.open-std.org/jtc1/sc22/wg21/docs/papers/1995/N0799.htm | crawl-002 | refinedweb | 4,503 | 56.55 |
/* Define frame-object for GNU <>. */
#ifndef EMACS_FRAME_H
#define EMACS_FRAME_H
#include "termhooks.h"
#include "window.h"
INLINE_HEADER_BEGIN
enum vertical_scroll_bar_type
{
vertical_scroll_bar_none,
vertical_scroll_bar_left,
vertical_scroll_bar_right
};
#ifdef HAVE_WINDOW_SYSTEM
enum fullscreen_type
{
FULLSCREEN_NONE,
FULLSCREEN_WIDTH = 0x1,
FULLSCREEN_HEIGHT = 0x2,
FULLSCREEN_BOTH = 0x3, /* Not a typo but means "width and height". */
FULLSCREEN_MAXIMIZED = 0x4,
#ifdef HAVE_NTGUI
FULLSCREEN_WAIT = 0x8
#endif
};
enum z_group
{
z_group_none,
z_group_above,
z_group_below,
z_group_above_suspended,
};
enum internal_border_part
{
INTERNAL_BORDER_NONE,
INTERNAL_BORDER_LEFT_EDGE,
INTERNAL_BORDER_TOP_LEFT_CORNER,
INTERNAL_BORDER_TOP_EDGE,
INTERNAL_BORDER_TOP_RIGHT_CORNER,
INTERNAL_BORDER_RIGHT_EDGE,
INTERNAL_BORDER_BOTTOM_RIGHT_CORNER,
INTERNAL_BORDER_BOTTOM_EDGE,
INTERNAL_BORDER_BOTTOM_LEFT_CORNER,
};
#ifdef NS_IMPL_COCOA
enum ns_appearance_type
{
ns_appearance_aqua,
ns_appearance_vibrant_dark
};
#endif
#endif /* HAVE_WINDOW_SYSTEM */
/* The structure representing a frame. */
struct frame
{
union vectorlike_header header;
/* All Lisp_Object components must come;
#if defined (HAVE_WINDOW_SYSTEM)
/* This frame's parent frame, if it has one. */
Lisp_Object parent_frame;
#endif /* HAVE_WINDOW_SYSTEM */
/* selected window when run_window_change_functions was
called the last time on this frame. */
Lisp_Object old ;
/* Predicate for selecting buffers for other-buffer. */
Lisp_Object buffer_predicate;
/* List of buffers viewed in this frame, for other-buffer. */
Lisp_Object buffer_list;
/* List of buffers that were viewed, then buried in this frame. The
most recently buried buffer is first. For last-buffer. */
Lisp_Object buried_buffer_list;
#if defined (HAVE_X_WINDOWS) && ! defined (USE_X_TOOLKIT) && ! defined (USE_GTK)
/* A dummy window used to display menu bars under X when no X
toolkit support is available. */
Lisp_Object menu_bar_window;
#if defined (HAVE_WINDOW_SYSTEM) && ! defined (HAVE_EXT_TOOL_BAR)
/* A window used to display the tool-bar of a frame. */
Lisp_Object tool_bar_window;
/* Desired and current contents displayed in that window. */
Lisp_Object desired_tool_bar_string;
Lisp_Object current_tool_bar_string;
#endif
#ifdef USE_GTK
/* Where tool bar is, can be left, right, top or bottom.
Except with GTK, the only supported position is `top'. */
Lisp_Object tool_bar_position;
#if defined (HAVE_XFT) || defined (HAVE_FREETYPE)
/* List of data specific to font-driver and frame, but common to faces. */
Lisp_Object font_data;
#endif
/* Desired and current tool-bar items. */
Lisp_Object tool_bar_items;
/* tool_bar_items should be the last Lisp_Object member. */
/* Cache of realized faces. */
struct face_cache *face_cache;
/* Tool-bar item index of the item on which a mouse button was pressed. */
int last_tool_bar_item;
#endif
/* Number of elements in `menu_bar_vector' that have meaningful data. */
int menu_bar_items_used;
#if defined (USE_X_TOOLKIT) || defined (HAVE_NTGUI)
/* A buffer to hold the frame's name. Since this is used by the
window system toolkit, we can't use the Lisp string's pointer
(`name', above) because it might get relocated. */
char *namebuf;
#ifdef USE_X_TOOLKIT
/* Used to pass geometry parameters to toolkit functions. */
char *shell_position;
#endif
/* Glyph pool and matrix. */
struct glyph_pool *current_pool;
struct glyph_pool *desired_pool;
struct glyph_matrix *desired_matrix;
struct glyph_matrix *current_matrix;
/* Bitfield area begins here. Keep them together to avoid extra padding. */
/* True means that glyphs on this frame have been initialized so it can
be used for output. */
bool_bf glyphs_initialized_p : 1;
/* Set to true in change_frame_size when size of frame changed
Clear the frame in clear_garbaged_frames if set. */
bool_bf resized_p : 1;
/* Set to true if the default face for the frame has been
realized. Reset to zero whenever the default face changes.
Used to see the difference between a font change and face change. */
bool_bf default_face_done_p : 1;
/* Set to true if this frame has already been hscrolled during
current redisplay. */
bool_bf already_hscrolled_p : 1;
/* Set to true when current redisplay has updated frame. */
bool_bf updated_p : 1;
/* Set to true to minimize tool-bar height even when
auto-resize-tool-bar is set to grow-only. */
bool_bf minimize_tool_bar_window_p : 1;
#ifdef HAVE_EXT_TOOL_BAR
/* True means using a tool bar that comes from the toolkit. */
bool_bf external_tool_bar : 1;
#endif
/* True means that fonts have been loaded since the last glyph
matrix adjustments. */
bool_bf fonts_changed : 1;
/* True means that cursor type has been changed. */
bool_bf cursor_type_changed : 1;
/* True if it needs to be redisplayed. */
bool_bf redisplay : 1;
#ifdef HAVE_EXT_MENU_BAR
/* True means using a menu bar that comes from the toolkit. */
bool_bf external_menu_bar : 1;
#endif
/* Next two bitfields are mutually exclusive. They might both be
zero if the frame has been made invisible without an icon. */
/* Nonzero if the frame is currently displayed; we check
it to see if we should bother updating the frame's contents.
On ttys and on Windows NT/9X, to avoid wasting effort updating
visible frames that are actually completely obscured by other
windows on the display, we bend the meaning of visible slightly:
if equal to 2, then the frame is obscured - we still consider
it to be "visible" as seen from lisp, but we don't bother
updating it. */
unsigned visible : 2;
/* True if the frame is currently iconified. Do not
set this directly, use SET_FRAME_ICONIFIED instead. */
bool_bf iconified : 1;
/* True if this frame should be fully redisplayed. Disables all
optimizations while rebuilding matrices and redrawing. */
bool_bf garbaged : 1;
/* False means, if this frame has just one window,
show no modeline for that window. */
bool_bf wants_modeline : 1;
/* True means raise this frame to the top of the heap when selected. */
bool_bf auto_raise : 1;
/* True means lower this frame to the bottom of the stack when left. */
bool_bf auto_lower : 1;
/* True if frame's root window can't be split. */
bool_bf no_split : 1;
/* If this is set, then Emacs won't change the frame name to indicate
the current buffer, etcetera. If the user explicitly sets the frame
name, this gets set. If the user sets the name to Qnil, this is
cleared. */
bool_bf explicit_name : 1;
/* True if at least one window on this frame changed since the last
call of run_window_change_functions. Changes are either "state
changes" (a window has been created, deleted or got assigned
another buffer) or "size changes" (the total or body size of a
window changed). run_window_change_functions exits early unless
either this flag is true or a window selection happened on this
frame. */
bool_bf window_change : 1;
/* True if running window state change functions has been explicitly
requested for this frame since last redisplay. */
bool_bf window_state_change : 1;
/* True if the mouse has moved on this display device
since the last time we checked. */
bool_bf mouse_moved : 1;
/* True means that the pointer is invisible. */
bool_bf pointer_invisible : 1;
/* True means that all windows except mini-window and
selected window on this frame have frozen window starts. */
bool_bf frozen_window_starts : 1;
/* The output method says how the contents of this frame are
displayed. It could be using termcap, or using an X window.
This must be the same as the terminal->type. */
ENUM_BF (output_method) output_method : 3;
#ifdef HAVE_WINDOW_SYSTEM
/* True if this frame is a tooltip frame. */
bool_bf tooltip : 1;
/* See FULLSCREEN_ enum on top. */
ENUM_BF (fullscreen_type) want_fullscreen : 4;
/* If not vertical_scroll_bar_none, we should actually
display the scroll bars of this type on this frame. */
ENUM_BF (vertical_scroll_bar_type) vertical_scroll_bar_type : 2;
/* Nonzero if we should actually display horizontal scroll bars on this frame. */
bool_bf horizontal_scroll_bars : 1;
/* True if this is an undecorated frame. */
bool_bf undecorated : 1;
#ifndef HAVE_NTGUI
/* True if this is an override_redirect frame. */
bool_bf override_redirect : 1;
#endif
/* Nonzero if this frame's icon should not appear on its display's taskbar. */
bool_bf skip_taskbar : 1;
/* Nonzero if this frame's window F's X window does not want to
receive input focus when it is mapped. */
bool_bf no_focus_on_map : 1;
/* Nonzero if this frame's window does not want to receive input focus
via mouse clicks or by moving the mouse into it. */
bool_bf no_accept_focus : 1;
/* The z-group this frame's window belongs to. */
ENUM_BF (z_group) z_group : 2;
/* Non-zero if display of truncation and continuation glyphs outside
the fringes is suppressed. */
bool_bf no_special_glyphs : 1;
#endif /* HAVE_WINDOW_SYSTEM */
/* Whether new_height and new_width shall be interpreted
in pixels. */
bool_bf new_pixelwise : 1;
/* True means x_set_window_size requests can be processed for this
frame. */
bool_bf can_x_set_window_size : 1;
/* Set to true after this frame was made by `make-frame'. */
bool_bf after_make_frame : 1;
/* Whether the tool bar height change should be taken into account. */
bool_bf tool_bar_redisplayed : 1;
bool_bf tool_bar_resized : 1;
/* Inhibit implied resize before after_make_frame is set. */
bool_bf inhibit_horizontal_resize : 1;
bool_bf inhibit_vertical_resize : 1;
/* Non-zero if this frame's faces need to be recomputed. */
bool_bf face_change : 1;
/* Non-zero if this frame's image cache cannot be freed because the
frame is in the process of being redisplayed. */
bool_bf inhibit_clear_image_cache : 1;
/* Bitfield area ends here. */
/* This frame's change stamp, set the last time window change
functions were run for this frame. Should never be 0 because
that's the change stamp of a new window. A window was not on a
frame the last run_window_change_functions was called on it if
it's change stamp differs from that of its frame. */
int change_stamp;
/* This frame's number of windows, set the last time window change
functions were run for this frame. Should never be 0 even for
minibuffer-only frames. If no window has been added, this allows
to detect whether a window was deleted on this frame since the
last time run_window_change_functions was called on it. */
ptrdiff_t number_of_windows;
/* Number of lines (rounded up) of tool bar. REMOVE THIS */
int tool_bar_lines;
/* Height of frame internal tool bar in pixels. */
int tool_bar_height;
int n_tool_bar_rows;
int n_tool_bar_items;
/* A buffer for decode_mode_line. */
char *decode_mode_spec_buffer;
/*;
/* Text width of this frame (excluding fringes, vertical scroll bar
and internal border widths) and text height (excluding menu bar,
tool bar, horizontal scroll bar and internal border widths) in
units of canonical characters. */
int text_cols, text_lines;
/* Total width of this frame (including fringes, vertical scroll bar
and internal border widths) and total height (including menu bar,
tool bar, horizontal scroll bar and internal border widths) in
units of canonical characters. */
int total_cols, total_lines;
/* Text width of this frame (excluding fringes, vertical scroll bar
and internal border widths) and text height (excluding menu bar,
tool bar, horizontal scroll bar and internal border widths) in
pixels. */
int text_width, text_height;
/* New text height and width for pending size change. 0 if no change
pending. These values represent pixels or canonical character units
according to the value of new_pixelwise and correlate to the
text width/height of the frame. */
int new_width, new_height;
/* Pixel position of the frame window (x and y offsets in root window). */
int left_pos, top_pos;
/* Total width of this frame (including fringes, vertical scroll bar
and internal border widths) and total height (including internal
menu and tool bars, horizontal scroll bar and internal border
widths) in pixels. */
int pixel_width, pixel_height;
/* This is the gravity value for the specified window position. */
int win_gravity;
/* The geometry flags for this window. */
int size_hint_flags;
/* Border width of the frame window as known by the (X) window system. */
int border_width;
/* Width of the internal border. This is a line of background color
just inside the window's border. When the frame is selected,
a highlighting is displayed inside the internal border. */
int internal_border_width;
/* Widths of dividers between this frame's windows in pixels. */
int right_divider_width, bottom_divider_width;
/* Widths of fringes in pixels. */
int left_fringe_width, right_fringe_width;
/* Total width of fringes reserved for drawing truncation bitmaps,
continuation bitmaps and alike - REMOVE THIS !!!!. */
int fringe_cols;
/* Number of lines of menu bar. */
int menu_bar_lines;
/* Pixel height of menubar. */
int menu_bar_height;
/* Canonical X unit. Width of default font, in pixels. */
int column_width;
/* Canonical Y unit. Height of a line, in pixels. */
int line_height;
/* The terminal device that this frame uses. If this is NULL, then
the frame has been deleted. */
struct terminal *terminal;
/* Device-dependent, frame-local auxiliary data used for displaying
the contents. When the frame is deleted, this data is deleted as
well. */
union output_data
{
struct tty_output *tty; /* From termchar.h. */
struct x_output *x; /* From xterm.h. */
struct w32_output *w32; /* From w32term.h. */
struct ns_output *ns; /* From nsterm.h. */
intptr_t nothing;
}
output_data;
/* List of font-drivers available on the frame. */
struct font_driver_list *font_driver_list;
#if defined (HAVE_X_WINDOWS)
/* Used by x_wait_for_event when watching for an X event on this frame. */
int wait_event_type;
#endif
/* What kind of text cursor should we draw in the future?
This should always be filled_box_cursor or bar_cursor. */
enum text_cursor_kinds desired_cursor;
/* Width of bar cursor (if we are using that). */
int cursor_width;
/* What kind of text cursor should we draw when the cursor blinks off?
This can be filled_box_cursor or bar_cursor or no_cursor. */
enum text_cursor_kinds blink_off_cursor;
/* Width of bar cursor (if we are using that) for blink-off state. */
int blink_off_cursor_width;
/* Configured width of the scroll bar, in pixels and in characters.
config_scroll_bar_cols tracks config_scroll_bar_width if the
latter is positive; a zero value in config_scroll_bar_width means
to compute the actual width on the fly, using config_scroll_bar_cols
and the current font width. */
int config_scroll_bar_width;
int config_scroll_bar_cols;
/* Configured height of the scroll bar, in pixels and in characters.
config_scroll_bar_lines tracks config_scroll_bar_height if the
latter is positive; a zero value in config_scroll_bar_height means
to compute the actual width on the fly, using
config_scroll_bar_lines and the current font width. */
int config_scroll_bar_height;
int config_scroll_bar_lines;
/* The baud rate that was used to calculate costs for this frame. */
intmax_t cost_calculation_baud_rate;
/* Frame opacity
alpha[0]: alpha transparency of the active frame
alpha[1]: alpha transparency of inactive frames
Negative values mean not to change alpha. */
double alpha[2];
/* Exponent for gamma correction of colors. 1/(VIEWING_GAMMA *
SCREEN_GAMMA) where viewing_gamma is 0.4545 and SCREEN_GAMMA is a
frame parameter. 0 means don't do gamma correction. */
double gamma;
/* Additional space to put between text lines on this frame. */
int extra_line_spacing;
/* All display backends seem to need these two pixel values. */
unsigned long background_pixel;
unsigned long foreground_pixel;
#ifdef NS_IMPL_COCOA
/* NSAppearance theme used on this frame. */
enum ns_appearance_type ns_appearance;
bool_bf ns_transparent_titlebar;
#endif
} GCALIGNED_STRUCT;
/* Most code should use these functions to set Lisp fields in struct frame. */
INLINE void
fset_buffer_list (struct frame *f, Lisp_Object val)
{
f->buffer_list = val;
}
fset_buried_buffer_list (struct frame *f, Lisp_Object val)
{
f->buried_buffer_list = val;
}
fset_condemned_scroll_bars (struct frame *f, Lisp_Object val)
{
f->condemned_scroll_bars = val;
}
fset_face_alist (struct frame *f, Lisp_Object val)
{
f->face_alist = val;
}
INLINE void
fset_parent_frame (struct frame *f, Lisp_Object val)
{
f->parent_frame = val;
}
#endif
fset_focus_frame (struct frame *f, Lisp_Object val)
{
f->focus_frame = val;
}
fset_icon_name (struct frame *f, Lisp_Object val)
{
f->icon_name = val;
}
fset_menu_bar_items (struct frame *f, Lisp_Object val)
{
f->menu_bar_items = val;
}
fset_menu_bar_vector (struct frame *f, Lisp_Object val)
{
f->menu_bar_vector = val;
}
fset_menu_bar_window (struct frame *f, Lisp_Object val)
{
f->menu_bar_window = val;
}
fset_name (struct frame *f, Lisp_Object val)
{
f->name = val;
}
fset_param_alist (struct frame *f, Lisp_Object val)
{
f->param_alist = val;
}
fset_root_window (struct frame *f, Lisp_Object val)
{
f->root_window = val;
}
fset_scroll_bars (struct frame *f, Lisp_Object val)
{
f->scroll_bars = val;
}
fset_selected_window (struct frame *f, Lisp_Object val)
{
f->selected_window = val;
}
fset_old_selected_window (struct frame *f, Lisp_Object val)
{
f->old_selected_window = val;
}
INLINE void
fset_title (struct frame *f, Lisp_Object val)
{
f->title = val;
}
fset_tool_bar_items (struct frame *f, Lisp_Object val)
{
f->tool_bar_items = val;
}
fset_tool_bar_position (struct frame *f, Lisp_Object val)
{
f->tool_bar_position = val;
}
#endif /* USE_GTK */
fset_tool_bar_window (struct frame *f, Lisp_Object val)
{
f->tool_bar_window = val;
}
fset_current_tool_bar_string (struct frame *f, Lisp_Object val)
{
f->current_tool_bar_string = val;
}
fset_desired_tool_bar_string (struct frame *f, Lisp_Object val)
{
f->desired_tool_bar_string = val;
}
#endif /* HAVE_WINDOW_SYSTEM && !USE_GTK && !HAVE_NS */
INLINE double
NUMVAL (Lisp_Object x)
{
return NUMBERP (x) ? XFLOATINT (x) : -1;
}
INLINE double
default_pixels_per_inch_x (void)
{
Lisp_Object v = (CONSP (Vdisplay_pixels_per_inch)
? XCAR (Vdisplay_pixels_per_inch)
: Vdisplay_pixels_per_inch);
return NUMVAL (v) > 0 ? NUMVAL (v) : 72.0;
}
default_pixels_per_inch_y (void)
{
Lisp_Object v = (CONSP (Vdisplay_pixels_per_inch)
? XCDR (Vdisplay_pixels_per_inch)
: Vdisplay_pixels_per_inch);
return NUMVAL (v) > 0 ? NUMVAL (v) : 72.0;
}
#define FRAME_KBOARD(f) ((f)->terminal->kboard)
/* Return a pointer to the image cache of frame F. */
#define FRAME_IMAGE_CACHE(F) ((F)->terminal->image_cache)
#define XFRAME(p) \
(eassert (FRAMEP (p)), XUNTAG (p, Lisp_Vectorlike, struct frame))
#define XSETFRAME(a, b) (XSETPSEUDOVECTOR (a, b, PVEC_FRAME))
/* Given a window, return its frame as a Lisp_Object. */
#define WINDOW_FRAME(w) ((w)->frame)
/* Test a frame for particular kinds of display methods. */
#define FRAME_INITIAL_P(f) ((f)->output_method == output_initial)
#define FRAME_TERMCAP_P(f) ((f)->output_method == output_termcap)
#define FRAME_X_P(f) ((f)->output_method == output_x_window)
#ifndef HAVE_NTGUI
#define FRAME_W32_P(f) false
#else
#define FRAME_W32_P(f) ((f)->output_method == output_w32)
#endif
#ifndef MSDOS
#define FRAME_MSDOS_P(f) false
#define FRAME_MSDOS_P(f) ((f)->output_method == output_msdos_raw)
#endif
#ifndef HAVE_NS
#define FRAME_NS_P(f) false
#define FRAME_NS_P(f) ((f)->output_method == output_ns)
/*)
#ifdef HAVE_NS
#define FRAME_WINDOW_P(f) FRAME_NS_P(f)
#endif
#ifndef FRAME_WINDOW_P
#define FRAME_WINDOW_P(f) ((void) (f), false)
/* Dots per inch of the screen the frame F is on. */
#ifdef HAVE_WINDOW_SYSTEM
#define FRAME_RES_X(f) \
(eassert (FRAME_WINDOW_P (f)), FRAME_DISPLAY_INFO (f)->resx)
#define FRAME_RES_Y(f) \
(eassert (FRAME_WINDOW_P (f)), FRAME_DISPLAY_INFO (f)->resy)
#else /* !HAVE_WINDOW_SYSTEM */
/* Defaults when no window system available. */
#define FRAME_RES_X(f) default_pixels_per_inch_x ()
#define FRAME_RES_Y(f) default_pixels_per_inch_y ()
#endif /* HAVE_WINDOW_SYSTEM */
/* Return a pointer to the structure holding information about the
region of text, if any, that is currently shown in mouse-face on
frame F. We need to define two versions because a TTY-only build
does not have FRAME_DISPLAY_INFO. */
#ifdef HAVE_WINDOW_SYSTEM
# define MOUSE_HL_INFO(F) \
(FRAME_WINDOW_P(F) \
? &FRAME_DISPLAY_INFO(F)->mouse_highlight \
: &(F)->output_data.tty->display_info->mouse_highlight)
#else
# define MOUSE_HL_INFO(F) \
(&(F)->output_data.tty->display_info->mouse_highlight)
/* True if frame F is still alive (not deleted). */
#define FRAME_LIVE_P(f) ((f)->terminal != 0)
/* True if frame F is a minibuffer-only frame. */
#define FRAME_MINIBUF_ONLY_P(f) \
EQ (FRAME_ROOT_WINDOW (f), FRAME_MINIBUF_WINDOW (f))
/* True if frame F contains it's own minibuffer window. Frame always has
minibuffer window, but it could use minibuffer window of another frame. */
#define FRAME_HAS_MINIBUF_P(f) \
(WINDOWP (f->minibuffer_window) \
&& XFRAME (XWINDOW (f->minibuffer_window)->frame) == f)
/* Pixel width of frame F. */
#define FRAME_PIXEL_WIDTH(f) ((f)->pixel_width)
/* Pixel height of frame F. */
#define FRAME_PIXEL_HEIGHT(f) ((f)->pixel_height)
/* Width of frame F, measured in canonical character columns,
not including scroll bars if any. */
#define FRAME_COLS(f) (f)->text_cols
/* Height of frame F, measured in canonical lines, including
non-toolkit menu bar and non-toolkit tool bar lines. */
#define FRAME_LINES(f) (f)->text_lines
/* Width of frame F, measured in pixels not including the width for
fringes, scroll bar, and internal borders. */
#define FRAME_TEXT_WIDTH(f) (f)->text_width
/* Height of frame F, measured in pixels not including the height
for scroll bar and internal borders. */
#define FRAME_TEXT_HEIGHT(f) (f)->text_height
/* Number of lines of frame F used for menu bar.
This is relevant on terminal frames and on
X Windows when not using the X toolkit.
These lines are counted in FRAME_LINES. */
#define FRAME_MENU_BAR_LINES(f) (f)->menu_bar_lines
/* Pixel height of frame F's menu bar. */
#define FRAME_MENU_BAR_HEIGHT(f) (f)->menu_bar_height
/* True if this frame should display a tool bar
in a way that does not use any text lines. */
#define FRAME_EXTERNAL_TOOL_BAR(f) (f)->external_tool_bar
#else
#define FRAME_EXTERNAL_TOOL_BAR(f) false
/* This is really supported only with GTK. */
#ifdef USE_GTK
#define FRAME_TOOL_BAR_POSITION(f) (f)->tool_bar_position
#else
#define FRAME_TOOL_BAR_POSITION(f) ((void) (f), Qtop)
/* Number of lines of frame F used for the tool-bar. */
#define FRAME_TOOL_BAR_LINES(f) (f)->tool_bar_lines
/* Pixel height of frame F's tool-bar. */
#define FRAME_TOOL_BAR_HEIGHT(f) (f)->tool_bar_height
/* Lines above the top-most window in frame F. */
#define FRAME_TOP_MARGIN(F) \
(FRAME_MENU_BAR_LINES (F) + FRAME_TOOL_BAR_LINES (F))
/* Pixel height of frame F's top margin. */
#define FRAME_TOP_MARGIN_HEIGHT(F) \
(FRAME_MENU_BAR_HEIGHT (F) + FRAME_TOOL_BAR_HEIGHT (F))
/* True if this frame should display a menu bar
#ifdef HAVE_EXT_MENU_BAR
#define FRAME_EXTERNAL_MENU_BAR(f) (f)->external_menu_bar
#define FRAME_EXTERNAL_MENU_BAR(f) false
/* True if frame F is currently visible. */
#define FRAME_VISIBLE_P(f) (f)->visible
/* True if frame F is currently visible but hidden. */
#define FRAME_OBSCURED_P(f) ((f)->visible > 1)
/* True if frame F is currently iconified. */
#define FRAME_ICONIFIED_P(f) (f)->iconified
/* Mark frame F as currently garbaged. */
#define SET_FRAME_GARBAGED(f) \
(frame_garbaged = true, fset_redisplay (f), f->garbaged = true)
/* True if frame F is currently garbaged. */
#define FRAME_GARBAGED_P(f) (f)->garbaged
/* True means do not allow splitting this frame's window. */
#define FRAME_NO_SPLIT_P(f) (f)->no_split
/* Not really implemented. */
#define FRAME_WANTS_MODELINE_P(f) (f)->wants_modeline
/* True if all windows except selected window and mini window
are frozen on frame F. */
#define FRAME_WINDOWS_FROZEN(f) (f)->frozen_window_starts
/* True if at least one window changed on frame F since the last time
window change functions were run on F. */
#define FRAME_WINDOW_CHANGE(f) (f)->window_change
/* True if running window state change functions has been explicitly
requested for this frame since last redisplay. */
#define FRAME_WINDOW_STATE_CHANGE(f) (f)->window_state_change
/* The minibuffer window of frame F, if it has one; otherwise nil. */
#define FRAME_MINIBUF_WINDOW(f) f->minibuffer_window
/* The root window of the window tree of frame F. */
#define FRAME_ROOT_WINDOW(f) f->root_window
/* The currently selected window of frame F. */
#define FRAME_SELECTED_WINDOW(f) f->selected_window
/* The old selected window of frame F. */
#define FRAME_OLD_SELECTED_WINDOW(f) f->old_selected_window
#define FRAME_INSERT_COST(f) (f)->insert_line_cost
#define FRAME_DELETE_COST(f) (f)->delete_line_cost
#define FRAME_INSERTN_COST(f) (f)->insert_n_lines_cost
#define FRAME_DELETEN_COST(f) (f)->delete_n_lines_cost
#define FRAME_FOCUS_FRAME(f) f->focus_frame | https://emba.gnu.org/emacs/emacs/blame/a038df77de7b1aa2d73a6478493b8838b59e4982/src/frame.h | CC-MAIN-2020-40 | refinedweb | 3,274 | 56.25 |
Hello I have been at this for a few days, and I have not figured it out but feel as if i am very close. I must create A program that takes a users input, then tells the user if that number is prime or composite, if composite it must display all of the composite numbers prime factors. Someone please take a look at it!
#include <iostream> using namespace std; #ifndef __TRUE_FALSE__ #define __TRUE_FALSE__ #define TRUE 1 #define FALSE 0 #endif int main () { int number = 0; int x = 0; int z = 0; bool NotPrime = FALSE; cout << "Enter an Integer" << endl; cin >> number; //Im a bit confused right here, this is where im having trouble for (int x=2; x<number; ++x) { if (number % x == 0) { bool NotPrime=FALSE; cout << " This is a Composite" << endl; } else cout << " This is a Prime" << endl; } //This while find the prime factors of the composite, but i cant get it too work with //the other code while(NotPrime=FALSE){ for(int x=2; x <= number; x++) { if(number % x ==0){ cout << x << endl; number = number/x; } else x++; } } return 0; } | https://www.daniweb.com/programming/software-development/threads/411161/c-prime-composite-number-prime-factorization | CC-MAIN-2018-51 | refinedweb | 185 | 52.2 |
- Code: Select all
import sys
def main():
#list required arguments and open files
if len(sys.argv)!=4:
print "Enter program_name, infile, outfile, reagent"
sys.exit()
infile = open(sys.argv[1]).readlines()
outfile = open(sys.argv[2], 'w')
#save infile header line to hline1 and split into parts
hline1 = infile.pop(0)
hparts = hline1.split('\t')
reagent = sys.argv[3]
#write outfile header line
outfile.write('value' + '\t' + 'reagent' + '\t' + 'patient' + '\t' + 'antigen' + '\n')
# read matrix rows and split into parts
for line in infile:
line = line.rstrip()
parts = line.split('\t')
patient = parts[0]
#Use loop to print parts to outfile
for i in range(1,len(parts)):
outfile.write(parts[i] + '\t' + reagent + '\t' + patient + '\t' + hparts[i] + '\n')
#print reference line
print "The file been written to", outfile
main()
I've attached a sample test1.txt file and a testout.txt file (without the extra newlines to show what I want).
Thanks in advance! | http://www.python-forum.org/viewtopic.php?p=9806 | CC-MAIN-2016-36 | refinedweb | 157 | 71.51 |
VS2010 Improvements – My Summary
Watch it on line or download
VS2010 fully implemented with WPF.
Code editing improvements
Intellisence improvements :
– Searching for a method or class —
you can access “this.” and then you can insert only some part of your function name or if you are using a Camel case / Pascal convention of names you can type only the capital letters and it’ll find the function for you .
– Navigate to dialog – Ctrl + “,”
type some word you are looking for and get all the references of the word in all the project and jump to the desired location.
also can use Pascal capital letters for the search through Navigate to dialog.
– View call hierarchy – Stand over your function and in right click you have this new option.
you’ll get all the objects that use this and all the objects this is using .
– Generate Diagram Sequence – Stand over your function and in right click you have this new option.
it’ll build a full diagram sequence for you. You can define the depth of the diagram. This function exists only in the VS2010 Ultimate version – very cool feature.
through the diagram you can change the sequence , go to the actual code , customize the layout
– With Alt + choose with mouse some text – you can choose some code vertically and do with it something – like tab it or remove it.
Code Snippets – added also to Html and Javascript Editors
Javascript –
type “control” in js file and press TAB twice –
you’ll get snippet , that when you’ll change the namespace it’ll update itself:
Type.registerNamespace("myControl,UI");
myControl,UI.mycontrol = function (element) {
myControl,UI.mycontrol.initializeBase(this, [element]);
}
myControl,UI.mycontrol.prototype = {
initialize: function () {
myControl,UI.mycontrol.callBaseMethod(this, 'initialize');
},
dispose: function () {
myControl,UI.mycontrol.callBaseMethod(this, 'dispose');
}
}
myControl,UI.mycontrol.registerClass('myControl,UI.mycontrol', Sys.UI.Control);
Html -
just type textbox and press TAB twice -
it’ll build the control for you -
<asp:TextBox
so , now if you’ll add an id for it :
<asp:TextBox
and type bellow a word require it’ll add :<asp:RequiredFieldValidator
few more snippets
– just write “script” + TABx2 when you want to pen Javascript block.
– just write “a” + TABx2 when you want to create some hyperlink , and you can update its properties just by clicking the tab each time , without using the mouse.
Debbuging options –
Can go back and forth inside the debugging process . (This function exists only in the VS2010 Ultimate version or in testing targeted versions. Not in Premium ).
Black box feature -Can export the debugging process to another machine , import it there and recreate the test. – take a call dump of information of an application that crushed , either a Client App or a Server app on a different machine – they will give as all the traces – call stack .. .
Interactive testing allowing people without VS to perform tests on applications.
Call Stack Information more readable for the human eye.
Not from the lecture –
Useful Extension for VS that I love to use are:
Power Commands – this one is the best
Productivity Power Tools .
You can download it through the VS2010 Tools-> Extension Manager editor | https://grekai.wordpress.com/tag/vs/ | CC-MAIN-2017-51 | refinedweb | 522 | 63.09 |
When you're slogging away on a project like a migrations framework, it can seem a little unforgiving. You spend so many commits just laying down foundations and groundwork that it's such a relief when things finally start to work together - something I've always known as a black triangle.
You can imagine, then, how happy I am with the progress made this week. Not only did the autodetector get a lot better, but there are now commands. Not only that, but you can frigging migrate stuff with them.
I Should Calm Down A Bit
I know, that might seem like the entire purpose of a migrations framework, but it's nice to finally get all this code I've been planning for years (quite literally in some cases) working together and playing nicely.
Enough talk. Let's look at some examples. Here's a models.py file I found lying around:
from django.db import models class Author(models.Model): name = models.CharField(max_length=255) featured = models.BooleanField()
Pretty simple, eh? Let's make a migration for it!
Of course, the command to do that has changed; in South, you had to run manage.py schemamigration appname for each app you wanted to migrate. Now, you just run manage.py makemigrations:
$ ./manage.py makemigrations Do you want to enable migrations for app 'books'? yes Migrations for 'books': 0001_initial.py: - Create model Author
makemigrations will scan all apps and output all changes at once, possibly involving multiple migrations for each app. It will know to add dependencies between apps with ForeignKeys to each other, and to split up a migration into two if there's a circular ForeignKey reference.
There's no more --auto, no more --initial, and it'll even remind you to create migrations for new apps (don't worry, it won't prompt more than once).
Let's make some changes to our models.py file; in particular, I'm going to allow longer names, and add an integer rating column:
from django.db import models class Author(models.Model): name = models.CharField(max_length=500) featured = models.BooleanField() rating = models.IntegerField(null=True)
Let's run makemigrations:
$ ./manage.py makemigrations Migrations for 'books': 0002_auto.py: - Add field rating to author - Alter field name on author
Notice how it's telling you nicely what each migration contains. There's also some colour on these commands to make them easier to read on the console; making migrations should be a pleasant experience, after all.
The Challenges
Of course, the result is lovely, but I'd like to look closer at one particular challenge - that of autodetection.
Autodetection is a very deep topic, and one I'll doubtless return to. However, the particular problem this week was having the autodetector intelligently discover new apps to add migrations to.
In South, you have to manually enable migrations for an app using the --initial switch to schemamigration, but I wanted to eliminate that distinction here. It's trivial enough to detect apps without migrations which need them, but that's not quite enough.
The problem is, you see, that there's plenty of apps that don't have migrations and don't need them. Third-party libraries, old internal packages with manual SQL migrations, and of course our own django.contrib (though I'm sure a few of those will inevitably grow migrations).
Thus, without any sort of extra code, makemigrations will prompt every time you run it about each of these unmigrated apps. That's going to get very annoying, and so I devoted quite a bit of thought to how to address this.
There's no way to write a marker into the apps themselves - third-party libraries may be shared and/or read-only - and so there are only two solutions:
- A setting, called UNMIGRATED_APPS
- An autogenerated file in the project directory
While we're trying to have less settings as part of Django - there's currently way too many - I feel that UNMIGRATED_APPS is a good fit to INSTALLED_APPS and fits the Django culture better.
It does mean having to update the settings file each time you add an app you don't want to migrate, rather than makemigrations updating an autogenerated file for you, but the command can remind you of this and even print you the new value ready to paste into your settings file.
Plus, it means migration refuseniks can just set UNMIGRATED_APPS = INSTALLED_APPS and get on with coding.
Opinions on this are of course always welcome, via Twitter or email.
One More Thing
There's one final feature I want to show off. If you ever renamed a field in South you'd know that it detected it as a removal and an addition and lost all your data. That's no more. Behold:
from django.db import models class Author(models.Model): name = models.CharField(max_length=500) featured_top = models.BooleanField() rating = models.IntegerField(null=True)
$ ./manage.py makemigrations Migrations for 'books': 0003_auto.py: - Rename field featured on author to featured_top
Don't worry, there's more nice features like that in the works. | http://www.aeracode.org/2013/6/20/tunnel-lights/ | CC-MAIN-2016-50 | refinedweb | 851 | 65.93 |
Fragments: A Little Background
Update: The actual application is available on the Google Play store.
Once upon a time, Android developers used only two things called activities and views in order to create their user interfaces. If you’re like me and you come from a desktop programming environment, an Activity is sort of like a form or a window. Except it’s more like a controller for one of these classes. With that analogy in place, a view is then similar to a control. It’s the visual part you’re interacting with as a user. I remember the learning curve being pretty steep for me being so stuck in my desktop (C# and WPF) development, but once I came up with these analogies on my own, it seemed pretty obvious. So to make an Android application, one would simply put some views together and chain some activities to show these views. Pretty simple.
Something changed along the way though. It was apparent that the Activity/View paradigm was a bit lacking so something new was added to the mix: The Fragment. Fragments were introduced in Android 3.0 (which is API level 11). Fragments added the flexibility to be able to swap out parts of an activity without having to completely redefine the whole view. This means that having an application on a mobile phone with a small screen can appear differently than when it’s on a large tablet, and as a developer you don’t have to redesign the whole bloody thing. Awesome stuff!
So, to clarify, a fragment is just a part of the activity. By breaking up activities into fragments, you get the modular flexibility of being able to swap in and out components at will. If you’re like me and you took a break from Android when fragments were introduced, then you may have another little learning curve. The goal of this article is to create a tabbed Android user interface using fragments.
For what it’s worth, when I first tried putting together a tabbed UI with fragments, it was a complete mess. I was surfing the net for examples, but I couldn’t find anything that really hit it home for me. Once I had it working, I decided I should redo it and document the process. That’s how this article came to be! Another side note… I’m a C# developer by trade and I haven’t developed with Android/Java within a team. If you don’t like my coding conventions then please try to look past that to get the meat of the article!
As per usual, you can follow along by downloading all of the code ahead of time. Please check out the section at the end of the article and pick whichever option you’d like to get the source!
Setting Up: Getting Your Project Together
I’m going to make a few assumptions here. You should have Eclipse installed with the latest Android Development Tools. There are plenty of examples out there for how to get your environment put together, but I’m not going to cover that here.
You’re going to want to start by making a new Android Application in eclipse. By going to the “File” menu, then the “New” sub menu, then the “Other” sub menu, you should get a dialog letting you pick Android application. You’ll get a wizard that looks like the following (where I’ve filled in the information with what I’ll be using for this entire example):
The wizard gives you some options for what you want to have it generate for you. In this case, I opted out of having a custom icon (since that’s not really important for this tutorial) and I chose to have it create an activity for me.
Our activity is actually going to be pretty light-weight. The bulk of what we’re going to be doing is going to be inside of our fragments. Because of this, we should be totally fine just making our main activity a simple blank activity.
The final step in the wizard just wants you to confirm the naming for your generated code.
Let’s create our “MainActivity” activity with a layout called “activity_main”. Pretty straight forward.
At this point, we actually have an Android application that we can deploy to a phone or a virtual device. If you’re new to Android programming, I suggest you try it out. It’s pretty exciting to get your first little application running.
The Layouts
The layout XML files in Android provide the hierarchies of views that will get shown in the UI. If you haven’t modified the one that was created by default, it will probably look like this:
The default Main Activity XML will look like this. It’s really just a text view that says “Hello World”.
What does that give us? Well, we get a RelativeLayout view that acts as a container for a TextView. The TextView says “Hello World”. Amazing, right?
Let’s switch up our main activity’s layout a bit. Instead of a RelativeLayout, let’s drop in a linear layout that has a vertical orientation. We’ll blow away the TextView too, and drop in a Fragment. Our fragment will need to point to our custom fragment class (which we haven’t created yet). For now, make the class “com.devleader.tab_fragment_tutorial.TabsFragment”. Later in the example, we’ll create the TabsFragment class and put it within this package. When the application runs, it will load up our custom fragment (specified by the full class name) and place it within our LinearLayout.
The layout XML for the main activity looks like the following:
<LinearLayout xmlns: <fragment class="com.devleader.tab_fragment_tutorial.TabsFragment" android: </LinearLayout>
We’re going to need a layout for our tabs fragment. This is going to be the view portion of the UI that gets dropped in to our main activity. It’s going to be responsible for showing the tabs at the top of the UI and then providing container views for the contents that each tab will want to show.
In order to create this layout, right click on your “layout” folder nested within the “res” folder in the Eclipse IDE. Go to “new”, and then click on the “Other” child menu. Pick “Android XML Layout File” from your list of options. Select “TabHost” as the layout’s root element. Let’s call this file “fragment_tabs.xml”.
The top level component in this layout will be a TabHost. We’ll put our TabWidget in next, which is going to contain the actual tab views, and then a FrameLayout with two nested FrameLayouts inside of it for holding the contents that we want to show for each tab. To clarify, the user will be clicking on views within the TabWidget to pick the tab, and the contents within the tab1 and tab2 FrameLayouts will show the corresponding user interface for each tab.
The layout XML for the tabs fragment looks like the following:
<TabHost xmlns: android: <FrameLayout android: </FrameLayout> </LinearLayout> </TabHost>
You may have noticed I used some pretty aggressive hard-coded colors in the layout file. I highly advise you switch these to be whatever you want for your application, but when I’m debugging UI layouts I like to use really high contrasting colors. This helps me know exactly where things are (as opposed to having 10 views all with the same background). Maybe I’m a bit crazy, but I find it really helpful.
Now that we have the main activity done and the tab fragment all set up, the last thing we need is to create some sort of layout for our individual tab views. This will be the view that is placed inside of the TabWidget on our tabs fragment layout. These views will have the title of the tab and they’ll be what the user actually interacts with in order to switch tabs.
The layout XML for our simple tab view looks like the following:
<?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns: <TextView android: </LinearLayout>
And that’s it for layouts! Just these three simple files. Now, we need to fill out our classes!
The Classes
If we start from the beginning with the classes, the first (and only) class that gets generated for you is the MainActivity class. If you left it untouched (hopefully you did since there was no indication to change it yet!) then you should have a class that looks like:
In order to make this example work, we barely even need to modify this class at all. You’ll notice our MainActivity extends the Activity class. Because we’re going to be using fragments in our application, we need to modify this class to extend the FragmentActivity. In this entire example, I opted to use the Android v4 Support Library. Thus, in order to make this example work, please ensure you’re using FragmentActivity from the package “android.support.v4.app.FragmentActivity“.
Once you’ve made this replacement (“Activity” for “FragmentActivity”) we’re all done in this class. Great stuff, right? Let’s move on.
We’re going to want to make a class that defines what a tab is. In order to make some nice re-usable code that you can extend, I decided to make a base class that defines minimum tab functionality (at least in my opinion). Feel free to extend upon this class later should your needs exceed what I’m offering in this tutorial.
The base TabDefinition class will:
- Take in the ID of the view where the tab’s content will be put. In our example, this will be the ID for tab1 or tab2’s FrameLayout.
- Provide a unique identifier to look up the tab.
- Be required to provide the Fragment instance that will be used when the tab is activated.
- Be required to create the tab view that the user will interact with in order to activate the tab.
Let’s add a new class called “TabDefinition” to the package “com.devleader.tab_fragment_tutorial”, just like where our MainActivity class is. The code for the TabDefinition class is as follows:
package com.devleader.tab_fragment_tutorial; import java.util.UUID; import android.support.v4.app.Fragment; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; /** * A class that defines a UI tab. */ public abstract class TabDefinition { // // Fields // private final int _tabContentViewId; private final String _tabUuid; // // Constructors // /** * The constructor for {@link TabDefinition}. * @param tabContentViewId The layout ID of the contents to use when the tab is active. */ public TabDefinition(int tabContentViewId) { _tabContentViewId = tabContentViewId; _tabUuid = UUID.randomUUID().toString(); } // // Exposed Members // /** * Gets the ID of the tab's content {@link View}. * @return The ID of the tab's content {@link View}. */ public int getTabContentViewId() { return _tabContentViewId; } /** * Gets the unique identifier for the tab. * @return The unique identifier for the tab. */ public String getId() { return _tabUuid; } /** * Gets the {@link Fragment} to use for the tab. * @return The {@link Fragment} to use for the tab. */ public abstract Fragment getFragment(); /** * Called when creating the {@link View} for the tab control. * @param inflater The {@link LayoutInflater} used to create {@link View}s. * @param tabsView The {@link View} that holds the tab {@link View}s. * @return The tab {@link View} that will be placed into the tabs {@link ViewGroup}. */ public abstract View createTabView(LayoutInflater inflater, ViewGroup tabsView); }
Now that we have the bare-minimum definition of what a tab in our UI looks like, let’s make it even easier to work with. In my example, I just want to have my tabs have a TextView to display a title–They’re really simple. I figured I’d make a child class of TabDefinition called SimpleTabDefinition. The goal of SimpleTabDefinition is really just to provide a class that takes the minimum amount of information to get a title put onto a custom view.
Please keep in mind that there are many ways to accomplish what I’m trying to illustrate here, but I personally felt having a base class with a more specific child class would help illustrate my point. You could even put in a second type of child class that would make a graphical tab that shows a graphical resource instead of a string resource. Tons of options!
Let’s add another new class called “SimpleTabDefinition” to the package “com.devleader.tab_fragment_tutorial”. The code for SimpleTabDefinition is as follows:
package com.devleader.tab_fragment_tutorial; import android.support.v4.app.Fragment; import android.view.Gravity; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.LinearLayout; import android.widget.TextView; import android.widget.LinearLayout.LayoutParams; /** * A class that defines a simple tab. */ public class SimpleTabDefinition extends TabDefinition { // // Fields // private final int _tabTitleResourceId; private final int _tabTitleViewId; private final int _tabLayoutId; private final Fragment _fragment; // // Constructors // /** * The constructor for {@link SimpleTabDefinition}. * @param tabContentViewId The layout ID of the contents to use when the tab is active. * @param tabLayoutId The ID of the layout to use when inflating the tab {@link View}. * @param tabTitleResourceId The string resource ID for the title of the tab. * @param tabTitleViewId The layout ID for the title of the tab. * @param fragment The {@link Fragment} used when the tab is active. */ public SimpleTabDefinition(int tabContentViewId, int tabLayoutId, int tabTitleResourceId, int tabTitleViewId, Fragment fragment) { super(tabContentViewId); _tabLayoutId = tabLayoutId; _tabTitleResourceId = tabTitleResourceId; _tabTitleViewId = tabTitleViewId; _fragment = fragment; } // // Exposed Members // @Override public Fragment getFragment() { return _fragment; } @Override public View createTabView(LayoutInflater inflater, ViewGroup tabsView) { // we need to inflate the view based on the layout id specified when // this instance was created. View indicator = inflater.inflate( _tabLayoutId, tabsView, false); // set up the title of the tab. this will populate the text with the // string defined by the resource passed in when this instance was // created. the text will also be centered within the title control. TextView titleView = (TextView)indicator.findViewById(_tabTitleViewId); titleView.setText(_tabTitleResourceId); titleView.setGravity(Gravity.CENTER); // ensure the control we're inflating is layed out properly. this will // cause our tab titles to be placed evenly weighted across the top. LinearLayout.LayoutParams layoutParams = new LinearLayout.LayoutParams( LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT); layoutParams.weight = 1; indicator.setLayoutParams(layoutParams); return indicator; } }
Awesome stuff. Now we can define tabs easily in our application. We just have one more class left, I promise! In the following section, I’ll re-iterate over everything, so if you’re feeling a bit lost… Just hang in there.
The one part we’re actually missing is the fragment that will manage all of our tabs. We created the layout for it already, which has a TabHost, a TabWidget (to contain the clickable tab views), and some FrameLayouts (that contain the content we show when we press a tab). Now we just need to actually attach some code to it!
The TabsFragment class that we’re going to want to add to the package “com.devleader.tab_fragment_tutorial” is responsible for a few things. First, we’re going to be defining our tabs in here. This class will be responsible for taking those tab definitions and creating tabs that get activated via the TabHost. As a result, this fragment class is going to have to implement the OnTabChangedListener interface. This will add a method where we handle switching the fragment shown to match the fragment for the contents of the tab that was pressed.
The code for our TabsFragment class looks like the following:
package com.devleader.tab_fragment_tutorial; import android.os.Bundle; import android.support.v4.app.Fragment; import android.support.v4.app.FragmentManager; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.TabHost; import android.widget.TabHost.OnTabChangeListener; import android.widget.TabHost.TabSpec; /** * A {@link Fragment} used to switch between tabs. */ public class TabsFragment extends Fragment implements OnTabChangeListener { // // Constants // private final TabDefinition[] TAB_DEFINITIONS = new TabDefinition[] { new SimpleTabDefinition(R.id.tab1, R.layout.simple_tab, R.string.tab_title_1, R.id.tabTitle, new Fragment()), new SimpleTabDefinition(R.id.tab2, R.layout.simple_tab, R.string.tab_title_2, R.id.tabTitle, new Fragment()), }; // // Fields // private View _viewRoot; private TabHost _tabHost; // // Exposed Members // @Override public void onTabChanged(String tabId) { for (TabDefinition tab : TAB_DEFINITIONS) { if (tabId != tab.getId()) { continue; } updateTab(tabId, tab.getFragment(), tab.getTabContentViewId()); return; } throw new IllegalArgumentException("The specified tab id '" + tabId + "' does not exist."); } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { _viewRoot = inflater.inflate(R.layout.fragment_tabs, null); _tabHost = (TabHost)_viewRoot.findViewById(android.R.id.tabhost); _tabHost.setup(); for (TabDefinition tab : TAB_DEFINITIONS) { _tabHost.addTab(createTab(inflater, _tabHost, _viewRoot, tab)); } return _viewRoot; } @Override public void onActivityCreated(Bundle savedInstanceState) { super.onActivityCreated(savedInstanceState); setRetainInstance(true); _tabHost.setOnTabChangedListener(this); if (TAB_DEFINITIONS.length > 0) { onTabChanged(TAB_DEFINITIONS[0].getId()); } } // // Internal Members // /** * Creates a {@link TabSpec} based on the specified parameters. * @param inflater The {@link LayoutInflater} responsible for creating {@link View}s. * @param tabHost The {@link TabHost} used to create new {@link TabSpec}s. * @param root The root {@link View} for the {@link Fragment}. * @param tabDefinition The {@link TabDefinition} that defines what the tab will look and act like. * @return A new {@link TabSpec} instance. */ private TabSpec createTab(LayoutInflater inflater, TabHost tabHost, View root, TabDefinition tabDefinition) { ViewGroup tabsView = (ViewGroup)root.findViewById(android.R.id.tabs); View tabView = tabDefinition.createTabView(inflater, tabsView); TabSpec tabSpec = tabHost.newTabSpec(tabDefinition.getId()); tabSpec.setIndicator(tabView); tabSpec.setContent(tabDefinition.getTabContentViewId()); return tabSpec; } /** * Called when switching between tabs. * @param tabId The unique identifier for the tab. * @param fragment The {@link Fragment} to swap in for the tab. * @param containerId The layout ID for the {@link View} that houses the tab's content. */ private void updateTab(String tabId, Fragment fragment, int containerId) { final FragmentManager manager = getFragmentManager(); if (manager.findFragmentByTag(tabId) == null) { manager.beginTransaction() .replace(containerId, fragment, tabId) .commit(); } } }
And that’s it! Just four classes in total, and one of them (MainActivity) was almost a freebee!
Putting It All Together
Let’s recap on all of the various pieces that we’ve seen in this example. First, we started with the various layouts that we’d need. Our one and only activity is pretty bare bones. It’s going to contain our tabs fragment view. The tabs fragment view is responsible for containing the individual tabs a user clicks on as well as the content that gets displayed for each tab. We also added a layout for really simplistic tab views that only really contain a TextView that shows the tab’s title.
From there, we were able to look at the classes that would back up the views. To use our fragment implementation, we only had to modify our parent class of our only activity. I opted to create some classes that define tab functionality to make extending the UI a bit easier, and adding additional child classes that fit in this pattern is simple. The TabsFragment class was the most complicated part of our implementation, and truth be told, that’s where most of the logic resides. This class was responsible for defining the tabs we wanted to show, and what fragments we would swap in when each tab was clicked.
In order to extend this even more, the things you’ll want to consider are:
- Defining your own type of tab definition classes. Maybe you want to look at graphical tabs, or something more complicated than just a title.
- Implementing your own fragment classes that you display when your tabs are clicked. In the example, the contents of the tabs are empty! This is definitely something you’ll want to extend upon.
- Adding more tabs! Maybe you need three or four tabs instead of two.
Summary
Fragments in Android really aren’t all that complicated. As a new Android developer or transitioning from the pre-API level 11 days, they might seem a bit odd. Hopefully after you try out this example they’re a lot more clear. Hopefully by following along with this tutorial you found that you were easily able to set up a tabbed user interface in Android and get a basic understanding for how fragments work.
Source and Downloads
I like being able to provide the source in as many formats as possible… so here we go:
- Paste Bin
- Google Code
- BitBucket
- GitHub
Update: The actual application is available on the Google Play store.
4 Trackbacks / Pingbacks for this entry
November 14th, 2013 on 12:47 pm
[…] This application is the result of a tutorial available at. […]
November 17th, 2013 on 11:23 pm
[…] that I put out on the Google Play store. It’s the end result of the tutorial I wrote up over here. I think it’s going to blow past my legitimate application for converting […]
November 24th, 2013 on 5:22 pm
[…] Post de Nick Cosentino sobre técnicas de desarrollo con fragmentos. […]
December 29th, 2013 on 9:53 am
[…]
Essendo Xandros open – anche se “discontinued” – credo di sì.
Nel frattempo ho trovato questo interessante link in merito: Fragments: Creating a Tabbed Android User Interface « Dev Leader Dev Leader
[…] | http://devleader.ca/2013/11/04/fragments-tabbed-android-user-interface/ | CC-MAIN-2018-09 | refinedweb | 3,497 | 57.57 |
Use Cloudflare Network Interconnect
Onboarding
When setting up Magic Transit to work with Cloudflare Network Interconnect (CNI), the onboarding process includes the following additional steps:
Cloudflare generates Letters of Authorization (LOAs) for your CNI cross-connects and sends them to your organization.
You order the cross-connects that you want to use with CNI. You can use any of the following:
- Private network interconnects (PNI) are available at any of our private peering facilities.
- Virtual private network interconnects (vPNI) allow you to easily connect with Cloudflare at any of our interconnection platform locations.
- Internet exchange point (IXP) interconnects allow you to establish a link with Cloudflare at any of the more than 200 IXPs in which we participate.
You send Cloudflare confirmation when the cross-connects are in place.
Cloudflare makes routing configuration changes and validates BGP sessions and GRE tunnel routes.
Each of these steps can take 2–7 business days.
For more details on the CNI onboarding process, see Set up Cloudflare Network Interconnect: Onboarding.
Guidelines
When working with Magic Transit and CNI, observe these guidelines:
Cloudflare Network Interconnect does not support 1500 byte packets, so you still need to implement MSS clamping.
You must set the MSS clamp size to 1332 bytes to accommodate the additional overhead from the Foo-over-UDP (FOU) protocol and IPv6. These are used to backhaul data from the colocation facility where traffic is ingested (close to the end user) to the facility with the CNI link.
Cloudflare Network Interconnect does not process outgoing traffic from your data centers. Egress traffic returns to end users through direct server return (DSR), not through Cloudflare. For this reason, CNI is not a replacement for your existing transit providers. | https://developers.cloudflare.com/magic-transit/use-magic-transit-with-cni | CC-MAIN-2020-50 | refinedweb | 284 | 52.7 |
CSL styles.
This module is meant to be used as a static resources package, in order to make it easy to include the required Citation Style files (.csl) when using citeproc-py.
In order to avoid always installing ~40MB of files each time you include it in a project you could specify it as an extra in your setup.py, and only use it in the production environment or as an optional feature of your module. (Example setup.py)
The included files are originally hosted on the CSL Style Repository which belongs to the CSL Project
Note: The style files are referenced as a git submodule. This means that this repository/package is pinned on a specific commit of the CSL Style Repository, and thus may not include any fixes or new styles that may have been added. Next versions of this repository will of course ‘bump’ the styles version to the latest commit, but this will not happen on a scheduled basis for the time being.
citeproc-py-styles is on PyPI so all you need is:
pip install citeproc-py-styles
This is a minimal example of how one could use citeproc-py-styles to render a citation with citeproc-py:
from citeproc import (Citation, CitationItem, CitationStylesBibliography, CitationStylesStyle, formatter) from citeproc.source.json import CiteProcJSON from citeproc_styles import get_style_filepath csl_data = json.loads("...") source = CiteProcJSON(csl_data) style_path = get_style_filepath('apa') style = CitationStylesStyle(style_path) bib = CitationStylesBibliography(style, source, formatter.plain) bib.register(Citation([CitationItem('data_id')])) print(''.join(bib.bibliography()[0]))
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/citeproc-py-styles/ | CC-MAIN-2017-26 | refinedweb | 268 | 51.18 |
1234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950515253
#include <iostream>
#include <iomanip>
#include <conio.h>
using namespace std;
int main()
{
int rooms;
int length;
int width;
float carpetCost;
float carpetPrice;
int SQYRD = 9;
int INSTALL = 5;
int GOOD = 3;
int BETTER = 4;
int BEST = 5;
int EXCELLENT = 7;
cout << "How many rooms will you be carpeting?"; cin >> rooms;
for(int x = 0; x < rooms; x++)
cout << "\nWhat is the length of room 1(in feet)?"; cin >> length;
cout << "\nWhat is the width of room 1(in feet)?"; cin >> width;
cout << "\nWhat si the price of carpeting?"; cin >> carpetPrice;
int squareFeet = length * width;
// not sure how to round squareyards up with ceil or something else
int squareYards = ceil(squareFeet/SQYRD);
int installation = INSTALL * squareYards;
int padCost = GOOD * squareYards;
carpetCost = carpetPrice * squareYards;
float totalCost = installation + padCost + carpetCost;
cout << endl;
cout << "Padding choces";
cout << "\n 1. Good-$3 per yard -[1-3yr Warranty]";
cout << "\n 2. Better-$4 per yard -[3-5 yr Warranty]";
cout << "\n 3. Best- $5 per yard-[5-10 yr Warranty]";
cout << "\n 4. Excellent-$7 per yard-[10-20 yr Warranty]";
// I need to choose one padding option based off the above but not sure how
cout << "\n Select a padding option";
cout << "\n Enter price of carpeting per sq yd of room 1: ";
cout << "\n Yards required = ", squareYards;
cout << "\n Installation = ", installation;
cout << "\n Padding cost = ", padCost;
cout << "\n Carpet cost = ", carpetCost;
cout << "\n totalCost = ", totalCost;
_getch();
return 0;
} | http://www.cplusplus.com/forum/beginner/98190/ | CC-MAIN-2017-17 | refinedweb | 235 | 66.07 |
06 December 2010 08:21 [Source: ICIS news]
By Tahir Ikram
?xml:namespace>
“We are optimistic that a win-win situation for all will be found soon,” GPCA secretary general Abdulwahab al-Sadoun told ICIS in a phone interview.
The trade squabbles between
GPCA had taken a stand against any form of protectionism, even during periods of economic difficulties.
“It’s in the best interest of everyone to have a free trade policy to be implemented. Any protectionist measures will have ramifications for consumers as well,” Al-Sadoun said.
The GPCA forum this year, which will focus on innovation, had attracted 1,300 delegates so far, representing a 23% increase in number of attendees from last year’s meeting, he said.
“This is a reflection of the prominence of the gulf region … as [the] centre of gravity for [the] petrochemicals business,” Al-Sadoun said. | http://www.icis.com/Articles/2010/12/06/9416757/gpca-chief-hopeful-of-end-to-india-middle-east-add-row.html | CC-MAIN-2014-52 | refinedweb | 144 | 51.89 |
What is a java stream
A stream is a sequence of elements usually from a collection. Collection may be an array, list, set, file or a sequence of elements.
A stream can perform different operations on its elements. These operations include iterating over its elements, counting total number of elements, filter some elements based on some condition, transform elements etc.
A java stream is represented by
java.util.stream.Stream interface and its implementation classes. Streams were introduced in java 8.
There are 7 different methods of creating a stream as described ahead.
1. Using Stream.of
java.util.stream.Streamcontains a static interface method of which accepts an array or a variable number of arguments and returns a stream of those arguments.
Argument values comprise the elements of the stream as shown below.
// create an array Integer[] arr = {1, 2, 3, 4}; // create a stream Stream<Integer> stream = Stream.of(arr);
Above code can also be shortened to
Stream<Integer> stream = Stream.of(1, 2, 3, 4);
Signature of
of method is
static <V> Stream<V> of(V... values)
where V is a generic type.
2. Stream over a collection
java.util.Collection interface has a
stream method which returns a stream of elements over the collection on which it is invoked.
stream method is a default interface method added in java 8 to the Collection interface. Since List and Set implement this interface, this method is available to them.
Creating a stream on a List is shown below.
//create a list List<Integer> list = Arrays.asList(1, 2, 3, 4, 5); // create a stream Stream<Integer> stream = list.stream();
By default,
Stream.ofmethod described above returns a stream over an array but you can also create an array stream using
streammethod
java.util.Arraysclass which accepts an array as argument as shown below.
// create an array Integer[] arr = {1, 2, 3, 4}; // create a stream Stream<Integer> stream = Arrays.stream(arr);
4. Using Stream builder
A stream can be created using a builder. In order to get a stream builder, invoke the static
builder method. This method returns an object of type
java.util.stream.Stream.Builder.
This builder object is then used to add elements to the stream using
add method and then create the stream using
build method as shown below.
// create a builder Builder<Integer> builder = Stream.<Integer>builder(); // add elements builder.add(3).add(4); // create stream Stream<Integer> s = builder.build();
Above code can be reduced to a one-liner
Stream<Integer> s = Stream.<Integer>builder().add(3).add(4).build();
5. Empty stream
An empty stream does not contain any elements. It can be created using
empty method from
java.util.stream.Stream interface.
empty is once again a static interface method and returns a stream object with no elements.
Empty stream is generally used to return an object from a method that returns a stream instead of returning
null so as to avoid NullPointerException later as shown below.
Stream<Integer> getStream(Integer[] arr) { return arr.length > 0 ? Arrays.stream(arr) : Stream.empty(); }
6. From a file
java.nio.file.Files class has a static lines method which accepts a file path returns a stream. This stream can then be iterated to read the contents of file line by line as shown below.
Stream<String> stream = Files.lines(Paths.get(filePath)); stream.forEach((line) -> System.out.println(line));
7. Generate stream
A stream of a specific size can also be generated using static
generate method from
java.util.stream.Stream interface.
Below example generates a stream of 10 integers. Integers are random numbers generated between a range.
Random random = new Random(); Stream<Integer> stream = Stream.generate(() -> { // generate random numbers between 0 and 99 return random.nextInt(100); }).limit(10);
Note that
generate method accepts an argument of type java.util.function.Supplier which is a functional interface and hence can be implemented using a Lambda expression as shown above.
Iterating a stream
A stream can be iterated using
forEach method. This method accepts an argument of type
java.util.function.Consumer interface.
This is a functional interface having a single method
accept that takes an argument but returns nothing. Below example shows how to iterate an array of integers using
forEach method.
// array Double[] nums = {22.4, 34.7, 1.2, 0.4}; // get stream Stream<Double> stream = Arrays.stream(ages); // iterate stream stream.forEach(v -> System.out.println(v));
forEach is a terminal operation. You will learn more about terminal operation in the coming sections.
Stream pipeline
A stream can be visualized as a pipeline with source of elements at one end and a result at the other end with different operations in between as depicted below.
Source could be any one of the ways of stream creation explained in the last section. There are many different operations that can be applied on a stream classified as Terminal and Non-terminal covered in following section.
Stream operations
A stream can be processed to perform a varying number of operations. All the operations on a stream can be categorized into following two categories.
Non-terminal operations
These operations are used to modify or remove or transform stream elements based on some condition. All non-terminal operations return another stream with modified elements.
This is the reason that multiple terminal operations can be chained together. Non-terminal operations are also called intermediate operations.
Following are some non-terminal operations that can be performed on a stream.
1. filter
Stream elements can be filtered or removed based on a certain condition using filter operation. This operation is applied using
filter method which accepts an argument of type
java.util.function.Predicate.
This is a functional interface having a method
test which accepts a single value and tests it for some condition. It returns
true if the value passes the test,
false otherwise.
Example of filter operation on stream is given below.
// array of ages Integer [] ages = {24, 43, 32, 68, 61, 29, 33}; // get stream over array Stream<Integer> stream = Arrays.stream(ages); // filter out ages more than 60 stream.filter((v) -> v < 60).forEach(v -> System.out.println(v));
Above example filters integer array and allows only those elements that are greater than 60. Notice the argument to
filter method is a Lambda expression as an implementation of
test method of
java.util.function.Predicate interface.
The lambda expression returns
true for elements that are less than 60, thus allowing to pass the test and elements greater than 60 are filtered out.
2. map
Map operation is used to transform the elements of a stream. Map operation is applied using
map method. This method accepts a function that is applied to its argument and returns a new stream with modified elements.
Suppose you want to calculate square of each element of an array. Application of
map method for this is shown below.
Integer [] numbers = {5, 12, 15, 29}; Stream<Integer> stream = Arrays.stream(ages); // calculate square of stream elements stream.map(v -> v * v).forEach(v -> System.out.println(v));
map may also be applied on objects to return a customized value. Below example creates an array of student objects and uses
map to return only the name of students.
import java.util.stream.Stream; class Student { int roll; String name; public Student(int r, String n) { roll = r; name = n; } } public class MapStreamExample { public static void main(String[] args) { // create student objects Student s1 = new Student(1,"A"); Student s2 = new Student(2,"B"); // create array Student[] arr = {s1,s2}; // get stream of students Stream<Student> stream = Arrays.stream(arr); // get only names of students stream.map(st -> st.name).forEach(v -> System.out.println(v)); } }
3. distinct
As name suggests, this intermediate operation removes duplicate elements from the stream. This method does not accept any arguments and returns a new stream with the duplicate elements removed. Example,
Integer [] nums = {24, 43, 32, 24, 68}; Stream<Integer> stream = Arrays.stream(ages); // fetch unique elements stream.distinct().forEach(v->System.out.println(v));
Above code removes duplicate element(24) from the array.
4. peek
This method returns the elements of the stream as it is without any modification or filtering. It accepts a
java.util.function.Consumer as argument and returns a new stream.
Since a consumer can not return a value,
peek can not modify the value of elements. Peek is an intermediate operation which is used for debugging to look at the elements of a stream as they flow through the stream pipeline during multiple intermediate operations. Example,
Integer [] numbers = {5, 12, 15, 29}; Stream<Integer> stream = Arrays.stream(ages); // calculate square of stream elements stream.filter(v -> v <20).peek(v -> System.out.println(v))map(v -> v * v).forEach(v -> System.out.println(v));
5. limit
This terminal operation is used to reduce or limit the number of elements in a stream.
limit method takes an integer as argument and limits the count of elements in the stream to this integer. Example,
Integer [] numbers = {5, 12, 15, 29}; Stream<Integer> stream = Arrays.stream(numbers); // only 2 elements stream.limit(2).forEach(v -> System.out.println(v));
Above example will print the first two stream elements.
6. flatMap
Flatmap non-terminal operation is used to generate a stream whose each element is another stream produced by applying an operation on each element.
It is used to join elements of multiple streams together in a single stream. Final stream contains the contents of the streams of all its elements.
Flat map operation is performed by using
flatMap method.
flatMap accepts an argument of type
java.util.Function interface.
This interface has a single method
apply which accepts an argument and returns a value. Thus,
apply can be used as a mapping function that is applied to its argument.
Signature of
flatMap is as below.
Stream flatMap(Function mapper);
Scenarios where
flatMap is useful are:
- Extracting words from lines of a file or a list/array of strings.
- Converting a multi-dimensional array to single dimension.
- Collect elements of multiple lists in one list.
Example of
flatMap to get words from lines of file is given below.
// get lines from file Stream<String> lines = Files.lines(Paths.get("e://testfile.txt")); // get stream of words from lines Stream<String> words = lines.flatMap(line -> Stream.of(line.split(" "))); // iterate over word stream words.forEach(w -> System.out.println(w));
In the above example,
flatMap is invoked on the lines of file. Implementation of
java.util.function.Function is supplied as a Lambda expression to
flatMap.
This expression splits each line on a blank space and converts the resulting array into a stream of words. Thus, final stream returned by
flatMap is a stream of words of lines in file.
Summary of all the non-terminal or intermediate stream operations is summarized below.
Here, T is the generic type.
Terminal operations
As the name suggests, these operations are applied at the end of the stream. In other words, applying these operations terminates the stream.
These operations return a single result and after a terminal operation is applied, no other operation can be applied over the stream, neither the stream can be re-used.
Trying to use the stream after a terminal operation has been applied will result in
java.lang.IllegalStateException: stream has already been operated upon or closed
Below is a list of terminal stream operations.
1. count
It returns the total number of elements in the underlying stream. Internally
count performs iteration of stream elements. Example,
Integer [] numbers = {5, 12, 15, 29}; Stream<Integer> stream = Arrays.stream(numbers); System.out.println(stream.count()); // prints 4
2. collect
This method iterates over the elements of the stream and puts them into a collection. The collection is specified as an argument to this method using
java.util.stream.Collectors class. Example,
Integer[] numbers = {5, 12, 15, 29}; Stream<Integer> stream = Arrays.stream(numbers); List<Integer> list = Stream.of(numbers).map(n -> n * n).collect(Collectors.toList());
Above example calculates the square of array elements and adds them to a list. It is also possible to convert array to a list using collect by simply removing the
map intermediate operation.
collect method can also be used to convert an array to a set by using
toSet method of Collectors as shown below.
Integer[] numbers = {5, 12, 15, 29}; Stream<Integer> stream = Arrays.stream(numbers); Set<Integer> set = Stream.of(numbers).collect(Collectors.toSet());
3. anyMatch
anyMatch method accepts an argument of type
java.util.function.Predicate which is a test condition. It applies the given test to all the elements of stream and returns
true if any element matches the condition and
false if no elements meet the condition.
anyMatch returns
true for the first matching element, it does not check further elements. It can be used to search or check if an element exists in an array. Example,
class Student { int roll; String name; public Student(int r, String n) { roll = r; name = n; } } public class Main { public static void main(String[] args) { Student s1 = new Student(1,"A"); Student s2 = new Student(2,"B"); Student[] attendance = {s1, s2}; // check if student is absent boolean isPresent = Arrays.stream(s).map(st -> st.roll).anyMatch(rollNum -> rollNum == 3); System.out.println(isPresent ? "Present" : "Absent"); } }
Above example creates an array of students and checks if a student with roll number 3 is present or not. Note the usage of map method to return roll number from student object.
Also note that an operation receives the return value from the previous intermediate operation and not the actual stream element.
If that would not be the case,
anyMatch should have received an object instead of an integer.
4. allMatch
anyMatch will return
true if all the stream elements match the given condition and
false if even a single element does not meet the condition. Example,
Integer [] numbers = {5, 10, 15, 30}; boolean multiples = Stream.of(numbers).allMatch(e -> e % 5 == 0); System.out.println(multiples ? "All multiples of 5" : "All are not multiples of 5");
Above code checks if all array elements are multiples of 5.
5. noneMatch
This method also accepts an argument of type
java.util.function.Predicate and returns
true if all the elements do not match the given condition and
false if a single element matches the condition or the stream is empty.
noneMatch works opposite to
allMatch. Example,
String[] arr = {"bowler", "orange", "round"}; // check for string less than 5 characters boolean lengthy = Arrays.stream(arr).noneMatch(str -> str.length() < 5); // returns true System.out.println(lengthy ? "No lesser than 5" : "Found lesser than 5");
In the above example,
noneMatch checks the length of array elements and returns true if all the elements are greater than 5 characters.
6. findFirst
This method returns the first element of the stream. It returns an object of
java.util.Optional which is empty if the stream contains no element.
If the stream has at least one element, then the first element can be retrieved using
get method of returned optional object. Example,
String[] arr = {"bowler", "orange", "round"}; Optional<String> element = Arrays.stream(arr).findFirst(); System.out.println("First element is: " + element.get());
7. findAny
This method returns a random element from the stream. findAny also returns a java.util.Optional object. It is empty if the stream does not contain any elements otherwise, use its get method for getting the value of element. Example,
String[] arr = {"bowler", "orange", "round"}; // random element Optional<String> element = Arrays.stream(arr).findAny(); System.out.println("Any element is: " + element.get());
8. reduce
This terminal operation is used to perform an operation on the elements of the stream to return a single value. This method may be used to calculate the sum of elements of an integer array or to print an array meaningfully as shown below.
String[] arr = { "bowler", "orange", "round" }; Optional<String> element = Arrays.stream(arr).reduce((a, b) -> a + ", " + b); System.out.print(element.get());
reduce method in this example accepts an argument of type
java.util.function.BinaryOperator which is also a functional interface having
apply method.
This method accepts two arguments and returns a value as implemented in the above example using a Lambda expression.
Above code snippet will print
bowler, orange, round
reduce method example for calculating the sum of array elements is given below.
Integer[] numbers = { 5, 10, 15, 30 }; Integer sum = Stream.of(numbers).reduce(0, (a, b) -> a+b); System.out.println(sum); // prints 60
This example uses an overloaded
reduce method which accepts an identity value and an object of type
java.util.function.BinaryOperator as arguments and returns a single result after applying the operation on stream elements.
Identity value is 0 for addition and subtraction, 1 for multiplication and division.
9. max
max method is used to return maximum value from among stream elements. In order to determine maximum element, max accepts and object of
java.util.Comparator interface as argument. Example,
Integer[] numbers = { 5, 10, 15, 30 }; Integer max = Stream.of(numbers).max((v1, v2) -> v1 - v2).get(); // 30
max returns an
Optional, hence use its
get method to get the result.
10. min
Similar to
max,
min method is used to determine minimum value from stream of elements. It also uses a comparator to determine minimum value. Example,
Integer[] numbers = { 5, 10, 15, 30 }; Integer max = Stream.of(numbers).min((v1, v2) -> v1 - v2).get(); // 5
Below is a summary of terminal operations of a stream.
Here, T is the generic type.
Streams are lazy
Streams add processing to data structures but this processing is only performed when required. This means that intermediate operations are performed only when a terminal operation is used.
If there is no terminal operation, then no action is performed on the stream. Thus, streams are lazy. Example,
Integer [] numbers = {5, 12, 15, 29}; Stream<Integer> stream = Arrays.stream(ages); // calculate square of stream elements stream.filter(v -> v <20).map(v -> v * v);
Above code example applies two operations filter and map on the stream but no terminal operation. Thus, no processing is performed.
Also, intermediate operations are only performed as required. Example,
if you use
filter,
map and
findAny over a stream of 100 numbers, then filter and map will not be executed 100 times, only 1 time.
This is because
findAny terminal operation returns a random element and will be completed after first invocation, thus eliminating the need for executing
filter and
map once again.
Stream benefits
At first look, you may find streams as an overhead and unnecessary effort since one may achieve the same result without using stream.
But streams have following two important benefits.
- Streams make the code concise and cleaner. If you want to find smallest number in an integer list, then following code would be required.
int max = Integer.MIN_VALUE; for(int num : list) { if(num > max) { max = num; } }
Same can be achieved using stream as
list.stream().max((v1, v2) -> v1 – v2).get();
which is certainly cleaner.
- Streams become very useful when multiple intermediate operations are chained together to achieve a result. Getting the same result without streams would become too complex.
That is all on streams introduced in java 8. Hit the clap if you found it useful.0 | https://codippa.com/stream-in-java-with-examples/ | CC-MAIN-2020-16 | refinedweb | 3,215 | 50.53 |
Turkey new modern royal style furniture classic living room sofa damask fabric for sofa
US $1000-3000 / Set
1 Set (Min. Order)
Foshan Nanhai Pengyi Households Factory
92.2%
Zhejiang furniture sofa,damask sofa furniture
US $43-50 / Set
5 Sets (Min. Order)
Huizhou Bosenyu Furniture Co., Ltd.
98.5%
Damask sofa fabric sofa floral fabric sofa design
US $50-200 / Piece
5 Pieces (Min. Order)
Foshan Yihaixuan Furniture Co., Ltd.
95.2%
Queenshome house fabric wooden modern couch used scandinavian funky furniture rooms wood soap low cost loft mega flat pack sofa
US $169.0-184.0 / Piece
30 Pieces (Min. Order)
Guangdong Queenshome Furnishing Co., Ltd.
52.6%
alibaba damask mexico damask sofa furniture DF017
US $300-800 / Set
5 Sets (Min. Order)
Foshan Jiu Jia Furniture Co., Ltd.
99.4%
Rustic manner sofa furniture
US $500-650 / Piece
3 Pieces (Min. Order)
Foshan Danxueya Furniture Co., Ltd.
91.7%
Jacquard Damask sofa fabric floral designs
US $180-300 / Piece
5 Pieces (Min. Order)
Foshan Yihaixuan Furniture Co., Ltd.
95.2%
Best selling velvet fabric damask sofa furniture
US $280-500 / Set
50 Sets (Min. Order)
Huizhou Bosenyu Furniture Co., Ltd.
98.5%
Sofa made baroque design jacquard damask fabric for upholstery
US $3.3-4 / Meter
1000 Meters (Min. Order)
Shaoxing City Huayeah Textile Co., Ltd.
94.7%
Elastic polyester duchess satin fabric China supplier cloth fabric
US $2.34-2.34 / Meter
1000 Meters (Min. Order)
Jiangsu Yingming Textile Technology Co., Ltd.
83.1%
custom different style self adhesive fabric sticker label
US $0.01-1 / Piece
200 Pieces (Min. Order)
Shenzhen Sinon Shengshi Industry Co., Ltd.
92.0%
High density custom 100% polyester sew on woven label for sofa
US $0.01-0.06 / Piece
500 Pieces (Min. Order)
Guangzhou Xiang Teng Yun Apparel Co., Ltd.
86.7%
Chenille Damask Fabric
US $3.2-6.5 / Meter
500 Meters (Min. Order)
Tongxiang Tenghui Textile Ltd.
80.5%
Blue damask shadda bazin riche jacquard polyester fabric kint sofa designs Cheap Textile Material Guinea Brocade
US $5.8-6.3 / Yard
10 Yards (Min. Order)
Guangzhou Mikemaycall Trading Co., Ltd.
96.3%
AAS887-leopard Boudoir Damask Chaise Longue Chair Couch Antique Bench Decorative Sofa
US $1.0-1.0 / Set
1 Set (Min. Order)
Foshan Aliye Home Furniture Co., Ltd.
70.3%
Custom Made Textile Velvet damask Fabric for sofa/Cushion/Bolster Upholstery
US $2.38-2.51 / Meter
100 Meters (Min. Order)
Zhejiang Tonghui Textile Co., Ltd.
94.4%
damask comfortable chinese style lining fabric sofa
US $1.1-5 / Meter
200 Meters (Min. Order)
Shaoxing Hafei Home Textile Co., Ltd.
86.7%
100% polyester floral metallic damask fabric brocade fabric for upholstery/ sofa/ decoration
US $5-11.5 / Yard
300 Yards (Min. Order)
Shaoxing Keqiao Tulan Textile Co., Ltd.
96.6%
Damask design velvet fabric for lazy boy upholstery sofa
US $1.4-2.4 / Meter
300 Meters (Min. Order)
Changshu Shanhe Textile Co., Ltd.
81.0%
Florals design damask waterproof jacquard fabric for sofa and curtain
US $0.2-0.4 / Meter
5000 Meters (Min. Order)
Everen Industry Company Limited
Factory direct price damask fabric for sofa
US $2.1-2.1 / Meter
300 Meters (Min. Order)
Hangzhou Yaoyang Technology Co., Ltd.
Printed Loop Velvet Fabric for toy and sofa
US $2.6-2.6 / Kilogram
300 Kilograms (Min. Order)
Wujiang Idear Textile Co., Ltd.
51.6%
Wholesale low price per meter damask velvet silk fabric for sofa
US $1-2 / Meter
1000 Meters (Min. Order)
Shaoxing Keqiao Fullgold Textile Co., Ltd.
50.0%
Custom labels for furniture and sofa
US $0.01-0.1 / Piece
100 Pieces (Min. Order)
Shenzhen Donice Garment Accessories Co., Ltd.
0.0%
top sell 100%polyestery flock on flock fabrics sofa
US $1.1-1.5 / Meter
5000 Meters (Min. Order)
Changshu Yifan Fabric Co., Ltd.
39.1%
malaysia wood sofa sets furniture PFS3863
US $858-1100 / Set
30 Sets (Min. Order)
Foshan Perfect Furniture Company Limited
15.4%
import jacquard designs emboss customized sofa covers textile
US $0.8-4 / Meter
1000 Meters (Min. Order)
Haining Huayi Warp Knitting Co., Ltd.
0.0%
custom satin woven labels
US $0.01-0.18 / Piece
1 Piece (Min. Order)
Ningbo Huarong Computer Label Weaving Co., Ltd.
85.2%
Fashion Personalized Design Custom Garment Trademark Name Logo Sew on Damask Woven Label for Clothes
US $0.002-0.2 / Piece
1 Piece (Min. Order)
Zhangjiagang Kejia Label Weaving Co., Ltd.
100%
Wholesale Custom Famous Band Name Logo Centerfold Machine Woven Damask Shoes Label
US $0.013-0.059 / Piece
200 Pieces (Min. Order)
Xiamen Ronices Industry & Trade Co., Ltd.
96.0%
Sinicline custom wedding dress clothing tags clothing labels and hang tags
US $0.01-0.06 / Piece
5000 Pieces (Min. Order)
Wuhan Sinicline Industry Co., Ltd.
76.8%
Wholesale custom design your company own logo satin fabric woven clothes label
US $0.01-0.04 / Piece
1000 Pieces (Min. Order)
Meijei Label & Printing Co., Ltd.
97.4%
Jeans iron on clothing garment heat seal label
US $0.05-0.18 / Pieces
1000 Pieces (Min. Order)
Dongguan Yaolin Industrial Co., Ltd.
97.0%
Slub Yarn Jacquard damask pillow swing sofa chair Cushion Cover Fabric
US $3.8-5.0 / Meter
50 Meters (Min. Order)
Zhejiang Famous Textile Co., Ltd.
84.0%
ACI-Guinea Brocade New Design 100% Cotton African Bazin Riche Getzner Germany Quality Garment Fabric Shadda Damask 10 Yards
US $1.8-6 / Yard
10 Yards (Min. Order)
Shaoxing Aci Import & Export Co., Ltd.
78.8%
Gripen Printed PU coated synthetic Leather for bags for shoes
US $11.9-11.9 / Meter
200 Meters (Min. Order)
Dongguan Gripen Leather Company Limited
100%
Custom Damask Woven Clothing Label
US $0.03-0.2 / Piece
100 Pieces (Min. Order)
Dongguan Wenxuan Gifts Company Ltd.
78.3%
Christmas Home Decor Pillow Case Cover Square Damask Pattern Cotton Decoration Zippered Pillowcases Throw Cushion Cover Cases
US $1-2 / Piece
1 Piece (Min. Order)
Shaoxing City Keqiao Dairui Textile Co., Ltd.
Leather shoes tongue sofa jeans tape and woven sewing labels
US $0.007-0.07 / Piece
500 Pieces (Min. Order)
JCBasic Garment Accessories (Shanghai) Co., Limited
92.1%
- About product and suppliers:
Alibaba.com offers 871 damask sofa products. About 1% of these are living room sofas, 1% are living room chairs, and 1% are beds. A wide variety of damask sofa options are available to you, such as fabric, metal. You can also choose from sectional sofa, corner sofa, and leisure chair. As well as from european style, chinese style. And whether damask sofa is modern, or antique. There are 871 damask sofa suppliers, mainly located in Asia. The top supplying country is China (Mainland), which supply 100% of damask sofa respectively. Damask sofa products are most popular in Mid East, Western Europe, and North America. You can ensure product safety by selecting from certified suppliers, including 47 with Other, 46 with ISO9001, and 10 with ISO14001 certification. | https://www.alibaba.com/countrysearch/CN/damask-sofa.html | CC-MAIN-2019-13 | refinedweb | 1,156 | 71 |
1.1 anton 1: \ create a documentation file 2: \ the stack effect of loading this file is: ( addr u -- ) 3: \ it takes the name of the doc-file to be generated. 4: 5: \ the forth source must have the following format: 6: \ .... name ( stack-effect ) \ wordset [pronounciation] 7: \ \G description ... 8: 9: \ The output is a Forth source file that looks like this: 10: \ doc-entry name stack-effect ) wordset [pronountiation] 11: \ description 12: \ 13: \ (i.e., the entry is terminated by an empty line or the end-of-file) 14: 15: \ this stuff uses the same mechanism as etags.fs, i.e., the 16: \ documentation is generated during compilation using a deferred 17: \ HEADER. It should be possible to use this togeter with etags.fs. 18: 19: \ This is not very general. Input should come from stream files, 20: \ otherwise the results are unpredictable. It also does not detect 21: \ errors in the input (e.g., if there is something else on the 22: \ definition line) and reacts strangely to them. 23: 24: \ possible improvements: we could analyse the defining word and guess 25: \ the stack effect. This would be handy for variables. Unfortunately, 26: \ we have to look back in the input buffer; we cannot use the cfa 27: \ because it does not exist when header is called. 28: 29: \ This is ANS Forth with the following serious environmental 30: \ dependences: the variable LAST must contain a pointer to the last 31: \ header, NAME>STRING must convert that pointer to a string, and 32: \ HEADER must be a deferred word that is called to create the name. 33: 34: 35: r/w create-file throw value doc-file-id 36: \ contains the file-id of the documentation file 37: 38: s" \ automatically generated by makedoc.fs" doc-file-id write-line throw 39: 40: : \G ( -- ) 41: source >in @ /string doc-file-id write-line throw 42: source >in ! drop ; immediate 43: 44: : put-doc-entry ( -- ) 45: locals-list @ 0= \ not in a colon def, i.e., not a local name 46: last @ 0<> and \ not an anonymous (i.e. noname) header 47: if 1.2 ! pazsan 48: s" " doc-file-id write-line throw 1.1 anton 49: s" make-doc " doc-file-id write-file throw 50: last @ name>string doc-file-id write-file throw 51: >in @ 52: [char] ( parse 2drop 53: [char] ) parse doc-file-id write-file throw 54: s" )" doc-file-id write-file throw 55: [char] \ parse 2drop 56: POSTPONE \g 57: >in ! 58: endif ; 59: 60: : (doc-header) ( -- ) 61: defers header 62: put-doc-entry ; 63: 64: ' (doc-header) IS header | https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/Attic/makedoc.fs?annotate=1.2;hideattic=0;f=h;only_with_tag=v0-2-0 | CC-MAIN-2022-40 | refinedweb | 440 | 62.88 |
I don't really see it that way. To me this is the exact intention of Tiles.
>From my perspective you have two ways of building a UI that has both
dataEntry & readOnly views of the same data. You can have struts make the
call which one to show or you can leave it to Tiles. I think the tiles
approach is far superior. You only have to build and maintain one page for
each object. The objects in that page are basically abstracted into sub
tiles which guarantees a uniform look across the system. If you need to
make a change you make it in exactly one place. All of this really adds up
when you have a lot of these objects in your system.
I was able to get what I wanted by making my own tag. I'm not happy doing
it as now I need to be careful with Tiles if I want to upgrade but not
having this feature is sort of a show-stopper.
public class TilesAttributesTag extends AttributeTagSupport {
private String var = null;
public String getVar() { return var; }
public void setVar(String var) { this.var = var; }
@Override
public void execute() throws JspException, IOException {
Map<String, String> names = new HashMap<String, String>();
for (Iterator i = attributeContext.getAttributeNames(); i.hasNext();) {
String name = (String) i.next();
Attribute attr = attributeContext.getAttribute(name);
names.put(name, attr.getValue().toString());
}
pageContext.setAttribute(getVar(), names);
}
@Override
public void release() {
super.release();
var = null;
}
}
and then my view selector
<e:attributeNames
<t:insertAttribute
<c:forEach
<t:putAttribute
</c:forEach>
</t:insertAttribute>
I do think this functionality should be offered out of the box. I read that
you were working on cascading properties which sounds like it would solve my
problem depending on the implementation. If you just took everything out of
the parents attributeContext and put it into the new attributeContext it
should be ok.
Let me know what you think,
Jonathan
Antonio Petrelli-3 wrote:
>
> 2008/4/15, JRD <danger_jon@hotmail.com>:
>>
>> <t:insertDefinition
>> <t:putAttribute
>> <t:putAttribute
>> <t:putAttribute
>> </t:insertDefinition>
>
>
>
> I think you've gone too far the intention of Tiles: it seems like a job
> for
> a custom component of Struts 2, or even a normal component.
>
> Antonio
>
>
--
View this message in context:
Sent from the tiles users mailing list archive at Nabble.com. | http://mail-archives.apache.org/mod_mbox/tiles-users/200804.mbox/%3C16722530.post@talk.nabble.com%3E | CC-MAIN-2018-51 | refinedweb | 386 | 63.8 |
The program from below asks user for his name, greetings him and then gives us his real ID. We could assume that in case of 16-letter name (or longer) the uid variable will be overwritten and the program gives us incorrect user ID. But it isn't. How to explain this by using
gdb?
#include <stdio.h> #include <unistd.h> #include <sys/types.h> struct user_info { uid_t uid; char name[16]; }; int main(int argc, char *argv[]) { struct user_info info; info.uid = getuid(); printf("Your name: "); scanf("%s", info.name); printf("Hello, %s!\nYour UID id %d.\n", info.name, (int) info.uid); return 0; } | http://www.howtobuildsoftware.com/index.php/how-do/cKC/linux-gdb-debugin-udi-program-by-gdb | CC-MAIN-2018-30 | refinedweb | 106 | 80.88 |
Returning home to a dark house can be depressing so let's use a few off-the-shelf components to build a bright welcome home project using the Raspberry Pi.
You will need:
- Any Raspberry Pi Zero, A+, B+, Raspberry Pi 2 or Raspberry Pi 3
- The latest Raspbian OS
- Energenie power sockets and Pi Remote
- A reed switch
- Jumper wires
- Magnets
- All of the code can be found here
The project
First, we need to attach the Energenie to the first 26 pins of the GPIO on your powered-down Raspberry Pi. (For reference, pin 1 is the pin nearest the SD card slot.) The board will fit neatly over the Raspberry Pi with no parts hanging over.
Now attach a female-to-female jumper cable to GPIO20 and GND through the unused GPIO pins. (If you want to extend the jumper cables simply use male-to-female cables until the desired length is reached.) On one end of the female jumper cable attach the reed switch and then the other.
Using sticky backed plastic attach the switch to a door frame and attach magnets level to the switch but on the door itself so that the switch is closed when the door is closed.
Boot your Raspberry Pi and open a terminal. To install the Energenie library for Python 3 use $ sudo pip-3.2 install energenie.
Once installed open a new Python 3 session via the Programming menu. To pair our Energenie units with our Raspberry Pi open the IDLE shell and type from energenie import switch_on, switch_off. Now plug in your Energenie and press the Green button for six seconds.
This forces it to look for a new transmitter. Back in your IDLE shell, type switch_on(1). This will pair your Raspberry Pi to the unit and designate it '1' and the process can be repeated for four units. With IDLE open click on File > New Window and save your work as entrylight.py.
We'll start by importing the libraries for this project:
from energenie import switch_on, switch_off
import time
import RPi.GPIO as GPIO
The energenie library controls the units for our lights, and time is used to control how long the units are powered for and RPi.GPIO is the library used for working with the GPIO.
GPIO.setmode(GPIO.BCM)
GPIO.setup(20, GPIO.IN, GPIO.PUD_UP)
switch_off()
Next, we set the GPIO to use the Broadcom pin mapping and set GPIO20 to be an input with its internal resistor pulled high, turning the current on to that pin. Finally, we turn off the Energenie units to make sure they are ready.
The main code uses a try…except structure to wrap around an infinite loop. This part of the code requires you to place indents accurately for each line, so make sure it looks like the following image.
Inside the loop we use a conditional statement to check if the input has been triggered, ie the door has been opened. If true then the units are switched on for 30 seconds and turned off again.
We finish the conditional statement with an else condition. This will turn the units off and loop continually. We close the try…except structure with a method to close the project, pressing CTRL+C will end the project and switch off the units should the need arise.
With the code complete, save your work and click on Run > Run Module to test the code.
Energenie
Controlling high voltage devices is a project for those that know their stuff but with Energenie we can significantly reduce the risk.
Energenie units at their core are simply 433MHz receivers that control a relay; a component that uses a low voltage to control a magnetic switch in a high voltage circuit. On the Raspberry Pi we have a transmitter which can instruct the receivers to turn on and off.
Energenie units are a safe way to control mains electricity. The standard Python library for Energenie is rather cumbersome, requiring the user to control the GPIO pins used by the transmitter in order to connect to each device and issue the correct instruction.
This library has been made a lot simpler thanks to Ben Nuttal, a member of the Raspberry Pi Foundation's Education team, and Amy Mather, known to many as Mini Girl Geek, a teenage hacker and maker. This improved library, which we've used in this tutorial, requires that we know the number of each unit and can issue an instruction to one or all units at once.
The library can be found on GitHub, should you wish to inspect the code and learn more about how it works.
- Enjoyed this article? Expand your knowledge of Linux, get more from your code, and discover the latest open source developments inside Linux Format. Read our sampler today and take advantage of the offer inside. | https://www.techradar.com/how-to/computing/how-to-build-automatic-entry-lights-with-a-raspberry-pi-1320515 | CC-MAIN-2018-17 | refinedweb | 815 | 69.31 |
I have a drop down menu in my site is using react component. I am able to open the drop down list, but am having difficulty getting it to select the desired field. How should I approach it. Help here would be appreciated. Have put sample code and sample site which has a similar component as i am trying to use
import { Selector, ClientFunction } from 'testcafe';
fixture `Drop down `.page ``
test('Select option', async t => {
const selectArrow = Selector('.Select-arrow-zone');
const item = Selector('#react-select-2--value-item').withText('Queensland')
await t
.click(selectArrow)
.click(item)
.wait(5000);
});
Hello @neelayThank you for your example of reproducing the problem.
Your Selector for the option element isn't correct. It should be, for example:const item = Selector('#react-select-2--option-2').withText('Victoria');.
const item = Selector('#react-select-2--option-2').withText('Victoria');
From my point of view the easiest way to create Selector for a drop down element's item is to set an event listener in Developer Tools for some event that leads to deleting the items list (e.g blur). Please take a look at the video.
thank you this worked like a charm, the video was also very helpful. | https://testcafe-discuss.devexpress.com/t/selecting-element-from-react-select-dropdown/1074 | CC-MAIN-2021-39 | refinedweb | 203 | 51.44 |
Home > Library > Miscellaneous > Wikipedia
The following tables compare general and technical information for a number of file systems.
General information
Limits
Metadata
Features
Allocation and layout policies
OS support
See also
- Comparison of archive formats
- Comparison of file archivers
- List of archive formats
- List of file archivers
- List of file systems
Notes
- ^.
- ^ Polycenter File System - - HELP
- ^ Microsoft first introduced FAT32 These are the restrictions imposed by the on-disk directory entry structures themselves. Particular Installable File System drivers may place restrictions of their own on file and directory names; and particular and operating systems may also place restrictions of their own, across all filesystems. MS-DOS, Microsoft Windows, and OS/2 disallow the characters \ / : ? * " > < | and NUL in file and directory names across all filesystems. Unices and Linux
- ^ SFS file system
- ^ a b c d e f g h i Depends on whether the FAT12, FAT16, and FAT32 implementation has support for LFNs. Where it does not, as in OS/2, MS-DOS, Windows 95, Windows 98 in DOS-only mode and the Linux "msdos" driver, file names are limited to 8.3 format of 8-bit characters (space padded in both the basename and extension parts) and may not contain NUL (end-of-directory marker) or character 5 (replacement for character 229 which itself is used as deleted-file marker). Short names also do not normally contain lowercase letters. Also note that a few special names (CON, NUL, LPT1) should be avoided, as some operating systems (notably DOS and windows) effectively reserve them.
- ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad The on-disk structures have no inherent limit. Particular Installable File System drivers and operating systems may impose limits of their own, however. MS-DOS does not support full pathnames longer than 260 bytes for FAT12 and FAT16. Windows NT does not support full pathnames longer than 32,767 bytes for NTFS. Linux has a pathname limit of 4,096.
- ^. [1]
- ^.
- ^ The.
- ^ See and.
- ^
- ^"[2]
- ^_.28or_4_GiB_depending_on_implementation.29_file_size_limit
- ^ Assuming the typical 2048 Byte sector size. The volume size is specified as a 32 bit value identifying the number of sectors on the volume.
- ^ Joliet Specification
- ^ Implemented in later versions as an extension
- ^ a b c Some FAT implementations, such as in Linux, show file modification timestamp (mtime) in the metadata change timestamp (ctime) field. This timestamp is however, not updated on file metadata change.
- ^ a b. See [3]
- ^ As of 10.5 Leopard, Mac OS X has support for Mandatory Labels. See [4]
- ^ Some Installable File System drivers and operating systems may not support extended attributes, access control lists or security labels on these filesystems. Linux kernels prior to 2.6.x may either be missing support for these altogether or require a patch.
- ^ ext4 uses journal checksumming only
- ^ a b c d e f The local time, timezone, eg. the suid bit is replaced by a new 'exclusive access' bit.
- ^ MAC/Sensitivity labels in the file system are not out of the question as a future compatible change but aren't part of any available version of ZFS.
- ^ Solaris "extended attributes" are really full-blown alternate data streams, in both the Solaris UFS and ZFS.
- ^ Symlinks only visible to NFS clients. References and Off-Disk Pointers (ODPs) provide local equivalent.
- ^ soft links. See this Microsoft article on Vista kernel improvements..
- ^ NTFS does not internally support snapshots, but in conjunction with the Volume Shadow Copy Service can maintain persistent block differential volume snapshots.
- ^ Mac OS System 7 introduced the 'alias', analogous to the POSIX symbolic link but with some notable differences. Not only could they cross file systems but they could point to entirely different file servers, and recorded enough information to allow the remote file system to be mounted on demand. It had its own API that application software had to use to gain their benefits-- this is the opposite approach from POSIX which introduced specific APIs to avoid the symbolic link nature of the link. The Finder displayed their file names in an italic font (at least in Roman scripts), but otherwise they behaved identically to their referent.
- ^ Metadata-only journaling was introduced in the Mac OS 10.2.2 HFS Plus driver; journaling is enabled by default on Mac OS 10.3 and later.
- ^ Although often believed to be case sensitive, HFS Plus normally is not. The typical default installation is case-preserving only. From Mac OS 10.3 on the command newfs_hfs -s will create a case-sensitive new file system.. See Apple's File System Comparisons (which hasn't been updated to discuss HFSX) and Technical Note TN1150: HFS Plus Volume Format (which provides a very technical overview of HFS Plus and HFSX).
- ^ Mac OS Tiger (10.4) and late versions of Panther (10.3) provide file change logging (it's a feature of the file system software, not of the volume format, actually). See fslogger.
- ^ HFS+ does not actually encrypt files: to implement FileVault, OS X creates an HFS+ filesystem in a sparse, encrypted disk image that is automatically mounted over the home directory when the user logs in.
- ^ "Write Ahead Physical Block Logging" in NetBSD, provides metadata journalling and consistency as an alternative to softdep.
- ^ "Soft dependencies" (softdep) in NetBSD, called "soft updates" in FreeBSD provide meta-data consistency at all times without double writes (journaling).
- ^ a b c d UDF, LFS, and NILFS are log-structured file systems and behave as if the entire file system were a journal.
- ^ Linux kernel versions 2.6.12 and newer.
- ^ a b c Off by default.
- ^ Full block journaling for ReiserFS was not added to Linux 2.6.8 for obvious reasons.
- ^ a b Reiser4 supports transparent compression and encryption with the cryptcompress plugin which is the default file handler in version 4.1.
- ^ Optionally no on IRIX.
- ^. (Filesystem Events tracked by NSure)
- ^ a b Available only in the "NFS" namespace.
- ^ Limited capability. Volumes can span physical disks (volume segment)
- ^ a b These are referred to as "aliases".
- ^ VxFS provides an optional feature called "Storage Checkpoints" which allows for advanced file system snapshots.
- ^ a b ZFS is a transactional filesystem using copy-on-write semantics, guaranteeing an always-consistent on-disk state without the use of a traditional journal. However, it does also implement an intent log to provide better performance when synchronous writes are requested.
- ^ "ZFS on disk encryption". Sun Microsystems..
- ^ a b Variable block size refers to systems which support different block sizes on a per-file basis. (This is similar to extents but a slightly different implementational choice.) The current implementation in UFS2 is read-only.
- ^ a b DoubleSpace in DOS 6, and DriveSpace in Windows 95 and Windows 98 were data compression schemes for FAT, but are no longer supported by Microsoft.
- ^ Only for "stuffed" inodes
- ^ a b c d Other block:fragment size ratios supported; 8:1 is typical and recommended by most implementations.
- ^ a b c Fragments were planned, but never actually implemented on ext2 and ext3.
- ^ e2compr, a set of patches providing block-based compression for ext2, has been available since 1997, but has never been merged into the mainline Linux kernel.
- ^ In "extents" mode.
- ^ "AIX documentation: JFS data compression". IBM..
- ^.
- ^ When enabled, ZFS's logical-block based compression behaves much like tail-packing for the last block of a file.
- ^ OS/2 and eComstation FAT32 Driver[5]
- ^ NTFS for Windows 98[6]
- ^ OS/2 NTFS Driver[7]
- ^ a b c d Sharing Disks - Windows Products[8]
- ^ OS/2 HFS Driver[9]
- ^ DOS/Win 9x HPFS Driver[10]
- ^ Win NT 4.0 HPFS Driver[11]
- ^ a b Ext2 IFS for Windows provides kernel level read/write access to Ext2 and Ext3 volumes in Windows NT4, 2000, XP and Vista.[12]
- ^ a b Ext2Fsd is an open source linux ext2/ext3 file system driver for Windows systems (NT/2K/XP/VISTA, X86/AMD64).[13]
- ^ OS/2 ext2 Driver[14]
- ^
- ^ Supported using only EVMS; not currently supported using LVM
- ^ a b c d Provided in Plan 9 from User Space
- ^ ZFS on FUSE
- ^ Apple Seeds ZFS Read/Write Developer Preview 1.1 for Leopard - Mac Rumors
- ^
This entry is from Wikipedia, the leading user-contributed encyclopedia. It may not have been reviewed by professional editors (see full disclaimer) | http://www.answers.com/topic/comparison-of-file-systems | crawl-002 | refinedweb | 1,391 | 55.03 |
Yaydoc has been supporting custom themes from nearly it’s inception. Themes, which it could not find locally, it would automatically try to install it via pip and set up appropriate metadata about the themes in the generated conf.py. It was one of the first major enhancement we provided as compared to when using bare sphinx to generate documentation. Since then, a large number of features have been added to ease the process of documentation generation but the core theming aspects have remained unchanged.
To use a theme, sphinx needs the exact name of the theme and the absolute path to it. To obtain these metadata, the existing implementation accessed the __file__ attribute of the imported package to get the absolute path to the __init__.py file, a necessary element of all python packages. From there we searched for a file named theme.conf, and thus the directory containing that file was our required theme.
There were a few mistakes in our earlier implementation. For starters, we assumed that the distribution name of the theme in PyPI and the package name which should be imported would be same. This is generally true but is not necessary. One such theme from PyPI is Flask-Sphinx-Themes. While you need to install it using
pip install Flask-Sphinx-Themes
yet to import it in a module one needs to
import flask_sphinx_themes
This lead to build errors when specific themes like this was used. To solve this, we used the pkg_resources package. It allows us to get various metadata about a package in an abstract way without needing to specifically handle if the package is zipped or not.
try: dist = pkg_resources.get_distribution('{{ html_theme }}') top_level = list(dist._get_metadata('top_level.txt'))[0] dist_path = os.path.join(dist.location, top_level) except (pkg_resources.DistributionNotFound, IndexError): print("\nError with distribution {0}".format('{{ html_theme }}')) html_theme = 'fossasia_theme' html_theme_path = ['_themes']
The idea here is that instead of searching for __init__.py, we read the name of the top_level directory using the first entry of the top_level.txt, a file created by setuptools when installing the package. We build the path by joining the location attribute of the Distribution object and the name of the top_level directory. The advantage with this approach is that we don’t need to import anything and thus no longer need to know the exact package name.
With this update, Support for custom themes has been greatly increased. | http://blog.fossasia.org/tag/doc-generator/ | CC-MAIN-2017-30 | refinedweb | 402 | 57.27 |
01 April 2011 06:40 [Source: ICIS news]
SHANGHAI (ICIS)--China’s Wanda Group has invested yuan (CNY) 3.9bn ($?xml:namespace>
The project is located at Dongying Port Economic Development Zone in eastern Shandong province.
It will be developed in two phases. The first phase of construction started in the middle of March and includes a 130,000 tonne/year ACN unit and a 40,000 tonne/year PMMA unit, the source said.
The products will be mainly for captive use after the project starts up, the source added, but declined to give a timeframe.
However, a government official from the zone's Investment Promotion Bureau said the first phase will probably start up next year.
Wanda Group, based at Dongying in Shandong, is a conglomerate with business interests in the tyre, chemical and property sectors.
($1 = CNY6.57)
For more on acrylonitrile | http://www.icis.com/Articles/2011/04/01/9449050/chinas-wanda-group-builds-acn-pmma-units-in-shandong.html | CC-MAIN-2014-52 | refinedweb | 144 | 63.09 |
:
Swing / AWT / SWT
AffineTransform rarities
Piet Souris
Saloon Keeper
Posts: 3250
128
posted 5 months ago
I was using an AffineTransform in one of my panels, and I noticed something strange. I've written about this same strangeness a year or two ago, but I did not get any response, unfortunately.
The problem is that when you use an AffineTransform to enable using user coordinates, and you resize the panel, then the origin of the panel is that of the content pane, or something like that. I've written a short program that demonstrates the problem. The content pane has two panels, one at Page_Start, containing a button and a label, and a center panel that draws a red cross from two corners of that panel to the opposite corners. When the program starts, no use of an AffineTransform is made, and so everything seems normal. You can safely resize the frame. When the button is clicked, and the AffineTransform is set up (that transform is just the identity transform, so nothing different should happen), then at first all seems oke, that is: until the frame gets resized. Then you see that that cross seems to start from the content pane origin, or even from the frames corner.
If I am right, does anyone know what the cause of this behaviour is? Meanwhile, I'm investigating further. If I find out something, I will report it here.
/** * * @author Piet */ public class PaintComponentRariteiten { boolean useTransform = false; JLabel label = new JLabel(); JPanel panel; public static void main(String... args) { new PaintComponentRariteiten(); } PaintComponentRariteiten() { panel = new JPanel() { @Override protected void paintComponent(Graphics g) { super.paintComponent(g); g.setColor(Color.red); Graphics2D g2d = (Graphics2D) g.create(); if (useTransform) { var at = AffineTransform.getScaleInstance(1, 1); g2d.setTransform(at); } int width = this.getWidth(); int height = this.getHeight(); g2d.drawLine(0, 0, width, height); g2d.drawLine(0, height, width, 0); } }; JPanel p2 = new JPanel(); p2.setBackground(Color.red); JButton b = new JButton("click to set/unset transform"); b.addActionListener(this::processButtonclick); p2.add(b); label.setText(("using transform: " + useTransform)); p2.add(label); panel.setPreferredSize(new Dimension(300, 300)); JFrame frame = new JFrame(); Container c = frame.getContentPane(); c.add(p2, BorderLayout.PAGE_START); c.add(panel, BorderLayout.CENTER); frame.pack(); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setLocationRelativeTo(null); frame.setVisible(true); } private void processButtonclick(ActionEvent ae) { System.out.println("button clicked"); useTransform = !useTransform; label.setText(("using transform: " + useTransform)); panel.repaint(); } }
Campbell Ritchie
Marshal
Posts: 64471
225
posted 5 months ago
Maybe the problem has to do with this.getHeight() rather than panel.getHeight()?? Line 27.
Piet Souris
Saloon Keeper
Posts: 3250
128
posted 5 months ago
hi Campbell,
unfortunately,, that is not the problem. The "this" is really the panel in question. Yu can add this code snippet to the paintComponent method, to see the sizes:
Container c = frame.getContentPane(); // make frame an instance variable System.out.format("size contentpane: %d, %d%n", c.getWidth(), c.getHeight()); System.out.format("size panel: %d, %d%n", panel.getWidth(), panel.getHeight()); System.out.format("size this: %d, %d%n", this.getWidth(), this.getHeight());
Jack Marsh
Greenhorn
Posts: 5
1
posted 5 months ago
The problem is that you are not supposed to simply set the transform. You are supposed to apply your transform to the current transform.
If you print the transform before your call to set transform, you sill notice that it has a y offset. This is to move the origin of your panel past the
panel containing the button. For me it is a 36 your mileage may vary. When you set this number to zero, as happens when you set the transform to the identity transform, zero zero moves up to the user left corner of the JFrame.
Piet Souris
Saloon Keeper
Posts: 3250
128
posted 5 months ago
hi Jack,
absolutely spot on! Never knew this. Thanks and have a
cow
!
(as Jack advised: I changed 'g2d.setTransform' to 'g2d.transform')
Edit: by the way: it also cured the problem that when dragging the frame off and on the screen, the DoubleBuffering is sti;ll working as normal. With my initial transform, weird things happened!
Jack Marsh
Greenhorn
Posts: 5
1
posted 5 months ago
Thanks for the Cow; I'm glad I could be of help. Next time, please include the imports. I decided to do it bare bones no
IDE
( probably a false economy of time ).
Piet Souris
Saloon Keeper
Posts: 3250
128
posted 5 months ago
Sorry! Mea culpa.
Since IDE's fix the imports for you at the click of a mouse button, I always leave them out, saves a lot of space. Next time I'll put them in again
Post Reply
Bookmark Topic
Watch Topic
New Topic
Boost this thread!
Similar Threads
How to set Z-order of a JLable which is contained in a JPanel
JLabel.setText() is NOT Updating the Text
How to pass mouse event to a Label
Please tell me how to display value on Y axis in this code
How to simulate(define) a mouse dragged event
More... | https://coderanch.com/t/704404/java/AffineTransform-rarities | CC-MAIN-2019-26 | refinedweb | 841 | 58.08 |
Introduction: Monitoring Humidity, Temperature in the Hen House: DHT11, DS18B20, ESP8266
Note, I have now updated this project with an OLED, BMP180 and 2 DS18B20 sensors
In order to keep track off the temperature in the night quarters of my chicken coop I needed a wireless connection to some sensors.
As I had an ESP8266-01 lying around that seemed like the perfect medium to do that with.
BOM
ESP8266-01 (or other version)
DHT11
DS18B20
2x 4k7 resistor
1x 1n4001 diode
LM1117-33
10uF 10V capacitor
2x4 female headers
some veroboard
The ESP8266 has 4 I/O pins. However, GPIO 0 and the Tx pin are a bit fussy in their use (The GPIO0 pin is also used to jump in the program mode) so I decided to use GPIO 2 and the Rx pin. (Mind you, it is NOT impossible to use GPIO0 as an I/O pin, I just took the path of least resistance)
The DHT11 can only report temperatures down to 0 degrees. I am not sure if it can stand subzero temperatures but I guess I will find out. I am hoping that temperatures in the night quarters will not go below zero.
DS18B20 is added to keep an eye in the outside temperature
The 4k7 resistors are pull up resistors for the sensors. They will work without, but it is good to use the resistors anyway.
The 1N4001 diode is to prevent accidental reversed polarity. If you think that is not going to happen to you leave it out.
LM1117-33 as the entire device needs 3.3 Volt. The Minimum input of the LM1117-33 is 4.75 Volt so if you plan to feed it with a 5 Volt supply, the voltage drip over the 1N4001 diode might just be a wee bit too much
Step 1: The Circuit
The circuit is quite straightforward.
There is a powersupply built around an LM1117-33 with a diode and one capacitor.
There is a 2x4 point female header in which the ESP8266-01 will slot. The DHT11 is connected to pin2 and the DS18B20 to pin 3. Both have a 4k7 pull up resistor
Step 2: The Program
#include <DHT.h> #include <OneWire.h> // #include <DallasTemperature.h> // #include <ESP8266WiFi.h> #define DHTPIN 2 #define DHTTYPE DHT11 #define ONE_WIRE_BUS 3 // that is the Rx pin const char* ssid = "MyHouse";//<-- put your SSID here const char* password = "secret123";<// put your password here const char* host = "api.thingspeak.com"; const char* writeAPIKey="W59AEELE0N2EHJA6";// <-- put your API Key here float temperature_buiten; DHT dht(DHTPIN, DHTTYPE, 15);};// if you want more sensors void setup() { // Initialize sensor sensors.begin();//ds18b20 // set the resolution to 10 bit (Can be 9 to 12 bits .. lower is faster) sensors.setResolution(Probe01, 10); //sensors.setResolution(Probe02, 10); dht.begin(); delay(1000); // Connect to WiFi network WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); } } void loop() { //ds18b20stuff------------------- sensors.requestTemperatures(); // Send the command to get temperatures temperature_buiten = sensors.getTempC(Probe01);// //dht11 stuff-------------------- float humidity = dht.readHumidity(); float temperature = dht.readTemperature(); if (isnan(humidity) || isnan(temperature)) { return; } // make TCP connections WiFiClient client; const int httpPort = 80; if (!client.connect(host, httpPort)) { return; } String url = "/update?key="; url+=writeAPIKey; url+="&field1="; url+=String(temperature); url+="&field2="; url+=String(humidity); url+= "&field3="; url+=String(temperature_buiten); url+="\r\n"; // Send request to the server client.print(String("GET ") + url + " HTTP/1.1\r\n" + "Host: " + host + "\r\n" + "Connection: close\r\n\r\n"); delay(1000); }
The program is fairly straightforward. As I wanted to keep the possibility open to add more DS18B20 sensors (might as well be informed what the temperature of my pond or soil is) I am using a protocol that read outs the unique ID of the sensor. If you do not know how to get that, check the next step.
Step 3: The Unique Address of Your DS18B20
To find out what the unique number of your DS18B20 is, use this code, preferably on an Arduino
/* 2 // Any pin 2 to 12 (not 13) and A0 to A5 /*-----( Declare objects )-----*/ OneWire ourBus(SENSOR_PIN); // Create a 1-wire object void setup() /****** SETUP: RUNS ONCE ******/ { Serial.begin(115200); )***********
Step 4: Thingspeak
I decided to upload the data to Thingspeak, but one may just as well upload them to a personal webserver.
I presume the use of Thingspeak is well known buy now. In short.. one needs to set up a channel and create 3 fields: 1 for the temperature of the DHT11, one for the humidity and one for the temperature of the DS18B20
Step 5: Construction
After I tested it I put it all on a small piece of veroboard. Whether that is the easiest in your case depends.
The ESP8266-01 is notoriously breadboard unfriendly I had made a small adapter that I also used to program the board. Afterwards I just put it in the headers on my veroboard
Step 6: Programming the ESP8266
It is not my intention to give a detailed description of the programming process of an ESP8266 as there are excellent instructables or HowTo'sHowTo's on how to do that, but I will briefly go through the steps and for most people that should be sufficiently clear:
- Make sure you have the ESP8266 cores installed in your arduino IDE and select the board you are using (Generic ESP8266 Module will suffice in most cases)
- Connect the ESP8266 with an USB-ttl module as in the picture. USE 3.3 VOLT LEVELS ONLY
- Vcc to Vcc
- GPIO0 to Ground
- Tx to Rx
- Rx to Tx
- Ground to Ground
- CH_PD to Vcc
Using a 3.3 Volt USB-FTDI converter is the easiest. If you only have a 5 Volt then give the ESP8266 its own 3.3 Volt supply and use a voltage converter between the Tx of the converter and the Rx of the ESP8266.
I repeat: DO NOT USE ANY 5 VOLT LEVELS ON ANY PIN OF THE ESP8266
Recommendations
We have a be nice policy.
Please be positive and constructive. | http://www.instructables.com/id/Atmosphere-in-the-Hen-House-DHT11-DS18B20-ESP8266/ | CC-MAIN-2018-17 | refinedweb | 1,006 | 62.17 |
The Green Tree Service Company offers the following services to its customers:
* tree planting - $35 for a small tree, $100 for a medium tree, $250 for a large tree
* tree removal - $150 per tree
* tree trimming - $50 per hour
* stump removal - $75 for stumps with a diameter of 10 inches or less, $10 per inch for stumps with a diameter greater than 10 inches
Write a C++ program that will read a series of data values from a file using Linux redirection. The data values will represent a group of jobs that the Green Tree Service Company has completed. The program will compute the amount billed for each job and display a nicely formatted report that lists each job, the billed amounts, and a total due. Use named constants for fixed values (minimum of 5).
Input
The input file will contain data for several jobs. Each job will have a Job # which will be an integer. After the Job #, the work performed will be indicated using the following codes:
* P, p = tree planting, this code will be followed by an integer indicating how many trees were planted and one character code (S,s,M,m,L,l) for the size of each tree planted
* R, r = tree removal, this code will be followed by an integer indicating how many trees were removed
* T, t = tree trimming, this code will be followed by a floating point value indicating how many hours were spent trimming
* S, s = stump removal, this code will be followed by a series of integers that represent the diameters of the stumps removed, 0 will serve as a sentinel to signal the end of the stump data
* Q, q = end of job details
Here is a sample file:
123
P 3 M m S
t 2.5
Q
45
S 5 12 3 10 7 16 0
r 4
p 1 L
q
Here is an explanation of the sample file:
123 Job# 123
P 3 M m S planted 3 trees, 2 medium, 1 small
t 2.5 trimmed trees for 2.5 hours
Q end of job# 123
45 Job# 45
S 5 12 3 10 7 16 0 removed 6 stumps with diameters of 5, 12, 3,
10, 7, 16 inches
r 4 removed 4 trees
p 1 L planted 1 large tree
Q end of job# 45
The program must keep processing data until the end of file is encountered.
Output
Display your name, section #, and assignment #. Then output a nicely formatted report with 6 columns labeled as shown in the example below. Job #s should be left justified and will have a maximum of 6 digits. The values displayed in each of the remaining columns will be right justified. Assume the largest $ amount to be displayed will be $100,000.00. Include $ signs and 2 digits to the right of the decimal for all monetary amounts. Display the total amount billed on the last line of the report (do NOT total any other columns).
Sample report (based on sample input file shown above).
Lee Misch Section #_ Assignment #7
Tree Stump Total
Job# Planting Removal Trimming Removal Billed
123 $ 235.00 $ 0.00 $ 125.00 $ 0.00 $ 360.00
45 $ 250.00 $ 600.00 $ 0.00 $ 580.00 $ 1430.00
TOTAL $ 1790.00
Assumptions
* No invalid data will be included in the file.
* The data for each job will end with the Q or q code.
* For each job, there will be at most 1 set of input values per type of service.
* All service codes may be entered as capital or lower case letters
How do I even start? help!!
file input/output:
string manipulation:
i would make a while loop that goes to end of file and looks for the start/end of each job and parse each line to get the info you need, then store it to variables then output later
im still kinda lost but i think im getting a good start with it?
#include <iostream> #include <iomanip> using namespace std; int main() { int data; char q; char Q; double time; double diameter; double M = 100;// medium tree plant double m = 100;// medium tree plant double S = 35;// small tree plant double s = 35;// small tree plant double L = 100;// large tree plant double l = 100;// large tree plant double t = time * 50; // tree trimming double r = 150; // tree removal double St = 75 + (10*diameter); // stump removal while (data != q || Q) { data_count++; cin >> data; } cout << "Section #3 Assignment #7" << endl; cout << "Tree Stump Total" << endl; cout << "Job#" << setw(6) << "Planting" << setw(6) << "Removal" << setw(6) << "Trimming" << setw(6) << "Removal" << setw(6) << "Billed" << setw(6) << endl; return 0; }
Related Articles | http://www.daniweb.com/software-development/cpp/threads/320280/linux-redirection-homework-help-please | CC-MAIN-2013-48 | refinedweb | 781 | 69.65 |
Plot data from excel file in matplotlib Python
This tutorial is the one in which you will learn a basic method required for data science. That skill is to plot the data from an excel file in matplotlib in Python. Here you will learn to plot data as a graph in the excel file using matplotlib and pandas in Python.
How to plot data from excel file using matplotlib?
Before we plot the data from excel file in matplotlib, first we have to take care of a few things.
- You must have these packages installed in your IDE:- matplotlib, pandas, xlrd.
- Save an Excel file on your computer in any easily accessible location.
- If you are doing this coding in command prompt or shell, then make sure your directories and packages are correctly managed.
- For the sake of simplicity, we will be doing this using any IDE available.
Step 1: Import the pandas and matplotlib libraries.
import pandas as pd import matplotlib.pyplot as plt
Step 2 : read the excel file using pd.read_excel( ‘ file location ‘) .
var = pd.read_excel('C:\\user\\name\\documents\\officefiles.xlsx') var.head()
To let the interpreter know that the following \ is to be ignored as the escape sequence, we use two \.
the “var” is the data frame name. To display the first five rows in pandas we use the data frame .head() function.
If you have multiple sheets, then to focus on one sheet we will mention that after reading the file location.
var = pd.read_excel('C:\\user\\name\\documents\\officefiles.xlsx','Sheet1') var.head()
Step 3: To select a given column or row.
Here you can select a specific range of rows and columns that you want to be displayed. Just by making a new list and mentioning the columns name.
varNew = var[['column1','column2','column3']] varNew.head()
To select data to be displayed after specific rows, use ‘skip rows’ attribute.
var = pd.read_excel('C:\\user\\name\\documents\\officefiles.xlsx', skiprows=6)
step 4: To plot the graph of the selected files column.
to plot the data just add plt.plot(varNew[‘column name’]) . Use plt.show() to plot the graph.
import matplotlib.pyplot as plt import pandas as pd var= pd.read_excel('C:\\Users\\name\\Documents\\officefiles.xlsx') plt.plot(var['column name']) var.head() plt.show()
You will get the Graph of the data as the output.
You may also love to learn: | https://www.codespeedy.com/plot-data-from-excel-file-in-matplotlib-python/ | CC-MAIN-2021-17 | refinedweb | 401 | 67.55 |
Error messages 9200 to 9400 NumberDescription9200 ErrorYou have has no access permission to objectName. Check with your administrator.You tried to perform an operation (for example, opening a form) to which you have no permissions. Check with your administrator.9201 ErrorSession is invalid or has timed out. Please reload page to log in again.Your session is no longer available because either the session is invalid, the session timed-out, or no session data was retrieved. Log in again to continue.9202 ErrorThere are no available attachment fields.You tried to attach an object to an attachment pool in which no fields are available (all fields might be in use). Remove any unnecessary attachments if you have permissions, or ask your BMC Remedy AR System administrator to add more attachment fields to the attachment pool.9203 ErrorThe form action request failed.An error occurred in one of the form actions, such as Submit, Search, Modify, or Delete Entry.9204 ErrorPlease select an attachment file first.You tried a display, save, or delete operation but did not first select an attachment.9205 ErrorNo help available.A failure occurred while the system tried to retrieve Help text for this form or field. An associated error message provides details.9206 ErrorCannot get the Help text.A failure occurred while retrieving the list of fields that contain Help text.9207 ErrorIllegal request parameter.You entered a set of incorrect parameters for this operation, for example, an invalid qualification statement. This is an unexpected error.9208 ErrorFailed to get config property.A failure occurred in retrieving a property from the configuration properties file.9209 ErrorCannot find an empty attachment field large enough to hold this attachment.No field in the attachment pool is large enough to contain this particular file. Contact your administrator to enlarge the size of the attachment field so that you can attach it later, or try zipping the file to make it smaller.9210 ErrorThe size of the attachment is too large. Maximum size of available slot is number bytes.9211 ErrorYou have to specify a file to upload.You tried to attach a file to an attachment field but did not enter a file name. To complete this process, enter or select a file.9212 ErrorFailed to add attachment.The operation of adding an attachment file failed. Check your permissions. If that does not solve your problem, contact your BMC Remedy AR System administrator.9213 ErrorFailed to load the attachment because the request has no incorrect setting.The attachment file failed to load because the method attribute of the form tag in the servlet request was set to GET, not POST.9214 ErrorThe file does not exist or is empty.The operation of saving the attachment file to disk failed either because the file does not exist or because its value is NULL. Make sure the file exists.9215 ErrorInternal error.This all-purpose error occurs during numerous internal system failures, including the following conditions:Name of a system object or field ID is NULL when the system tries to retrieve a system objectInternal cache errorWeb tier exceptionXSL stylesheet not found for certain type of objectFailure to create request in Push Field workflow action at runtimeFailure to set a field in the Set Field workflow action at runtimeFailure to expand an open window9216 ErrorDisplay or save attachment failed. Invalid request parameters.9217 ErrorFile not found. Either the file requested is not present or the URL supplied is bad.An attempt to view or save an attachment file failed because the file could not be found on the web server. Contact your administrator.9218 ErrorUnable to retrieve the file for viewing.An attempt to view the file failed because, although the system found the file, it could not be sent.9219 ErrorThere is no attachment file to delete.An attempt to delete an attachment file failed because it could not be found on the web server. Make sure the file exists.9220 ErrorThere is no attachment file to display.An attempt to display the attachment file in a new window failed because it could not be found on the web server. Make sure the file exists.9221 ErrorThere is no attachment file to save.An attempt to save the attachment file to the client's local file system failed because it could not be found on the web server. Make sure the file exists.9222 ErrorThe attachment to be downloaded cannot be found.An attempt to download the attachment file failed because it could not be found on either the web server or the BMC Remedy AR System server. Make sure the file exists.9230 ErrorThe report file location is not specified. Please see your administrator.The native report output failed because the file location was not specified.9231 ErrorInvalid BMC Remedy AR System report definition. Re-attach the definition and try again or create a new report.The native report output failed because of an invalid report definition. To continue, try creating a different definition file or redesigning your report.9233 ErrorAR System report definition does not have the field list. Re-attach the definition and try again or create a new report.The native report output failed because the report definition specified no fields. To continue, try creating a different definition file or redesigning your report.9234 ErrorCannot display BMC Remedy AR System report reportName. Please try again or see your administrator.The native report failed to appear on your browser for unknown reasons. To continue, see the remainder of the message.9235 ErrorAn invalid operation was specified for an AR System report. Please see your administrator.An invalid operation was specified for a BMC Remedy AR System report. See your administrator.9236 ErrorCannot open BMC Remedy AR System report file fileName. Please try again or see your administrator.The native report operation failed to open the report .arr definition file. Verify that the report exists. You might also check your permissions and try again. Also check the disk permissions on your configured report directory.9237 ErrorInvalid BMC Remedy AR System report definition: no 'Report: ' string found. Re-attach the definition and try again or create a new report.The report operation could not parse the field attributes in the definition file. To continue, try creating a different definition file or redesigning your report.9238 ErrorInvalid BMC Remedy AR System report definition. Re-attach the definition and try again or create a new report.The report operation could not parse the field IDs and sort order. The report consists of a string that cannot be understood by the system. To continue, try creating a different definition file or redesigning your report.9240 ErrorCannot decode URL. The URL supplied is invalid. Please see your administrator.The location (URL) of the native report cannot be decoded. Contact your AR System administrator.9241 ErrorCannot create report directory directoryName. Please see your administrator.Check the name of the configured report directory and try again.9242 ErrorBad data type for report attachment field. Please try again or see your administrator.An attempt to extract the report failed because of an invalid data type for the attachment field. Ask your administrator to check the report form definition.9243 ErrorNo attachment info for report attachment. Please try again or see your administrator.The report was not extracted because information about the attachment field could not be retrieved. To continue, check that the report's entry in the report form has a valid attachment.9244 ErrorNo filename for report attachment. Please see your administrator.The report was not extracted because the file name of the report could not be retrieved. To continue, check that the report's entry in the report form has a valid attachment.9245 ErrorNo report directory specified for reporting. Please see your administrator.The report definition file was not extracted because no session-specific report directory could be retrieved from the configuration. To continue, use the BMC Remedy Mid Tier Configuration Tool to specify a report directory.9246 ErrorCannot find report <reportName> of type <reportType> for form <formName> on server <serverName>. Please see your administrator.The report was not retrieved when the Report form was queried for the entry that matches the parameters in the request. To continue, check the report and report type forms on the server for valid report names and types.9247 ErrorCannot find report type <reportType>. Please see your administrator.The report type was not retrieved when the Report form was queried for the entry that matches the parameters in the request. To continue, check the report and report type forms on the server for valid report names and types.9248 ErrorInternal error: Bad data type for query class field. Please see your administrator.An invalid data type was applied to the query class field when the query string was converted from the BMC Remedy AR System native format into the report engine's format.9249 ErrorCannot load query converter class <classType>. Please try again or see your administrator.The query converter class was not loaded when the query string was converted from the BMC Remedy AR System native format into the report engine's format. To continue, verify that the query converter class is installed in your CLASSPATH.9250 ErrorInvalid report operation. This operation is no longer supported.During report creation in the Open Window action, an invalid operation was used to post the report. The only valid operations are run, create, or edit. To continue, fix the action to use a valid operation.9251 ErrorBad data type for report operation command field. Please see your administrator.An inappropriate command was issued for the report because of an invalid data type. To continue, check the report type form definition.9252 ErrorReport operation command is empty. Please see your administrator.The report operation command does not have a command specified for this report type.9253 ErrorForm with ID <IDNumber> cannot be found on ARServer <serverName>. Please see your administrator.A failure occurred because the BMC Remedy AR System server failed to retrieve the form with the specified ID needed to create the report.9254 ErrorMid Tier does not have permission to create directory <directoryName: directoryName>. Please see your administrator.Permissions problems occurred when a report directory was created. To continue, verify that the mid tier has write access to this directory.9255 ErrorCannot start query converter class <className>. Please see your administrator.An instance of the query convertor class was not created when the query string was converted from the BMC Remedy AR System native format into the report engine's format. To continue, verify that the query converter class implements the report query converter interface.9256 ErrorUnable to start query conversion:<stringContents>. Please see your administrator.An initial failure occurred when the original query string was converted into a QualifierInfo structure that the report engine's format can interpret. To continue, check that the query converter class implements the report query converter interface. For information about the QualifierInfo class, see the BMC Remedy AR System Java API documentation.9257 ErrorProblem opening output stream:<stringContents>. Please see your administrator.An attempt to create the report definition file from the Message Catalog failed.9259 ErrorObject(s) cannot be found on BMC Remedy AR System server.This object (for example, a schema, active link, or container) was not in the server cache. To continue, verify that the object exists on BMC Remedy AR System server.9261 ErrorError generating Status History information.The XML string containing the Status History field information was not created. Status history is displayed for the web in a separate window.9262 ErrorSubmit failed.The Submit operation failed while a request was being created. A Submit failure has many possible causes, for example, a required field is blank. (In that case, enter a value for the required field, and retry the Submit operation.)9263 ErrorModify failed.The Modify operation failed when updating an BMC Remedy AR System request. Possible causes for this error include, for example, a simultaneous modification by another user or a failure in workflow.9264 ErrorYou have no access permission to the form <formName.>You entered an incorrect user name or tried to access an BMC Remedy AR System form that you have no permissions to. Try reentering your user name, or check your permissions to continue.9265 ErrorYou have no access permission to the field <fieldName>.You tried to access a field that you have no permissions to. Check your permissions to continue.9266 ErrorPlease select an entry first.An attempt to display the status history in a new window failed because no entry was selected. The request was therefore ignored.9267 ErrorRequired parameter(s) missing for form view creation:<parameterName>.Creation of the form view failed because the required parameters contain either no value or a NULL value. Required parameters include the following items:Form (or form alias)Server name9268 ErrorUnable to generate the JSP page.An attempt to retrieve the path of the JSP™ page that represents the form failed. The JSP path could not be constructed because some of the following data is missing:FormViewApplicationLocale9269 ErrorUnable to perform query because results list field not found. Please inform your BMC Remedy AR System administrator.The query failed because the results list in the form could not be found. This error message also appears when you are unable to use an Open Window active link action to "drill down" to a record in a table field.9270 ErrorEntry with ID IDNumber does not exist in database.The request retrieval failed because the entry ID does not exist in the database.9271 ErrorYou have entered an improperly formatted value for the field.An attempt to validate the values for this field so that it can be properly formatted failed.9272 ErrorValue does not fall within the limits specified for the field.An attempt to validate the values for this field failed because they are out of the acceptable minimum and maximum range limits defined by the administrator.9273 ErrorThis is a display-only field.An attempt to add the field to the query bar HTML input element failed because it is a display-only field. Only fields holding actual database values, such as an integer field, are included.9275 ErrorStatus History operation valid only in modify mode.An attempt to display the status history failed because it is valid only in modify mode, not for submit or query.9276 ErrorThere is no valid web view for this form.You tried to drill down on a table or to display, by using an Open Window action, a form that does not have a web view defined. This could be a form on a 5.x or later environment in which no web view was defined or a form on a pre-5.x BMC Remedy AR System server. The system cannot open a form that does not have a valid web view on the web. If the server is a 5.x or later server, add a web view for the form, and you can then open it. If the server is pre-5.x, upgrade the server to 5. x or later and then create a web view for the form before you can open it.9277 ErrorFound more entries than expected during workflow processing.A failure occurred because multiple matches were found in the form during a Set Fields or Push Fields action. All workflow processing is stopped. To continue, configure the workflow in BMC Remedy Developer Studio for a different multiple match response in BMC Remedy AR System, for example, Set Field to $NULL$, or Use First Matching Request.9278 ErrorNo item matches active link conditions; this operation has been defined so that no match generates an error.A failure occurred because no matches were found in the form during a Set Fields or Push Fields action. All workflow processing is stopped. To continue, configure the workflow in BMC Remedy Developer Studio for a different multiple match response in BMC Remedy AR System, for example, Set Field to $NULL$.9280 ErrorA failure occurred because the name of the server<serverName> is not in the list of valid mid tier servers -<serverName>.A failure occurred because the name of the server is not in the list of valid mid tier servers. To continue, verify that a valid server exists.9281 ErrorA failure occurred in the process used by the Set Fields action.A failure occurred in the process used by the Set Fields action. No results were returned. To continue, verify that the process works independently of the Set Fields action.9282 ErrorFailed to create the following menu:<menuName>.An attempt to expand the query string for a dynamic menu, such as a search menu, failed. The server could not parse the query correctly.9283 ErrorAn internal error has occurred during workflow processing: <errorMessageString>.An internal system failure occurred during the execution of this active link. This error is not due, however, to a more common multiple match or no match error.9289 ErrorInvalid BMC Remedy AR System Server Name.A failure occurred during login because you entered an invalid server name.9290 ErrorA failure occurred because the remote server <remoteServer> is not reachable.9291 ErrorNot a valid administrator password on the server -<serverName>. Please add/modify the password for the server in configuration page.The mid tier administrator password you specified is not recognized. To continue, try re-entering your password or contact your BMC Remedy AR System administrator for assistance.9292 ErrorNot a valid administrator user on the server -<serverName>.A failure occurred during login because the system does not recognize this user name as a valid administrator. To continue, enter a different login name.9294 ErrorYour query has returned too many results. Narrow your query criteria, specify a smaller maximum number of queries to return, or ask your administrator to specify a smaller chunk size for the table or results list.You are running out of available memory because the query returned too many results. To continue, rewrite your query to return fewer requests.9295 ErrorIncorrect login parameters. Web page, user, and/or server name(s) must be provided.A login failure occurred because you did not enter certain parameters, for example, form name or user name. To continue, enter the missing parameters.9296 ErrorNo matches were found for your qualification.No matches were found for your qualification. However, processing continues to display the zero matches label in the results list header. To continue, refine the qualification to make sure that it returns at least one matching request.NoteIcon This message is returned by web clients. In the same situation, similar messages are returned by web services and API programs (see error message 302).9298 ErrorUnable to convert. Your query is too complex. The default results list query will be used.The BMC Remedy AR System query format failed to convert into the report engine's format because the converted query was too complex. The converted query is replaced by the original BMC Remedy AR System query string.9299 WarningThis record has been updated by another user since you retrieved it. Saving your changes will overwrite the changes made by that user. Do you want to save your changes.9300 ErrorThe Run Process specified in the active link action failed because this Run Process command is not supported.The Run Process active link action failed because this Run Process command is not supported. To continue, use a different Run Process command.9301 ErrorThe report file cannot be retrieved from the server to the mid tier: fileName.The report file cannot be retrieved from the server to the mid tier server.9302 ErrorThrow Error - 9302.This generic message indicates an error occurred.9303 ErrorUnable to retrieve a user from the user pool on this old version of AR System.An attempt to retrieve a user from the pool of available users failed because a user pool does not work with this version of BMC Remedy AR System.9304 ErrorUnable to retrieve the report file.An attempt to retrieve the report file failed because the ReportServlet used by the Open Window active link action could not create a temporary directory in which to hold the report or extract the report to the directory.9305 WarningUnable to translate the group names in the Group List or Assignee Group Field into group IDs.The qualification failed because the system was unsuccessful in translating the group names into group IDs.9306 ErrorAttempt to divide by zero (0) in arithmetic operation.The qualification failed because of an illegal mathematical operation. A NUL value is returned. To continue, rewrite the query so that you do not divide by zero.9307 ErrorDate is out of allowed range of 1970 to 2037 for web client.You entered a value for a date/time field that does not fall in the supported range. To continue, enter a date in the range between 1970 to 2037.9308 ErrorPage has been updated. Please Refresh to get the updated page.Your form is outdated. The form was recently updated in memory. To continue, click the Refresh button on the page.9309 ErrorShow Status History action ignored. Status History field does not exist in form.The status history failed to appear in a new window because the Status History field is missing from the form. The request to display the status history is ignored.9310 ErrorInvalid time format. Please enter time in this format:You entered the wrong time format into the Calendar popup window.9325 ErrorYou have entered an improperly formatted value for a currency field.You did not enter a valid currency value on a currency field. For example, you entered 1,00 USD.9326 ErrorYou have entered an invalid currency type.The specified currency value is a not an allowable currency type. Contact your BMC Remedy AR System administrator.9327 ErrorYou have entered an invalid currency code for a currency field. Using previous valid code.You did not enter an allowable currency code on a currency field. The mid tier returns the field results based on the last previous valid currency code.9329 ErrorThe location does not have the required parameter server or webService.When details about the web service request were extracted from the input document, this error message was returned because the namespace was incorrectly formatted. The location must include the server and the web service parameters.9330 ErrorNo Web Service named <webServiceName> exists in server <serverName>.When a web service request was processed on the mid tier, this error message was returned because no such web service exists on the server.9331 ErrorNo operation named <operationName> exists in web service <webServiceName>.When a web service request was processed on the mid tier, this error message was returned because no such operation was found in the web service.9332 ErrorInvalid operation type <operationType> specified on web service container.The mid tier could not make the necessary API call because an invalid operation type was specified for the web service.9334 ErrorThe XPATH expression <xpathExpression> is not found in input mapping.A string value could not be substituted for the XPATH expression because the mid tier server could not find the corresponding mapping node.9335 ErrorInvalid Date Time Value: <dateTimeString>.The mid tier could not parse the XML date/time string.9336 ErrorInvalid URL for accessing WSDL: <requested_URL>.The WSDLServlet servlet could not generate the WSDL for a requested web service because the URL was invalid.9337 ErrorNo web service definition found: <webServiceName>.A web service was requested, but the mid tier could not locate the web service definition.9338 ErrorNo authentication information found: <serverName>.Authentication information is not supplied in the web service request, and anonymous user information is not specified in the BMC Remedy Mid Tier Configuration Tool.9339 ErrorData types of the two operands do not match).9341 ErrorNo Preference server or Home Page Server specified. Home Page needs a server.The AR Server field on the Home Page tab of the BMC Remedy AR System User Preference form and the Server Name field on the Home Page Settings page of the BMC Remedy Mid Tier Configuration Tool are blank. Specify a server in the BMC Remedy Mid Tier Configuration Tool or specify a server for this user in the BMC Remedy AR System User Preference form.9342 ErrorNo servers configured in Mid Tier Configuration. Home Page needs a configured server.No BMC Remedy AR System servers are configured in the BMC Remedy Mid Tier Configuration Tool. Contact your BMC Remedy AR System administrator.9343 ErrorNo Preference form or ServerSetting Home Page form specified. Home Page needs a form.The Form Name field on the Home Page tab of the BMC Remedy AR System User Preference form and the Default Home Form field on the Configuration tab of the Server Information dialog box are blank. Specify a default home page form in the Server Information dialog box or specify a home page form for this user in the BMC Remedy AR System User Preference form.9344 ErrorCannot connect to server serverName to access Home Page.Unable to connect to the server that contains the Home Page form. This can occur when the server is down or when the network is too slow (leading to a time-out). Contact your BMC Remedy AR System administrator.9345 ErrorYour user name or password was not accepted by any AR System server configured in the BMC Remedy Mid Tier Configuration Tool.Your user name or password was not accepted by any BMC Remedy AR System server configured in the BMC Remedy Mid Tier Configuration Tool. Contact your BMC Remedy AR System administrator.9350 ErrorNetwork protocol/data error when performing data operation. Please contact administrator.An internal error occurred because parameters passed to the back channel were incorrect.9351 ErrorUnable to setup data connection, which is preventing the application from working correctly.An internal error occurred during a back channel request from the browser to the mid tier server.9352 ErrorA form definition has been changed, so unable to retrieve data. Please contact administrator.The definition of a form that users have loaded was changed in such a way that a table on the form cannot be refreshed.9353 ErrorThe operation cannot be completed because you have logged out.A user who is logging out tried to make requests to the mid tier. This can occur if a user opened multiple windows and is trying to do something in one window at almost the same time as logging out in another window.9354 ErrorNo compatible (standard or web fixed) view for the requested form can be found - unable to display form.The mid tier could not find a view to display for the specified form. This can occur if a form contains only relative views because relative views are no longer supported by the mid tier.9355 ErrorThe requested form <formName> cannot be found.The form specified in the URL does not exist.9356 ErrorUnsupported locale <locale>.A user tried to load a form on the mid tier in an unsupported locale. The locale is specified either in the languages selection in the browser or in the user preferences. For information about supported locales, see Localizing the mid tier.9357 ErrorUnsupported timezone <timeZone>.A user tried to load a form on the mid tier in an unsupported time zone. The time zone is determined at login time or from the user preferences.9358 ErrorApplication does not exist on server - <serverName>.The application specified in the URL does not exist.9359 ErrorServer and form names are required in the URL.The view form servlet requires the server and form parameters to be in the URL (other parameters are optional). If these parameters are missing this error occurs.9360 ErrorThe size of the current global fields exceeded the allowable 3.5 KB size.The mid tier implementation of global fields limits the amount of data that can be stored in all global fields to about 3.5 KB. To allow more storage, install a supported version of Flash Player.9361 ErrorA current session exists for a different user -<userName>.Log off the existing session and try again. The view form servlet was used with user name and password parameters that differ from the ones used to create the current session.9362 ErrorAliases are not supported by the AR System 6.3 and later. Use form, view, or app parameters instead.Aliases are not supported by the BMC Remedy AR System 6.3 and later mid tier so the alias parameters to the view form servlet are no longer supported.9363 ErrorThe action failed because the mid tier is unavailable or could not be contacted.The action failed because the mid tier is unavailable or could not be contacted. This error could also occur if the action was canceled by the user while it was being performed.9364 ErrorOne or more items match active link conditions; this operation has been defined so that any match generates an error.Used by the Push Fields action when the administrator configures it to return an error when multiple matches are found. On the If Action page, the If Any Requests Match field has Display 'Any Match' Error selected.9365 ErrorNo rows have been selected for ModifyAll.The modify all action requires that at least one row is selected in the results list. Select one or more entries in the results list and try again.9366 ErrorThe Run Process active link action failed because this Run Process command was used incorrectly.The Run Process active link action failed because this Run Process command was used incorrectly, such as using a Run Process command that does not return a value in a Set Fields action.9367 ErrorData types are not appropriate for relational operation.The data types of the fields used in a relational operation are not consistent with the operations allowed for that operation. For information about the allowed data types of operations, see Operator types.9368 ErrorInvalid data type in active link.An BMC Remedy AR System API function call was used with an invalid data type.9369 ErrorFunction not supported.BMC Remedy AR System functions (used in assignments) such as CONVERT, ENCRYPT, and DECRYPT are supported only in filter workflow and not in active link workflow.9370 ErrorThe guide <guideName> is invalid or not owned by any form.The guide or the primary form for the guide could not be determined. Dynamic workflow allows an active link guide to be named at run time. If the guide name does not exist on the server, or the guide does not have a primary form, this error is generated.9371 ErrorThe definition for the guide <guideName> cannot be found and might be missing from the AR System server.The guide specified (via dynamic workflow) is invalid. With dynamic workflow you can specify guide names from workflow instead of in BMC Remedy Developer Studio. The definition for the guide specified cannot be found and might be missing from the BMC Remedy AR System server.9372 ErrorThe specified menu is invalid.The specified menu is invalid. This error occurs if the browser client requests a nonexistent menu. This could be due to an active link change fields action that changed the menu for a character field.9373 ErrorYou have entered an improperly formatted value for a real field.The indicated value was entered as a value for a field with a data type of real. The value is not a legal real value. Change the value to a legal real value, and retry the operation.9374 ErrorYou have entered an improperly formatted value for a decimal field.The value was entered in a field with a data type of decimal. The value is not a legal decimal value. Change the value to a legal decimal value, and retry the operation.9375 ErrorYou entered a nondigit character for a numeric field.A numeric field contains a nondigit value. You can specify digits only for integer (numeric) fields. Change the value to a legal integer value, and retry the operation.9376 ErrorFormat of date or time value is not recognized.The format of a time value is not recognized. You can omit the time portion of a time stamp and include only the date, or you can omit the date and include only the time. The portion omitted defaults to an appropriate value. However, the format of the specified portion of time must match the rules for time stamps. Fix the format of the line, and perform the search again.9377 ErrorTime is out of allowed range of <number> and <number>.BMC Remedy AR System cannot process a time that is out of range. Try again, using a time within the allowed range.9378 WarningThe query matched more than the maximum number of entries specified for retrieval.The retrieval included a search that selected more than the maximum number of items allowed by the client or server. The call returned the number of entries specified as the maximum. Narrow your search criteria or change the limit in user preferences. Use the local setting using the BMC Remedy AR System User Preference form and modify the settings for Limit Number of Items Returned. Only an administrator can change the server settings.9379 WarningThe time entered is invalid. Therefore, the popup will be initialized to the current time.Click the icon to the right of the field to select the desired time.9380 WarningThe date entered is invalid. Therefore, the popup will be initialized to the current date.Click the icon to the right of the field to select the desired date.9381 ErrorNo such user exists.A user with the supplied login ID cannot be found on any AR System server.9382 ErrorAuthentication failed.The supplied password is invalid.9383 ErrorNo forms found containing field ID: <IDNumber>.9384 ErrorMultiple forms contain field ID: IDNumber.9385 ErrorUnable to contact the web server to the complete action.9388 ErrorAuthentication failed.An authentication failure occurred, such as supplying the wrong administrator password on the configuration page for the AR System server.9389 ErrorThe length of server name or form name exceeds the allowed length.The length of server name or form name passed to the viewformservlet is longer than the allowed length. The maximum server name length is 64 characters and maximum form name length is 254 characters.9390 ErrorThere is either no definition or the user user does not have permission for keys keys for the module module in the server <serverName>.The requested plug-in definition does not exist in the Data Visualization Definition form, or the user does not have correct permissions to access that data.9391 ErrorThe requested plug-in <moduleName> does not exist in the list of Data Visualization Module Server(s) in the BMC Remedy Mid Tier Configuration Tool.The requested plug-in does not exist in the list of plug-in servers in the BMC Remedy Mid Tier Configuration Tool.9392 ErrorCould not create the required directory <directoryName> to download module module from server <serverName>.The mid tier should be able to create a directory so that it can download the plug-in information. If it cannot create a directory, this error occurs.9393 ErrorThere is no jar file for the module module in the server <serverName>.The plug-in JAR file is missing for a given data visualization module in the Data Visualization Module form.9394 ErrorCould not download the module jar file for module module from the server <serverName>. Please see the log file for further details.The mid tier cannot download a JAR file for a particular data visualization plug-in in a server.9395 ErrorUnable to find the module module in the mid-tier plug-ins directory.Flashboards and reports are built on mid-tier local data visualization modules. This error occurs if a request is made for these modules and they are unavailable in the local plug-ins directory.9396 ErrorUnable to find the properties file fileName in the mid-tier plug-ins directory.The available local plug-ins directory should have a details.txt file that contains the information of these modules. If the file is missing, this error occurs.9397 ErrorUnable to find the JAR file fileName in the mid-tier plug-ins directory.A JAR file is missing in the local plug-ins directory.9398 ErrorThe module class class does not match the module module from server <serverName>.The entry class specified for the module does not implement the plug-in interface provided in the GraphPlugin.jar file.9399 ErrorThe module class class for the module module from server <serverName> is not found in the jar file.The entry class specified for a module was not implemented.9400 ErrorThe module class class for the module module from server <serverName> does not have the required no arg constructor.The plug-in container in the mid tier cannot instantiate the specified entry class for a module. Was this page helpful? Yes No Submitting... What is wrong with this page? Confusing Missing screenshots, graphics Missing technical details Needs a video Not correct Not the information I expected Your feedback: Send Skip Thank you Last modified by Anagha Deshpande on Mar 30, 2020 Comments 9200 to 9699 - BMC Remedy Mid Tier messages and notifications Error messages 9401 to 9699 | https://docs.bmc.com/docs/ars1808/error-messages-9200-to-9400-820500492.html | CC-MAIN-2020-24 | refinedweb | 6,233 | 59.6 |
I’ve got some of my colleagues interested in using Python to cruch their numbers, and that’s great, because I’m sure it will make their lives much better in the long run. It’s also great because they are pushing me to figure out the easy way to do things, since they are used to certain conveniences from their past work with R and STATA.
For example, Kyle wanted to know does Python make it easy to deal with the tables of data he is faced with every day?
It was by showing my ignorance in a blog post a year ago that I learned about handing csv files with the csv.DictReader class. And this has been convenient enough for me, but it doesn’t let Kyle do the things the way he is used to, for example using data[[‘iso3’, ‘gdp’], :] to select these two important columns from a large table.
Enter larry, the labeled array. Maybe someone will tell me an even cleaner way to deal with these tables, but after a quick
easy_install la, I was able to load up some data from the National Cancer Institute (NCI) as follows:
import csv import la from pylab import * csv_file = csv.reader(open('accstate5.txt')) columns = csv_file.next()[1:] rows = [] data = [] for d in csv_file: rows.append(d[0]) data.append(d[1:]) T = la.larry(data, [rows, columns])
That’s pretty good, but the data is not properly coded as floating point…
T = la.larry(data, [rows, columns]).astype(float)
Now, T has the whole table, indexed by state and by strange NCI code
T.lix[['WA', 'OR', 'ID', 'MO'], ['RBM9094']]
Perfect, besides being slightly ugly. But is there an easier way? the la documentation mentions a fromcsv method. Unfortunately
la.larry.fromcsv doesn’t like these short-format tables.
la.larry.fromcsv('accstate5.txt') # produces a strange error
Now, if you read the NCI data download info page, where I retrieved this data from, you will learn what each cryptic column heading means:
Column heading format: [V] [RG] [T] [A], where V = variable: R, C, LB, or UB RG = race / gender: BM, BF, WM, WF (B = black, W = white, F = female, M = male) T = calendar time: 5094, 5069, 7094, and the 9 5-year periods 5054 through 9094 A = age group: blank (all ages), 019, 2049, 5074, 75+ Example: RBM5054 = rate for black males for the time period 1950-1954
That means I’m not really dealing with a 2-dimensional table after all, and maybe I’d be better off organizing things the way they really
are:
data_dict_list = [d for d in csv.DictReader(open('accstate5.txt'))] data_dict = {} races = ['B', 'W'] sexes = ['M', 'F'] times = range(50,95,5) for d in data_dict_list: for race in races: for sex in sexes: for time in times: key = 'R%s%s%d%d' % (race, sex, time, time+4) val = float(d.get(key, nan)) # larry uses nan to denote missingdata data_dict[(d['STATE'], race, sex, '19%s'%time)] = val R = la.larry.fromdict(data_dict)
In the original 2-d labeled array we can compare cancer rates thusly:
states = ['OR', 'WA', 'ID', 'MT'] times = arange(50,94,5) selected_cols = ['RWM%d%d' % (t, t+4) for t in times] T_selected = T.lix[states, selected_cols] plot(1900+times, T_selected.getx().T, 's-', linewidth=3, markersize=15, alpha=.75) legend(T_selected.getlabel(0), loc='lower right', numpoints=3, markerscale=.5) ylabel('Mortality rate (per 100,000 person-years)') xlabel('Time (years)') title('Mortality rate for all cancers over time') axis([1945, 1995, 0, 230])
It is slightly cooler to do the same plot with the multi-dimensional array
clf() subplot(2,1,1) times = [str(t) for t in range(1950, 1994, 5)] states = ['OR', 'WA', 'ID', 'MT'] colors = ['c', 'b', 'r', 'g'] for state, color in zip(states, colors): plot(R.getlabel(3), R.lix[[state], ['W'], ['M'], :].getx(), 's-', color=color, linewidth=3, markersize=15, alpha=.75) legend(T_selected.getlabel(0), loc='lower right', numpoints=3, markerscale=.5) ylabel('Mortality rate (per 100,000 person-years)') xlabel('Time (years)') title('Mortality rate for all cancers in white males over time') axis([1945, 1995, 0, 230])
That should be enough for me to remember how to use larry when I come back to it in the future. For my final trick, let’s explore how to use larrys to add a column to a 2-dimensional table. (There is some important extension of this to multidimensional tables, but it is confusing me a lot right now.)
states = T.getlabel(0) X = la.larry(randn(len(states), 1), [states, ['new covariate']]) T = T.merge(X) states = ['OR', 'WA', 'ID', 'MT'] T.lix[states, ['RWM9094', 'new covariate']]
Ok, encore… how well does larry play with PyMC? Let’s use a Gaussian Process model to find out.
from pymc import * M = gp.Mean(lambda x, mu=R.mean(): mu * ones(len(x))) C = gp.Covariance(gp.matern.euclidean, diff_degree=2., amp=1., scale=1.) mesh = array([[i,j,k,.5*l] for i in range(52) for j in range(2) for k in range(2) for l in arange(9)]) obs_mesh = array([[i,j,k,.5*l] for i in range(52) for j in range(2) for k in range(2) for l in range(9) if not isnan(R[i,j,k,l])]) obs_vals = array([R[i,j,k,l] for i in range(52) for j in range(2) for k in range(2) for l in range(9) if not isnan(R[i,j,k,l])]) obs_V = array([1. for i in range(52) for j in range(2) for k in range(2) for l in range(9) if not isnan(R[i,j,k,l])]) gp.observe(M, C, obs_mesh, obs_vals, obs_V) imputed_vals = M(mesh) imputed_std = sqrt(abs(C(mesh))) Ri = la.larry(imputed_vals.reshape(R.shape), R.label) subplot(2,1,2) for state, color in zip(states, colors): t = [float(l) for l in Ri.getlabel(3)] plot(t, R.lix[[state], ['B'], ['M'], :].getx(), '^', color=color, linewidth=3, markersize=15, alpha=.75) plot(t, Ri.lix[[state], ['B'], ['M'], :].getx(), '-', color=color, linewidth=2, alpha=.75) ylabel('Mortality rate (per 100,000 person-years)') xlabel('Time (years)') title('Mortality rate for all cancers in black males over time') axis([1945, 1995, 0, 630]) | https://healthyalgorithms.com/2010/06/ | CC-MAIN-2018-30 | refinedweb | 1,063 | 63.39 |
You can subscribe to this list here.
Showing
9
results of 9
On Domingo 18 Enero 2009, Nikodemus Siivola wrote:
>
Don't you need an allow-with-interrupts for that with-interrupts to
actually do something?.
Cheers, Gabor
> We assume in a dozen other places that (aligned) word-sized stores are
> atomic.
Why is this reasonable to assume?
--
LinkedIn Profile:
Xing Profile:
On 18-Jan-09, at 10:08 AM, Tobias C. Rittweiler wrote:
> ?
We assume in a dozen other places that (aligned) word-sized stores are
atomic.
Paul Khuong
?
> + (t
> + (let ((new (cons (list name test-function hash-function) old)))
> + (unless (eq old (compare-and-swap (symbol-value '*user-hash-table-tests*)
> + old new))
> + (go :retry)))))))
> name)
> +
> +(defmacro define-hash-table-test (name test hash)
> + #!+sb-doc
> + "Defines NAME as a new kind of hash table test for use with the :TEST
> +argument to MAKE-HASH-TABLE.
> +
> +TEST must name a two argument equivalence predicate, or be a LAMBDA form
> +implementing one in the current lexical environment.
> +
> +HASH must name a hash function consistent with TEST, or be a LAMBDA form
> +implementing one in the current lexical environment. The hash function must
> +compute the same hash code for any two objects for which TEST returns true,
> +and subsequent calls with already hashed objects must always return the same
> +hash code.
Looking at the implementation, even if the TEST or HASH argument is a
symbol, its value is retrieved from the current lexical environment,
i.e.
(flet ((my-test (x y) ...)
(my-sxhash (x) ...))
(define-hash-table-test my-hash-table-test my-test my-sxhash))
does work.
I think the documentation should say so; my attempt:
Scratch the "in the current-lexical-environment" in both paragraphs,
then add a new paragraph which says:
The functional values of the TEST and HASH arguments are retrieved
from the current lexical environment.
I like Christophe's suggestion for a new sub-namespace.
-T.
"Nikodemus Siivola" <nikodemus@...> writes:
> The attached patch provides my first on SB-EXT:DEFINE-HASH-TABLE-TEST:
The interface that (as I understand it) is provided by Allegro might
be worth a look. There, if I remember correctly, instead of having a
name for a test/hash-value pair, there's an extra keyword argument to
make-hash-table. This is more flexible but maybe also more
error-prone?
I vaguely wonder about having define-hash-table-test define functions
named like (hash-table-test <identifier>) and (hash-table-hash
<identifier>), so that the Allegro interface can be easily supported
while still also allowing easy defaulting based on a
define-hash-table-test name.
Sunday afternoon witterings from me,
Best,
Christophe
The attached patch provides my first on SB-EXT:DEFINE-HASH-TABLE-TEST:
SB-EXT:DEFINE-HASH-TABLE-TEST
* Based on old SB-INT:DEFINE-HASH-TABLE-TEST, but:
** Macro, not a function.
** Arguments are symbols or lambda forms.
** :TEST accepts only 'NAME, not #'TEST as well.
** Pick up redefinitions of the test and hash-function without
re-executing the D-H-T-T form.
* Documentation -- other hash-table extensions as well.
Cheers,
-- Nikodemus | http://sourceforge.net/p/sbcl/mailman/sbcl-devel/?viewmonth=200901&viewday=18 | CC-MAIN-2014-41 | refinedweb | 516 | 54.42 |
Accessing dict keys like an attribute?
Update - 2020
Since this question was asked almost ten years ago, quite a bit has changed in Python itself since then.
While the approach in my original answer is still valid for some cases, (e.g. legacy projects stuck to older versions of Python and cases where you really need to handle dictionaries with very dynamic string keys), I think that in general the dataclasses introduced in Python 3.7 are the obvious/correct solution to vast majority of the use cases of
AttrDict.
Original answer
The best way to do this is:
class AttrDict(dict): def __init__(self, *args, **kwargs): super(AttrDict, self).__init__(*args, **kwargs) self.__dict__ = self
Some pros:
- It actually works!
- No dictionary class methods are shadowed (e.g.
.keys()work just fine. Unless - of course - you assign some value to them, see below)
- Attributes and items are always in sync
- Trying to access non-existent key as an attribute correctly raises
AttributeErrorinstead of
KeyError
- Supports [Tab] autocompletion (e.g. in jupyter & ipython)
Cons:
- Methods like
.keys()will not work just fine if they get overwritten by incoming data
- Each
AttrDictinstance actually stores 2 dictionaries, one inherited and another one in
__dict__
- Causes a memory leak in Python < 2.7.4 / Python3 < 3.2.3
- Pylint goes bananas with
E1123(unexpected-keyword-arg)and
E1103(maybe-no-member)
- For the uninitiated it seems like pure magic.
A short explanation on how this works
- All python objects internally store their attributes in a dictionary that is named
__dict__.
- There is no requirement that the internal dictionary
__dict__would need to be "just a plain dict", so we can assign any subclass of
dict()to the internal dictionary.
- In our case we simply assign the
AttrDict()instance we are instantiating (as we are in
__init__).
- By calling
super()'s
__init__()method we made sure that it (already) behaves exactly like a dictionary, since that function calls all the dictionary instantiation code.
One reason why Python doesn't provide this functionality out of the box
As noted in the "cons" list, this combines the namespace of stored keys (which may come from arbitrary and/or untrusted data!) with the namespace of builtin dict method attributes. For example:
d = AttrDict()d.update({'items':["jacket", "necktie", "trousers"]})for k, v in d.items(): # TypeError: 'list' object is not callable print "Never reached!"
You can have all legal string characters as part of the key if you use array notation.For example,
obj['!#$%^&*()_']
Wherein I Answer the Question That Was Asked
Why doesn't Python offer it out of the box?
I suspect that it has to do with the Zen of Python: "There should be one -- and preferably only one -- obvious way to do it." This would create two obvious ways to access values from dictionaries:
obj['key'] and
obj.key.
Caveats and Pitfalls
These include possible lack of clarity and confusion in the code. i.e., the following could be confusing to someone else who is going in to maintain your code at a later date, or even to you, if you're not going back into it for awhile. Again, from Zen: "Readability counts!"
'spam'd[KEY] = 1# Several lines of miscellaneous code here...assert d.spam == 1KEY =
If
d is instantiated or
KEY is defined or
d[KEY] is assigned far away from where
d.spam is being used, it can easily lead to confusion about what's being done, since this isn't a commonly-used idiom. I know it would have the potential to confuse me.
Additonally, if you change the value of
KEY as follows (but miss changing
d.spam), you now get:
'foo'd[KEY] = 1# Several lines of miscellaneous code here...assert d.spam == 1Traceback (most recent call last): File "<stdin>", line 2, in <module>AttributeError: 'C' object has no attribute 'spam'KEY =
IMO, not worth the effort.
Other Items
As others have noted, you can use any hashable object (not just a string) as a dict key. For example,
2, 3): True,}assert d[(2, 3)] is Trued = {(
is legal, but
type('C', (object,), {(2, 3): True}) d = C()assert d.(2, 3) is True File "<stdin>", line 1 d.(2, 3) ^SyntaxError: invalid syntaxgetattr(d, (2, 3))Traceback (most recent call last): File "<stdin>", line 1, in <module>TypeError: getattr(): attribute name must be stringC =
is not. This gives you access to the entire range of printable characters or other hashable objects for your dictionary keys, which you do not have when accessing an object attribute. This makes possible such magic as a cached object metaclass, like the recipe from the Python Cookbook (Ch. 9).
Wherein I Editorialize
I prefer the aesthetics of
spam.eggs over
spam['eggs'] (I think it looks cleaner), and I really started craving this functionality when I met the
namedtuple. But the convenience of being able to do the following trumps it.
'spam eggs ham'VALS = [1, 2, 3] d = {k: v for k, v in zip(KEYS.split(' '), VALS)}assert d == {'spam': 1, 'eggs': 2, 'ham': 3}>>>KEYS =
This is a simple example, but I frequently find myself using dicts in different situations than I'd use
obj.key notation (i.e., when I need to read prefs in from an XML file). In other cases, where I'm tempted to instantiate a dynamic class and slap some attributes on it for aesthetic reasons, I continue to use a dict for consistency in order to enhance readability.
I'm sure the OP has long-since resolved this to his satisfaction, but if he still wants this functionality, then I suggest he download one of the packages from pypi that provides it:
Bunch is the one I'm more familiar with. Subclass of
dict, so you have all that functionality.
AttrDict also looks like it's also pretty good, but I'm not as familiar with it and haven't looked through the source in as much detail as I have Bunch.
- Addict Is actively maintained and provides attr-like access and more.
- As noted in the comments by Rotareti, Bunch has been deprecated, but there is an active fork called Munch.
However, in order to improve readability of his code I strongly recommend that he not mix his notation styles. If he prefers this notation then he should simply instantiate a dynamic object, add his desired attributes to it, and call it a day:
type('C', (object,), {}) d = C() d.spam = 1d.eggs = 2d.ham = 3assert d.__dict__ == {'spam': 1, 'eggs': 2, 'ham': 3}C =
Wherein I Update, to Answer a Follow-Up Question in the Comments
In the comments (below), Elmo asks:
What if you want to go one deeper? ( referring to type(...) )
While I've never used this use case (again, I tend to use nested
dict, forconsistency), the following code works:
type('C', (object,), {}) d = C()for x in 'spam eggs ham'.split(): setattr(d, x, C()) i = 1 for y in 'one two three'.split(): setattr(getattr(d, x), y, i) i += 1...assert d.spam.__dict__ == {'one': 1, 'two': 2, 'three': 3}C = | https://codehunter.cc/a/python/accessing-dict-keys-like-an-attribute | CC-MAIN-2022-21 | refinedweb | 1,189 | 63.49 |
bcrypt.
Note: JRuby versions of the bcrypt gem
<= 2.1.3 had a security
vulnerability that
was fixed in
>= 2.1.4. If you used a vulnerable version to hash
passwords with international characters in them, you will need to
re-hash those passwords. This vulnerability only affected the JRuby gem.
How to install bcrypt
gem install bcrypt
The bcrypt gem is available on the following ruby platforms:
- JRuby
- RubyInstaller 1.8, 1.9, 2.0, and 2.1 builds on win32
- Any 1.8, 1.9, 2.0, 2.1, or 2.2 Ruby on a BSD/OS X/Linux system with a compiler
How to use
bcrypt() in your Rails application
Note: Rails versions >= 3 ship with
ActiveModel::SecurePassword which uses bcrypt-ruby.
has_secure_password docs
implements a similar authentication strategy to the code below.
The User model
require 'bcrypt' class User < ActiveRecord::Base # users.password_hash in the database is a :string include BCrypt def password @password ||= Password.new(password_hash) end def password=(new_password) @password = Password.create(new_password) self.password_hash = @password end end
Creating an account
def create @user = User.new(params[:user]) @user.password = params[:password] @user.save! end
Authenticating a user
def login @user = User.find_by_email(params[:email]) if @user.password == params[:password] give_token else redirect_to home_url end end
If a user forgets their password?
# assign them a random one and mail it to them, asking them to change it def forgot_password @user = User.find_by_email(params[:email]) random_password = Array.new(10).map { (65 + rand(58)).chr }.join @user.password = random_password @user.save! Mailer.create_and_deliver_password_change(@user, random_password) end
How to use bcrypt-ruby in general
Check the rdocs for more details -- BCrypt, BCrypt::Password.
How
bcrypt() works
bcrypt() is a hashing algorithm designed by Niels Provos and David Mazières of the OpenBSD Project.
Background
Hash algorithms take a chunk of data (e.g., your user's password) and create a "digital fingerprint," or hash, of it. Because this process is not reversible, there's no way to go from the hash back to the password.
In other words:
hash(p) #=> <unique gibberish>
You can store the hash and check it against a hash made of a potentially valid password:
<unique gibberish> =? hash(just_entered_password)
Rainbow Tables
But even this has weaknesses -- attackers can just run lists of possible passwords through the same algorithm, store the results in a big database, and then look up the passwords by their hash:
PrecomputedPassword.find_by_hash(<unique gibberish>).password #=> "secret1"
Salts
The().
Cost Factors
In addition,
bcrypt() allows you to increase the amount of work required to hash a password as computers get faster. Old
passwords will still work fine, but new passwords can keep up with the times.
The default cost factor used by bcrypt-ruby is 10, which is fine for session-based authentication. If you are using a stateless authentication architecture (e.g., HTTP Basic Auth), you will want to lower the cost factor to reduce your server load and keep your request times down. This will lower the security provided you, but there are few alternatives.
To change the default cost factor used by bcrypt-ruby, use
BCrypt::Engine.cost = new_value:
BCrypt::Password.create('secret').cost #=> 10, the default provided by bcrypt-ruby # set a new default cost BCrypt::Engine.cost = 8 BCrypt::Password.create('secret').cost #=> 8
The default cost can be overridden as needed by passing an options hash with a different cost:
BCrypt::Password.create('secret', :cost => 6).cost #=> 6
More Information:
Etc
- Author :: Coda Hale coda.hale@gmail.com
- Website :: | http://www.rubydoc.info/github/codahale/bcrypt-ruby/frames | CC-MAIN-2017-13 | refinedweb | 584 | 51.14 |
my-ip 0.2.0
Get your private and public ip addresses
To use this package, put the following dependency into your project's dependencies section:
dub.json
dub.sdl
my-ip
A library used to retrieve private and public ip addresses.
Usage
Private addresses
An array with the private addresses, both ipv4 and ipv6, can be obtained using the
privateAddresses function.
Public address
The public address is retrieved from a web service throught the
publicAddress function using a blocking socket. It returns the the ip in dot notation or an
empty string if a problem has occured.
Specific function to get either ipv4 or ipv6 can be used as
publicAddress4 and
publicAddress6.
import myip; void main(string[] args) { writeln("Your ipv4 is ", publicAddress4); auto ipv6 = publicAddress6; if(ipv6.length) { writeln("Your ipv6 is ", publicAddress6); } else { writeln("You don't have an ipv6"); } }
- Registered by Kripth
- 0.2.0 released 8 months ago
- Kripth/my-ip
- MIT
- Authors:
-
- Sub packages:
- my-ip:app
- Dependencies:
- none
- Versions:
- Show all 3 versions
- Download Stats:
8 downloads today
35 downloads this week
67 downloads this month
2640 downloads total
- Score:
- 1.2
- Short URL:
- my-ip.dub.pm | http://code.dlang.org/packages/my-ip | CC-MAIN-2018-43 | refinedweb | 195 | 54.12 |
module Network.SimpleServer.Examples.ChatServer(main, run) where import Data.Char import Data.List import System.Environment import qualified Network.SimpleServer as S -- Constants -- -- A Welcome message to send to clients when they connect. " ++ (unwords msg) -- The disconnect command causes the message "Goodbye!" to be sent to -- the client. Then they are disconnected from the server. disCmd = "/disconnect" disHandler :: S.CmdHandler disHandler _ server client = do S.respond client "Goodbye!" S.disconnect client -- The who command responds to the client with -- a list of usernames whoCmd = "/who" whoHandler :: S.CmdHandler whoHandler _ server client = do clients <- S.clientList server usernames <- mapM (flip S.lookup username) clients let message = "Users:\n" ++ (intercalate "\n" usernames) S.respond client message -- Builds a server on the given port, adds the commands -- starts the server, and waits for the word stop to be entered run :: Int -> IO () run port = do server <- S.new connHandler dissHandler port S.addCommand server whoCmd whoHandler S.addCommand server nameCmd nameHandler S.addCommand server disCmd disHandler S.addCommand server msgCmd msgHandler S.addCommand server pingCmd pingHandler S.start server putStrLn $ "Chat Server Started on Port: " ++ (show port) putStrLn $ "Type 'stop' to stop the server." waitStop server S.stop server putStrLn "Server Stopped" -- Waits for the word 'stop' to be entered waitStop :: S.Server -> IO () waitStop server = do string <- getLine case string of "stop" -> return () _ -> waitStop server -- Starts a server on the specified port or prints the usage message main = do args <- getArgs case args of [] -> printUsage (x:_) -> if isInt x then (return (read x)) >>= run else printUsage printUsage :: IO () printUsage = putStrLn "Usage ./ChatServer [port]" isInt :: [Char] -> Bool isInt = (== []) . (filter (not . isDigit)) | http://hackage.haskell.org/package/simple-server-0.0.3/docs/src/Network-SimpleServer-Examples-ChatServer.html | CC-MAIN-2016-07 | refinedweb | 273 | 62.24 |
What are metaclasses in Python?
Classes as objects
Before understanding metaclasses, you need to master classes in Python. And Python has a very peculiar idea of what classes are, borrowed from the Smalltalk language.
In most languages, classes are just pieces of code that describe how to produce an object. That's kinda true in Python too:
class ObjectCreator(object): pass... my_object = ObjectCreator()print(my_object)<__main__.ObjectCreator object at 0x8974f2c>
But classes are more than that in Python. Classes are objects too.
Yes, objects.
As soon as you use the keyword
class, Python executes it and createsan object. The instruction
class ObjectCreator(object): pass...
creates in memory an object with the name
ObjectCreator.
This object (the class) is itself capable of creating objects (the instances),and this is why it's a class.
But still, it's an object, and therefore:
- you can assign it to a variable
- you can copy it
- you can add attributes to it
- you can pass it as a function parameter
e.g.:
print(ObjectCreator) # you can print a class because it's an object<class '__main__.ObjectCreator'>def echo(o): print(o)... echo(ObjectCreator) # you can pass a class as a parameter<class '__main__.ObjectCreator'>print(hasattr(ObjectCreator, 'new_attribute'))FalseObjectCreator.new_attribute = 'foo' # you can add attributes to a classprint(hasattr(ObjectCreator, 'new_attribute'))Trueprint(ObjectCreator.new_attribute)foo ObjectCreatorMirror = ObjectCreator # you can assign a class to a variableprint(ObjectCreatorMirror.new_attribute)fooprint(ObjectCreatorMirror())<__main__.ObjectCreator object at 0x8997b4c>
Creating classes dynamically
Since classes are objects, you can create them on the fly, like any object.
First, you can create a class in a function using
class:
def choose_class(name): if name == 'foo': class Foo(object): pass return Foo # return the class, not an instance else: class Bar(object): pass return Bar... MyClass = choose_class('foo')print(MyClass) # the function returns a class, not an instance<class '__main__.Foo'>print(MyClass()) # you can create an object from this class<__main__.Foo object at 0x89c6d4c>
But it's not so dynamic, since you still have to write the whole class yourself.
Since classes are objects, they must be generated by something.
When you use the
class keyword, Python creates this object automatically. But aswith most things in Python, it gives you a way to do it manually.
Remember the function
type? The good old function that lets you know whattype an object is:
print(type(1))<type 'int'>print(type("1"))<type 'str'>print(type(ObjectCreator))<type 'type'>print(type(ObjectCreator()))<class '__main__.ObjectCreator'>
Well,
type has a completely different ability, it can also create classes on the fly.
type can take the description of a class as parameters,and return a class.
(I know, it's silly that the same function can have two completely different uses according to the parameters you pass to it. It's an issue due to backwardcompatibility in Python)
type works this way:
type(name, bases, attrs)
Where:
name: name of the class
bases: tuple of the parent class (for inheritance, can be empty)
attrs: dictionary containing attributes names and values
e.g.:
class MyShinyClass(object): pass
can be created manually this way:
type('MyShinyClass', (), {}) # returns a class objectprint(MyShinyClass)<class '__main__.MyShinyClass'>print(MyShinyClass()) # create an instance with the class<__main__.MyShinyClass object at 0x8997cec>MyShinyClass =
You'll notice that we use
MyShinyClass as the name of the classand as the variable to hold the class reference. They can be different,but there is no reason to complicate things.
type accepts a dictionary to define the attributes of the class. So:
class Foo(object): bar = True
Can be translated to:
type('Foo', (), {'bar':True})Foo =
And used as a normal class:
print(Foo)<class '__main__.Foo'>print(Foo.bar)Truef = Foo()print(f)<__main__.Foo object at 0x8a9b84c>print(f.bar)True
And of course, you can inherit from it, so:
class FooChild(Foo): pass
would be:
type('FooChild', (Foo,), {})print(FooChild)<class '__main__.FooChild'>print(FooChild.bar) # bar is inherited from FooTrueFooChild =
Eventually, you'll want to add methods to your class. Just define a functionwith the proper signature and assign it as an attribute.
def echo_bar(self): print(self.bar)... FooChild = type('FooChild', (Foo,), {'echo_bar': echo_bar})hasattr(Foo, 'echo_bar')Falsehasattr(FooChild, 'echo_bar')Truemy_foo = FooChild() my_foo.echo_bar()True
And you can add even more methods after you dynamically create the class, just like adding methods to a normally created class object.
def echo_bar_more(self): print('yet another method')... FooChild.echo_bar_more = echo_bar_morehasattr(FooChild, 'echo_bar_more')True
You see where we are going: in Python, classes are objects, and you can create a class on the fly, dynamically.
This is what Python does when you use the keyword
class, and it does so by using a metaclass.
What are metaclasses (finally)
Metaclasses are the 'stuff' that creates classes.
You define classes in order to create objects, right?
But we learned that Python classes are objects.
Well, metaclasses are what create these objects. They are the classes' classes,you can picture them this way:
MyClass = MetaClass()my_object = MyClass()
You've seen that
type lets you do something like this:
MyClass = type('MyClass', (), {})
It's because the function
type is in fact a metaclass.
type is themetaclass Python uses to create all classes behind the scenes.
Now you wonder "why the heck is it written in lowercase, and not
Type?"
Well, I guess it's a matter of consistency with
str, the class that createsstrings objects, and
int the class that creates integer objects.
type isjust the class that creates class objects.
You see that by checking the
__class__ attribute.
Everything, and I mean everything, is an object in Python. That includes integers,strings, functions and classes. All of them are objects. And all of them havebeen created from a class:
35age.__class__<type 'int'>def foo(): passfoo.__class__<type 'function'>class Bar(object): passb = Bar() b.__class__<class '__main__.Bar'>age =
Now, what is the
__class__ of any
__class__ ?
type 'type'> name.__class__.__class__<type 'type'> foo.__class__.__class__<type 'type'> b.__class__.__class__<type 'type'>age.__class__.__class__<
So, a metaclass is just the stuff that creates class objects.
You can call it a 'class factory' if you wish.
type is the built-in metaclass Python uses, but of course, you can create yourown metaclass.
The
__metaclass__ attribute
In Python 2, you can add a
__metaclass__ attribute when you write a class (see next section for the Python 3 syntax):
class Foo(object): __metaclass__ = something... [...]
If you do so, Python will use the metaclass to create the class
Foo.
Careful, it's tricky.
You write
class Foo(object) first, but the class object
Foo is not createdin memory yet.
Python will look for
__metaclass__ in the class definition. If it finds it,it will use it to create the object class
Foo. If it doesn't, it will use
type to create the class.
Read that several times.
When you do:
class Foo(Bar): pass
Python does the following:
Is there a
__metaclass__ attribute in
Foo?
If yes, create in-memory a class object (I said a class object, stay with me here), with the name
Foo by using what is in
__metaclass__.
If Python can't find
__metaclass__, it will look for a
__metaclass__ this.
Custom metaclasses
The main purpose of a metaclass is to change the class automatically,when it's created.
You usually do this for APIs, where you want to create classes matching thecurrent context.
Imagine a stupid example, where you decide that all classes in your moduleshould have their attributes written in uppercase. There are several ways todo this, but one way is to set
__metaclass__ at the module level.
This way, all classes of this module will be created using this metaclass,and we just have to tell the metaclass to turn all attributes to uppercase.
Luckily,
__metaclass__ can actually be any callable, it doesn't need to be aformal class (I know, something with 'class' in its name doesn't need to bea class, go figure... but it's helpful).
So we will start with a simple example, by using a function.
# the metaclass will automatically get passed the same argument# that you usually pass to `type`def upper_attr(future_class_name, future_class_parents, future_class_attclass Foo(): # global __metaclass__ won't work with "object" though # but we can define __metaclass__ here instead to affect only this class # and this will work with "object" children bar = 'bip'
Let's check:
hasattr(Foo, 'bar')Falsehasattr(Foo, 'BAR')TrueFoo.BAR'bip'
Now, let's do exactly the same, but using a real class for a metaclass:
# remember that `type` is actually a class like `str` and `int`# so you can inherit from itclass UpperAttrMetaclass(type): # __new__ is the method called before __init__ # it's the method that creates the object and returns it # while __init__ just initializes the object passed as parameter # you rarely use __new__, except when you want to control how the object # is created. # here the created object is the class, and we want to customize it # so we override __new__ # you can do some stuff in __init__ too if you wish # some advanced use involves overriding __call__ as well, but we won't # see this def __new__(upperattr_metaclass, future_class_name, future_class_parents, future_class_attrs):nothing): ...
That's it. There is really nothing more about metaclasses.
The reason behind the complexity of the code using metaclasses is not becauseof metaclasses, it's because you usually use metaclasses to do twisted stuffrelying on introspection, manipulating inheritance, vars such as
__dict__, etc.
Indeed, metaclasses are especially useful to do black magic, and thereforecomplicated stuff. But by themselves, they are simple:
- intercept a class creation
- modify the class
- return the modified class
Why would you use metaclasses classes instead of functions?
Since
__metaclass__ can accept any callable, why would you use a classsince it's obviously more complicated?
There are several reasons to do so:
- The intention is clear. When you read
UpperAttrMetaclass(type), you knowwhat's going to follow
- You can use OOP. Metaclass can inherit from metaclass, override parent methods. Metaclasses can even use metaclasses.
- Subclasses of a class will be instances of its metaclass if you specified a metaclass-class, but not with a metaclass-function.
-__.
- These are called metaclasses, damn it! It must mean something!
Why would you use metaclasses?
Now the big question. Why would you use some obscure error-prone feature?
Well, usually you don't:
Metaclasses are deeper magic that99% of users should never worry about it.If you wonder whether you need them,you don't (the people who actuallyneed them to know with certainty thatthey need them and don't need anexplanation about why).
Python Guru Tim Peters
The main use case for a metaclass is creating an API. A typical example of this is the Django ORM. It allows you to define something like this:
class Person(models.Model): name = models.CharField(max_length=30) age = models.IntegerField()
But if you do this:
person = Person(name='bob', age='35')print(person.age)
It won't return an
IntegerField object. It will return an
int, and can even take it directly from the database.
This is possible because
models.Model defines
__metaclass__ andit uses some magic that will turn the
Person you just defined with simple statementsinto a complex hook to a database field.
Django makes something complex look simple by exposing a simple APIand using metaclasses, recreating code from this API to do the real jobbehind the scenes.
The last word
First, you know that classes are objects that can create instances.
Well, in fact, classes are themselves instances. Of metaclasses.
class Foo(object): passid(Foo)142630324
Everything is an object in Python, and they are all either instance of classesor instances of metaclasses.
Except for
type.
type is actually its own metaclass. This is not something you couldreproduce in pure Python, and is done by cheating a little bit at the implementationlevel.
Secondly, metaclasses are complicated. You may not want to use them forvery simple class alterations. You can change classes by using two different techniques:
- monkey patching
- class decorators
99% of the time you need class alteration, you are better off using these.
But 98% of the time, you don't need class alteration at replace the class with something else entirely.:
def make_hook(f): """Decorator to turn 'foo' method into '__foo__'""" f.is_hook = 1 return fclass, mcls).__new__(mclass MyObject: __metaclass__ = MyTypeclass NoneSample(MyObject): pass# Will print "NoneType None"print type(NoneSample), repr(NoneSample)class Example(MyObject): def __init__(self, value): self.value = value def add(self, other): return self.__class__(self.value + other.value)# Will unregister the classExample.unregister()inst = Example(10)# Will fail with an AttributeError#inst.unregister()print inst + instclass Sibling(MyObject): passExampleSibling = Example + Sibling# ExampleSibling is now a subclass of both Example and Sibling (with no# content of its own) although it will believe it's called 'AutoClass'print ExampleSiblingprint ExampleSibling.__mro__
Note, this answer is for Python 2.x as it was written in 2008, metaclasses are slightly different in 3.x.print 'TestName = ', repr(TestName)# output => The Class Name is TestNameTheclass Foo(object): __metaclass__ = MetaSingletona = Foo()b = Foo()assert a is b | https://codehunter.cc/a/python/what-are-metaclasses-in-python | CC-MAIN-2022-21 | refinedweb | 2,183 | 57.47 |
PYTHON
Python Threading And Multiprocessing
Overview
Python supports threading. However, because of the GIL lock, only one thread is allowed to run at once. There Python threading supports concurrency, but not parallelism. This makes Python threading suitable for IO bound operations, but not for processor bound operations. Most threading functions are made available by adding the code
import threading to the top of your Python script.
Gracefully Exiting Multiple Threads
I followed this example at regexprn.com for the most part, except I discovered the example code was buggy and had to make some tweaks, as outlined below.
import time import threading DEBUG = 0 class MyThread1(threading.Thread): def __init__(self): threading.Thread.__init__(self) # A flag to notify the thread that it should finish up and exit self.kill_received = False def run(self): while not self.kill_received: self.do_something() def do_something(self): # Do your thread logic here! # Make sure to add some kind of pause as to not starve other threads time.sleep(5.0) # END | def do_something(self): # END | class MyThread1(threading.Thread): # @brief main() function for script. # @details Starts up the individual threads and controls their execution. def main(args): threads = [] #----- START THE THREADS -----# # Start the internet check thread print 'Starting MyThread1...' t = MyThread1() threads.append(t) t.start() print 'MyThread1 started...' # Start the knob control thread print 'Starting MyThread2...' t = MyThread2() threads.append(t) t.start() print 'MyThread2 started...' #----- MONITOR THE THREADS -----# while len(threads) > 0: if DEBUG: print 'len(threads) = ', len(threads) try: if DEBUG: print 'In try block.' # Join all threads using a timeout so it doesn't block # Filter out threads which have been joined or are None for i in range(len(threads)): # Make sure thread still exists if threads[i] is not None: if DEBUG: print 'Attemping to join()...' threads[i].join(1) if threads[i].isAlive() is False: if DEBUG: print 'isAlive() is False, removing thread from list...' threads.pop(i) if DEBUG: print 'Exiting try block...' except KeyboardInterrupt: print "Ctrl-c received! Sending kill to threads..." for t in threads: t.kill_received = True except Exception as e: print "Unknown exception caught! Sending kill to threads...", e for t in threads: t.kill_received = True print 'main() is returning...' if __name__ == '__main__': main(sys.argv)
Examples
The Columbus Radio project uses multiple Python threads for the UI control. The code is in the ColumbusRadio repo on GitHub. The threads should gracefully exit if Ctrl-C is pressed in the terminal while they are running. | https://blog.mbedded.ninja/programming/languages/python/python-threading/ | CC-MAIN-2021-17 | refinedweb | 413 | 70.09 |
In Part 1 of our series on how to write efficient code using NumPy, we covered the important topics of vectorization and broadcasting. In this part we will put these concepts into practice by implementing an efficient version of the K-Means clustering algorithm using NumPy. We will benchmark it against a naive version implemented entirely using looping in Python. In the end we'll see that the NumPy version is about 70 times faster than the simple loop version.
To be exact, in this post we will cover:
- Understanding K-Means Clustering
- Implementing K-Means using loops
- Using cProfile to find bottlenecks in the code
- Optimizing K-Means using NumPy
Let's get started!
Bring this project to life
Understanding K-Means Clustering
In this post we will be optimizing an implementation of the k-means clustering algorithm. It is therefore imperative that we at least have a basic understanding of how the algorithm works. Of course, a detailed discussion would also be beyond the scope of this post; if you want to dig deeper into k-means you can find several recommended links below.
What Does the K-Means Clustering Algorithm Do?
In a nutshell, k-means is an unsupervised learning algorithm which separates data into groups based on similarity. As it's an unsupervised algorithm, this means we have no labels for the data.
The most important hyperparameter for the k-means algorithm is the number of clusters, or k. Once we have decided upon our value for k, the algorithm works as follows.
- Initialize k points (corresponding to k clusters) randomly from the data. We call these points centroids.
- For each data point, measure the L2 distance from the centroid. Assign each data point to the centroid for which it has the shortest distance. In other words, assign the closest centroid to each data point.
- Now each data point assigned to a centroid forms an individual cluster. For k centroids, we will have k clusters. Update the value of the centroid of each cluster by the mean of all the data points present in that particular cluster.
- Repeat steps 1-3 until the maximum change in centroids for each iteration falls below a threshold value, or the clustering error converges.
Here's the pseudo-code for the algorithm.
I'm going to leave K-Means at that. This is enough to help us code the algorithm. However, there is much more to it, such as how to choose a good value of k, how to evaluate the performance, which distance metrics can be used, preprocessing steps, and theory. In case you wish to dig deeper, here are a few links for you to study it further.
Now, let's proceed with the implementation of the algorithm.
Implementing K-Means Using Loops
In this section we will be implementing the K-Means algorithm using Python and loops. We will not be using NumPy for this. This code will be used as a benchmark for our optimized version.
Generating the Data
To perform clustering, we first need our data. While we can choose from multiple datasets online, let's keep things rather simple and intuitive. We are going to synthesize a dataset by sampling from multiple Gaussian distributions, so that visualizing clusters is easy for us.
In case you don't know what a Gaussian distribution is, check it out!
We will create data from four Gaussian's with different means and distributions.
import numpy as np # Size of dataset to be generated. The final size is 4 * data_size data_size = 1000 num_iters = 50 num_clusters = 4 # sample from Gaussians data1 = np.random.normal((5,5,5), (4, 4, 4), (data_size,3)) data2 = np.random.normal((4,20,20), (3,3,3), (data_size, 3)) data3 = np.random.normal((25, 20, 5), (5, 5, 5), (data_size,3)) data4 = np.random.normal((30, 30, 30), (5, 5, 5), (data_size,3)) # Combine the data to create the final dataset data = np.concatenate((data1,data2, data3, data4), axis = 0) # Shuffle the data np.random.shuffle(data)
In order to aid our visualization, let's plot this data in a 3-D space.') ax.scatter(data[:,0], data[:,1], data[:,2], s= 0.5) plt.show()
It's very easy to see the four clusters of data in the plot above. This, for one, makes it easy for us to pick up a suitable value of k for our implementation. This goes in spirit of keeping the algorithmic details as simple as possible, so that we can focus on the implementation.
Helper Functions
We begin by initialising our centroids, as well as a list to keep track of which centroid each data point is assigned to.
# Set random seed for reproducibility random.seed(0) # Initialize the list to store centroids centroids = [] # Sample initial centroids random_indices = random.sample(range(data.shape[0]), 4) for i in random_indices: centroids.append(data[i]) # Create a list to store which centroid is assigned to each dataset assigned_centroids = [0] * len(data)
Before we implement our loop, we will first implement a few helper functions.
compute_l2_distance takes two points (say
[0, 1, 0] and
[4, 2, 3]) and computes the L2 distance between them, according to the following formula.
$$ L2(X_1, X_2) = \sum_{i}^{dimensions(X_1)} (X_1[i] - X_2[i])^2 $$
def compute_l2_distance(x, centroid): # Initialize the distance to 0 dist = 0 # Loop over the dimensions. Take squared difference and add to 'dist' for i in range(len(x)): dist += (centroid[i] - x[i])**2 return dist
The other helper function we implement is called
get_closest_centroid, the name being pretty self-explanatory. The function takes an input
x and a list,
centroids, and returns the index of the list
centroids corresponding to the centroid closest to
x.
def get_closest_centroid(x, centroids): # Initialize
Then we implement the function
compute_sse, which computes the SSE or Sum of Squared Errors. This metric is used to guide how many iterations we have to do. Once this value converges, we can stop training.
Main Loop
Now, let's write the main loop. Refer to the pseudo-code mentioned above for reference. Instead of looping until convergence, we merely loop for 50 iterations.
# Number of dimensions in centroid num_centroid_dims = data.shape[1] # List to store SSE for each iteration sse_list = [] tic = time.time() # Loop over iterations for n in range(num_iters): # Loop over each data point for i in range(len(data)): x = data[i] # Get the closest centroid closest_centroid = get_closest_centroid(x, centroids) # Assign the centroid to the data point. assigned_centroids[i] = closest_centroid # Loop over centroids and compute the new ones. for c in range(len(centroids)): # Get all the data points belonging to a particular cluster cluster_data = [data[i] for i in range(len(data)) if assigned_centroids[i] == c] # Initialize # Compute the SSE for the iteration sse = compute_sse(data, centroids, assigned_centroids) sse_list.append(sse)
The entire code can be viewed below.
Once we let the code run, let's check how we have performed according to the SSE through the iterations.
plt.figure() plt.xlabel("Iterations") plt.ylabel("SSE") plt.plot(range(len(sses)), sses) plt.show()
If we visualize the clusters, this is what we get.
fig = plt.figure() ax = fig.add_subplot(111, projection='3d') for c in range(len(centroids)): cluster_members = [data[i] for i in range(len(data)) if assigned_centroids[i] == c] cluster_members = np.array(cluster_members) ax.scatter(cluster_members[:,0], cluster_members[:,1], cluster_members[:,2], s= 0.5)
We see that k-means is able to discover all the clusters on its own. We will be using this code as our benchmark.
Timing the Code
Let us now time our code. We can use the
timeit module as we did in the last post. You could also use the
time module, although in our last post I advocated against it because measuring the running time for only a single run can lead to unstable estimates due to processes running in the background, especially when it comes to short snippets (often, one-liners).
However, it's important to note that the code for K-Means doesn't correspond to a short snippet. In fact, the body of code being considered here has a lot of short snippets being repeated multiple times in loops. This is exactly what we were doing with
timeit; running short snippets time and time again. Therefore, even the
time module works fine for our example.
In fact, this is what we are going to use here, as shown below.
import time tic = time.time() # code to be timed goes here toc = time.time() print("Time Elapsed Per Loop {:.3f}".format((tic - toc)/ 50))
When used for our code, the time taken for the loop version is about 0.053 seconds per loop. (Estimates may vary to the tune of 0.001 seconds).
Identifying Bottlenecks
We use a couple of methods to identify bottlenecks: inspection of the loop structure, and the use of
cProfile to measure the running time of various functions in our code.
Analyzing the Loop Structure
First, I want to focus on the body of the loop. Here's a rough outline of how loops are structured in the code.
Loop that goes over iteration (n iters) . . # Get centroids for each data point Loop that goes over the data (i iters) . . # Computing distance to centroids Loop over centroids (k iters) . . Loop over dimensions of data (d iters) . . # Getting members of the cluster Loop over centroids (c iters) . . # Computing new centroids Loop over dimensions of data (d iters) . . # Compute SSE Loop over data (i iters) . . # Compute L2 distance Loop over dimensions of data (d iters)
We use
n to iterate over the number of iterations,
i to iterate over the dataset,
c to iterate over the number of clusters, and
d to iterate over the dimensions of the data.
Here, the loops that will benefit the most from optimization are those that go over the data (
n corresponds to the size of our dataset, which could be very large) and dimensions (
d, in case we have very high-dimensional data, like images). In comparison, the loops that go over the number of clusters,
c, and the number of iterations,
n, may require only a small amount of iterations and may not benefit as much from optimization. Nonetheless, we will try to optimize those as well.
Also notice that the loop that is responsible for calculating the centroids for each data point is a triply nested loop, with individual loops going over the dataset size, number of clusters, and dimensions of the data, respectively. As we've discovered in the previous post, nested loops can be one of the biggest bottlenecks.
Using cProfile to Identify the Bottlenecks
While we could draw an outline detailing loop structure for our code, as it was still somewhat small, doing the same for large bodies of code can be pretty tedious.
Therefore, I am now going to introduce a tool called
cProfile that helps you profile your code and get running times for various elements in your code. In order to use cProfile, put your code inside a python script and run the following command from the terminal.
python -m cProfile -o loops.log kmeans_loop.py
Here,
-o flag denotes the name of the log file that will be created. In our case, this file is named as
loops.log. In order to read this file, we will be needing another utility called
cProfileV which can be installed with
pip.
pip install cprofilev
Once you have installed it, we can use it to view our log file.
cprofilev -f loops.log
This will result in a message that reads something like:
[cProfileV]: cProfile output available at
Go to the address in the browser. You will see something like this.
Here, you see column heading on the top regarding each function call in your code.
ncalls: Number of function calls made.
tottime: Total time taken by a function call including time taken by other function it calls.
percall: Time taken per call.
cumtime: Total time taken by the function alone not including time taken by function it calls.
filename/lineno: File name and Line number of the function call.
Here, we are most interesting in the time a function call takes by itself, i.e.
tottime. We click on the column label to sort the entries by
tottime
Once, we have done that we see exactly which functions are taking the most time.
Here, we see that that the function
compute_l2_distance and
get_closest_centroid take the maximum individual times. Notice that both of these function include loops which can be vectorized. The
compute_l2_distance is called 1 million times, while
get_closest_centroid is called 200,000 times.
We also see that the
<listcomp> (list comprehension function) and the
lambda functions also feature in the top time-taking functions. These functions are used in the lines
104, 57, 112 which includes loops for computing data points belonging to a particular cluster, computing the closest centroid and computing the new centroids respectively. We shall also attempt to vectorize these loops.
Optimizing K-Means Algorithm Using NumPy
In this section, we are going to take the implementation and optimize it using vectorization and Broadcasting.
Vectorizing Helper Functions
We first begin by vectorizing the function
compute_l2_distance. We modify it to accept inputs
x , an array of shape
(1, d) (d = 3 in our example) and
centroids to be of shape
(k, d) (k = 4 in our example).
Since
x is of shape
(1,3) and
centroids of shape
(4,3), the array
x will be broadcasted over the first dimension and we would have the resulting array of shape
(4,3) with each row representing the distance of array
x with each centroid. This vectorizes the loop over the number of centroids.
We then sum the resulting
(4,3) array along it's first dimensions (
sum(axis = 0)) so that we have an array of size
(3,) which contains the distance of
x from 3 centroids.
def compute_l2_distance_old(x, centroid): # Initialise the distance to 0 dist = 0 # Loop over the dimensions. Take sqaured difference and add to `dist` for i in range(len(x)): dist += (centroid[i] - x[i])**2 return dist def compute_l2_distance(x, centroid): # Compute the difference, following by raising to power 2 and summing dist = ((x - centroid) ** 2).sum(axis = 1)
Now, we turn our attention to the
get_closest_centroid function. Again, the inputs to this function would be
x , an array of shape
(1, d) (d = 3 in our example) and
centroids to be of shape
(k, d) (k = 4 in our example).
We replace the line
closest_centroid_index = min(range(len(centroid_distances)), key=lambda x: centroid_distances[x]) with
np.argmin(dist, axis = 0). Note that this
lambda call featured in one of top time-taking functions.
def get_closest_centroid_old(x, centroids): # Initialise = 0) return closest_centroid_index
Consequently, we can also vectorize the loop in
compute_sse function by invoking the
compute_l2_distance on the entire
data and
centroids array. def compute_sse(data, centroids, assigned_centroids): # Initialise SSE sse = 0 # Compute SSE sse = compute_l2_distance(data, centroids[assigned_centroids]).sum() / len(data) return sse
Note that the
data array is of shape
(N, 3) (
N = 4000). The array
centroids[assigned_centroids] helps us get a list of assigned centroids which earlier had to be computed using a loop with the line
centroid = centroids[assigned_centroids[i]].
Adjusting the Loop
In order to make the loop work with our modifications, we have to make some modifications to our code.
First,
centroids is now a array of shape
(4,3) (as compared to a list seen earlier). So we have to change its initialization from:
# Initialise the list to store centroids centroids = [] # Sample initial centroids random_indices = random.sample(range(data.shape[0]), 4) for i in random_indices: centroids.append(data[i])
To:
centroids = data[random.sample(range(data.shape[0]), 4)]
We also have to ensure our data
x is now in form of
[1,k] not
[k,]. To do this , we add the line
x = x[None, :] after reading our data
x = data[i].
We also modify the loop that computes the mean of the assigned centroids ( to compute new centroids) to much easier code from
for c in range(len(centroids)): # Get all the data points belonging to a particular cluster cluster_data = [data[i] for i in range(len(data)) if assigned_centroids[i] == c] # Initialise
To:
# Loop over centroids and compute the new ones. for c in range(len(centroids)): # Get all the data points belonging to a particular cluster cluster_data = data[assigned_centroids == c] # Compute the average of cluster members to compute new centroid new_centroid = cluster_data.mean(axis = 0) # assign the new centroid centroids[c] = new_centroid
Once this is done, our new code for K-Means looks like:
Timing the new code
When we time the vectorized code, we get a time of about 0.031 seconds (+/- 0.001 sec). Well, it's a speed up, but isn't it disappointing? To be honest, A speed up of just 1.7x seems to be a big let off given what we saw in Part 1 of one. It seems that maybe vectorization doesn't work that well at all?
Let's use the profiler to see what the time taking functions are.
Again, the functions that take most time (excluding internal functions) are
compute_l2_distance and
get_closest_centroid. While we vectorized them, a more problematic aspect is them being called a huge number of times. While the
compute_l2_distance is now called 200,000 times (instead of 1M due to vectorizing loop over centroids), the number of calls made to
get_closest_centroid remains the same.
These large number of calls are because we are still using a loop to go over our entire dataset, which has many many (
n = 4000) iterations. In contrast, The loops we have vectorized mostly go over the dimensions (= 3) or centroids (=4) and have comparatively lesser iterations. If we can vectorize the loop over data, we can gain considerable speed ups.
Vectorizing Loop Over Data
In order to think about vectorizing the loop over data, we revisit our visualizing loops as arrays logic from part 1. In our code, the bottleneck basically comes from measure distance between a given data point with all the centroids, for each data point in the dataset.
Let us visualize a 3-D array of dimensions
[n, c, d]. It's
[i, j, k] element represent the distance between $k_{th}$ dimension of the $i^{th}$ data point and the $j_{th}$ centroid.
Operating on this array enables us to perform one iteration of K-Means without a loop, enabling us to get rid of the loop over data.
Such an array can be easily created using broadcasting. We have our data as a
[4000, 3] array and a centroids as a
[4,3] array. To broadcast them, we can reshape
data and
centroids to
[4000, 1, 3] and
[1, 4, 3] respectively. Passing these resized arrays to
compute_l2_distance results in the formation of the array we talked about above, with the shape
[4000, 4, 3].
This can be done simply as:
closest_centroids = get_closest_centroid(data[:, None, :], centroids[None,:, :])
Now that we are dealing with 3-D
data and
centroid arrays, we must make minor modifications to the helper functions (the
axis along which
sum and
min has to be computed).
def compute_l2_distance(x, centroid): # Compute the difference, following by raising to power 2 and summing dist = ((x - centroid) ** 2).sum(axis = 2) # change of axis return dist = 1) # change of axis return closest_centroid_index
Once this is done, we can write our main code without the loop going over dataset.
# Number of dimensions in centroid num_centroid_dims = data.shape[1] # List to store SSE for each iteration sse_list = [] # Main Loop for n in range(50): # Get closest centroids to each data point assigned_centroids = get_closest_centroid(data[:, None, :], centroids[None,:, :]) # Compute new centroids for c in range(centroids.shape[1]): # Get data points belonging to each cluster cluster_members = data[assigned_centroids == c] # Compute the mean of the clusters cluster_members = cluster_members.mean(axis = 0) # Update the centroids centroids[c] = cluster_members # Compute SSE sse = compute_sse(data.squeeze(), centroids.squeeze(), assigned_centroids) sse_list.append(sse)
The full entire code can be found here:
Timing the code
Let's time the code now. Using the
time module, we get an estimated running time of 0.00073 seconds (+/- 0.00002 seconds) per iteration! Now that's a speed up of 72x for the optimized code!
Looking the profiler log of the code:
We see that
compute_l2_distance is now only called 100 times, and hence the
tottime decreases considerably.
get_closest_centroids is only called 50 times and is further down the list.
Conclusion
That brings us to the end of this post! If you were to take anything from this post, it is that merely vectorizing code is not enough; we must also look at which loops have the largest number of iterations. This reminds me of the famous line from George Orwell's Animal Farm:
For loops that have a small number of iterations, you can even leave them be to improve code readability. Of course, there is a trade-off in this case between optimization and readability. That is something I plan to discuss in the future parts of this series, along with concepts like in-place ops, reshaping, transposing, and other options beyond NumPy. Until then, I'll leave you with this key take away message from this post.
Add speed and simplicity to your Machine Learning workflow today | https://blog.paperspace.com/speed-up-kmeans-numpy-vectorization-broadcasting-profiling/ | CC-MAIN-2022-27 | refinedweb | 3,565 | 63.39 |
On Tue, Jul 28, 2009 at 05:37:51PM +0200, Jerome Marchand wrote: > Vivek Goyal wrote: > > On Tue, Jul 28, 2009 at 04:29:06PM +0200, Jerome Marchand wrote: > >> Vivek Goyal wrote: > >>> wasn't clear enough. I meant the class of the process as set by ionice, not > >> the class of the cgroup. That is, of course, only an issue when using CFQ. > >> > >>>. > >>. > > OK. That's how I understood it, but I wanted your confirmation. > > > > > You mentioned that RT and BE task are getting fair share but not IDLE > > task. This is a bug and probably I know where the bug is. I will debug it > > and fix it soon. > > I've tested it with the last version of your patchset (v6) and the problem > was less acute (the IDLE task got about 5 times less time that RT and BE > against 50 times less with v7 patchset). I hope that helps you. Hi Jerome, Can you please try attached patch. It should fix the issue of group in which idle task is running is not getting its fair share. The primary issue here is that for such groups, we were not doing group idle, which will lead to queue and group deletion immediately after dispatching one request and it will not get its fair share. Attached patch should fix the problem. Thanks Vivek --- block/cfq-iosched.c | 5 ++- block/elevator-fq.c | 70 ++++++++++++++++++++++++++++++++++++---------------- block/elevator-fq.h | 1 3 files changed, 54 insertions(+), 22 deletions(-) Index: linux8/block/elevator-fq.c =================================================================== --- linux8.orig/block/elevator-fq.c 2009-07-27 18:18:49.000000000 -0400 +++ linux8/block/elevator-fq.c 2009-07-28 14:30:08.000000000 -0400 @@ -1212,6 +1212,30 @@ io_group_init_entity(struct io_cgroup *i entity->my_sched_data = &iog->sched_data; } +/* Check if we plan to idle on the group associated with this queue or not */ +int elv_iog_should_idle(struct io_queue *ioq) +{ + struct io_group *iog = ioq_to_io_group(ioq); + struct elv_fq_data *efqd = ioq->efqd; + + /* + * No idling on group if group idle is disabled or idling is disabled + * for this group. Currently for root group idling is disabled. + */ + if (!efqd->elv_group_idle || !elv_iog_idle_window(iog)) + return 0; + + /* + * If this is last active queue in group with no request queued, we + * need to idle on group before expiring the queue to make sure group + * does not loose its share. + */ + if ((elv_iog_nr_active(iog) <= 1) && !ioq->nr_queued) + return 1; + + return 0; +} + static void io_group_set_parent(struct io_group *iog, struct io_group *parent) { struct io_entity *entity; @@ -2708,6 +2732,10 @@ static inline int is_only_root_group(voi { return 1; } + +/* No group idling in flat mode */ +int elv_iog_should_idle(struct io_queue *ioq) { return 0; } + #endif /* GROUP_IOSCHED */ /* Elevator fair queuing function */ @@ -3308,12 +3336,18 @@ void __elv_ioq_slice_expired(struct requ if (time_after(ioq->slice_end, jiffies)) { slice_unused = ioq->slice_end - jiffies; if (slice_unused == entity->budget) { - /* - * queue got expired immediately after - * completing first request. Charge 1/2 of - * time consumed in completing first request. + /* Queue got expired immediately after completing + * first request. It happens with idle class queues + * as well as can happen with closely cooperating + * queues or with queues for which idling is not + * enabled. + * + * Charge the full time since slice was started. This + * will include the seek cost also on rotational media. + * This is bit unfair but don't know what's the better + * way to handle such cases. */ - slice_used = (slice_used + 1)/2; + slice_used = jiffies - ioq->slice_start; } else slice_used = entity->budget - slice_unused; } else { @@ -3686,7 +3720,8 @@ void *elv_fq_select_ioq(struct request_q /* * The active queue has run out of time, expire it and select new. */ - if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq)) { + if ((elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq)) + && !elv_ioq_must_dispatch(ioq)) { /* * Queue has used up its slice. Wait busy is not on otherwise * we wouldn't have been here. If this group will be deleted @@ -3711,9 +3746,7 @@ void *elv_fq_select_ioq(struct request_q * from queue and is not proportional to group's weight, it * harms the fairness of the group. */ - if ((elv_iog_nr_active(iog) <= 1) && !ioq->nr_queued - && !elv_iog_wait_busy_done(iog) && efqd->elv_group_idle - && elv_iog_idle_window(iog)) { + if (elv_iog_should_idle(ioq) && !elv_iog_wait_busy_done(iog)) { ioq = NULL; goto keep_queue; } else @@ -3893,12 +3926,6 @@ void elv_ioq_completed_request(struct re elv_clear_ioq_slice_new(ioq); } - if (elv_ioq_class_idle(ioq)) { - if (elv_iosched_expire_ioq(q, 1, 0)) - elv_ioq_slice_expired(q); - goto done; - } - /* * If there is only root group present, don't expire the queue * for single queue ioschedulers (noop, deadline, AS). It is @@ -3919,14 +3946,14 @@ void elv_ioq_completed_request(struct re * mean seek distance, give them a chance to run instead * of idling. */ - if (elv_ioq_slice_used(ioq)) { + if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq)) { /* This is the last empty queue in the group and it * has consumed its slice. If we expire it right away * group might loose its share. Wait for an extra * group_idle period for a request before queue * expires. */ - if ((elv_iog_nr_active(iog) <= 1) && !ioq->nr_queued) { + if (elv_iog_should_idle(ioq)) { elv_iog_arm_slice_timer(q, iog, 1); goto done; } @@ -3943,8 +3970,10 @@ void elv_ioq_completed_request(struct re goto done; /* Expire the queue */ - if (elv_iosched_expire_ioq(q, 1, 0)) + if (elv_iosched_expire_ioq(q, 1, 0)) { elv_ioq_slice_expired(q); + goto done; + } } else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq) && sync && !rq_noidle(rq)) elv_ioq_arm_slice_timer(q); @@ -3953,9 +3982,8 @@ void elv_ioq_completed_request(struct re * If this is the last queue in the group and we did not * decide to idle on queue, idle on group. */ - if (elv_active_ioq(q->elevator) && !ioq->nr_queued && - !ioq->dispatched && !timer_pending(&efqd->idle_slice_timer) - && (elv_iog_nr_active(iog) <= 1)) { + if (elv_iog_should_idle(ioq) && !ioq->dispatched + && !timer_pending(&efqd->idle_slice_timer)) { /* * If queue has used up its slice, wait for the * one extra group_idle period to let the group Index: linux8/block/elevator-fq.h =================================================================== --- linux8.orig/block/elevator-fq.h 2009-07-24 16:09:04.000000000 -0400 +++ linux8/block/elevator-fq.h 2009-07-28 13:18:00.000000000 -0400 @@ -695,6 +695,7 @@ extern int elv_nr_busy_ioq(struct elevat extern int elv_rq_in_driver(struct elevator_queue *e); extern struct io_queue *elv_alloc_ioq(struct request_queue *q, gfp_t gfp_mask); extern void elv_free_ioq(struct io_queue *ioq); +extern int elv_iog_should_idle(struct io_queue *ioq); #else /* CONFIG_ELV_FAIR_QUEUING */ Index: linux8/block/cfq-iosched.c =================================================================== --- linux8.orig/block/cfq-iosched.c 2009-07-24 16:08:58.000000000 -0400 +++ linux8/block/cfq-iosched.c 2009-07-28 13:54:51.000000000 -0400 @@ -1007,10 +1007,13 @@ static int cfq_dispatch_requests(struct /* * expire an async queue immediately if it has used up its slice. idle * queue always expire after 1 dispatch round. + * + * Also do not expire the queue if we plan to do group idling on it. + * In that case it will be expired later. */ if (elv_nr_busy_ioq(q->elevator) > 1 && ((!cfq_cfqq_sync(cfqq) && cfqq->slice_dispatch >= cfq_prio_to_maxrq(cfqd, cfqq)) || - cfq_class_idle(cfqq))) { + (cfq_class_idle(cfqq) && !elv_iog_should_idle(cfqq->ioq)))) { cfq_slice_expired(cfqd); } | http://www.redhat.com/archives/dm-devel/2009-July/msg00264.html | CC-MAIN-2015-14 | refinedweb | 1,066 | 57.27 |
Diesel
Diesel is a Swift library to write recursive descent parsers for domain specific languages (DSLs), using parser combinators. Like Parsec and other similar parser combinator libraries, Diesel lets you build sophisticated parsers by combining simpler ones.
TL;DR;
The following example defines a parser for IPv4 addresses.
// A digit is a character representing a number. let digit = character(satisfying: { $0.isNumber }) // An octet is a sequence of 1, 2 or 3 digits, converted to an integer. let octet = digit.then(digit.optional.repeated(count: 2)) .map { head, tail in Int(String([head] + tail.compactMap { $0 }))! } // An IPv4 address is a sequence of 4 octets, separated by dots. let ipv4 = octet.then((character(".").then(octet) { _, r in r }).repeated(count: 3)) .map { head, tail in [head] + tail } print(ipv4.parse("192.168.1.1")) // Prints `success([192, 168, 1, 1], "")`
More elaborate examples can be found in the
Examples/ folder.
Motivation
A parser can be understood as a function of the form
(Stream) -> (Element, Stream)
that attempts to extract a valid output out a a given stream.
If it succeeds, it returns said output,
together with an "updated" stream, corresponding to the remainder of the input.
For example, consider the task of reading a single digit out of a character string.
Such a parser could be implemented as a function
(String) -> (Character, String),
that either successfully reads a digit from the beginning of the string, or returns
nil.
In more concrete terms, it could be wrote as follows:
func parseDigit(from string: String) -> (Character, String)? { guard let character = string.first, character.isNumber else { return nil } return (character, String(string.dropFirst())) } print(parseDigit(from: "123")!) // Prints `("1", "23")`
One advantage of this approach is that is that parsers (i.e. parsing higher-order functions) can be combined to create other parsers. For example, one could create a parser for two-digit numbers by reusing the above function twice, feeding the result of its first application to a second one:
func parseTwoDigits(from string: String) -> ((Character, Character), String)? { return parseDigit(from: string).flatMap { (first, remainder) in parseDigit(from: remainder).map { (second, remainder) in ((first, second), remainder) } } } print(parseTwoDigits(from: "123")!) // Prints `(("1", "2"), "3")`
Notice that combining two applications of
parseDigit is slightly more complex than a simple function composition,
as one must cater for cases where the first application does not succeeds.
Fortunately, the boilerplate involved in such combination can be written implemented as one single combinator. A combinator is a higher-order function that accepts one or several parsers to produce a new one.
For instance, we can write a combinator to chain two parsers as follows:
func chain<T, U>( _ first: @escaping (String) -> (T, String)?, _ second: @escaping (String) -> (U, String)?) -> (String) -> ((T, U), String)? { return { string in first(string).flatMap { arg0 in second(arg0.1).map { arg1 in ((arg0.0, arg1.0), arg1.1) } } } } print(chain(parseDigit, parseDigit)("123")!) // Prints `(("1", "2"), "3")`
Diesel embraces this principle, and proposes a collection of combinators
to build more sophisticated from simpler ones.
In diesel, a parser is an object that conforms to a protocol
Parser,
which requires a method
parse(:) representing parser.
All combinators are proposed in the form of properties and methods of a
Parser object.
Note that instead of returning optional values,
Diesel parsers are expected to return a case of the enum type
ParseResult.
This allows to attach optional diagnostics to parse failures,
so as to provide debug information for example:
struct DigitParser: Parser { func parse(_ stream: Substring) -> ParseResult<Character, Substring> { guard let character = stream.first else { return .error(diagnostic: "unexpected empty stream") } guard character.isNumber else { return .error(diagnostic: "expected digit, got '\(character)'") } return .success(character, stream.dropFirst()) } } let parseDigit = DigitParser() let parseTwoDigits = parseDigit.then(parseDigit) print(parseTwoDigits.parse("123")) // Prints `success(("1", "2"), "3")`
Installation
Diesel is distributed in the form of a Swift package for Swift Package Manager. Simply add Diesel as a dependency to your own package and you'll be ready to go. There are no other dependencies.
The master branch of Diesel always refers to the latest stable version of the library,
so using
.branch("master") guarantees that you'll always pull the latest version.
See the official documentation of Swift Package Manager for alternative configurations.
License
Diesel is distributed under the MIT License.
Github
You may find interesting
Dependencies
Used By
Total: 0 | https://swiftpack.co/package/kyouko-taiga/Diesel | CC-MAIN-2020-05 | refinedweb | 726 | 50.94 |
🙂
And I want to especially thank everyone who reads my blatherings, I appreciate it.
Larry,
Congrats on both of your anniversaries, and believe me when I say the pleasure is all mine when it comes to reading your blatherings. Although I was already reading your blog before it and will continue well after it’s over with, the series on concurrency has been fascinating to me. I am more or less fresh out of college, and my classes didn’t touch on the subject very frequently or in much depth, so I feel lucky to have benefited from your experience. This is a really great blog; it’s on a short list of blogs that I can stand to read every weekday. Thanks again for your time spent blogging 🙂
Justin
Just to let you know I have enjoyed and appreciated all of your posts
Larry,
Happy anniversary !
Thanks so much for your time and efforts to maintain this blog. It’s definitely very high on my ‘Blogs I read’ list.
Serge.
I hate to agree with Justin — your blog is on my list of "must reads"….
This is the first time I’ve left comments on a blog, and I just had to give my two cents here as well – your blog has been a great treat to read the past year, and I’m looking forward to (hopefully) many more posts :). Thanks!
Congrats!
I have 7 blogs in my "core" section in sharp reader and yours is one of them…
And yes, I have more than 7 blogs I read in total 🙂
Congrats Larry!
#undef PREVIOUS_POSTS_H
#include <previous_posts/*.h>
Keep up the entertaining and insightful job… as long as you feel like it and not a clock cycle more.
Congrats from Brazil.
Thanks for your effort in keep the posts coming.
Thanks Larry, it’s been a great read.
I second Justin’s comments. Along with Raymond Chen’s blog and Bruce Schneier’s, yours is on my bookmarks toolbar, as it’s consistently useful, interesting and informative.
As I’m a student, weekends and weekdays tend to blur together… on the weekends, I miss reading you & Raymond articles!
Thanks for all the work!
Congratulations and thank you for all the interesting and informative postings here. 🙂
Same as above! 🙂
Congrats Larry,
Like the others said you are in a special section of my feed reader, something to be read and looked forward to everyday.
Conragts and keep’m comming
I’m one of the countless lurkers in this blog. The occasion warrants raising a voice: Great, really great work, Larry! You MUST keep it on. It’s in my TOP-3 list (and I read about 40 MSDN blogs). Always informative, authoritative on technical matters, and somehow engaging on the (few) more personal ones. Plus, it’s one of the things that gave me the last few bits of inspiration and courage to become an active Microsoft job applicant. 😉
Well, this year I didn’t miss the anniversary of my first blog post.
I still can’t quite believe it’s… | https://blogs.msdn.microsoft.com/larryosterman/2005/03/23/missed-anniversaries/ | CC-MAIN-2017-22 | refinedweb | 512 | 72.36 |
When building high-performance software, you need to make sure you have two things to start you off: solid architecture and streamlined code. When your code runs efficiently, you are not only able to reduce resource consumption and completion time, but effectively evaluate the quality of your software. Here, I will share some ways that you can do this.
In the last article, I discussed how building streamlined software is a lot like building race cars. When we look at a fast car, it might be the sleek design and chassis that initially grabs our attention, but, as any enthusiast knows, you need to open the bonnet to see what really makes it tick. The engine is, after all, the true power behind any car. It’s a complex combination of different components, where even the smallest misplaced bolt can have a massive impact on the overall performance of the vehicle.
In a software system, the engine is represented by the code. Each little section comes together to make it operate and, if one piece of code is poorly optimised, often the whole system will feel slow as a result.
What optimising code means and why it’s important
Code optimisation is the process of trying to find the fastest way in which an operation or problem can be resolved by computation time.
It does not look for the shortest, easiest or even the simplest solution, but the one that allows for execution in the fastest amount of time, using the least amount of memory.
The optimisation of code can be quite an exhaustive discussion, but to at least get the juices flowing, I would like to suggest a few things that you can look at to help your team identify what could provide the biggest performance gains. These include:
- Using the most efficient algorithms
- Optimising your memory usage
- Using functions wisely
- Optimising arrays
- Using powers of two for multidimensional arrays
- Optimising loops
- Working on data structure optimisation
- Identifying the most appropriate search type
- Being careful with the use of operators
- Using tables versus recalculating
- Carefully considering data types
Disclaimer:
Much like car parts are not all interchangeable with each other, some of the solutions that I suggest may not work as effectively with your software systems. Different programming languages and compilers work differently in how they convert and optimise functions. So, I would suggest, like with everything, you do some research, monitor the results and stick with what works best for you.
Use the most efficient algorithm
Speed, simply put, is about CPU processing. Ideally, what you want to do when optimising any algorithm is figure out which decision tree or branch logic will require the least number of options to work through – or, more specifically, the least CPU time.
How to do this is quite complex as there are so many different ways to apply algorithms, depending on the solution you are trying to reach. This article explains how to think about algorithms quite well.
I’ve found that the easiest way to evaluate and measure this, is to break up each of the processing aspects of your code, apply the following code (I’ve used C++ here, but you can change this for whatever language you prefer), and measure which one performs the fastest:
LARGE_INTEGER BeginTime; LARGE_INTEGER EndTime; LARGE_INTEGER Freq; QueryPerformanceCounter(&BeginTime); // your code QueryPerformanceCounter(&EndTime); QueryPerformanceFrequency(&Freq); printf("%2.3f\n",(float) ((EndTime.QuadPart - BeginTime.QuadPart) *1000000)/Freq.QuadPart);
The above will effectively output the execution speed of a given line of code, which will help you to measure it effectively. It will also provide you with a platform to experiment and find the best combination of algorithms for your code.
Optimise your code for memory
It is important to know how your compiler manages its memory and programs. Knowing this can prevent your code from utilising too much memory and, thereby, potentially slowing down other aspects of computer processing.
This is especially important for graphically heavy applications, like video games. In these cases, processors are required to work with complex algorithms to generate the CGI images and, how you utilise your memory will make a massive difference in the overall performance of the final product.
Tip: You can use monitoring tools (like Zabbix) to help achieve this.
Use functions wisely
Functions, or shared code that can be called multiple times, are utilised to make code more modular, maintainable and scalable. However, if you are not careful when using them, functions can create some performance bottlenecks, especially if applied to recursion (functions called in a repeated loop).
While functions certainly make coding shorter, repeatedly calling a function to do a similar thing ten times is unnecessary expenditure on your memory and CPU utilisation.
Tip: To do this better, you should implement the repetition directly in your function as this will require less processing. I’ve set up a few examples to show this later in the article.
Inline functions
Another thing to consider is how you use inline functions While these are often used to ease some of the processing restrictions on your CPU, there are other ways of reducing strain on your processor. For instance, for smaller functions, you can make use of macros instead, as these allow you to benefit from speed, better organisation and reusability.
Additionally, when passing a big object to a function, you could use pointers or references, as these provide better memory management. I personally prefer to use references because they create code that is way easier to read. They are also useful things to use when you are not worried about changing the value that is passed to the function. If you use an object that is constant, it could be useful to use
const, which will save some time.
Optimise arrays
The array is one of the most basic data structures that occupies memory space for its elements.
An array’s name is a constant pointer that points at the first element of an array. This means that you could use pointers and pointer arithmetic because of the operations with pointers.
In the example below, we have a pointer to the
int data type that takes the address from the name of the array. In this case, it is
nArray, and we increase that address for one element. The pointer is moved toward the end of the array for the size of the
int data type.
for(int i=0; i<n; i++) nArray[i]=nSomeValue;
Instead of the above code, the following is better:
for(int* ptrInt = nArray; ptrInt< nArray+n; ptrInt++) *ptrInt=nSomeValue;
If you have used double, your compiler will know how far it should move the address.
It may be harder to read code this way, but it will increase the speed of your program. It’s not the most efficient algorithm that you could use, but the syntax is better and this means the code will run faster.
Using matrices
If you use a matrix, and you have the chance to approach the elements of the matrix row by row, always choose this option as this is the most natural way to approach the array members.
Tip: Avoid initialisation of large portions of memory with some elements. If you can’t avoid this type of situation, consider
memset or similar commands.
Use powers of two for multidimensional arrays
A multidimensional array is used to store data that can be referenced across two or three different sets of axis. It makes storing and referencing data easier. If we can perform faster searching, we can save a lot of time in our code, especially when working with large amounts of data.
The advantage of using powers of two for all but the leftmost array size comes when accessing the array. Ordinarily, the compiled code would have to compute a ‘multiply’ to get the address of an indexed element from a multidimensional array, but most compilers will replace a constant multiply with a shift if it can. Shifts are ordinarily much faster than multiplies.
Optimise loops
We utilise loops, or repeated sequences, to sort through or iterate on data to perform actions when required. While this sort of recursion is extremely helpful in specific scenarios, most of the time, it generates slow performing code.
Tip: If possible, try to reduce your reliance on loops. You should only really use them if they are needed multiple times, and contain multiple operations within them. Otherwise, if you need to iteratively sort through something, use another type of sorting algorithm to reduce your processing time.
Work on data structure optimisation
Not all data is equal and so, we need to structure data appropriately for our intended solution. Like with most things, data has a big impact on code performance, and so, the way you structure the data you need in your code will play a big part in enhancing its speed.
Tip: Keeping your data in a list means that you can very easily create a program that will outperform one that has been created using an array. Additionally, if you save your data in some form of a tree, you can create a program that will perform faster than one that doesn’t have adequate data structure.
Identify the most appropriate search type: Binary Search or Sequential Search
One of the most common tasks you do when programming is search for some value in data structures. However, you can’t just apply the same searching principles to different data structures. Rather, you should spend some time identifying the most appropriate approach for what we require.
For example, if you are trying to find one number in an array of numbers you could have two strategies:
- Sequential Search: The first strategy is very simple. You have your array and value you are looking for. From the beginning of the array, you start to look for the value and, if you find it, you stop the search. If you don’t find the value, you will be at the end of the array. There are many improvements to this strategy.
- Binary Search: The second strategy requires the array to be sorted. If an array is not sorted, you will not get the results that you’re looking for. If the array is sorted, you split it into two halves. In the first half, the elements of the array are smaller than the middle one. In the other half, the elements are bigger than the middle one. If you find yourself in this situation, where two markers are not situated the way they should be, you know that you don’t have the value you have been looking for.
Sorting through the elements of an array will cost you some time, but if you’re willing to do that, you’ll benefit from faster binary search.
Be careful with the use of operators
Most basic operations, like +=, -=, and * =, when applied to basic data types can slow down your program because they place unnecessary computation on your processor. To be sure that things aren’t getting slowed down, you will need to know how they get transformed into assembler on your computer.
Tip: An interesting way to do this is to replace the postfix increment and decrement with their prefix versions.
Sometimes you can use the operators >> or << instead of multiplication or division, but be careful, because you could end up with mistakes. When you attempt to fix these, you could inadvertently add some range estimations, making the code you started with way slower.
Bit operators and the tricks that go with them could increase the speed of the program, but you should be very careful because you could end up with machine dependent code, which you want to avoid.
Use tables versus recalculating
Often in coding, we need to perform some sort of complex calculation. When dealing with calculations, we can either perform the calculation directly in the code or make use of a table to reference and save on the processing time.
Tables are often easier to work with and the simplest solution to code, but they don’t always scale well.
Remember that in recalculating, you have the potential to use parallelism, and incremental calculation with the right formulations. Tables that are too large will not fit in your cache and, hence, may be slow to access and cannot be optimised further. Much like we mentioned above when discussing data structures, tables should be used with caution.
Carefully consider your data types
When we assign data to variables in code, we often allocate a size to it. This is the amount of memory space the computer makes available when working with this memory. The larger the program, the more it may utilise data. You want to try and use as little data as possible.
On modern 32 and 64-bit platforms, small data types like chars and shorts actually incur extra overhead when converting to and from the default machine word-size data type.
Tip: Be specific about the data that you are using. Utilise chars for small counters, shorts for slightly larger counters and only use longs or ints when you really have to.
On the other hand, one must be wary of cache usage. Using packed data (and in this vein, small structure fields) for large data objects may pay larger dividends in global cache coherence, than local algorithmic optimisation issues.
A Caveat
While finding the most optimal coding solution is ideal, it doesn’t always mean it’s the best way to go about solving problems. Some of the below points are also worth considering:
- Optimising your code for performance using all possible techniques might generate a bigger file with bigger memory footprint.
- You might have two different optimisation goals that conflict with each other. For example, to optimise the code for performance might conflict with optimising the code for less memory footprint and size. You have to find a balance.
- Performance optimisation is a never-ending process; your code might never be fully optimised. There is always more room for improvement to make your code run faster.
- Sometimes you can use certain programming tricks to make code run faster at the expense of not following best practices. Try to avoid implementing cheap tricks, though as this will not pay off long term.
An optimisation test:
To test how well you’ve understood the different optimisation techniques discussed above, here is a coding solution for you have a look at and identify what can be optimised:
Example code (in C++):
#include <iostream> #define LEFT_MARGIN_FOR_X -100.0 #define RIGHT_MARGIN_FOR_X 100.0 #define LEFT_MARGIN_FOR_Y -100.0 #define RIGHT_MARGIN_MARGIN_FOR_X*LEFT_MARGIN_FOR_X+LEFT_MARGIN_FOR_Y*LEFT_MARGIN_FOR_Y)/ (LEFT_MARGIN_FOR_Y*LEFT_MARGIN_FOR_Y+dB); double dMaximumX = LEFT_MARGIN_FOR_X; double dMaximumY = LEFT_MARGIN_FOR_Y; for(double dX=LEFT_MARGIN_FOR_X; dX<=RIGHT_MARGIN_FOR_X; dX+=1.0) for(double dY=LEFT_MARGIN_FOR_Y; dY<=RIGHT_MARGIN; }
Optimising your code is key in ensuring that your engine is running efficiently. However, to ensure that your system stays that way, you’re going to need to test it continuously. We’ll cover this in the final installment of this series.! | https://www.offerzen.com/blog/sedan-to-supercar-part-2-code-optimisation | CC-MAIN-2019-43 | refinedweb | 2,509 | 58.72 |
Very new to CircuitPython, sorry for the multitude of questions. I've gotten some great advice so far.
Trying to get Trinket MO to do the footswitch thing
Struggling with this code I got from
Two errors show up: I think the main one is "from board import *" results in " unable to detect undefined names"
The other error is [D0] is undefined, or defined from star imports:board
I was hoping that this code would be error free, but it doesn't seem to work for me. Any ideas?
- Code: Select all | TOGGLE FULL SIZE
import digitalio
from board import *
import time
from adafruit_hid.keyboard import Keyboard
from adafruit_hid.keycode import Keycode
# A simple neat keyboard demo in circuitpython
buttonpins = [D0]
# The keycode sent for each button, optionally can be paired with a control key
buttonkeys = [44]
controlkey = Keycode.LEFT_CONTROL
# the keyboard object!
kbd = Keyboard()
# our array of button objects
buttons = []
# make all pin objects, make them inputs w/pullups
for pin in buttonpins:
button = digitalio.DigitalInOut(pin)
button.direction = digitalio.Direction.INPUT
button.pull = digitalio.Pull.UP
buttons.append(button)
print("Waiting for button presses")
while True:
# check each button
for button in buttons:
if (not button.value): # pressed?
i = buttons.index(button)
print("Button #%d Pressed" % i)
# type the keycode!
k = buttonkeys[i] # get the corresp. keycode
kbd.press(k)
# Use this line for key combos kbd.press(k, controlkey)
kbd.release_all()
while (not button.value):
pass # wait for it to be released!
time.sleep(0.01) | https://forums.adafruit.com/viewtopic.php?f=52&p=723840 | CC-MAIN-2019-09 | refinedweb | 249 | 56.76 |
Opened 7 years ago
Closed 6 years ago
#15255 closed Bug (fixed)
DB Cache table creation (createcachetable) ignores the DB Router
Description
When using database-based cache (using django.core.cache.backends.db.DatabaseCache cache backend), one uses the createcachetable command to create the table. This command completely ignores any database routers (in multiple databases situation) which are installed in the settings file. Instead, one can specify a target database with the --database option.
One could argue that this is not a bug, but rather a feature (undocumented one, none the less). However, the DatabaseCache class itself does use the installed router. This creates a confusion: if the router routes to a different database, one may install the table on the wrong database, but will not be able to access it.
I believe the best approach is to follow the syncdb convention: the cache table should not be created when the router returns false form the allow_syncdb method. The table will only be created when the matching DB is specified (or for the default if it is relevant).
Attachments (5)
Change History (21)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
Attached patch resolves the problem described above.
While writing it, I noticed that
createcachetable could be much more automatic. Currently the docs at suggest first creating the table, then adding the cache backend settings. Couldn't we do the opposite: first add the cache backend settings, then create the table?
createcachetable would go through the cache backends, find all instances of BaseDatabaseCache, and create the tables with the appropriate name, like this:
from django.conf import settings from django.core.cache import get_cache for cache_alias in settings.CACHES: cache = get_cache(cache_alias) if isinstance(cache, BaseDatabaseCache): tablename = cache._table ... create the table ...
What do you think? Should I create a different ticket for this?
Changed 7 years ago by
comment:3 Changed 7 years ago by
comment:4 Changed 6 years ago by
Patch looks correct, but it needs tests.
And yeah, I really like the idea of
createcachetable inspecting
settings.CACHES, but please do take that to another ticket.
comment:5 Changed 6 years ago by
By the way, there are tests for
createcachetable in
regressiontests/cache/tests.py; we just need a new test method that tests it with the multidb flag. The test suite should already have an "other" database definition for you to test against.
comment:6 Changed 6 years ago by
New patch includes a test case.
comment:7 Changed 6 years ago by
New patch uses the new @override_settings decorator and remove setUp / tearDown.
comment:8 Changed 6 years ago by
jacobkm said on IRC I could mark it as RFC.
comment:9 Changed 6 years ago by
comment:10 Changed 6 years ago by
comment:11 Changed 6 years ago by
OK, I wasn't using a multi-db capable django.test.TestCase in the v3 patch.
As far as I can tell, two things were still broken when the change was rolled back:
assertNumQueriesmissed the
using=keyword argument
- we expect two queries, not one
I just uploaded a v4 patch that builds upon jezdez' work by fixing this two issues. The "cache" tests pass on mysql, sqlite and postgresql.
comment:12 Changed 6 years ago by
Oops. 15255.4.diff and 15255.4.2.diff are the same file. Trac won't let me remove one.
comment:13 Changed 6 years ago by
I just updated the patch against trunk. I checked that the tests pass under SQLite, PostgreSQL, MySQL and Oracle.
comment:14 Changed 6 years ago by
comment:15 Changed 6 years ago by
Milestone 1.4 deleted
Agreed - allow_syncdb() instructions should be honored. As a side note, there is a related refactoring in the test code (db.backends.creation.create_test_db) -- allow_syncdb is being checked there prior to invoking the database cache table command. | https://code.djangoproject.com/ticket/15255 | CC-MAIN-2017-43 | refinedweb | 650 | 65.73 |
Okay so right now I am working on my 2nd program yet. I just started Java a few days ago and programming--overall-- for only a week, so a lot of information is very foreign to me.
What I've been doing is following some basic tutorials, and with each section I learn, I will make my own program to test my own abilities and understanding.
This means that I am trying to work out the problems of the programs i want to make with less resources (since I have not covered many topics yet)
I am now working on a very minor card game. (It will simply be executed in the play function of Eclipse. There will be no special graphics or anything)
What I thought would be a good idea was to create a class for the characteristics required of each card:
package cardGame; public class cards { String name; int traitBarrier; String traitCardType; int cardNumber; public void setName(String n){ name = n; } public String getName(){ return name; } public void setTraitBarrier(int b){ traitBarrier = b; } public int getTraitBarrier(){ return traitBarrier; } public void setCardNumber(int c){ cardNumber = c; } public int getCardNumber(){ return cardNumber; } public void setTraitCardType(String t){ traitCardType = t; } public String getTraitCardType(){ return traitCardType; } }
Then I created another class to database all the created cards(referencing the previous class):
package cardGame; public class cardLibrary { public static void main(String[] args) { cards card1 = new cards(); cards card2 = new cards(); cards card3 = new cards(); cards card4 = new cards(); cards card5 = new cards(); cards card6 = new cards(); cards card7 = new cards(); cards card8 = new cards(); cards card9 = new cards(); cards card10 = new cards(); card1.name = "Flames Elemental"; card1.cardNumber = 1; card1.traitCardType = "Fr"; card1.traitBarrier = 3; card2.name = "Earth Elemental"; card2.cardNumber = 2; card2.traitCardType = "Ert"; card2.traitBarrier = 5; card3.name = "Aquatic Necromancer"; card3.cardNumber = 3; card3.traitCardType = "Wtr"; card3.traitBarrier = 4; card4.name = "Soul Hunter"; card4.cardNumber = 4; card4.traitCardType = "Dk"; card4.traitBarrier = 4; card5.name = "Crusading Monk"; card5.cardNumber = 5; card5.traitCardType = "Lit"; card5.traitBarrier = 6; card6.name = "Explosion Vigilante"; card6.cardNumber = 6; card6.traitCardType = "Fr"; card6.traitBarrier = 2; card7.name = "Gate Keeper"; card7.cardNumber = 7; card7.traitCardType = "Ert"; card7.traitBarrier = 8; card8.name = "Priestess of the Lake"; card8.cardNumber = 8; card8.traitCardType = "Wtr"; card8.traitBarrier = 3; card9.name = "Jr. Assailant"; card9.cardNumber = 9; card9.traitCardType = "Dk"; card9.traitBarrier = 1; card10.name = "Divine Alchemist"; card10.cardNumber = 10; card10.traitCardType = "Lit"; card10.traitBarrier = 3; } }
Finally in my main class…
I seem to be confused on how exactly I can call the cards from my cardsLibrary class into my main class to create the deck.
Did I skip some steps that were required for me to be able to do this? | http://www.javaprogrammingforums.com/object-oriented-programming/34859-classes-classes-%5Bvery-beginner-asking-question%5D-%5Bw-codes%5D.html | CC-MAIN-2017-47 | refinedweb | 449 | 54.18 |
Dealing”. This logic is typically captured somewhere away from the “ZipCode” value, and typically duplicated throughout the application. For some reason, I was averse to creating small objects to hold these values and their simple logic. I don’t really know why, as data objects tend to be highly cohesive and can cut down a lot of duplication.
Beyond what Fowler walks through, I need to add a couple more features to my data object to make it really useful.
Creating the data object
First I’ll need to create the data object by following the steps in Fowler’s book. I’ll make the ZipCode class a DDD Value Object, and this is what I end up with:
public class Address { public ZipCode ZipCode { get; set; } } public class ZipCode { private readonly string _value; public ZipCode(string value) { // perform regex matching to verify XXXXX or XXXXX-XXXX format _value = value; } public string Value { get { return _value; } } }
This is pretty much where Fowler’s walkthrough stops. But there are some issues with this implementation:
- Now more difficult to deal with Zip in its native format, strings
- Zip codes used to be easier to display
Both of these issues can be easy to fix with the .NET Framework’s casting operators and available overrides.
Cleaning it up
First, I’ll override the ToString() method and just output the internal value:
public override string ToString() { return _value; }
Lots of classes, tools, and frameworks use the ToString method to display the value of an object, and now it will use the internal value of the zip code instead of just outputting the name of the type (which is the default).
Next, I can create some casting operators to go to and from System.String. Since zip codes are still dealt with mostly as strings in this system, I stuck with string instead of int or some other primitive. Also, many other countries have different zip code formats, so I stayed with strings. Here are the cast operators, both implicit and explicit:
public static implicit operator string(ZipCode zipCode) { return zipCode.Value; } public static explicit operator ZipCode(string value) { return new ZipCode(value); }
I prefer explicit operators when converting from primitives, and implicit operators when converting to primitives. FDG guidelines for conversion operators are:
DO NOT provide a conversion operator if such conversion is not clearly expected by the end users.
DO NOT define conversion operators outside of a type’s domain.
DO NOT provide an implicit conversion if the conversion is potentially lossy.
DO NOT throw exceptions from implicit casts.
DO throw System.InvalidCastException if a call to a cast operator results in lossy conversion and the contract of the operator does not allow lossy conversions.
I meet all of these guidelines, so I think this implementation will work.
End result
Usability with the changed ZipCode class is much improved now:
Address address = new Address(); address.ZipCode = new ZipCode("12345"); // constructor address.ZipCode = (ZipCode) "12345"; // explicit operator string zip = address.ZipCode; // implicit operator Console.WriteLine("ZipCode: {0}", address.ZipCode); // ToString method
Basically, my ZipCode class now “plays nice” with strings and code that expects strings.
With any hurdles out of the way for using simple data objects, I can eliminate a lot of duplication and scattered logic by creating small, specialized classes for all of the special primitives in my app. | https://lostechies.com/jimmybogard/2007/12/03/dealing-with-primitive-obsession/ | CC-MAIN-2018-22 | refinedweb | 557 | 52.09 |
The Sidebar ist placholder.
Example:
... ** tb-translation-workflow | ub-helpfiles-new-content ...
Add new pages
Tasks and Tools
Where "Tasks and Tools" is the name of an existing page of the Main namespace.
If the link shall point to an anchor of a page you have to insert a redirect-page as a work around. Following the nomenclature this page should be called ub-name-redirect and shall only contain the #REDIRECT statement wich leads to the final destination page.
So let's improve the example from above:
... ** ub-helpfiles-new-content-url | ub-helpfiles-new-content ...
Add new pages
ub-helpfiles-new_content-redirect
Tasks and Tools#Add_New_Pages "Userbase's custom interface messages". Finally choose your language and fetch the items to be translated. | http://techbase.kde.org/index.php?title=Modifying_the_Sidebar&oldid=62619 | CC-MAIN-2014-15 | refinedweb | 124 | 56.55 |
Everyone makes mistakes—even seasoned professional developers! Python’s interactive interpreter, IDLE, is pretty good at catching mistakes like syntax errors and runtime errors, but there’s a third type of error that you may have already experienced. Logic errors occur when an otherwise valid program doesn’t do what was intended. Logic errors cause unexpected behaviors called bugs. Removing bugs is called debugging.
A debugger is a tool that helps you hunt down bugs and understand why they’re happening. Knowing how to find and fix bugs in your code is a skill that you’ll use for your entire coding career!
In this tutorial, you’ll:
- Learn how to use IDLE’s Debug Control window
- Practice debugging on a buggy function
- Learn alternative methods for debugging your code
Note: This tutorial is adapted from the chapter “Finding and Fixing Code Bugs” in Python Basics: A Practical Introduction to Python 3.
The book uses Python’s built-in IDLE editor to create and edit Python files and interact with the Python shell, so you will see references to IDLE’s built-in debugging tools throughout this tutorial. However, you should be able to apply the same concepts to the debugger of your choice.
Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you'll need to take your Python skills to the next level.
Use the Debug Control Window#
The main interface to IDLE’s debugger is the Debug Control window, or the Debug window for short. You can open the Debug window by selecting Debug→Debugger from the menu in the interactive window. Go ahead and open the Debug window.
Note: If the Debug menu is missing from your menu bar, then make sure to bring the interactive window into focus by clicking it.
Whenever the Debug window is open, the interactive window displays
[DEBUG ON] next to the prompt to indicate that the debugger is open. Now open a new editor window and arrange the three windows on your screen so that you can see all of them simultaneously.
In this section, you’ll learn how the Debug window is organized, how to step through your code with the debugger one line at a time, and how to set breakpoints to help speed up the debugging process.
The Debug Control Window: An Overview#
To see how the debugger works, you can start by writing a simple program without any bugs. Type the following into the editor window:
1 for i in range(1, 4): 2 j = i * 2 3 print(f"i is {i} and j is {j}")
Save the file, then keep the Debug window open and press F5. You’ll notice that execution doesn’t get very far.
The Debug window will look like this:
Notice that the Stack panel at the top of the window contains the following message:
> '__main__'.<module>(), line 1: for i in range(1, 4):
This tells you that line 1 (which contains the code
for i in range(1, 4):) is about to be run but hasn’t started yet. The
'__main__'.module() part of the message refers to the fact that you’re currently in the main section of the program, as opposed to being, for example, in a function definition before the main block of code has been reached.
Below the Stack panel is a Locals panel that lists some strange looking stuff like
__annotations__,
__builtins__,
__doc__, and so on. These are internal system variables that you can ignore for now. As your program runs, you’ll see variables declared in the code displayed in this window so that you can keep track of their value.
There are five buttons located at the top left-hand corner of the Debug window: Go, Step, Over, Out, and Quit. These buttons control how the debugger moves through your code.
In the following sections, you’ll explore what each of these buttons does, starting with Step.
Over and Out#
The Over button works sort of like a combination of Step and Go. It steps over a function or a loop. In other words, if you’re about to step into a function with the debugger, then you can still run that function’s code without having to step all the way through each line of it. The Over button takes you directly to the result of running that function.
Likewise, if you’re already inside a function or loop, then the Out button executes the remaining code inside the function or loop body and then pauses.
In the next section, you’ll look at some buggy code and learn how to fix it with IDLE.
Squash Some Bugs#
Now that you’ve gotten comfortable with using the Debug Control window, let’s take a look at a buggy program.
The following code defines a function
add_underscores() that takes a single string object
word as an argument and returns a new string containing a copy of
word with each character surrounded by underscores. For example,
add_underscores("python") should return
"_p_y_t_h_o_n_".
Here’s the buggy code:
def add_underscores(word): new_word = "_" for i in range(len(word)): new_word = word[i] + "_" return new_word phrase = "hello" print(add_underscores(phrase))
Type this code into the editor window, then save the file and press F5 to run the program. The expected output is
_h_e_l_l_o_, but instead all you see is
o_, or the letter
"o" followed by a single underscore.
If you already see what the problem with the code is, don’t just fix it. The point of this section is to learn how to use IDLE’s debugger to identify the problem.
If you don’t see what the problem is, don’t worry! By the end of this section, you’ll have found it and will be able to identify similar problems in other code you encounter.
Note: Debugging can be difficult and time consuming, and bugs can be subtle and hard to identify.
While this section looks at a relatively basic bug, the method used to inspect the code and find the bug is the same for more complex problems.
Debugging is problem solving, and as you become more experienced, you’ll develop your own approaches. In this section, you’ll learn a simple four-step method to help get you started:
- Guess which section of code may contain the bug.
- Set a breakpoint and inspect the code by stepping through the buggy section one line at a time, keeping track of important variables along the way.
- Identify the line of code, if any, with the error and make a change to solve the problem.
- Repeat steps 1–3 as needed until the code works as expected.
Step 1: Make a Guess About Where the Bug Is Located#
The first step is to identify the section of code that likely contains the bug. You may not be able to identify exactly where the bug is at first, but you can usually make a reasonable guess about which section of your code has an error.
Notice that the program is split into two distinct sections: a function definition (where
add_underscores() is defined), and a main code block that defines a variable
phrase with the value
"hello" and then prints the result of calling
add_underscores(phrase).
Look at the main section:
phrase = "hello" print(add_underscores(phrase))
Do you think the problem could be here? It doesn’t look like it, right? Everything about those two lines of code looks good. So, the problem must be in the function definition:
def add_underscores(word): new_word = "_" for i in range(len(word)): new_word = word[i] + "_" return new_word
The first line of code inside the function creates a variable
new_word with the value
"_". You’re all good there, so you can conclude that the problem is somewhere in the body of the
for loop.
Step 2: Set a Breakpoint and Inspect the Code#
Now that you’ve identified where the bug must be, set a breakpoint at the start of the
for loop so that you can trace out exactly what’s happening inside the code with the Debug window:
Open the Debug window and run the file. Execution still pauses on the very first line it sees, which is the function definition.
Press Go to run through the code until the breakpoint is encountered. The Debug window will now look like this:
At this point, the code is paused just before entering the
for loop in the
add_underscores() function. Notice that two local variables,
word and
new_word, are displayed in the Locals panel. Currently,
word has the value
"hello" and
new_word has the value
"_" as expected.
Click Step once to enter the
for loop. The Debug window changes, and a new variable
i with the value
0 is displayed in the Locals panel:
i is the counter used in the
for loop, and you can use it to keep track of which iteration of the
for loop you’re currently looking at.
Click Step one more time. If you look at the Locals panel, then you’ll see that the variable
new_word has taken on the value
"h_":
This isn’t right. Originally,
new_word had the value
"_", and on the second iteration of the
for loop it should now have the value
"_h_". If you click Step a few more times, then you’ll see that
new_word gets set to
e_, then
l_, and so on.
Step 3: Identify the Error and Attempt to Fix It#
The conclusion you can make at this point is that, at each iteration of the
for loop,
new_word is overwritten with the next character in the string
"hello" and a trailing underscore. Since there’s only one line of code inside the
for loop, you know that the problem must be with the following code:
new_word = word[i] + "_"
Look at the line closely. It tells Python to get the next character of
word, tack an underscore onto the end of it, and assign this new string to the variable
new_word. This is exactly the behavior you’ve witnessed by stepping through the
for loop!
To fix the problem, you need to tell Python to concatenate the string
word[i] + "_" to the existing value of
new_word. Press Quit in the Debug window, but don’t close the window just yet. Open the editor window and change the line inside the
for loop to the following:
new_word = new_word + word[i] + "_"
Step 4: Repeat Steps 1 to 3 Until the Bug Is Gone#
Save the new changes to the program and run it again. In the Debug window, press Go to execute the code up to the breakpoint.
Note: If you closed the debugger in the previous step without clicking Quit, then you may see the following error when reopening the Debug window:
You can only toggle the debugger when idle
Always be sure to click Go or Quit when you’re finished with a debugging session instead of just closing the debugger, or you might have trouble reopening it. To get rid of this error, you’ll have to close and reopen IDLE.
The program pauses just before entering the
for loop in
add_underscores(). Press Step repeatedly and watch what happens to the
new_word variable at each iteration. Success! Everything works as expected!
Your first attempt at fixing the bug worked, so you don’t need to repeat steps 1–3 anymore. This won’t always be the case. Sometimes you’ll have to repeat the process several times before you fix a bug.
Alternative Ways to Find Bugs#
Using a debugger can be tricky and time consuming, but it’s the most reliable way to find bugs in your code. Debuggers aren’t always available, though. Systems with limited resources, such as small Internet of Things devices, often won’t have built-in debuggers.
In situations like these, you can use print debugging to find bugs in your code. Print debugging uses
print() to display text in the console that indicates where the program is executing and what the state of the program’s variables are at certain points in the code.
For example, instead of debugging the previous program with the Debug window, you could add the following line to the end of the
for loop in
add_underscores():
print(f"i = {i}; new_word = {new_word}")
The altered code would then look like this:
def add_underscores(word): new_word = "_" for i in range(len(word)): new_word = word[i] + "_" print(f"i = {i}; new_word = {new_word}") return new_word phrase = "hello" print(add_underscores(phrase))
When you run the file, the interactive window displays the following output:
i = 0; new_word = h_ i = 1; new_word = e_ i = 2; new_word = l_ i = 3; new_word = l_ i = 4; new_word = o_ o_
This shows you what the value of
new_word is at each iteration of the
for loop. The final line containing just a single underscore is the result of running
print(add_underscore(phrase)) at the end of the program.
By looking at the above output, you can come to the same conclusion you did while debugging with the Debug window. The problem is that
new_word is overwritten at each iteration.
Print debugging works, but it has several disadvantages over debugging with a debugger. First, you have to run your entire program each time you want to inspect the values of your variables. This can be an enormous waste of time compared to using breakpoints. You also have to remember to remove those
print() function calls from your code when you’re done debugging!
The example loop in this section may be a good example for illustrating the process of debugging, but it’s not the best example of Pythonic code. The use of the index
i is a giveaway that there might be a better way to write the loop.
One way to improve this loop is to iterate over the characters in
word directly. Here’s one way to do that:
def add_underscores(word): new_word = "_" for letter in word: new_word = new_word + letter + "_" return new_word
The process of rewriting existing code to be cleaner, easier to read and understand, or more in line with the standards set by a team is called refactoring. We won’t discuss refactoring in this tutorial, but it’s an essential part of writing professional-quality code.
Conclusion: Python Debugging With IDLE#
That’s it! You now know all about debugging using IDLE’s Debug window. You can use the basic principles you used here with a number of different debugging tools. You’re now well equipped to start debugging your Python code.
In this tutorial, you learned:
- How to use IDLE’s Debug Control window to inspect the values of variables
- How to insert breakpoints to take a closer look at how your code works
- How to use the Step, Go, Over, and Out buttons to track down bugs line by line
You also got some practice debugging a faulty function using a four-step process for identifying and removing bugs:
- Guess where the bug is located.
- Set a breakpoint and inspect the code.
- Identify the error and attempt to fix it.
- Repeat steps 1 to 3 until the error is fixed.
Debugging is as much an art as it is a science. The only way to master debugging is to get a lot of practice with it! One way to get some practice is to open the Debug Control window and use it to step through your code as you work on the exercises and challenges you find in other Real Python tutorials.
For more information on debugging Python code, check out Python Debugging With Pdb. If you enjoyed what you learned in this sample from Python Basics: A Practical Introduction to Python 3, then be sure to check out the rest of the book. | https://realpython.com/python-debug-idle/ | CC-MAIN-2020-40 | refinedweb | 2,656 | 66.67 |
How control calling threads instances of same variable in Singleton?
Robert Paris
Ranch Hand
Joined: Jul 28, 2002
Posts: 585
posted
Mar 18, 2004 14:54:00
0
I have a VM-wide Singleton. In this singleton I have a variable (or property) that is set by calling threads. I want the following:
1. When a calling thread sets it, that value is visible only to that calling thread (or rather within the singleton only when called by that thread) and all the sub-threads it spawns
2. Every calling thread (to the singleton's method) gets a separate instance (although all sub-threads of that calling thread are also considered to be of the same thread)
What is the best/most-accurate(?) way to do this?
As a quick example of something like what I'm talking about:
imagine a variable "currentCaller". Let's say this is set by calling MySingleton.setCaller() (this method may or may not be static - any recommendations?) If Thread "A" sets it to "John" then every time Thread "A" or any threads under Thread "A" call any method on MySingleton, it will know to use currentUser=John. At the same time, Thread "B" sets it to "Joe" and it should be Joe for all it's calls.
Thanks!
David Weitzman
Ranch Hand
Joined: Jul 27, 2001
Posts: 1365
posted
Mar 18, 2004 15:07:00
0
Sounds like what you want is
ThreadLocal
.
Robert Paris
Ranch Hand
Joined: Jul 28, 2002
Posts: 585
posted
Mar 19, 2004 16:31:00
0
Yes, but how exactly is that going to work? Let's say I have the following:
Main Thread | ---------------------------------- | | Thread A Thread B | -------------- | | Thread B1 Thread B2
I want Thread A and B to have diff. values for that
ThreadLocal
variable. However, I would like to have Thread B1 and B2 to have the same as Thread B. In other words, every thread that is the child of the main thread is considered a new thread. Every thread under those threads inherit from the parent which is directly under the main thread. Is there a best way to do this?
I assumed orginally I wanted
InheritableThreadLocal
, but how do I make sure it's inheritable only from the child under "main" downwards and not that everything shares the same (i.e. if they inherited from "main" thread)?
David Weitzman
Ranch Hand
Joined: Jul 27, 2001
Posts: 1365
posted
Mar 20, 2004 13:57:00
0
Look at what this code does. Some variant of it may meet your needs. Note that the call to threadLocal.get() in the main thread is very important, because otherwise the child threads of main will just get a null initial value:
public class Test { static int counter = 0; private static synchronized int nextCount() { return ++counter; } private static ThreadLocal threadLocal = new InheritableThreadLocal() { protected Object initialValue() { return null; } public void set(Object value) { throw new UnsupportedOperationException("I'll take care of the setting, thank you!"); } protected Object childValue(Object parentValue) { return parentValue == null ? "" + nextCount() : parentValue; } }; private static class Runner implements Runnable { public Runner(String creator, int depth) { this.creator = creator; this.depth = depth; } String creator; int depth; public void run() { System.out.println("Thread " + creator + " running." + " In group " + Thread.currentThread().getThreadGroup().getName() + ". Value = "+ threadLocal.get()); if (depth > 0) { new Thread(new Runner(creator + "'s child", depth - 1)).start(); } } } public static void main(String[] args) { threadLocal.get(); new Thread(new Runner("First", 3)).start(); new Thread(new Runner("Second", 3)).start(); new Thread(new Runner("Third", 3)).start(); } }
Outputs something similar to this:
Thread First running. In group main. Value = 1 Thread First's child running. In group main. Value = 1 Thread Second running. In group main. Value = 2 Thread Third running. In group main. Value = 3 Thread First's child's child running. In group main. Value = 1 Thread Second's child running. In group main. Value = 2 Thread Third's child running. In group main. Value = 3 Thread First's child's child's child running. In group main. Value = 1 Thread Second's child's child running. In group main. Value = 2 Thread Third's child's child running. In group main. Value = 3 Thread Third's child's child's child running. In group main. Value = 3 Thread Second's child's child's child running. In group main. Value = 2
[ March 20, 2004: Message edited by: David Weitzman ]
I agree. Here's the link:
subject: How control calling threads instances of same variable in Singleton?
Similar Threads
Dan's: Blocked vs Waiting - possible mistake
Threads Notes for Exams
B&S: Opening database
NX: cacheless design to keep things simple?
Concurrent invocation of a singletone method
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/232506/threads/java/control-calling-threads-instances-variable | CC-MAIN-2013-48 | refinedweb | 796 | 77.13 |
go to bug id or search bugs for
Description:
------------
If the callback function used by usort handles an exception using a try/catch block, a warning is generated. The correct sorting is still done. This happens even when the exception & handling doesn't involve the variables.
The example below is the usort example from the manual with only the try/catch block added. Reproducible in PHP 5.2.11 but not 5.2.9
Reproduce code:
---------------
<?php
function cmp($a, $b)
{
if ($a == $b) {
return 0;
}
try {
throw new Exception();
} catch (Exception $E) {
}
return ($a < $b) ? -1 : 1;
}
$a = array(3, 2, 5, 6, 1);
usort($a, "cmp");
Expected result:
----------------
No warning message.
Actual result:
--------------
PHP Warning: usort(): Array was modified by the user comparison function in /home/jcampbell/usortExceptionWarning.php on line 19
Add a Patch
Add a Pull Request
The problem seems to be that usort checks the amount of references
before and after the function call to see if the user-provided function
modified it, but inside the function call, debug_backtrace_get_args adds
a reference to the passed variables to use in e.g. debug_backtrace's
"arg" element.
This was caused by the fix for bug #50006 (there weren't such checks before :)
Stas, can you check this out? Didn't expect anyone to use exceptions, did you? :D
affects gentoo builds after > 5.2.10 (5.2.11, 5.2.11-r1, and 5.2.12)
The reason seems to be that when making exception backtrace, debug_backtrace_get_args() uses SEPARATE_ZVAL_TO_MAKE_IS_REF() on arguments, which makes it look as if the argument was indeed modified (which usort is designed to protect against, since cmp callback is not supposed to modify the arguments)
I printed a debug line from my usort callback. It called debug_backtrace() to print the line and sourcefile in the debuglog. And therefor triggered the error. Even more, it did not sort.
Maybe the phpmanual should state that usort() callbacks are not allowed to write loglines. I also think that usort() callbacks that DO change the array are perfectly legal, as long as they don't change the sort.
Maybe your sorter code needs stackoverflow protection or whatever, but calling certain code 'invalid', because it causes your code to SEGV is a stupid way to solve a bug.
I notice this is still affecting PHP 5.3.3 (Windows/Apache install).
Is this likely to be fixed soon - is it a question of developer time and priority
or is it too difficult to fix?
It's quite irritating - I realise that the obvious solution is to avoid throwing
the exception (ha-ha) but it's a useful function and exceptions are... inevitable.
This bug is still present as of PHP 5.3.8, we ran into it today and spent most of a day trying to figure out what was causing the error message "Array was modified by the user comparison function", when CLEARLY, NOTHING was changing the array at all!
The exception was not thrown/caught directly in the usort function but rather in a constructor of a class that was called about 3 or 4 functions deep from the usort, making it very difficult to track down.
After finally figuring out the exception was somehow related, we searched google and found this bug report. I'm sure we can agree that the minor act of catching an exception should not result in usort throwing a warning message. This bug is a huge timewaster :(
It took me a while to figure out that some code called from usort was throwing, catching, and (gracefully) handling an Exception. Then I found this post. Quite frustrating.
I turned off warnings with ini_set before calling usort, then turned them on again after. This is an effective workaround for now, but I'd love to clean that nastiness out of my code.
It is also my opinion that usort should be allowed to change the elements in the array. EG: an instance variable of an object may be lazy-loaded as a result of a method call from within a usort callback. Should a warning really be issued in that case?
This same problem arises when using Mockery to mock the object whose method is
being used by usort(), even though the method itself neither is mocked nor handles
any exceptions. The proxy generated by Mockery must wrap the target class's
methods with some exception-handling code.
Unfortunately this forced me to code a workaround that would not use usort. My
hack extracts from the objects in the array the values being sorted on, sorts that
array of values using asort() (to preserve the keys), and finally rebuilds the
list of objects using the keys in the order that they appear in the asorted list
of values. Yuck.
This will probably be obvious to most, but I just wanted to mention that you can always prefix the usort function with the @ symbol to prevent the warning...of course that would also suppress any other types of notices or warnings that might occcur anywhere within the sorting function...
Php 5.4.16 also fails with this.
Still the same status for 3 and a half years old bug?!
I'd like to add, that you do not have to throw an exception to get this warning.
Mere creating it, also triggers the warning, as in:
<?php
function comp($a,$b){
@new Exception("dupa");
}
$a =array(1,2,3);
usort($a, 'comp');
var_dump($a);
?>
PHP Warning: usort(): Array was modified by the user comparison function in
/home/jlopuszanski/test.php on line 6
I ran into a similar issue, i'm sure it'll require the same patch as it's the
backtrace causing the problem but worth noting it doesn't require an exception to
trigger this, just a backtrace.
$ cat usort.php
<?php
set_error_handler(function($errno, $errstr) {
$bt = debug_backtrace();
var_dump($errstr);
});
$arr = [1, 2];
usort($arr, function($a, $b) use ($arr) {
trigger_error('test');
return $a > $b;
});
$ php usort.php
string(4) "test"
string(59) "usort(): Array was modified by the user comparison function"
Sorry the use () isn't relevant, I forgot to remove it when simplifying my test
case | https://bugs.php.net/bug.php?id=50688 | CC-MAIN-2014-15 | refinedweb | 1,028 | 61.97 |
.
Java Beans: They are platform- independent component and usable software programs which you can use develop and assemble easily to create complex applications. JavaBean are also known as beans. Beans are called dynamic as they can be easily customized or changed.
In this example we have created one bean class consisting only of setter and getter method. These setter and getter method will be used in the jsp. To set the value in a jsp page use <jsp:setProperty> standard tag. We are using the EL to retrieve the value of the bean. To retrieve the value from the Html form or a jsp page and set those retrieved values in bean we use ${param.value}. This same as request.getParameter("value") used in scripting. To get the value from java bean by using EL we use ${instanceOfBean.value}. By using this we are able to retrieve the value which we have set in the bean.
The code of the program is given below:
<html> <head> <title>Complex Java Beans Using El</title> </head> <body> <form method = "get" action = "ComplexJavaBeansELProgram.jsp"> Enter your name = <input type = "text" name = "name"><br> Enter your age = <input type = "text" name = "age"><br> Enter your address = <input type = "text" name = "address"><br> Enter your phoneNo = <input type = "text" name = "phone"><br> <input type = "submit" name = "submit" value = "submit"> </form> </body> </html>
package Mybean; public class ComplexJavaBeansUsingEL{ private String name; private int age; private String address; private long phone; public void setName(String name){ this.name = name; } public String getName(){ return name; } public void setAge(int age){ this.age = age; } public int getAge(){ return age; } public void setAddress(String address){ this.address = address; } public String getAddress(){ return address; } public void setPhone(long phone){ this.phone = phone; } public long getPhone(){ return phone; } }
<jsp:useBean <jsp:setProperty <jsp:setProperty <jsp:setProperty <jsp:setProperty <html> <body> <h1>EL and Complex JavaBeans</h1> <table border="1"> <tr> <td>${complex.name}</td> <td>${complex.age}</td> <td>${complex.address}</td> <td>${complex.phone}</td> </tr> </table> </body> </html>
The output of the program is given below:
If you enjoyed this post then why not add us on Google+? Add us to your Circles
Liked it! Share this Tutorial
Discuss: EL and complex JavaBeans1
Post your Comment | http://www.roseindia.net/jsp/simple-jsp-example/ComplexJavaBeansELProgram.shtml | CC-MAIN-2014-52 | refinedweb | 376 | 55.24 |
Docker container isolation, does it care about underlying Linux OS?
If I run Docker Engine and the same container on a set of different Linux distributions, will the container run in the same way? I am asking because in many cases applications depend on a specific Linux distribution for some resources, such as fonts. If my application running inside a Docker container depends on a font used in Ubuntu (and there may be many other dependencies), how is this managed? Will I need to install the font inside container, will I need to run Ubuntu inside the container running the application, or does the application use fonts from the underlying OS running the container?
2 Solutions collect form web for “Docker container isolation, does it care about underlying Linux OS?”
Any missing resources should be installed in a Docker image (which can start from the ubuntu image).
It should not rely on host for dependencies.
The idea is to be able to reproduce the environment each time a container is run from an image.
A container don’t see the host resources (beside mounted volumes), since it has the Docker engine between the container and the host, in order to configure cgroups and namespaces to control which resources the container can see and access.
The “fedora” image referenced in jboss/base is the base image:
In Docker terminology, a read-only Layer is called an image. An image never changes.
Since Docker uses a Union File System, the processes think the whole file system is mounted read-write. But all the changes go to the top-most writeable layer, and underneath, the original file in the read-only image is unchanged.
Since images don’t change, images do not have state.
See “What is the relationship between the docker host OS and the container base image OS?”:
The only relationship between the host OS and the container is the Kernel.
as the kernel is still the kernel of the host, you will not have any specific kernel module/patches provided by the distribution.
What you need to be careful is
- the kernel dependency,
- and some mandatory access control (SELinux, Apparmor) configurations, which are distribution dependent and may have an impact on how your Docker containers work. | http://dockerdaily.com/docker-container-isolation-does-it-care-about-underlying-linux-os/ | CC-MAIN-2018-30 | refinedweb | 376 | 51.78 |
Test::Kwalitee - test the Kwalitee of a distribution before you release it
version 1.17
# in a separate test file BEGIN { unless ($ENV{RELEASE_TESTING}) { use Test::More; plan(skip_all => 'these tests are for release candidate testing'); } } use Test::Kwalitee;.)
To run only a handful of tests, pass their names to the module in the
test argument (either in the
use directive, or when calling
import directly):
use Test::Kwalitee tests => [ qw( use_strict has_tests ) ];
To disable a test, pass its name with a leading minus (
-):
use Test::Kwalitee tests => [ qw( -use_strict has_readme ));
The list of each available metric currently available on your system can be obtained with the
kwalitee-metrics command (with descriptions, if you pass
--verbose or
-v, but as of Test::Kwalitee 1.09 and Module::CPANTS::Analyse 0.87, the tests include:
Build.PL/Makefile.PL should not have an executable bit
Does the distribution have a build tool file?
Does the distribution have a changelog?
Does the distribution have a MANIFEST?
Does the distribution have a META.yml file?
Does the distribution have a README file?
Does the distribution have tests?
Does the distribution have no symlinks?
Can the the META.yml be parsed?
Does the META.yml declare a license?
Does the distribution have proper libs?
If using Module::Install, it is at least version 0.61?
If using Module::Install, it is at least version 0.89?
Is there a
LICENSE section in documentation, and/or a LICENSE file present?
Does the distribution have no POD errors?
If a SIGNATURE is present, can it be verified?
Does the distribution files all use strict?
Were there no errors encountered during CPANTS testing?
With thanks to CPANTS and Thomas Klausner, as well as test tester Chris Dolan.
This software is copyright (c) 2005 by chromatic.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~ether/Test-Kwalitee-1.17/lib/Test/Kwalitee.pm | CC-MAIN-2014-23 | refinedweb | 322 | 68.77 |
Opened 3 years ago
Last modified 3 months ago
#27536 new defect
Conversion of mathematical constant such as pi to RDF leaks memory
Description
As reported in Ask SageMath question #45863:
import gc for x in xrange(10^5,10^5+100): s = sum(RDF(pi) for n in xrange(x)) del(s) print "memory usage: " + str(get_memory_usage()) + ", gc: " + str(gc.collect())
The same happens with the other mathematical constants.
The problem does not occur when
RDF(pi) is replaced by e.g.
RDF.pi().
My guess based on
trace("RDF(pi)") is that it has something to do with caching.
Change History (10)
comment:1 Changed 3 years ago by
- Component changed from memleak to symbolics
- Keywords pynac added
comment:2 Changed 3 years ago by
- Cc rws slelievre added
comment:3 Changed 3 years follow-up: ↓ 5 Changed 19 months ago by
This issue is still present in version 9.2 and seems like a major bug, much more general than just to RDF.
Repeatedly using the constant pi rapidly leaks memory. As a test, running the simple loop below on CoCalc? steadily consumes about 15 MB/sec until memory is exhausted.
while(true): if pi>3: x=0
comment:5 in reply to: ↑ 4 Changed 18 months ago by
Replying to gh-agreatnate:
while(true): if pi>3: x=0
I suspect this is indeed the same problem. Now, we're seeing intervalfield elements pile up and a positive thing: because they have pointers themselves, the GC actually tracks them, so we see them on the python heap! (RDF don't need to be tracked by the garbage collector; ref counting does the trick for them, because they can't generate cycles. Neither can RIF elements, but python doesn't know that. Both intervalfield and complex elements carry two pointers to real numbers. So I guess that's why we're seeing them, but not the realnumbers themselves, because they are leaves in the reference tree, as far as Python knows)
I get:
sage: import gc ....: from collections import Counter ....: if pi < 3: ....: pass ....: gc.collect() ....: pre={id(a) for a in gc.get_objects()} ....: for n in [1..2000]: ....: if pi < 3: ....: pass ....: gc.collect() ....: T=Counter(str(type(a)) for a in gc.get_objects() if id(a) not in pre) ....: 193 0 sage: T Counter({"<class 'dict'>": 4000, "<class 'sage.rings.real_mpfi.RealIntervalFieldElement'>": 4000, "<class 'sage.rings.complex_number.ComplexNumber'>": 3999, "<class 'tuple'>": 5, "<class 'list'>": 2, "<class '_ast.Interactive'>": 1, "<class 'function'>": 1, "<class 'set'>": 1})
Backtracking these elements on the heap gets you nothing, though! So as far as I can see, there are no other python heap opjects that hold references to these objects. An incref/decref disparity somewhere would cause that. Alternatively, links are being held outside the python heap (that's not illegal! They could be on the C stack or C heap) It's surprising to see "dict" leak in addition to mpfi elements.
comment:6 Changed 18 months ago by
Possibly it's in this stuff:
The incref here is probably sometimes required, and perhaps always. It does seem to match with:
On the other hand, here:
it seems the reference is NOT stolen (as is done in numeric types elsewhere). So perhaps that incref simply needs to be removed.
comment:7 Changed 18 months ago by
which of the two increfs should go? in ex.cpp, or in numeric.cpp?
comment:8 Changed 18 months ago by
- Report Upstream changed from N/A to Reported upstream. No feedback yet.
I've opened
comment:9 Changed 18 months ago by
comment:10 Changed 3 months ago by
- Report Upstream changed from Reported upstream. No feedback yet. to N/A
This issue is still present in Version 9.5 and, now that pynac has been merged into Sage, it is an issue in Sage rather than upstream.
It's not a leak on the python heap (gc.get_objects() doesn't report anything new), so it's probably not caching, but a memory leak in some library code that gets used.
Changing
RDFto
RRor
RIFalso leaks; using
floatdoes NOT leak. That might help a bit in pinning down where the leak occurs.
Drilling down yields that
pi._convert({'parent':float})does not leak and
pi._convert({'parent':RDF})does.
This rather directly leads to
self._gobj.evalf(0, kwds), i.e., a call in pynac. That's the most likely candidate for the leak. | https://trac.sagemath.org/ticket/27536 | CC-MAIN-2022-21 | refinedweb | 742 | 66.94 |
Power Control Box
A few years a go I built this solid-state relay power control box.
It connects to a parallel port allowing me to turn the power points on and off using software. The parallel port allows for up to 8 outputs by using data 0 through 7 (Pins 2 though 9).
I’ve had this box attached to my HTPC for the last few years; I use it to control power to my TV, subwoofer, table lamp etc.
As mentioned in my previous posts I’ve just finished building a new HTPC, and guess what, it has no parallel port! I thought it would be a simple case of using a USB to parallel adaptor but unfortunately these adaptors aren’t seen by windows as standard parallel ports; instead it appears in device manager as a “USB Printing Support” device hence can’t be addressed directly to turn the data pins on and off.
Print Server
After much googling I came across a project by Doktor Andy which uses a network print server to drive external devices. This was perfect since I had a HP JetDirect print server. I wasn’t able to get Doktor Andy’s circuit working with the JetDirect but Boyan Biandov who’s name was on Andy’s site was very helpful and told me how to get the JetDirect working. A single 74LS04 chip is all that is required to invert the strobe output and feed it back into the busy input, I’m not really a wiz with electronics but as I understand it this fools the print server in to thinking that there is a printer attached and everything is “ok”.
* EDIT *
You DON’T need to use the additional chip at all. Fred kindly commented and pointed this out: }
Smply change the value to 2
Gerrit and Milan have included some step-by-step instructions and a video below in the comments below on how to set this SNMP option.
Thanks to all have contacted me and contributed!
* EDIT *
The IC requires +5Volts and it is also nessecicary to connect +5volts to pins 10, 13 and 15. It wasn’t hard to find a +5v point on the print server board.
Connections; What needs to be connected to what:
Credit goes to Doktor Andy for this great idea and BIG thanks to Boyan who gave me just the right info when I was about to give up!
Control Software
I’ve created a full windows application to control devices attached to print servers, local parallel ports and K8055 USB boards. Download and read about it here.
Here is the simple c#.net class which I use to access the print server. Say you wanted to turn on pins 2, 4 and 6. Combine the pin values
Pin2=1
Pin3=2
Pin4=4
Pin5=8
Pin6=16
Pin7=32
Pin8=64
Pin9=128
Required value to tun on pins 2, 4 and 6 is 1+4+16=21
Call the output method specifying the port as ipaddress:port and the output value:
(Most print servers use tcp port 9100, multi port JetDirects use 9100 for port one, 9101 for port two etc)
IpPortAccess.Output(192.168.1.10:9100,21);
using System.Net; using System.Net.Sockets; using System; using System.Collections.Generic; using System.Text; namespace PowerControl { class IpPortAccess { public static void Output(string port,int value) { string[] ipport = port.Split(new char[] { ':' }); string _ip = ipport[0]; int _port = Convert.ToInt32(ipport[1]); Socket soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); soc.Connect(_ip,_port); byte[] sendData = new byte[1]; sendData[0] = Convert.ToByte(value); soc.Send(sendData); soc.Close(); } } }
Once making a new kitchen or renovating an existing
kitchen, it is highly recommended and highly recommended
to keep an open mind and find out everything possible – towards the end of the process,
it will definitely pay off. Here are some common myths
about the design process as well as results:
“You may easily add later” -the best planning is to take into account
and add all the characteristics of the kitchen at the planning and as well as or renewal stages,
and not at a later stage. For example, in case you are not thinking about
setting up a flat screen TV SET in the kitchen space or if you do not
currently have a water dispenser, the best thing is to make for
future installation of these also to plan electric powered, communication and water items
and keep the alternatives open up. Also, since the kitchen is supposed to
provide us for several years, it is a good idea to
measure all existing ergonomic solutions such
as drawer drawers, bathroom drawer style dishwashers, and so
forth Even if you are young and fit at the moment, these additions will prove themselves Mobility as time
goes on. Either way, if you prefer to deal with kitchen planning in stages – one aspect or specific area at a time – be certain to plan carefully so that everything will fit in stylistic and functional harmony
when the kitchen will finally be complete.
” I do not have so many things” – whether we like it or not, your kitchen has tremendous
potential for ruining due to multitude of items
kept in it, which is why space is plenty to store, This means that there is a relative shortage of overhead storage cupboards, obviously worthwhile considering
a compact kitchen.
“I do not desire a designer ” -not every kitchen design project requires a designer, but many people need someone to help us see the overall picture and watch over the work.
Renting consult with an architect or designer does involve
quite a lot of cost, but in the long run it is all about
keeping money and time. In case you have a very clear picture of what you want, do not save the expense of
a designer as it can best fulfill ideal. Proper and efficient planning of your kitchen and
coordination between all the different professionals engaged in building your kitchen by a
designer can keep your project within the some budget, save you unnecessary money and pointless problems and particularly keep your sanity.
For the reason that the admin of this website is working,
no doubt very shortly it will be famous, due to its quality contents.
Afro combina com uma grande variedade de acessórios.’m wrapping up my project, thought I’d follow up with a schematic I ended up using.
I figured out the problem with Port #1 on my HP Jetdirect 510x’s — it didn’t like the 7404 trick; the STROBE pulse was too quick for it to detect.
So I used a 555 as a “one shot” to more precisely emulate the printer’s BUSY and ACK signals:
With this, all three of my Jetdirect 510x’s ports work normally. The 555 does a better job of simulating the timing diagram of what a printer does, holding BUSY and ACK. In my case I chose values that holds these states for about 100 microseconds.. much longer than the STROBE. The 510x seems to like this better, esp. on port #1.
No SNMP settings needed, and this works /regardless/ of the “handshake” value being 2 or 3.
In this circuit the 555 is rigged as a “monostable multivibrator”, also known as a “one-shot”; a short pulse input (from the -STROBE) gives a slightly longer pulse out (100 us). You can control the rate of data this way; adjust the R/C values (Resistor/Capacitor) to control data rate to whatever speed you want. Just change the 10k resistor and .01uF capacitor as needed. Google around, and you’ll find tables of R/C values for the 555 monostable multivibrator circuit to precisely choose specific pulse widths.
I think fred spoke of doing this with a 555 in his Dec 1 2010 post (ne555 == 555); I’m sure he was talking about using it the same way. It’s a little bit more circuitry, but probably worth the trouble, as it’s surely better to emulate the printer as precisely as possible.
BTW, beware: the wikipedia page on the parallel port seems to show the wrong active states for ACK, BUSY, ERROR and RESET. Here’s their diagram:
To date it shows ACK being active hi, BUSY active low, etc. which I think are wrong. In the above 555 circuit diagram, I used their diagram but made the appropriate corrections; -ACK being active low, +BUSY as active high, etc.
Hi Erco,
Thanks again for all the info. Great work!
Cheers,
Rhys
Just an update … I got me some cheap stepper motors and driver board … and then I remembered the JetDirect still sitting in a box since the video I posted earlier… turns out it’s easy peasy to connect ….
one thing i dont get though is why it doesn’t work at anything above 4 bytes per second ! >:\
Hi Milan, Thanks for stopping by. Very cool idea. Although I can’t seem to see the video (says it’s private).
Cheers,
Rhys
Be sure to set TCP_NODELAY for your TCP connection. Above a certain rate, TCP will cache the bytes you send, presumably to be more efficient for building up large blocks of data and sending them in one packet, instead of sending individual bytes as separate packets.
In our case (sending bytes realtime), it works best to disable this “optimization” so we can have a “realtime” conversation with the device.
Here’s some python code I’m using now during testing which shows how to set TCP_NODELAY:
— snip
#!/usr/bin/python
import socket,time,os,sys
ip = “192.168.1.43”
port = 9100
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print “connecting to %s on port %d” % (ip, port)
server = (ip,port)
sock.connect(server)
while (1):
print “ON”
sock.send(“\xff”)
time.sleep(.05)
print “OFF”
sock.send(“\x00”)
time.sleep(.05)
—- snip
Without the TCP_NODELAY line, my sleep()s can’t be faster than .2 secs; the bytes get buffered and are pushed out all at once after enough bytes are cached.
When I /set/ TCP_NODELAY (as shown above), I can go faster without trouble.
I don’t know what the max rate is; I imagine if you send a byte stream, instead of a byte-at-a-time, you can send at the fastest rate the HP can handle.
..and as I knew it would, the blog website deleted the leading spaces from my python snippet above, which would make the code un-runnable as shown. All the lines following ‘while (1):’ should be indented 4 spaces for the above code to work.
Hi Erco,
Thanks for all the info. Much appreciated. I’m sure a lot of people will benefit. Sorry about WordPress messing with the code! I need a code-formatting plug-in for comments.
Cheers,
Rhys
Thanks for the 7404 circuit, that worked great, as I’m using the jetdirect to control some TTL circuitry.
I found with a lower power “LS” version of the 7404 (74LS04 = low power/schottky) I could power the chip with the +5v from pin #16 (the “RESET” line which should always be high). It seems to handle the current just fine. (Not the case with a regular 7404 — that needs too much current).
For me, I can’t get the SNMP handshaking hack to bypass the Busy checking on my Jetdirect 510x ports (it has 3 parallel ports), I /have/ to use 7404 inverter trick. And I have two of these devices; both act the same.
I know Fred mentioned that OIDs ending .0, .1 and .2 are a way to access the handshaking for the 3 different ports respectively, but it doesn’t seem to work for my 510x device; when I try to access the .1 and .2 versions of the OID, it complains with the errors shown below. I even upgraded to the latest firmware to date (from 08.60 -> 08.67), still no good. Example:
____
$ snmpset -v1 -c jetdirect 192.168.1.43 1.3.6.1.4.1.11.2.4.3.13.4.0 i 2 <– TRY .0 OF HANDSHAKE OID
SNMPv2-SMI::enterprises.11.2.4.3.13.4.0 = INTEGER: 2 <– WORKS
$ snmpset -v1 -c jetdirect 192.168.1.43 1.3.6.1.4.1.11.2.4.3.13.4.1 i 2 <– TRY .1 OID
Error in packet.
Reason: (noSuchName) There is no such variable name in this MIB. <– FAILS
$ snmpset -v1 -c jetdirect 192.168.1.43 1.3.6.1.4.1.11.2.4.3.13.4.2 i 2 <– TRY .2 OID
Error in packet.
Reason: (noSuchName) There is no such variable name in this MIB. <– FAILS
____
Perhaps this is a problem with my snmpset's internal MIB database.. but I'm thinking in this case it might be the /device/ that is responding with these errors. (I can't tell.. anyone know?)
To complicate matters, I can only seem to use port#2 and #3 on these Jetdirect 510x's I have.. I could never get port #1 to work. I send bytes to it, but I can't see data on pins #2 – #9. Same on both of my Jetdirects, with or without the firmware upgrade. Maybe port #1 got blown up by a previous owner.. I know I didn't break em. But definitely port#1 acts differently than the other two.
Nice wweblog right here! Also your site loads up fast!
What web host arre you the use of? Can I am getting your affiliate link for your host?
I wish my website loaded upp as quickly as yours lol
Hello there, I found your site by means of Google while searching for a comparable matter, your website came up,
it appears great. I have bookmarked it in my google bookmarks.
Hi there, simply was alert to your weblog via Google,
and located that it is really informative. I’m going
to be careful for brussels. I will be grateful should you proceed this in future.
Many folks will likely be benefited from your writing.
Cheers!
Every weekend i used to go to see this website, as i want enjoyment, since
this this web page conations actually pleasant
funny data too.
OK, so yesterday I thought I’d have another go, and this time I was able to figure it out no problem. I think that my problem was the SNMP community name: it was set and thus needed to change the necessary value on the JD170x. So the solution was to reset the print server, set the community name and then use that new name when setting the value.
I also made a little video tutorial for people of my skill level:
Next step would be to try and connect this to a Velleman K8056 relay board I have lying around. Or maybe a cheaper relay board like this:
Hi Milan (and all),
I also made a step by step instruction on how to setup the Jetdirect in order to control it from PHP or a user application.
Look at:.
—
Gerrit.
Thanks so much for your input guys and apologies Gerrit for not getting around to loading your excellent content into the post. Anyway It’s all here now which is good and I’ll make a note above.
Thanks Again,
Rhys
Yes the printserver is always powered,in trigger mode, in the morning the pc with the powercontrol soft is restarting and then the printserver gets one trigger.
Does the same thing occur when you restart the power control service with the windows services tool? Or only when the computer is starting?
Hi Rhys,
Got everything working (with a few mods) but got one question , the power control server is not allways on , i mean , in the morning it’ starting and works until 18.OO .
It works perfectly except while it is starting the power control server software it is triggering my port 5 for a short period , it’s like someone is triggering the contact.
Is this something i can change ?
Because it is triggering my garageport , , you understand it is not advisable that when i powerup the server my garageport is opening ….
I hope you understand my problem here
Hey, are you using trigger mode with a print server? Does the print server remain on 24×7? Does the problem occur if you restart the powercontrol service from the services tool in Windows or does it only occur when you restart the computer?
Hello, i would like to use this to control a relais for 1 second’i mean just like i push a button to open an garagedoor’is this possible ?
Who can help me plz?
Sure. You can just turn the output on then off again. Take a look at my power control tool:
This includes a trigger mode option which just fires the output for a short time.
Cheers,
Rhys
hi,
i’ve been cracking my brain about the setting the “1.3.6.1.4.1.11.2.4.3.13.4.0” value to 2 with an SMTP tool and I just can’t figure it out 🙁 Could someone help me with this part, step by step. (from a Windows machine). Thanks in advance.
By the way: great job on the application.
cheers,
milan
Hi.
I just found this web and very interested with it. I have some question. Is it possible to use the print server parallel port like LPT port? What I mean is: will I be able to read the status port, write the control port freely like in the LPT?
regards,
Andre
Hi Andre,
I haven’t done anything except for output so far. But take a look through Fred’s comments below and you should find the info you need.
Thanks for visiting.
Cheers,
Rhys
Hi Rhys,
Thanks for the reply. I see Fred uses the Status line as input. And you use Data line as output. What I didnt see is using the Control Line as output. Is it possible? Has anyone tried it?
Regards,
Andre
Yeah not sure sorry Andre. I guess you’ll just have to give it a go and let us know how you get on 🙂
Cheers,
Rhys
this week my garagedoor remotecontrol was broken, so i implement printer server to remot open trought cellphone
i have tested the busy line as input but when this line whas up the outputs where locked then i tryed the paper out line and it works great with that i can know on my cellphone if the door is open or closed
with this php code:
snmp_set_quick_print(1);
$state = snmpget(“192.168.x.x”, “public”, “1.3.6.1.4.1.11.2.4.3.13.5.0”);
$status=intval($state);
if ($status & 64) echo “”; else echo “”;
if you want use the printserver as input you can ask snmp too with this register
1.3.6.1.4.1.11.2.4.3.13.5.0
npPortStatusLines OBJECT-TYPE
SYNTAX INTEGER
ACCESS read-only
STATUS optional
DESCRIPTION
“The state of the centronics status lines from the peripheral. The
value is a bit mask.
Bit Value Signal
0 0x01 nACK
1 0x02
2 0x04
3 0x08
4 0x10 nFAULT
5 0x20 SELECT
6 0x40 PERROR
7 0x80 BUSY”
::={ npPort 5 }
i have tested the inputs respond correctly with snmp client software but i wans’t able to install apache snmp layer on my debian for this time
be carefull by using inputs and outputs while some channels are locking the outputs (select, error and aknowledgement) the others are free i think but i have not tested this configuration)
the next step for me is to add a ne555 on the busy line to have timebase for exemple with 1second timebase if i want to have the 1 bit high for 4 seconds i send 4 bytes and a zero byte
it’s better for automation i need this for my windows shutters
another thing with 3 ports jetdirect
the last zero of snmp adrress ist the port number 0:first 1:second 2:thirdt
(for changing the handshake and asking inputs too)
Hi Fred.
Thanks again for your comments. This is really great information. The question about inputs has been asked before but I haven’t been able to answer it. I’m planning to add input support to PowerControl for the K8055 so I will try to add input support for Jet Direct too.
Cheers,
Rhys
sorry for my english i’m french, snmp mean simple network management procol
i have used a snmp client softare to change the value of
1.3.6.1.4.1.11.2.4.3.13.4.0 register to 2
this setting would be stored after powerloss
look at google: snmp client , download , give the ip adress of the server and change the value
make only the briges (10 13 15 all connected to 1) on the db25 connector no needs to open the printserver
works great a command the printserver trought asterisk pbx software with my phone by calling a php page so i can switch things simply…
Thanks Fred, I’m going to give this a go on the weekend.
On linux, if you have the “snmpwalk” command installed (which includes the “snmpget” + “snmpset” commands used below), you can get/set this value.
To get and set this value, the following shows a screen history where I (a) check the current value and (b) change the value to “2”.
$ snmpget -v1 -c jetdirect 192.168.1.43 1.3.6.1.4.1.11.2.4.3.13.4.0
SNMPv2-SMI::enterprises.11.2.4.3.13.4.0 = INTEGER: 3
$ snmpset -v1 -c jetdirect 192.168.1.43 1.3.6.1.4.1.11.2.4.3.13.4.0 i 2
SNMPv2-SMI::enterprises.11.2.4.3.13.4.0 = INTEGER: 2
Note in the above: “$” is the shell prompt, “jetdirect” is the community name I set for the device (in the telnet menu with the “set-cmty-name:” command), and its IP address is 192.168.1.43. The lines that start with “SNMPv2…” are the responses. It shows “3” was the default value before I changed it to “2”.
In linux you can use the netcat command (‘nc’) to send a raw hex FF byte value to the port with e.g.
$ print “\xFF” | nc 192.168.1.43 9100
..where \xFF is the byte to send, and 9100 is the TCP port# of the HP Jetdirect’s raw port.
In my case I have an HP Jetdirect 510x which has 3 parallel ports, so port#1 is TCP port 9100, port#2 is 9101, and port#3 is port 9102.
Could you elaborate on the “a simple setting with any snmp tool”-part. What exactly is it that you need to do?
Yeah it’s a bit vague. I’ll give this a go and see what I can come up with then I’ll update the post.
Hey Fred,
Thanks a lot. Great stuff. This really makes things easier! I’ve edited the post to include your tip.
Cheers,
Rhys }
simply change the value to 2
sorry for my english i’m french
Brian > try using curl to po or some php in linux to connect and post 1 byte
I haven’t got that chip yet however i’ve connected up the status pin and it outputs ‘online’ i’ve put an led on 3&21 and it’s lit?
Yea, you’ll just have to see how you go but I couldn’t get it working without inverting strobe line with the chip.
The 74LS04 chip should fool the print server into thinking that everything is “ok” and ready to receive data.
Ah i’m with you, does the print server need to ‘think’ it’s online before the outputs can be controlled?
Hi Rhys
Do you have a diagram or list of the 8 outputs connections? in order to control the outputs do you merly need to call the ip address, port and output value? ie could it be done via a browser for example?
Right, so the outputs you need to use are data0 – data7 of the parallel port, can’t remember the pins but it will be easy to find. Yes you just need the ip and port. The pins are addressed in 8 bits. data0=1, data1=2, data2=4, data3=8 etc so if you wanted to turn on data0 and data3 you would need to send a value of 9 (8+1). As for a web interface you should be able to do what you need with PHP.
Hi Brian,
You can control 8 outputs without additional circuitry. I guess you might be able to use some thing like a shift register chip to get more outputs but I don’t have any experience with that. The other way to get more outputs is to use a print server which with 3 parallel ports each port is accessible on different tcp port e.g. 9100,9101,9102 etc.
As for Linux, I’ve never done it but I’m sure it’s very easy. You may be able to do something with /dev/tcp I suspect.
Cheers,
Rhys
Hi
I’ve been working on a similar project and will be doing another one with the 170x, can you tell me how many outputs can be controlled and also is it possible to do it via linux using command line? ie can the url 192.168.1.10:9100,21 be called and the pinouts changed?
thanks | https://blog.rhysgoodwin.com/hardware/print-server-power-control-hack/ | CC-MAIN-2020-16 | refinedweb | 4,303 | 70.43 |
Overview
Problem: How to choose a file starting with a given string?
Example: Consider that we have a directory with files as shown below.
How will you select the files starting with “
001_Jan“?
Python Modules Cheat Sheet To Choose A File Starting With A Given String
Choosing a file starting with a given string is easy if you know how to use the Python
os,
re,
pathlib, and the
glob modules. Assume you want to search/select the files starting with
001_Jan'
'from a list of files. You can use each module as follows:
➤OS
import os parent_path = os.listdir("<the folder hosting my-file.txt>") result = [] for file in parent_path: if file.startswith("prefix"): result.append(file) print(result)
➤Re
import os, re parent_path = os.listdir("<the folder hosting my-file.txt>") result = [] for file in parent_path: if re.match('prefix', file): result.append(file) print(result)
➤Glob
from glob import glob result = glob('*prefix*') print(result)
➤Pathlib
from pathlib import Path parent_path = Path('<the folder hosting my-file.txt>/') result = [file.name for file in parent_path.iterdir() if file.name.startswith('prefix')]
Now that you have a quick idea about how to approach the problem let us dive into each solution and find out the mechanism behind each solution.
Method 1: The OS Module
The
os module is the most significant module for working with files and folders in Python. It is primarily designed to access folders and files within your operating system.
Approach: To choose a file starting with a given string within a specific directory, you need to locate the directory containing the required files and then use the
startswith() method to find out all the files which begin with the given string.
Code:
import os parent_path = os.listdir(".") result = [] for file in parent_path: if file.startswith("001_Jan"): result.append(file) print(result)
Output: The result is a list containing the files starting with
001_Jan.
['001_Jan_Backup_01.txt', '001_Jan_Backup_02.txt', '001_Jan_Backup_03.txt']
Explanation: We are storing the current working directory in the
parent_path variable. We then initialize an empty list, result. Next, we loop through the contents of the parent directory, bookmark the file that starts with ‘
001_Jan‘ and append it to the result list. Finally, we print the result using Python’s
print() function.
['index.html']
Note:
startswith() is a built-in method in Python that returns
True when a string starts with a specified value; otherwise it returns
False.
Solve Using a List Comprehension
You can implement the above solution in a single line with the help of a list comprehension as shown below.
import os result = [filename for filename in os.listdir('.') if filename.startswith("001_Jan")] print(result)
Besides the
os module, we can get the same result using the regular expressions, the
glob, and
pathlib modules, as shown in the following sections.
- Recommended Read:
Method 2: Using Regular Expressions
We can use the
re module to work with regular expressions in Python. Regular expressions are crucial in searching and matching text patterns. We can use methods such as
re.compile(),
re.match with escape characters
(. * ^ ? + $ { } [ ] ( ) \ /)and quantifiers to search strings of texts.
Note:
- The
re.match(pattern, string)method returns a match object if the
patternmatches at the beginning of the
string. The match object contains useful information such as the matching groups and the matching positions. An optional argument
flagsallows you to customize the regex engine, for example to ignore capitalization. Read more here.
- The
re.findall(pattern, string)method scans
stringfrom left to right, searching for all non-overlapping matches of the
pattern. It returns a list of strings in the matching order when scanning the string from left to right. Read more here.
Approach: We can use the
re.match()method as demonstrated below to choose the files starting a given string.
import os import re parent_path = os.listdir(".") result = [] for file in parent_path: if re.match('001_Jan', file): result.append(file) print(result)
Output:
['001_Jan_Backup_01.txt', '001_Jan_Backup_02.txt', '001_Jan_Backup_03.txt']
Explanation: The
re.match() method is used inside a loop to find all occurrences of files matching with the given string. If you do not use the loop, only the first file matching the given string will be displayed.
Do you want to master the regex superpower? Check out my new book The Smartest Way to Learn Regular Expressions in Python with the innovative 3-step approach for active learning: (1) study a book chapter, (2) solve a code puzzle, and (3) watch an educational chapter video.
Method 3: Using The Glob Module
The
glob module is one of Python’s built-in modules for finding path names. It was inspired by Unix shell and regular expressions. Most of its methods are similar to Unix commands. The main difference between the
glob and
re modules is that while regular expressions use many escapes and quantifiers, the glob module applies only three of them.
Approach: We can use the
*001_Jan*“.
*character to choose all files starting with “
from glob import glob result = glob('*001_Jan*') print(result)
Output:
['001_Jan_Backup_01.txt', '001_Jan_Backup_02.txt', '001_Jan_Backup_03.txt']
Method 4: Simplify The Process With The Pathlib Module
Python 3.6+ presents you with the
pathlib module to simplify file navigations and searches. It comes with auto-slash mapping, enabling you to work across Unix and Windows effortlessly. It also inherits a chunk of Unix shell commands such as
touch,
join,
unlink, and
rmdir.
Approach: You can use Path to locate the directory and then search the files starting with a given string by iterating across the files in the directory.
Example:
# Import the library from pathlib import Path # Tell Python the beginning of the file iteration parent_path = Path('.') # iterate the files, storing the match in the result variable. result = [file.name for file in parent_path.iterdir() if file.name.startswith('001_Jan')] print(result)
Output:
['001_Jan_Backup_01.txt', '001_Jan_Backup_02.txt', '001_Jan_Backup_03.txt']
Conclusion
You can easily choose a file starting with a given string in Python. As illustrated in this tutorial, all you do is choose amongst the
os,
re,
glob, and
pathlib modules. Please subscribe and stay tuned for more interesting articles in the future. Happy learning! | https://blog.finxter.com/choose-a-file-starting-with-a-given-string/ | CC-MAIN-2022-21 | refinedweb | 1,021 | 58.69 |
Files:
In this example we start
This slot ends the game. It must be called from outside CannonField, because this widget does not know when to end the game. This is an important design principle in component programming. We choose to make the component as flexible as possible to make it usable with different rules (for example, a multi-player version of this in which the first player to hit ten times wins could use the CannonField unchanged).
If the game has already been ended we return immediately. If a game is going on we stop the shot, set the game over flag, and repaint the entire widget.
def restartGame() if isShooting() @autoShootTimer.stop() end @gameEnded = false update() emit canShoot(true) end
This slot starts a new game. If a shot is in the air, we stop shooting. We then reset the gameEnded variable and repaint the widget.
moveShot() too emits the new canShoot(true) signal at the same time as either hit() or miss().
Modifications in CannonField::paintEvent():
def paintEvent(event) painter = Qt::Painter.new(self) if @gameEnded painter.setPen(Qt::black) painter.setFont(Qt::Font.new( "Courier", 48, Qt::Font::Bold)) painter.drawText(rect(), Qt::AlignCenter, tr("Game Over")) end
The paint event has been enhanced to display the text "Game Over" if the game is over, i.e., gameEnded is true. We don't bother to check the update rectangle here because speed is not critical when the game is over.
To draw the text we first set a black pen; the pen color is used when drawing text. Next we choose a 48 point bold font from the Courier family. Finally we draw the text centered in the widget's rectangle. Unfortunately, on some systems (especially X servers with Unicode fonts) it can take a while to load such a large font. Because Qt caches fonts, you will notice this only the first time the font is used.
paintCannon(painter) if isShooting() paintShot(painter) slot in this class instead.
Notice how easy it is to change the behavior of a program when you are working with self-contained components.
connect(@cannonField, SIGNAL('canShoot(bool)'), shoot, SLOT('setEnabled(bool)'))
We also use the CannonField's canShoot() signal to enable or disable the Shoot button appropriately. button. It is also called from the constructor. First it sets the number of shots to 15. Note that this is the only place in the program where we set the number of shots. Change it to whatever you like to change the game rules. Next we reset the number of hits, restart the game, and generate a new target.
This file has just been on a diet. MyWidget is gone, and the only thing left is the main() function, unchanged except for the name change.
The cannon can shoot at a target; a new target is automatically created when one has been hit.
Hits and shots left are displayed and the program keeps track of them. The game can end, and there's a button to start a new game.
Add a random wind factor and show it to the user.
Make some splatter effects when the shot hits the target.
Implement multiple targets. | https://techbase.kde.org/index.php?title=Development/Tutorials/Qt4_Ruby_Tutorial/Chapter_13&diff=73683&oldid=61251 | CC-MAIN-2015-11 | refinedweb | 533 | 75.1 |
Quick Tip: Windows Phone Vibration With VibrateController
Quick Tip: Windows Phone Vibration With VibrateController
Join the DZone community and get the full member experience.Join For Free
I am currently working on another Windows Phone Application, where I needed a way to show the user that they had successfully performed a task and therefore unlocked another feature inside the application.
First idea was adding some sound, but somehow I was not happy with that. So I thought, why not let the phone vibrate?
It’s actually not that hard. The Windows Phone 7 SDK allows me to control the vibrate function easily.
In the namespace Microsoft.Devices there is a class defined with the name ‘VibrateController’.
The only code I need for that function is:
When the user clicks the particular button, the phone vibrates.
A few thoughts to keep in mind:
1. the shortest vibration / pause is 0.1 seconds.
2. the longest is 5 seconds, but that is not recommended.
3. 3 vibrations in one sequence are enough.
4. 0.2 / 0.3 are good choices for an alert.
Here you can find more information about the VibrateController Class.
To be continued…
Published at DZone with permission of Andrea Haubner , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/Quick-Tip-Vibrate-Phone-With-WindowsPhone | CC-MAIN-2018-17 | refinedweb | 229 | 59.3 |
A little bit more insight into what I'm thinking of doing... since
some of you can't read minds to well :-P
I'd like to convert all of the assemblies to basically look like what
the assemblies/geronimo-jetty6-javaee5-gshell produces.
And then I'd like to start converting the other cli bits to gshell
command impls, like: deployer, client and shutdown.
And then (maybe around the same time or before the above), I'd like
to adapt the gshell of target jvm bits to load jars from the
repository, instead of using the lib/* bits.
A little background for those who haven't looked at assemblies/
geronimo-jetty6-javaee5-gshell and what it produces from a lib/*
perspective. Right now I've set up the assembly to produce:
geronimo-jetty6-javaee5-gshell-2.1-SNAPSHOT/lib
geronimo-jetty6-javaee5-gshell-2.1-SNAPSHOT/lib/boot
geronimo-jetty6-javaee5-gshell-2.1-SNAPSHOT/lib/endorsed
geronimo-jetty6-javaee5-gshell-2.1-SNAPSHOT/lib/gshell
Where the bits in lib/* and lib/endorsed/* are the same as they were
before. The bits in lib/boot/* and lib/gshell/* are specific to
gshell. And normally a gshell installation would have everything I
put into lib/gshell/* into lib/*, but I moved them to a sub dir for
now... since the bin/*.jar's load jars from the ../lib/* dirs.
The lib/boot/* stuff is the very minimal gshell bootstrap classes,
which setup up the other happiness... and let you do things like:
java -jar ./geronimo-jetty6-javaee5-gshell-2.1-SNAPSHOT/lib/boot/
gshell-bootstrap.jar
And that will give you a nice shell... or
java -jar ./geronimo-jetty6-javaee5-gshell-2.1-SNAPSHOT/lib/boot/
gshell-bootstrap.jar start-server
That will launch the G server process using all of the right -
Djava.ext.dirs and whatever properties that we currently have hacked
into platform scripts.
Anyways, so the idea is to move all of the bits which are current in
the lib/* into the repository, and then configure the gshell command
impl to load put the correct dependency artifacts onto the classpath
of the target jvm that is booted up. This will augment the existing
kernel bootstrap from repo stuff, putting evertying except what is
needed from gshell into the repository...
And really, what I'd like to eventually get to is having the
bootstrap from the repository... so that everything except for what
is now it lib/boot/* and lib/endorsed/* can live in the repository
like happy little communistic jars should be :-P
* * *
And then there are longer term things for GShell...
Remote administration (via, telnet, ssh, or custom ssl protocol...
last is most likely to actually happen soonish)
Process management, which is great for clusters, or staging ->
production management. A full suite of command-line tools which can
manage the configuration of a server... easily. So, for example,
lets say you've got a configuration that is working really well for
you... but you want to play with something new...
So you might:
./bin/gsh backup-configuration before-mucking
./bin/gsh start-server
And then go and change a whole bunch of stuff... and it doesn't
work... yikes... so rollback...
./bin/gsh backup-configuration hosed-server
./bin/gsh restore-configuration before-mucking
./bin/gsh start-server
And then maybe you want to play with the "hosed-server" configuration
again...
./bin/gsh start-server --configuration hosed-server
Of course, all of these could have been run from a single ./bin/gsh,
but just for clarity, you can run them one off too.
Maybe list or mange the configurations
./bin/gsh list-configurations
./bin/gsh remove-configuration some-unwanted-config
./bin/gsh copy-configuration default some-new-config
The sky is the limit really... for what kind of management we can do...
Lets say you wanted to do the above on a remote node?
./bin/gsh remote-shell someserver:9443
Connecting to someserver:9447...
Connected
username: system
password: **** (remember this is all jline, so we can mask
passwords like one woudl expect)
someserver:9447 > list-configurations
someserver:9447 > remove-configuration some-unwanted-config
someserver:9447 > copy-configuration default some-new-config
So, all of these operations would happen on the node named
"someserver" listening on 9443 (over ssl of course). Or how about
you want to reboot a server remotely?
someserver:9447 > restart-server now
Geronimo server shutting down...
....
Geronimo server shutdown.
Geronimo server starting...
...
Geronimo server started in ...
Since GShell manages the processes its really easy to perform a full
restart of a Server w/o needing magical platform scripting muck. And
it will just work the same on each platform too.
Once we have clustering, then we can do the same kinda thing for an
entire cluster of nodes...
someserver:9447 > restart-cluster now
Shutting down 2 nodes...
<node1> Geronimo server shutting down...
<node1>....
<node2> Geronimo server shutting down...
<node2>....
<node1>Geronimo server shutdown.
<node2>Geronimo server shutdown.
Starting up 2 nodes...
<node1>Geronimo server starting...
<node1>..
<node2>Geronimo server starting...
<node2>..
<node1>Geronimo server started in ...
<node2>Geronimo server started in ...
Started up 2 nodes.
And well, if you had some kinda script file which controlled say a
logical grouping of nodes you could easily invoke that script (ya
even on a remote system) and it will go and do it:
someserver:9447 > script -l groovy local:-
universe.groovy qa-universe
The local: bit of the uri siginals the local URL handler to be used,
which will cause the to be loaded from
the gsh instance where you are actually logged into (and ran the
remote-shell gshell command) and will pipe its contents securely to
the remote shell running on someserver:9447 and pass it to the script
command to execute.
The restart-universe.groovy might look something like this:
<snip>
import universe.Lookup
assert args.size == 1 : 'Missing universe name'
def universe = args[0]
// Look up a list of nodes (for now say they are basically
hostname:port)
def nodes = Lookup.lookup(universe)
log.info("Stopping universe ${universe}...")
nodes.each { host ->
shell.execute("remove-shell $host stop-server")
}
log.info("Universe ${universe} stopped")
log.info("Starting universe ${universe}...")
nodes.each { host ->
shell.execute("remove-shell $host start-server")
}
log.info("Universe ${universe} started")
</snip>
Its kinda crude script, but I think you get the general point...
* * *
Anyways... I see... well, *HUGE* potential for this stuff...
And really, a lot of what I just described above isn't that far into
fantasy, its all relatively easy to implement on top of GShell... as
it is now (or really as it was a year+ ago when I wrote it). Its
really a matter of do others see the same value... and do others see
the vision of using GShell as the core process launcher to allow
things like "restart-server", or a "stop-server; copy-configuration
default known-good; copy-configuration default testing; start-
server", or that uber-fancy remote-shell muck.
So, I'm gonna give y'all a few days to grok (or try to) what I've
just spit out... please ask questions or comment, as I like to know
I'm not just talking to myself here.
And then maybe later next week, we might vote or come to some other
consensus that this is the right direction for Geronimo, and well...
then I'll make it become reality.
Aighty, and now I'll shut up :-P
--jason
On Sep 8, 2007, at 11:53 AM, Jason Dillon wrote:
> Aighty, well... I've done some long awaited re-factoring, and while
> its still not _perfect_ its a whole lot better now IMO I think
> from a framework perspective that its probably mature enough to
> take on the task of being the server bootloader.
>
> I'm going to continue to refactor the guts of GShell over time, of
> course... but I think that what is there now is highly usable for a
> simple platform independent launcher, as well as for porting over
> the other cli bits we have.
>
> I've done a lot of work in the past week, incase you didn't see the
> storm of scm messages... pulled out pico, plopped in plexus, pulled
> out commons-logging and commons-lang, which are suck and boated (in
> that order). I've gotten the basic framework and supported classes
> to use GShell down to ~ 1mb (a wee bit under)... though when I
> started to add the layout.xml abstraction stuff, I had to pull in
> xstream which bloated her back up to ~1.4m. I may eventually fix
> that... or not, cause xstream is soooo very handy for xml -> object
> stuff.
>
> I've fallen in love with annotations... they are *ucking great.
> They work really well for handling the cli option and argument muck
> which most every command needs to do. And striping out the insano-
> sucking commons-cli really simplified command implementations
> dramatically IMO.
>
> Anyways... I've make a heck of a lot of progress on cleaning up the
> GShell framework... and more is to come I'm sure... But for now, I
> think its probably ready for use primetime as the Geronimo Server's
> bootloader.
>
> I think this provides a some significant value...
>
> 1) Platform scripts become consistent and relatively simple, easy
> to maintain
>
> 2) Everyone will now have a consist way of launching the server,
> even if you like a .sh, .bat, or java -jar, then end process that
> is launched will be the same for everyone.
>
> 3) Opens up the door for some really nice and fancy fancy
> management muck (like restarting the server from the web console,
> or cloning a server instance or backing up a server instance...)
>
> 4) Lays the ground work for future features, like cluster
> management, remote administration and scripting...
>
> * * *
>
> So, I think its time to decide... are we a go or no go for GShell
> as the core CLI for Geronimo thingys and even more important, are
> we go or no go for using GShell to boot up the server process?
>
> --jason
> | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200709.mbox/%3CF76EA227-5498-4612-8AE4-85826262B0F7@planet57.com%3E | CC-MAIN-2017-13 | refinedweb | 1,672 | 66.03 |
Native Support for JSON
JSON is now natively supported in Mule, meaning you can work with JSON documents and bind them automatically to Java objects. Further information is available in the [JSON Module] configuration reference.
JSON Transformers Added
JSON transformers have been added to make it easy to work with JSON encoded messages. We have used the excellent [Jackson Framework] which means Mule also supports JSON/Object bindings.
Examples
For example, using AJAX, you will usually receive JSON. From here, you can get a request for a javabean from the server side, and you can convert that automatically to JSON.
Another example, if you get a request from outside, such as a Webservice request, your REST type content could be JSON or XML, while internally the components would be javabeans.
In this case, the feature would automatically respond to a JSON request with a JSON response.
Using the JSON Module
JSON, short for JavaScript Object Notation, is a lightweight data interchange format. It is a text-based, human-readable format for representing simple data structures and associative arrays (called objects).
JSON Bindings
Mule support binding JSON data to objects and marshaling Java object to JSON using the [Jackson Framework]. Jackson uses annotations to describe how data is mapped to a Java object model. For example, lets say we have an JSON file that describes a person. When we receive that JSON data we want to convert it into a Person object. The JSON looks like this:
And we have an object
Person we want to create from the JSON data. We use annotations to describe how to perform the mapping. We use the
@JSONAutoDetect to say that field member names map directly to JSON field names:
The
EmailAddress object that is used in the email Addresses is just another JavaBean with the
JSONAutoDetect annotation.
At this point iBeans an figure out weather to perform a JSON transforms based on the parameters of the method being called. For example:
Now if we configure this component in a flow:
Here we could receive the contents of
people.json file above on the JMS queue. Mule would see that
Person.class is an annotated JSON object and that we had received JSON data from the JMS queue and perform the conversion.
Using the Transformer Explicitly
Often you may want to define a transformer explicitly in Mule this is done by importing the
JSON namespace:
Then simply configuring the transformer like any other transformer. When converting from JSON to an object from the transformer needs to define the returnClass. This is the class that the JSON payload will get transformed into:
When converting an object to JSON, you need to specify the expected source class to convert:
Annotating Objects
ckson uses annotations to describe how to marshal and unmarshal an object to and from JSON, this is similar in concept to JAXB. However, sometimes it may not be possible to annotate the object class you want to marshal (usually because you do not have access to its source code). Instead you can define mixins. A Mixin is an interface or abstract class (needed when doing constructor injection) that defines abstract methods with Jackson annotations. The method signatures must match the methods on the object being mashalled, at runtime the annotations will be 'mixed' with the object type. To configure Mixins, use the mixin-map element or configure them on the transformer directly.
Or on transformer directly: | https://docs.mulesoft.com/mule-user-guide/v/3.3/native-support-for-json | CC-MAIN-2018-05 | refinedweb | 572 | 52.9 |
iImageIO Struct Reference
[2D]
The iImageIO interface is used to save and load graphic files. More...
#include <igraphic/imageio.h>
Detailed Description
The iImageIO interface is used to save and load graphic files.
Main creators of instances implementing this interface:
- Image loader multiplexer plugin (crystalspace.graphic.image.io.multiplexer) and all image loader plugins.
Main ways to get pointers to this interface:
- csQueryRegistry<iImageIO> ()
Main users of this interface:
- Application.
- Loader.
Definition at line 75 of file imageio.h.
Member Typedef Documentation
Member Function Documentation
Propagate the image fileformats handled by this plugin.
Load an image from a buffer.
This routine will read from the buffer buf , try to recognize the type of image contained within, and return an csImageFile of the appropriate type. Returns a pointer to the iImage on success, or 0 on failure. The bits that fit the CS_IMGFMT_MASK mask are mandatory: the image always will be loaded in the appropiate format; the bits outside that mask (i.e. CS_IMGFMT_ALPHA) are optional: if the image does not contain alpha mask, the GetFormat() method of the image will return a value without that bit set.
Save an image using format MIME.
If omitted format selection is left to the plugin.
Save an image using a prefered format.
extraoptions allows to specify additional output options. Those options consist of a comma-separated list and can be either 'option' or 'option=value'. The available options vary from plugin to plugin, some common ones are:
compress=# - Set image compression, from 0..100. Higher values give smaller files, but at the expense of quality(e.g. JPEG) or speed(e.g. PNG).
progressive - Progressive/interlaced encoding.
Examples:
compress=50
progressive,compress=30
The documentation for this struct was generated from the following file:
Generated for Crystal Space 2.1 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/api/structiImageIO.html | CC-MAIN-2015-27 | refinedweb | 302 | 50.73 |
Building Kubernetes clusters — a lego game
There are many ways and tools that can be used to build a Kubernetes cluster.
Starting from building it from scratch, to tools like
kops,
kube-aws,
kubicorn, or using hosted clusters like GKE or EKS. That’s already a lot of options; which one should you choose? This post I will describe what works like a treat for us, at Wealth Wizards, so we have a repeatable, reliable and fast process for building a Kubernetes cluster. In less than 10min we can get a whole cluster stood up, have everything in place (like logging, security, namespaces configs etc.) and deploy services to it.
AWS is our cloud provider and when we started with Kubernetes,
kube-up was to tool to use for building clusters. Since things are always changing and we wanted a better way to build clusters, soon after joining Wealth Wizards I started exploring
kube-aws as a replacement to
kube-up at team’s suggestion. Initial testing was all good but we could not continue using it because we had to run some software on every node of the cluster that had to be installed through either an
.rpm or
.deb and
kube-aws was using CoreOS. So the next tool in line was
kops.
Kops is a very easy tool to use and get started with. If you just wanna build a cluster, once you’ve decided on a name and an S3 bucket to hold the cluster state, you only have to run
kops create cluster \
--zones us-west-2a \
--name myfirstcluster.example.com \
--state s3://prefix-example-com-state-store
For more details on
kops create see kops_create.md.
This command is great for one time run, but for a repeatable process, storing the whole cluster config in code is preferable.
kops create -f FILENAME [flags] command to the rescue!! Basically you can store the cluster config in a nice formatted YAML file, run
kops create followed by
kops update and job done! You can configure almost everything in the YAML file, but you should stick just to the options that you really need to configure and let kops manage everything else for you. One example is the Docker version that is installed on the nodes, unless you really need to manage this, don’t add it to your YAML file. See this page for a nice YAML example:
So far I have this:
- I’ll build my cluster using Kops and from a YAML file
- I’ll build the cluster in an existing VPC (there’s some requirements for having our own VPC. Kops can create a VPC along a Kube cluster as well, which is quite cool, but this is not an options for us)
- I already created automation for maintaining and managing AWS infrastructure using terraform.
At this point, the next step I need to do is to collect a bunch of ID’s generated by AWS after creating my base infrastructure and automatically populate the YAML file. Enter Ansible - the simplest way to automate apps and IT infrastructure. Don’t believe me? Just google ‘ansible’ and see for yourself. In this case, Ansible is used as a templating tool: I created a role that reads a bunch of variables populated by terraform outputs, applies a jinja2 template and crates a YAML file to be used by kops. Putting all of this together it looks similar to the bellow code snippets.
1) Terraform:
#!/bin/bash -ecd $dir
terraform apply -auto-approve
exit_code=$?source $(git rev-parse --show-toplevel)/{{ scripts_path }}/terraform_output.yml.shexit $exit_code
Terrform output just runs:
terraform output 2>/dev/null | grep “=” | sed ‘s/ =/:/g’ > terraform_output.yml . This creates a YAML file with required outputs (or variables)so it can be consumed by the Ansible role, similar to:
ami_name: ami-111111
kms_key_arn: arn:aws:kms:eu-west-1:111111111111111:key/11111111-410d-4e11-ad59-1111111111
route53_domain: kube.example.com
route53_zone_id: Z3HYBV888111
security_group_kube_master_id: sg-111111
subnet_kube_master_a_cidr_block: 10.0.6.0/24
subnet_kube_master_a_id: subnet-7ae15b32
subnet_public_kube_a_id: subnet-bbec56f3
subnet_public_kube_a_name: kube01-public_kube-a
subnet_public_kube_b_cidr_block: 10.0.1.0/24
subnet_public_kube_b_id: subnet-111111
vpc_cidr: 10.0.0.0/16
vpc_id: vpc-111111
vpc_name: kube01
vpc_parent_dns_domain: example.com
vpc_region: eu-west-1
2) Ansible:
Role snippet:
- name: Inclue terraform_output
include_vars:
file: '{{ terraform_output_file }}'- name: Create {{ cluster_file }}
template: src='cluster.yml.j2'
dest='{{ cluster_dir }}/{{ cluster_file }}'
mode=0640
Jinja2 template snippet:
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
name: {{ cluster_name }}
spec:
api:
loadBalancer:
type: {{ loadBalancer_type }}
authorization:
alwaysAllow: {}
channel: stable
cloudLabels:
Service: kubernetes
VPC: {{ vpc_name }}
cloudProvider: awsconfigBase: s3://{{ s3_kube_bucket_name }}/{{ cluster_name }}
dnsZone: {{ cluster_name }}
..................................................................
{% for item in node_groups %}---{% for item in node_groups %}
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
labels:
kops.k8s.io/cluster: "{{ cluster_name }}"
name: {{ item.name }}
spec:
additionalSecurityGroups:
- {{ security_group_kube_node_id }}
{% if item.node_labels is defined %}
nodeLabels:
{% for label in item.node_labels %}
{{ label.name }}: "{{ label.value }}"
{% endfor %}
..................................................................
Playbook:
---- name : build k8s cluster.yamlhosts : localhost
gather_facts: false
connection : localroles:
- kops_templatevars:
terraform_output_file: "terraform_output.yml"
cluster_dir: "/kops"
cluster_file: "{{ cluster_name }}.yaml" master_count: 1
kubernetesVersion: 1.9.6
loadBalancer_type: "Public" node_groups:
- name: default-node
node_labels:
- name: node_type
value: default
machineType: m4.large
maxSize: 1
minSize: 1
maxPrice: 0.111
Basically this playbook is the most used file in terms of cluster configs. It has entires for the things that I need to manage most of the time. All else is defined in the role defaults or the Jinja2 template.
3) Kops:
kops create -f kops.yaml --name=myfirstcluster.example.com --state=s3://kube-11111111kops create secret sshpublickey admin -i ~/.ssh/id_rsa.pub \
--name myfirstcluster.example.com --state s3://kube-11111111kops update cluster --name myfirstcluster.example.com --state s3://kube-11111111kops update cluster --name myfirstcluster.example.com --state s3://kube-11111111 --yes
And that’s it! Press enter after typing
kops update cluster --yes and the cluster will be created.
Now I have: terraform managing the base infrastructure, ansible creating a YAML file by applying a template, and kops building the cluster. All these align nicely, just like lego bricks.
I used these building blocks for our Kube clusters with the goal to have a consistent, repeatable and reliable process. As an added bonus, it gives excellent DR of the compute part of our infrastructure, it’s all maintained in code and makes future upgrades or cluster patches a breeze. | https://medium.com/ww-engineering/building-kubernetes-clusters-a-lego-game-bdd62754cad7 | CC-MAIN-2019-43 | refinedweb | 1,038 | 56.15 |
The collection is a powerful construct that allows a developer to logically group related elements and navigate through them. In this article, we'll explore some concrete implementations of collections that are part of the base .NET framework.
The entire series can be accessed here:
(If you enjoy this article, please vote for it at the Code Project by clicking here, then scroll to the bottom to vote!)
There are two basic name spaces that provide rich collection functionality "out-of-the-box" and are useful in a number of ways. There are
System.Collections for non-generic collections and
System.Collections.Generic. For more specialized collections, we'll also look at
System.Collections.ObjectModel. (Extra Credit: we won't cover it here, but after reading this article you may want to investigate
System.Collections.Specialized).
An Out of the Box Interview Answer
A very common interview question is to explain the difference between
ArrayList and
List. If you got that one correct, you probably mentioned something about boxing, or taking a value type and converting it to an object so it essentially becomes part of the heap instead of the local stack. This operation is expensive. Because
ArrayList is not generically typed, it must box and unbox value types. For this reason, any type of collection that deals with value types (and for that matter, structs) should focus on the
List<T> implementation. Just how expensive is the boxing operation? Try this little console program and see for yourself:
using System; using System.Collections; using System.Collections.Generic; namespace Arrays { internal class Program { private static void Main() { const int ITERATIONS = 9999999; DateTime startBuild = DateTime.UtcNow; ArrayList integers = new ArrayList(); for (int x = 0; x < ITERATIONS; x++) { integers.Add(x); } DateTime endBuild = DateTime.UtcNow; for (int x = 0; x < ITERATIONS; x++) { int y = (int) integers[x]; } DateTime endParse = DateTime.UtcNow; TimeSpan buildArray = endBuild - startBuild; TimeSpan parseArray = endParse - endBuild; startBuild = DateTime.UtcNow; List<int> integerList = new List<int>(); for (int x = 0; x < ITERATIONS; x++) { integerList.Add(x); } endBuild = DateTime.UtcNow; for (int x = 0; x < ITERATIONS; x++) { int y = integerList[x]; } endParse = DateTime.UtcNow; TimeSpan buildList = endBuild - startBuild; TimeSpan parseList = endParse - endBuild; double build = (double) buildArray.Ticks/(double) buildList.Ticks; double parse = (double) parseArray.Ticks/(double) parseList.Ticks; double total = (double) (buildArray.Ticks + parseArray.Ticks)/(double) (buildList.Ticks + parseList.Ticks); Console.WriteLine(string.Format("Build Array: {0} List: {1} {2}", buildArray, buildList, build)); Console.WriteLine(string.Format("Parse Array: {0} List: {1} {2}", parseArray, parseList, parse)); Console.WriteLine(string.Format("Total Array: {0} List: {1} {2}", buildArray + parseArray, buildList + parseList, total)); Console.ReadLine(); } } }
It basically spins through a list of integers, storing them in both an
ArrayList and a
List. On my machine, the
ArrayList takes over 7 times longer to load, and 1.2 times longer to retrieve values, than the strongly typed
List implementation. That is something important to keep in mind when considering collections.
I'm Just Not Your Type
The first collections we'll look at are not generically typed. That doesn't mean they aren't typed ... some in fact are designed for explicit types, but they don't support generics. We already covered the
ArrayList, which I believe is there for backwards compatibility to the versions that didn't support generics, as I cannot imagine a situation when I would use that over a
List.
These classes derive from
CollectionBase and
DictionaryBase which are abstract classes that implement
ICollection and, in the dictionary,
IDictionary.
BitArray
Use this class when manipulating bits. It exposes the bits as an array of
bool, so you can do something fun like:
... if (myArray[x]) { blah blah } ...
The underlying storage is done at a bit level for compact storage. What's nice is that you can initialize the collection with a byte array and perform bitwise operations (logical NOT, AND, OR, XOR) between two arrays (great for masks, etc).
Hashtable
The
Hashtable serves and important function. It makes large collections of objects easier to parse and search based on the implementation of the hashcode algorithm. One important decision to make is whether you will use a
Hashtable or a
Dictionary. What's the difference?
The dictionary maps a key to a value. Each key is unique. Different keys might have the same value, but if you are searching for a specific key, you will get exactly one entry. What's more important to note with the
Dictionary type is that it is defined with a generic type. Therefore, there is no boxing or unboxing and it will, in general, perform faster and better than a hash table when you are using value types.
The hash table requires that its object implement the hashcode algorithm (or that an algorithm is injected into the constructor). The idea is that objects will have a "mostly" unique key per the hashcode algorithm. However, hash tables will allow multiple objects to exist for the same hash code because the algorithm does not guarantee uniqueness. Hash tables are most often used when there is not a well-defined key to map to the value. The hash code function is used to resolve a "region" of objects, then that subset of objects can be further scanned to complete the algorithm. Using a hashcode when you have a well-defined key is also more expensive because it only stores objects, not generic types, so boxing and unboxing will occur if the targets are value types.
Queue: First in Line, First to Eat!
The
Queue is often compared to a physical line. In the lunch line, the first person in the line is also the first person to leave the line (usually). The queue functions this way. To put someone in line, you call the
Enqueue method. To get the person at the front of the line (the next one to "leave") you call the
Dequeue method.
For an idea of how the
Queue collection could be used, consider this practical example: syslog. Syslog is a standard way for network equipment to broadcast status. By default, syslog messages are sent to a host via the UDP protocol on port 514. UDP, unlike TCP/IP, is a disconnected protocol (it doesn't wait for nor require a response, and does not support routing of large packets that must be broken into chunks and reassembled). While you can configure what hardware sends for syslog, some equipment can be incredibly verbose and send out dozens of status updates every second.
Imagine writing a syslog server that retrieves these values from a listening UDP port. The thread listening to the port must be incredibly fast or it will block the port and miss important messages. In order to keep the listen port open, you could implement a synchronized queue. The listener would simply
Enqueue the incoming message, then go back and listen to the next message. A background thread (or even several threads running simultaneously) could then call
Dequeue to perform processing on those messages.
Most of the time you'll want to use the generically typed equivalent for the
Queue to avoid boxing and unboxing.
SortedList
The sorted list is a hybrid between the
List and the
Dictionary. The keys in the list are sorted, so after adding values to the list, you can enumerate the keys and get the value in the sort order of the key. This might be useful to enumerate countries based on the country or perhaps files based on a key that includes their directory, etc.
Just Put it on the Stack
The stack is a very popular pattern for collections. It is a last-in first-out (LIFO) collection, compared to the queue which is FIFI (first-in, first-out). Stacks are important for composite operations that require a history of state. Calculators work by pushing operands and operators on the stack, computing the values, then popping those values to integrate into the next operation. Stacks are also important in recursive functions — if you wanted to recurse without using a method call, you'd loop instead and place your values on the stack, then pop them off until the stack is empty.
Many years ago in the days of VB6 I helped build a complex web application that had many multi-page transactions. To enable the user to navigate these transactions, we used a custom stack. Each navigation involved pushing the parameters and page directives onto the stack, then the target pages would pop these values and use them. A multi-page transaction would only pop the final values when the transaction was complete. This allowed us to rollback transactions, as well as nest transactions (for example, if you were in the middle of transaction A, then navigated to "B" and hit cancel, you'd pop back into A instead of some generic menu).
Again, you will more often than not use the generically-typed version of the
Stack to get the job done.
Generics are Less Expensive
Many of the collections we discussed have generically-typed equivalents that eliminate the need for boxing and un-boxing. When it comes to value types, generically typed classes are almost always less expensive and provide better performance. In addition to generically typed versions of the collections we've already discussed,
System.Collections.Generic provides some unique collections only available as strongly-typed implementations.
Dictionary Lookup
By far one of the more commonly used collections, the dictionary has a strongly typed key that maps to a strongly typed value. This is the classic use for mapping one item to another, whether it's an image name to the bytes of the actual image or a security key to a security context object.
Ready, HashSet, Go!
The
HashSet class does what the name implies: manages sets. Sets are different than typical lists in a few ways. Sets are loose collections of objects: order is not important. Each object must be unique. If you do not require the classes you are collecting be in a particular order, hash sets exhibit very good performance benefits over indexed and ordered lists. The hash set also provides set operations such as union and intersection. According to Kim Hamilton's article introducing the Hash set, the preferred name for this would have been simply
set (you can see the article to learn why the hash part was added).
LinkedList
The linked list is a powerful linked list implementation that provides nodes that link both forward (to the next node) and backwards (to the previous node). The list maintains an internal count. Inserting, deleting, and counting nodes are all O(1) operations. An O(1) operation is an operation that takes the same amount of time regardless of the size of the data it is being performed against ... this means that the list performans just as well when adding or removing nodes as a small or a large list.
Type<T>
The remaining items in this namespace are counterparts to the collections and implementations of the interfaces we've discussed. The caveat is that they are all strongly typed which means better performance in almost all cases involving value types and often for reference types as well. This really leads us to the last collection to be discussed (remember, I left the specialized namespace for homework). This also takes us into a new namespace!
Just an Observation
The
System.Collections.ObjectModel namespace is for object-based operations that belong in reusable libraries. It relates to classes which have methods that return or consume collections. Perhaps the most often used collection here is the
ObservableCollection.
The
ObservableCollection provides a collection that implements
INotifyCollectionChanged, which is similar to
INotifiyPropertyChanged but at the collection level. In short, whenever an object is added to or removed from the collection, or items within the collection are refreshed, the collection will raise the
CollectionChanged event. This is important when there is a dependency on the collection that should be notified whenever the underlying collection changes.
Of course, the most common implementation of this is for databound user interface elements. Objects like lists and grids need to refresh when the underlying lists change. Technologies like Windows Presentation Foundation (WPF) and Silverlight rely on observable collections to optimize the UI and only refresh the elements when there is a requirement, such as the list changing. In fact, these frameworks automatically hook into the events when databound to refresh, so whenever you are dealing with lists that change, you should consider using the observable collection instead of one of the other collection types for binding.
Conclusion
That is a lot of information to cover but hopefully provided insights into the various types of collections and some uses for them. In the next and final installment, we'll consider custom collections and how to tap into
IEnumerable for more advanced functionality. | https://csharperimage.jeremylikness.com/2009/08/whats-in-your-collection-part-2-of-3.html | CC-MAIN-2018-30 | refinedweb | 2,136 | 55.03 |
Hi,
I want to export all my Request,response and test result in text file,so for that i created a Event in Project level and try to insert script and run the TC.I observed nothing is happened ,as per my script in my project location Log folder should create and inside that folder all different text file should be export.now i ran that Groovy script and i found 1 error but I am unable to fixed it,please help.I have attached SS detail for reference.
below is my script.
def prjpath = new com.eviware.soapui.support.GroovyUtils(context).projectPathdef logFolder = new File(prjpath +'/logs')If(!logFolder.exists()){ LogFolder.mkdirs() //create a log folder}def tcName=testRunner.testCase.getName()def tcFolder=new File(prjpath+"/logs/${tcName}")If(!tcFolder.exists()){ tcFolder.mkdirs()}def FOS = new FileOutputStream(prjpath+"/logs/${tcName}"+testStepResult.testStep.name+'.txt',true) //create text filedef pw = new PrintWriter (FOS)testStepResult.writeTo(pw)pw.closed()fos.closed()
Solved!
Go to Solution.
yes,variables are case-sensitive.now it is working fine.
View solution in original post
I think the issue you have is with the case of your if statement, you have used an upper case "I" when it should be a lower case "i" also I think you may have a case issue when you use the logFolder object, you've difined it starting with a lower case "L" but reference it with an upper case "L". | https://community.smartbear.com/t5/SoapUI-Pro/Events-Req-Res-logging-issue/m-p/175777/highlight/true | CC-MAIN-2019-47 | refinedweb | 241 | 58.58 |
lsearch
Linear search and append
DescriptionThe lsearch and lfind functions walk linearly through an array and compare each element with the one to be sought using a supplied comparison.
Example:
Example - Linear search and append
Problem
using lfind (from )
Workings
#include <stdio.h> #include <search.h> int compare(int *x, int *y) { return (*x - *y); } void main () { int array[5] = {44, 69, 3, 17, 23}; size_t elems = 5; int key = 69; int *result; result = (int *)lfind (&key, &array, &elems, sizeof (int), (int(*) (const void *, const void *)) compare); if (result) printf ("Key %d found in linear search\n", key); else printf ("Key %d not found in linear search\n", key); }
Solution
Output:
Key 69 found in linear search
Return ValuesThe lsearch and lfind functions return a pointer to the first element found. If no element was found, lsearch returns a pointer to the newly added element, whereas lfind returns NULL. Both functions return NULL if an error occurs.
HistoryThe lsearch and lfind functions appeared in 4.2BSD In FreeBSD 5.0 they reappeared conforming to conforming to IEEE Std 1003.1-2001 ('POSIX.1')
Last Modified: 18 Dec 11 @ 13:09 Page Rendered: 2022-03-14 16:09:52 | https://www.codecogs.com/library/computing/c/search.h/lsearch.php | CC-MAIN-2022-21 | refinedweb | 198 | 61.46 |
Do you have a question? Post it now! No Registration Necessary
Subject
- Posted on
car blinker circuit?
- 12-06-2005
December 6, 2005, 12:10 am
I need to put a turn indicator blinker system on my 6 Volt vehicle.
- Ken Taylor
December 6, 2005, 3:26 am
Permalink
December 6, 2005, 9:31 am
Permalink
- Ken Taylor
December 6, 2005, 8:50 pm
Permalink
December 6, 2005, 9:05 pm
Permalink
- Ken Taylor
December 6, 2005, 9:54 pm
Permalink
Re: car blinker circuit?
Well, I just tried to do some quick ASCII art and it looked crap, so
each to his own. :-)
If you have a 'normal' blinker switch, it'll send either the 6V or the
ground to the appropriate side of the blinker unit, which then powers
the lamps.
For your application, you could use the same switch and use the 6V or
gnd to power the 555 circuit and one of the MOSFET's, which would
complete the circuit for one set of lamps only. Look at using diodes to
power the 555 and only one of the MOSFET's at a time.
Damn, that made bugger all sense to me too - can anyone do a quick bit
of ASCII???
Cheers.
Ken
December 6, 2005, 11:01 pm
Permalink
- Jasen Betts
December 7, 2005, 5:07 am
Permalink
- Ken Taylor
December 7, 2005, 9:30 am
Permalink
- Jasen Betts
December 7, 2005, 7:20 pm
Permalink
- Jasen Betts
December 7, 2005, 5:04 am
Permalink
- Jasen Betts
December 6, 2005, 10:14 am
Permalink
Re: car blinker circuit?
the 555 may not be suited to the wiring layout.
most indicator flashers connect in series with the switch and only start
when the bulbs are connected. doing that with a 555 may not be so easy.
also possilby there's a cheaper solution, but for a one off
a 555 is plenty cheap enough.
what's the exact aplication (what loads, how is the switch organised)...
what about electro-thermal :)
Are relays no good either?
Bye.
Jasen
December 6, 2005, 12:49 pm
Permalink
Re: car blinker circuit?
I think I've got part of the solution: Use two 555's (or a 556) so that
each is energised by the blinker switch, for either left or right.
Means some unconventional wiring, but that doesn't matter in this case.
Vehicle has no key - ignition is by magneto. So having the blinker
arranged this way saves introducing a switch, which I'd be likely to
forget to turn off.
Cheers
Jordan
- Jasen Betts
December 7, 2005, 6:36 am
Permalink
Re: car blinker circuit?
you can't power up half a 556. making it work with a single chip (and
single mosfet etc) would make it more compact
a 555 might be the wrong solution.
an oscilator based around op-amps could sense the dark resistance and vary
the blink rate OTOH if he's using LED globes (or panels) they're not going
to fail suddenly. because each panel has multiple LEDs in parallel. so
variable blink rate isn't needed
hmmm, this one's compatible with standard wiring
and will only run when the lamps are switched on
+-------------+---------------------------+-->from
| | /| | indicator
| +---------(----[100K]-o< |--------+ | fuse
| | | \| | |
| | . . . .|. . . . | |
| | . VCC(8) . | |
| | . . | |
+---(---RES(4) OUT(3)-------------||--)---+
| . 555 . || |
+---TH(6) DIS(7)-- || |
| . . ||--+--->to switch
+---TR(2) CV(5)-- p-channel and lamps
| . . mosfet
--- . GND(1) .
10u~T~ . . . .|. . . .
| |
+---------+
|
----
////
the inverter could be made from the second half of a 556 etc...
the only problem is it starts off dark and iights up a short
time after the switch is turned on
this modification will have it light up immediately when it
turned on and then start blinking.
------+----
|
/|----(----------+
---------[100K]-o<1| | |
\|----(-------+ |
. . . . . | | |
. | | |
. | | |
OUT(3)------||----+ | |
. ||pch o-|--+----- left signal
. || |
. ||---o------ |
|
indicator o-+-------- right signal
switch
the Nor gate could be done with a couple of diodes, a pull-down
resistor and half a 556.
Bye.
Jasen
December 7, 2005, 11:05 am
Permalink
Re: car blinker circuit?
Thanks Jasen, that's all interesting but I'm scratching my head a bit.
Can you please explain the use of the inverter? Does the circuit "sleep"
until the blinker switch is activated?
Also, how is a 555 (or half a 556) configured to act as an inverter?
I'm slightly more conversant with 555's than with op-amps etc, which is
the reason I'm using those.
What suggestion is there for a mosfet? I'm using what's at hand here -
some MTP3055V's, but these are I think overkill and only show 2.4V
instead of the 6V supply voltage. Would a smaller mosfet be likely to
saturate and supply the full voltage?
Jordan
- Jasen Betts
December 7, 2005, 8:03 pm
Permalink
Re: car blinker circuit?
yeah that was the idea. it's to get it to sleep
connect reset to Vcc, threshold and trigger are connected together
and used as the input, out is a totem-pole output
and discharge an open collector output.
you get a sort of schmitt inverter.
I don't know much about mosfets. but for a negative earth system (which
everyone has assumed without confirming) and switching the live current
(which is normal in automotive applications) P-Channel mosfets are best
suited. and the range is more limited.
if the MTP3055 has a P-channel brother that would be the one to use
the MTP3055 seems to be trading on the well known 2N3055 bipolar NPN
transistors number, and so on a whim entered the 3055's opposite number
"2955" into google
and even that doesn't look promising. I think you're going to need to use a
bipolar power transistor. maybe the afforementioned 2955 now available in
plastic... TIP2955, that's going to want a siveable base drive so maybe a
bd135 to do that (and also function as an inverter)
/
||/
|/~
+-[22R]-|
| |\
| | \
/ \
| / |
|/ | o-+-----
pin3 ----------+---| +----o---- |
| |\ 0-)--+--
+->|--+ | \| | |
| ~\ +--|<-+ |
| | | |
pin2-[100K]-+-------------(----[1K]--------+--|<----+
|
-----
/////
gotta go - more later.
--
Bye.
Jasen
Bye.
Jasen
December 9, 2005, 11:43 am
Permalink
Re: car blinker circuit?
Thanks for the nice ascii work.
I made something which is running well on the bench.
I'm using diodes to select which of two transistors will be powered up,
along with the 555, from the blinker switch wired to supply power.
I'm bringing out 2 wires to provide a visual indicator that it's
flashing - an LED in series from pin 3. It's a bit of a spaghetti
junction, with 7 wires coming from my flasher unit:
1. input left
2. input right
3. indicator in (to dash LED)
4. indicator out
5. output left
6. output right
7. earth
- There's a small delay, but not as long as it takes to give a hand signal.
- No lamp failure feature, which would be nice but at least there are no
filaments.
- Wiring is not conventional. I'd like to try the suggestion using the
inverter, if that makes it more like "standard".
Thanks to all for your help
Jordan
- Jasen Betts
December 9, 2005, 9:47 pm
Permalink
Re: car blinker circuit?
that can be fixed by connecting the timing capacitor to the VCC pin instead
of to ground. a diode from ground to the timing pins might be a good idea too
in case this capacitor configuration is outside of
|
|
+-/\/\-----------(---------+
| | |
| +---------+ |
| | | |
| || | . . . .|. . . . |
+--||--+ . VCC(8) . |
| || | . . |
| +---RES(4) OUT(3)---+---
| . 555 .
+------+---TH(6) DIS(7)--
| . .
+---TR(2) CV(5)--
| . .
| . GND(1) .
| . . . .|. . . .
| |
| |
+---|<----+
|
|
initially-low astable
as you may have guessed I have the 555 outline in text file and just
import it and doodle the lines using the keyboard.
--
Bye.
Jasen
Bye.
Jasen
December 11, 2005, 1:55 pm
Permalink
Re: car blinker circuit?
Well, I tried reconnecting the capacitor as per your suggestion, but
without the diode (or resistor?), and it works a treat.
Now, she starts flashing "on" immediately upon switching.
The first flash is very slightly longer in duration - no worries.
Thanks heaps
Jordan
December 12, 2005, 4:17 am
Permalink
Site Timeline
- » panasonic TX-51GF85H
- — Next thread in » Electronics Down Under
- » Video iPod Car) | https://www.electrondepot.com/australian/car-blinker-circuit-49884-.htm | CC-MAIN-2022-05 | refinedweb | 1,340 | 72.56 |
how to get this set of code right (while, if, else)
import java.util.Scanner; // Make the Scanner class available.
public class test
{
public static void main(String[] args)
{
int number1 = (int)(100 * Math.random()) + 1;
int number2 = (int)(100 * Math.random()) + 1;
int answer;
System.out.println ("This is a addition maths program");
System.out.println (number1 + "+" + number2);
Scanner stdin = new Scanner( System.in ); // Create the Scanner.
System.out.println ("What's your answer?: ");
answer = stdin.nextInt(); //to react with the user input
//this is for the loop, as long as the input is incorrect
while (answer = None)
if (answer == (number1 + number2))
System.out.println ("You are correct");
System.out.println ("The sum is " + (number1 + number2));
else
System.out.println ("Your answer is incorrect, please try again");
}
}
hi, the above is the code i write. it's a simple math question. it should loop as long as the answer isn't correct.
can anyone point out the correct way?
thanks.
Hey
you need a loop around the question and answer !!! !!! :-) | https://www.java.net/forum/topic/general-programming-help/how-get-set-code-right-while-if-else-0 | CC-MAIN-2015-32 | refinedweb | 171 | 62.75 |
Install a DNS Server
Updated: May 9, 2008
Applies To: Windows Server 2008
You can use this procedure to install a Domain Name System (DNS) server role with Server Manager. Installing a DNS server involves adding the DNS Server role to an existing Windows Server 2008 server.
You can install the DNS Server role when you install the Active Directory Domain Services (AD DS) role. This is the preferred method for installing the DNS Server role if you want to integrate your DNS domain namespace with the Active Directory domain namespace. For information about installing DNS server on an Active Directory domain controller, see Configure a DNS Server for Use with Active Directory Domain Services. Use the following procedure if you are not installing DNS server on a domain controller.
Membership in Administrators, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at Local and Domain Default Groups ()..
- We recommend that you configure the computer to use a static IP address before you install the DNS Server role. If you configure the DNS server to use dynamic addresses that are assigned by Dynamic Host Configuration Protocol (DHCP), when the DHCP server assigns a new IP address to the DNS server, the DNS clients that are configured to use that DNS server's previous IP address will not be able to connect to the DNS server because the DNS server’s IP address has changed..
- After you install a DNS server, you can use a text editor to make changes to server boot and zone files for zones that are not integrated with AD DS, but we do not recommend that you edit these files directly. The DNS Manager snap-in in Microsoft Management Console and the DNS command-line tool dnscmd simplify maintenance of these files. We recommend that you use them whenever possible. After you begin using DNS Manager or dnscmd to manage Active Directory–integrated zones, each zone is saved or deleted according to its storage type, that is, whether the zone is stored in a file or in Active Directory. For all storage types, the zone data can be stored on other domain controllers or DNS servers. The zone data is not deleted from Active Directory role is reinstalled, unless you use the dnscmd /load command to recreate the zone.
- When they write DNS server boot and zone data to text files, Windows Server DNS servers use a file format that is compatible with the Berkeley Internet Name Domain (BIND) file format that is recognized by BIND 4 servers. | https://technet.microsoft.com/en-us/library/cc816723(v=WS.10).aspx | CC-MAIN-2015-18 | refinedweb | 433 | 54.36 |
F# Interop with Javascript in Fable: The Complete Guide
Fable, the F# to Javascript compiler, always had the motto “The compiler that emits JavaScript you can be proud of.” That is true, the generated javascript code is readable and idiomatic, sometimes — I can’t believe I am saying this about Javascript — It is even beautiful. However, Fable has another killer feature which will be the subject of this article: simple interop with the Javascript ecosystem.
Interop with Javascript means that you would write F# code that calls native javascript functions. In the more general sense, it lets you generate custom javascript code you control. Also it lets interact and use javascript code and libraries in your code.
In this article I will present you a plethora of examples on how and when to use Javascript interop. I will be covering a lot, so grab yourself a cup of coffee and lets get started.
Enviroment Setup
Skip this section if you already know how to setup a minimal Fable app.
We will use a local developement enviroment. To get started you will need the latest dotnet cli, Node and npm installed on your machine. Npm is packaged with Node so you don’t need a seperate download.
Lets a create a directory for our project:
mkdir FableInterop
cd FableInterop
If you are working with Fable for the first time you need to run this once:
dotnet new -i Fable.Template::*
It will download the latest fable template project from Nuget and cache it on your machine.
Now inside
FableInterop directory you can run
dotnet new fable which will generate a minimal fable app from the template. As for the editor I am using VS Code with Ionide for cross-platform F# developement.
After you have the project, you need to install it’s dependencies, this could take a couple of minutes (at least on my machine):
npm install
dotnet restore
When this finishes, you should have tooltips and auto-completion working inside VS Code.
Working with Fable is a bit tricky, you need 2 things to be working simultaneously for an “edit-save-recompile” flow, I usually use 2 shell tabs: One for running Fable compilation server using
dotnet fable start , Fable works behind a local server to cache compilation states when it recompiles a project after a change in a file, making subsequent compilations fast. Second tab is for running Webpack developement server using
npm run start that actually watches the changes in your project and sends them to the Fable server. Webpack is also responsible for serving your static content, bundling your code and refreshing the the browser when a recompilation is succesful.
Working with the code
Inside the
src/App.fs you can delete everything and leave this:
Keep
Fable.Core and
Fable.Core.JsInterop open as they provide the functions and attributes for the interop. The
Fable.Import.Broweser is opened only for using
console.log and it is generally recommended you fully qualify the the
Browser module if you want to use function from there, i.e.
Browser.console.log, here I am using it like the above for brevity.
For now we will only use
src/App.fs . To start developement mode, you can run the command:
dotnet fable npm-run start
This will start off two processes, one for Fable compilation server and one for Webpack developement server, these will work together when you make changes to your code for fast recompilations. Alternatively you can run each server seperately: run
dotnet fable start and on shell tab and in another tab, run
npm run start . Navigate to and open your browser console, you should see this:
From now on, you can just edit and save your
App.fs file and come back to the browser or console and see the changes.
The [<Emit>] Attribute
This is an attribute that you can attach to values, functions, members and other language constructs to override their default code generation. Next are some of the many use-cases of this attribute.
Defining the undefined
Javascript has a literal value known as
undefined. F# does not have such construct. We will use the
Emit attribute to generate that value in the following example:
Notice the parts:
[<Emit("undefined")>] ,
obj and
jsNative . The string “undefined” in the
[<Emit>] is called the “emit expression” , it is what gets injected in place of the
undefined value when the code is compiled.
obj is the type you give to the value. In this case an
obj is correct because
undefined could be any object in javascript. The right hand side of the assignment is ignored during compilation, due to the fact that there is an
[<Emit>] attribute. If this attribute is omitted,
jsNative will throw an error.
This way you can define custom values with their own types and use them regularly in your fsharp code. To inforce the concept, here is another example:
Whenever the compiler comes across the value
one , it will just inject the literal value within the
[<Emit>] . In this case it is
1. Another thing to notice is the fact that it had the type of
int , this in turn allows me to use the
+ operator because the type checker thinks it is an integer, which is the correct type in this case.
You have to be careful to always write the correct types because otherwise the code will just fail during runtime and cause unexpected behavior.
Generating literal code (like in the examples above) has it’s uses but is not very useful. The real fun starts when you parameterize the emit expression: using
[<Emit>] with functions, giving them macro-like behavior!
Parameterized [<Emit>] with functions
To extend our first example, I want to write a function that checks whether or not a value is
undefined , here is the example:
Now, take a closer look at the function
isUndefined . It has one paramter called
x of a generic type
'a . The function returns
bool . Then notice that
$0 in the emit expression , that is a placeholder of whatever value you pass to the function in the place of parameter
x . It has the number 0 because the parameters are 0-indexed and therefore the first parameter (in this case
x ) will have the index 0. Multiple parameters are also allowed (example):
Another useful example is when you want to check whether or not a value is not a number (NaN) using the native
isNaN function:
You might want to call a function without paramters and get a result, for example if you want a random number using the native
Math.random() :
Actually,
System.Random is supported by Fable so you can use that too. Here I am just showing you what you can do.
Type-safe Javascript functions with Option<’t>
The type
Option<'a> has special usage with Fable, namely that it is erased when compiled.
Some x becomes just
x and
None becomes
null . We will use this to give native functions type-safety. For example, the function
parseFloat has a type of
string -> float.
parseFloat might fail parsing the input string and return a NaN, I know NaN is a valid value for
float but for the sake of better semantics, we want to use the type:
string -> Option<float> and return
None when the return value is NaN. We can wrap the native function inside a typed one and use pattern matching:
The
[<Emit>] expression for
parseFloatfollows the logic:
parseFloat could return
NaN , Therefore I check if the result of parsing is
NaN and return null if that is the case and otherwise return the parsed value. Giving this function a return type of
float option makes it convenient to work with such functions, this way I ensure that my code has to account for failure of parsing and make sure to handle the case of
None too.
However, this approach is still not very robust because the parsing succeeds with input “5x” and returns 5 instead of failing like it should. This is more of a limitation of the native function itself and to correctly parse numbers, we will use a little javascript trick to do that. Putting the
+ operator before a string will parse a string to a number! Why that works I hear you say? I don’t know, your guess is as good as mine, good reasons I hope:
Notice that I used
+ twice in that emit expression and passed the parameter twice as well, which is not very efficient if my parameter was the result of an expensive operation. It should be wrapped inside a lambda (for proper scoping) and used only once:
You can still use the parse functions from the BCL such as
System.Int32.(Try)Parse,
System.Double.(Try)Parse etc. They are implemented to mimic the actual behavior from .NET as much as possible.
Writing a JQuery binding, it’s just glue.
We will use what we have learnt so far to write a jQuery binding. A binding is a collection of functions that call the functions of the original api. In this case the native library is JQuery.
First of all, add a reference to JQuery in your
public/index.html file with a script tag and include it before your
bundle.js file.
index.html should look something like:
Notice that when the page loads, jQuery’s
$ will be available globally to be called in the page. Another place to reference the dollar sign is directly from the
window object like this
window.$ or this
window['$'] .
We will use just the
[<Emit>] to write the binding. Lets assume we want to make this binding functional-ish. It goes as follows: define a jQuery instance type, this will be an empty type just to tell when a function returns a jQuery element or something else:
Making this type an interface will ensure the type does not generate any extra code, it is only there for the type checker. Now we define our
JQuery module, first you think about how you want to use the binding. I want to be able to use it in a functional style the same way I use
Seq,
Async or
List etc:
I want it to generate something like the following with chaining:
const div = window['$']("#main")
div.css("background-color", "red")
.click(function(ev) { console.log("Clicked")})
.addClass("fancy-class");
Notice I will use
JQuery.select as an alias of
$ . I will need a refence to that dollar sign but I can’t just use it like
[<Emit("$(...)")>] because the dollar sign is reserved for emit expressions. However, inside a browser, every globally available variable is just a property on the
window object. So I can get a reference to
$ from the
window like this:
Notice that when putting
$ between quotes it becomes an allowed emit expression.
The other methods are defined in a similar way like we have seen before:
And so on and so forth for the rest of the functions if you want to support all of JQuery. Notice that, in order to enable chaining, I am passing
el as the last parameter of type
IJQuery and returning
IJQuery (most JQuery functions return a JQuery object, see docs). This makes for a nice functional API although it is just my personal preference.
Instance-based method chaining for the JQuery binding
Writing a jQuery binding as a module is one just one way to enable chaining jquery methods. I bet you expected the usual way of chaining methods is to “dot through” the api:
This requires having the methods placed on the instance type rather than on a module. Earlier we used the interface ‘IJQuery’ but it was empty, this time we will fill that interface with abstract methods:
Observations:
- using
abstractmethods to only define their type signatures.
- abstract methods without an emit attribute are compiled using the name of the method.
- for custom-named functions such as
onClickI am using
[<Emit>]to fall back to the actual name, that is
click. The instance itself will be the first parameter that’s why I am using
$0.click($1)where
$0is the instance and
$1is the argument.
- the
csshas a paramter as a tuple to correctly compile to javascript. If I used parameters such as
css : string -> string -> IJQueryI would not be able to “dot through” the code and would have to use
csswith it’s paramters between parentheses.
- I kept using the
JQuery.selectto start the “chain”.
To be used like this:
Will produce:
If you don’t like giving everything a type, you can go quick-and-dirty with dynamic programming capabilities of Fable, although I personally discourage using this model because one of the main reasons for chosing F# to compile to javascript is the powerful type-system and if I wanted to write dynamic code I wouldn’t bother to use Fable in the first place. Anyways, each to his own, you might like this model so here it is:
Working with object literals
JQuery among almost every other javascript library, works with object literals. They are used as paramters most of the time and they are ubiquitous.
Using Fable. We want to be able to create and manipulate object literals in a type-safe way. There are multipe ways of doing that. For example, assume I have the imaginary function in javascript
addTime that is natively used like this:
As you can see, the object literal consists of three properties:
current has type
Date (in javascript).
amount has type
number and
unit is a
string. To represent these types in F# we would use
DateTime for
current ,
int for
amount and
string for
unit . We will use a type to represent the whole object literal like the following, lets call it
AddTimeProps :
This will output this:
A simple object literal becomes the output which is exactly what the external function
addTime is expecting. Notice that because
AddTimeProps does not have any constructors, I used the
createEmpty<T> function. This is a special Fable function that will create an empty object literal but with the given type paramter
T . In this case
T is
AddTimeProps . Also notice that we are not using
[<Emit>] with the properties. That is because they are abstract and Fable will use the name of the property provided. To use custom names, you can use the
[<Emit>] attribute but with a funny emit expression syntax like this:
Here I am replacing the property
amount with
specialAmount using
[<Emit>] then it produces the property with the custom name, it is using “optional parameter” syntax to determine whether it should use the setter or the getter for the property.
String Literal Types, only better
Now, you might be satisfied with this solution for type-safety but you can actually do better! Suppose you are reading the docs of
addTime and come across the information that the property
unit can only have string values “days”, “months” or “years”. To ensure that no one forgets these values or write them incorrectly we want the compiler to check the correctness our code. For this case we can use the
[<StringEnum>] attribute. This is similar to “string-typing” in typescript. You can define a discriminated union with cases that don’t have paramters and have them compiled to strings at compile-time. Here is an example:
We can use this to enhance the
AddTimeProp type with even more type-safety by changing the type of
unit from
string to
TimeUnit :
The case of the discriminated union is camel-cased when compiled. If you need a custom name for your union case such as “YEARS” instead of “year”, you can use the
[<CompiledName>] attribute applied to the case:
The output becomes:
Using
[<Pojo>] Attribute
Plain old javascript objects or POJOs is just another name for object literals. Fable provides a useful attribute
[<Pojo>] that is applicable to record types to make them compile to object literals, here is an example:
This does not change the fact that they are still immutable. However, you can still start with an empty object using the
createEmpty<T> function:
A note from Fable’s author, Alfonso Garcia-Caro:
Pojo records are only intended for type-safe interaction with javascript libraries that require a plain object (like React components). These records lack many features, like instance and static methods and have no reflection support.
Using a list of discriminated union as object literal
Yes, that is also possible! To use the previous example of
Person , this is how you would describe it as a discriminated union:
A
Person has these properties of
Name and
Age but because this is a sum type, you can either have
Name or
Age to be a person which does not make sense. In order for this to work, you actually need a list of
Person :
Not quite idiomatic in F# but it works well (and looks nice) when interacting with external libraries. Now that you have the list of Person, you can use the special function
keyValueList provided by Fable (in
Fable.Core.JsInterop) to turn that list into an object literal:
These type of object literals should be used you have many optional properties of an object and you want a couple of them and you ignore the rest. This works pretty well for example for React style objects or for JQuery’s ajax options.
You can all also use ad-hoc properties using
unbox or using the new dynamic operator
!!:
It it worth noting that Fable will try to convert the list to an object literal during compile-time if the value is constant and in run-time if the value is yet to be determined during run-time.
Creating object literals inline
Again, if you feel lazy and you don’t want to give everything a type, you can another Fable function called
createObj from
Fable.Core . This function creates an object literal like this:
createObj is nice becuase it accepts a list of key-value pairs just like what would expect from an object literal. You can easily nest objects too:
Interacting with existing Javascript code
All our interaction with javascript so far was through generating some custom code that will be injected during compilation using the different attributes and functions that Fable provides. Now it is time to interact with existing javascript code and actually call it from F#. To learn this, we will write some javascript by hand. First create a file called
custom.js like this:
This file will contain two functions that we will call from F#.
parseJson and
getValue organized using modules to be compatible with webpack:
parseJson will try to parse a string to an object literal and will return
null if that fails.
getValue will try to get a value from an object literal using it’s string index and will return
null if such property does not exist (i.e. the property is
undefined).
Because both functions will return either a result or null, that qualifies them to return
Option<'a> . Also notice that this file is in the same directory
src as the the
App.fs file. When importing custom code, you use relative paths.
To import these functions, we will use the
import function from inside
App.fs like this:
The first argument for
import is what value you want to import, in this case it is the function
parseJson and the second argument is the where you want to import that value from, in this case from the file called
custom.js in the same directory.
Now these functions are available to be used:
With the output:
Another way to import both functions or any number of functions from a javascript module is using the
importAll function, but first you have to put declarations into one type and import all the functions from the javascript module as that type:
A javascript module can export a single value:
I created a file
default.js inside
src directory with the code above. You can import it using
importDefault :
Interacting with Javascript from npm packages
Modern javascript libraries are distributed through npm, the node package manager, as modules. Less often are there “built and ready” libraries that you just add to your page with a script tag. In our Fable apps we definitely want to interact such libraries. For the following example I want to use a silly library called left-pad. I call it silly because this “library” is a single function used by millions, instead of … you know, just writing the function yourself whenever you need it.
Anyways, I will stop ranting now :), here we go. It the same as with our
custom.js file but instead of pointing to the path of library, you just point to the name of it. First you want to install the library using
npm by running
npm install --save left-pad. This package is then added as a dependency to your
package.json file, you should see the entry
"left-pad": "^1.1.3" in your
dependencies . It is insalled in your
node-modules directory and with the magic of Webpack, you can import it from anywhere in you F# code like this:
Working with overloads
If you look at the docs of left-pad, the function
leftPad should have another overload with only two parameters. Because you can’t overload normal F# functions (like the one above) you can write the same function with a different (but meaningful) name with two parameters:
If you want to overload the method with the same name, you can use static methods on a class.
What about when you have a function with one paramter but that parameter can be a
string,
int or
DateTime? Then you use “erased unions”. These special union types created just for overloading paramter types. To define a function with a such parameter:
If you had more types, you would use
U4<>,
U5<> etc. To use such functions you write:
Notice the funny
!^ operator specially made to work with erased types and to check whether or not the provided type is actually compatible with parameter type.
Curried and Uncurried Functions
Functions in F# are curried by default, that is when using multiple arguments, the function becomes a single parameter function that returns other functions, here is an example:
This function is equivalent to this curried function:
In the early days of Fable, it used to compile the curried function as is with closures:
But recently as of Fable 1.0 beta, the compilation is optimized and the function is uncurried:
What if you wanted to explicitly return a function like this:
then you would have to use
System.Func in the signature as the return type:
Conclusion
There are many ways to interact with javascript in Fable. This allows you to leverage all of the javascript ecosystem and the numerous libraries published on npm. I hope you learnt a lot from this article, don’t forget to hit that heart icon below and to share! | https://medium.com/@zaid.naom/f-interop-with-javascript-in-fable-the-complete-guide-ccc5b896a59f | CC-MAIN-2018-26 | refinedweb | 3,845 | 59.33 |
What to focus on when learning ASP.NET Core?
Jon Hilton
Oct 18 '18
Updated on Nov 03, 2018
・1 min read
"I?
- If API, which SPA? Angular, React, Vue or something else
- Typescript or Javascript
- Visual Studio or Visual Studio Code?
No wonder you can't decide where to start, but start you must, so what to do?
Well, you could turn to others to decide for you.
There's just one problem...
... when people aren't invested in your decision, they (naturally) tend to put forward opinions based on their own experience and preferences, when what you really need is something more specific.
You need a way to decide for yourself.
Here's the good news. This moment of indecision is the worst part of the process.
This is the moment where your uncertainty about the enormity of the task ahead weighs heaviest.
An old boss of mine used to refer to this as "the blank piece of paper" moment.
Once you get your foot in the door, everything else will click into place and you'll be able to focus on your features, not choosing what to learn.
Plus, you'll have momentum on your side, which is a much nicer place to be!
Here's how to get from "choices, so many choices" to "ah, building that feature was fun" (in the fewest steps possible)
- Come up with an idea for a small application
- Pick one "stack" to start with
- Build the first feature
- Rinse and repeat
First, come up with an idea for a small application
The best way to learn any of this stuff is to build something, watch it go up in flames, then work out where you put the typo!
photo credit: Citoy'Art Tempus fugit via photopin (license)*
Small applications are perfect for this, and because you don't have to share it with anyone else (unless you want to) you're free to build anything you like!
If you're stuck for inspiration, here are a few ideas.
Hobbies/Interests
If you have something you're interested in (a sport, hobby, the Marvel universe etc.) it might pay to build something around it.
You'll have the benefit of actually being interested in the subject and almost certainly able to think up lots of ideas for features.
Seek inspiration
If you are still drawing a blank, have a quick hunt around on Google for programming side projects and you'll find plenty of ideas.
codementor.io has a list of 40 ideas.
They don't all make sense for a web application but many do. I quite like the idea of #34. a Lunch picker (decide what you should have for lunch so you don't have to!).
I also like this list by Dave Ceddia (specifically for React but would work for other frameworks too).
Then pick one "stack" to start with
On first glance this might seem a variation on "throwing the dice in the air to see where they land" but consider this...
It looks like there are many paths stretching out in front of you but right now, there are just two.
Server side application or API + Client.
Everything else hangs off this choice.
Server side application
Here's a server-side application (ASP.NET MVC or Razor Pages).
A request comes in from the browser.
ASP.NET handles that request (via an MVC Controller or Razor Page), executes some business logic (maybe interacts with a database) then returns a View/Page (which is compiled, on the server, to html).
API + Client
Now compare it to this example.
Here, the "back-end" is exposed as an API, with a separate front-end client written in something like Angular/React etc.
This time, the user interface has moved up to the client (running in the browser).
On the server (the ASP.NET part) we still have controllers, but they return data (typically as JSON) rather than compiled html (via Razor Views/Pages).
They're the same!
Well, the bit below the blue dotted line is anyway.
And this is great for you because it means the part where you interact with the database and perform logic on data is going to stay the same whether you use MVC, Razor Pages or decide to stand up an API.
In fact, in ASP.NET Core, MVC/Razor Pages and Web API are all effectively merged and share the same underlying framework.
So whatever you pick to start with, you'll learn the fundamentals of how ASP.NET Core works (including things like Dependency Injection, Startup configuration, data access via an ORM like Entity Framework).
Which one to start with?
Now you're still going to want to figure out whether to try server-side or API + client first.
These criteria can help you decide.
1. Jobs availability
If you're looking to eventually get a job building ASP.NET applications it might pay to do a little research into the jobs available to you.
One way is to hit one of the online jobs sites (Indeed, Monster etc.) and see what .net jobs are listed in your local area.
This will, at the very least, give you a sense of what direction the wind's blowing.
2. Previous experience
Perhaps you already have some html and javascript experience.
In which case you may prefer to stick to what you know and plump for an API with javascript/html client.
Of maybe you've used server-side frameworks before and would like to start there.
Bear in mind, whether you choose server-side or client-app you'll still be writing a lot of code which looks suspiciously like html, whether it's in a Razor page/view or as JSX in a React component.
Similarly, Typescript and Javascript are ever closer in syntax and features so much of what you learn will transfer across.
3. The simplest option
If you have no compelling reasons to choose one or the other, you'll probably find there are less "moving parts" if you start with ASP.NET MVC/Razor Pages.
And finally, if you still can't decide, starting is more important than choosing the "right" option, so roll a dice and move along!
Which javascript framework?
If you opt for the API + Client stack, you're left with this inevitible conundrum.
The biggest danger, is that you'll end up trying to learn too much at once (Web API, ASP.NET, Data Access AND
<insert-javascript-framework-here>).
So how can you avoid overloading yourself?
1. Use a ready-made client to test your API
Rather than build a front-end, you can always use something like Insomnia to test your API as you build it.
This way you can easily initiate requests to your API without getting into the weeds of a javascript framework.
2. Start on "easy mode" with the project templates
If you do decide to tackle a JS framework at the same time, the ASP.NET template projects are a good option.
Either from Visual Studio (via file > new project) or the CLI...
dotnet new angular
or
dotnet new react
This will get you up and running with a minimal project for reference.
Now, build your feature
I know, easier said than done right?
This is where it really pays to choose the simplest possible feature, and the simplest possible approach to building it.
This is where all the learning happens, through trial and error, working out what you don't know and how to find answers.
There's no real substitute for this part of the process which is why starting with simple features is key.
If you try and build an entire "Twitter Clone" you'll find it difficult to get those small wins which propel you on to the next challenge.
Better then, to try and build a tiny part of Twitter.
Maybe, a simple page which shows one hard-coded tweet, with little more than a heading and the text to start with, then build from there.
Rinse and repeat
Now you have one tiny feature under your belt, you can tackle the next feature or two.
Pro tip: You can make life much easier for yourself if you build your MVC/API project in a way which enables re-use of your business logic.
The logic/data access code you've written for your first few features, can easily become the basis for learning "the other stack".
See how to refactor logic out of your controllers with this super quick tip
For example, if you chose to try server-side first, you can expose the same business logic via an API and start building a simple front-end client to see how it compares.
Take this controller action. It returns "order data" (which you could display via a front-end application e.g. React).
public class OrderController : Controller { // rest of code omitted public IActionResult ById(string id){ var order = _orders.Find(id); if(order == null){ return NotFound(); } return Ok(order); } }
Whilst this returns an MVC View (with the same order data);
public class OrderController : Controller { // rest of code omitted public IActionResult ById(string id){ var order =_orders.Find(id); return View(order); } }
The important, nay crucial thing is that these controller actions are as minimal as possible and focused purely on exposing data or rendering views.
If you push your important business logic into another class/service/handler (as with
_orders in this example), it becomes trivial to change how you call that logic.
In other words, try to avoid this kind of thing!
public class OrderController : Controller { // rest of code omitted public IActionResult ById(string id){ var order = _dataContext.Orders.FirstOrDefault(x=>x.Id == id); if(order == null) { order = new Order(); } return View(order); } }
Here we've boxed ourselves into a corner because the business logic is in the controller action itself, making it harder to now expose the same data in a different way.
Go, go, go
You want to learn ASP.NET, the world needs you to learn ASP.NET! So take some pressure off yourself.
Don't worry about learning the wrong thing.
You can learn by doing (and the more you do, the more ready you'll be to pick up any language/framework/architecture the world chooses to throw at you).
Your plan for learning ASP.NET
- Pick an interesting and "buildable" side project
- Choose one of the two main architectures ("stacks")
- Identify one tiny feature (simple, few moving parts)
- Figure out how to build that feature (simplest possible implementation)
- Make sure to keep your controller actions (and/or Razor Page code) minimal
- Repeat steps 3-5
- Switch architecture (stack) when you feel like you want to try something different
Try it out and let me know how you get on :-)
Just before you go, see how to refactor logic out of your controllers with this super quick tip
(open source and free forever ❤️)
Takes Notes on Everything
With so much to keep learning as a junior dev, I've remembered not to trust my brain to hold onto all the new info.
Episode 010 - Async all the things - ASP.NET Core: From 0 to overkill
João Antunes - Dec 18 '18
If async & awaits is used with Task in ASP.NET (C#), is there a need to manually create threads?
DarkNada - Dec 19 '18
How to create a Nuget package for Blazor assembly with Azure DevOps
remi bourgarel - Dec 14 '18
Fantastic article Jon! Love the writing style!! I started writing a couple of weeks ago (for the first time!) with a topic related to thing you are pointing the whole time. And it is related to .NET Core and a Web API that a reader should build through the series. Cool stuff is that I did not touch the FE part of the story so that is totally up to the reader. So, if anyone is interested after reading this awesome piece by Jon head over to (now on medium but will repost here :-)) the tutorial in which I guide you in building a web scraper: medium.com/@vekzdran/practical-net...
Cheers,
V.
Nice article. Thank you. | https://dev.to/jonhilt/what-to-focus-on-when-learning-aspnet-core-52cm | CC-MAIN-2019-04 | refinedweb | 2,033 | 71.14 |
Purpose: interface to SQLite3 database manipulation. More...
#include <sqlite3.h>
Go to the source code of this file.
Purpose: interface to SQLite3 database manipulation..
Utility macro for finalizing a statement; assumes existence of an integer err variable.
Utility macro for executing and resetting a statement; assumes existence of an integer err variable.
Utility function for binding a random_value to a parameter as TEXT.
Note optimization for storing values without randomness.
References random::base, random::dice, random::m_bonus, random::sides, and strnfmt().
Call stats_close_db to close the database connection and free module variables.
References ANGBAND_DIR_STATS, db, db_filename, and mem_free().
Evaluate a sqlite3 SQL statement on the previously opened database.
The argument sql_str should contain the SQL statement, encoded as UTF-8. The return value is zero on success or a sqlite3 error code on failure. This function is a wrapper around sqlite3_exec and is for statements that don't expect output. Use stats_db_stmt_prep(), sqlite3_bind_*(), and sqlite3_step() for more complex statements.
Call stats_db_open first to create the database file and set up a database connection. Returns true on success, false on failure.
References ANGBAND_DIR_STATS, db, db_filename, file_exists(), mem_alloc(), NULL, path_build(), PATH_SEP, size, stats_make_output_dir(), and strnfmt().
Prepare a sqlite3 SQL statement for evaluation.
sql_stmt should be a non-NULL, unallocated pointer. The sqlite3 library will allocate memory appropriately. The caller should later delete this statement with sqlite3_finalize(). sql_str should contain the SQL statement, encoded as UTF-8. The function returns 0 on success or a sqlite3 error code on failure. | http://buildbot.rephial.org/builds/master/doc/db_8h.html | CC-MAIN-2018-09 | refinedweb | 247 | 52.97 |
E2E Testing with External Services
Generally it's better not to hit external services in end-to-end tests, even external services you own. Instead, create a fake version of the external services for your tests. We'll see how to do this below.
It seems like hitting external services would give you more confidence, and that using fake services isn't really testing your app. But hitting external services opens your tests up to flakiness due to network unreliability and outages in different systems--especially if the services aren't owned by you. Also, setting up test data against a real external service can make your tests much harder to write and maintain, making it less likely that you'll write and maintain them.
So how can you gain confidence that your app works against the real service? Here's what I'd recommend, from preferred first:
- You are almost certainly doing some manual testing your app. Let that manual testing be the test that the external service connectivity works.
- If you feel the need to automate testing of the external service connection, write just one or a few tests as part of a separate test suite. That way you can run it whenever you like, but it won't cause CI failures. Keep your main test suite using a fake external service.
Faking External Services
We don't need to rely on any special libraries to fake external connections; it's easy to write ourselves. As an additional benefit, this approach nudges our app to be less coupled to specifics of third-party libraries. Let's see how.
Say our app has an
api.js file that configures an instance of Axios, a popular HTTP client:
import axios from 'axios';
const api = axios.create({
baseURL: '
});
export default api;
This file is required throughout our app. For example, here's a component where we do a GET request to load widgets:
import React, {useState, useEffect} from 'react';
import {Text, View} from 'react-native';
import api from './api';
export default function WidgetContainer() {
const [widgets, setWidgets] = useState([]);
useEffect(() => {
api.get('/widgets').then(response => {
setWidgets(response.data);
});
}, []);
//...
}
How can we fake out this client? We just create another module that exposes the same interface to the rest of the app, but uses hard-coded in-memory data instead. Let's see how.
First let's create a fake, then wire it up. Make an
api folder and create a
fake.js in it. Add the following:
const api = {
get() {
return Promise.resolve();
},
};
export default api;
We create an object with the same interface as an Axios instance as we are using it: it has a
get() method that returns a
Promise.
Now let's add some fake data to it:
const api = {
get() {
- return Promise.resolve();
+ return Promise.resolve({
+ data: [
+ {id: 1, name: 'Widget 1'},
+ {id: 2, name: 'Widget 2'},
+ ],
+ });
},
};
export default api;
OK, now if we hook up this fake service it will return hard-coded data instead of hitting a web service. If your app makes
post() or
patch() requests you can add methods for those. If there are several different
get() requests sent throughout your app, you can check the passed-in URL to decide which hard-coded data to send back. You can even add statefulness, storing an array of records in
fake.js, appending to it when data is
post()ed, etc.
Next, how can we hook our fake up to our app? We need some way to use our real service during development and production, but our fake service during testing. Let's set up the plumbing for that first, then figure out how to set that flag.
Move
api.js into the api folder and rename it to
remote.js. Now in api create an
index.js in it. Metro Bundler handles index files the way many other bundlers do: the import path
./api will match either
./api.js or
./api/index.js. This means you don't even need to make changes to the import statements in the rest of your app; you can just expand the one
api.js file into a directory.
In
api/index.js, add the following:
import fake from './fake';
import remote from './remote';
const apiDriver = 'fake';
let api;
switch (apiDriver) {
case 'remote':
api = remote;
break;
case 'fake':
api = fake;
break;
}
export default api;
If the driver variable is set to "remote" we export the real Axios client; if it's set to "fake" we export the fake one.
Now, how can we switch without having to edit this file? A package called
react-native-config will help us set config values.
react-native-config
Install
react-native-config:
$ yarn add react-native-config
$ (cd ios; pod install)
Create an
.env file at the root of your project:
API_DRIVER=remote
And an
.env.detox file:
API_DRIVER=fake
Now update your
api/index.js file to read the config value:
+import env from 'react-native-config';
import fake from './fake';
import remote from './remote';
-const apiDriver = 'fake';
+const apiDriver = env.API_DRIVER;
let api;
switch (apiDriver) {
case 'remote':
api = remote;
break;
case 'fake':
api = fake;
break;
}
export default api;
When running your app, the
.env file will be used by default, which will load the real API client. We can update the detox command to tell it to load
.env.detox instead by adding
ENVFILE=.env.detox to the front of the detox
build config property in
.detoxrc.json:
"configurations": {
"ios": {
"type": "ios.simulator",
"binaryPath": "ios/build/Build/Products/Debug-iphonesimulator/RNTestingSandbox.app",
- "build": "xcodebuild -workspace ios/RNTestingSandbox.xcworkspace -scheme RNTestingSandbox -configuration Debug -sdk iphonesimulator -derivedDataPath ios/build",
+ "build": "ENVFILE=.env.detox xcodebuild -workspace ios/RNTestingSandbox.xcworkspace -scheme RNTestingSandbox -configuration Debug -sdk iphonesimulator -derivedDataPath ios/build",
"device": {
"type": "iPhone 11"
}
},
Note that, unlike JS files, you can't just reload the app when you change a
.env file; you need to rebuild the app:
$ detox build -c ios
$ detox test -c ios | https://reactnativetesting.io/e2e/external-services/ | CC-MAIN-2022-21 | refinedweb | 991 | 66.44 |
-
Configuration and calling
Operating-Modes
Embperl can operate in one of four modes:
- mod_perl
The mostly used way is to use Embperl together with mod_perl and Apache. This gives the best performance and the most possibilities.
- CGI/FastCGI. ().
- Offline
You can use Embperl also on the command line. This is usefull for generating static content out of dynamic pages and can sometime be helpfull for testing.
- Call it from other Perl programms
If you have your own application and want to use Embperl's capbilities you can do so by calling Embperl::Execute. This allows to build your own application logic and useing Embperl possibilites for rendering content.
mod_perl
Preloading pages.
CGI/FastCGI
To use this mode you must copy embpcgi.pl to your cgi-bin directory. You can invoke it with the URL.
The /url/of/your/document will be passed to Embperl by the web server. Normal processing (aliasing, etc.) takes place before the URI makes it to PATH_TRANSLATED.
If you are running the Apache httpd, you can also define embpcgi.pl as a handler for a specific file extension or directory.
Example of file which should be processed by Embperl.
- query_string
Optional. Has the same meaning as the environment variable QUERY_STRING when invoked as a CGI script. That is, QUERY_STRING contains everything following the first "?" in a URL. <query_string> should be URL-encoded. The default is no query string..
- -o options
See "EMBPERL_OPTIONS" for option values.
- -s syntax
Defines the syntax of the source. See See "EMBPERL_SYNTAX"
By calling Embperl::Execute (\%param).
Execute($filename, $p1, $p2, $pn) ;
This will cause Embperl to interpret the file with the name
$filename and, if specified, pass any additional parameters in the array
@param (just like
@_ in a Perl subroutine). The above example could also be written in the long form:
Execute ({inputfile => $filename, param => [$p1, $p2, $pn]}) ;
The possible items for hash of the long form are are descriped in the configuration section and parameter section.
EXAMPLES for Execute:
#') -]
Debugging
Starting with 2.0b2 Embperl files can debugged via the interactive.
Configuration
- Env:
EMBPERL_SESSION_HANDLER_CLASS
- Method:
$application -> config -> session_handler_class [read only]
- Default:
Apache::SessionX
- Since:
1.3b3
- See also::
Session Handling
Set the expiration date that Embperl uses for the cookie with the session id. You can specify the full date or relativ values. The following forms are all valid times:
+30s 30 seconds from now +10m ten minutes from now +1h one hour from now -1d yesterday (i.e. "ASAP!") now immediately +3M in three months +10y in ten years time Thu, 25-Apr-1999 00:40:33 GMT at the indicated time & date
Embperl_Cookie_Secure
- transfered over a secured connection.
Embperl_Log
- Env:
EMBPERL_LOG
- Method:
$application -> config -> log [read only]
- Default:
Unix: /tmp/embperl.log Windows: /embperl.log
Gives the location of the log file. This will contain information about what Embperl is doing. The amount of information depends on the debug settings (see "EMBPERL_DEBUG" below). The log output is intended to show what your embedded Perl code is doing and to help debug it.
Embperl:
- dbgStd = 1 (0x1)
Show minimum information.
- dbgMem = 2 (0x2)
Show memory and scalar value allocation.
- dbgEval = 4 (0x4)
Show arguments to and results of evals.
- dbgEnv = 16 (0x10)
List every request's environment variables.
- dbgForm = 32 (0x20)
List posted form data.
- dbgInput = 128 (0x80)
Show processing of HTML input tags.
- dbgFlushOutput = 256 (0x100)
Flush Embperl's output after every write. This should only be set to help debug Embperl crashes, as it drastically slows down Embperl's operation.
- dbgFlushLog = 512 (0x200)
Flush Embperl's logfile output after every write. This should only be set to help debug Embperl crashes, as it drastically slows down Embperl's operation.
- dbgLogLink = 8192 (0x2000)>
- dbgDefEval = 16384 (0x4000)
Shows every time new Perl code is compiled.
- dbgHeadersIn = 262144 (0x40000)
Log all HTTP headers which are sent from and to the browser.
- dbgShowCleanup = 524288 (0x80000)
Show every variable which is undef'd at the end of the request. For scalar variables, the value before undef'ing is logged.
- dbgSession = 2097152 (0x200000)
Enables logging of session transactions.
- dbgImport = 4194304 (0x400000)
Show how subroutines are imported in other namespaces.
- dbgOutput = 0x08000
Logs the process of converting the internal tree strcuture to plain text for output
- dbgDOM = 0x10000
Logs things related to processing the internal tree data structure of documents
- dbgRun = 0x20000
Logs things related to execution of a document
- dbgBuildToken = 0x800000
Logs things related to creating the token tables for source parsing
- dbgParse = 0x1000000
Logs the parseing of the source
- dbgObjectSearch = 0x2000000
Shows how Embperl::Objects seraches sourcefiles
- dbgCache = 0x4000000
Logs cache related things
- dbgCompile = 0x8000000
Gives information about compiling the parsed source to Perl code
- dbgXML = 0x10000000
Logs things related to XML processing
- dbgXSLT = 0x20000000
Logs things related to XSLT processing
- dbgCheckpoint = 0x40000000
Logs things related to checkpoints which are internaly used during execution. This information is only usefull:
choosen by Net::SMTP
- Since: adress_Object_App
Filename of the application object that Embperl::Object searches for. The file should contain the Perl code for the application object. There must be no package name given (as the package is set by Embperl::Object) inside the file, but the @ISA should point to Embperl::App. If set this file is searched throught the same search path as any content file. After a successfull load the init method is called with the Embperl request object as parameter. The init method can change the parameters inside the request object to influence the current request.
Embperl_Object_Addpath
Directory where Embperl::Object stops searching for the base page.
Embperl_Object_Fallback
If the requested file is not found by Embperl::Object,_Handler_Class
-:
2.0b6
If specified, only files which match the given perl regular expression will be processed by Embperl, all other files will be handled by the standard Apache handler. This can be useful if you have Embperl documents and non Embperl documents (e.g. gifs) residing in the same directory.
Example: # Only files which end with .htm will processed by Embperl EMBPERL_URIMATCH \.htm$
Embperl_Multfieldsep
- Env:
EMBPERL_MULTFIELDSEP
- Method:
$request -> config -> mult_field_sep [read only]
- Default:
\t
- Since:
2.0b6
Specifies the charachter that is used to separate multiple form values with the same name.
Embperl_Path_Output_Mode
- Env:
EMBPERL_OUTPUT_ESC_CHARSET
- Method:
$request -> config -> output_esc_charset [read only]
- Default:
ocharsetLatin1 = 1
- Since:
2.0.2
Set the charset which to assume when escaping. This can only be set before the request starts (e.g. httpd.conf or top of the page). Setting it inside the page has undefined results.
- ocharsetUtf8 = 0
UTF-8 or any non known charset. Characters with codes above 128 will not be escaped at all
- ocharsetLatin1 = 1
ISO-8859-1, the default. When a Perl string has it's utf-8 bit set, this mode will behave the same as mode 0, i.e. will not escape anything above 128.
- ocharsetLatin2 = 2
ISO-8859-2. When a Perl string has it's utf-8 bit set, this mode will behave the same as mode 0, i.e. will not escape anything above 128.
Embperl_Session_Mode
-
The session id for the user session will be passed via cookie
- smodeUDatParam = 2
The session id for the user session will append as parameter to any URL and inserted as a hidden field in any form.
- smodeUDatUrl = 4
The session id for the user session will passed as a part of the URL. NOT YET IMPLEMENTED!!
- smodeSDatParam = 0x20
- Env:
EMBPERL_USEENV
- Method:
$component -> config -> use_env [read only]
- Default:
off unless runing as CGI script
- Since:
2.0b6
Tells Embperl to scan the enviromemt for configuration settings.
use_redirect_env
- Method:
$component ->_Package
The name of the package where your code will be executed. By default, Embperl generates a unique package name for every file. This ensures that variables and functions from one file do not conflict with those of another file. (Any package's variables will still be accessible with explicit package names.).
-FormDataNoUtf8 = 0x2000000
Redirects STDOUT to the Embperl output stream before every request and resets it afterwards. If set, you can use a normal Perl print inside any Perl block to output data. Without this option you can only use output data by using the [+ ... +] block, or printing to the filehandle OUT.
- optNoHiddenEmptyValue = 65536 (only 1.2b2 and above)
Normally, if there is a value defined in %fdat for a specific input field, Embperl will output a hidden input element for it when you use hidden. When this option is set, Embperl will not output a hidden input element for this field when the value is a blank string.
- optKeepSpaces = 1048576 (only 1.2b5 and above) = 0x100000,
Disable the removal of spaces and empty lines from the output. This is useful for sources other than HTML.
Embperl_Escmode
Turn HTML and URL escaping on and off.
NOTE: If you want to output binary data, you must set the escmode to zero.
For convenience you can change the escmode inside a page by setting the variable
$escmode.
- escXML = 8 (or 15) (2.0b4 and above)
The result of a Perl expression is always XML-escaped (e.g., `>' becomes `>' and ' become ').
- escUrl + escHtml = 3 (or 7)
The result of a Perl expression is HTML-escaped (e.g., `>' becomes `>') in normal text and URL-escaped (e.g., `&' becomes `%26') within of
A,
EMBED,
IMG,
IFRAME,
FRAMEand
LAYERtags.
- escUrl = 2 (or 6)
The result of a Perl expression is always URL-escaped (e.g., `&' becomes `%26').
- escHtml = 1 (or 5)
The result of a Perl expression is always HTML-escaped (e.g., `>' becomes `>').
- escNode = 0
No escaping takes place.
- escEscape = 4
If you add this value to the above Embperl will always perform the escaping. Without it is possible to disable escaping by preceeding the item that normaly is escaped with a backslash. While this is a handy thing, it could be very dangerous in situations, where content that is inserted by some user is redisplayed, because they can enter arbitary HTML and preceed them with a backslash to avoid correct escaping when their input is redisplayed again.
NOTE: You can localize $escmode inside a [+ +] block, e.g. to turn escaping temporary off and output
$data write
[+ do { local $escmode = 0 ; $data } +]
Embperl_Input_Escmode
- Env:
EMBPERL_INPUT_ESCMODE
- Method:
$component -> config -> input_escmode [read only]
- Default:
0
- Since:
2.0b6
Tells Embperl how to handle escape sequences that are found in the source.
- 0
don't interpret input (default)
- 1
unescape html escapes to their characters (i.e. < becomes < ) inside of Perl code
- 2
unescape url escapes to their characters (i.e. %26; becomes & ) inside of Perl code
- 3
NOT YET IMPLEMENTED!
Embperl_Top_Include
literal string that is appended to the cache key
Embperl_Cache_Key_Options
- Env:
EMBPERL_CACHE_KEY_OPTIONS
- Method:
$component -> config -> cache_key_options [read only]
- Default:
all options set
- Since:
2.0b1
Tells Embperl how to create a key for caching of the output
- ckoptPathInfo = 2
include the PathInfo into CacheKey
- ckoptQueryInfo = 4
include the QueryInfo into CacheKey
- ckoptDontCachePost = 8
don't cache POST requests (not yet implemented)
Embperl_Expires_Func
(default; contains EmbperlHtml and are taken on the Text
Defines the <mail:send> tag, for sending mail. This is an example for a taglib, which could be a base for writing your own taglib to extent the number of available tags
- POD
translates pod files to XML, which can be converted to the desired output format by an XSLT transformation
- RTF }) ;
output
Gives the possibility to write the output into a scalar instead of sending it to stdout or a file. You should give a reference to a scalar. Example:
Execute({inputfile => 'mysource.epl', output => \$out}) ;
sub.
subreq!
import
A value of one tells Embperl to define the subrountines inside the file (if not already done) and to import them as perl subroutines into the current namespace.
See [$ sub $] metacommand and section about subroutines for more info.
A value of zero tells Embperl to simply precompile all code found in the page. (With 2.0 it is not necessary anymore to do it before using the
sub parameter on a different file).
firstline
Specifies the first linenumber of the sourcefile.
mtime
Last modification time of parameter input. If undef the code passed by input is always recompiled, else the code is only recompiled if mtime changes.
param.
fdat
Pass a hash reference to customly set %fdat. If
ffld is not given,
ffld will be set to
keys %fdat.
ffld
Pass a array reference to customly set @fdat. Does not affect
%fdat.
object
Takes a filename and returns an hashref that is blessed into the package of the given file. That's usefull, if you want to call the subs inside the given file, as methods. By using the
isa parameter (see below) you are able to provide an inherence tree. Additionaly you can use the returned hashref to store data for that object. Example:
[# the file eposubs.htm defines two subs: txt1 and txt2 #] [# first we create a new object #] [- $subs = Execute ({'object' => 'eposubs.htm'}) -] [# then we call methods inside the object #] txt1: [+ $subs -> txt1 +] <br> txt2: [+ $subs -> txt2 +] <br>
isa'}) !]
errors
Takes a reference to an array. Upon return, the array will contain a copy of all errormessages, if any.
xsltparam
Takes a reference to hash which contains key/value pair that are accessable inside the stylesheet via <xsl:param>.
Embperl's Objects.
thread
Returns a reference to a object which hold per threads informations. informations. informations passed to the error handler when an error is reported.
errdat2
Additional informations passed to the error handler when an error is reported.
lastwarn
Last warning message.
errobt 1997:
Expected text after =item, not a number
- Around line 2002:
Expected text after =item, not a number
- Around line 2007:
Expected text after =item, not a number
- Around line 3635:
Non-ASCII character seen before =encoding in 'hinzufügen'. Assuming ISO8859-1 | https://metacpan.org/pod/release/GRICHTER/Embperl-2.1.0/Config.pod | CC-MAIN-2019-43 | refinedweb | 2,276 | 57.16 |
C++ is a very popular programming language. Every C++ program has a general structure that it must follow.
Here is common structure of a C++ program.
Header Section
The header section is used for
- Including header files
- Declaring global variables
- Declaring functions
- Defining constants
You can divide C++ functions into two categories – Builtin functions and User-defined functions. Builtin functions are written in a header file that ends with a .h extension. In other words, builtin functions comes with the C++ compiler located in a directory called Include.
User-defined functions are written by programmers. You can write your own codes and store them in the source file for C++ that ends with extension (.cpp).
Syntax to include header file in a C++ program
#include "iostream.h" or #include <iostream.h>
When you use double quotes the compiler look for the header file in the directory where the C++ program located and then look for file in Include directory that contains all the headers. Otherwise, if we use < ….> , then compiler look for the file only under Include directory.
The header section is a global section because it is outside the main program. You can declare functions (user-defined) with global scope so that all other functions can call it anytime.
void add() ; int calculate();
You can declare global variables which is used by the main program components such as functions and expressions.
int number; char name;
You can specify a macro or a numeric constant value in the header section. The value of a numeric constant does not change during the execution of a program.
Syntax for declaring numeric constant
#define MAX 10 #define PI 3.14
The Main Function
The main function is where your program gets executed first. The main function has a default return type of integer in C++. It calls other functions while executing.
void main() { statement 1; statement 2; ... ... statement n; }
The two braces –
{ and
} indicate a block of statements and main() like every other function has a block of statements. The block is the entire C++ program and when it has finished execution, returns an integer value.
If you do not want main to return any value, use the keyword void main() to indicate this.
Other Functions
A user-defined function is a function that performs some task for the user in the program. It is different from built-in functions because user has the control to change the function anytime.
The programmer declares the function in the head section or in the main body of a C program. The declared function has definition that defines what the function does. The best practice is to keep the function definition after main() function. But, it is not necessary.
Putting it all together
The following is an example of C++ program structure.
#include <iostream.h> #define NUM 100 void main() { int x = 20; int sum = 0; sum = x + NUM; cout << sum << endl; }
Output
120 | https://notesformsc.org/c-plus-plus-program-structure/ | CC-MAIN-2022-33 | refinedweb | 488 | 65.62 |
Greedy algorithm greedily selects the best choice at each step and hopes that these choices will lead us to the optimal solution of the problem. Of course, the greedy algorithm doesn't always give us the optimal solution, but in many problems it does. For example, in the coin change problem of the Coin Change chapter, we saw that selecting the coin with the maximum value was not leading us to the optimal solution. But think of the case when the denomination of the coins are 1¢, 5¢, 10¢ and 20¢. In this case, if we select the coin with maximum value at each step, it will lead to the optimal solution of the problem.
Also as stated earlier, the fraction knapsack can also be solved using greedy strategy i.e., by taking the items with the highest $\frac{value}{weight}$ ratio first.
Thus, checking if the greedy algorithm will lead us to the optimal solution or not is our next task and it depends on the following two properties:
- Optimal substructure → If the optimal solutions of the sub-problems lead to the optimal solution of the problem, then the problem is said to exhibit the optimal substructure property.
- Greedy choice property → The optimal solution at each step is leading to the optimal solution globally, this property is called greedy choice property.
Implementation of the greedy algorithm is an easy task because we just have to choose the best option at each step and so is its analysis in comparison to other algorithms like divide and conquer but checking if making the greedy choice at each step will lead to the optimal solution or not might be tricky in some cases. For example, let's take the case of the coin change problem with the denomination of 1¢, 5¢, 10¢ and 20¢. As stated earlier, this is the special case where we can use the greedy algorithm instead of the dynamic programming to get the optimal solution, but how do we check this or know if it is true or not?
Let's suppose that we have to make the change of a number $n$ using these coins. We can write $n$ as $5x+y$, where x and y are whole numbers. It means that we can write any value as multiple of 5 + some remainder. For, example 4 can be written as $5*0+4$, 7 can be written as $5*1 + 2$, etc.
Now, the value of y will range from 0 to 4 (if it becomes greater than or equal to 5, then it will be covered in the $5x$ part) and we can check that any value between 0 to 4 can be made only by using all coins of value 1. So, we know that the optimal solution for the part $y$ will contain coins of value 1 only. Therefore, we will consider for the optimal solution of the $5x$ part. As the problem exhibits optimal substructure, so the optimal solution to both the subproblems will lead to the optimal solution to the problem.
Since $5x$ is a multiple of 5, so it can be made using the values 5, 10 and 20 (as all three are multiples of 5). Also in 5, 10 and 20, the higher value is multiple of the lower ones. For example, 20 is multiple of 5 and 10 both and 10 is multiple of 5. So, we can replace the multiple occurrences of the smaller coins with the coins having higher value and hence, can reduce the total number of coins. For example, if 5 is occurring more than once, it can be replaced by 10 and if 10 is occurring more than once it can be replaced by 20. In other words, we can choose the coins with higher value first to reduce the total number of coins.
So, we have just checked that we can apply the greedy algorithm in this case.
We are going to see more greedy algorithms in this course. So, you will become more comfortable with the greedy algorithm with the progress of this course.
Let's code the above coin change problem and get more familiar with the greedy algorithm.
Coin Change Problem with Greedy Algorithm. Now after taking one coin with value
coins[i], the total value which we have to make will become n-coins[i].
i = 0
while (n)
if coins[i] > n
i++
if coins [i] > n → We are starting from the 0th element (element with the largest value) and checking if we can use this coin or not. If the value of this coin is greater than the value to be made, then we are moving to the next coin -
i++.
If the value of the coin is not greater than the value to be made, then we can take this coin. So, we will take it. Let's just print the value right here to indicate we have taken it, otherwise, we can also append these value in an array and return it.
while (n)
...
else
print coins[i]
Now, the value to be made is reduced by coins[i] i.e.,
n-coins[i].
COIN-CHANGE-GREEDY(n) coins = [20, 10, 5, 1] i = 0 while (n) if coins[i] > n i++ else print coins[i] n = n-coins[i]
- C
- Python
- Java
#include <stdio.h> void coin_change_greedy(int n) { int coins[] = {20, 10, 5, 1}; int i=0; while(n) { if(coins[i] > n) { i++; } else { printf("%d\t",coins[i]); n = n-coins[i]; } } printf("\n"); } int main() { int i; for(i=1; i<=20; i++) { coin_change_greedy(i); } return 0; }
Analysis of the Algorithm
We can easily see that the algorithm is not going to take more than linear time. As n is decreased by coins[i] at the end of the while loop, we can say that for most of the cases it will take much less than $O(n)$ time.
So, we can say that our algorithm has a $O(n)$ running time. | https://www.codesdope.com/course/algorithms-greedy-algorithm/ | CC-MAIN-2022-40 | refinedweb | 1,002 | 65.35 |
START LEARNING
FLASH NOW
Get instant access to over
45 minutes of FREE Flash
tutorials on video
and our newsletter.
.
A Java class is a reusable entity. It is self-contained and may be thought of as an object
like a fish or a rock or book. In a Space Invaders game you might have an alien class, a defender class,
a bullet class, a bomb class and a game class that controls the whole thing. If I wanted to write another game
that had baddies, i might be able to reuse the aliens and bullet classes. classes can be thought of as
modular. They are connected together through methods. Let's make an example to show you :
public class Employee{
private int empNum;
public void setEmpNum(int _x){
bulletx = _x;
}
public int getEmpNum(){
return empNum;
}
}
A class needs an access modifier, the word "class" to signal what it is, and a unique name. Our Employee class
has only one variable - its employee number. It is private so that it cant be hacked into from outside. The only access
to it is through the set and get methods. The set method(setEmpNum) allows other classes to access the
Employee class and change the
empNum variable. The get method(getEmpNum) allows access to what the value of it is. Say you had an managing class
called Accountant. It might be something like this:
public class Accountant{
public static void main(String[] args){
// create an instance of class Employee called steve
Employee steve = new Employee();
steve.setEmpNum(234);
System.out.println("Steve : Employee Number :"
+ steve.getEmpNum());
}
}
But what is this line :
Employee steve = new Employee();
What is that ? What we have done in this line is to INSTANTIATE the Employee class.
We have made a clone of the Employee class and named it steve.
We use the keyword new to allocate memory for it and to make a new copy of that class.
Employee() is the CONSTRUCTOR . I allows us to create copies.
Later on we will write our own constructor methods, but for now we will use the default inbuilt
constructor.
Lets write our own Alien class for our game. What do you think it will need ?
A position (x and y),and a boolean flag to tell when it has been hit. Let us have a go at it:
public class Baddie{
// declare our variables
private int xpos;
private int ypos;
private Boolean isHit;
// declare our methods
public void setXpos(_x){
xpos = _x;
}
public void setYpos(int _y){
ypos =_y;
}
public void setHit(boolean hitFlag){
isHit = hitFlag;
}
public int getXpos(){
return xpos;
}
public int getYpos(){
return ypos;
}
public boolean getIsHit(){
return isHit;
}
}
So how are we going to use this class? Make a Game class that controls our baddies. Try something like so:
public class Game{
public static void main(String[] args){
String aliveOrDead;
// instantiate a baddie
Baddie pugwort = new Baddie();
pugwort.setXpos(50);
pugwort.setYpos(100);
pugwort.setHit(true);
if(pugwort.getIsHit==true){
aliveOrDead = "Dead";
}else{
aliveOrDead = "Alive";
}
System.out.println("pugwort is at "+
pugwort.getXpos() + " : "
+ pugwort.getYpos() +
" and he is "+ aliveOrDead);
}
}
What did you get ? Where was he ? Was he dead or alive ? What did we do in Game class?
Flash Tutorials in Video Format -
Watch them now at LearnFlash.com
Object anotherObject = new Object();
// or
ClassName instantiatedName = new ClassName();
// e.g.
Defender superDork = new Defender();
Lets make a java class constructor for our Employee class. It will have another variable which will be the same for all employees
- the salary. Let's do it:
public class Employee{
private int empNum;
private double salary;
// overwrite our default constructor
Employee(){
salary = 300.00;
}
// access methods
public void setEmpNum(int _x){
bulletx = _x;
}
public int getEmpNum(){
return empNum;
}
}
Any employee instantiated will have a default salary of $300.00 .
Exercise
Create a Circle class with attributes of radius, area and diameter. Use a constructor that sets the radius to 1.
Write methods such as setRadius , getRadius , computeArea and computeDiameter. Hints :
Diameter = 2 * radius
area = PI * Radius * Radius
Create another class called TestCircle and instantiate 3 Circles. Use the get and set methods for Circle
and compute the diameter and area of each circle and display those statistics.
Next Lesson we will get more into writing constructors and delving deeper into the murky depths of classes.
. | http://www.video-animation.com/java_012.shtml | crawl-001 | refinedweb | 715 | 65.32 |
Visual Studio: break on all CLR exceptions, not only the unhandled ones.
Posted by jpluimers on 2013/08/29
When you have a layered exception handling (for instance to translate general exceptions into domain or business exceptions, and want to control which exceptions trickle up to where), then from a debugger perspective, most exceptions actually handled.
However debugging those layers, it often makes sens to be able to break where all these exceptions are actually fired.
The screenshots (click on each to enlarge) are for Visual Studio 2010, but it works in any Visual Studio version and (since it is a debugger feature, not a language one) for all .NET languages I tried so far.
Note that as of Visual Studio 2010, if you disable these, it still breaks when exceptions are thrown from code called through reflection. This seems intentional and has 3 workarounds, but it might have been reverted in Visual Studio 2012.
This is a setting stored on the Solution level (.suo file) in Visual studio which by default is turned off. Luckily, it is very easy to turn this feature on, for instance for CLR (.NET Common Language Runtime) exceptions:
- In the “Debug” menu, choose “Exceptions” (or Press Ctrl+D, E),
- Wait a few moments for the dialog to appear
- Put a checkmark in the “Thrown” column for the “Comon Language Runtime Exceptions” row.
- Click the “OK” button.
Usually I do this for all or none of the CLR Exceptions.
But: You can drill down and specify this on the individual Exception class, or on the namespace in the same dialog:
Break on specific thrown .NET CLR exceptions in a namespace, or drill down to the Exception classes to break on.
–jeroen
Exceptions handling in the EnterpriseLibrary and the “SecurityException: The source was not found, …” « The Wiert Corner – irregular stream of stuff said
[…] So they never noticed this problem (and maybe never even looked for that log). After having the Visual Studio debugger break on all CLR exceptions, not only the unhandled ones, I could see this one shown below fired deep inside the EnterpriseLibrary. Which means that the […] | https://wiert.me/2013/08/29/visual-studio-break-on-all-clr-exceptions-not-only-the-unhandled-ones/ | CC-MAIN-2021-49 | refinedweb | 353 | 60.24 |
Internet Explorer 9 Caught Cheating In SunSpider 360
dkd903 writes "A Mozilla engineer has uncovered something embarrassing for Microsoft – Internet Explorer is cheating in the SunSpider Benchmark. The SunSpider, although developed by Apple, has nowadays become a very popular choice of benchmark for the JavaScript engines of browsers."
Embarassing? (Score:2, Funny)
I would think Microsoft would be used to embarassing by now..
Re: (Score:2, Insightful)
They're kinda like the rich fat cat who constantly puts his foot in his mouth. He knows he should shut up, but then again why should he care...he's rich, bitch!
Re: (Score:2)
Re: (Score:3, Informative)
And their 10-Q definitely indicates that they're not losing money. [yahoo.com]
The stock price is a meaningless indicator unless you are using it indirectly with their P/E ratio.
Re:Embarassing? (Score:5, Funny)
Microsoft added 'optimization' for the 10-Q results as well.
Re: (Score:3, Informative)
uh, hate to break it to ya, but ballmer selling 87 million in shares is not necessarily a good sign for the company given current losses. [nwsource.com]
Re: (Score:2)
please. it's almost trivial to sell shares in a company that puts anything above $1b in revenue a year, even if they're on the verge of bankruptcy.
Re: (Score:3, Interesting)
Insiders selling company stock is always a Red Flag for investors. Whether that is justified or not in this case is up to the individual to decide, but it's a Red Flag for a reason
... very often it means problems within the company and bad news follows.
There are perfectly good reasons for insider selling, and it helps to be very straightforward about it
... it's not like you can hide it anyway, it's reported and watched vigorously.
Not being a Microsoft investor, and not particularly interested in their are
Re: (Score:3, Informative)
That's actually why there's blackout periods for insiders buying/selling shares, as part of the Sarbanes-Oxley rules. When I was at Dell, I wasn't allowed to buy/sell within 30 days (either way) of any public statement regarding earnings or future plans. These rules exist specifically to prevent insiders from gaming the market by buying up a large amount of stock right before an expected rise (higher than expected earnings announced, for example), or selling right before an expected fall (lower than expecte
Re: (Score:2)
Not for long . . . I've been hearing that MS is losing tons of money and heading towards IBM territory.
Why would heading towards "IBM territory" be a bad thing? IBM is a profitable, successful company that has been around for longer than most companies of its size.
Re: (Score:2)
The 70s and 80s IBM used to be much, much bigger than it is now.
You'll find numerous buildings scattered across the US that once belonged to IBM, but no longer do, because of IBM's shrinking profits (mainly due to their loss of the IBM PC business, but also other downsizing). Another analogy is that Microsoft might end-up like Kmart, who was once the #1 retailer but is now increasingly irrelevant.
Re: (Score:2)
How can one be fiscally conservative (spend less money) and simultaneously be socially liberal (spend more money on those sad, pitiful poor people)?
Re: (Score:2,.
Re: (Score:3,.
I like to think of a Libertarian as a Republican that smokes pot and/or downloads porn. It could also be a Democrat who hates paying taxes to a federal government that either wastes the money or gives to someone who does not deserve it.
Re:Embarassing? (Score:4, Informative)
Socially liberally means allowing people to do whatever they want. In their bedrooms, in their homes, in their personal lives, so long as their actions don't physically harm another.
It does Not mean politically-or-fiscally liberal (using gov't to steal money from workers and redistribute it).
Re: (Score:2)
Then you're looking for the phrase laissez-faire, which means let do.
Re: (Score:2)
We're not... true libertarians (small "L") believe people should be responsible for themselves and their own actions. Fiscal conservatism is a means to the end... where people can pay their own bills and take care of their own kids.
Re: (Score:2)
Re:Embarassing? (Score:4, Insightful)
I think by "socially liberal" he means "nothing should be illegal unless it harms someone else." But as to fiscal conservatism, that's the Libertarian Party, not libertarians in general. Personally, I agree that anything that doesn't trample someone else's rights should be legal, but at the same time I'd like to see universal health care, and have the poor taken better care of. "There but for the grace of God go I".
And I'd like to see corporate power reigned in, and I'd like the corporates to stop getting government welfare and getting away without paying taxes. So I seldom vote for a Libertarian candidate.
Re: (Score:3, Interesting)
Well, I can't speak for right libertarians but for left libertarians it's easy. We don't think the solution to political problems is to throw money at them, we think there are underlying systemic problems that need to be addressed, in a fundamental way rather than a patchwork way.
Poverty doesn't exist because the government doesn't have adequate social programs to funnel money into the hands of poor people; poverty exists because wealth is power and is used to leverage more wealth and power. The genesis of
Re:Embarassing? (Score:4, Interesting)
First of all, I (and I'll not speak for all left libertarians here, as there is some debate on the matter, but I think I'm expressing the majority opinion) distinguish "personal property" (that is, what you own and use) from "private property" (that is, property owned privately, usually by an organization, but not owned or used by an individual, and leveraged to extract profit). This distinction is important before any discussion of any wealth-distribution theory.
Personal property is personal property. I can think of scarce few real leftists (which is to say, true socialists, communists, anarcho-communists, left-libertarians, etc) who include personal property when they say "property is theft". Private property, on the other hand, being the spoils of a great and sustained theft from the public, belong to the public and should be returned to the public.
Note here that I do not mean the state when I say public. Which is to say that I'm not an advocate for systems like the Soviet Union, but I am an advocate for movements like the worker takeover of factories we see in some Latin American countries.
In short, I advocate taking back what was stolen from us.
Re: (Score:2)
Socially liberal does not mean "spend money".
Any other definition necessarily requires taking my money and giving it to someone else.
Re:Embarassing? (Score:4, Insightful)
Well, yes, taking some of your money. But since the only way that you can make money is because the wider society sees that as a benefit, suck it up and pay your goddamned taxes. This illusion that somehow the money in your pocket came to you by yourself alone is the greatest lie of Libertarianism.
Re:Embarassing? (Score:5, Insightful)
Socially liberal does not mean "spend money".
Any other definition necessarily requires taking my money and giving it to someone else.
Ah, the "anti-tax" argument. I'm happy with taxes. Honestly. Do I wish they were lower - of course. Do I think that we spend money on stupid things? Yep.
Put taxes are still cheaper than having my own private doctor and hospital, my own roads, my own water towers and power generation, my own private library, swimming pool, and so on. Governments should do these things, because it's cheaper for everyone to pitch in.
Re:Embarrassing? (Score:2)
Embarrassing the article got slashdotted. Try the web site on port 8090. [nyud.net]
Also embarrassing that you spelt "embarrassing" incorrectly.
;)
Re: (Score:2)
Embarrassing, it's CCed at nyud.net.
Re:Embarassing? (Score:5, Informative)
Another misleading tabloid headline from Taco et al.
Short story: Someone notices a perhaps too-fast result for a particular benchmark test with IE 9 and modifies the benchmark code which then throws IE 9 performance the other way. One *possible* conclusion is that MS have done some sort of hardcoding/optimisation for this test, which has been thrown out by the modifications.
Re:Embarassing? (Score:4, Insightful)
Thanks for someone pointing this out. I mean really, if they were going to throw this test why would they throw it quite this much? And is this the ONLY portion of this test that seems to act this way? If so then why in the world would they throw only this portion and why this much? The original result was uber fast, the result on the modified test pretty slow - if they were going to try and hide something why make it uber fast and not just slightly better?
Something is weird, possibly hinky, but to outright declare cheating based just on this? Really? O_o
Re: (Score:2)
Re: (Score:3, Insightful)
The purpose of a benchmark is to try to show how performance will be in the real world. If a given application has been programmed to do very well in a given benchmark yet does not do as well with a real-world situation, then the benchmark results are flawed. The idea of coding an application just to have good benchmark numbers that would not be seen in the real world is considered cheating. In this case, we are talking about JavaScript speeds, so you would be very surprised if you believed that IE 9
Re:Embarassing? (Score:5, Insightful)
It is like when ATI/Nvidia made their drivers do some funky shit on the benchmarks to make their products seem way better; This was also called cheating at the time.
Re:Embarassing? (Score:4, Insightful)
The benchmark in question can be considerably optimized by dead code elimination, since a computationally expensive function in there (one that loops computing stuff) does not have any observable side effects, and does not return any values - so it can be replaced with a no-op. It is a perfectly legitimate optimization technique, but the one which tends to trip up naively written benchmark suites because they assume that "for(int i=0; i < 1000000; ++i) {}" is going to be executed exactly as written.
Thee was actually a similar case with artificial tests in the past - Haskell (GHC) scores on the Programming Language Shootout. Most tests there were also written as loops with some computations inside and no side-effects, on the presumption that compilers will leave the computations intact even though their results are never used. Well, one thing that GHC has always had is a particularly advanced dead code eliminator, and it compiled most of those tests down to what is essentially equivalent to "int main() { return 0; }" - with corresponding benchmark figures. Once they've changed the tests to print out the final values, this all went back to normal.
In this case it's not quite that simple, because seemingly trivial changes to benchmark code - changes which do not change the semantics of the code in any way - trip off the dead code elimination analyzer in IE9 JS engine. However, it is still an open question on whether it is deliberate, or due to bugs in the analyzer. One plausible explanation was that analyzer is written to deal with code which at least looks plausible, and neither of the suggested optimizer-breaking changes (inserting an extra statement consisting solely of "false;" in the middle of the function, or "return;" at the end of it) make any sense in that context. Any dead code elimination is necessarily pessimistic - i.e. it tries to guess if the code is unused, but if there are any doubts (e.g. it sees some construct that it doesn't recognize as safe) it has to assume otherwise.
The only true way to test this is to do two things:
1. Try to change the test in other ways and see if there are any significant diffs (such that they are not reasonably detectable as being the same as the original code) which will still keep the optimizer working.
2. Write some new tests specifically to test dead code elimination. Basically just check if it happens on completely different test cases.
By the way, the guy who found the discrepancy has opened a bug [microsoft.com] in MS bug tracker regarding it, in case you want to repro or track further replies.
Re: (Score:3, Insightful)
The problem is that that is the most logical conclussion.
The other possible conclussions are both more unrealistic and worse for MicroSoft.
The benchmark modifications were trivial and non-functional; they shouldn't have made that big of a difference.
Re:Embarassing? (Score:5, Informative)
Did you look at the diffs [mozilla.com]? The addition of the "true;" operation should make absolutely no difference to the output code. It's a NOP. The fact that it makes a difference indicates that either something fishy is going on, or there is a bug in the compiler that fails to recognise "true;" or "return (at end of function)" as being deadcode to optimise away, and yet the compiler can apparently otherwise recognise the entire function as deadcode. Just to be clear, we are talking about a compiler that can apparently completely optimise away this whole function:
function cordicsincos() {
var X;
var Y;
var TargetAngle;
var CurrAngle;
var Step;
X = FIXED(AG_CONST);
/* AG_CONST * cos(0) */ /* AG_CONST * sin(0) */
Y = 0;
TargetAngle = FIXED(28.027);
CurrAngle = 0;
for (Step = 0; Step CurrAngle) {
NewX = X - (Y >> Step);
Y = (X >> Step) + Y;
X = NewX;
CurrAngle += Angles[Step];
} else {
NewX = X + (Y >> Step);
Y = -(X >> Step) + Y;
X = NewX;
CurrAngle -= Angles[Step];
}
}
}
but fails to optimise away the code when a single "true;" instruction is added, or when "return" is added to the end of the function. Maybe it is just a bug, but it certainly is an odd one.
This shows the dangers of synthetic non-realistic benchmarks. I was amused to read Microsoft's comments on SunSpider: "The WebKit SunSpider tests exercise less than 10% of the API’s available from JavaScript and many of the tests loop through the same code thousands of times. This approach is not representative of real world scenarios and favors some JavaScript engine architectures over others." Indeed.
btw the Hacker News [ycombinator.com] discussion is more informative.
Re: (Score:3, Insightful)
All JS functions return values. If no value is specified in the "return" statement, or if the return happens due to reaching the end of the function, "undefined" is returned. So adding a "return;" at the end of the function which does not otherwise return anything does not change its meaning in any way.
That said, it is quite possible that the optimizer does not know this, and treats any "return" as a signal that the function returns a meaningful value. Which then indicates a bug in the optimizer.
That doesn'
Re:Embarassing? (Score:4, Insightful)
The return statement was "return;" i.e. a return statement that did not return anything. Looking at the other JavaScript engines, that line added at most 1ms, while with the IE engine it added 19ms. If the IE9 JS engine is handling this function in a super-efficient way that is not due to cheating, the optimisation must be highly sensitive to variance.
One way to check if the IE9 engine is doing some sort of special casing (e.g. hashing the text for the function) would be to change the name of a variable. This should not change the behaviour of the engine as it is the same code (there are no extra elements in the tree, like additional returns). If the IE9 engine is cheating, this should jump from 1ms to 20ms like the other variances. If it is an optimisation bug, the performance should be 1ms for both cases.
I'm sure there's no hyperbole in this article (Score:2, Insightful)
Welcome to your daily two minutes hate, Slashdot.
Re: (Score:2)
*although it was developed by Apple*
Doesn't this only make sense if there is reason to suspect Apple doesn't develop things well?
Re: (Score:2)
Re: (Score:2)
But then, that's how most things become standards
A, who is not a standards organisation develops it.
B, who is also not a standards organisation uses it.
If sufficient numbers of Bs use it, it becomes a de-facto standard
Sometimes C, who is a standards organisation says it's a standard.
Then it becomes a de-jure standard.
Re:I'm sure there's no hyperbole in this article (Score:5, Informative)
Everyone uses because it is a fairly objective benchmark.
For the record, I caught wind of this a month or two ago and posted it here in a firefox performance article. I was trolled and troll moderated despite pointing to the Mozilla team's own experiments.
The ONLY reasonable explanation, assuming you actually understand the implications of what is it you're (generalized readers, not your specifically) reading, which based on previous happenings is questionable, is that Microsoft is cheating their asses off by identifying the exact benchmark and returning a pre-computed value. Either that, or this is indicative of a horrible optimization bug which would negatively effect all javascript in their browser and it would be impossible for them to be competitive in the least. Given there is no evidence to support the later, the only reasonable conclusion is they are cheating their asses off in these benchmarks.
Re: (Score:3, Insightful)
Let's review that argument, shall we?
Statement 1: The ONLY reasonable explanation is that they are cheating. Or there is an optimisation bug which screws performance.
Comment: I think you need to look at meaning of "ONLY" as you have used it, and the way the rest of the world uses is.
Statement 2: there is no evidence of an optimisation bug, therefore it must be cheating
Comment: One could plausibly argue that there is no actual evidence of cheating, therefore it's an optimisation bug. Since there is no intern
Re:I'm sure there's no hyperbole in this article (Score:5, Insightful)
1) Microsoft beats everyone else by a factor of 10.
2) Making any of a number of effectively cosmetic changes to the function results in Microsoft taking twice as long as everyone else.
3) Making the inner loop 10x longer makes everyone else take 10x longer, except MS, who takes 180x longer.
Sorry, but if that counts as an optimization "bug", I have a bridge to sell you.
Benchmarks (Score:5, Insightful)
This is the nature of benchmarks... whenever people start caring about them enough, software/hardware designers optimize for the benchmark.
Next we're going to be shocked that 8th grade history students try to memorize the material they think will be on their test rather than seeking a deep and insightful mastery of the subject and its modern societal implications.
Re: (Score:3, Insightful)
This is the nature of benchmarks... whenever people start caring about them enough, software/hardware designers optimize for the benchmark.
Except that the article writer tries to claim that that couldn't possibly be the case and thus claims that Microsoft is "cheating" instead. Basically this is an invented controversy.
Re:Benchmarks (Score:5, Informative)
Fear not, for I have RTFA and the original article that the digitizor article is based on.
Fortunately for the ethics of Mozilla, the named Mozilla engineer (Rob Sayre) never claimed that IE9 cheated. Instead, he diplomatically refers to it as a "oddity" and "fragile analysis" and filed a bug w/ MSFT. [mozilla.com] [mozilla.com]
So, blame Digitizor and ycombinator for putting words in Rob Sayre's mouth.
Re: (Score:2, Insightful)
In none of the cases is MS doing something legitimately. Optimizing to one test is invariably a bad idea, no matter how well designed, and quite honest at this point they should be able to code an engine that's a lot more resilient than that.
Re: (Score:2)
Re:Benchmarks (Score:5, Informative)
1) If you actually read the article, you may have noticed that the engineer is named. It's
right there there at the beginning of paragraph 2: "While Mozilla engineer Rob Sayre"
2) The "cheating" stuff is all from the Hacker News thread and the fucking articl. I
suggest you further read item 1 under "Further Readings" on the fucking article, which
is what Rob actually wrote. The link is: [mozilla.com]
Just to save you the trouble of reading it, if don't want to, it's pretty clear that IE9 is eliminating the heart of the math-cordic loop as dead code. It _is_ dead code, so the optimization is correct. What's weird is that very similar code (in fact, code that compiles to identical bytecode in some other JS engines) that's just as dead is not dead-code eliminated. This suggests that the dead-code-elimination algorithm is somewhat fragile. In particular, testing has yet to turn up a single other piece of dead code it eliminates other than this one function in Sunspider. So Rob filed a bug about this apparent fragility with Microsoft and blogged about it. The rest is all speculation by third parties.
Mods, please RTFA! (Score:4, Insightful)
Really? Are you sure about that that?
Second paragraph of the article:
The parent is not insightful, it is merely a troll.
Really? You're Going with That? (Score:5, Insightful)
Next we're going to be shocked that 8th grade history students try to memorize the material they think will be on their test rather than seeking a deep and insightful mastery of the subject and its modern societal implications.
Some things to consider: 1) I'm not doing business with the 8th grader. Nor am I relying on his understanding and memorization of history to run Javascript that I write for clients. 2) You are giving Microsoft a pass by building an analogy between their javascript engine and an 8th grade history student.
Just something to consider when you say we shouldn't be shocked by this.
Re: (Score:2)
2) You are giving Microsoft a pass by building an analogy between their javascript engine and an 8th grade history student.
Indeed. The student would make a better Javascript engine.
Re: (Score:2)
1) I'm not doing business with the 8th grader. Nor am I relying on his understanding and memorization of history to run Javascript that I write for clients.
No, but 5 years later you'll let him vote...
Re: (Score:2)
This is the nature of benchmarks... whenever people start caring about them enough, software/hardware designers optimize for the benchmark.
It shows that Microsoft is more concerned about getting a good score on the benchmark than they are about providing a good customer experience.
Re: (Score:2)
It shows that Microsoft is more concerned about getting a good score on the benchmark than they are about providing a good customer experience.
Could the same be said about the numerous bugs issued for Firefox about optimizing TraceMonkey's SunSpider performance?
Re: (Score:2)
Or, it could be that they're just incredibly incompetent at cheating. I suppose that's possible. But given the degree to which the real speed has improved with the 4.0b7, I think we can largely rule out that level of incompetence.
Re: (Score:2, Insightful)
It shows that Microsoft is more concerned about getting a good score on the benchmark than they are about providing a good customer experience.
For that to be true, you'll need to demonstrate that they put more effort into scoring well on the benchmark than they did in improving performance in general. I don't think you can.
Improving performance in general is worth doing and I'm sure it's being done, but it's hard. Improving performance on a benchmark dramatically is often not that hard, and it's worth doing if it gets your product noticed.
I'm sure all browser makers are doing the exact same thing on both counts -- anonymous Mozilla guy is just
Re: (Score:2)
read the article. their js performance is quite suspect if their results are "too good to be true" when the benchmark is unmodified and then too bad to be true when it's very slightly modified. some more 3rd party testing should be done.. and actually it would be pretty easy to do.
Re: (Score:2)
Shows a problem with benchmarks in general. Too easy to game.
Re: (Score:2)
Shows a problem with benchmarks in general. Too easy to game.
Benchmarks are great, for improving the performance of your code. Benchmarks are terrible, as soon as they start to get press and companies try to deceive users by gaming them. That's why it is important that we call out when they are caught so they get more bad press and maybe think twice about gaming the benchmark in the first place.
Re: (Score:2)
It really depends how the benchmark is set up, certain things are known to be costly in terms of IO, RAM and processing time. And a benchmark which measures things like that and gives some meaningful indication where the time is being spent is definitely valuable.
Re:Benchmarks (Score:5, Informative)
There is a difference between optimising for a benchmark and cheating at a benchmark. Optimising for a benchmark means looking at the patterns that are in a benchmark and ensuring that these generate good code. This is generally beneficial, because a well-constructed benchmark is representative of the kind of code that people will run, so optimising for the benchmark means that common cases in real code will be optimised too. I do this, and I assume that most other compiler writers do the same. Cheating at a benchmark means spotting code in a benchmark and returning a special case.
For example, if someone is running a recursive Fibonacci implementation as a benchmark, a valid optimisation would be noting that the function has no side effects and automatically memoising it. This would turn it into a linear time, rather than polynomial time, function, at the cost of increased memory usage. A cheating optimisation would be to recognise that it's the Fibonacci sequence benchmark and replaces it with one that's precalculated the return values. The cheat would be a lot faster, but it would be a special case for that specific benchmark and would have no impact on any other code - it's cheating because you're not really using the compiler at all, you're hand-cmpiling that specific case, which is an approach that doesn't scale.
The Mozilla engineer is claiming that this is an example of cheating because trivial changes to the code (adding an explicit return; at the end, and adding a line saying true;) both make the benchmark much slower. I'm inclined to agree. The true; line is a bit difficult - an optimiser should be stripping that out, but it's possible that it's generating an on-stack reference to the true singleton, which might mess up some data alignment. The explicit return is more obvious - that ought to be generating exactly the same AST as the version with an implicit return.
That said, fair benchmarks are incredibly hard to write for modern computers. I've got a couple of benchmarks that show my Smalltalk compiler is significantly faster than GCC-compiled C. If you look at the instruction streams generated by the two, this shouldn't be the case, but due to some interaction with the cache the more complex code runs faster than the simpler code. Modify either the Smalltalk or C versions very slightly and this advantage vanishes and the results return to something more plausible. There are lots of optimisations that you can do with JavaScript that have a massive performance impact, but need some quite complex heuristics to decide where to apply them. A fairly simple change to a program can quite easily make it fall through the optimiser's pattern matching engine and run in the slow path.
Re: (Score:3, Interesting)
You're assuming that DCE is actually working properly. If it isn't, then true; will be compiled to a load of the true singleton (or a constant value if it's not implemented as an object). This would result in some register churn, which (especially on x86) would cause some spills to the stack.
If they're not properly aligning stuff on the stack, then spilling true; could mean that every other spill is not word aligned anymore, which could case some serious performance problems, especially if one of the v
Re:Benchmarks (Score:4, Informative)
And there is no actual evidence that they are actually cheating. The article writer puts forth 2 other highly likely situations but then dismisses them for no good reason for the "cheating".
Re: (Score:2)
Strictly speaking neither of those lines should even appear when run, that's supposed to be more or less stripped out before the engine star
Re:Benchmarks (Score:5, Informative)
Possibility one: Microsoft cheated. Presented as highly likely.
(I tend to agree that it's quite conceivable - other corporations have been caught doing similar things (like the NVIDIA/FutureMark debacle) and JavaScript execution speed is currently the most-hyped performance metric in the browser market.)
Possibility two: Microsoft have relied entirely on SunSpider when testing their JavaScript engine and over-optimized it to a point where it's now a SunSpider VM that happens to run JavaScript and doesn't work well with anything that isn't SunSpider. This is declared unlikely.
(Although I wouldn't put such a blunder past Microsoft, I do think that their tests extend beyond "how fast is SunSpider".)
Possibility three: The engine is legitimately ten times as fast as everyone else in this test but badly-written and so fragile that it experiences major slowdowns on code that meets currently-unknown criteria. Presented as unlikely.
(Note that in the Hacker News analysis [ycombinator.com] the general consensus now seems to be that IE indeed does something with the code that it shouldn't; an earlier theory of broken dead code analysis couldn't stand up to the fact that any change that causes the bytecode to look differently, even if functionally equivalent, causes slowdowns).
Re: (Score:2)
Sure, but the suspicious thing about this particular optimization is that adding a no-op statement that merely expresses something that is otherwise implicit (the return at the end of the function) disables it. This makes it look like they are optimizing for code that looks exactly like the source code of this function... which is not a very useful thing to do unless you want to cheat at a benchmark.
Re: (Score:2)
Nostalgic purpose / dreams / cool numbers, yeah, but what matter is today, and actual perceived performance.
Absolutely.
Except in the sense that you can get a lot of good press / free advertising by stomping a mudhole in the other guy's performance in a benchmark. There's a clear incentive to improve your actual performance, because when real people get ahold of your benchmarked piece of software/hardware/whatever they're going to notice that actual performance -- but there's also some incentive to improve you benchmark performance for cheap advertising. The former is more valuable than the latter, and I'm sure
Do not attribute to malice ... (Score:5, Insightful)
what can be attributed to stupidity..
I see no reason why explanation number one is more likely than explanation number two.
Re: (Score:2)
Only because you could argue that Microsoft has a vested interest in doing #1 - I guess it depends on how malicious you think Microsoft is
:)
Re: (Score:3, Insightful)
Re:Do not attribute to malice ... (Score:4, Insightful)
Never attribute to malice that which is adequately explained by stupidity. [wikipedia.org]
Re: (Score:2)
Does no on else use this trick in life? I doubt I've invented it; I'm sure it's taught somewhere and there's probably a fancy name for it.
Accuse Microsoft of cheating and see what information flows back.
Re: (Score:3, Insightful)
I see no reason why explanation number one is more likely than explanation number two.
I do. Given the nature of the changes that were used to uncover this, to me (as a programmer) it seems very unlikely that such over-optimization could happen in such a way that it would degrade so severely with those changes. Here is what was changed (look at the 2 diff files linked near the bottom of the article):
1) A "true;" statement was added into the code. It was not an assignment or a function call, or anything complex. Just a simple true statement. Depending on the level of optimization by the interp
Re: (Score:3, Insightful)
Viola, you have the exact behavior seen here - no cheating necessary.
When you make the for-loop count backward, it suddenly decides to execute it. Explain that. ( [ycombinator.com], “Edit (like...#14)”).
Re: (Score:2)
In my opinion, a useful benchmark reflects standard usage patterns. Therefore, optimizing for the benchmark can only benefit the end user. If shuffling the "return" and "True" is just as valid an option, perhaps the benchmark should include both approaches.
Maybe I'm a bit naive, but when I change my code, I expect the results to change as well.
Re: (Score:2)
Real-world usability (Score:2)
Benchmarks are very nice and all, but in the end, users using different browsers for real should decide which *feels* faster or better (which isn't the same as being faster or better). If real-world users can't feel the difference, then benchmarks are just there for masturbation value, and quite frankly, on reasonably modern hardware, I've never felt any true difference in rendering speed between the various "big" browsers out there.
I reckon the only thing that truly matters is the speed at which a browser
Three explanations FTFA (Score:5, Insightful)
Everything in italics is unsupported opinion by the author, yet is treated as fact in the summary and title by CmdrTaco and Slashdot. Perhaps if Slashdot would stick to actual news sites (you know NEWS for nerds and all that), this would be a balanced report with a good amount of information. Instead, it is just another Slashdot supported hit piece against MicroSoft.
Re: (Score:2)
it's just speculation on the possible reasons why it happened. should he have waited for the first replies to his post to post the replies to those obvious replies? of course not.
now if you want, you could run the benches yourself - and this is what the blogger wants you to do.
Re:Three explanations FTFA (Score:5, Interesting)
And, then Taco should treat the author's biased opinion as fact? Remember, the title of this post is "Internet Explorer 9 Caught Cheating in SunSpider."
I don't think so.
And, where is the response from MS? Did anyone ask MS, or did someone find this and go "MS is CHEATING!!11!!one!" without actually investigating or even asking MS? Because, it really looks like the latter, which would make this just more MS bashing blogspam.
No proof? (Score:5, Informative).
I'm not saying if what they have done is right or wrong, but this is a sensationalist headline that offers two other "less evil" alternatives to the outcome.
Re: (Score:2)
Headlines are supposed to be succinct summaries and that is enforced by the character limit here. Maybe a better headline would be "Internet Explorer 9 Probably Cheating On Sunspider, But Maybe Just Horribly Written In Ways That Make SunSpider Apply Poorly". Of course that is too long for the title.
The important take away is that a particular SunSpider test is not a valid test for IE 9's performance in that category and that IE 9 will do much, much worse in many real world scenarios. The likelihood is that
Re: (Score:2)
Suspect Benchmark Results by IE9 Being Investigated.
There, how hard was that?
Cheating allegation too strong (Score:4, Insightful)
Meh I think claiming they are cheating with no evidence seems a little too out there. I've never seen MS brag about how fast their browser is on this particular benchmark, and frankly seems more like a bug than a cheat.
Re:Cheating allegation too strong (Score:5, Informative)
mod parent up (Score:2)
While MS-IE have disclosed a lot of information lately on their blogs, if they're going to discuss Sunspider results (as they did on 28 October with the IE9PP6 tests [msdn.com]) then use of sleight of hand to sex them up is fair game for criticism.
Re: (Score:2)
But did modifying this one test to near impossible speed make that much of a difference? It was obviously anomalous right? What about the other test results? If tweaked do things get screwy and if so what about the other browsers? So far I'm not convinced although certainly it's posisble. Frankly if who they are trying to woo to their browser is Joe Average user then this benchmark, commented on in a blog no Joe Average likely reads, seems silly IMO.
Re: (Score:2)
I don't disagree with you, but I'd love your "bug" idea to become mainstream...
"Critical bug in Internet Explorer boosts performance by 10%"
"Microsoft keen to develop more bugs in the hope they boost performance in other products"
"Mozilla unavailable for comment on how come they persist in reducing bugs when everyone else seems to want more of them"
3 possible explanations, so why accuse? (Score:4, Interesting).
So, what proof do we have that Microsoft actually cheated?
Re: (Score:2)
Output? (Score:2)
Does this part of the benchmark produce a result or output, and if so is it correct?
And if it doesn't produce any output or a result that's checked, there is plenty of scope for innocent explanations. It could be a bug that doesn't arise when the extra statements are added. Or it could be that part of the code is being optimised away (because the result isn't used) and the analysis isn't clever enough to handle it when the extra statements are present.
If Microsoft is cheating... (Score:3, Insightful)
If Microsoft is cheating, why wouldn't they cheat a bit better? Of the five browsers, including betas, IE is second from last [mozilla.com]. Last place is, of course, Firefox, even with the new JS engine. Oh, and that stats image? Taken from the same blog post [mozilla.com] that originally discovered the Sunspider IE9 issue over a month ago.
Rob Sayre, the Mozilla Engineer who discovered this, filed a bug [mozilla.com] with Microsoft to get them to look at this issue. However, he didn't file said bug until today, which is likely why this is in the news now rather than a month ago.
Dead Code Elimination (Score:3, Informative)
Hi, we've posted an official response and explanation at the bottom of this post: [msdn.com]
Bottom line - we built an optimizing compiler into the Chakra JavaScript engine that eliminates dead code where it finds it.
It's a fun conspiracy theory that we'd somehow 'game' SunSpider in this way, but it doesn't make sense that we'd go to all that risk to cheat on just one test out of 26.
Best wishes, Tim Sneath | Microsoft
Re: (Score:3, Funny)
AC is right on the money there. Open-source software has come such a long way that Microsoft products and business are entirely avoidable these days, and therefore are no longer a threat. Google is the true danger of the age because they're fast on the way to make off-line applications obsolete altogether and render the open-source vs. closed source debate moot, as we'll have to swallow their online applications shenanigans without being able to do a thing about it.
Re:Old news... (Score:4, Insightful)
It is actually a couple of months old [mozilla.com]. The thing that makes me doubt the claims of cheating is that nobody has been able to find other examples of performance variations in this benchmark in all the time since this came to light. If they were going to cheat, why limit it to the cordic test? Nobody would base their browser choice on this obscure test.
I don't have the beta installed yet, but what I would like to see is the actual calculation changed and then run the tests again. Don't just put in weird code like "true;" but make the javascript plausible. It could be that the addition of these unusual statements are enough to confuse the optimiser so that it resorts back to a completely unoptimised version.
Re: (Score:3, Insightful)
Putting a
return;
at the tail of a function is unusual? Are you high?
Re:Old news... (Score:4, Informative)
If you knew anything about JIT compilers, you would know that they have simple heuristics on purpose (compile speed is a strict constraint.) Making something 1 statement longer could remove it as a candidate for quite a few optimizations (inlining, static loop evaluation, loop unrolling, dead code elimination, etc..)
These simple heuristics use quickly evaluated metrics once the source is translated into an abstract syntax tree. The number of nodes in the tree.. the depth of the tree.. the number of conditional nodes..
JIT's are not simply compilers that try to produce the best code possible. JIT's make tradeoff decisions between compile time and the resulting code quality.
Re: (Score:2)
The data you reference shows 59%. However, 51% or 59% it doesn't really matter. What is important is the trend shown. IE is consistently losing market share over time. Were they climbing up the ladder that would be one thing. They're not. This doesn't say anything about whether or not they "cheated", only that your claim that their market share is so high why would they bother to cheat doesn't make much sense, to me.
Re: (Score:2)
Mozilla (netscape) used to have 91% share too but I don't see them cheating.
I choose browsers based on features not speed. I like Firefox's addons to enable me to download youtube vids, SeaMonkey's builtin newsgroups/chat/email features, and Opera's Turbo for slow connections (dialup, cellular). As for speed they all seem about the same although FF3.6 does have a memory leak that can be annoying. | https://tech.slashdot.org/story/10/11/17/1324218/internet-explorer-9-caught-cheating-in-sunspider?sdsrc=nextbtmprev | CC-MAIN-2016-30 | refinedweb | 7,211 | 61.06 |
It looks some sync fail with following error: {"level":"error" \\\\\\\"A general system error occurred: Failed to lock the file: api. The error "Hardware lock not found. Application will terminate" will occur when starting 3D Studio MAX if the hardware lock cannot be detected. ERROR: Failed to lock /var/lib/apt/lists/lock of Ruby or any other Plesk component, as well as system packages installation/upgrade.
: INFO : ( ) : (IS ISProd) : Infa_ND_INFAPRD2 : LM_ : Connected to repository [RSProd] in domain [Infa_DMProd] as user [Administrator] in security domain [Native]
: INFO : ( system error failed of lock open("/usr2/infa/Informatica/Prod/Storage/pmlck_Infa_DMProd_sprers.eu", O_RDWR ISProd) : Infa_ND_INFAPRD2 : SF_ : Service initialization completed.
: INFO system error failed of lock (
AutoCAD
Table of Contents:
Overview:
The error "Hardware lock not found. Application will terminate" will occur when starting 3D Studio MAX if the hardware lock cannot be detected. In some circumstances, no error appears, but the program will fail. These failures are usually caused by setup or configuration problems with the hardware lock for 3DS MAX. This document describes how to determine the cause of these problems, and how to solve them.
Preliminary Troubleshooting Strategies:
To diagnose problems with 3D Studio MAX and the Sentinel Pro hardware lock, perform these steps:
Note: Most changes made to the system will require you to log into the system so you have Administrator rights. After making changes to the system, you may have to reboot in order for the changes to take affect.
1. If you have other programs that use hardware locks, check to see if they run on the computer. If the program runs, then the port address is probably correct, and the port is probably receiving an adequate amount of electricity. If this is the case, skip to #3.
2. Check the port address and test the power. The parallel port address should be one of the three standard IBM LPT port addresses: 0x, 0x, or 0x3BC. Notice how the DMA is set. To check the port and DMA settings, use sprers.eu found in the WINDOWS\SYSTEM32 directory.
3. Change the parallel port type setting within the Sentinel Driver from Autodetect to IBM-AT. This suggestion has proven successful for a number of different computers, but you should try each port mode to see which works for you.
If you are running Windows NT
Perform these steps:
If you are using Windows 95 or Windows
A driver is usually not necessary when running under Windows 95/98, therefore, the driver is not installed during 3DS MAX installation. If you need to load the driver for Windows 95/98, you must run the Sentinel installation program sprers.eu found in the download section of the Rainbow Technologies web site. The sprers.eu file may also reside in the Sentinel folder of more recent versions of the 3DS MAX CD.
Note: Some parallel port devices, like certain printers, have drivers which prevent the hardware lock from being detected. You will have to install the Sentinel driver using the sprers.eu program.
1. Choose Start, Settings, and select Control Panel. Select Multimedia, then choose the Devices tab.
2. Open "Other Multimedia Devices". Double-click the "Sentinel for i Systems" entry.
3. Choose Settings to open the Sentinel Driver configuration window.
4. Choose Edit to open the Configure Port window.
5, system error failed of lock. Change the Port Type from Autodetect to IBM-AT.
6. Restart Windows NT when you are prompted to do so when exiting.
Checking the Version of the Sentinel Driver:
It is important to verify the version of the Sentinel driver. Later versions of 3DS MAX require that you have the newest version of the Sentinel driver loaded. The correct driver version should be or greater. If you have an older version, remove the existing driver and install a newer version from the Sentinel directory on the 3DS MAX CD. The very latest driver can be downloaded from the Rainbow Technologies web site at sprers.eu. This is system error failed of lock self-extracting archive file that should be extracted to a directory on your system. Refer to the technical document, Removing and Installing Hardware Lock Drivers, for instructions about installing and removing hardware lock drivers.
Checking the Sentinel driver version using Windows NT
Perform these steps:
Checking the Sentinel driver version using Windows 95/
Perform these steps:
1. Choose Start/Settings, and open the Control Panel.
2. Double-click the Multimedia icon, then select Devices.
3. Open 'Other Multimedia Devices', then double-click the entry for "Sentinel for i Systems".
4. Choose Settings, then the About button to display a window containing the driver version information.
1. Start the sprers.eu utility.
2. From the dropmenu at the upper left, bind servfail error Function > Configure
3. In the Configure Sentinel Driver dialog box pick the About button to see the driver information window.
Verify Communications with the Lock:
Verify that communication occurs with the hardware lock. This is best done by using a utility named Sentinel Medic that will determine the presence of any Rainbow Technologies hardware locks that are attached to the parallel port richedit line insertion error avz the computer. It can be downloaded from the Utilities section of the Rainbow Technologies web site at sprers.eu.
Some computers use a hard disk controller card that also controls the communications and parallel ports on the computer. Some of these multi-function cards do not generate enough power to send a signal to the hardware lock. An easy remedy for this problem is to plug the hardware lock into the port, then plug a printer into the other end of the hardware lock. When the printer is turned on, power will flow between the printer and the computer. The 3D Studio MAX initialization signal can now "piggyback" on the printer's power signal. If acceptable, either leave the printer in this configuration or install a parallel card that supplies more power. This is a common problem with laptop computers.
Confirming the Correct Lock:
3D Studio MAX comes in two versions. You can purchase a Commercial or Educational version of the product. Each version has it's own specific lock. A Commercial version of 3DS MAX will not run if you have an Educational hardware lock. Figure 1 shows the difference between the two hardware locks.
Figure 1 - The Commercial version lock is blue (L) while the Educational version lock is red (R), system error failed of lock. The locks measure approximately " x " (or x cm).
Note: Each lock has a fifteen digit part number. If the lock's part number ends with (Commercial) or (Educational) you should contact discreet at or sprers.eu for a replacement lock. Locks with these number have system error failed of lock reported to fail without t42p fan error you do not have the correct lock for your version of 3D Studio MAX, contact your Authorized discreet Reseller for a replacement. You will need to provide your product serial number when requesting a replacement lock.
Hardware Lock not found error:
Aside from a wrong lock or known defective lock, as described above, there are other factors that can cause a hardware lock error. Figure 2 shows the error message you will see when a hardware lock related issue arises. This section discusses the other situations that can system error failed of lock in the following error dialog.
Figure 2 - This message is a clear indication that you are encountering a hardware lock problem.
'Missing Lock'
Verify that the hardware lock is actually plugged into the system parallel port, system error failed of lock. Locks are small and easily swapped to other systems. If machines have recently been moved or replaced, make sure the lock is not still attached to the old CPU. You should also make sure that the lock is attached to the port properly. If the lock is not screwed in all the way, system error failed of lock, 3D Studio MAX may not recognize the presence of the lock.
The position of the hardware lock can also be a factor in making 3D Studio MAX operate. If you have a chain of locks for different products on the system, try plugging the 3DS MAX lock directly to the parallel port by itself and launch 3DS MAX. If the program runs, add the other locks to the back of the 3DS MAX lock, and launch each program to make sure they all operate correctly.
Note: Make sure you turn off your system whenever plugging or unplugging your hardware lock(s). Plugging or unplugging a lock from an operating system can cause a power surge that can damage the lock.
'Wrong Lock Order' (3D Studio MAX only)
The Hardware Lock Not Found error can also occur if you have upgraded your copy of 3D Studio MAX and plugged the 'Upgrade' hardware lock in without keeping the original 3DS MAX 1.x lock on the system. The 'Upgrade' lock must be used in conjunction with the original MAX 1.x hardware lock. The original lock should be plugged in first with the 'Upgrade' lock following it.
"Wrong Upgrade Version" (3D Studio MAX only)
This error could also be the result of installing the wrong upgrade version of 3D Studio MAX. A common scenario is one where you have the educational version of 3DS MAX 1.x installed and running without any error ribbon out zebra. After installing the upgrade of 3DS MAX 2.x, you find that the error message is displayed whenever you try to launch the newer version. However, MAX 1.x runs fine, if it's still on the system. If you were mistakenly sent the Commercial upgrade copy of 3DS MAX 2.x, the new blue, 'upgrade' lock will not be compatible with 0x00000be error windows 7 original red, educational lock. The Commercial version of MAX requires a blue hardware lock.
Check the MAX CD. On the left side, you should see the educational logo (a lightbulb with a cursor clicking on it). You should also see text that reads, "Education Version". If the left side is blank, you have a Commercial version of the software and it needs to be replaced with the Educational version. Contact the dealership or source where you ordered your upgrade.
Similarly, the error can be caused if you upgrade to 3D Studio MAX 2.x from 3D Studio VIZ 1.x or from the original DOS version of 3D Studio. The best way to verify that you have the correct 'upgrade' lock is by making sure the locks are the same color. If the locks are incorrectly matched, contact the dealership or site where you purchased your upgrade and ask then to got you a replacement.
Note: Upgrades to 3D Studio MAX and 3.x do not require you to keep the old hardware lock on the system. Once newer versions of 3DS MAX are configured and operational, you can remove the old lock from the parallel port.
'Inactive Sentinel Driver'
An inactive Sentinel driver can also be a reason for the 'Hardware Lock not found error'. This can occur if you have recently loaded 3D Studio MAX and have forgotten to restart the system, system error failed of lock. To verify that the driver is running, follow the steps shown in the section, Other Device or Service related errors.
"Damaged Lock"
If your system has recently been subjected to power surges or other electrical problems, the lock may be damaged. Other mistreatment, such as removing the lock while the system is running or plugging the lock into another parallel device instead of directly into the parallel port, can also damage the lock. A damaged lock cannot be repaired. It must be replaced. To verify whether the lock is damaged, swap it with a lock from another system if one is available. If none are available then take your lock to the local dealership where you purchased 3DS MAX and ask them to plug it into one of their systems. If it still fails, they can help you error line 692 unity3d beast lightmap a new one.
If the dealer where you purchased 3DS MAX is not local, you can also check the lock using the Sentinel Medic utility described earlier in the section, Verify Communications with the Lock.
Other Device or Service related errors:
If you receive device or service error when booting your system, they may have an effect on the Sentinel driver. There are three situations that usually affect the hardware lock driver and may cause it to fail:
1. A device is not recognized by the system during boot up.
2. A service failed to start during boot up.
3. A CMOS/BIOS setting is keeping a device/service from starting.
Note: The Event Viewer can also give you an idea of devices or services that fail to start during the system boot.
"The system failed to find the device specified."
Advanced chipsets will yield the error: "The system failed to find the device specified" when starting NT. This is because the I/O address is dynamically allocated. This problem is normally caused by using older Rainbow drivers and can be resolved by installing the latest hp5100 error 52.0, which have advanced chipset detection.
ParPort service failure
The ParPort service allows the parallel port to operate. If the port is disabled in the system CMOS, the Sentinel driver will fail. Restart the system and enter the CMOS to verify that the parallel port is enabled. At this time, make sure it is set to one of the modes specified earlier in this document. Once the port is enabled, continue booting. If no service failure messages appear try launching 3DS MAX. The Sentinel driver should automatically be started during the boot up sequence.
"One or more services failed to start."
In some cases, the error message "One or more services failed to start" will occur during the boot-up phase of NT. If 3D Studio MAX fails to start, it is likely that NT does not recognize the Sentinel driver or the ParPort service failed to start. Verify if the Sentinel driver was the service that failed by using the following steps.If you are using Windows
1. Right click on the My Computer icon and choose properties.
2. Select the Hardware Tab then click on the Device Manager button.
3. From the Device Manager applet, choose View / Show Hidden Devices.
4. Expand the Non-Plug and Play Drivers folder and locate the Sentinel drivers.
5. Double click on the Sentinel driver and then on the Driver tab.
6. Verify that the driver is Started.
If you are using Windows NT
1. Choose the Start>Settings menu, and open the Control Panel. Select the Devices icon.
2. Scroll down the list of devices and highlight the Sentinel listing. The Status should be set to "Started", and its Startup setting should be "Automatic".
a. If there is no Sentinel listing, it means that the driver has not been installed.
b. If the Status is not "Started", click the Start button. If the device fails to start, this would indicate that the ParPort device probably failed to start during system boot.
c. Scroll up the list of devices and highlight ParPort. If the Status is not set to "Started", click the Start button to start the ParPort.
d. If the ParPort fails to start, you will have to boot into the CMOS to make sure the system setting for the parallel port is enabled. Then the ParPort device will start.
3. Once the Parport device is started, scroll back to the Sentinel listing and make sure it is 'Started'.
4. Close the Control Panel and try starting 3DS MAX.
If you are using Windows NT (3D Studio MAX 1.x only)
1. In the Main Window, double-click Control Panel, then select Drivers.
2. Highlight "Sentinel for i Systems" or "Sentinel for Intel systems".
3, system error failed of lock. Choose Setup, then Edit.
4. If the computer reports "Sentinel Driver not installed", the driver is not being recognized. Check the NT Event Log for references to the Sentinel driver in order to determine if the problem is isolated to the hardware lock drivers.
BIOS Considerations:
Check the parallel port communication mode in the computer's BIOS, system error failed of lock. The BIOS (Basic Input/Output System) is a collection of services on a ROM (read-only memory) chip. These services enable hardware and software, operating systems, applications, and communications between applications and devices. The BIOS services are loaded automatically into specific addresses, and should always be accessible. BIOS services are updated and expanded to handle newer devices and greater demands.
Every brand of BIOS is unique, and most computer hardware configurations are unique, so the following suggestions are general guidelines that should be used with respect to each individual situation.
Changing parallel port communication mode in the BIOS
During startup, note what key will access the "System Setup". This is typically a function key such as F2, F8, F10, a CTRL key combination, or the Delete key. Choose this option and enter the System error failed of lock Setup.
Once in the System Setup, look for the entry that controls "Parallel port mode". Frequently, this setting is located under IDE devices or Peripherals. Parallel port mode settings can include ECP, EPP, ECP+EPP, SPP, NORMAL, COMPATIBLE, BI-DIRECTIONAL, PS/2, or IBM-AT.
To enable NT recognition of the 3D Studio MAX hardware lock, set the parallel port to Normal, Compatible, Bi-directional, IBM-AT, system error failed of lock, ECP or SPP (usually only one of these options will be available). After choosing the appropriate communication mode, select Save and Exit, and restart the computer.
EPP is an 'Enhanced Parallel Port' mode that takes advantage of speedy communication protocols for modern printers. However, in most cases the Sentinel drivers will not work if the parallel port is configured for EPP mode. If a system error failed of lock is required to operate from the workstation running fatal error rc1022 expected #endif MAX, it may be necessary to have a second parallel port installed to take advantage of EPP communications for the printer.
BIOS with default DMA settings
Some BIOS, notably the Compaq-included EPP Runtime BIOS, do not have an obvious setting for parallel port mode. Instead, they have the parallel port mode default to a setting such as "DMA-3, IRQ 7". In the Compaq BIOS, DMA 3 is automatically set to EPP communication mode. This will prevent the 3DS MAX hardware lock from being recognized.
Thus, switching to one of the DMAs between 0 and 2 will allow 3DS MAX to run. Of course, the operator must pay attention to any other devices which may use DMA addresses (sound cards, and so on), so that the communications settings for these devices are not disturbed.
Please note that this example is specific to the Compaq BIOS. However, similar settings may exist for other BIOS. Therefore, if there are no apparent communications setting for the parallel port in the BIOS, contact the computer manufacturer to see if the port communication defaults to a mode that will not work with the 3DS MAX hardware lock.
To check the DMA settings, use the Microsoft Diagnostics executable. In NT versionuse the File Manager to browse to WINNT35/ SYSTEM32 sprers.eu. Double-click on the executable file, then choose the DMA/Memory button. In NTselect the Resources tab, and then choose the DMA button.
Award BIOS
Conflicts with the 3D Studio MAX hardware lock have been reported on computers with newer Award BIOS that have the "Daylight Savings Time" option enabled. Edit the System Setup to disable this option, then re-install the hardware lock drivers as described in the technical document, Removing and Installing Hardware Lock Drivers.
Other Troubleshooting Suggestions:
The following are general suggestions to follow if 3D Studio MAX cannot recognize the hardware lock after following the steps outlined earlier in this document. While most of the suggestions below do not relate directly to 3D Studio MAX or the Sentinel Pro hardware lock, they can help solve related problems that may be the cause for the lock or drivers to fail.
Update the BIOS (check with your system manufacturer for updates). BIOS manufacturer, version, and date information appears at the bottom of the screen while booting the system. Hitting the PAUSE key at this point during startup should allow the operator to read the part number, the BIOS date and version. Sometimes this information is also listed upon entering "Setup" during startup.
Launch Windows NT from a stripped system. This can be done by entering the BIOS (System Setup). Disable any memory caching or shadow RAM, and system error failed of lock Windows NT in VGA mode. While this will slow down the computer, it is a temporary troubleshooting step, system error failed of lock. If MAX runs in a stripped environment, this indicates that there may be a hardware conflict or problem in the system.
Character Studio and Plug-In concerns:
Make sure the lock corresponds with Character Studio. When Character Studio is authorized, the authorization code is directly linked to the hardware lock for that installation of 3D Studio MAX. Installing a previously authorized version of Character Studio on a system with a different lock will result in the authorization dialog appearing. System error failed of lock the same authorization code will result in an "Incorrect authorization string"message.
Other Plug-ins from 3rd party vendors may operate the same way. | https://sprers.eu/system-error-failed-of-lock.php | CC-MAIN-2022-40 | refinedweb | 3,600 | 64.2 |
Written by Nwose Lotanna✏️
In this article, we’ll break down all the new and shiny features that shipped with the latest version of Next.js.
What is Next.js?
Next.js is self-branded as the React framework for static pages, progressive web apps, mobile web apps, SEO-friendly pages, and, especially, server-side rendering. It facilitates static exporting with just a line of command and ships with a CSS-in-JS library styled JSX. It also includes features such as code splitting, universal rendering, and hot reloading.
According to the latest “State of JavaScript” survey, the Next.js community grew massively in the last 12 months, with the retention rate skyrocketing from 6 percent to 24 percent. The number of new people who are willing to learn Next.js increased by almost 10 percent as well.
Let’s take a detailed look at some of the most noteworthy new features that shipped with the latest version of the framework: Next.js 9.2.
Built-in CSS support for global stylesheets
The capability to import CSS with the
next-css plugin extending the behavior of Next.js was shipped in version 5.0. As time went on, the Next team got a lot of feedback regarding the
next-css plugin from companies that use the framework.
In response, the team decided to bring the plugin in-house as a part of the core Next product. Whereas the plugin had previously been limited in its handling of imports — such as cases where imported files dictated global styles for the entire app rather than being scoped to the component level — the Next team developed a workaround. To get started using CSS imports in your application, you can import the CSS file within
pages/_app.js.
Consider the following stylesheet, named
styles.css, in the root of your project.
body { padding: 20px 20px 60px; margin: 0; }
Create a
pages/_app.js file if not already present. Then, import the
styles.css file.
import '../styles.css' // This default export is required in a new `pages/_app.js` file. export default function MyApp({ Component, pageProps }) { return <Component {...pageProps} /> }
Since stylesheets are global by nature, they must be imported in the custom
<App> component to avoid class name and ordering conflicts for global styles. This makes it possible for your styles to reflect on your app as you edit them in the development environment.
In production, all the stylesheets will be minified into one
.css file and loaded through a link tag in the
index.html file that Next serves. This feature is backward-compatible, and if you already achieve this with another library, the feature is disabled by default to avoid conflicts.
Built-in CSS module support for component-level styles
Another issue with the old
next-css plugin was the fact that all your
.css files were either handled as global styles or local styles and there was no option to enable both at once. In this new version, CSS modules are now supported so you can use global CSS and CSS modules simultaneously.
With CSS modules, you can scope CSS locally by classnames and import them anywhere in your app to achieve scoping or component-level styling. Consider, for example, a reusable
Button component in the
components/ folder.
First, create
components/Button.module.css with the following content.
/* You do not need to worry about .error {} colliding with any other `.css` or `.module.css` files! */ .error { color: white; background-color: red; }
Then, create
components/Button.js, importing and using the above CSS file.
import styles from './Button.module.css' export function Button() { return ( <button type="button" // Note how the "error" class is accessed as a property on the imported // `styles` object. className={styles.error} > Destroy </button> ) } >
In this version, CSS modules are opt-in and only enabled for files with the
.module.css extension; the normal link stylesheets and global CSS files are still supported. This feature is backward-compatible, and if you already achieve this with another library, again, the feature is disabled by default to avoid conflicts.
Improved code splitting strategy
For a Next.js app to load, five fixed JavaScript bundles must load to boot up React: the main JS file, a common JS file, the Next runtime bundle, the Webpack runtime bundle, and dynamic imports. Code splitting helps to optimize the process of loading up all these files.
The initial code splitting strategy the Next team employed was for the commons bundle. It was a usage-ratio heuristic strategy to ensure that if a module was used in more than half of all pages, it would be marked as a module; otherwise, it would be bundled. While the team found this method to be beneficial, over time it realized that it could optimize the process even further.
The new strategy allows you to optimize common chunks with multiple files, including when many page types are involved. This is now the default process moving forward from this version.
The new chunking implementation leverages HTTP/2 to deliver a greater number of smaller-sized chunks. Under the new heuristic, myriad chunks are created for various purposes:
- A minimal chunk for each page
- A framework chunk containing React, ReactDOM, React’s Scheduler, etc.
- Library chunks for any
node_moduledependency over 160kb (pre-minify/gzip)
- A commons chunk for code used across all pages
- As many shared chunks (used by two or more pages) as possible, optimizing for overall application size and initial load speed
- Next.js client-side runtime
- Webpack runtime
Catch-all dynamic routes
Dynamic route segments were introduced in Next 9.0. The goal was to simplify dynamic segments in Next.js without using a custom server. The feature has enjoyed widespread adoption, and the Next team has been trying to optimize it as much as possible.
Previously, dynamic routes did not cover catch-all routes. In the new version, you can now use catch-all routes by using the
[...name] syntax. This is especially useful when you have a nested structure that is defined by a content source, such as a CMS.
For example,
pages/post/[...slug].js will match
slug is provided in the router query object as an array of individual path parts. So for the path
post/foo/bar, the query object will be
{ slug: ['foo', 'bar'] }
How to get started using Next.js 9.2
You can get started using the new version right away by upgrading your current version.
npm i next@latest react@latest react-dom@latest
Conclusion
The Next community has been showing impressive growth numbers, as evidenced by its nearly 900 contributors, 44,000+ GitHub stars, a vast number of example directories, and a 13,800-member spectrum forum. These numbers are poised to keep increasing steadily as the team continues to focus on improving the developer experience and optimizing the Next product.
What is your favorite feature of Next 9.2? What’s new in Next.js 9.2? appeared first on LogRocket Blog.
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/bnevilleoneill/what-s-new-in-next-js-9-2-4d3k | CC-MAIN-2021-31 | refinedweb | 1,166 | 64.61 |
. Today C++ remains high on the, or parent classes, or ancestor classes, or super-classes).
This is shown more special characteristics not uncommon to find someone defending C over C++ (or vice versa) or complaining about some features of these languages. There is no scientific evidence to put a language above another in general terms; the only reason that does have some traction is the possibility of deep changes or unknown bugs in a language that is still very recent. In the case of C or C++ this is not the case, as both languages are very mature. Though both are still evolving, the new features keep less extensive and lower level than C++, it is easier to check and comply with strict industry guidelines and automate those steps. Another benefit of C is that it is easier for the programmer to do low level optimizations, though most C++ compilers can guarantee near perfect optimizations automatically, a human can still do more and C has less complex structures.
Any of the valid reasons to choose a language over another is mostly due to programmer's choice that indirectly deals with choosing the best tool for the job and having the resources needed to complete it. It would be hard to validate selecting C++ for a project if the available programmers only knew historic evolution.
One could argue that using the C subset of C++, in a C++ compiler, is the same as using C, but in reality we find that it)..
Fundamentals for getting started
The code.
File organization
Most operating systems require files to be designated by a name followed by a specific extension. The C++ standard does not impose any specific rules on how files are named or organized.
The specific conventions for the file organizations lead to most practices remaining static, except for the operating systems:
#include "light.h" Light::Light () : on(false) { } void Light::toggle() { on = (!on); } bool Light::isOn() const { return on; }
.h.
Statements. }
*/.
We will now describe how a comment can be added to the source code, but not where, how, and when to comment; we will get into that later.
C style comments
Document your code
There are a number of good reasons to document your code, and a number of aspects of it that can be documented. Documentation provides you with a shortcut for obtaining an overview of the system or for understanding the code that provides a particular feature.
Why?
The alias
You can create new names (aliases) for namespaces, including nested namespaces.
- Syntax
namespace identifier = namespace-specifier;
using namespaces
- using
using namespace std;
This using-directive indicates that any names used but not declared within the program should be sought in the ‘standard (std)' namespace.
To make a single name from a namespace available, the following using-declaration exists:
using foo::bar;
After this declaration, the name bar can be used inside the current namespace instead of the more verbose version foo::bar. Note that programmers often use the terms declaration and directive interchangeably, despite their technically different meanings.
It is good practice to use the narrow second form (using declaration), because the broad first form (using directive) might make more names available than desired. Example:
namespace foo { int bar; double pi; } using namespace foo; int* pi; pi = &bar; // ambiguity: pi or foo::pi?
In that case the declaration using foo::bar; would have made only foo::bar available, avoiding the clash of pi and foo::pi. This problem (the collision of identically-named variables or functions) is called "namespace pollution" and as a rule should be avoided wherever possible.
using-declarations can appear in a lot of different places. Among them are:
- namespaces (including the default namespace)
- functions
A using-declaration makes the name (or namespace) available in the scope of the declaration. Example:
namespace foo { namespace bar { double pi; } using bar::pi; // bar::pi can be abbreviated as pi } // here, pi is no longer an abbreviation. Instead, foo::bar::pi must be used.
Namespaces are hierarchical. Within the hypothetical namespace food::fruit, the identifier orange refers to food::fruit::orange if it exists, or if not, then food::orange if that exists. If neither exist, orange refers to an identifier in the default namespace.
Code that is not explicitly declared within a namespace is considered to be in the default namespace.
Another property of namespaces is that they are open. Once a namespace is declared, it can be redeclared (reopened) and namespace members can be added. Example:
namespace foo { int bar; } // ... namespace foo { double pi; }
Namespaces are most often used to avoid naming collisions. Although namespaces are used extensively in recent C++ code, most older code does not use this facility. For example, the entire standard library is defined within namespace std, and in earlier standards of the language, in the default namespace.
For a long namespace name, a shorter alias can be defined (a namespace alias declaration). Example:
namespace ultra_cool_library_for_image_processing_version_1_0 { int foo; } namespace improc1 = ultra_cool_library_for_image_processing_version_1_0; // from here, the above foo can be accessed as improc1::foo
There exists a special namespace: the unnamed namespace. This namespace is used for names which are private to a particular source file or other namespace:
namespace { int some_private_variable; } // can use some_private_variable here
In the surrounding scope, members of an unnamed namespace can be accessed without qualifying, i.e. without prefixing with the namespace name and :: (since the namespace doesn't have a name). If the surrounding scope is a namespace, members can be treated and accessed as a member of it. However, if the surrounding scope is a file, members cannot be accessed from any other source file, as there is no way to name the file as a scope. An unnamed namespace declaration is semantically equivalent to the following construct
namespace $$$ { // ... } using namespace $$$;
where $$$ is a unique identifier manufactured by the compiler.
As you can nest an unnamed namespace in an ordinary namespace, and vice versa, you can also nest two unnamed namespaces.
namespace { namespace { // ok } }
Because of space considerations, we cannot actually show the namespace command being used properly: it would require a very large program to show it working usefully. However, we can illustrate the concept itself easily.
// Namespaces Program, an example to illustrate the use of namespaces #include <iostream> namespace first { int first1; int x; } namespace second { int second1; int x; } namespace first { int first2; } int main(){ //first1 = 1; first::first1 = 1; using namespace first; first1 = 1; x = 1; second::x = 1; using namespace second; //x = 1; first::x = 1; second::x = 1; first2 = 1; //cout << 'X'; std::cout << 'X'; using namespace std; cout << 'X'; return 0; }
We will examine the code moving from the start down to the end of the program, examining fragments of it in turn.
#include <iostream>
This just includes the iostream library so that we can use std::cout to print stuff to the screen.
namespace first { int first1; int x; } namespace second { int second1; int x; } namespace first { int first2; }
We create a namespace called first and add to it two variables, first1 and x. Then we close it. Then we create a new namespace called second and put two variables in it: second1 and x. Then we re-open the namespace first and add another variable called first2 to it. A namespace can be re-opened in this manner as often as desired to add in extra names.
main(){ 1 //first1 = 1; 2 first::first1 = 1;
The first line of the main program is commented out because it would cause an error. In order to get at a name from the first namespace, we must qualify the variable's name with the name of its namespace before it and two colons; hence the second line of the main program is not a syntax error. The name of the variable is in scope: it just has to be referred to in that particular way before it can be used at this point. This therefore cuts up the list of global names into groups, each group with its own prefixing name.
3 using namespace first; 4 first1 = 1; 5 x = 1; 6 second::x = 1;
The third line of the main program introduces the using namespace command. This commands pulls all the names in the first namespace into scope. They can then be used in the usual way from there on. Hence the fourth and fifth lines of the program compile without error. In particular, the variable x is available now: in order to address the other variable x in the second namespace, we would call it second::x as shown in line six. Thus the two variables called x can be separately referred to, as they are on the fifth and sixth lines.
7 using namespace second; 8 //x = 1; 9 first::x = 1; 10 second::x = 1;
We then pull the declarations in the namespace called second in, again with the using namespace command. The line following is commented out because it is now an error (whereas before it was correct). Since both namespaces have been brought into the global list of names, the variable x is now ambiguous, and needs to be talked about only in the qualified manner illustrated in the ninth and tenth lines.
11 first2 = 1;
The eleventh line of the main program shows that even though first2 was declared in a separate section of the namespace called first, it has the same status as the other variables in namespace first. A namespace can be re-opened as many times as you wish. The usual rules of scoping apply, of course: it is not legal to try to declare the same name twice in the same namespace.
12 //cout << 'X'; 13 std::cout << 'X'; 14 using namespace std; 15 cout << 'X'; }
There is a namespace defined in the computer in special group of files. Its name is std and all the system-supplied names, such as cout, are declared in that namespace in a number of different files: it is a very large namespace. Note that the #include statement at the very top of the program does not fully bring the namespace in: the names are there but must still be referred to in qualified form. Line twelve has to be commented out because currently the system-supplied names like cout are not available, except in the qualified form std::cout as can be seen in line thirteen. Thus we need a line like the fourteenth line: after that line is written, all the system-supplied names are available, as illustrated in the last line of the program. At this point we have the names of three namespace incorporated into the program.
As the example program illustrates, the declarations that are needed are brought in as desired, and the unwanted ones are left out, and can be brought in in a controlled manner using the qualified form with the double colons. This gives the greater control of names needed for large programs. In the example above, we used only the names of variables. However, namespaces also control, equally, the names of procedures and classes, as desired.
The Compiler analysis analysis compiler DOS.
The Preprocessor
- Syntax:
#warning message #error message
#error and #warning.
Linker.
Internal
static.
External
All entities in the C++ Standard Library have external linkage.
extern.
Variables_1<<_3<<
The bases that these numbers are in are shown in subscript to the right of the number.
Carry bit complement.
Assignment [ ]
This operator is used to access an object of an array. It is also used when declaring array types, allocating them, or deallocating them.
Arrays;
Logical operators
The operators and (can also be written as &&) and or (can also be written as ||) allow two or more conditions to be chained together. The and operator checks whether all conditions are true and the or operator checks whether at least one of the conditions is true. Both operators can also be mixed together in which case the order in which they appear from left to right, determines how the checks are performed. Older versions of the C++ standard used the keywords && and || in place of and and or. Both operators are said to short circuit. If a previous and condition is false, later conditions are not checked. If a previous or condition is true later conditions are not checked.
The not (can also be written as !) operator is used to return the inverse of one or more conditions.
- Syntax:
condition1 and condition2 condition1 or condition2 not condition
- Examples:
When something should not be true. It is often combined with other conditions. If x>5 but not x = 10, it would be written:
if ((x > 5) and not (x == 10)) // if (x greater than 5) and ( not (x equal to 10) ) { //...code... }
When all conditions must be true. If x must be between 10 and 20:
if (x > 10 and x < 20) // if x greater than 10 and x less than 20 { //....code... }
When at least one of the conditions must be true. If x must be equal to 5 or equal to 10 or less than 2:
if (x == 5 or x == 10 or x < 2) // if x equal to 5 or x equal to 10 or x less than 2 { //...code... }
When at least one of a group of conditions must be true. If x must be between 10 and 20 or between 30 and 40.
if ((x >= 10 and x <= 20) or (x >= 30 and x <= 40)) // >= -> greater or equal etc... { //...code... }
Things get a bit more tricky with more conditions. The trick is to make sure the parenthesis are in the right places to establish the order of thinking intended. However, when things get this complex, it can often be easier to split up the logic into nested if statements, or put them into bool variables, but it is still useful to be able to do things in complex boolean logic.
Parenthesis around x > 10 and around x < 20 are implied, as the < operator has a higher precedence than and. First x is compared to 10. If x is greater than 10, x is compared to 20, and if x is also less than 20, the code is executed.
and (&&)
The logical AND operator, and, compares the left value and the right value. If both statement1 and statement2 are true, then the expression returns TRUE. Otherwise, it returns FALSE.
if ((var1 > var2) and (var2 > var3)) { std::cout << var1 " is bigger than " << var2 << " and " << var3 << std::endl; }
In this snippet, the if statement checks to see if var1 is greater than var2. Then, it checks if var2 is greater than var3. If it is, it proceeds by telling us that var1 is bigger than both var2 and var3.
or (||)
The logical OR operator is represented with or. Like the logical AND operator, it compares statement1 and statement2. If either statement1 or statement2 are true, then the expression is true. The expression is also true if both of the statements are true.
if ((var1 > var2) or (var1 > var3)) { std::cout << var1 " is either bigger than " << var2 << " or " << var3 << std::endl; }
Let's take a look at the previous expression with an OR operator. If var1 is bigger than either var2 or var3 or both of them, the statements in the if expression are executed. Otherwise, the program proceeds with the rest of the code.
not (!)
The logical NOT operator, not, returns TRUE if the statement being compared is not true. Be careful when you're using the NOT operator, as well as any logical operator.
not x > 10
The logical expressions have a higher precedence than normal operators. Therefore, it compares whether "not x" is greater than 10. However, this statement always returns false, no matter what "x" is. That's because the logical expressions only return boolean values(1 and 0).
Conditional Operator.".
These functions are very much like printf(), | http://en.wikibooks.org/wiki/C%2B%2B_Programming/Print_version | CC-MAIN-2014-52 | refinedweb | 2,644 | 60.65 |
Primitive types (also known as the fundamental types) common to both C++ and Java are:
A boolean type: bool in C++ and boolean in Java.
Character types: char in both C++ and Java. Additionally, signed char and unsigned char in C++.
Integer types: short, int, long in both C++ and Java. Additionally, byte in Java. Additionally, signed int and unsigned int in C++. The type signed int in C++ is synonymous with int .
Floating-point types: float and double in both C++ and Java. Additionally, long double in C++.
A Boolean type [2] is allowed to have only one of two values: true or false . A Boolean is used to express the result of logical operations. For example,
int x; int y; ... ... bool b = x == y; in C++ boolean b = x==y; in Java
For another example,
C++: bool greater( int a, int b ) { return a > b; } Java: boolean greater( int a, int b ) { return a > b; }
In Java, casting to a boolean type from any other type or casting from a boolean type to any other type is not permitted. Note in particular that, unlike the bool type in C++, boolean values in Java are not integers.
[2] Some of the older C++ compilers do not have bool as a built-in type. However, you can easily create the bool type with the following enumeration:
enum bool {false, true};
Enumerations are discussed further in Chapter 7. Suffice it here to note that this declaration makes the symbols false and true as enumerators of the bool type, in the sense that they represent the complete set of values that objects of type bool are allowed to take.
C++ gives you three different character types:
char unsigned char signed char
Almost universally , a C++ char is allocated one byte so that it can hold one of 256 values. The decimal value stored in such a byte can be interpreted to range from either-128 to 127, or from 0 to 255, depending on the implementation. But in either case, the bit patterns for values between 0 and 127 are almost always reserved for letters , digits, punctuations, and so on, according to the ASCII format. All printable characters belong to this set of 128 values. From the standpoint of writing portable code, note that some of the characters one usually sees in C++ source code may not be available in a particular character set available for C++ programming. For example, some European character sets do not provide for the characters {,}, [,], and so on.
The decimal values of the bit patterns stored in a signed char are always interpreted as varying from -128 to 127, and those for a unsigned char [3] from 0 to 255. So, is a plain char in C++ an unsigned char or a signed char ? That, as mentioned before, depends on the implementation.
A char variable can be initialized by either an integer whose value falls within a certain range or by what's known as a character literal :
char ch = 98; // ch is assigned the character 'b' char x = 'b';
The quantity 'b' is referred to as a character literal or a character constant. A character literal is in reality a symbolic constant for the integer value of the character. In the code fragment
char y = '2'; int z = y + 8; (works for both C++ and Java)
the value of z would be 58 because, under ASCII coding, the integer that corresponds to the bit pattern for the character '2' is 50. [4]
Like C, C++ also allows an individual character to be represented by an escape sequence , which is a backslash [5] followed by a sequence of characters that must be within a certain range. There are two kinds of escape sequences: character escapes and numeric escapes . Character escapes, such as \n , \t , and so on, represent ASCII's more commonly used control characters that when sent to an output device can be used to move the cursor to a new line, or to move the cursor to the next horizontal tab, etc. The character '/n' is frequently called the newline character and '/t' the tab character .
Since character escapes are few in number, a more general form of an escape sequence for representing an individual character is the numeric escape. A numeric escape comes in two forms: hexadecimal and octal . In the following declarations, all initializing x to the same value, the declaration in line (C) uses the hexadecimal form for the escape sequence shown, and the one in line (D) the octal form:
char x = 'b'; // decimal value of 'b' is 98 //(A) char x = 98; //(B) char x = '\x62'; // 62 is hex for 98 //(C) char x = '2'; // 142 is octal for 98 //(D)
In general, the hexadecimal (referred to frequently as just hex) form of a numeric escape in C++ must always be of the form
\xdddd....d
where every character after the letter ‘x' is a hexadecimal digit (0-9 and a-f or A-F to represent the decimal values 0-15). C++ allows any arbitrary number of characters after the letter 'x' as long as each is a valid hexadecimal digit and with the additional stipulation that the decimal value of the hex number does not exceed 255 for 8-bit characters. The hexadecimal number x62 in line (C) represents the decimal 98, which is the ASCII code for the letter b. Similarly, the octal number 142 in line (D) also represents the decimal 98 and, therefore, corresponds again to the same letter, 'b'. Unlike octal numbers in general, an octal escape sequences does not have to begin with a 0. Also, a maximum of three digits is allowed in an octal escape sequence.
These properties of escape sequences require care when they are used as characters in string literals. This point is illustrated with the examples in the following program:
//CharEscapes.cc #include < iostream > #include <string> using namespace std; int main() { string y1( "a\x62" ); cout << y1 << endl; // y1 is string "ab". // Printed output: ab string y2( "a\xOa" ); cout << y2 << endl; // y2 is the string formed by // the character 'a' followed // by the newline character // Printed output: a string y3( "a\nbcdef" ); cout << y3 << endl; // y3 is the string formed by // the character 'a' followed // by the newline character // represented by the character // escape '\n' followed by the // characters 'b', 'c', 'd', 'e', // and 'f'. // Printed output: a // bcdef string y4( "a\xOawxyz" ); cout << y4 << endl; // y4 is the string formed by // character 'a' followed by the // newline character represented by // the numerical escape in hex, '\xOa', // followed by the characters 'w', // 'x', 'y', and 'z'. // Printed output: a // wxyz //string y5( "a\xOabcdef" ); // ERROR //cout << y5 << endl; // because the number whose hex // representation is 'Oabcdef' is // out of range for a char string y6( "a\xef" ); cout << y6 << endl; // Correct but the character after // 'a' may not be printable string w1( "a2" ); cout << w1 << endl; // w1 is the string formed by // the character 'a' followed by // the character 'b'. // Printed output: ab string w2( "a2c" ); cout << w2 << endl; // w2 is the string formed by the // character 'b' followed by the // character 'c'. // Printed output: abc string w3( "a2142" ); cout << w3 << endl; // w3 is the string formed by the // character 'a' followed by the // character 'b' followed by the // characters '1', '4', and '2'. // Printed output: ab142 string w4( "a" ); cout << w4 << endl; // w4 is the string formed by the // character 'a' followed by the // bell character, followed by // the character '9'. Printed // output: a9 string w5( "\x00007p\x0007q\x0007r\x007s\x07t\x7u" ); cout << w5 << endl; // printed output: pqrstu return 0; }
A Java char has 2 bytes. Any two contiguous bytes in the memory represent a legal Java char . Which 16-bit bit pattern corresponds to what character is determined by the Unicode representation. As was mentioned earlier, the integer values 0 through 255 in the Unicode representation correspond to Latin-1 characters and the first 128 of these are the same as the encodings for the 7-bit ASCII character set (except for an additional byte of zeros on the high side). A Java char is unsigned, meaning that its integer values go from 0 through 65,535.
In Java, all of the following four declarations are equivalent:
char x = 'a'; // value of 'a' is 98 //(E) char x = 98; //(F) char x = '\u0062'; // 0062 is hex for 98 //(G) char x = '2'; // 142 is octal for 98 //(H)
As shown in line (G), the hex form of a numeric escape in Java begins with the letter 'u', as opposed to the letter 'x' for C++. In general, the hex form of a numeric escape in Java must always be of the form
\udddd
where where each d is a hexadecimal digit. The declaration in line (H) above uses an escape sequence in its octal representation. In all four cases, the value of x will be the same, the letter 'a'. Comparing the numeric escapes in lines (E) through (H) for Java and in lines (A) through (D) for C++, we note that only the hex versions are different. The hex version for Java must consist of four hex digits, where C++ allows an arbitrary number of hex digits.
Suppose you are translating a C++ program into a Java program, is it always possible to substitute Java's / udddd escape for C++'s / xd … d escape of an identical decimal value (that is under 256)? Not so. For example, the declaration
0 char ch = '\xOOOa'; // ok in C++ //(I)
gives us a valid char in C++ consisting of the newline character. An equivalent Java declaration
char ch = '\uOOOa'; // ERROR in Java //(J)
is illegal. By the same token, the second string literal we used in the C++ program CharEscapes.cc
string y2 = "a\xOa"; // ok in C++ //(K)
is legal. However, a comparable declaration in Java
String s = "a\uOOOa"; // ERROR in Java //(L)
is illegal for constructing a string literal. The reason for why the / udddd escapes shown in lines (J) and (L) cause errors in Java has to do with the fact that the very first thing a Java compiler does with a source file is to scan it for resolving on the fly all / udddd escapes. As each / udddd escape is encountered , it is replaced immediately by the corresponding 2-byte Unicode character. If a / udddd escape represents the newline character, as is the case with the escape sequence / uOOOa , a newline is inserted into the source file at that point immediately. [6] The same thing happens with the Unicode escape / uOOOd , which represents carriage return.
The above discussion should not be construed to imply that you cannot embed control characters such as the newline or the carriage-return characters in a character or a string literal. When, for example, a newline character is desired, one can always use the character escape '/n' for achieving the same effect.
Shown below is Java's version of the C++ program CharEscapes.cc presented earlier in this section. This program retains as many of the string literals of the C++ program as make sense in Java. We have also avoided the use of/ uOOOa as a newline character in the string literals.
In the program shown below, note in particular that whereas the string y5 resulted in an error in C++, Java has no problems with it. Java forms a Unicode character out of the escape / uOabc , leaving the rest of the characters for the string literal. But since Java cannot find a print representation for the Unicode character, it outputs a question mark in its place when the print function is invoked on the string. The same is true for the print representation of the Unicode character formed from the escape sequence in y6 . The string literals w1 - w4 use octal escapes in the same manner as we showed earlier for the C++ program.
//CharEscapes.java class Test { public static void main( String[] args ) { String y1 = "a\u0062"; print( "y1:\t" + y1 ); // Printed output: ab String y2 = "a\n"; print( "y2:\t" + y2 ); // Printed output: a String y3 = "a\nbcdef"; print( "y3:\t"+ y3 ); // Printed output: a // bcdef String y4 = "a\nwxyz"; print( "y4:\t" + y4 ); // Printed output: a // wxyz String y5 = "a\uOabcdef"; print( "y5:\t" + y5 ); // Printed output: a?def String y6 = "a\uOOef"; print( "y6:\t" + y6 ); // Correct, but the character // following 'a' may not have // a print representation String w1 = "a2"; print( "w1:\t" + w1 ); // Printed output: ab String w2 = "a2c"; print( "w2:\t" + w2 ); // Printed output: abc String w3 = "a2142"; print( "w3:\t" + w3 ); // Printed output: ab142 String w4 = "a"; print( "w4:\t" + w4 ); // Printed output: a9 } static void print( String str ) { System.out.println( str ); } }
[3] Unsigned chars of C and C++ are useful for image processing work. Most color cameras produce 8-bit values in each of the color channels, R, G, and B. You'd want to read these values into an unsigned char . If you read them into a signed char , unless care is taken the high values could get interpreted as negative numbers during downstream processing.
[4] The automatic type conversion involved here from char to int for y is known as binary numeric promotion in both C++ and Java. See the last paragraphs of Sections 6.7.1 and 6.7.2 for when such conversions can be carried out automatically in C++ and in Java, respectively.
[5] A second common use of backslash inside either a double-quoted string or between a pair of single quotes is that it tells the system to alter the usual meaning of the next character. For example, if you wanted to set the value of a character variable to a single quote that is ordinarily used as a character delimiter , you would not be able to say
char x = "'; \ERROR
Instead, you could use a backslash in the following manner
char x = '\";
to suppress the character-delimiter meaning of the single quote that follows the backslash. Another illustration of this would be if you wanted to initialize a character variable to the backslash itself:
char x = '\';
where the first backslash alters the usual meaning of the backslash that follows.
[6] Java lexical grammar has the notion of a LineTerminator , which is not considered to be one of the InputCharacters from which Tokens are formed. When the Unicode escape / uOOOa is encountered during the initial scan of a source file, it is replaced by a LineTerminator [23]. | http://flylib.com/books/en/1.422.1.49/1/ | CC-MAIN-2013-20 | refinedweb | 2,419 | 55.58 |
Details
Description
This is a nasty one.
We currently support URIs of the following form in camel-cxf:
"cxf://{}PersonService&portName={}soap"
As curly brackets are not valid, URIs like above are invalid. Unfortunately I suspect there are too many users who use this format now to just fix it so we need to deprecate this format, find a workaround and a solution.
The solution I am proposing is to use another parameter: targetNamespace to replace the value between the curlies for the serviceName. The portName should not be a QName actually either. As such, the example above would become:
"cxf://"
I will look for a workaround too, to not break existing code too much.
Activity
- All
- Work Log
- History
- Activity
- Transitions
"{" and "}
" are not URI safe character, we did some work in Camel when it parsers endpoint URI to support it.
If you are passing the URI which is encoded to camel, you may face that kind of trouble.
As most user don't do it, we don't get this kind of alarm before.
There is a question just comes into my mind, what if the user just pass a URI which is not encoded with UTF-8, or it is not be encoded.
@Willem, as per the wsdl spec, the port @name is an 'nmtoken' not a 'qname'. Being part of a service it shares its namespace. Does CXF interpret this differently?
Even if that were the case, we'd define two parameters then, something like serviceNamespace, and endpointNamespace, as you suggest, so it's not a biggie.
I agree we need to support the current way, flawed as it is until 3.0 and I agree with the suggestion to fix and deprecate. I will not comment on the reason why we didn't catch it until now, I am as guilty as anyone.
We can expect URIs to be UTF-8 encoded. Supporting other encodings would be a feature we could consider for the future, but I am not too worried about that now. If a URI not properly encoded is passed now, it's a toss up. After looking into the details of the code I can give examples that are invalid, yet work, and I can give examples that fail miserably (without even an clear explanation of what went wrong). It's fixable though in a few ways and I think I have a solution.
People have been using this for years with no general problem at all. Why suddenly all the fuzz and marking it as critical and whatnot?
The portName and serviceName may share difference namespace, we may need add options for this issue.
In CxfEndpoint spring configuration we use the endpointName for the portName, we also unify the options definition.
I suggest we use the endpointName for the portName, as the CXF is using the endpointName which is also used in WSDL2 definition.
In CXFSpringEndpoint ,there are some method for set and get these options.
like serviceNamespace, serviceLocalName, endpointNamespace, endpointLocalName
If we move these method into CXFEndpoint, camel-cxf URI can be a good URI citizen.
If we don't support the of serviceName and portName option for the camel-cxf URI, it will hurt the user.
So I suggest to deprecate them in Camel 2.9 and remove the support of these option in URI in Camel 3.0. | https://issues.apache.org/jira/browse/CAMEL-4405?focusedCommentId=13095710&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-22 | refinedweb | 560 | 62.78 |
How to make a heap profiler
We'll see how to make a heap profiler. Example code for this post makes up heapprof, a working 250-line heap profiler for programs using malloc/free.
It works out of the box on Linux (tested on "real" programs like gdb and python). The main point though is being easy to port and modify to suit your needs. The code, build and test scripts are at github.
Why roll your own heap profiler?
- It's easy! And fun, if you're that sort of person. What, not reasons enough? OK, how about...
- Your platform doesn't have one - a common case on embedded systems lacking an OS, standard APIs to output data, etc.
- Your real-time program overwrites the heap after a few hours. You'd like to know which buffer overflows. Valgrind doesn't run on the device/is too slow. How does a custom heap profiler help here? Read on!
- You want to present stats differently from the way your profiler does.
- You only want to instrument malloc some of the time to minimize the slowdown.
- ...
What a heap profiler needs from the platform
You can't write a heap profiler in portable, standard C. You need a few things that most platforms have, but C doesn't specify an interface for. What you need are ways to:
- Intercept calls to malloc, free, calloc and realloc
- Get the current call stack at run time
- Dump the contents of memory (as in core dump)
- Match instruction addresses to source code lines
Given that, we can associate every allocated block with a call stack. Then we cluster allocations by call stack. Finally, we sort the call stacks by the total amount of allocated memory.
To illustrate the idea, here's a rough sketch of how heap memory normally looks like (each line is one memory word - 32/64b):
size (used by malloc/free) user data (malloc returns this address - size is "invisible" to the user) ... size (of the next block) user data ...
And here's how heap memory looks like with heapprof's malloc:
size (used by the underlying malloc/free) "HeaP" (magic string; underlying malloc gives us this address) user size (user's request - without heapprof's overhead) caller 0 (the address malloc returns to) caller 1 (the address caller 0 returns to) ... "ProF" (magic string) user data (our malloc returns this address to the user) ... size "HeaP" user size caller 0 caller 1 ... "ProF" user data ...
A program reading a core dump containing this heap can simply look for blocks enclosed in "HeaP"..."ProF". Thus it will find the sizes of all live blocks - and the call stacks responsible for each allocation.
While we're at it - why store metadata at the beginning of chunks and not at the end?
Most often, buffers overflow due to large positive array indexes, not negative indexes. Let's say someone's array overflows, messing up the heap and dumping core. Then if we ran under heapprof, we'll see who allocated the block right before the point of corruption. This will narrow down our search considerably.
You can tell that I'm writing from experience, can't you?.. Ah, safety-critical software in zero-safety languages... what a way to make a living. Anyway, the point is, a heap profiler includes a "heap annotator", which is a handy debugging tool in its own right. Because it's much easier to make sense of a heap with call stacks attached to blocks.
So how do we do all that - intercept malloc, save the call stack, dump the memory and match return addresses to source lines? Let's answer these one by one.
Intercepting malloc and friends
gcc -shared -fPIC -o heapprof.soheapprof.c ... env LD_PRELOAD=heapprof.so program args ...
That's all - add to $LD_PRELOAD a shared object redefining malloc, calloc, free and realloc. (Be careful to not redefine anything else - use static to hide symbols.)
Our redefined malloc will allocate the bytes for its caller plus a few more for the call stack. Allocate - how? The simplest way is to call the original malloc (it's similar with free):
typedef void* (*malloc_func)(size_t); static malloc_func g_malloc; //at init time: g_malloc = (malloc_func)dlsym(RTLD_NEXT, "malloc"); //upon malloc: void* chunk = g_malloc(size+EXTRA);
In statically linked binaries common on embedded systems, just adding your malloc, etc. to the build is typically enough to override the standard functions. Calling the original functions is hard or impossible though. I'd pull in an open source malloc - like Doug Lea's dlmalloc - rename the functions to real_malloc or whatever and call them from my own versions.
Getting the current call stack
GNU C has the wonderful backtrace() function which just does the work. Nifty!
void** p = (void**)chunk; //fill the metadata p[START_INDEX] = START_MAGIC; backtrace(p+SIZE_INDEX, nframes+1); //+1 for &malloc p[SIZE_INDEX] = (void*)size; // overwrite &malloc p[END_INDEX] = END_MAGIC; //give the user a pointer past the metadata return (char*)p + EXTRA;
Unfortunately, not all systems have backtrace - not even all GNU C ports (say, there's no backtrace on MIPS, AFAIK). Without backtrace, getting the call stack yourself is still relatively easy, though it can get a bit ghastly. If you care, you can read a bunch about it here.
Dumping core
If there's one thing C is a good at, it's dumping core:
int*p=0;*p=0;
Segmentation fault (core dumped)
(Is there a more succinct way? int*p=*p comes to mind, but it might accidentally not crash if p is un-initialized to a legitimate pointer. *(int*)0=0? Any other suggestions for shaving off characters?..)
What if these barbaric means don't suit your ends? gdb lets you place a breakpoint at your function some_func, and dump core thusly:
gdb program -ex "b some_func" -ex r -ex "gcore my.core" -ex q
You can do this multiple times in the same process, getting several heap state snapshots.
Or let's say you're a Python process with C modules you want to profile:
os.kill(os.getpid(), 11)
...or there's `kill -SEGV process-id` etc. etc.
On an embedded system, you can dump memory with any method available - using a JTAG probe or having the program send memory over some communication channel, etc. It won't be a "real" core dump in a format recognized by debuggers. But as we'll see in the next section, it's probably enough.
Matching return addresses to source code
Now our offline heap stats analyzer, heapprof.py, searches for block metadata enclosed in "HeaP...ProF" and finds block sizes and stacks:
class Block: def __init__(self, metadata): # 'I',4 for 32b, 'Q',8 for 64b machines self.size = struct.unpack('I', metadata[0:4])[0] self.stack = struct.unpack('%d'%(len(metadata)/4 - 1)+'I', metadata[4:])
So now block.stack is a list of return addresses, and
{addr for block in blocks for addr in block.stack}
...is the set of all return addresses in our core dump. How do we match them to source lines and function names?
We can pipe the addresses to `addr2line -f -e program`:
from subprocess import * addr2line = Popen('addr2line -f -e'.split()+[exe], stdin=PIPE, stdout=PIPE) for addr in addrs: addr2line.stdin.write('0x%x\n'%addr) addr2line.stdin.flush() func = addr2line.stdout.readline().strip() line = addr2line.stdout.readline().strip()
However, this doesn't work for return addresses from shared libraries - addr2line doesn't know to which addresses they got loaded.
What does work is gdb - if it's given the core dump telling it where shared libraries got loaded:
gdb = Popen(['gdb',prog,core], stderr=STDOUT, stdin=PIPE, stdout=PIPE) for addr in addrs: gdb.stdin.write('info symbol 0x%x\n'%addr) gdb.stdin.write('list *0x%x\n'%addr) gdb.stdin.write('printf "\\ndone\\n"\n') gdb.stdin.flush() s = '' while s != 'done': s = gdb.stdout.readline().strip() if 'is in' in s: line = s.split('is in ')[1] if 'in section' in s: func = s.split('(gdb) ')[1]
The script looks kinda ugly, but the upshot is that info symbol 0xwhatever tells you the function name (and then some), while list *0xwhatever tells you the source line number (and then some).
So who needs addr2line when we have gdb? On embedded systems, often there's just one static binary, so addr2line is sufficient. And on the other hand, there are no core dumps coming out in a standard format. Maybe all you have is a JTAG probe and you dump your memory in one big chunk. So you can't use `gdb program core`.
In that case, heapprof.py will work just fine if you setenv HEAPPROF_ADDR2LINE. It doesn't care if it's a "real" core dump or just a raw memory dump - searching for "HeaP...ProF" is equally easy. Only gdb cares, and $HEAPPROF_ADDR2LINE avoids using gdb.
If you use a proprietary compiler, then maybe it doesn't have addr2line. Bummer. If the executable file format is standard (ELF/COFF/whatever), then gdb's info symbol command will work (but list won't - not unless DWARF debug info is available.) Also, proprietary compilers are bad for business in embedded systems. But that's a rant for another time.
Clustering and sorting
Clustering blocks by their allocating stack, and sorting the stacks by sum of block sizes is easy:
stack2sizes = {} for block in blocks: stack2sizes.setdefault(block.stack,list()).append(block.size) total = sorted([(sum(sizes), stack) for stack, sizes in stack2sizes.iteritems()])
Now we can simply print out the sorted stacks using the symbolic information obtained above from addr2line/gdb.
Everybody likes to malloc
Why did heapprof take me several hours to write instead of just one - several hours spread over several days, so here I am programming in my spare time after switching to a part-time job specifically to program less? What's the root cause apart from me being a complete dork?
The problem is that everybody mallocs. dlsym mallocs. backtrace mallocs. pthread - which I only need because those others malloc - also mallocs. Sheesh!
So what happens is, you're inside malloc. You want to log the call stack, so you call backtrace. Backtrace calls malloc. Ought to avoid infinite recursion, which we do with a global variable. Now that global variable needs to be protected with a mutex. Which we have to initialize before the first call to malloc - and that initialization mallocs. Also we need dlsym initially to get the original &malloc, but dlsym also mallocs.
So we need to be able to malloc without &malloc, initially. So I use sbrk for that, and I need free to not use &free to try and free that sbrk'd stuff 'cause that will fail miserably. And so on
It's all in heapprof.c if you want to have a look. I don't think it's very interesting; it does make a heap profiler that much harder to write, but it all still fits into 111 sloc, so it's no big deal, really. The really silly thing is it pulls threading into this, because of the global variable guarding against backtrace's malloc calls.
I suspect that backtrace only mallocs upon initialization and maybe not at all in statically linked binaries. So maybe if you're porting to an embedded system, you don't need to worry about threading issues after all. I just wanted to write something "robust" for "the general case".
Porting
If you want to port heapprof or bits of it to your platform, the issues you might need to deal with are described here. Basically you might need to tweak the platform-specific things we've discussed above, plus a couple others like alignment and endianness.
Conclusions
- A heap profiler is a really simple tool to write
- A heap profiler annotates heap blocks with metadata at run time - this in itself can a be a great debugging tool
- I'm a hopeless dork desperately needing to program less
Previous post by Yossi Kreinin:
Delayed printf for real-time logging
Next post by Yossi Kreinin:
The habitat of hardware bugs
- Write a CommentSelect to add a comment
Quote: Everybody likes to malloc
Umm.. no.
I will not use C++ on a critical system because of the desire to malloc when creating any class variable, or calling a C++ library function. i will put in a dummy malloc() to catch those cases.
If I *do* need "dynamic buffers", I do an analysis to determine the number and size of buffers and create buffer pools. Part of the analysis is: say I need 10 bytes & 20 byte buffers. I will look at when & how long I need both buffer sizes. Frequently, I can create a 20 byte buffer pool and just use it for both cases.
It is also easy to instruments the buffer pools to determine usage, malloc without free, tracing the path of how the buffer is shared between tasks, etc.
This method works on fairly small to large embedded systems.
If you really need to use malloc(), this method looks like it. | https://www.embeddedrelated.com/showarticle/600.php | CC-MAIN-2020-34 | refinedweb | 2,194 | 65.01 |
Assigning Grades coderinme
The certain instructor assigns letter grade for his course based on the following table:
Score Grade
>= 90 A+
>= 85 A
>= 80 B+
>= 75 B
>= 65 C+
>= 60 C
>= 55 D+
>= 50 D
< 50 F
Write a class, Grader, which has: an instance variable, score, an appropriate constructor and appropriate methods
Now write a demo class to test the Grader class by reading a score from the user, using it to create a Grader object after validating that the value is not negative and is not greater then 100. Finally, call the letterGrade() method to get and print the grade. See figure (b) for sample run.
Program:
import java.io.*; class Grader { private int score; public Grader(int s) { score=s; } public String letterGrade() { String grade; if(score>=90) grade="A+"; else if(score>=85) grade="A"; else if(score>=80) grade="B+"; else if(score>=75) grade="B"; else if(score>=65) grade="C+"; else if(score>=60) grade="C"; else if(score>=55) grade="D+"; else if(score>=50) grade="D"; else grade="F"; return grade; } } public class Q22 { public static void main(String[] args) throws IOException { System.out.println("please enter the marks of the student :"); BufferedReader in=new BufferedReader(new InputStreamReader(System.in)); int grad=Integer.parseInt(in.readLine()); Grader g1=new Grader(grad); System.out.println("The grade of the student is :"+g1.letterGrade()); } } | https://coderinme.com/assigning-grades-coderinme/ | CC-MAIN-2019-09 | refinedweb | 231 | 50.57 |
PRF is designed to help coding RESTful endpoints with minimal code
Project description
Pyramid RESTful Framework (PRF) is designed to help coding RESTful endpoints with minimal code. It takes care of lots of reasonable defaults and boilerplate code.
Setup.
First, lets install pyramid and create an app:
virtualenv myapp pip install pyramid pcreate -s starter myapp pip install -e .
Now if we run
pserve development.ini
and navigate to we will see the standard pyramid app. Boring.
Lets install
httpie to use it for doing requests to our endpoints. Feel free to use curl or any other http client as long as it supports CRUDs.
And lets add prf to the mix!
pip install git+
And add resources.
Modify
__init__.main function of myapp to look like:
def main(global_config, **settings): config = Configurator(settings=settings) config.include('prf') #pyramid way of adding external packages. root = config.get_root_resource() #acquire root resource. user = root.add('user', 'users', view='prf.view.NoOp') # declare `users` root resource user_story = user.add('story', 'stories', view='prf.view.NoOp') # declare `nested resource `users/stories` #per pyramid, must return wsgi app return config.make_wsgi_app()
The following endpoints are declared with the code above:
users/{id} users/{user_id}/stories/{id}
Try these:
# will get all declared resources http 0.0.0.0:6543/_ # will get users http 0.0.0.0:6543/users will get stories for a user with id 1 http 0.0.0.0:6543/users/1/stories
'NoOp' view, as name suggests, does not do much. We will need to create our own views for each resource. In our case UsersView and UserStoriesView.
Lets modify views.py to add the following:
from prf.view import BaseView Users = [ { 'id': 0, 'name':'Alice', }, { 'id': 1, 'name':'Bob', }, { 'id': 2, 'name':'Katy', }, ] class UsersView(BaseView): def index(self): return Users def show(self, id): return Users[int(id)] def create(self): Users.update(**self._params) def delete(self, id): Users.pop(int(id))
We need to change the view argument for the
users resource to point to our new class in the
main:
user = root.add('user', view='myapp.views.UsersView')
Restart the server and try:
# list users http 0.0.0.0:6543/users # delete a user with id 1 http DELETE 0.0.0.0:6543/users/1 # user 1 is gone http 0.0.0.0:6543/users
Above, we declared
index,
show,
create and
delete actions which correspond to: GET collection, GET resource, POST resource and DELETE resource respectively. You could also declare
update, which would correspond to the PUT method. You dont need to declare all of them, only those you need. The missing ones will automatically return 405 Method Not Allowed error.
Comment out the
index action and try:
http 0.0.0.0:6543/users
Happy RESTing !
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/prf/ | CC-MAIN-2022-27 | refinedweb | 496 | 68.97 |
If you enjoy talks from 360 AnDev, please support the conference via Patreon!
Android developers often fetch data from HTTP endpoints to display it on the device. These endpoints can be a major source of frustration, especially when they are backed by complex legacy systems. One easy source of testing that will actually speed the pace of development from the beginning is end-to-end integration tests with those endpoints. However, in the end the tests can only be as stable as the environment they’re testing. I shall present a framework for creating these tests and partially automate creating an OkHttp interceptor that will provide mocked responses to ensure the stability of the tests. With this framework and this interceptor, integration tests can distinguish between a failure caused by a change to the behavior of the endpoint and a failure caused by a change to the application source.
Introduction
I’m Chris Koeberle. I’m an Android developer at Bottle Rocket. We’re an agency. We make apps for clients. What we’re going to talk about today is the problem that I faced, the requirements for implementing my solution, how I implemented the solution, and then mocking the responses for the network calls that I test. This is going to be code-heavy, but you can find the code at this link.
In my Android development career, I have mainly worked on travel sector apps, which means that I’m working against the backend that has a backend that has a backend that’s controlled by someone that nobody knows who it is.
The problem
There are rules that nobody has documented, but when you try and make a purchase, all of a sudden, you get back 500 Internal Server Error and that’s the entirety of your explanation for where you went wrong. My best example of that was the server told me that my user had a gender of MALE (capital M, capital A, capital L, capital E).
I turned around and I tried to make a purchase with that user with a gender of all caps male, 500 Internal Server Error. It turns out, and at least someone was available to explain this to us once we figured out, found out that we had a problem. The server that told us, all caps MALE, actually wanted capital M, lowercase a, lowercase l, lowercase e.
My main goal in initial development is to reduce the iteration time. Instead of having to click all the way through all of my screens to get to where I have a purchase button, I can do it in as little time as possible.
The solution
The worst plan is running the app.
The first time I did a travel sector app, I built up the screen that gives me a list of hotels. I built up the screen that gives me the results and the screen that lets me input the user’s information and the payment information.
I’m finally to the screen and I’m going to make the purchase. And each time I go to a new screen, then I have to make the API calls, test them, get them working. I get to the purchase screen, I click Purchase, and get 500 Internal Server Error. Every time I make a change, I have to build, deploy, and go through everything again.
Then Postman came out, and Postman’s great. I certainly advocate the use of Postman, but one drawback of Postman is that it will export some Java for me, but I don’t have working code in my app that will do the same thing. Charles and Fiddler are helpful too, but again, my goal is to end up with working code at the exact same moment that I get the call to the backend working.
First, I write a bunch of integration tests. In principle (and this is what I did for the app that I’m currently working on), you can write your entire networking layer in an Android app that has zero activities.
I spent two weeks building 40 endpoints, getting the purchase flow working, at least for enough cases that we could start building a UI against it, and then we started working on UI. Since I’m building these integrations tests, I’d also like them to have some ongoing value, but the problem with that is our servers are flaky and the server that worked last week isn’t going to work this week. We have on our whiteboard, “Use QA2 today.” And sometimes we erase the QA2 and we don’t know what to put. We need to be able to see that everything we wrote is still working even though there’s a remote failure.
Also, sometimes you have an endpoint that has side effects. You don’t want to book every room at a hotel, even on QA. If you’re enrolling a new user every time someone pushes, someone is probably going to complain at some point.
Requirements
Our goal is to mock the results that we’re getting back from the server so that we can rely on those even when we’re not hitting the server. Because we’re saying the word testing, we need Dependency Injection or Service Locator. We’re going to need OkHttp because I’m using an interceptor. This only works with OkHttp.
We’ll need something that gets us off the main thread. My examples are going to use RxJava. At Bottle Rocket, for a while, we were using something that wasn’t RxJava and it had some Looper problems, so then we had to mock the Looper where the Looper wasn’t releasing threads when it was supposed to. RxJava doesn’t have this problem.
The example
You can follow along here.
I can’t walk through any of the systems that I’ve worked on with their amazing 500 Internal Server errors, but that’s okay because they are more complex examples than we need. I do want to be able to show making a POST call, making a call that changes something on the server.
That would normally require that have login but then I’m going to have to also have the complexity of wrapping login around everything, so we’re going to look at GitHub because GitHub is so friendly, they let us create an anonymous Gist. This example is going to cover most of the difficulties I’ve come up with in the hundreds of endpoints that I have written against.
Dependency injection
Most people use dependency injection, but for this
ServiceLocator is going to be easier to read.
Get more development news like this
public class ServiceLocator { Map<Class<?>, Object> mLocatorMap; private ServiceLocator() { mLocatorMap=newHashMap<>(); } private static class SingletonHolder { static final ServiceLocator instance = new ServiceLocator(); } private static ServiceLocator getInstance() { returnSingletonHolder.instance; } public static <T> void put(Class<T> type, T instance) { if (type == null) { throw new NullPointerException(); } getInstance().mLocatorMap.put(type, instance); } public static <T> T get(Class<T> type) { return (T) getInstance().mLocatorMap.get(type); } }
public class ServiceInjector { public static <T> T resolve(Class<? extends T> type) { returnServiceLocator.get(type); } } ServiceLocator.put(RxEndpoints.class, new RxEndpointsImpl()); Flowable<User> flowable = ServiceInjector .resolve(RxEndpoints.class) .getUser("bottlerocketapps");
ServiceInjector.resolve is the call that says we’re getting something out of the
ServiceInjector. We’re getting an instance of our
RxEndpoints class. And then we make calls against that class.
We’re calling
getUser and this will return the GitHub user that is the company I work for.
OkHttp
We also need OkHttp. That needs to be injectable because we’re going to inject a different interceptor depending on what kind of mocking behavior we want.
public class OkHttpClientUtil { private static final long READ_TIMEOUT = 120; public static OkHttpClient getOkHttpClient(Context context, MockBehavior mock) { OkHttpClient okHttpClient = null; OkHttpClient.Builder builder = new OkHttpClient.Builder(); if(mock != MockBehavior.DO_NOT_MOCK) { builder.addInterceptor(new MockedApiInterceptor(context, mock)); } okHttpClient = builder .readTimeout(READ_TIMEOUT, TimeUnit.SECONDS) .retryOnConnectionFailure(false) .build(); return okHttpClient; } } ... ServiceLocator.put(OkHttpClient.class, OkHttpClientUtil.getOkHttpClient(null, MockBehavior.MOCK)); ... subscriber.onNext(ServiceInjector.resolve(OkHttpClient.class).newCall(request).execute());
This is where the interceptor goes. It does the mocking. And then we’re putting it in the
ServiceLocator. When we make our OkHttp calls, we resolve the client and we make a call against it. This is also a good practice injecting your OkHttp because there are other things that you want to have always on the OkHttp, things that you want to be able to change. I recommend it.
RxJava
I am not an RxJava expert - please do not use this as a tutorial for how to use RxJava. I upgraded the slides to conform to RxJava 2.1.1.
private Flowable<Response> getResponse(final HttpUrl url) { return Flowable.fromCallable(new Callable<Response>() { @Override public Response call() throws Exception { System.out.println(url); Request request = ServiceInjector.resolve(ServiceConfiguration.class).getRequestBuilder() .url(url) .build(); return ServiceInjector.resolve(OkHttpClient.class).newCall(request).execute(); } }); } @Override public Flowable<User> getUser(String userName) { HttpUrl url = ServiceInjector.resolve(ServiceConfiguration.class).getUrlBuilder() .addPathSegment(USER) .addPathSegment(userName) .build(); return getResponse(url) .flatMap(new FetchString()) .flatMap(new ToJson<User>(UserImpl.class)); }
This is showing the Rx code. I have a getResponse method that’s going to get my service configuration, so I use a service configuration so I can switch environments. Build my URL, and then call out OkHttp. getUser is going to use this getResponse. It’s also going to use Fetch String and ToJson:
private class FetchString implements Function<Response, Flowable<String>> { @Override public Flowable<String> apply(final Response response) { return Flowable.fromCallable(new Callable<String>() { @Override public String call() throws Exception { if (!response.isSuccessful()) { throw new IOException(response.message()); } String responseString = response.body().string(); System.out.println(responseString); return responseString; } }); } } private class ToJson<T> implements Function<String, Flowable<T>> { private final Class mTargetClass; private ToJson(Class mTargetClass) { this.mTargetClass = mTargetClass; } @Override public Flowable<T> apply(final String s) { return Flowable.fromCallable(new Callable<T>() { @Override public T call() throws Exception { return (T) ServiceInjector.resolve(Gson.class).fromJson(s, mTargetClass); } }); } }
FetchString does the work of pulling the string out of the response body. It’s totally unchecked in this version, so all of your API calls should succeed and everything will be fine. That’s not good advice. And then we have the
ToJson, which again, blindly assumes that we’re going to get exactly what we wanted in our body.
Testing the Observable
The only testing framework that I’m using is JUnit, and this is why my iteration times are three to five seconds. I don’t have to spool up anything. I don’t have to connect to anything. I don’t have to start a server. All I’m using is JUnit.
I change capital
MALE to first letter capital
Male, and three to five seconds later, or if I’m changing a bunch of calls together, then however long it takes for all the server calls to complete. I have a red light or a green light. We have a few dependencies that are going to stay the same for all the tests.
Our base test is going to set those up, and it’s Before:
public class BaseApiTest { @Before public void setup() { ServiceLocator.put(RxEndpoints.class, new RxEndpointsImpl()); ServiceLocator.put(ServiceConfiguration.class, new ServiceConfigurationImpl()); ServiceLocator.put(Gson.class, GsonUtil.getGson()); } }
We’re going to have our endpoints. We’re going to have our
ServiceConfiguration, We’re going to have Gson.
ServiceConfiguration can let us change environments. Obviously we don’t have a staging environment for GitHub, at least I don’t. Gson, again, is not a recommendation, it’s just something that makes setting up examples fast.
Each test is going to inject its own HTTP client. This way, each test can control how it’s mocked. And again, we’re not writing unit tests. Our goal isn’t to generate exhaustive coverage. Our goal is to make sure that the call completed, to make sure that the call returns something that is like what we expected.
@Test public void testOrganization() { ServiceLocator.put(OkHttpClient.class, OkHttpClientUtil.getOkHttpClient(null, MockBehavior.MOCK)); Flowable<Organization> flowable = ServiceInjector.resolve(RxEndpoints.class).getOrg("bottlerocketstudios"); TestSubscriber<Organization> testSubscriber = new TestSubscriber<>(); flowable.subscribe(testSubscriber); testSubscriber.assertComplete(); List<Organization> orgList = testSubscriber.values(); assertEquals(orgList.size(), 1); assertEquals(orgList.get(0).getName(), "Bottle Rocket Studios"); }
We’re getting the Bottle Rocket Studios organization. We are asserting that the call completed. We’re asserting that we got one result. We’re asserting that the name of the result we got is Bottle Rocket Studios. This is going to make your code coverage unreliable when you run these tests. I had something like 8% code coverage from these tests because it’s hitting all of the code that I use to set up all of my endpoints. It’s hitting a lot of the logic that runs through the app, but it’s not testing it. It’s only testing, “Can I successfully get some result back from the server?” If you’re a slave to code coverage, you probably want to disable these for your coverage reports.
Then we’re going to test the Observable when we chain a call. This is the beginning of a list detail flow:
public class GistTest extends BaseApiTest { @Test public void testAllGists() { ServiceLocator.put(OkHttpClient.class, OkHttpClientUtil.getOkHttpClient(null, MockBehavior.MOCK)); Flowable<Gist[]> flowable = ServiceInjector.resolve(RxEndpoints.class).getGists(); TestSubscriber<Gist[]> testSubscriber = new TestSubscriber<>(); flowable.subscribe(testSubscriber); testSubscriber.assertComplete(); List<Gist[]> gists = testSubscriber.values(); Gist gist = gists.get(0)[0]; Flowable<Gist> gistFlowable = ServiceInjector.resolve(RxEndpoints.class).getGist(gist.getId()); TestSubscriber<Gist> gistTestSubscriber = new TestSubscriber<>(); gistFlowable.subscribe(gistTestSubscriber); Gist detailGist = (Gist) gistTestSubscriber.values().get(0); assertEquals(detailGist.getDescription(), gist.getDescription()); } }
We’re going to call
getGists and we’re going to get the Gists on GitHub in order from the first one. I think it naturally cuts off at 20. We’ll get a list for 20 Gists and then we’re going to pick the first one from that and we’re going to do the detail call on it.
We pull out the ID, we do
getGist. That’s going to give us a more detailed view of that Gist. And then we’re going to check. And this is the only assert that matters is the last line there,
assertEquals. We want the detail description to match the description that we got from the list result. We’re checking to make sure that this detail call succeeded, making sure that it matches what we expected it to match.
A Mocked Interceptor
Next up is my mocking interceptor, and that’s in the repository but I also put a link to the Gist that has that file.
public abstract class AbstractMockedApiInterceptor implements Interceptor { public AbstractMockedApiInterceptor(Context context, MockBehavior mock) { if (context != null) { mContext = context.getApplicationContext(); } else { mContext = null; } mMockBehavior = mock; } ResponseFilename); return builder.build(); } } } if (fetchNetworkResults()) { Response response = chain.proceed(request); response = memorializeRequest(request, response, mockFound); return response; } throw new IOException("Unable to handle request with current mocking strategy"); } protected boolean fetchNetworkResults() { return mMockBehavior != MockBehavior.MOCK_ONLY; } protected boolean ignoreExistingMocks() { return mMockBehavior == MockBehavior.LOG_ONLY; } protected String getMockedMediaType() { return "json"; } }
Right there at the top, we’re looking for a context and if we don’t have a context, we’re not saving it. The reason we need a context is because I want these mocks to be able to run on the device.
Thing that this ends up being useful for is when we get changes on the server on our flaky servers, the first thing we do is we check to make sure that they’re up, and if they’re up, then we write our tests against them. We grab the mocks, we grab sufficient mocks that we can use the app and get to that state.
I’m no longer sensitive to the concerns of the other developers on the project when they say, “I can’t work on this because the server is down” or, “The server that has the changes that we need to test against is down.” After months of telling me that, they have finally been won over to this idea. The first thing they do is they grab mocks and now, they can get to the screens that they need to change, they can test the changes without having to rely on a server. And now that they’ve finally been won over, they’re happy that they do this.
We’re also going to set off a
MockBehavior. The mock behaviors that I’m supporting are
MOCK_ONLY,
LOG_ONLY,
MOCK, and
DO_NOT_MOCK. If we’re doing
DO_NOT_MOCK, then we don’t even add the interceptor, so I’m not testing against that here. If we’re doing
MOCK, then we try to make the request. We try to find a mocked version of the request. If we can’t find it, if nothing that we’ve mocked matches, then we’re going to return the actual network call and we’re going to save that network call to make it easier for you to add the mock.
That’s what the
fetchNetworkResults helper method there is is it’s asking, “Should we go to the network?” And then sometimes we only want to log the calls that we get from the network because we know that our call isn’t going to succeed if we use any mocked calls. I do that a lot when I’m setting up mocks for a complete flow, and I’ll explain more about that in a bit.
public abstract class AbstractMockedApiInterceptor implements Interceptor { (); } } } if (fetchNetworkResults()) { Response response = chain.proceed(request); response = memorializeRequest(request, response, mockFound); return response; } throw new IOException("Unable to handle request with current mocking strategy"); } }
The basic flow of the interceptor is it’s going to loop through a whole list of response specifications. If one matches, it’s going to build a response from it and return it. And if none of them match, then we’ll make the network call. I’m going to pause for a second and mention OkReplay. OkReplay came out in between me submitting this talk and gave me a heart attack, but OkReplay depends on Espresso and it’s not solving exactly the same problem. It doesn’t require as much customization – you get your tapes out and feed your tapes in. It is certainly easier to use for the problem it solves.
I don’t have a package depot and a nice repository that you can drop in. I loop through all of my network specs, er, my network call specs, and I see if they match. If they match, then I’m going to build up a response and pass it back. If they don’t match, then I fall through and hit the network. And then I make that
memorializeRequest call that’s going to build up what I need to mock it in the future.
Matching Requests
The way I look to see if something matches, the first five fields on
NetworkCallSpec are the
RequestURL (and that’s a pattern, I can do pattern matching on it), the
RequestMethod, the
QueryParameters, the
RequestBody, and
RequestBodyContains, since most of the time, matching the entire RequestBody is not the right plan. Most of the time, you want to look for something in the RequestBody.
public static class NetworkCallSpec { private final String mRequestUrlPattern; private String mRequestMethod; private Map<String, String> mRequestQueryParameters; private String mRequestBody; private Set<String> mRequestBodyContains; private int mResponseCode; private String mResponseMessage; private final String mResponseFilename; public boolean matches(HttpUrl url, String method, String body) { if (!url.encodedPath().matches(mRequestUrlPattern)) { return false; } if (!mRequestMethod.equalsIgnoreCase(method)) { return false; } if (mRequestMethod.equalsIgnoreCase("POST") && !TextUtils.isEmpty(mRequestBody) && !mRequestBody.equalsIgnoreCase(body)) { return false; } ...
Every time this user makes a purchase against this property, we’re going to give this result. We don’t care what dates they’re requesting. We want to, this test that uses this user to succeed and give a different result than this test that uses this user at a different property. We look to see if the path matches. We look to see if we’re doing the same method because we don’t want to return 204 Created when we’re doing a GET.
If we’re doing a POST, then we want to check and make sure that if we specified a whole body that that exact body is what we’re sending. I rarely use that because that’s not resilient to changes in the order of things in JSON. If you add a new field, then all of a sudden, this stops working. So I use RequestBodyContains:
... for (String contains : mRequestBodyContains) { if (!body.contains(contains)) { return false; } } for (Map.Entry<String, String> kvp : mRequestQueryParameters.entrySet()) { boolean foundKey = false; boolean foundValue = false; for (String key : url.queryParameterNames()) { if (key.matches(kvp.getKey())) { foundKey = true; String value = url.queryParameter(key); if (value != null && value.matches(kvp.getValue())) { foundValue = true; } if (value == null && (kvp.getValue() == null || kvp.getValue() == "" || kvp.getValue().equalsIgnoreCase("null"))) { foundValue = true; } } } if (!foundKey || !foundValue) { return false; } } return true; }
I loop to see if the body contains this exact string. I have chunks of JSON that find the username, that find the dates, whatever I’m looking for. And then with the query parameters, these are a way of excluding things. If you don’t put any query parameters, then any set of query parameters will match. If you only want to match when the user is logging in with ID
Chris, then you would put ID equals Chris, and it will match for any ID equals
Chris, but it will ignore what you’re passing in as the password. Also, don’t pass passwords as
get parameters.
If any of these fail, it’s going to return false. If everything matches or doesn’t have enough information to conflict, then it will return true, and then we will build up a response and send it back.
Building Responses
When we’re running as a JUnit test, we don’t have a context, so we’re going to go into
test/resources/mocks/. If we’re running the app, then we’re going to read it from
assets/mocks/. I use a
build.gradle Copy task and I copy from
assets/ into
test/resource/. And Git ignore that so that it doesn’t put two copies.
When I am making a mock, I copy something into
assets/mocks/. It’s grabbing the body out of the file. It’s doing a
substituteStrings call that we’ll talk about in a second. It’s building up the response code, the response message, and sending it back to whoever called it.
Response.Builder builder = new Response.Builder(); String bodyString = resolveAsset("mocks/"+spec.m(); } private String resolveAsset(String filename) { if (mContext != null) { return getAssetAsString(mContext, filename); } else { try { return readFromAsset(filename); } catch (IOException e) { Timber.e(e, "Error reading from asset - this should only be called in tests."); } } return null; }
Saving New Mocks
When we want to create a new mock, what ends up happening is we run the test that we’re working on. It retrieves a file and then it saves it out so we can mock it. It’s going to save it out in two places. It’s going to save the file itself, the body response, in a file. If we’re running on the device, then it will save it in a directory within the user dir. If we’re running it locally in a test, then it’s going to save it in the root directory. But then we’re also going to build up the code, to register with our interceptor what the requirement is to return this mock.
private Response memorializeRequest(Request request, Response response, NetworkCallSpec mockFound) { Response.Builder newResponseBuilder = response.newBuilder(); try { String responseString = response.body().string(); List<String> segments = request.url().encodedPathSegments(); String endpointName = segments.get(segments.size() - 1); String callSpecString = "mResponseList.add(new NetworkCallSpec(\""+request.url().encodedPath()+"\", \"::REPLACE_ME::\")"; if (response.code() != HttpURLConnection.HTTP_OK) { callSpecString += ".setResponseCode("+response.code()+")"; endpointName += "-"+response.code(); } if (!TextUtils.isEmpty(response.message()) && !response.message().equalsIgnoreCase("OK")) { callSpecString += ".setResponseMessage(\""+response.message()+"\")"; } if (!request.method().equalsIgnoreCase("GET")) { callSpecString += ".setRequestMethod(\""+request.method()+"\")"; endpointName += "-"+request.method(); } if (request.url().querySize()>0) { for (String key : request.url().queryParameterNames()) { callSpecString += ".addRequestQueryParameter(\""+key.replace("[", "\\\\[").replace("]", "\\\\]")+"\", \""+request.url().queryParameter(key)+"\")"; } } String body = stringifyRequestBody(request); if (body != null) { callSpecString += ".addRequestBody(\""+body.replace("\"", "\\\"").replace("\\u003d", "\\\\u003d")+"\")"; endpointName += "-"+body.hashCode(); } requestSpecString += ");"; if (endpointName.length()>100) { endpointName = ""+endpointName.hashCode(); } endpointName = getUniqueName(endpointName); callSpecString = callSpecString.replace("::REPLACE_ME::", endpointName); if (mockFound != null) { callSpecString += " // duplicate of existing mock "+mockFound.mPattern; if (!TextUtils.isEmpty(mockFound.mRequestBody)) { callSpecString += " with body "+mockFound.mRequestBody; } } callSpecString += "\n"; writeToFile(callSpecString, responseString, endpointName); newResponseBuilder.body(ResponseBody.create(response.body().contentType(), responseString)); } catch (IOException e) { Timber.e("Unable to save request to "+request.url().toString()+" : ", e); } return newResponseBuilder.build(); }
It adds everything that you could potentially want. It will add the full path, it will add all the query parameters. It will add the exact body, it will add the method, assuming that it’s not GET. And then you can delete the things that you don’t need. You can go in and you can change the
body to a
bodyContains. If there are parameters that you don’t care about, you can get rid of those.
It will save the entire body to the POST request. It also makes the filename unique. We have one endpoint that returns, the end of the path is the user’s ID. It’s not returning the user, it’s returning something based on query parameters, but every time I make that call, I get a new mock that is the user’s ID, dash one, dash two, dash three.
If it’s long, then I put out the hash code because sometimes we have an endpoint that has a length of over a hundred characters and that’s obnoxious.
Substitutions
One other thing that I run into a lot in booking flows is I get something that is valid for tomorrow and then it stops being tomorrow and my test stops working. Even worse, if you’re doing a login, you’re going to get back an authentication token and you’re getting back an expiration date. Six weeks later, all of a sudden your test isn’t working because you try to login and it says, “you’re logged in until yesterday. Enjoy.” What we can also put in the call spec is a substitution pattern.
private static interface StringSubstitutor { String replaceOneString(String body, Request request); boolean matchesFound(String body); } private String substituteStrings(String bodyString, Request request) { // Because each match can get replaced with something different, we have to reset the matcher after every replacement. // This way of doing things happens to enforce this in a non-obvious way, because we create a new matcher every time. for (StringSubstitutor substitutor : mSubstitutorList) { while (substitutor.matchesFound(bodyString)) { bodyString = substitutor.replaceOneString(bodyString, request); } } return bodyString; } private static final Pattern DATE = Pattern.compile(“%DATE[^%]*%"); private static class DateSubstitutor implements StringSubstitutor { @Override public String replaceOneString(String body, Request request) { Matcher dateMatcher = DATE.matcher(body); dateMatcher.find(); String match = dateMatcher.group(); Map<String, String> query = getQueryFromUri(match); LocalDate date = new LocalDate(); if(query.containsKey(OFFSET_PARAMETER)) { date = date.plusDays(Integer.parseInt(query.get(OFFSET_PARAMETER))); } body = dateMatcher.replaceFirst(date.toString()); return body; } }
I have a substitution pattern for date, a substitution pattern for date time, and then I have one that goes off the parameters. I can pull a query parameter, “You were looking for a hotel reservation that started on August 1st”. I’m going to replace every instance of the reservation date in the body with August 1st so that it makes sense.
The format is
%DATE. And then down at the bottom, you can see that, for instance, if I’m adding an offset, then I do question mark, offset equals 30. That moves the date 30 days into the future. If I’m doing query parameters, then the key is going to be the parameter. This lets me make my responses more resilient to changes in time. We have one response that has to be 30 minutes into the future. We care about what happens when that response is expired. We make it one minute into the past.
Back to the Test with Side Effects
Here is our test that has side effects:
private static final String CREATE_FILE_NAME = "AbstractMockedInterceptor.java"; private static final String CREATE_DESCRIPTION = "An OkHttp Interceptor that returns mocked results if it has them."; @Test public void createGist() throws IOException { ServiceLocator.put(OkHttpClient.class, OkHttpClientUtil.getOkHttpClient(null, MockBehavior.MOCK_ONLY)); Gist gist = new GistImpl(); gist.setDescription(CREATE_DESCRIPTION); gist.addFile(CREATE_FILE_NAME, readFromAsset("mocks/javaclass")); Observable<Gist> observable = ServiceInjector.resolve(RxEndpoints.class).createGist(gist); TestSubscriber<Gist> testSubscriber = new TestSubscriber<>(); observable.subscribe(testSubscriber); testSubscriber.assertCompleted(); List<Gist> gistList = testSubscriber.getOnNextEvents(); Gist resultGist = gistList.get(0); Observable<Gist> gistObservable = ServiceInjector.resolve(RxEndpoints.class).getGist(resultGist.getId()); TestSubscriber<Gist> gistTestSubscriber = new TestSubscriber<>(); gistObservable.subscribe(gistTestSubscriber); Gist detailGist = gistTestSubscriber.getOnNextEvents().get(0); assertEquals(detailGist.getDescription(), CREATE_DESCRIPTION); }
We’ve set it up to run
MOCK_ONLY. It’s going to try to find a mock. If it finds a mock, it will return that mock, but it’s never going to hit the network, and that’s an important safety step because if it hits the network and gets a good response, it’s never going to let you know that something went wrong and this is a call that has side effects that you didn’t want to be happening every time you build, so it’s going to be silently doing the thing you didn’t want every time you build.
This is now going to safely, one time, create the Gist. It’s going to read from an asset where I put the Gist that I’m ready to send out. It’s going to send it to GitHub. And then it’s going to try to retrieve it and it’s going to make sure that what it retrieves has the description that it put on it. And that should be enough to establish that the call succeeded, it created the Gist.
We’re not going to check and make sure that this whole file is the file that it put there. We want to test enough to make sure that the thing we wanted to happen happened. Gradle is building. My test passed. I vastly prefer this to clicking through, trying to get a reservation ready to submit.
Here is an example of how long it takes to make a change. I’m going to run this one. That should have been it. Sometimes I forget that was it making a change. But now I don’t have internet access. That’s all the time it takes to recompile after a change. I strongly endorse it. | https://academy.realm.io/posts/360-andev-2017-chris-koeberle-android-endpoint-integration-testing/ | CC-MAIN-2018-22 | refinedweb | 5,158 | 57.57 |
I have been playing with serverless solutions lately. It started with a Django project that was dealing with customer AWS credentials both in background and foreground tasks. I wanted to keep those tasks compartmentalized for security and I wanted them to scale easily. Celery is the common solution for this, but setting it up in my environment was not straightforward. This was as good excuse as any to AWS Lambda. I gave Serverless Framework a try because it was the most versatile framework I could find with proper Python support.
It worked well for a long time. But over time I noticed the following repeating issues.
- It requires Node.js which complicated development and CI environments. This is the reason I originally created docker combo images of Python and Node.js.
- Packaging Python dependencies is slow and error prone. Every deployment operation downloaded all the dependencies again, compressed them again, and uploaded them again. On Windows, Mac, and some Linux variants (if you have binary dependencies) it requires Docker and even after multiple PRs it was still slow and randomly broke every few releases.
- There was no easy way to directly call Lambda functions after they were deployed. I had to deal with the AWS API, naming, arguments marshaling, and exception handling myself.
To solve these issues, I created Lovage. The pandemic gave me the time I needed to refine and release it.
No Node.js
Lovage is a stand-alone Python library. It has no external dependencies which should make it easy to use anywhere Python 3 can be used. It also does away with the Node.js choice of keeping intermediate files in the source folder. No huge
node_modules folders, no code zip files in
.serverless, and no dependency caches.
Lambda Layers
Instead of uploading all of the project’s dependencies every time as part of the source code zip, Lovage uploads it just once as a separate zip file and creates a Lambda Layer from it. Layers can be attached to any Lambda function and are meant to easily share code or data between different functions.
Since dependencies change much less frequently than the source code itself, Lovage uploads the dependencies much less frequently and thus saves compression and upload time. Dependencies are usually bigger than the source code so this makes a significant difference in deployment time.
But why stop there? Lovage gets rid of the need for Docker too. Docker is used to get an environment close enough to the execution environment of Lambda so that pip downloads the right dependencies, especially when binaries are involved. Why emulate when we can use the real thing?
Lovage creates a special Lambda function that uses pip to download your project’s dependencies, package them up, and upload them to S3 where they can be used as a layer. That function is then used as a custom resource in CloudFormation to automatically create the dependencies zip file and create a layer from it. Nothing happens locally and the upload is as fast possible given that it stays in one region of the AWS network.
Here is a stripped down CloudFormation template showing this method (full function code):
Resources: RequirementsLayer: Type: AWS::Lambda::LayerVersion Properties: Content: S3Bucket: Fn::Sub: ${RequirementsPackage.Bucket} S3Key: Fn::Sub: ${RequirementsPackage.Key} RequirementsPackage: Type: Custom::RequirementsLayerPackage Properties: Requirements: - requests - pytest ServiceToken: !Sub ${RequirementsPackager.Arn} RequirementsPackager: Type: AWS::Lambda::Function Properties: Runtime: python3.7 Handler: index.handler Code: ZipFile: | import os import zipfile import boto3 import cfnresponse def handler(event, context): if event["RequestType"] in ["Create", "Update"]: requirements = event["ResourceProperties"]["Requirements"] os.system(f"pip install -t /tmp/python --progress-bar off {requirements}"): with zipfile.ZipFile("/tmp/python.zip", "w") as z: for root, folders, files in os.walk("/tmp/python"): for f in files: local_path = os.path.join(root, f) zip_path = os.path.relpath(local_path, "/tmp") z.write(local_path, zip_path, zipfile.ZIP_DEFLATED) boto3.client("s3").upload_file("/tmp/python.zip", "lovage-bucket", "reqs.zip") cfnresponse.send(event, context, cfnresponse.SUCCESS, {"Bucket": "lovage-bucket, "Key": "reqs.zip"}, "reqs")
This is by far my favorite part of Lovage and why I really wanted to create this library in the first place. I think it’s much cleaner and faster than the current solutions. This is especially true considering almost every project I have uses
boto3 and that alone is around 45MB uncompressed and 6MB compressed. Compressing and uploading it every single time makes fast iteration harder.
“RPC”
Most serverless solutions I’ve seen focus on HTTP APIs. Serverless Framework does have support for scheduling and events, but still no easy way to call the function yourself with some parameters. Lovage functions are defined in your code with a special decorator, just like Celery. You can then invoke them with any parameters and Lovage will take care of everything, including passing back any exceptions.
import lovage app = lovage.Lovage() @app.task def hello(x): return f"hello {x} world!" if __name__ == "__main__": print(hello.invoke("lovage")) hello.invoke_async("async")
The implementation is all very standard. Arguments are marshaled with
pickle, encoded as base85, and stuffed in JSON. Same goes for return values and exceptions.
Summary
Lovage deploys Python functions to AWS Lambda that can be easily invoked just like any other function. It does away with Docker and Node.js. It saves you development time by offloading dependency installation to Lambda and stores dependencies in Lambda layers to reduce repetition.
I hope you find this library useful! If you want more details on the layer and custom resource to implement in other frameworks, let me know.
One thought on “Lovage”
[…] few months ago I released Lovage. It’s a Python only serverless library that’s focused more on RPC and less on HTTP and […] | https://kichik.com/2020/04/11/lovage/ | CC-MAIN-2021-25 | refinedweb | 955 | 58.79 |
Opened 8 years ago
Closed 7 years ago
#6143 closed (wontfix)
provide doctest build helper
Description
I suggest a tiny refactoring to the django.test.simple module: extract the doctest.DocTestSuite() call to its own function and use that in build_suite() twice.
This provides two benefits:
- DRY in django.test.simple
- doctest suite creation shortcut for suite() function in models.py or tests.py
Currently to include a doctest for an arbitrary module in the custom test suite, one must do:
from unittest import TestSuite from django.test import _doctest as doctest from django.test.simple import doctestOutputChecker from django.test.testcases import DocTestRunner import mymodule def suite() suite = TestSuite() suite.addTest(doctest.DocTestSuite( mymodule, checker=doctestOutputChecker, runner=DocTestRunner)) return suite
This refactoring simplifies the above to just:
from unittest import TestSuite from django.test.simple import build_doctest_suite import mymodule def suite() suite = TestSuite() suite.addTest(build_doctest_suite(mymodule)) return suite
Attachments (1)
Change History (3)
Changed 8 years ago by akaihola
comment:1 Changed 8 years ago by Simon G <dev@…>
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Design decision needed
comment:2 Changed 7 years ago by russellm
- Resolution set to wontfix
- Status changed from new to closed
This seems like overkill to me. The repetition it removes isn't around a complex block of code that could be a serious maintenance problem - it's a single duplicated function call, with a common set of arguments. If you're building a custom test suite, I would argue that you should be explicitly making a decision on your Doctest arguments; the 'use the same defaults as Django' isn't a strong enough use case in my book.
suggested patch | https://code.djangoproject.com/ticket/6143 | CC-MAIN-2015-27 | refinedweb | 288 | 53.92 |
Question
What are props?
Answer
To properly understand one of the most essential features of React, which is the ability to pass data between components, we need to know the meaning of props and what they are. We have a lesson here that helps us understand part of it.
Props stand for properties which are arbitrary inputs to a component, where a component is pretty much like a JavaScript function in that sense, so no matter if we have this:
class MyComponent extends React.Component { constructor(props){ super(props) } ... } export default MyComponent;
where we make sure to register the props (input) received by the component using the constructor method and calling super which reaches to all the methods that recognize props from the class React.Component. Or we can have this, thinking that props has a name property:
function MyComponent(props){ return <p> This component has a name, and it is: { props.name }</p>; }
As the lesson mentions props is an object, for example in our siblings lesson our component is being passed an attribute name that contains the state of the parent component, that name becomes a property in our props object allowing us to have a reference of a value that we will receive when our component is rendered, looking back at the function
MyComponent, lets say we exported it and now we want to call it in another function from another file, that might look like this:
import { MyComponent } from './MyComponent'; //as in ES6+ function ParentComponent(){ const name = 'Axel'; return( <div> MyComponent({name}); </div> ) }
There we have called our component in the parent function and passed an argument object with a property
name. This in React will look like so:
import React from 'react'; import MyComponent from './MyComponent'; //quite the same with the exception of the component export dictates how you get to import. class ParentComponent extends React.Component{ constructor(){ this.state = { name: 'Axel' } render(){ return( <div> <MyComponent name={ this.state.name } /> </div> ) }
So now we have revisited that
props is an object created by React to manipulate more efficiently how data is passed to components; just like functions receive arguments (value passed to a function when is being called) and handles them through parameters (stand in value set between the function parenthesis when declared to state that there must be a value passed when the function is called) to fulfill their task. If we ever forget where
props are coming from, we can always trace them by checking where has our component been imported and where it has been called. | https://discuss.codecademy.com/t/what-are-props/375961 | CC-MAIN-2018-51 | refinedweb | 424 | 53.75 |
At this point, we should understand much of how the nontrivial parts of the Clojure REPL (and therefore Clojure itself) work:
Read: the Clojure reader reads the textual representation of code, producing the data structures (e.g., lists, vectors, and so on) and atomic values (e.g., symbols, numbers, strings, etc.) indicated in that code.
Evaluate: many of the values emitted by the reader evaluate to themselves (including most data structures and scalars like strings and keywords). We explored earlier in Expressions, Operators, Syntax, and Precedence how lists evaluate to calls to the operator in function position.
The only thing left to understand about evaluation now is how symbols are evaluated. So far, we’ve used them to both name and refer to functions, locals, and so on. Outside of identifying locals, the semantics of symbol evaluation are tied up with namespaces, Clojure’s fundamental unit of code modularity.
All Clojure code is defined and evaluated within a namespace. Namespaces are roughly analogous to modules in Ruby or Python, or packages in Java.[14] Fundamentally, they are dynamic mappings between symbols and either vars or imported Java classes.
One of Clojure’s reference types,[15] vars are mutable storage locations that can hold any value. Within the namespace where they are defined, vars are associated with a symbol that other code can use to look up the var, and therefore the value it holds.
Vars are defined in Clojure using the
def special form, which only ever acts within
the current namespace.[16] Let’s define a var now in the
user namespace, named
x; the name of the var is the symbol that it is
keyed under within the current namespace:
(def x 1) ;= #'user/x
We can access the var’s value using that symbol:
x ;= 1
The symbol
x here is
unqualified, so is resolved within the current
namespace. We can also redefine vars; this is critical for supporting
interactive development at the REPL:
(def x "hello") ;= #'user/x x ;= "hello"
Vars should only ever be defined in an interactive
context—such as a REPL—or within a Clojure source file as a way of
defining named functions, other constant values, and the like. In
particular, top-level vars (that is, globally accessible vars mapped
within namespaces, as defined by
def
and its variants) should only ever be defined by top-level expressions,
never in the bodies of functions in the normal course of operation of a
Clojure program.
See Vars Are Not Variables for further elaboration.
Symbols may also be namespace-qualified, in which case they are resolved within the specified namespace instead of the current one:
*ns*
;= #<Namespace user> (ns foo) ;= nil *ns* ;= #<Namespace foo> user/x ;= "hello" x ;= #<CompilerException java.lang.RuntimeException: ;= Unable to resolve symbol: x in this context, compiling:(NO_SOURCE_PATH:0)>;= #<Namespace user> (ns foo) ;= nil *ns* ;= #<Namespace foo> user/x ;= "hello" x ;= #<CompilerException java.lang.RuntimeException: ;= Unable to resolve symbol: x in this context, compiling:(NO_SOURCE_PATH:0)>
Here we created a new namespace using the
ns macro (which has the side effect of switching
us to that new namespace in our REPL), and then referred to the value of
x in the
user namespace by using the namespace-qualified
symbol
user/x. for our guidelines in their use.
We mentioned earlier that namespaces also map between symbols and
imported Java classes. All classes in the
java.lang package are imported by default into
each Clojure namespace, and so can be referred to without package
qualification; to refer to un-imported classes, a package-qualified symbol
must be used. Any symbol that names a class evaluates to that
class:
String ;= java.lang.String Integer ;= java.lang.Integer java.util.List ;= java.util.List java.net.Socket ;= java.net.Socket
In addition, namespaces by default alias all of the vars defined in the primary namespace of
Clojure’s standard library,
clojure.core. For example, there is a
filter function defined in
clojure.core, which we can access without
namespace-qualifying our reference to it:
filter ;= #<core$filter clojure.core$filter@7444f787>
These are just the barest basics of how Clojure namespaces work; learn more about them and how they should be used to help you structure your projects in Defining and Using Namespaces.
[14] In fact, namespaces correspond precisely with Java packages when
types defined in Clojure are compiled down to Java classes. For
example, a
Person type defined in
the Clojure namespace
app.entities
will produce a Java class named
app.entities.Person. See more about defining
types and records in Clojure in Chapter 6.
[15] See Clojure Reference Types for a full discussion of Clojure’s reference types, all of which contribute different capabilities to its concurrency toolbox.
No credit card required | https://www.safaribooksonline.com/library/view/clojure-programming/9781449310387/ch01s08.html | CC-MAIN-2018-34 | refinedweb | 791 | 51.89 |
Amet Wrote:you can read about how the script searches in this thread.
in short, it gets the name of the show, episode and season from xbmc if its scanned in the library, or tries to get it from the file name if the file is not in the library.
it works best if its scanned in for the obvious reasons
adytum Wrote:I assume the file name the script uses is what's in the textbox when I select "manual string search"?
I can't understand the logic in what happens, nothing shows up on most of the services, even inputting the show name only.
Amet Wrote:not sure what you mean, OpenSubtitles_OSD never had any options to position the screen
unknown_inc Wrote:I'm pretty sure it did. I could position it to the left, right, up or bottom (at least I think I could).
I don't have any older XBMC to try it on or even make a screenshot to post.
But are there any files I can look into on the script itself? That does say something about the OSD window of the subtitle search results?
Amet Wrote:you can look at script-XBMC-Subtitles-main.xml and play with <posx></posx> and <posy></posy> values for item position on the screen
<coordinates>
<system>1</system>
<posx>0</posx>
<posy>0</posy>
</coordinates>
newphreak Wrote:i got a weird error which made xbmc subtitle window hang. have a look here for debug log.
def download_subtitles (subtitles_list, pos, zip_subs, tmp_sub_dir, sub_folder, session_id): #standard input
[color=red]subs_file = ""[/color]
url = subtitles_list[pos][ "link" ]
language = subtitles_list[pos][ "language_name" ]
content, response_url = geturl(url)
newphreak Wrote:sidenote: my receiver goes CRAZY while the xbmc subtitles is open.
switches from pcm to dts like every 2 seconds, would like to see a fix on that aswell
mr_blobby Wrote:Please use pastebin.org.
Pastebin.no does no show me a horizontal scrollbar, so I can't see the ending of the long lines ...
And could you tell me which subtitle for "The Strangers" did you try to download exactly, so I can try and reproduce the error?
I tried some on my XBMC, and they all downloaded just fine.
rvrutten Wrote:When my default service is Bierdopje but there is no result, I try a diffrent service like Podapisi. When this service gives me a result (or not), i press the menu button or back then XBMC crashes and restarts.
Having this issue for a while now. Did clean installs of addon, new versions of xbmc, but it keeps coming back.
XBMC log
----- removed c/p log ---- | http://forum.kodi.tv/printthread.php?tid=75437&page=30 | CC-MAIN-2014-52 | refinedweb | 435 | 69.92 |
In previous articles, we discussed the classification of application testing and unit testing grounds. We also introduced mock objects and stubs. Today’s article was a challenge for me personally to describe the frameworks to support testing. At the beginning of the explanation, however, we will go in depth into mocks, and only then proceed to the frameworks. Of course, no one is able to describe them all, so I chose the four frameworks that are used every day. Some of them are easy to use, and some of them are really combined to create tests.
Today we will learn to use them and know their advantages and disadvantages. We will learn the three frameworks: Pex, NMock, and Rhino Mocks. So, you ready? What will we begin with? So we’re going to test frameworks. Note! All of the examples in this article will benefit, not eclipse Microsoft Visual Studio.
Mocks again
Let’s begin with mocks. It is a matter of reminding the basic information about them. A mock is really an abstraction of objects or classes, whose task is to effectively isolate the classes, in order to investigate and ensure the appropriate level of verification tests.
Take an example of a data management database. If your class has to use them, and does not want to create the data layer class to support them because of the time that we spend on creating a wrapper for MySQL, create a mock that simulates the retrieved data. In this way we do not need to configure anything or even think about how to write.
We write simply a dummy class and specify the manner in which the target class should behave. Then we simply write tests for the target class. These tests are created based on the expected behavior of the class, which is now a mock. It is one of the solutions that effectively separates all types of classes from each other and at the same time makes the whole process of software development and testing much more pleasant than usual.
So what is really a mock? A mock object is a model of research, which usually is used for research and testing of various interfaces. And so it is ideal for testing and determining the functionality of the individual because the interfaces are mainly used for separating the individual functions from each other. This saves the time that we spend on the implementation of the working classes of data on the interface.
When writing a mock we need to know one thing. First, we must always check the expectations that we have relative to the functionality, then we should check whether the class meets our expectations. A mock is used mostly for classes, where the survey is difficult for some reason. Sometimes mock objects also apply to references to external resources such as web services and databases. Mostly they are solutions that perform too slowly for there to be any point in writing unit tests for them.
An example in which we operate
I want to show you a simple example with which we operate in our project. This example will be based on mock objects and because of it I hope that I can fully convey to you the difference between mocks and stubs. So let’s begin:
First, let’s create a new project in Visual Studio called Examples.
Second, in ConnectionProvider create a class with the following content:
using System.Collections.Generic; using System.Linq; using System.Text; using System; using System.Diagnostics; using System.Net.NetworkInformation; namespace Examples2 { public class ConnectionProvider { public bool IsOnline (string address) { Debug.Assert (! String.IsNullOrEmpty (address)); using (Ping ping = new Ping ()) { PingReply pingReply = ping.Send (address); if (pingReply.Status == IPStatus.Success) { return true; } } return false; } } }
Third, create a class CheckServer, which will be tested:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Diagnostics; namespace Examples2 { class CheckServer { public bool IsOnline (string address) { Debug.Assert (! String.IsNullOrEmpty (address)); ConnectionProvider ConnectionProvider connectionProvider = new (); connectionProvider.IsOnline return (address); } } }
Fourth, add a test method CheckServer:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.VisualStudio.TestTools.UnitTesting; namespace Examples.UT { [TestClass] class CheckServerTest { [TestMethod] public void IsOnlineTest () { CheckServerTest CheckServerTest target = new (); string address = "google.com"; excepted bool = true; bool actual; actual = target.IsOnline (address); Assert.AreEqual (expected, actual); } } }
Fifth, run the test.
We created our first special test. Isn’t that great? Maybe you want to practice a bit on it? I will suggest two changes for you:
First, our dear customer has changed their requirements. Please, you must run an actualization test to check for availability of the host
Second, again the customer changed their requirements. Now you have to check the address
Ok it was not even a mock, but we can be proud of ourselves. Now you want to write something again, right? Let’s get to the end of the first mock. At the same project:
First, in ConnectionProviderEx create a class. Here’s the code:
using System.Text; using System.Diagnostics; using System.Net.NetworkInformation; namespace Examples2 { public interface IConnectionProviderEx { IsOnline bool (string address); } ConnectionProviderEx class: IConnectionProviderEx { public void IsOnline (string address) { Debug.Assert (! String.IsNullOrEmpty (address)); using (Ping ping = new Ping ()); PingReply reply = ping.Send (address); if (Reply.Status == IPStatus.Success) { return true; } return false; } } }
Second, create a class in ChecServerEx. What in this case, should you pay attention to? You must note that in this case ConnectionProviderEx class is not created inside the method IsOnline, instead the instance is injected into the constructor as CheckServerEx IConnectionProviderEx interface. Here’s the code:
- Dual Certification - CEH and CPT
- 5 days of Intensive Hands-On Labs
- Expert Instruction
- CTF exercises in the evening
- Most up-to-date proprietary courseware available
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Diagnostics; namespace Examples2 { class CheckServerEx { connectionProvider_ IConnectionProviderEx private readonly; public CheckServerEx (IConnectionProviderEx connectionProvider) { connectionProvider_ = connectionProvider; } public bool IsOnline (string address) { Debug.Assert (! String.IsNullOrEmpty (address)); connectionProvider_.IsOnline return (address); } } }
Third, now let’s test methods CheckServerEx.IsOnline: IConnectionProviderEx connectionProvider = new (); CheckServerEx target = new CheckServerEx (connectionProvider); string address = "example.server.test"; excepted bool = true; bool actual; actual = target.IsOnline (address); Assert.AreEqaul (excepted, actual); } } }
Fourth, run this test. Check that the test is not passed, since the class that we are testing has a huge dependence on the remote machine.
Fifth, add a class to the project StubConnectionProviderEx Examples.UT tests. This class implements the same interface as the class ConnectionProviderEx:
using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Examples.UT { internal class StubConnectionProviderEx: IConnectionProviderEx { public bool Online (get, set,); public bool IsOnline (string address) { Online return; } } }
Sixth, now it’s time to update CheckServerExTest.IsOnline to use the type of object you create: connectionProvider = new StubConnectionProviderEx ( Online = true; ); CheckServerEx target = new CheckServerEx (connectionProvider); string address = "example.server.test"; excepted bool = true; bool actual; actual = target.IsOnline (address); Assert.AreEqaul (excepted, actual); } } }
Seventh, now run the test to make sure it works. Consider why?
In this way we have created our first mock. True, it is not difficult? Now you can go to the frameworks.
NMock3 or start creating mocks object
“NMock3 builds on the work of NMock2. Specifically it adds lambda expressions to the matcher syntax. You can use lambda expressions instead of strings, making refactoring possible. The idea of Syntactic Sugar is maintained in all expectations that read like an English sentence.”
Source:
About the framework will not be much to write about. It is neither nice nor friendly. And yet you use it to also create a test. So let’s start at the points:
First, add UT.Examples to a project for the library NMock3 references.
Second, add a new test IsOnlineTest_NMock3:
namespace Examples.UT { [TestClass] class CheckServerExTest { [TestMethod] public void IsOnlineTest_NMock3 () { string address = "google.com"; excepted bool = true; bool actual; MockFactory MockFactory mockFactory = new (); var = mockFactory.CreateMock <IConnectionProviderEx> connectionProviderStub (MockStyle.Stub); connectionProviderStub.Expects.AtLeast (0). MethodWith (_ => _.IsOnline (address)). WillReturn (true); IConnectionProviderEx connectionProvider = connectionProviderStub.MockObject; CheckServerEx target = new CheckServerEx (connectionProvider); actual = target.IsOnline (address); Assert.AreEqual (expected, actual); } } }
Third, you must note that you should always add “using NMock” to mock the factory that was available.
Fourth, fire up the test, I’m sure will pass.
Pex and Moles, the most interesting of the frameworks for the Net
Pex is the first framework that will be discussed here. It is one of the best tools for testing applications in Visual Studio. It is used to perform automated testing by Whitebox. It is primarily used by applications with NET environment. It was developed by Microsoft in addition to the Visual Studio environment. Pex can be downloaded from.
- Dual Certification - CEH and CPT
- 5 days of Intensive Hands-On Labs
- Expert Instruction
- CTF exercises in the evening
- Most up-to-date proprietary courseware available
Pex can generate tests automatically from the machine that can no longer cover the entire code, known as automated testing. Pex has one major advantage: if Pex finds any potential errors in the code, it suggests how to alter them. This is one of the best features of the service.
Pex also has another interesting feature, namely, Pex is able to perform a complete analysis of the code by searching for boundary conditions and generating test cases. The tool also automatically finds errors and allows you to greatly reduce the maintenance costs of code. Currently Pex and Moles are a total of one tool for creating mocks.
Moles and Pex Installation:
First, if your system does not yet contain Moles and Pex, download and install them.
Second, create a new test project UT.Examples.
Third, look for references to the test project Examples.
Fourth, right-click on the reference from the context menu, select “Add Moles Assembly”. If this option is not available, go back to the first point.
Fifth, you must now rebuild the project Examples.UT.
Sixth you probably noticed in the reference branch, a new reference called “Examples.Moles”. It contains all the implementations of stub objects that are associated with classes Examples project.
Seventh, add a new test IsOnlineTest_MolesStub with the following content:
using System; <strong></strong> <strong>using Examples.Moles; </strong> <strong>using Microsoft.VisualStudio.TestTools.UnitTesting; </strong> <strong>namespace Examples.UT </strong> <strong>{ </strong> <strong> [TestClass] </strong> <strong> public class CheckServerExTest </strong> <strong> { </strong> <strong> [TestMethod] </strong> <strong> public void IsOnlineTest_MolesStub () { </strong> <strong> IConnectionProviderEx connectionProvider = new SIConnectionProviderEx { </strong> <strong> IsOnlineString = _ =>. True </strong> <strong> }; </strong> <strong> CheckServerEx target = new CheckServerEx (connectionProvider); </strong> <strong> string example = "google.com"; </strong> <strong> excepted bool = true; </strong> <strong> bool actual; </strong> <strong> actual = target.IsOnline (address); </strong> <strong> Assert.AreEquals (excepted, actual); </strong> <strong> } </strong> <strong> } </strong> <strong>}
Eighth, as you probably noticed, adding “using Examples.Moles” to a specific class type SIConnectionProviderEx was available.
Ninth now you can run the test. It is true that it is not too complex?
Well, we described our first test using Moles. The subject, as you see, really is very developed and very complex, and at the same time is very interesting. I hope you enjoyed this introduction to the testing environment Net.
Watch out, though Moles and Pex are your requirements, here they are, taken from Microsoft:
- You must have .NET Framework version 3.5 or later
- You must have installed Visual Studio 2008 or Visual Studio 2008 Team System
- You may also have installed Visual Studio 2010 environment
Of course, the use of Pex or Moles is also a choice. It may happen that you do not have Visual Studio and. NET Framework. Then you can use Pex and Moles on the command line of your operating system. Moles and Pex, despite its unusual requirements, are really not worth it to command, because they are too difficult to use. Soon we will know the real combination.
RhinoMocks, the white rhino of frameworks for testing
I remember how I learned to program using the methodology of TDD (Test Driven Development). When using TDD methodology is very important that the system that we studyis tested in complete isolation. And that’s when I learned to create mocks and stubs. I wanted always to the complete isolation of the system, so that nothing was able to falsify test results. I also needed to be able to simulate some parts of the system. It was then that I met Rhino.
Rhino Mock creates a dynamic dummy for each .NET object. Its main objective is to help the poor developers to create fake implementations of objects and create test interactions between them, and then test them individually. Rhino Mocks can be downloaded from the.
Now is the time, for example, to create tests using Rhino. Let’s use it for the same test. Here are the steps you need to take:
First, for a project we need to add references Examples.UT library RhinoMocks.dll.
Second, we add to our quality of testing a new test, IsOnlineTest_Rhino:
[TestMethod] <strong></strong> <strong> public void IsOnlineTest_Rhino () { </strong> <strong> adress string = "google.com"; </strong> <strong> connectionProvider var = MockRepository.GenerateStub <IConnnectionProviderEx> (); </strong> <strong> connectionProvider.Stub (c => c.IsOnline (address). Return (true); </strong> <strong> CheckServerExTest target = new CheckServerEx (connectionProvider); </strong> <strong> excepted bool = true; </strong> <strong> bool actual; </strong> <strong> actual = target.IsOnline (address); </strong> <strong> Assert.AreEqual (excepted, actual); </strong> <strong> }
Third, as you probably noticed, add a reference “using Rhino.Mocks”.
Fourth, now you can safely run the test.
As you can see, Rhino Mocks is another PHP framework for unit testing, which should be used. If you visit a page , you will find a lot more opportunities this framework. I personally honestly recommend it.
Summary
In total, we’ve covered all the most necessary frameworks. I hope that the examples in the code were not overwhelming. I promise that the next article will discuss Selenium alone, it will be a much lighter subject and you will not feel too overwhelmed by all this. Thanks for your attention. | http://resources.infosecinstitute.com/creating-a-professional-application-how-to-create-tests-part-4/ | CC-MAIN-2013-20 | refinedweb | 2,329 | 59.6 |
How to remove list elements within a loop effectively in python
I have a code as follows.
for item in my_list: print(item[0]) temp = [] current_index = my_list.index(item) garbage_list = creategarbageterms(item[0]) for ele in my_list: if my_list.index(ele) != current_index: for garbage_word in garbage_list: if garbage_word in ele: print("concepts: ", item, ele) temp.append(ele) print(temp)
Now, I want to remove the
ele from
mylist when it gets appended to
temp (so, that it won't get processed in the main loop, as it is a garbage word).
I know it is bad to remove elements straightly from the list, when it is in a loop. Thus, I am interested in knowing if there is any efficient way of doing this?
For example, if
mylist is as follows;]]
1st iteration
for the first element
tim_tam, I get garbage words such as
yummy_tim_tam and
berry_tim_tam. So they will get added to my
temp list.
Now I want to remove
yummy_tim_tam and
berry_tim_tam from the list (because they have already added to
temp), so that it won't execute from the beginning.
2nd iteration
Now, since
yummy_tim_tam is no longer in the list this will execute
pudding. For
pudding I get a diffrent set of garbage words such as
chocolate_pudding,
biscuits,
tiramu. So, they will get added to
temp and will get removed.
3rd iteration
ice_cream will be selected. and the process will go on.
My final objective is to get three separate lists as follows.
["tim_tam", 879.3000000000001], ["yummy_tim_tam", 315.0], ["berry_tim_tam", 171.9] , ["pudding", 298.2] ["chocolate_pudding", 218.4], ["biscuits", 178.20000000000002], ["tiramusu", 158.4] ["ice_cream", 141.6], ["vanilla_ice_cream", 122.39999999999999]
3 answers
- answered 2018-01-14 11:36 Patrick Artner
I would propose to do it like this:]] d = set() # remembers unique keys, first one in wins for i in mylist: shouldAdd = True for key in d: if i[0].find(key) != -1: # if this key is part of any key in the set shouldAdd = False # do not add it if not d or shouldAdd: # empty set or unique: add to set d.add(i[0]) myCleanList = [x for x in mylist if x[0] in d] # clean list to use only keys in set print(myCleanList)
Output:
[['tim_tam', 879.3000000000001], ['pudding', 298.2], ['biscuits', 178.20000000000002], ['tiramusu', 158.4], ['ice_cream', 141.6]]
If the order of things in the list is not important, you could use a dictionary directly - and create a list from the dict.
If you need sublists, create them:
similarThings = [ [x for x in mylist if x[0].find(y) != -1] for y in d] print(similarThings)
Output:
[ [['tim_tam', 879.3000000000001], ['yummy_tim_tam', 315.0], ['berry_tim_tam', 171.9]], [['tiramusu', 158.4]], [['ice_cream', 141.6], ['vanilla_ice_cream', 122.39999999999999]], [['pudding', 298.2], ['chocolate_pudding', 218.4]], [['biscuits', 178.20000000000002]] ]
- answered 2018-01-14 11:36 Dennis Soemers
You want to have an outer loop that's looping through a list, and an inner loop that can modify that same list.
I saw you got suggestions in the comments to simply not remove entries during the inner loop at all, but instead check if terms already are in
temp. This is possible, and may be easier to read, but is not necessarily the best solution with respect to processing time.
I also see you received an answer from Patrick using dictionaries. This is probably the cleanest solution for your specific use-case, but does not address the more general question in your title which is specifically about removing items in a list while looping through it. If for whatever reason this is really necessary, I would propose the following:
idx = 0 while idx < len(my_list) item = my_list[idx] print(item[0]) temp = [] garbage_list = creategarbageterms(item[0]) ele_idx = 0 while ele_idx < len(my_list): if ele_idx != idx: ele = my_list[ele_idx] for garbage_word in garbage_list: if garbage_word in ele: print("concepts: ", item, ele) temp.append(ele) del my_list[ele_idx] ele_idx += 1 print(temp) idx += 1
The key insight here is that, by using a
whileloop instead of a
forloop, you can take more detailed, ''manual'' control of the control flow of the program, and more safely do ''unconventional'' things in your loop. I'd only recommend doing this if you really have to for whatever reason though. This solution is closer to the literal question you asked, and closer to your original own code, but maybe not the easiest to read / most Pythonic code.
- answered 2018-01-14 11:36 joaquin
This code produces what you want:
from collections import defaultdict my_list = [['tim_tam', 879.3], ['yummy_tim_tam', 315.0], ['pudding', 298.2], ['chocolate_pudding', 218.4], ['biscuits', 178.2], ['berry_tim_tam', 171.9], ['tiramusu', 158.4], ['ice_cream', 141.6], ['vanilla_ice_cream', 122.39] ] creategarbageterms = {'tim_tam' : ['tim_tam','yummy_tim_tam', 'berry_tim_tam'], 'pudding': ['pudding', 'chocolate_pudding', 'biscuits', 'tiramusu'], 'ice_cream': ['ice_cream', 'vanilla_ice_cream']} all_data = defaultdict(list) temp = [] for idx1, item in enumerate(my_list): if item[0] in temp: continue all_data[idx1] = [item] garbage_list = creategarbageterms[item[0]] for idx2, ele in enumerate(my_list): if idx1 != idx2: for garbage_word in garbage_list: if garbage_word in ele: temp.append(ele[0]) all_data[idx1].append(ele) for item in all_data.values(): print('-', item)
This produces:
- [['tim_tam', 879.3], ['yummy_tim_tam', 315.0], ['berry_tim_tam', 171.9]] - [['pudding', 298.2], ['chocolate_pudding', 218.4], ['biscuits', 178.2], ['tiramusu', 158.4]] - [['ice_cream', 141.6], ['vanilla_ice_cream', 122.39]]
Note that for the purpose of the example I created a mock creategarbageterms function (as a dictionary) that produces the term lists as you defined it in your post. Note the use of a defaultdict which allows unlimited number of iterations, that is, unlimited number of final lists produced. | http://codegur.com/48249159/how-to-remove-list-elements-within-a-loop-effectively-in-python | CC-MAIN-2018-05 | refinedweb | 925 | 68.87 |
Type: Posts; User: eyekantbeme
I need this school project done now, please anyone, I will pay it is due tonight.
link to download:
PLEASE LET ME...
Okay, that's what I found out after I turned that int into a double, that was the reason why it wasnt working at first, great, thanks for that you guys, I cant wait til I start getting into more...
Ohhhhhh I see, that completely makes sense. Yeah you cant tell it's a decimal unless you know what the cin is going to be........okay, well, thanks for the heads up Ill try changing it into a...
#include <iostream>
using namespace std;
int main()
{
int frank;
int john;
int userage;
So this is my code and apparently somethings going on at around the if statement. It makes my calculations show up as 0. I know it's some bizarre coding, but Im sure you guys get what's going on. ... | http://forums.codeguru.com/search.php?s=b0bf6d8abe9a0bbc024b91127a848ac3&searchid=5599599 | CC-MAIN-2014-49 | refinedweb | 157 | 79.7 |
I have inhertied an issue where my AD Integrated DNS Zone was deleted and there are no system state backups available. I have checked in AD, under DNS and there are no records there. However, under the Zone (as listed in DNS) most records are present. On one of my DCs, I manually increased the SOA record and restarted the netlogon service, this has got the "zone" replicating again. I have also repopulated the GUIDs under the _MSDCS folder. Under the properties of the Zone, it is still listed as AD integrated.
My question is: If I change the properties to "Primary" it should create the txt file in System32\DNS. Assuming I allow replication time etc, can I just change it back to AD and will that repopulate the AD side of things?
Or will I need to export the contents of the txt file, then delete the zone and recreate?
Also, one further question: My Forest Root Domain has a broken delegation in DNS to the child Domain - again inherited and down to poor administration. The delegated namespace is the same as the AD integrated zone that I have the issue with above - if I repair this (by putting "Live" DCs within the delegated options) will this affect the other AD zone? Or is this a symptom of the above issue?
If you need further info, or if I have not been specific enough please accept my apologies and I shall endeavour to supply what you need.
Many thanks in advance for your assistance. | https://serverfault.com/questions/296251/ad-integrated-dns-zone-restore-repair | CC-MAIN-2021-10 | refinedweb | 256 | 69.92 |
29 September 2010 13:46 [Source: ICIS news]
TOKYO (ICIS)--Mitsubishi Plastics plans to establish a polyester film company in ?xml:namespace>
Mitsubishi Plastics would soon submit an application to the local government authorities to establish the company, it said in a statement.
The new China-based polyester film producer, which is yet to be named, would build two polyester film production lines, each with the capacity of around 20,000 tonnes/year each, Mitsubishi Plastics said.
The two units would together produce a total of 45,000 tonnes/year of optical polyester film, it added.
The first plant is scheduled to be completed by April 2013, while the second one would be built by April 2015; both would be built at the same location, Mitsubishi Plastics said.
The total cost for the entire project was estimated at Y24bn ($3.5bn), including the construction of the two plants and their running costs, the company said.
The new company aimed to generate annual net sales of Y20bn after both production lines come on stream, Mitsubishi Plastics said.
Mitsubishi Plastics also plans to establish an investment company in
($1 = Y83.94) | http://www.icis.com/Articles/2010/09/29/9397421/mitsubishi-plastics-to-establish-polyester-film-company-in-china.html | CC-MAIN-2014-35 | refinedweb | 189 | 59.74 |
I need to speed up computation without losing accuracy on the folling program for a Uni assesment can anyone help.
------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------Code:#include <iostream> #include <complex> using namespace std; typedef complex<double> Complex; const int size = 1000; int pix[size][size]; int pixval( double xx, double yy) { Complex z = 0; Complex c(xx,yy); int j; for (j = 0 ; j < 1000 && abs(z) < 2 ; ++j) z = z*z+c; return j / 20; } void fillpix(double x0, double y0, double xinc, double yinc) { int x, y; for( x = 0 ; x < size ; ++x) for( y = 0 ; y < size ; ++y) pix[x][y] = pixval( x0+x*xinc, y0+y*yinc); } int main() { fillpix(0.2, 0.2, 0.001, 0.001); for (int y = 0 ; y < size; y+=(size/60)){ for (int x = 0 ; x < size; x+=(size/20)) cout << char( ' '+pix[y][x]); cout << endl; } }
Here is some things I've been asked to consider
The time taken by the program is taken to be that returned for user time ( the time the CPU spends on your computation), given by
time prog when run on sol for a size of 1000*1000 pixels.
The final program should produce the same results as the original. Check this by writing out, to a file, the array produced by the original, and comparing any new results with this gold standard.
You should check that all 1000*1000 cells are the same.)
It is also possible to generate a much smaller output file by creating a signature -
Consider an array of 3 unsigned integers (buckets).
Initialise these to zero, then add
cells 0, 3, 6, ... into the zeroth bucket
cells 1, 4, 7, ... into the middle bucket
cells 2, 5, 8, ... into the last bucket.
(Every cell in the 2D array is to be added into some bucket).
The sequence of values in the array of buckets is the signature, and will be the same if created from identical 2D arrays of cells.
However, we may want an array of 1, 3, 17, or n buckets - the number of buckets should be a constant.
You are advised to try speeding up the computation by improving pixval, in which the computer spends most of its effort ( see man prof).
You should think about
Compiler optimisation
The CC compiler accepts flags for degrees of optimisation
-O1 .. -O5
-fast
Reorganising the program
Some program forms run faster than others.
Loop unrolling
Current computers work fastest on sequences without branches.
Re-arrange the program to cut down branches.
Algebraic reorganisation
you are welcome to work at the level of the underlying algebra. Remember that some variables are complex.
Inlining functions
A C++ compiler accepts the keyword inline as a request to the compiler to avoid a function call by planting code where the function is called.
The syntax is
inline double fn( int ss) { return ss+8.4;};
Any help very much appreciated
Thanks
Richard | http://cboard.cprogramming.com/cplusplus-programming/46366-speeding-up-computation-without-losing-accuracy.html | CC-MAIN-2015-40 | refinedweb | 484 | 69.31 |
Created on 2013-06-21 22:37 by philwebster, last changed 2013-07-13 06:42 by terry.reedy. This issue is now closed.
This is a single test for RstripExtension.py, following from #15392. I also added a mock EditorWindow module with a Text widget for testing. test_rstripextension.py seems to run fine inside IDLE and produces the following output:
>>>
test_do_rstrip (__main__.Test_rstripextension) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.100s
OK
However, when I run via the command line, I get the following output:
philwebster@ubuntu:~/Dev/cpython$ ./python -m test -ugui test_idle
[1/1] test_idle
Warning -- warnings.showwarning was modified by test_idle
1 test altered the execution environment:
test_idle
I attempted to replicate the results from #18189, but saw the same message. Any thoughts?
There is a separate issue about killing that warning.
If you leave off '-ugui', you will see the traceback and why I said that requires('gui') should be wrapped for unittest discovery. As the code is, the call is executed when the file is imported, possibly during discovery, before the unittest or regrtest runner starts running tests and counting success, fail, skip.
Two other thoughts before I look as rstrip and the content of the new test file.
I would like to just call it test_rstrip.
I have thought of having just one mock_idle.py for mock Idle classes (as opposed to mock_tk with mock tk classes. I am not sure how many will will need. Since your mock_ewin uses a real tk.Text widget, hence requiring gui, what is its purpose? The point of the mock_tk classes is to avoid gui and make the test text only. I am not saying that this is the only purpose, but other purposes should be stated and documented.
Thank you for the feedback Terry. I'm not seeing the traceback without '-ugui' either, so I'm going to look into that. I get the same results with requires('gui') moved inside of setUp, is that what you mean by wrapping?
For mock_ewin I used a real Text widget because RstripExtension uses index(), get(), and delete() and I was not able to figure out how the widget implemented these (is there a single string with the contents?). I can work on a non-gui test though if that's what needs to be done.
Thanks!
Modified the first patch to get rid of mock EditorWindow in favor of the real thing. Also renamed the test to 'test_rstrip'.
Added to Terry's Text Widget code (in #18226) and created mock_idle.py for the mock EditorWindow. Todd's FormatParagraph test in the aforementioned issue also passes with the mock EditorWindow.
I want to make two separate commits. First add mock Text into mock_tk.py, and add a new test_text.py. I suggested looking at the tkinter Text test. I turns out that it only tests a few special search cases. I am guessing that they have something to do with the tkinter interface. If neither you nor Todd feel like writing the Text test, I will. Let me know either way.
Second, add the new mock_idle and test_rstrip files. For the latter, at least one line should have no whitespace, and one should have an indent. With an indent, the test would fail if rstrip in the tested file were changed to strip.
In the tearDown method, the apparent purpose of
self.rstripextension = None
is to actually do
del self.rstripextension
and that happens automatically when self disappears.
With a mock editor window, there is no need for "root.destroy" and hence for the close call, and hence for the tearDown method.
With only one test method, the setUp lines can be part of the test. For the attribute names, I strongly prefer 'rstrip' to 'rstripExtension' and 'editor' to 'mockEditorWindow'.
I am curious about this comment:
# Note: Tkinter always adds a newline at the end of the text widget,
# hence the newline in the expected_text string
In live Idle, I tried 'strip trailing whitespace' with text that did not end with \n and there was none visible after.
An annoyance is that after stripping the filename is prefixed with * to indicate that it has been changed and needs to be changed, even when it has not (or should not have been). Closing brings up the unnecessary 'Changed, save?' box. Is this related to the comment?
I decided that the first commit should be a separate issue that this issue depends on.
This patch contains mock_idle.py and the rstrip test using the mock text widget from #18365.
Terry- For some reason, the Text widget always contains a '\n' as the last character even when there is nothing visible. Doing a text.get('1.0','end') always has a '\n' at the end from what I can tell. I'm not sure about the filename changing, is it worth creating a new issue for?
Phil (and everyone else): PLEASE submit patches with 4 space indents and no tabs and no trailing spaces. Even if the code below runs in the CPython interpreter,
self.undo = mockUndoDelegator() <8 spaces>
<4 spaces>
def get_selection_indices(self): <4 spaces>
first = self.text.index('1.0') <4 spaces, 1 tab >
class mockUndoDelegator:
def undo_block_start(*args): <1 tab>
pass <2 tabs>
the CPython repository whitespace pre-commit will say this:
remote: - file Lib/idlelib/idle_test/test_text.py is not whitespace-normalized in 979905090779
remote: * Run Tools/scripts/reindent.py on .py files or Tools/scripts /reindent-rst.py on .rst files listed above
remote: * and commit that change before pushing to this repo.
remote: transaction abort!
remote: rollback completed
as happened with the mock_tk/test_text patch. I already fixed this file, but next time...
About no trailing whitespace: that is not an option when committing, so I will change the string literal to use explicit \n and implicit catenation, as in 'ab \n' 'cd\n' == 'ab \ncd\n'. This will make the trailing ws visible anyway.
The way to not get the tk=added \n is to use 'insert' rather than 'end', as inself.text.get('1.0','insert'). 'Insert' is the end of user input, before the guard.
Will commit patch soon.
As a suggestion I always use the command "make patchcheck" (before making the patch) which catches the white space and tab problem plus it fixes other things. Here is more information on patch check in the developer's guide.
I am aware of patchcheck, but the problem for me is that 'make patchcheck' does not work on Windows; the doc is wrong on the awkward to type Windows alternative; and it is usually useless anyway. But I agree that anyone who does not use a editor configured to automatically converts tabs to 4 spaces, as are both Idle and Notepad++ here, should run reindent.py.
When I learn how to write extensions, maybe I will work on one to run or imitate patchcheck, and offer to open Acks and News for editing if not in the changeset.
Trying it out, I rediscovered that patchcheck has a Windows bug. This time I reported it ;-) #18439.
New changeset ec71fcdcfeac by Terry Jan Reedy in branch '2.7':
Issue #18279: Add tests for idlelib/RstripExtension.py. Original patch by
New changeset 22ce68d98345 by Terry Jan Reedy in branch '3.3':
Issue #18279: Add tests for idlelib/RstripExtension.py. Original patch by
A simple change to RstripExtension.py fixed the marking of unchanged files as changed. I also removed a useless extra iteration. Having a test makes it possible to do things like this without breaking what already worked.
I had to remove the dependency to close this, since the test_text issue #18365 was reopened. | http://bugs.python.org/issue18279 | CC-MAIN-2016-40 | refinedweb | 1,277 | 75.1 |
In complex applications, UI components consist of more building blocks than some state and UI. Before I already described a different way to look at our reusable UI components. We can look at them from developers' and users' perspectives at the same time. But on a conceptual level, components have more elements important to their behavior. It is important for developers to understand these concepts. Especially when working on big, complex and critical applications. We have to dive into the anatomy of a UI component.
The API, also known as properties
Interfaces are a way to describe how we want others to use and interact with our work, our components. The UI is a good example of an interface. It describes what we want our users to see and what we allow for interaction.
"Interfaces are a way to describe how we want others to use and interact with our components"
But what about the developers? The API of our components, better known as props or properties in most frameworks, is the interface for developers. There are some different API types we can define for other developers.
- Configuration: interfaces that allow developers to determine how our UI component should look and act. These are often static values that do not change based on user interaction. Examples are
classNameor
usePortal;
- Data: data often lives higher in the component tree. These interfaces allow data to be present and used in our component. These flows are uni-directional. An example is the
valueproperty;
- Actions: sometimes we need to invoke changes higher in the component tree. This requires callback functions to pass through the API. An example is the
onChangeproperty.
Note: to be in line with modern frameworks, I both use the terms properties and API
State
State is a mutable object that dictates the behavior and UI of our component. It is often combined with data received through the API. In the example below, we have a modal component with an incorporated button. When clicking the button, we set the value of
show to
true. Now our modal becomes visible for the user.
function MyModal (props) { const [show, setShow] = useState(false); const handleShow = () => setShow((s) => !s); return (<br/> <> <button onClick={handleShow}>...</button> {show && <Modal onClose={handleShow}>...</Modal> </> ); }
The addition of a state to a component makes it sometimes easy to introduce bugs. The data and action properties are part of the 'data-flow'. But we often interrupt this with our state by copying values from the data properties into our state. But what happens if the values change? Does our state also change? Should it? Look at the example below look of what happens when
showModal updates. If
MyComponent is already part of the component tree, then nothing happens. We have interrupted the data-flow. Don't.
function MyModal({ showModal }) { const [show, setShow] = useState(showModal); if (show) return null; return <Modal onClose={handleShow}>...</Modal>; }
Actions
As you can see in the diagram, actions link everything together. They are functions harboring small pieces logic. User interaction (e.g. a button click) trigger actions. But life-cycle methods, as described later, also trigger actions. Triggered actions can use data from the state and properties in their execution. Actions can come in many forms:
- Actions defined inside the component as a separate function;
- Actions defined in the life-cycle method of the component;
- actions defined outside the component and used in many components. Good examples are the actions within a module of the scalable architecture.
Below you can see part of a small React component example with two different actions. The first action changes the state on interaction (e.g. typing in an
<input /> field). The second action triggers the changes. It removes the modal, it makes an external call to a server to save the values and resets the internal state.
function MyComponent(props) { const [show, setShow] = useState(true); const [state, setState] = useState(); const save = useMyApiCall(...); function handleChange(value) { setState((old) => ({ ...old, key: value }); } function handleClose() { setShow(false); save(state); setState(); } return <>...</>; }
Note: the above component has some small flaws, as does two different state updates in one action. But, it fits its purpose.
Lifecycle
User inaction results in changes in the state of our component, or higher in the component tree. Data received through the API reflect these changes. When change happens, our component needs to update itself to reflect these changes. Or it needs to re-render. Sometimes, we want your component to execute extra logic when this happens. A so-called 'side-effect' needs to be triggered. of the changing values.
A simple example is a search component. When our user types, the state of the component should change, invoking a re-render. Every time we type, we want our component to perform an API-call. We can do this with the
onChange handler of
<input />. But what if our API-call depends on a value provided through the properties? And what if that value changes? We need to move our API-call to an update life-cycle method, as you can see below.
function SearchComponent({ query }) { const [search, setSearch] = useState(''); useEffect(() => { myApiCall({ ...query, search }); }, [query, search]); const handleSearch = (e) => setSearch(e.target.value); return <input value={search} onChange={handleSearch} />; }
Updates are not the only life-cycle methods. We also have the initialization of the component or the mounting of the component. Life-cycle methods trigger after rendering. This means that the initialization happens after the initial render. We have the life-cycle method for when a component is removed from the component tree. It is unmounted.
Most times, the logic called in life-cycles methods can be shared with other life-cycle methods or with handlers in the UI. This means we are invoking actions in our life-cycle methods. Actions, as illustrated, can cause changes in the state. But, life-cycle methods are called after state changes. Calling state-changing actions might cause a re-rendering loop. Be cautious with these types of actions.
The UI
The UI describes what we want our users to interact with. These interactions, such as clicking on a button, trigger actions. It results from the rendering of our UI component. State changes or changing properties trigger the rendering. It is possible to trigger some 'side-effects' when this happens in the components' life-cycle methods.
It is often possible to add logic to our rendering. Examples are conditional visibility or showing a list of data with varying sizes. To do so, we need logic, rendering logic. This be something simple as using a boolean value from the state, or use an
array.map() function. But sometimes we must combining many values in our rendering logic or even use functions to help us. In such a case, I would take that logic outside the rendering function itself as much as possible.
function MyModal ({ value }) { const [show, setShow] = useState(false); const showModal = show && value !== null; return ( <> <span>My component!</span> {showModal && <Modal onClose={handleShow}>...</Modal> </> ); }
Conclusion
When building our components, we can use various building blocks that work together. On both ends, we have interfaces for different audiences. We allow developers to interact with our UI components and change their behavior. On the other side, we have users interacting with our components. Different elements inside a component link these two interfaces together.
This article was originally posted on kevtiq.co
Posted on by:
Kevin Pennekamp
👋 Hey, I'm Kevin. I'm a Dutch software engineer. I love CSS, front-end architecture, engineering and writing about it!
Read Next
My website now loads in less than 1 sec! Here's how I did it! ⚡
C M Pandey -
7 security tips for your React application. 🔐
Vaibhav Khulbe -
How knowledgable you are about React? See common mistakes people make
adam klein -
Discussion
nice article | https://dev.to/vycke/ui-component-anatomy-the-architecture-of-a-component-14pc | CC-MAIN-2020-34 | refinedweb | 1,295 | 60.31 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.