text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
The switch case is a very important tool in C. It is a looping statement in programming. It helps us reduce the use of if/else when there are a number of conditions. In This article we will understand Switch Case in C. Following are the pointers will be discussed in this article,
So let us get started then,
Switch Case In C
In a switch statement, we pass a variable holding a value in the statement. If the condition is matching to the value, the code under the condition is executed. The condition is represented by a keyword case, followed by the value which can be a character or an integer. After this, there is a colon. Each case has a break statement, that makes the control to leave the loop. If no matching case if found it executes the default statement.
Syntax:
</p> switch (n) { case 1 : //execute if n=1 break; case 2 : //execute if n=2 break; case 3 : //execute if n=3 break; default: // execute this code if none of the cases match } <p style="text-align: justify;">
This is the way in which a switch case is used.
Remember that the variable passed to the switch must hold a character or an integer value, it must not be complex but only simple value.
Execution Of Switch case In C
If the value of n is 1 then the code under case 1 will be executed and it breaks from the loop using the break statement. Similarly, if the value is 2 then, it will first check for case 1 if it does not match the control will go to case 2. If there is a match it will be executed and followed by a break statement. If after checking all cases, no condition matched then the default condition is executed.
The use of break is to prevent the other cases from being executed even after finding the match and executing the statements under it. If there is no break statement then, the statement under the case below it is also executed. Each case should have the break statement except the default. Break of default is optional.
The default case is optional but, it is a good practice to have a default case.
Example switch code:
#include <stdio.h> int main() { int m=0; printf("enter your choice:n 1.book a car:n 2.book a bus:n 3.book a flight:n"); //accepting user choice scanf(" %d",&m); //passing choice in switch case switch (m) { case 1: printf("Car is booked"); break; case 2: printf("Bus is booked"); break; case 3: printf("Flight is booked"); break; default: printf("Invalid input"); break; } return 0; }
Output:
If a matching case is found.
data-src=
The program above is a basic demonstration of a switch case in C. We take input from the user based on the choices he has. The user choice is sent in the switch statement. The switch statement checks if there is a matching case. In the above code, the user first selects 3, so the statements under that block are executed, that is “booked a flight”. If the user enters a value beyond the range, then the default case is executed. In the above example, the user enters 7 and the statement under default is executed.
With this we come to the end of this article on ‘Switch Case. | https://www.edureka.co/blog/switch-case-in-c/ | CC-MAIN-2019-35 | refinedweb | 568 | 72.05 |
husk-scheme
R5RS Scheme interpreter, compiler, and library.
This package is not currently in any snapshots. If you're interested in using it, we recommend adding it to Stackage Nightly. Doing so will make builds more reliable, and allow stackage.org to host generated Haddocks.
Husk is a dialect of Scheme written in Haskell that implements a superset of the R5RS standard and a large portion of the R7RS-small language. Advanced features are provided including continuations, hygienic macros, libraries,.
More information is available on the husk website.
Installation
- Prerequisites: You will need the [Haskell Platform]() if you don't already have a recent copy installed.
- Install Husk using [cabal]():
cabal update cabal install husk-scheme
- Adjust your PATH: Before running Husk you may also need to add the cabal executable directory to your path. On Linux this is `~/.cabal/bin`.
Now you are ready to start up the interpreter:
justin@my-pc$ huski _ _ __ _ | | | | \\\ | | | |__ _ _ ___| | __ \\\ ___ ___| |__ ___ _ __ ___ ___ | '_ \| | | / __| |/ / //\\\ / __|/ __| '_ \ / _ \ '_ ` _ \ / _ \ | | | | |_| \__ \ < /// \\\ \__ \ (__| | | | __/ | | | | | __/ |_| |_|\__,_|___/_|\_\ /// \\\ |___/\___|_| |_|\___|_| |_| |_|\___| (c) 2010-2014 Justin Ethier Version 3.18 huski> (define (hello) 'world) (lambda () ...) huski> (hello) world
Husk has been tested on Windows, Linux, and FreeBSD.
Documentation
The online user manual provides an overview of the Scheme language as implemented by Husk, including:
- A getting started guide.
- Instructions for using the Haskell API
- An alphabetical listing of the Scheme API.
Directory Structure
docs- Documentation has been moved from here to the
gh-pagesbranch.
examples- Example programs, mostly written in Scheme.
extensions- Haskell-based extensions to Husk.
hs-src- Haskell source code for Husk.
lib- Library portions of Husk written in Scheme.
scripts- Build scripts for Husk and a basic Emacs integration script.
tests- Functional tests for Husk. These can be run automatically by using
make testfrom the main Husk directory.
License
Husk scheme is available under the MIT license.
The interpreter is based on code from the book Write Yourself a Scheme in 48 Hours written by Jonathan Tang and hosted / maintained by Wikibooks.
Changes
v3.19.3
Big Fixes:
Fixed compilation errors in
FFI.hs.
v3.19.2
New Features:
- Allow a
defaultthunk to be passed to
hash-table-ref.
- Added
hash-table-ref/default.
Bug Fixes:
- Fixed
rational?to properly handle floating-point numbers.
- Migrated
string-fill!from a macro to a function. This makes it easier to redefine, for example per SRFI 13.
v3.19.1
Bug Fixes:
- Allow
real-partand
imag-partto work with all types of numbers. For example,
(imag-part 2.5)no longer throws an error.
- Applied a fix from Rohan Drape to allow
Compilerand
Numericalmodules to build in GHC 7.10.1.
v3.19
New Features:
Added support for displaying call history when an error is thrown:
huski> ((lambda () (vector-length "v"))) Invalid type: expected vector, found "v" Call History: #0: (vector-length "v") #1: ((lambda () (vector-length "v")))
Sorry this took so long, as it should be a tremendous help for debugging!
- Print the number of received arguments when displaying an "incorrect number of arguments" error message. Also unbox any objects before displaying the error message.
- Allow
read-allto read from
stdinwhen no arguments are received.
Bug Fixes:
- Fixed bugs in
open-input-stringand
open-byte-vectorthat prevented a variable being passed as the "input" argument. Thanks to Dan Cecile for the bug report!
- Fixed a bug that could cause an infinite loop during macro expansion. Thanks to Dan Cecile for this report as well.
- Return the empty string from
string-appendif no arguments are received, instead of throwing an error.
- Throw an error when a function that takes a variable number of arguments is called without the minimum number of required arguments. For example
(map list)should throw an error because at least 2 arguments are required.
- Use
System.Processinstead of deprecated
System.Cmd.
v3.18
New Features:
- Added
exitfrom R7RS.
- Added support for SRFI 28 - Basic Format Strings.
Bug Fixes:
- Fixed bugs with
syntax-ruleswhere:
- A literal identifier may not have been matched in a sub-macro if macro hygiene renamed the input.
- The environment of macro definition may be overwritten during expansion of a
syntax-rulesmacro contained in another macro. This could cause macros defined in a library - but not exported from the library - to incorrectly fail to expand because they are not in scope.
for-eachno longer throws an error when an empty list is received.
- In compiled code, the
let-syntaxand
letrec-syntaxforms are now available to
evalat runtime.
- Added several missing I/O functions to the export list of the
(scheme base)library.
- bholst added
Unpackerto the exports from
Language.Scheme.Primitives, as it is required by
unpackEquals.
- bholst fixed many comments in the Haddock documentation.
v3.17.1
Added
call-with-portfrom R7RS.
Refactoring:
- Improved passing of extra arguments within the interpreter by removing the
extraReturnArgsparameter from
Continuationand adding it as an extra parameter to the
continueEvalfunction. That way a new
Continuationobject does not need to be created each time the function is called.
- Reduced size of compiled code by approximately 10%.
Bug Fixes:
- Added error checking to many I/O functions to prevent crashes when using a port that has already been closed.
- Added optional start/end arguments to
string->vector,
vector->string,
vector-copy!, and
string-copy!.
v3.17
- Added support for
define-record-typefrom R7RS and SRFI 9. This syntax allows creation of new disjoint types supporting access to multiple fields.
- Added support for parameter objects from R7RS and SRFI 39. See dynamic bindings in the user manual for more information.
- Added a
(scheme process-context)library containing the following functions:
Bug Fixes:
- Fixed a macro bug where the last element of a pattern's improper list may not be matched correctly if there is an ellipsis earlier in the list.
- Prevent infinite recursion when evaluating a pointer that contains a pointer to itself.
- Fixed the compiler to add full support for splicing of
begindefinitions.
- Updated
dynamic-windto return the value from the
duringthunk instead of the
afterthunk.
v3.16.1
Allow import of a library in the same directory as a program. For example to import
lib.sld:
(import (lib))
Bug Fixes:
- Husk no longer throws an error during expansion of a library macro that references another macro which is not exported from the same library.
- Fixed a bug where a
syntax-rulesmacro's literal identifier would not match the input when both identifiers are equal and both have no lexical binding.
v3.16
Improved import of libraries:
- Husk now detects cyclic dependencies and throws an error instead of going into an infinite loop.
- Each library is only evaluated once during the import process.
beginnow has the ability to evaluate contained expressions and definitions as if the enclosing
beginwere not present, per R7RS. For example:
huski> x Getting an unbound variable: x huski> (begin (define x 28) x) 28 huski> x 28
Added the following R7RS I/O functions:
get-output-bytevector
get-output-string
open-input-bytevector
open-input-string
open-output-bytevector
open-output-string
read-string
write-string
Added an
-icommand line option to
huski. This option will start the interactive REPL after a file specified on the command line is executed, and has no effect if no file is specified.
Haskell API:
The
Portdata type has been extended to include an optional in-memory buffer:
Port Handle (Maybe Knob)
These changes are isolated in husk, but if your code uses any
Portconstructors, you would need to change them, EG:
Port _ Nothing.
v3.15.2
Bug fixes:
- The
(husk random)library's
randintfunction no longer throws a runtime error when called.
- The
newlinefunction now accepts a port as an optional second argument, instead of throwing an error.
v3.15.1
This is a small bug fix release:
Preserve macro hygiene when using code that contains explicit renaming macros contained within syntax-rules macros. Previously, the syntax-rules system would not pass renamed variables across to the ER system. So an identifier could be renamed by syntax-rules but the ER macro would then have no knowledge of the rename and would be unable to use
renameto make the identifier hygienic. For example, the code:
(let ((unquote 'foo)) `(,'bar))
Should evaluate to
((unquote (quote bar))).
- Added support for multi-line input to
huski.
- Fixed GHC compiler warnings when building with
-Wall.
v3.15
The big change for this release is an online User Manual based on the R7RS, that documents Husk's Scheme and Haskell APIs, and explains the Scheme language as implemented by Husk. This is going to be a work-in-progress but is mostly complete, and can be used immediately as a reference.
In addition, many smaller fixes and enhancements are included:
- Improved library support so that examples no longer require running in a special mode.
- Added missing functions to
(scheme base).
- Added support for the
(scheme case-lambda)and
(scheme r5rs)libraries.
- Added libraries for SRFI 1, 2, and 69 so that these features are available via
import.
- For example:
(import (srfi 1)).
- Added
exact-integer-sqrtfrom R7RS, using the Chibi scheme reference implementation.
- Added
let-valuesand
let*-valuesfrom R7RS.
- Added the following I/O functions:
binary-port?
close-port
input-port-open?
open-binary-input-file
open-binary-output-file
output-port-open?
read-bytevector
textual-port?
u8-ready?
write-bytevector
- Allow character and string comparison predicates (such as
string=?and
char=?) to support more than two arguments.
- Fixed
cond-expandto support
and/
orclauses.
- Renamed
char-upperand
char-lowerto
char-upcaseand
char-downcaseto match the Scheme specs.
v3.14
Made the following enhancements to improve R7RS7RS7RS mode.
- Added most of the standard R7RS libraries:
(scheme base),
(scheme char), etc.
Extended syntax-rules to allow another identifier to be used to specify the ellipsis symbol, per R7RS. For example,
:::could be used instead:
(define-syntax and (syntax-rules ::: () ((and test1 test2 :::) (if test1 (and test2 :::) #f))))
- Added the following functions from R7RS:7RS.
-7RS style library syntax in the compiler. This enables functionality that was previously only available in the interpreter, and sets the stage for husk to begin adding support for R7RS.7RS:
(scheme r5rs)- Exposes the full husk R5RS environment
And, R5RS versions of the scheme libraries have been relocated to underneath
(scheme r5rs). Each of these libraries exposes a husk subset of the functions recommended by R7RS:
5RS.
v3.7
A major change for this release is the introduction of Scheme libraries using R7RS library syntax. For an example of how to use libraries, see
examples/hello-library/hello.scm in the husk source tree. Note that since R7RS7RS Storage Model:
> >. >
Internally husk uses Haskell data types, so the husk model differs slightly from the one in R5RS -5RS have been implemented.
v3.5.6
Enhanced the compiler to accept
load-ffias a special form, so a compiled version of a program does not have to wait for a module to be dynamically loaded. Instead, the module is included at compile time. This offers a nice speed improvement:
$ time huski ffi-cputime.scm Seconds of CPU time spent: 2.756171
$ time ./ffi-cputime Seconds of CPU time spent: 2.4001e-2
Allow a hash table to be defined directly using
#hash(alist)- for example,
#hash()for an empty table or
#hash((a 1) (b . 2))for a table with two elements. This is not part of R5RS5RS5RS.
- Allow a continuation to call into another continuation via call/cc - see R5RS5RS.
- Replaced the macro for
casewith the one from R5RS.5RS,5RS.7RS5RS, | https://www.stackage.org/package/husk-scheme | CC-MAIN-2018-05 | refinedweb | 1,943 | 57.57 |
One very disturbing trend that I have found with JDO and EJB 2.0 is simply the lack of XML support...
Without getting into major .NET vs. Java arguments, Microsoft has implemented a very common sense design pattern that I am anxious to see present itself in the Java community.
It revolves around ADO.Net, (Microsoft's first real solution to database persistence in my opinion). What basically happens is that the developer defines business objects using XML Schemas. The schemas are then compiled into binary objects called DataSets, (Kinda like JDO). However, there is a really handy dandy feature that allows you to marshall and unmarshall XML content from that object...
In the Java World I would really like to see an even better implementation...
- Developer defines XML Data Object with XML Schema, (This is already present in the JAXB specification minus the schema support. Only DTDs were supported last time I checked).
- A JDO or Entity Bean then Extends or has an interface to this JAXB object.
- All Persistence functions are managed by JDO or the Entity Bean...
- The JDO or Entity Bean would then contain the XML Marshalling and UnMarshalling features found in JAXB..
- A Value Object could be retrieved in binary or XML form from the JDO/Entity Bean Object.
- A JSP could easily extract the XML Schema and XML Data from the Value Object and perform operations on it such as XSL transformations.
I think that anyone reading this will say, "But you can already do that in Java..." This is true, Value Objects have been being used for quite some time and also XML patterns similar to this.
My argument is essentially that a standardization of this pattern would be nice. For example, as far as I know, JDO and Entity Beans have no standard way for interfacing to JAXB objects... (Nor were XML Schemas supported in JAXB the last time I checked.)
I have been implementing this design pattern for quite some time and it makes development a whole lot faster, (After you write the BMP Entity Bean...) It would be really nice if JAXB support was built into CMP Entity Beans and JDO.
Am I completely missing something here? Is this pattern already supported in IDEs and I have been coding this stuff by hand for no reason? Please let me know...
Thanks.
Discussions
J2EE patterns: JAXB/XML Value Object Design Pattern
JAXB/XML Value Object Design Pattern (22 messages)
- Posted by: Elika Kohen
- Posted on: March 03 2002 13:28 EST
Threaded Messages (22)
- JAXB/XML Value Object Design Pattern by Stefan Zobel on March 03 2002 20:31 EST
- JAXB/XML Value Object Design Pattern by Jon Inman on March 04 2002 11:07 EST
- JAXB/XML Value Object Design Pattern by Elika Kohen on March 05 2002 17:50 EST
- JAXB/XML Value Object Design Pattern by ganesh babu on March 06 2002 03:48 EST
- JAXB/XML Value Object Design Pattern by Elika Kohen on March 06 2002 07:09 EST
- JAXB/XML Value Object Design Pattern by Richard Seidenstein on March 07 2002 06:06 EST
- JAXB/XML Value Object Design Pattern by Avi Abrami on March 07 2002 02:12 EST
- JAXB/XML Value Object Design Pattern by Swapnil Shah on May 02 2002 03:45 EDT
- JAXB/XML Value Object Design Pattern by Matthew Adams on March 07 2002 11:18 EST
- JAXB/XML Value Object Design Pattern by null on March 07 2002 12:40 EST
- JAXB/XML Value Object Design Pattern by Elika Kohen on March 07 2002 16:00 EST
- JAXB/XML Value Object Design Pattern by Daniel Work on March 07 2002 11:53 EST
- JAXB/XML Value Object Design Pattern by Lu Huang on March 11 2002 11:54 EST
- JAXB/XML Value Object Design Pattern by Nitish Verma on March 08 2002 20:14 EST
- JAXB/XML Value Object Design Pattern by Elika Kohen on March 09 2002 10:21 EST
- JAXB/XML Value Object Design Pattern by Noam Borovoy on March 11 2002 07:10 EST
- JAXB/XML Value Object Design Pattern + JDO issues by Olivier Brand on March 12 2002 16:50 EST
- JAXB/XML Value Object Design Pattern + JDO issues by Jonathan Gibbons on March 26 2002 09:20 EST
- JAXB/XML Value Object Design Pattern + JDO issues by Neo Gigs on April 13 2002 12:59 EDT
- JAXB/XML Value Object Design Pattern + JDO issues by Elika Kohen on April 18 2002 03:10 EDT
- JAXB/XML Value Object Design Pattern + JDO issues by Elika Kohen on April 18 2002 03:18 EDT
- JAXB/XML Value Object Design Pattern by PThakur Thakur on October 29 2003 03:20 EST
JAXB/XML Value Object Design Pattern[ Go to top ]
Not sure if this should be called a design pattern in the strict sense.
- Posted by: Stefan Zobel
- Posted on: March 03 2002 20:31 EST
- in response to Elika Kohen
Nevertheless, I think it would be interesting if you posted some more details about your pattern/implementation, since it could be quite handy for some use cases.
Regards,
Stefan
JAXB/XML Value Object Design Pattern[ Go to top ]
Thats woul be kick b***. If I understand you correctly.... It would be easier as a client to know how an XML/ValueObject will parse as an autonomous Java object vs. loading 7Mb of Java XML parsing libraries and stepping through a DOM or SAX API.
- Posted by: Jon Inman
- Posted on: March 04 2002 11:07 EST
- in response to Elika Kohen
I know nothing of M$oft but that sounds like a good feature.
The only API I know is the SOAP message API, it is small and lets one manipulate just SOAP headers, methods, etc.(if I am understanding correctly...)
JAXB/XML Value Object Design Pattern[ Go to top ]
I am not exactly sure how I could further define this without using UML diagrams.... In short this is essentially how we have implemented
- Posted by: Elika Kohen
- Posted on: March 05 2002 17:50 EST
- in response to Jon Inman
this pattern in the past...
To try and make this a little clearer, I will try to describe the non database version of the pattern...
However, first I will state the summary so that the description may be easier to follow.
/*****************************************************/();
/***************************************************/
This is is the pattern without a business tier involved. (basically an XML/XSL Control pattern... Most people implement a caching mechanism
of some sort....)
- A JSP invokes a JavaBean passing in two arguments; an XML data file, and an XSLT source.
- The JavaBean performs the translation using standard APIs and returns the new XHTML, (We use XHTML exclusively now), markup to the calling
JSP...
- The JSP then appends the content to the page...
(As you can tell, all of the APIs to do this are already present within Java, (JAXP in particular, etc).
/***************************************************/
XML/XSL Control with Database Support
A JSP or Servlet invokes a WebService which returns XML records from a database. (For sake of the argument this WebService is called
AccountService and a WebMethod called getAccountInfo() returns all of the user's information in XML form...)
Well, this is Microsoft's version... This is the exact C#.Net syntax.....
/************************************************/
// This is the control that performs the translation... I recommend a JavaBean...
System.Web.UI.WebControls.Xml Xml2;
// AccountService.AccountService is essentially a class that wraps the Soap Invocation of the AccountService WebService.... It contains all
of the URL information, etc...
AccountService.AccountService accountService = new AccountService.AccountService();
// This class represents a Soap Header that eventually gets sent to the AccountService WebService....
AuthenticationHeader accountHeader = new AuthenticationHeader();
// This sets the properties in the Soap Header....
accountHeader.userName = User.Identity.Name;
accountHeader.ticket = Request.Cookies.Get(".ASPXAUTH").Value;
// This sets the Soap Header in the AccountService Wrapper so that when the WebService is actually invoked the Soap Header information gets
sent too....
accountService.AuthenticationHeaderValue = accountHeader;
// This is the cool part. This is ADO.Net. The GetAccountInfo() Web Method is invoked on the WebService and returns an XML representation
of the data. AccountDataSet is an ADO DataSet that I created from an XML Schema... The .Net framework automagically casts the XML response
into the AccountDataSet object.
AccountDataSet accountDataSet = accountService.GetAccountInfo();
// Now the following part is very convoluted because the .Net Framework does so much under the covers that noone really knows whats going
on... What essentially happens is that the XML gets extracted from AccountDataSet and gets sent sent to the XSL transform Control.... The
product is then automatically placed in the response object (I am not going to explain this here...)
XmlDataDocument doc = new XmlDataDocument(accountDataSet);
XslTransform trans = new XslTransform();
trans.Load(Server.MapPath("../XSL/Account.xsl"));
Xml2.Document = doc;
Xml2.Transform = trans;
/***************************************************/
The point...
It would be really cool to do something like this in Java...
myJAXBObject = accountService.GetAccountInfo();
---------- or ----------
myJDOObject = accountService.GetAccountInfo();
/***************************************************/
Unfortunately, as far as I know, none of the specification teams are trying to provide this support. (I sincerely hope I am wrong).
what I have had to do instead is this....
// I am using a proprietary DataObject which is essentially a mix of JDO and JAXB here... :(
myProprietaryDataObject.setXML(accountService.GetAccountInfo());
myTranslator.XSLSource = "../../XSL/AccountDisplay.XSL";
// This is really crazy pointer operation is covered up to avoid copying XML Data Again...
myTranslator.setXMLData(myProprietaryDataObject);
// htmlContent is a StringBuffer object....
htmlContent.append(myTranslator.translate());
/***************************************************/
This pattern is repeated again in the business layer...
This is going to look really crazy because this is not a standard.
As a matter of fact this is really EJB and JDO all rolled up into one... (Into what I believe they should be).
We compile an XML Schema document into a regular Java class... This class has configuration data which is intialized on instantiation
from an XML file that contains Database connection strings, Primary Key, Foreign Key, and constraint information for relationships...
This object inherits a class called JavaMug... (Essentially a JDO like class on XML Steroids with database and transaction support.)
// This class that gets created is:
public class AccountData extends JavaMug {}
We then can create two types of objects... One is a WebService EJB and the other is just a regular Entity Bean that does not comply with any
specification. However, we do provide the interfaces that can be accessed by Servlets just like regular Entity Beans. We wanted some kind
of portability).
// We rarely use the EJB interface to the Business Object, (JavaMug), since the SOAP interfaces are much faster than CORBA and the lack of
current EJB spefication support...
public class AccountEJB extends AccountData implements CustomEJB {}
public class AccountService extends AccountData implements WebService {}
Obviously the implementations are not provided in this example.
However, what happens is:
- Our compiler tool takes a UML diagram or database table that shows instance fields and relationships and creates an XML Schema from it.
- The schema gets compiled into our custom JavaMug object which provides the JDBC, JTS and XML support.
- Creates the basic implementation for the WebService or Entity Bean providing the appropriate get/set methods...
- One of the Methods provided is: getJavaMugVO(); which returns the parent object as a value object, (Binary and serializeable).
- Another Method that is provided is: getXMLVO(); which returns the XML data as a value object, (StringBuffer).
- In the case of a WebService all of the XML data is sent with the appropriate SOAP Headers and what-not...
- Another nice feature is that you can instantiate these JavaMug objects from an XML document: AccountData myAccountData = new
AccountData(XMLDocument); This also provides validation support based on the Schema that it was compiled with...
/*****************************************************/();
I hope this clears things up a bit....
/***************************/
P.S PLEASE BECOME ACTIVE IN THE JAVA COMMUNITY PROCESS.
If anyone hasn't noticed, new standards are taking forever to be completed...
JAXB/XML Value Object Design Pattern[ Go to top ]
We use Castor to do the same except that we don't have to transform XML using XSL.
- Posted by: ganesh babu
- Posted on: March 06 2002 03:48 EST
- in response to Elika Kohen
We are waiting for XML Schema support in JAXB. I agree with you that use of XML as VO must be standardized one way or other.
Ganesh
JAXB/XML Value Object Design Pattern[ Go to top ]
Where do you get Castor?
- Posted by: Elika Kohen
- Posted on: March 06 2002 19:09 EST
- in response to ganesh babu
JAXB/XML Value Object Design Pattern[ Go to top ]
Castor can be found at. It supports generating objects from an XML Schema and mapping these objects to the database using JDO. Castor is a more mature version of JAXB.
- Posted by: Richard Seidenstein
- Posted on: March 07 2002 06:06 EST
- in response to Elika Kohen
JAXB/XML Value Object Design Pattern[ Go to top ]
Oracle have something they call "Business Objects for Java"
- Posted by: Avi Abrami
- Posted on: March 07 2002 02:12 EST
- in response to Elika Kohen
(BC4J) that comes as part of their JDeveloper (9i) product.
I don't have first hand experience using this product, but
from what I know of it, it uses XML files to facilitate
object-relational mappings between database tables and CMP
entity beans.
If you're interested, you can find out more from Oracle's
"Technet" site at:
Cheers,
Avi.
JAXB/XML Value Object Design Pattern[ Go to top ]
Infact its pretty easy using JDEVELOPER 9 since it uses
- Posted by: Swapnil Shah
- Posted on: May 02 2002 03:45 EDT
- in response to Avi Abrami
XML and SCHEMA objects to represent DATABASE data or snapshots as called in programmers perpestive.
Its good tool where u can easily bind ur datalayer with
XML and uses different patterns inbuilt to give diff
customizable views.
It also facilitates easy creation and management of EJBS
also OCBJ and BC4J are the mirror of todays Patterns
and Services.
But one think is that the code generated at the back
is fairly Oracle specific so if u want to merge the
tool with different applications u should have to
customize a bit.
TO know more about it just visit technet.oracle.com
a great sight for Oracle lovers.
Regards
Swapnil S
JAXB/XML Value Object Design Pattern[ Go to top ]
I completely agree with you on the use of XML Schema as the basis of code generation to produce Java (or C#, or C++, or other) Value Objects that marshal XML-based data.
- Posted by: Matthew Adams
- Posted on: March 07 2002 11:18 EST
- in response to Elika Kohen
However...
It seems to me that the intent of Value Objects is to provide a way to provide data in a Java-specific and serializable way, primarily to remote clients. The focus of these objects is only data, not behavior; one can think of them as pseudointelligent structures.
I can't speak much to entity beans, as I have avoided using them since EJB 1.0, but I can speak to JDO persistence capable objects. These objects are intended to represent both data and behavior in your business domain. They represent the fundamental concepts in your domain, and they are fairly independent of process, fine-grained in nature, and only meant to be accessed locally. I would recommend using session bean facades and message-driven beans that manipulate JDO persistence capables according to a request-response paradigm, where the request and response objects themselves are manifested by code generated from XML Schema.
The request and response schemas are driven almost exclusively by the use cases of your system, whereas the object model of the JDO persistence capables are driven by the analysis of your problem domain. While it is appealing to attempt to represent the state of all of your fine-grained objects, often that state would amount to the full closure of your graph of persistent objects; the decision of where you stop becomes arbitrary, and inevitably will not always be appropriate for some use case. Instead, let the use cases themselves serve as requirements and as drivers of the request and response schemas. When I first saw SOAP, with its request and response schemas, I noticed how cleanly this model works with it.
If you do use XML Schemas to generate code, it is helpful to do a few things (in no particular order):
1. Ensure that your XML Schema fits well into an object model. Don't use nested complex types (<xsl:element<xsl:complexType>...). Use complex types (<xsl:complexType...) to define types, and use elements that declare themselves to be of a given type (<xsl:element...).
2. Consciously decide whether or not to use reflection during marshalling. If performance is of the utmost concern, you might consider having your code generator generate code that does its own marshalling. If code reuse is more important, consider having the code generator generate code to delegate the parsing to a reflection-based marshaller.
3. Implement Serializable. This ensures that your Value Objects can cross the wire.
4. Make the schemas self-contained to a reasonable degree. This is a consequence of implementing Serializable, and is important when working in an asynchronous, message-driven environment.
5. Include object IDs. This allows clients to find objects referenced in the response schema for later display or manipulation.
--Matthew
JAXB/XML Value Object Design Pattern[ Go to top ]
Have you looked at Castor? It has its own JDO and Java/XML implementations (which includes a XML to Java source generator) and it supports XML Schema.
- Posted by: null
- Posted on: March 07 2002 12:40 EST
- in response to Elika Kohen
JAXB/XML Value Object Design Pattern[ Go to top ]
I just looked at the Castor site... This is very similar to what we have also done....
- Posted by: Elika Kohen
- Posted on: March 07 2002 16:00 EST
- in response to null
Perhaps we could start another "Java Community" at Apache or somewhere to produce a real set of Open APIs... This way the community could manage the new technologies instead of letting the JCP sit on it forever.
JAXB/XML Value Object Design Pattern[ Go to top ]
We're trying something similar with Breeze (similar to castor) and Oracle's XSU. Haven't gone very far in to it though .
- Posted by: Daniel Work
- Posted on: March 07 2002 23:53 EST
- in response to Elika Kohen
JAXB/XML Value Object Design Pattern[ Go to top ]
Enhydra's Zeus is yet another program that does things similar to Castor and Jaxb (). Zeus 3.1beta supports XML Schema (and DTD of course). And we have used both Zeus and Jaxb in our projects.
- Posted by: Lu Huang
- Posted on: March 11 2002 23:54 EST
- in response to Elika Kohen
I think the original message in this thread has a very good point. It really makes sense that these XML data binding frameworks lead to a standard way of marshaling and unmarchaling XML files to/from Java objects. Jaxb has the potential to be included into J2EE 1.4. But its progress is very very slow. Making persistent Java objects from XML documents seem a long way from standardization. I certainly want to hear more about it...
JAXB/XML Value Object Design Pattern[ Go to top ]
Excellent posting, its the kind of .NET and ONE comparisons I would like to learn about.
- Posted by: Nitish Verma
- Posted on: March 08 2002 20:14 EST
- in response to Elika Kohen
Forgive my ignorance, but I thought that JAXB wasn't really available yet. How did you go about getting a reference implementation of JAXB?
Thanks,
JAXB/XML Value Object Design Pattern[ Go to top ]
You can get JAXB here....
- Posted by: Elika Kohen
- Posted on: March 09 2002 10:21 EST
- in response to Nitish Verma
The WebServices stuff can be found here...
JAXB/XML Value Object Design Pattern[ Go to top ]
You might want to look at what was discussed in the XML aware value object thread:
- Posted by: Noam Borovoy
- Posted on: March 11 2002 07:10 EST
- in response to Elika Kohen
Although it solves a somewhat different problem - there is much common ground.
also see JDOM at:
for an alternative XML API.
JAXB/XML Value Object Design Pattern + JDO issues[ Go to top ]
THis pattern is exactly the same than I posted a few months ago. See the link:
- Posted by: Olivier Brand
- Posted on: March 12 2002 16:50 EST
- in response to Elika Kohen
I have followed the SUN pattern format, so it should be easier for blueprints aware engineers.
I have not used JAXB since it does not support schema. Castor is once again an excellent framework. I am currently using it using their JDO implementation which is excellent !!
Defining delegates and DAO Factories would allow to use any persistent backend engines (EJB vs JDO).
JDO is much easier to learn than EJBs. Wrapping JDO objects into a Session Bean should achieve the same thing than writing Entities.
JAXB/XML Value Object Design Pattern + JDO issues[ Go to top ]
Its interesting, your basic point is:
- Posted by: Jonathan Gibbons
- Posted on: March 26 2002 09:20 EST
- in response to Olivier Brand
You want a STANDARD WAY to get XML from xxx.
Nothing wrong with that. Hope it happens.
BUT, several things I noted:
// We rarely use the EJB interface to the Business Object, (JavaMug), since the SOAP interfaces are much faster than CORBA and the lack of current EJB spefication support...
REALLY? SOAP are faster than corba? i.e. using an ascii xml stream with dom/sax marshalling and schema checking is faster than corba. I'm amazed. Not sure I believe, but if you have done the comparison I guess I must.
AND.
>// Presentation Layer:
>TranslationControl myTranslationControl = new >TransLationControl("../XSL/AccountStyleSheet.xsl", myData);
>
>httpContent.append(myData.translate();
Surely the only point at which you need XML is at the point of translation. So all you actually need are value objects than can render themselves as xml. This means you can buy into EJB's and app servers, no need of bespoke introspection factories or whatever. All you need is the 'marshallToXml' call on the value object.
This means value objects are still in use which use less bandwidth, can be reused and contain validation, can be serialised and use more efficient marshalling/unmarshalling.
So, all your code gen has to do is produce value objects with the XML marshalling code in. Which is your original point I guess, you want the call to have a 'standard' name.
Jonathan
=================================================
The LowRoad EJB code generator does all this and more...
JAXB/XML Value Object Design Pattern + JDO issues[ Go to top ]
Can I uses EJB to get some data from a XML file? My intention is that the Session bean would do some business logic and pick the correct stored procedure, and its parameters information from a XML file, ready to be parsed for db calling.
- Posted by: Neo Gigs
- Posted on: April 13 2002 00:59 EDT
- in response to Jonathan Gibbons
JAXB/XML Value Object Design Pattern + JDO issues[ Go to top ]
I would suggest using Apache Xindice to get XML data into an EJB... But then again, this is what this whole threade is about. MS SQL Server 2000 and Oracle 9i do emit XML but again, there is no standard way of doing this.
- Posted by: Elika Kohen
- Posted on: April 18 2002 15:10 EDT
- in response to Neo Gigs
JAXB/XML Value Object Design Pattern + JDO issues[ Go to top ]
Yes,
- Posted by: Elika Kohen
- Posted on: April 18 2002 15:18 EDT
- in response to Jonathan Gibbons
The SOAP interfaces that we use are much faster than our EJB/CORBA implemenations for several reasons.
-We don't have to worry about an Object Broker.
-Our SOAP messages are running over TCP and not HTTP at this point.
-We are using the PULL technique for XML parsing instead of regular DOM/SAX methods.
-EJBs are just slow. (Or hasn't anyone noticed?). The whole notion of the Application container being separate from the Web Container is ludicrous. Why? Certainly deploy them on several machines to improve scaleability but why implement the stupid EJB interfaces such as Passivate etc... There are much much better methods for going about doing this. If needed we could start a completely new thread about different techniques for persisting server side state.
We usually use one of three interfaces.
- SOAP/HTTP for Interoperability
- SOAP/TCP for B2B type implementations
- Binary/TCP for performance critical functionality.
JAXB/XML Value Object Design Pattern[ Go to top ]
Hello,
- Posted by: PThakur Thakur
- Posted on: October 29 2003 03:20 EST
- in response to Elika Kohen
JAXB ie Java XML Binding API provides interface and explains the reasons for this API. It is the responsibility of vendor to give JAXB engine that will take care of XML to Binary and vis-a-vis. | http://www.theserverside.com/discussions/thread.tss?thread_id=12246 | CC-MAIN-2014-41 | refinedweb | 4,154 | 61.36 |
I would like to insert the value of a string into a Ruby hash at runtime. While I can do this by putting it inside a function,
def run_code(dynamic_variable)
trigger_template = {
:trigger => "#{dynamic_variable}"
}
.. #do something
end
dynaic_variable
trigger_template
I'll just give you an example of how to load a hash from a file then modify it at runtime. You can either use JSON or YAML to serialize the Hash. For example, this is a YAML file:
--- foo: "bar" bar: "foo"
Then I say
require 'yaml' hash = YAML.load("./my/file.yaml") hash["bar"] = "my new bar val" put hash # => { "foo" => "bar", "bar" => "my new bar val" } | https://codedump.io/share/oxL3w0dB6rGT/1/how-to-add-dynamic-variable-into-a-ruby-hash-at-runtime | CC-MAIN-2018-17 | refinedweb | 107 | 62.98 |
Overriding the Authentication Provider Selection Page
In the first part of this series, we saw how we register and configure our SharePoint site to use Windows Live ID as an authentication provider.
Right now our site has two authentication providers, Windows authentication and Windows Live authentication. When you click the Sign in button, SharePoint will display a page giving you a chance to select the authentication provider you would like to use...
4. Strong name the assembly that you are creating, because you will place it the global assembly cache later.
5. Add a new ASPX page to your project. Copy a page from an existing ASP.NET web application project;.
6. In the code-behind file (.aspx.cs file), change the namespace to match the namespace of your current project.
7. Change the class so that it inherits from Microsoft.SharePoint.IdentityModel.Pages.MultiLogonPage.
8.,. | https://blogs.technet.microsoft.com/meamcs/2012/05/30/internet-facing-sharepoint-2010-site-with-windows-live-id-part-2/ | CC-MAIN-2017-43 | refinedweb | 146 | 57.37 |
Re: [soaplite] apache soap breaks with 0.52
Expand Messages
- --- Paul Kulchenko <paulclinger@...> wrote:
> > btw, there must be a better way than uri_escapingyes, it does. but it doesn't do anything about
> all strings;
> Why do you need to do that? If you specify type
> (string) SOAP::Lite
> will take care about escaping '<', '&' and ']]>'.
>
spacial characters > 0x7F, like german umlauts.
they seem to work ok on the perl side, but again
java is the problem. if the xml parser gets
anything with an umlaut inside, it complains
about not being able to find the next closing tag.
and if java is asked to put an umlaut into a soap
message, it puts out two characters. maybe the
unicode representation of that umlaut.
> > return SOAP::Data->type( 'Array') ->that works. thanks.
> name( daten =>
> You don't need to specify type for arrays.
> SOAP::Lite will do that
> base on value (which is array reference). If you do
> specify Array as
> a type, serializer thinks that it's some custom type
> 'Array' and put
> it into specific namespace (that's why ApacheSOAP
> complains). Just
> do:
>
> SOAP::Data->name(daten => \@ary);
(strange that my old approach did work with
0.51 too. was that the bug?)
__________________________________________________
Do You Yahoo!?
Yahoo! GeoCities - quick and easy web site hosting, just $8.95/month.
- Hi, Werner!
--- werner ackerl <cjbecjbe@...> wrote:
> yes, it does. but it doesn't do anything aboutYes, it doesn't. Details about why and what to do you may find here:
> spacial characters > 0x7F, like german umlauts.
Code examples are in cookbook () or in
conference materials here:
My interoperability tests for special chars don't show any problem
with Java implementations.
> that works. thanks.Low-level mechanism for binding classes to namespaces changed. For
> (strange that my old approach did work with
> 0.51 too. was that the bug?)
every complex type namespace will be choosen from the hash specified
as parameter for maptypes(), and for all other types default
namespace will be used (that's why Array type gets it). Array has
it's own handling because of number of attributes (like arrayType)
that other types don't have.
Best wishes, Paul.
__________________________________________________
Do You Yahoo!?
Yahoo! GeoCities - quick and easy web site hosting, just $8.95/month.
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/1035?xm=1&o=0&l=1 | CC-MAIN-2017-17 | refinedweb | 390 | 76.82 |
Horticulture/Modifying Wikipedia Articles for Use in Horticulture
The best way to get a start on most chapters is to "transwiki" the Wikipedia article on the subject, and then modify it to suit the Manual of Style used in this book. Basing the modules of this book on Wikipedia articles has two important advantages:
- By basing the module on a Wikipedia article, the topic doesn't have to be developed "from scratch". Many Wikipedia articles have been well-edited and copyedited over time, have images to illustrate features of the topic. and so on.
- Because Wikipedia (like Wikibooks) is licensed under the GFDL, there are rarely problems with copyrights.
The main "problem" with Wikipedia articles is simple: they're encyclopedia articles, not book chapters, and so need to be modified, rearranged, and relinked to fit into this book. In order to facilitate this, a number of templates have been developed to fit various topics, which both help suggest a framework into which the sections (or even sentences) of the Wikipedia articles can be put into, as well as helping to create a consistent structure for all chapters, which in turn makes the book a lot more useful for the home reader or classroom environment.
Contents
Step One: Start the module[edit]
The best way to start a transwiki-based module is to first Import the Wikipedia article, and then apply the relevant template and develop the chapter. The only problem with this is that only an Administrator can use the import tool. To get the article imported, simply list it on WB:RFI, or for faster action make the request on our main IRC channel, #wikibooks (on freenode). There are almost always administrators listening on the channel, so just type !admin for attention.
Alternatively, contributors can simply copy-and-paste the Wikipedia article and work on it while awaiting Import. When doing this, please be sure to list both the wikipedia article and the chapter as an Import/Merge request here. Note that this is a bit more difficult for administrators (several steps are involved), but it accomplishes the purpose just as well. Very active contributors who wish to import large numbers of articles should consider simple becoming an administrator (the RFA process on wikibooks is far less stressful than it is on Wikipedia).
If there is no article on Wikipedia, or the article is a fairly useless stub, just start the chapter from scratch using the templates.
Step Two: Templatize[edit]
There are a number of templates available for creating page structures in A Wikimanual of Gardening. They must be used with "subst:" to work correctly, but the templates they include (primarily infoboxes) should not be substituted:
- For modules about individual plants, type {{subst:plantprof}} at the top of the page (usually just above the taxobox), and save.
- For modules about weed plants, type {{subst:weedprof}} at the top of the page (usually just above the taxobox), and save.
- For modules about garden pests, type {{subst:pestprof}} at the top of the page (usually just above the taxobox), and save.
- For modules about plant diseases, type {{subst:phytopathprof}} at the top of the page (usually just above the taxobox), and save.
Step Three: "Bookify"[edit]
"Bookify" is a word occasionally used to mean "change an encyclopedia article into part of a book". The templates for A Wikimanual of Gardening are used to facilitate this process by providing a consistent "narrative structure". Wikipedia articles need to be "dewikified" (wikilinks either removed or replaces with subpage-style links), and for this book they need t be snipped up and re-organized to fit into the fields provided by the templates.
Dewikifying the easy way[edit]
The easiest way to dewikify is to simply open both the module page and the edit page side-by side, and just copy from the module and paste over the wikified version on the edit page. There are a few things to look out for however:
- References need to be pasted around, i.e. they will not render as references from a copy of the text as it appears on the reading view.
- Italics and boldface are also lost in this process
- Images also need to be worked around, for the same reason the references need it. If an image appears as a redlink, see the section on this, below.
- Sometimes it's better to modify the wikilinks to the subpage protocols, rather than simply removing them. See this section for how to do that.
Moving from taxobox to WMOG infoboxes[edit]
- Move the image from the taxobox to the hortibox
- Add information about family
- After {{Hortibox|, {{Pestbox|, etc. (the first line of the template), add the common name of the plant, pest, pathogen, etc.
- Use the "| Genus =" field only when the page is about the genus (if the binomial field is used, don't also use the genus field)
- Fill in as many fields as you are able, leaving the other ones blank (don't delete fields, as they might be filled in later).
Reordering the text[edit]
After dewikifying, move the text into the fields provided by the template.
- If there is a lot of information that's just not at all of interest for horticulture (like long discussions of the uses of chemicals found in a plant, "trivia" sections mentioning every song or movie that mentions a particular flower, etc.), just remove them, though keep in mind that some trivial information (particularly historical and folklore references) might be useful to the student of horticulture as mneumonic devices, as well as being educational about the uses of plants.
- For chapters about plant genera, the long species lists often contained in Wikipedia articles are generally not useful, and serve little purpose outside of cluttering up a page. They can be pasted onto the talk page for further reference, but be sure to use the <pre> </pre> protocol in onder to prevent creating bad redlinks.
Adding material[edit]
After the adaptations are completed, there will almost certainly be gaps to be filled. The more information the better, remember that Wikibooks is not paper. If using external sources (books, journals, other websites, etc.) please be sure to add a link or citation under the "References" section.
Making Chapter Links[edit]
Unlike Wikipedia, Wikibooks uses subpages. Links to other chapters of this book can easily be made using [[../link/]]. Redlinks are fine.
Fixing Image Redlinks[edit]
If an image appears on the Wikipedia article, but does not appear on the Wikibooks page, this means that the image in in the Wikipedia Image namespace, rather than being on commons. If the image is important to the chapter, images can easily be moved to commons using commonshelper on the toolserver.
Categorizing[edit]
A Wikimanual of Gardening uses a deep category structure. General categories are automatically added when using the templates, but sometimes they need minor modification. There are also more specific categories that will eventually be employed using DynamicPageList (DPL) to provide lists for gardeners who are looking for plants for a particular purpose. | https://en.wikibooks.org/wiki/Horticulture/Modifying_Wikipedia_Articles_for_Use_in_Horticulture | CC-MAIN-2017-13 | refinedweb | 1,174 | 57.61 |
"Serge E. Hallyn" <serue@us.ibm.com> writes:> Heh, well I tried several approaches - adding tag_ops to kset, to ktype,> etc. Finally ended up just calling sysfs_enable_tagging on> /sys/kernel/uids when that is created. It's now working perfectly.Sounds good.>> I suspect since you are working on this and I seem to be stuck>> in molasses at the moment it makes sense to figure out what it>> will take to handle the uid namespace before pushing these>> patches again.>> I had ported your patches to 2.6.25, but Benjamin in the meantime ported> them to 2.6.25-mm1. Since that's closer to the -net tree it's a more> useful port, so I'll let him post his patchset. Then I'll send the> userns patch on top of that. While I'm not actually able to send> network traffic over a veth dev (I probably am still not setting it up> right), I am able to pass veth devices into network namespaces, and the> user namespaces are properly handled.>> I believe Benjamin did notice a problem with some symlinks not existing,> and I think we want one more patch on top of yours removing the> hold_net() from sysfs_mount, which I don't think was what you really> wanted to do. By simply removing that, if all tasks in a netns go away,> the netns actually goes away and a lookup under a bind-mounted copy of> its /sys/class/net is empty.I will have to look, I need to refresh myself on where all of this code is.I think hold_net was what I wanted. A record that there is a userbut not something that will keep the network namespace from going away.Essentially hold_net should be a debugging check rather then areal limitation.> Anyway the patches should be hitting the list next week.Cool. We can figure out what we need to do to merge them fromthere.>> Taking a quick look and having a clue what we will need to>> do for a theoretical device namespace is also a possibility.>> I'm not sure I'm familiar enough with the kobject/class/sysfs/device> relationships yet to comment on that. It doesn't look like it should> really be a problem, though simply adding tags to every directory> under /sys/class (/sys/class/tty, /sys/class/usb_device, etc) doesn't> seem like necessarily the nicest way to go...True. And the goal is something maintainable. There are still a lotof implications of a device namespace left unexamined so we shall see.Eric | http://lkml.org/lkml/2008/4/25/393 | CC-MAIN-2016-50 | refinedweb | 430 | 70.33 |
ORM stands for object/relational mapping. ORM is the automated persistence of objects in a Java application to the tables in a relational database.
2.What does ORM consists of ?
An ORM solution consists of the followig four pieces:
AP for performing basic !R"# operations
AP to e$press %ueries refering to classes
&acilities to specif' metadata
Optimi(ation facilities : dirt' chec)ing*la(' associations fetching
3.What are the ORM levels ?
+he ORM levels are:
Pure relational ,stored procedure.-
.ight objects mapping ,J#/!-
Medium object mapping
&ull object Mapping ,composition*inheritance* pol'morphism* persistence b' reachabilit'-
4.What is Hibernate?
0ibernate is a pure Java object1relational mapping ,ORM- and persistence framewor) that allows 'ou to map plain old Java objects to relational database
tables using ,2M.- configuration files.ts purpose is to relieve the developer from a significant amount of relational data persistence1related programming
tas)s.
5.Why do you need ORM tools like hibernate?
+he main advantage of ORM li)e hibernate is that it shields developers from mess' 34.. Apart from this* ORM provides following benefits:
!"roved "roductivity
o 0igh1level object1oriented AP
o .ess Java code to write
o 5o 34. to write
!"roved "erfor!ance
o 3ophisticated caching
o .a(' loading
o 6ager loading
!"roved !aintainability
o A lot less code to write
!"roved "ortability
o ORM framewor) generates database1specific 34. for 'ou
#.What $oes Hibernate %i!"lify?
0ibernate simplifies:
3aving and retrieving 'our domain objects
Ma)ing database column and table name changes
!entrali(ing pre save and post retrieve logic
!omple$ joins for retrieving related items
3chema creation from object model
&.What is the need for Hibernate '!l !a""in( file?
0ibernate mapping file tells 0ibernate which tables and columns to use to load and store objects. +'pical mapping file loo) as follows:
).What are the !ost co!!on !ethods of Hibernate confi(uration?
+he most common methods of 0ibernate configuration are:
Programmatic configuration
2M. configuration ,hibernate.cfg.xml-
*.What are the i!"ortant ta(s of hibernate.cf(.'!l?
&ollowing are the important tags of hibernate.cfg.$ml:
1+.What are the ,ore interfaces are of Hibernate fra!e-ork?
+he five core interfaces are used in just about ever' 0ibernate application. "sing these interfaces* 'ou can store and retrieve persistent objects and
control transactions.
3ession interface
3ession&actor' interface
!onfiguration interface
+ransaction interface
4uer' and !riteria interfaces
11.What role does the %ession interface "lay in Hibernate?
+he 3ession interface is the primar' interface used b' 0ibernate applications. t is a single1threaded* short1lived object representing a conversation
between the application and the persistent store. t allows 'ou to create %uer' objects to retrieve persistent objects.
Session session = sessionFactory.openSession();
%ession interface role:
7raps a J#/! connection
&actor' for +ransaction
0olds a mandator' ,first1level- cache of persistent objects* used when navigating the object graph or loo)ing up objects b' identifier
12.What role does the %ession.actory interface "lay in Hibernate?
+he application obtains 3ession instances from a 3ession&actor'. +here is t'picall' a single 3ession&actor' for the whole application89:reated during
application initiali(ation. +he 3ession&actor' caches generate 34. statements and other mapping metadata that 0ibernate uses at runtime. t also holds
cached data that has been read in one unit of wor) and ma' be reused in a future unit of wor)
SessionFactory sessionFactory = configuration.buildSessionFactory();
13.What is the (eneral flo- of Hibernate co!!unication -ith R$/M%?
+he general flow of 0ibernate communication with R#/M3 is :
.oad the 0ibernate configuration file and create configuration object. t will automaticall' load all hbm mapping files
!reate session factor' from configuration object
;et one session from this session factor'
!reate 04. 4uer'
6$ecute %uer' to get list containing Java objects
14.What is Hibernate 0uery 1an(ua(e 2H013?
0ibernate offers a %uer' language that embodies a ver' powerful and fle$ible mechanism to %uer'* store* update* and retrieve objects from a database.
+his language* the 0ibernate %uer' .anguage ,04.-* is an object1oriented e$tension to 34..
15.Ho- do you !a" 4ava Ob5ects -ith $atabase tables?
&irst we need to write Java domain objects ,beans with setter and getter-.
7rite hbm.$ml* where we map java class to table and database columns to Java class variables.
6'a!"le :
<hibernate-mapping>
<class name=com.test.!ser table=user>
<property column=!S"#$%&'" length=())
name=user%ame not-null=true type=*a+a.lang.String,>
<property column=!S"#$-&SS./#0 length=())
name=user-ass1ord not-null=true type=*a+a.lang.String,>
<,class>
<,hibernate-mapping>
1#.What7s the difference bet-een load23 and (et23?
load,- vs. get,- :1
load23 (et23
Onl' use the load() method if 'ou are sure that the object e$ists. f 'ou are not sure that the object e$ists* then use one of the get() methods.
load() method will throw an e$ception if the uni%ue id is not found in the database. get() method will return null if the uni%ue id is not found in the database.
load() just returns a pro$' b' default and database won<t be hit until the pro$' is first invo)ed. get() will hit the database immediatel'.
1&.What is the difference bet-een and !er(e and u"date ?
"se update() if 'ou are sure that the session does not contain an alread' persistent instance with the same identifier* and merge() if 'ou want to merge
'our modifications at an' time without consideration of the state of the session.
1).Ho- do you define se8uence (enerated "ri!ary key in hibernate?
"sing =generator> tag.
6'a!"le:1
<id column=!S"#$20 name=id type=*a+a.lang.3ong>
<generator class=se4uence>
<param name=table>S"5!"%6"$%&'"<,param>
<generator>
<,id>
1*.
2+.What do you !ean by :a!ed ; %01 8uery?
5amed 34. %ueries are defined in the mapping $ml document and called wherever re%uired.
6'a!"le<
<s4l-4uery name = empdetails>
<return alias=emp class=com.test."mployee,>
S"3"67 emp."'-$20 &S 8emp.empid9:
emp."'-$&00#"SS &S 8emp.address9:
emp."'-$%&'" &S 8emp.name9
F#/' "mployee "'- .;"#" emp.%&'" 32<" =name
<,s4l-4uery>
nvo)e 5amed 4uer' :
3ist people = session.get%amed5uery(empdetails)
.setString(7om>rady: name)
.set'ax#esults()?)
.list();
21.Ho- do you invoke %tored =rocedures?
<s4l-4uery name=select&ll"mployees$S- callable=true>
<return alias=emp class=employee>
<return-property name=empid column="'-$20,>
<return-property name=name column="'-$%&'",>
<return-property name=address column="'-$&00#"SS,>
8 @ = call select&ll"mployees() 9
<,return>
<,s4l-4uery>
22.6'"lain ,riteria >=
!riteria is a simplified AP for retrieving entities b' composing !riterion objects. +his is a ver' convenient approach for functionalit' li)e @search@ screens
where there is a variable number of conditions to be placed upon the result set.
6'a!"le :
3ist employees = session.create6riteria("mployee.class)
.add(#estrictions.liAe(name: aB) )
.add(#estrictions.liAe(address: >oston))
.add/rder(/rder.asc(name) )
.list();
23.$efine Hibernate?e!"late?
org.springframe1orA.orm.hibernate.;ibernate7emplate is a helper class which provides different methods for %uer'ing/retrieving data from the
database. t also converts chec)ed 0ibernate6$ceptions into unchec)ed #ataAccess6$ceptions.
24.What are the benefits does Hibernate?e!"late "rovide?
+he benefits of 0ibernate+emplate are :
;ibernate7emplate* a 3pring +emplate class simplifies interactions with 0ibernate 3ession.
!ommon functions are simplified to single method calls.
3essions are automaticall' closed.
6$ceptions are automaticall' caught and converted to runtime e$ceptions.
25.Ho- do you s-itch bet-een relational databases -ithout code chan(es?
"sing 0ibernate 34. #ialects * we can switch databases. 0ibernate will generate appropriate h%l %ueries based on the dialect defined.
2#.f you -ant to see the Hibernate (enerated %01 state!ents on console@ -hat should -e do?
n 0ibernate configuration file set as follows:
<property name=sho1$s4l>true<,property>
2&.What are derived "ro"erties?
+he properties that are not mapped to a column* but calculated at runtime b' evaluation of an e$pression are called derived properties. +he e$pression
can be defined using the formula attribute of the element.
2).What is co!"onent !a""in( in Hibernate?
A component is an object saved as a value* not as a reference
A component can be saved directl' without needing to declare interfaces or identifier properties
Re%uired to define an empt' constructor
3hared references not supported
6'a!"le:
2*.What is the difference bet-een sorted and ordered collection in hibernate?
sorted collection vs. order collection :1
31.Wh at is
the
advanta(e of Hibernate over 5dbc?
0ibernate Cs. J#/! :1
sorted collection order collection
A sorted collection is sorting a collection b' utili(ing the sorting features provided
b' the Java collections framewor). +he sorting occurs in the memor' of JCM
which running 0ibernate* after the data being read from database using java
comparator.
Order collection is sorting a collection b' specif'ing the order1b' clause for
sorting this collection when retrieval.
f 'our collection is not large* it will be more efficient wa' to sort it. f 'our collection is ver' large* it will be more efficient wa' to sort it .
JDBC Hibernate
7ith J#/!* developer has to write code to map
an object modelDs data representation to a
relational data model and its corresponding
database schema.
0ibernate is fle$ible and powerful ORM solution to
map Java classes to database tables. 0ibernate
itself ta)es care of this mapping using 2M. files so
developer does not need to write code for this.
7ith J#/!* the automatic mapping of Java
objects with database tables and vice versa
conversion is to be ta)en care of b' the
developer manuall' with lines of code.
0ibernate provides transparent persistence and
developer does not need to write code e$plicitl' to
map database tables tuples to application objects
during interaction with R#/M3.
J#/! supports onl' native 3tructured 4uer'
.anguage ,34.-. #eveloper has to find out the
efficient wa' to access database* i.e. to select
effective %uer' from a number of %ueries to
perform same tas).
0ibernate provides a powerful %uer' language
0ibernate 4uer' .anguage ,independent from t'pe
of database- that is e$pressed in a familiar 34. li)e
s'nta$ and includes full support for pol'morphic
%ueries. 0ibernate also supports native 34.
statements. t also selects an effective wa' to
perform a database manipulation tas) for an
application.
Application using J#/! to handle persistent
data ,database tables- having database specific
code in large amount. +he code written to map
table data to application objects and vice
versa is actuall' to map table fields to object
properties. As table changed or database
changed then it<s essential to change object
structure as well as to change code written to
map table1to1object/object1to1table.
0ibernate provides this mapping itself. +he actual
mapping between tables and application objects is
done in 2M. files. f there is change in #atabase or
in an' table then the onl' need to change 2M. file
properties.
7ith J#/!* it is developer<s responsibilit' to
handle J#/! result set and convert it to Java
objects through code to use this persistent
data in application. 3o with J#/!* mapping
between Java objects and database tables is
done manuall'.
0ibernate reduces lines of code b' maintaining
object1table mapping itself and returns result to
application in form of Java objects. t relieves
programmer from manual handling of persistent
data* hence reducing the development time and
maintenance cost.
7ith J#/!* caching is maintained b' hand1
coding.
0ibernate* with +ransparent Persistence* cache is
set to application wor) space. Relational tuples are
moved to this cache as a result of %uer'. t
improves performance if client application reads
same data man' times for same write. Automatic
+ransparent Persistence allows the developer to
concentrate more on business logic rather than this
application code.
n J#/! there is no chec) that alwa's ever'
0ibernate enables developer to define version t'pe
field to application* due to this defined field
0ibernate updates version field of database table
ever' time relational tuple is updated in form of
Java class object to that table. 3o if two users
32.What are the ,ollection ty"es in Hibernate ?
/ag
3et
.ist
Arra'
Map
33.What are the -ays to e'"ress 5oins in H01?
04. provides four wa's of e$pressing ,inner and outer- joins:1
An implicit association join
An ordinar' join in the &ROM clause
A fetch join in the &ROM clause.
A theta-style join in the 706R6 clause.
34.
35.What is Hibernate "ro'y?
+he proxy attribute enables la(' initiali(ation of persistent instances of the class. 0ibernate will initiall' return !;./ pro$ies which implement the
named interface. +he actual persistent object will be loaded when a method of the pro$' is invo)ed.
3#.Ho- can Hibernate be confi(ured to access an instance variable directly and not throu(h a
setter !ethod ?
/' mapping the propert' with access?@field@ in 0ibernate metadata. +his forces hibernate to b'pass the setter method and access the instance variable
directl' while initiali(ing a newl' loaded object.
3&.Ho- can a -hole class be !a""ed as i!!utable?
Mar) the class as mutable?@false@ ,#efault is true-*. +his specifies that instances of the class are ,not- mutable. mmutable classes* ma' not be updated or
deleted b' the application.
3).What is the use of dyna!ic9insert and dyna!ic9u"date attributes in a class !a""in(?
!riteria is a simplified AP for retrieving entities b' composing !riterion objects. +his is a ver' convenient approach for functionalit' li)e @search@ screens
where there is a variable number of conditions to be placed upon the result set.
dynamic-update ,defaults to false-: 3pecifies that !-0&7" 34. should be generated at runtime and contain onl' those columns whose values
have changed
dynamic-insert ,defaults to false-: 3pecifies that 2%S"#7 34. should be generated at runtime and contain onl' the columns whose values are
not null.
3*.What do you !ean by fetchin( strate(y ?
A fetching strategy is the strateg' 0ibernate will use for retrieving associated objects if the application needs to navigate the association. &etch
strategies ma' be declared in the O/R mapping metadata
a* or over1ridden b' a particular 04. or6riteria %uer'.
4+.What is auto!atic dirty checkin(?
Automatic dirt' chec)ing is a feature that saves us the effort of e$plicitl' as)ing 0ibernate to update the database when we modif' the state of an object
inside a transaction.
41.What is transactional -rite9behind?
0ibernate uses a sophisticated algorithm to determine an efficient ordering that avoids database foreign )e' constraint violations but is still sufficientl'
predictable to the user. +his feature is called transactional write1behind.
42.What are ,allback interfaces?
!allbac) interfaces allow the application to receive a notification when something interesting happens to an objectEfor e$ample* when an object is
loaded* saved* or deleted. 0ibernate applications donDt need to implement these callbac)s* but the'Dre useful for implementing certain )inds of generic
functionalit'.
43.What are the ty"es of Hibernate instance states ?
+hree t'pes of instance states:
+ransient 1+he instance is not associated with an' persistence conte$t
Persistent 1+he instance is associated with a persistence conte$t
#etached 1+he instance was associated with a persistence conte$t which has been closed F currentl' not associated
44.What are the differences bet-een 64/ 3.+ A Hibernate
0ibernate Cs 6J/ G.H :1
Hibernate EJB 3.0
%essionF!ache or collection of loaded objects relating to a single unit of wor)
=ersistence ,onte't13et of entities that can be managed b' a given 6ntit'Manager is defined b' a
persistence unit
B$oclet >nnotations used to support Attribute Oriented Programming 4ava 5.+ >nnotations used to support Attribute Oriented Programming
$efines H01 for e$pressing %ueries to the database $efines 64/ 01 for e$pressing %ueries
%u""orts 6ntity Relationshi"s through mapping files and annotations in Java#oc %u""ort 6ntity Relationshi"s through Java I.H annotations
=rovides a =ersistence Mana(er >= e$posed via the 3ession* 4uer'* !riteria* and +ransaction AP =rovides and 6ntity Mana(er nterface for managing !R"# operations for an 6ntit'
=rovides callback su""ort through lifec'cle* interceptor* and validatable interfaces =rovides callback su""ort through 6ntit' .istener and !allbac) methods
6ntity Relationshi"s are unidirectional. /idirectional relationships are implemented b' two unidirectional
relationships
6ntity Relationshi"s are bidirectional or unidirectional
45.What are the ty"es of inheritance !odels in Hibernate?
+here are three t'pes of inheritance models in 0ibernate:
+able per class hierarch'
+able per subclass
+able per concrete class
Q. serice.
! hibernate.cfg.xml (alternatiely can use hibernate.properties)" These two files are used to configure the hibernate seice (connection drier
class, connection #$%, connection username, connection password, dialect etc). &f both files are present in the classpath then hibernate.cfg.xml
file oerrides the settings found in the hibernate.properties file.
! 'apping files (*.hbm.xml)" These files are used to map persistent ob(ects to a relational database. &t is the best practice to store each ob(ect in
an indiidual mapping file (i.e mapping file per class) because storing large number of persistent classes into one mapping file can be difficult to
manage and maintain. The naming conention is to use the same name as the persistent ()*+*) class name. For example ,ccount.class will
hae a mapping file named ,ccount.hbm.xml. ,lternatiely hibernate annotations can be used as part of your persistent class code instead of the
*.hbm.xml files.
Q. What is a SessionFactory? Is it a thread-safe object?
Answer:
SessionFactory is -ibernates concept of a single datastore and is threadsafe so that many threads can access it concurrently and re.uest for
sessions and immutable cache of compiled mappings for a single database. , SessionFactory is usually only built once at startup. SessionFactory
should be wrapped in some /ind of singleton so that it can be easily accessed in an application code.
SessionFactory sessionFactory 0 new Configuration().configure().buildSessionfactory()1
Q. What is a Session? an you share a session object between different theads?
Answer:
Session is a light weight and a non2threadsafe ob(ect (3o, you cannot share it between threads) that represents a single unit2of2wor/ with the
database. Sessions are opened by a SessionFactory and then are closed when all wor/ is complete. Session is the primary interface for the
persistence serice. , session obtains a database connection la4ily (i.e. only when re.uired). To aoid creating too many sessions Thread%ocal
class can be used as shown below to get the current session no matter how many times you ma/e call to the currentSession() method.
5
public class -ibernate#til 6
5
public static final Thread%ocal local 0 new Thread%ocal()1
public static Session currentSession() throws -ibernate7xception 6
Session session 0 (Session) local.get()1
88open a new session if this thread has no session
if(session 00 null) 6
session 0 sessionFactory.openSession()1
local.set(session)1
9
return session1
9
9
&t is also ital that you close your session after your unit of wor/ completes. 3ote" :eep your -ibernate Session ,)& handy.
Q. What are the benefits of detached objects?
Answer:
;etached ob(ects can be passed across layers all the way up to the presentation layer without haing to use any ;T*s (;ata Transfer *b(ects).
<ou can later on re2attach the detached ob(ects to another session.
Q. What are the !ros and cons of detached objects?
Answer:
"ros:
! =hen long transactions are re.uired due to user thin/2time, it is the best practice to brea/ the long transaction up into two or more
transactions. <ou can use detached ob(ects from the first transaction to carry data all the way up to the presentation layer. These detached
ob(ects get modified outside a transaction and later on re2attached to a new transaction ia another session.
ons
! &n general, wor/ing with detached ob(ects is .uite cumbersome, and better to not clutter up the session with them if possible. &t is better to
discard them and re2fetch them on subse.uent re.uests. This approach is not only more portable but also more efficient because 2 the ob(ects
hang around in -ibernate>s cache anyway.
! ,lso from pure rich domain drien design perspectie it is recommended to use ;T*s (;ataTransfer*b(ects) and ;*s (;omain*b(ects) to
maintain the separation between Serice and #& tiers.
Q. How does Hibernate distinguish between transient #i.e. newly instantiated$ and detached objects?
Answer
! -ibernate uses the ersion property, if there is one.
! &f not uses the identifier alue. 3o identifier alue means a new ob(ect. This does wor/ only for -ibernate managed surrogate /eys. ;oes not
wor/ for natural /eys and assigned (i.e. not managed by -ibernate) surrogate /eys.
! =rite your own strategy with &nterceptor.is#nsaed().
Q. What is the difference between the session.get#$ %ethod and the session.load#$ %ethod?
?oth the session.get(..) and session.load() methods create a persistent ob(ect by loading the re.uired ob(ect from the database. ?ut if there was not such
ob(ect in the database then the method session.load(..) throws an exception whereas session.get(5) returns null.
Q. What is the difference between the session.u!date#$ %ethod and the session.loc&#$ %ethod?
?oth of these methods and sae*r#pdate() method are intended for reattaching a detached ob(ect. The session.loc/() method simply reattaches the
ob(ect to the session without chec/ing or updating the database on the assumption that the database in sync with the detached ob(ect. &t is the best
practice to use either session.update(..) or session.sae*r#pdate(). #se session.loc/() only if you are absolutely sure that the detached ob(ect is in sync
with your detached ob(ect or if it does not matter because you will be oerwriting all the columns that would hae changed later on within the same
transaction.
'ote: =hen you reattach detached ob(ects you need to ma/e sure that the dependent ob(ects are reatched as well.
Q. How would you reatach detached objects to a session when the sa%e object has already been loaded into the session?
<ou can use the session.merge() method call.
Q. What are the general considerations or best !ractices for defining your Hibernate !ersistent classes?
@.<ou must hae a default no2argument constructor for your persistent classes and there should be getAAA() (i.e accessor8getter) and setAAA( i.e.
mutator8setter) methods for all your persistable instance ariables.
B.<ou should implement the e.uals() and hashCode() methods based on your business /ey and it is important not to use the id field in your e.uals() and
hashCode() definition if the id field is a surrogate /ey (i.e. -ibernate managed identifier). This is because the -ibernate only generates and sets the field
when saing the ob(ect.
C. &t is recommended to implement the Seriali4able interface. This is potentially useful if you want to migrate around a multi2processor cluster.
D.The persistent class should not be final because if it is final then la4y loading cannot be used by creating proxy ob(ects.
E.#se A;oclet tags for generating your *.hbm.xml files or ,nnotations (+;: @.E onwards), which are less erbose than *.hbm.xml files.
=ould li/e to hear more .uestions and answers from the readers.....
1. Whats Hibernate?
Hibernate is a popular framework of Java which allows an efficient Object Relational mapping using configuration
files in XML format. fter java objects mapping to !atabase tables" !atabase is use! an! han!le! using Java objects
without writing comple# !atabase $ueries.
%. What is ORM?
ORM &Object Relational Mapping' is the fun!amental concept of Hibernate framework which maps !atabase tables
with Java Objects an! then provi!es various ()*s to perform !ifferent t+pes of operations on the !ata tables.
,. How properties of a class are appe! to the col"ns of a !atabase table in Hibernate?
Mappings between class properties an! table columns are specifie! in XML file as in the below e#ample-
.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
<@xml +ersion=C.?@> <D0/67E-" hibernate-mapping -!>326
http=,,hibernate.sourceforge.net,hibernate-mapping-(.?.dtd>
<hibernate-mapping>
<class name=department6lass table=7>3$0"-7>
<property name=dept column=department$name>
<property name=0escription column=department$details,>
<many-to-one name=nxt-er cascade=all column=%xt-er2d,>
<,class>
<,hibernate-mapping>
/. Whats the "sa#e of Confi#"ration $nterface in hibernate?
0onfiguration interface of hibernate framework is use! to configure hibernate. )t*s also use! to bootstrap hibernate.
Mapping !ocuments of hibernate are locate! using this interface.
1. How can we "se new c"sto interfaces to enhance f"nctionalit% of b"ilt&in interfaces of hibernate?
2e can use e#tension interfaces in or!er to a!! an+ re$uire! functionalit+ which isn*t supporte! b+ built3in interfaces.
4. 'ho"l! all the appin# files of hibernate ha(e .hb.)l e)tension to wor* properl%?
5o" having .hbm.#ml e#tension is a convention an! not a re$uirement for hibernate mapping file names. 2e can
have an+ e#tension for these mapping files.
6. How !o we create session factor% in hibernate?
7o create a session factor+ in hibernate" an object of configuration is create! first which refers to the path of
configuration file an! then for that configuration" session factor+ is create! as given in the e#ample below-
.
1
6onfiguration config = ne1 6onfiguration();
2
3
4
config.add#esource(Famp;4uot;myinstance,configuration.hbm.xmlFamp;4uot;);
config.set-roperties( System.get-roperties() );
SessionFactory sessions = config.buildSessionFactory();
8. What are +OJOs an! whats their si#nificance?
(OJOs& (lain Ol! Java Objects' are java beans with proper getter an! setter metho!s for each an! ever+ properties.
9se of (OJOs instea! of simple java classes results in an efficient an! well constructe! co!e.
:. Whats H,-?
H;L is the $uer+ language use! in Hibernate which is an e#tension of <;L. H;L is ver+ efficient" simple an! fle#ible
$uer+ language to !o various t+pe of operations on relational !atabase without writing comple# !atabase $ueries.
1=. How can we in(o*e store! proce!"res in hibernate?
)n hibernate we can e#ecute store! proce!ures using co!e as below-
.
1
2
3
4
5
6
7
8
<s4l-4uery name=getStudents callable=true>
<return alias=st class=Student>
<return-property name=std$id column=S70$20,>
<return-property name=s$name column=S70$%&'",>
<return-property name=s$dept column=S70$0"-'"%7,>
8 @ = call selectStudents() 9
<,return>
<,s4l-4uery>
11. What is criteria .+$?
0riteria is a simple +et powerful () of hibernate which is use! to retrieve entities through criteria object composition.
1%. What are the benefits of "sin# Hibernate teplate?
>ollowing are some ke+ benefits of using Hibernate template-
a. <ession closing is automate!.
b. )nteraction with hibernate session is simplifie!.
c. ?#ception han!ling is automate!.
1,. How can we see hibernate #enerate! ',- on console?
2e nee! to a!! following in hibernate configuration file to enable viewing <;L on the console for !ebugging
purposes-
.
1
<property name=sho1$s4l>true<,property>
1/. What are the two t%pes of collections in hibernate?
>ollowing are the two t+pes of collections in hibernate-
a. <orte! 0ollection
b. Or!er 0ollection
11. Whats the !ifference between session.sa(e/0 an! session.sa(eOr1p!ate/0 etho!s in hibernate?
<essionsave&' metho! saves a recor! onl+ if it*s uni$ue with respect to its primar+ ke+ an! will fail to insert if primar+
ke+ alrea!+ e#ists in the table.
saveOr9p!ate&' metho! inserts a new recor! if primar+ ke+ is uni$ue an! will up!ate an e#isting recor! if primar+
ke+ e#ists in the table alrea!+.
14. What the benefits are of hibernate o(er JDBC?
a. Hibernate can be use! seamlessl+ with an+ t+pe of !atabase as its !atabase in!epen!ent while in case of J@A0"
!eveloper has to write !atabase specific $ueries.
b. 9sing hibernate" !eveloper !oesn*t nee! to be an e#pert of writing comple# $ueries as H;L simplifies $uer+
writing process while in case of J@A0" its job of !eveloper to write an! tune $ueries.
c. )n case of hibernate" there is no nee! to create connection pools as hibernate !oes all connection han!ling
automaticall+ while in case of J@A0" connection pools nee! to be create!.
16. How can we #et hibernate statistics?
2e can get hibernate statistics using get<tatistics&' metho! of <ession>actor+ class as shown below-
<ession>actor+.get<tatistics&'
18. What is transient instance state in Hibernate?
)f an instance is not associate! with an+ persistent conte#t an! also" it has never been associate! with an+ persistent
conte#t" then it*s sai! to be in transient state.
1:. How can we re!"ce !atabase write action ties in Hibernate?
Hibernate provi!es !irt+ checking feature which can be use! to re!uce !atabase write times. @irt+ checking feature
of hibernate up!ates onl+ those fiel!s which re$uire a change while keeps others unchange!.
%=. Whats the "sa#e of callbac* interfaces in hibernate?
0allback interfaces of hibernate are useful in receiving event notifications from objects. >or e#ample" when an object
is loa!e! or !elete!" an event is generate! an! notification is sent using callback interfaces.
%1. When an instance #oes in !etache! state in hibernate?
2hen an instance was earlier associate! with some persistent conte#t &e.g. a table' an! is no longer associate!" it*s
calle! to be in !etache! state.
%%. What the fo"r ORM le(els are in hibernate?
>ollowing are the four ORM levels in hibernate-
a. (ure Relational
b. Light Object Mapping
c. Me!ium Object Mapping
!. >ull Object Mapping
%,. Whats transaction ana#eent in hibernate? How it wor*s?
7ransaction management is the process of managing a set of statements or comman!s. )n hibernateB transaction
management is !one b+ transaction interface as shown in below co!e-
.
1
2
3
4
5
6
7
8
9
10
11
12
Session s = null;
7ransaction tr = null;
try 8
s = sessionFactory.openSession();
tr = s.begin7ransaction();
do7he&ction(s);
tr.commit();
9 catch (#untime"xception exc) 8
tr.rollbacA();
9 finally 8
s.close();
9
%/. What the two etho!s are of hibernate confi#"ration?
2e can use an+ of the following two metho!s of hibernate configuration-
a. XML base! configuration & using hibernate.cfg.#ml file'
b. (rogrammatic configuration & 9sing co!e logic'
%1. What is the !efa"lt cache ser(ice of hibernate?
Hibernate supports multiple cache services like ?H0ache" O<0ache" <2RM0ache an! 7ree0ache an! !efault
cache service of hibernate is ?H0ache.
%4. What are the two appin# associations "se! in hibernate?
)n hibernateB we have following two t+pes of mapping associations between entities-
a. One3to3One ssociation
b. Man+3to3Man+ ssociation
%6. Whats the "sa#e of Hibernate ,BC .+$?
Hibernate ;uer+ A+ 0riteria &;A0' () is use! to create $ueries b+ manipulation of criteria objects at runtime.
%8. $n how an% wa%s2 ob3ects can be fetche! fro !atabase in hibernate?
Hibernate provi!es following four wa+s to fetch objects from !atabase-
a. 9sing H;L
b. 9sing i!entifier
c. 9sing 0riteria ()
!. 9sing <tan!ar! <;L
%:. How priar% *e% is create! b% "sin# hibernate?
@atabase primar+ ke+ is specifie! in the configuration file hbm.#ml. Cenerator can also be use! to specif+ how
primar+ ke+ is being create! in the !atabase.
)n the below e#ample" !ept)! acts as primar+ ke+-
.
1
2
3
4
<id name=dept2d type=string >
<column name=column2d length=G?,>
<generator,>
<,id>
,=. How can we reattach an% !etache! ob3ects in Hibernate?
Objects which have been !etache! an! are no longer associate! with an+ persistent entities can be reattache! b+
calling session.merge&' metho! of session class.
,1. What are !ifferent wa%s to !isable hibernate secon! le(el cache?
Hibernate secon! level cache can be !isable! using an+ of the following wa+s-
a. A+ setting useDsecon!DlevelDcache as false.
b. A+ using 00H?MO@?.)C5OR?
c. 9sing cache provi!er as org.hibernate.cache.5o0ache(rovi!er
,%. What is ORM eta!ata?
ll the mapping between classes an! tables" properties an! columns" Java t+pes an! <;L t+pes etc is !efine! in
ORM meta!ata.
,,. Which one is the !efa"lt transaction factor% in hibernate?
2ith hibernate ,.%" !efault transaction factor+ is J@A07ransaction>actor+.
,/. Whats the role of JM4 in hibernate?
Java pplications an! components are manage! in hibernate b+ a stan!ar! () calle! JMX (). JMX provi!es tools
for !evelopment of efficient an! robust !istribute!" web base! solutions.
,1. How can we bin! hibernate session factor% to J5D$ ?
Hibernate session factor+ can be boun! to J5@) b+ making configuration changes in hibernate.cfg file.
,4. $n how an% wa%s ob3ects can be i!entifie! in Hibernate?
Object i!entification can be !one in hibernate in following three wa+s-
a. 9sing Object )!entit+- 9sing EE operator.
b. 9sing Object ?$ualit+- 9sing e$uals&' metho!.
c. 9sing !atabase i!entit+- Relational !atabase objects can be i!entifie! if the+ represent same row.
,6. What !ifferent fetchin# strate#ies are of hibernate?
>ollowing fetching strategies are available in hibernate-
1. Join >etching
%. Aatch >etching
,. <elect >etching
/. <ub3select >etching
,8. How appin# of 3a(a ob3ects is !one with !atabase tables?
7o map java objects with !atabase tables" we nee! to have Java beans properties names same as column names of
a !atabase table. 7hen mapping is provi!e! in hbm.#ml file as given below-
.
1
2
3
4
5
6
7
8
<hibernate-mapping>
<class name=Student table=tbl$student>
<property column=studentname length=())
name=student%ame not-null=true type=*a+a.lang.String,>
<property column=student0isciplne length=())
name=student0iscipline not-null=true type=*a+a.lang.String,>
<,class>
<,hibernate-mapping>
,:. What are !eri(e! properties in hibernate?
@erive! properties are those properties which are not mappe! to an+ columns of a !atabase table. <uch properties
are calculate! at runtime b+ evaluation of an+ e#pressions.
/=. What is eant b% a 5ae! ',- ,"er% in hibernate an! how its "se!?
5ame! <;L $ueries are those $ueries which are !efine! in mapping file an! are calle! as re$uire! an+where.
>or e#ample" we can write a <;L $uer+ in our XML mapping file as follows-
.
1
2
3
4
5
6
7
<s4l-4uery name = studentdetails>
<return alias=std,>
S"3"67 std.S7!0"%7$20 &S 8std.S7!0"%7$209:
std.S7!0"%7$02S62-32%" &S 8std.discipline9:
F#/' Student std .;"#" std.%&'" 32<" =name
<,s4l-4uery>
7hen this $uer+ can be calle! as follows-
.
1
2
3
4
3ist students = session.get%amed5uery(Famp;4uot;studentdetailsFamp;4uot;)
.setString(Famp;4uot;7om>radyFamp;4uot;: name)
.set'ax#esults()?)
.list();
/1. Whats the !ifference between loa!/0 an! #et/0 etho! in hibernate?
Loa!&' metho!s results in an e#ception if the re$uire! recor!s isn*t foun! in the !atabase while get&' metho! returns
null when recor!s against the i! isn*t foun! in the !atabase.
<o" i!eall+ we shoul! use Loa!&' metho! onl+ when we are sure about e#istence of recor!s against an i!.
/%. Whats the "se of (ersion propert% in hibernate?
Fersion propert+ is use! in hibernate to know whether an object is in transient state or in !etache! state.
/,. What is attrib"te oriente! pro#rain#?
)n ttribute oriente! programming" a !eveloper can a!! Meta !ata &attributes' in the java source co!e to a!! more
significance in the co!e. >or Java &hibernate'" attribute oriente! programming is enable! b+ an engine calle!
X@oclet.
//. Whats the "se of session.loc*/0 in hibernate?
session.lock&' metho! of session class is use! to reattach an object which has been !etache! earlier. 7his metho! of
reattaching !oesn*t check for an+ !ata s+nchroniGation in !atabase while reattaching the object an! hence ma+ lea!
to lack of s+nchroniGation in !ata.
/1. Does hibernate s"pport pol%orphis?
Hes" hibernate full+ supports pol+morphism. (ol+morphism $ueries an! pol+morphism associations are supporte! in
all mapping strategies of hibernate.
/4. What the three inheritance o!els are of hibernate?
Hibernate has following three inheritance mo!els-
a. 7ables (er 0oncrete 0lass
b. 7able per class hierarch+
c. 7able per sub3class
/6. How can we ap the classes as i"table?
)f we !on*t want an application to up!ate or !elete objects of a class in hibernate" we can make the class as
immutable b+ setting mutableEfalse
/8. Whats #eneral hibernate flow "sin# RDBM'?
Ceneral hibernate flow involving R@AM< is as follows-
a. Loa! configuration file an! create object of configuration class.
b. 9sing configuration object" create session>actor+ object.
c. >rom session>actor+" get one session.
!. 0reate H;L $uer+.
e. ?#ecute H;L $uer+ an! get the results. Results will be in the form of a list.
/:. What is -i#ht Ob3ect Mappin#?
Light Object Mapping is one of the levels of ORM $ualit+ in which all entities are represente! as classes an! the+ are
mappe! manuall+.
1=. Whats !ifference between ana#e! associations an! hibernate associations?
Manage! associations relate to container management persistence an! are bi3!irectional while hibernate
associations are uni!irectional. | https://ru.scribd.com/document/240953269/Hibernate-Questions | CC-MAIN-2019-30 | refinedweb | 6,290 | 62.44 |
I think it is useful to summarize a bit the discussion at this point.
We have identified two approaches to address the SOAP infoset vs. JSON question:
1. As Amila suggests in his last post, put a dummy SOAP envelope in
the message context and store the JSON message (actually a stream
representing that message) in a message context property. This is
technically simple, but it is not an innocent choice because it
deviates from what we do elsewhere in Axis2. Therefore this requires
some careful thinking about the implications of such a choice.
2. Preserve the requirement that a message must have a well defined
SOAP infoset and use a trivial representation similar (or identical)
to what we use for plain text. This has the advantage that it is more
in line with the rest of the Axis2 architecture, but requires a
careful analysis (and potentially some enhancements in Axiom or Axis2)
to make sure that we can implement streaming. It should be noted that
this has applications that are important in a context much broader
than JSON. In particular, the ability to stream character data
efficiently is important for Synapse as well.
So, one of the tasks of the proposed GSOC project would be to analyze
and evaluate both approaches, so that we can make an educated choice.
I think that it would also be interesting to add another task in the
scope of that GSOC project, namely to analyze why Axis2 doesn't have a
good support for mapped JSON. In fact, if you look at Shameera's
initial post, he (she?) takes the fact that "Mapped formatted JSON
with namespaces are not supported in Axis2" as a basic assumption. The
interesting question is actually why this is so. I was thinking about
this a couple of months ago, and I believe that this is actually due
to a too restrictive assumption that is made in the Axis2 architecture
(which is that it is possible to construct a SOAP infoset solely based
on the properties of the incoming message, i.e. the content of the
message and its content type), and that this is connected to some
other problems as well as the presence of code in Axis2 that doesn't
fit naturally into the architecture.
Fixing that properly would probably be out of scope for a GSOC
project, but doing an analysis would be highly interesting, in
particular if Shameera is interested not only in development, but also
in architecture and design.
I think that if one includes these different things into the proposal,
it would indeed make a very interesting GSOC project. Can we agree on
that?
Andreas
On Wed, Jan 4, 2012 at 13:54, Sagara Gunathunga
<sagara.gunathunga@gmail.com> wrote:
> This proposal is to address real issue with Axis2, that is in Axis2 JSON
> messages are not perform well as XML messages. Since we have enough time for
> GSoC we can decide the best approach for this. With your explanation 2nd
> approach sound good to me , also this approach enable to use QName based
> dispatching on JSON messages too.
>
> One design consideration need to fulfill is full streaming support in
> builders/formatters level so that gson can process underline stream
> directly, otherwise this proposal is meaningless.
>
> My thought about project scope is first let student to define the goals and
> scope and give our comments later during community discussion period so that
> he can add/remove some additional goals that he has confidence on
> implementing them.
>
> Thanks !
>
> On Wed, Jan 4, 2012 at 4:27 PM, Andreas Veithen <andreas.veithen@gmail.com>
> wrote:
>>
>> Axiom is an object model for XML and SOAP. Using it to store something
>> that doesn't have an XML representation is sonsense. What you are
>> probably referring to is the fact that an OMDataSource that backs an
>> OMSourcedElement can store an arbitrary Java object. However, the
>> OMDataSource must be able to produce an XML representation of that
>> data. More precisely it must be able to create a representation in the
>> form of an XMLStreamReader and it must be able to write the XML
>> representation to an XMLStreamWriter.
>>
>> At the level of Axis2 that translates into the fact that when a
>> message flows through the Axis2 engine, at any given point in time
>> that message has a well defined SOAP infoset. In principle you could
>> serialize the message to an XML document, deserialize it again and
>> replace the SOAPEnvelope in the MessageContext with that deserialized
>> message, without changing the outcome of the request.
>>
>> I don't know what you are doing in WSO2 products, but to my knowledge
>> there is no exception to that rule in Axis2 or Synapse, even for plain
>> text and binary messages. For both types of messages, Axis2/Synapse
>> internally uses a well defined SOAP infoset:
>>
>> - For plain text messages, the SOAP infoset uses an element that wraps
>> the entire text message as character data. E.g. for a message with
>> content "my message", the SOAP infoset would be (namespaces removed):
>>
>> <soap:Envelope><soap:Body><ns:wrapper>my
>> message</ns:wrapper></soap:Body></soap:Envelope>
>>
>> - For binary messages, the SOAP infoset uses an element that wraps the
>> message encoded as base64Binary.
>>
>> That being said, Axis2 uses several Axiom features to avoid building a
>> full DOM like in memory representation of the entire SOAP infoset:
>>
>> - For a request, the databindings consume the SOAP infoset without
>> building the Axiom tree.
>> - For a response, the databindings use an
>> OMDataSource/OMSourcedElement that is able to write the XML
>> representation directly to an XMLStreamWriter.
>> - For plain text, we also use a special OMDataSource implementation
>> that is able to produce the XML representation shown above, but that
>> at the same time allows to stream the character data.
>> - For binary messages, we simply use the Axiom features that are also
>> used for XOP/MTOM, i.e. we construct a complete Axiom tree, but with
>> an OMText instance that refers to a DataHandler with the binary data.
>>
>> However, these various optimizations don't change anything about the
>> fact that in Axis2, a message always has a well defined SOAP infoset.
>>
>> Since google-gson defines a direct mapping between JSON and Java
>> without defining an XML representation, you will have two options:
>>
>> 1. Use an OMDataSource that doesn't have an XML representation, i.e.
>> that doesn't have meaningful implementations of the getReader and
>> serialize methods, but that only acts as a holder for a Java object
>> that can't be transformed to XML. That would clearly be a misuse of
>> Axiom.
>>
>> 2. Define a trivial XML representation, which would be the JSON string
>> wrapped in a wrapper element. Since this is the same thing as we do
>> for plain text, we already have the corresponding message builders and
>> formatters, and one would simply map these builders/formatters to the
>> JSON content type. Implementing the proposal would then require only
>> three things:
>>
>> - Implementing the message receiver.
>> - Probably one would have to create a specialized OMDataSource that
>> enables streaming of the response.
>> - Potentially some minor enhancements to Axiom and/or the plain text
>> message builders/formatters to make sure that streaming is fully
>> supported.
>>
>> Since the message receiver is basically glue gode between google-gson,
>> Axiom and the service object, it will be fairly trivial. The problem
>> is then that the scope of this is likely not large enough for a GSOC
>> project.
>>
>> Andreas
>>
>> On Sun, Jan 1, 2012 at 16:25, Sanjiva Weerawarana <sanjiva@opensource.lk>
>> wrote:
>> > +1 - while Andreas this functionality can be implemented without Axis2,
>> > the
>> > proposed feature would add a lot of value to use of Axis2 as a way to
>> > have
>> > services that have a good JSON binding in addition to other bindings.
>> > Axiom's design allows passing of non-XML content without forcing XML and
>> > that model performs perfectly fine and well (Synapse and WSO2 ESB both
>> > leverage that heavily).
>> >
>> > Sanjiva.
>> >
>> >
>> > On Fri, Dec 30, 2011 at 10:25 AM, Amila Suriarachchi
>> > <amilasuriarachchi@gmail.com> wrote:
>> >>
>> >>
>> >>
>> >> On Fri, Dec 30, 2011 at 12:35 AM, Andreas Veithen
>> >> <andreas.veithen@gmail.com> wrote:
>> >>>
>> >>> On Thu, Dec 29, 2011 at 15:55, Amila Suriarachchi
>> >>> <amilasuriarachchi@gmail.com> wrote:
>> >>> >
>> >>> >
>> >>> > On Tue, Dec 27, 2011 at 7:58 PM, Andreas Veithen
>> >>> > <andreas.veithen@gmail.com>
>> >>> > wrote:
>> >>> >>
>> >>> >> On Sun, Dec 25, 2011 at 15:09, Shameera Rathnayaka
>> >>> >> <shameerainfo@gmail.com> wrote:
>> >>> >> > 2. store json string without doing any process untill
it reaches
>> >>> >> > JsonMessageReceiver. JsonMessageReceiver is a new Message
>> >>> >> > Receiver
>> >>> >> > which
>> >>> >> > use
>> >>> >> > gson to convert json to java objects, call relevant operation
and
>> >>> >> > get
>> >>> >> > result.
>> >>> >>
>> >>> >> What this means in practice is that you will have a message
>> >>> >> builder, a
>> >>> >> message receiver and a message formatter that interact with
each
>> >>> >> other, but that have no meaningful interaction with any other
>> >>> >> component of the Axis2 framework (the fundamental reason being
that
>> >>> >> google-gson defines a mapping between JSON and Java objects,
but
>> >>> >> eliminates XML from the picture). The question is then why
would a
>> >>> >> user go through all the pain of setting up Axis2 for this?
>> >>> >
>> >>> >
>> >>> > if you look into a point where users only need to expose a POJO
with
>> >>> > json
>> >>> > then they don't have to use Axis2.
>> >>> >
>> >>> > But if the user want to expose the same POJO service both soap
and
>> >>> > json
>> >>> > formats this provides a value in terms of performance for latter
>> >>> > case.
>> >>> > In
>> >>> > this case JSON message receiver can be written extending RPC message
>> >>> > receiver and call the normal RPC processing if the received message
>> >>> > is
>> >>> > not a
>> >>> > json one.
>> >>> >
>> >>> > thanks,
>> >>> > Amila.
>> >>>
>> >>> As you know, Axis2 assumes that every message it processes is
>> >>> representable as XML (which is different from CXF where a message can
>> >>> have different representations, depending on the phase that is
>> >>> executed). Until now this has always been the case, even for plain
>> >>> text and unstructured binary data. Are you going to drop that
>> >>> requirement from the Axis2 architecture
>> >>
>> >>
>> >> Drop that requirement ( I would say initially Axis2 is designed like
>> >> that
>> >> but latter specially in all contract first approaches it has not
>> >> followed
>> >> this for performance reasons) and make an efficient way to work with
>> >> JSON.
>> >> Then obviously this won't support WS-Security etc .. which are anyway
>> >> meaningless for json.
>> >>
>> >> If you look at how ADB works for non security (or non message building
>> >> case) is similar to this. It stores the xml stream in the Axiom object
>> >> (this
>> >> feature has come from axiom differed building) and get that underline
>> >> stream
>> >> at the message receiver and directly build the java objects from that.
>> >> Then
>> >> at the response also it save the response in OMDatasource and directly
>> >> serialize to the xml stream at the formatter.
>> >>
>> >> So idea for this is to provide such a direct stream parsing serializing
>> >> technique which performs well for POJO objects to communicate using
>> >> json.
>> >>
>> >> thanks,
>> >> Amila.
>> >>
>> >>>
>> >>> or else, what would be the XML
>> >>> representation of a JSON message received by that message receiver?
>> >>>
>> >>> >
>> >>> >>
>> >>> >>
>> >>> >> Andre.
>> >
>> >
>> >
>> >
>> > --
>> > Sanjiva Weerawarana, Ph.D.
>> > Founder, Director & Chief Scientist; Lanka Software Foundation;
>> >
>> > Founder, Chairman & CEO; WSO2;
>> > Founder & Director; Thinkcube Systems;
>> > Member; Apache Software Foundation;
>> > Visiting Lecturer; University of Moratuwa;
>> >
>> >
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: java-dev-unsubscribe@axis.apache.org
>> For additional commands, e-mail: java-dev-help@axis.apache.org
>>
>
>
>
> --
> Sagara Gunathunga
>
> Web -
---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@axis.apache.org
For additional commands, e-mail: java-dev-help@axis.apache.org | http://mail-archives.us.apache.org/mod_mbox/axis-java-dev/201201.mbox/%3CCADx4_uVgd-upeNsqjEkZLW35GuO21OV76-4QXg0NX-67RSnDbA@mail.gmail.com%3E | CC-MAIN-2019-26 | refinedweb | 1,884 | 50.06 |
A big challenge with DSLs is composability. DSLs are mostly used in silos these days to solve specific problems in one particular domain. But within a single domain there are situations when you need to compose multiple DSLs to design modular systems. Languages like Scala and Haskell offer powerful type systems to achieve modular construction of abstractions. Using this power, you can embed domain specific types within the rich type systems offered by these languages. This post describes a cool example of DSL composition using Scala's type system. The example is a very much stripped down version of a real life scenario that computes the payroll of employees. It's not the richness of DSL construction that's the focus of this post. If you want to get a feel of the power of Scala to design internal and external DSLs, have a look at my earlier blog posts on the subject. Here the main focus is composition and reusability - how features like dependent method types and abstract types help compose your language implementations in Scala.
Consider this simple language interface for salary processing of employees ..
trait SalaryProcessing {
// abstract type
type Salary
// declared type synonym
type Tax = (Int, Int)
// abstract domain operations
def basic: BigDecimal
def allowances: BigDecimal
def tax: Tax
def net(s: String): Salary
}
Salaryis an abstract type, while
Taxis desfined as a synonym of a
Tuple2for the tax components applicable for an employee. In real life, the APIs will be more detailed and will possibly take employee ids or employee objects to get the actual data out of the repository. But, once again, let's not creep about the DSL itself right now.
Here's a sample implementation of the above interface ..
trait SalaryComputation extends SalaryProcessing {
type Salary = BigDecimal
def basic = //..
def allowances = //..
def tax = //..
private def factor(s: String) = {
//.. some implementation logic
//.. depending upon the employee id
}
def net(s: String) = {
val (t1, t2) = tax
// some logic to compute the net pay for employee
basic + allowances - (t1 + t2 * factor(s))
}
}
object salary extends SalaryComputation
Here's an implementation from the point of view of computation of the salary of an employee. The abstract type
Salaryhas been concretized to
BigDecimalwhich indicates the absolute amount that an employee makes as his net pay. Cool .. we can have multiple such implementations for various types of employees and contractors in the organization.
Irrespective of the number of implementations that we may have, the accounting process needs to record all of them in their books, where they would like to have all separate components of the salary separately from one single API. For this, we need to define a separate implementation for the accounting department with a different concrete type definition for
Salarythat separates the net pay and the tax part. Scala's abstract types allow this type definition overriding much like values. But the trick is to design the
Accountingabstraction in such a way that it can be composed with all definitions of
Salarythat individual implementations of
SalaryProcessingdefine. This means that any reference to
Salaryin the implementation of
Accountingneeds to refer to the same definition that the composed language uses.
Here's the definition of the
Accountingtrait that embeds the semantics of the other language that it composes with ..
trait Accounting extends SalaryProcessing {
// abstract value
val semantics: SalaryProcessing
// define type to use the same semantics as the composed DSL
type Salary = (semantics.Salary, semantics.Tax)
def basic = semantics.basic
def allowances = semantics.allowances
def tax = semantics.tax
// the accounting department needs both net and tax info
def net(s: String) = {
(semantics.net(s), tax)
}
}
and here's how
Accountingcomposes with
SalaryComputation..
object accounting extends Accounting {
val semantics = salary
}
Now let's define the main program that processes the payroll for all the employees ..
def pay(semantics: SalaryProcessing,
employees: List[String]): List[semantics.Salary] = {
import semantics._
employees map(net _)
}
The
paymethod accepts the semantics to be used for processing and returns a dependent type, which depends on the semantics passed. This is an experimental feature in Scala and needs to be used with the
-Xexperimentalflag of the compiler. This is an example where we publish just the right amount of constraints that's required for the return type. Also note the semantics of the import statement in Scala that's being used here. Firstly it's scoped within the method body. And also it imports only the members of an object that enbales us to use DSLish syntax for the methods on semantics, without explicit qualification.
Here's how we use the composed DSLs with the
paymethod ..
val employees = List(...)
// only SalaryComputation
println(pay(salary, employees))
// SalaryComputation composed with Accounting
println(pay(accounting, employees))
1 comment:
it's good to see this information in your post, i was looking the same but there was not any proper resource, thanx now i have the link which i was looking for my research.
Accounting Dissertation Proposal | http://debasishg.blogspot.com/2009/07/dsl-composition-techniques-in-scala.html | CC-MAIN-2016-40 | refinedweb | 816 | 54.32 |
.
very nice
Posted by guest on August 26, 2011 at 07:26 PM PDT #
WHEN IS OCPJP(PREVIOUS SCJP) 1.7 GETTING STARTED.I AM WAITING FOR IT. COULD YOU PLEASE UPDATE ME ON THIS.
Posted by JAYASIMHA on September 10, 2011 at 12:58 AM PDT #
hi ;
I hav downloaded the JDK1.7 .
I am using Windows command promt to compile and using the eclipse IDE .
Whn I tried with the statement
import java.until.Scanner;
Its says
java.until does not exist .
How can I install the packages ??
Posted by guest on September 11, 2011 at 06:39 AM PDT #
Is the java 7 documentation available for download? If yes, where can I download it?
Posted by guest on September 11, 2011 at 12:29 PM PDT #
java is platform independent object oriented programing language.
Posted by rabbani on September 12, 2011 at 11:25 PM PDT #
So basically what are the "Commercial" features in the JVM that you want "$$$" license for? And isn't this deviating from the previous Java everything is open and free spirit?
Posted by TJ on September 14, 2011 at 10:47 PM PDT #
I have a new computer,have installed pogo.com on it for the games. I keep getting a "fatal error" on runtime. caqn anyone tell why and how doI correct this without buying software costing ov er $100, which I don't have. Thanks much.
Posted by Don Roberts on September 23, 2011 at 05:23 AM PDT #
When will JRE 7 update 1 be released?
Posted by Hans on September 26, 2011 at 05:13 AM PDT #
While and for loops cause problems with mutable objects.
Posted by guest on September 30, 2011 at 04:36 PM PDT #
Ia using the jkd7 version this very independent and acrobeting plateform providing version. so i think this is very useful to us so i request you and all jaa users. jkd7 use the evrytime because this very comfortable and easily providing output.
Posted by pankaj kumar on October 02, 2011 at 10:44 PM PDT #
I think jvm is very difficult language but it is very fast providing the output. so i think this is very usefulto us.so try it.
Posted by pankaj kumar on October 02, 2011 at 10:48 PM PDT #
I am looking again at:
Q: Where can I find API documentation?
A: Javadocs are available here.
My question is simple: why is the API documentation nearly useless?
For example, suppose I wanted to find out how to use "ActionListener
". The API provides nearly no information on how to use this. I am looking at it right now, and I am completely confused on where and when to use it, how to apply it, and so on. Sure, there are texts that describe the whole process, but the API is sooooo useless, i don't even know why you bother.
Posted by Baruch Atta on October 05, 2011 at 06:30 AM PDT #
My question concerns running Java 6 and Java 7 together on the same machine. I need to have Java 6 running on my workstation to support the software that uses Java. However, I would like to test Java 7.
Is this possible?
Posted by Baruch Atta on October 05, 2011 at 06:34 AM PDT #
My third comment concerns the incompatability between releases. We support a Java commercial software package, and we find that some Java releases are compatable, and some just break our software. This is so irrational. Can you insure that there is backwards compatability from Java 7 to Java 6?
Posted by Baruch Atta on October 05, 2011 at 06:38 AM PDT #
Very nice Oracle!
Posted by guest on October 07, 2011 at 02:09 AM PDT #
Q: Where can I find API documentation?
A: Javadocs are available here.
But where can I find the API documentation as a zip or tar ?
I don't stop programming when the Oracle site is not available (e.g. internet connection is down).
Also checked older documentation, but the links do not work, anymore.
See
"Download this JDK documentation" in upper left corner and on the bottom left.
Posted by guest on October 12, 2011 at 11:43 PM PDT #
Will JRockit & JDK merge going to make Oracle JDK to perform better on Intel platform (as good as JRockit)? The earlier JDK versions (6 and below) were highly optimized for AMD platforms and not for Intel whereas JRockit addressed optimizations for Intel platforms.
If yes, when this is going to happen and in what version?
Regards,
Prakash
Posted by Prakash C Rao on December 08, 2011 at 12:39 AM PST # | https://blogs.oracle.com/henrik/entry/java_7_questions_answers | CC-MAIN-2017-09 | refinedweb | 779 | 74.39 |
When.
This article assumes general .NET Framework skills, knowledge of programming ASP.NET applications, and object-oriented design. I also assume the reader understands key concepts in .NET development such as "decorating with Attributes" and "importing Namespaces," etc. Custom Web control development is a huge topic and the subject of complete books. I will provide as detailed and complete a tutorial as possible during the development of sample Web controls in the attempt to wet your appetite to pursue more knowledge on the subject.
Web controls give ASP.NET developers the same advantages in reusability, encapsulation, and complete OOP that Windows Form developers have had for a while.
As a free-lance consultant, I've noticed that when receiving ASP.NET skill-set requirements from clients or recruiters for contract jobs, knowledge of how to program custom Web controls is never requested. Now, I don't expect my business customers to know specific technologies when requesting a solution to their problems, but even IT departments that are in charge of screening consultants for required skills, and request experience in developing reusable software, seem to lack knowledge of this powerful part of ASP.NET. Programming Web controls goes way beyond using the ones that come with Visual Studio and can greatly change the way you develop and reuse code in Web applications.
I recently published an article on Declarative Programming using Web controls, where I generalized an approach to ASP.NET development using custom Web controls. This time, we're going to travel backwards and learn exactly how these extremely reusable components work, and how we can create our own to bring a level of object-oriented development to the UI level that was not easily achievable in the past. We will be developing three useful custom Web controls in this article that will cover many, though not all, of the technologies available to us. At the end, you will hopefully walk away with not only a good understanding of how to develop custom Web controls, but also three very useful Web controls that you can use as-is or as educational tools to get you going in developing more, on you own, and take advantage of this powerful yet underused part of the .NET world.
The "What" and the "How"
Put simply, ASP.NET Web controls are server-processed components that render standard HTML to a browser. Even in today's highly evolved world of Web application development, HTML is still all our browsers really understand. Sure, there's JavaScript, embedded OCXs, Flash animations, and so on, but all of them are kicked off by some kind of HTML.
When Visual Basic 5 introduced the Control Creation Edition, or CCE, Windows programmers were first given the opportunity to develop reusable OCXs (later to become ActiveX controls) of their own. For me at least, this changed the way I developed Windows applications. ASP.NET Web controls give ASP.NET developers the same advantages in reusability, encapsulation, and now complete object-orientation, when developing Web applications. However, the visual aspect of Windows Form controls and Web controls to a developer using them on a Windows form or a Web form is where the similarity ends.
You can essentially think of Web controls as code-generators. They are components that, through simple or complex logic, generate standard HTML that will get rendered in a browser. The functionality behind Web controls that generates the appropriate HTML gets processed within the ASP.NET page lifecycle. From a code standpoint, Web controls are just classes; classes with methods, properties, and events, just like any other class you have been programming with for over three years. This class-like nature gives developers the power of object-oriented programming at the UI level where they did not really have it before.
Advantages of Using Web Controls
Programming with custom Web controls provide many advantages in all stages of development..
Can Maintain Their Own State
Web controls can, and usually do, control their own state in order to persist information they need during page postbacks.
Fully Object-Oriented
As I mentioned before, Web controls are just classes, albeit pretty elaborate ones. This means they can inherit, be inherited, implement interfaces, and take part in any design pattern you see appropriate to use.
Promote. from the fact that it directly renders HTML straight through the rendering engine. This control will not typically contain any other controls within it, it the ability to raise events and/or check data values during that action. Because everything is controlled at a fine-grain level, there can typically be a bit more work involved when developing complex rendered controls. The proper naming, styling, and event handling of its contained HTML elements must all be handled through an extensive amount of code. I'll talk about the advantages and disadvantages of this type of Web control after I describe composite controls.
Composite Controls
The last type of Web control is called a composite Web control and it consists of a control that contains one or more other Web controls. This control relies on the rendering ability of the Web controls it contains and acts as a container mechanism, deciding where to place said controls by providing framing HTML code around them. This will make a lot more sense when I show you how to develop one later. Because composite controls instantiate one or more (possibly many) controls within it, straight forward to develop. If you know how to use the Web controls that Visual Studio provides for you then you're half way there in developing composite Web controls. In defense of composite Web controls and their slower rendering, if you're developing a Web form and place several Label and Textbox Web controls on it, you are going to take a similar performance hit as if you were rendering one Web control which contained a similar set of Labels and Textboxes within it..
Let's Code Already!
I'm going to show you how to develop three Web controls in the rest of this article. I will go into as much detail as I can considering that this is an article and not a book. Some of the things I will cover will be more general than others and I'm hoping that you get intrigued enough to read further on the subject from the books I've listed in the sidebar called "More on the Subject."
The WebControl class inherits form the Control class and adds some styling properties to it.
The first custom Web control you'll learn to develop is a button that will have the ability to pop up a confirmation dialog when it is clicked. This will give the user the ability to cancel whatever processing would have taken place when the button was clicked. I'll show you how to develop this Web control as an Inherited control and that I'll call the ConfirmationButton.
Next you'll learn to develop a second custom Web control I'll call the FormField control. This control was in my article on Declarative Programming. This control is a combination of a label, a textbox, and a button that you can arrange in a variety of ways. The idea behind this control is that it serves as the basis for data entry on a Web page. Because of the heavy use this control may receive on a single Web page, I'll develop this as a Rendered control.
The third and last custom Web control that you'll lean to develop is an EmailContact control. You no doubt have seen many Web sites that provide the user with a "Contact Us" page. This page usually includes a small form that allows the visitor of the site to send an e-mail to the site's Webmaster. You can see similar forms in today's most popular Web site among the develop community?the blog. You may see this kind of form when you leave feedback for a blog posting. I'll show you how to encapsulate everything necessary to build this kind of form into a custom Web control. You'll even see how to build the actual e-mailing functionality directly into the control. I'll show you how to develop this control as a composite control and will contain within it both the FormField Rendered control and the ConfirmationButton Inherited control.
You will enable all these controls with certain functionality during the development steps in this article. At the end of this article, I will list additional functionality that the final version of the controls include in the downloadable code.
So let's get to work...
The ConfirmationButton Control
The first custom Web control will also be the simplest. The ConfirmationButton control will be a simple class that starts by inheriting from the built-in Button Web control that ships with Visual Studio.
In VB.NET:
Public Class ConfirmationButton Inherits Button End Class
In C#:
public class ConfirmationButton : Button { }
The Button class this control inherits from comes from the System.Web.UI.WebControls namespace which you obtain by referencing System.Web.dll in your project. If this class is compiled as-is, it will duplicate the functionality of the Button class that .NET provides. In fact, you can compile this class, add it to the toolbox for a Web form, then drop it on a Web form and it will be no different than dropping a Button control from the Web form toolbox.
Properties
To your new class, you will only make two simple changes. The first one is the addition of a property. I want to describe this process in detail, however, because this is the way you'll add most properties in all three controls you'll build in this article. I say most properties because when you build the composite control, you'll code "mapped" properties a bit differently, but I'll get to that later. You want to add a ConfirmationMessage property to your ConfirmationButton class, and a value other than an empty string will indicate that you want a confirmation dialog box to pop up when a user clicks a button. Here is the code for your new property.
In VB.NET:
Public Property ConfirmationMessage() As String Get If CType( _ VieState("ConfirmationMessage"), _ Object) Is Nothing Then Return String.Empty End If Return CType( _ ViewState("ConfirmationMessage"), String) End Get Set(ByVal Value As String) ViewState("ConfirmationMessage") = Value End Set End Property
In C#:
public string ConfirmationMessage { get { if((object) ViewState["ConfirmationMessage"] == null) return String.Empty; return (string) ViewState["ConfirmationMessage"]; } set { ViewState["ConfirmationMessage"] = value; } }
Look closely and you'll notice that this property is not much different than properties you write every day in your classes. The typical procedure for a property is to expose a member variable of some kind. You've no doubt done this time and time again. The ConfirmationMessage property works similarly with the difference being that you aren't using a privately declared member variable. Instead, you're using a ViewState variable as the internal storage for the ConfirmationMessage property; so let me talk a little more about this.
The get accessor in the property returns the value of the internal variable. In the property, this is stored in ViewState["ConfirmationMessage"], and the set accessor sets the ViewState variable to whatever you set the property to. ViewState is a variable of type StateBag. Like the famous Session variables and Cookie variables that you have used in Web applications, it is a dictionary that contains one or more values accessed by a key. In this case, I've standardized the naming conventions so the key is equivalent to the name of the property. Keep this in mind because you're going to see it a lot more through the rest of this article. The contents of the ViewState variable are stored in the __ViewState hidden field on your Web page in an encrypted format and are used to save and retrieve values that you need to persist in between page postbacks. The rest of the code around the 'return' statement in the get accessor checks to see if the specific ViewState value exists in the ViewState array. Since ViewState is a dictionary type (actually, StateBag is), checking for an entry in it that does not exist does not raise an exception; instead, it adds an empty object. The if-statement checks the ViewState value for Nothing (or null). If this is the case, then it will return whatever you want the default value to be. The check for Nothing (or null) is done with an object-cast of the ViewState variable. The reason for this is due to the nature of the StateBag object, whereas an attempted access of a non-existing value will automatically add a null value.
You'll also have a few additions for this property that will be "standard operating procedure" for all properties you write in custom Web controls. These consist of certain attributes with which you'll decorate your property. Some properties will receive more attribute decorations than others, but all will receive at least four attributes: Category, Description, Bindable, and DefaultValue. All are found in the System.ComponentModel namespace.
In VB.NET:
< _ Category("Behavior"), _ Description("Gets or sets the message to be displayed in a confirmation dialog. The dialog will present itself if this property is not empty."), _ DefaultValue(""), _ Bindable(True) _ > _
In C#:
[ Category("Behavior"), Description(""), DefaultValue(""), Bindable(true) ]
Category
This attribute specifies the category that your property will be listed under when you view it in the property browser. Indulge me in a comical tangent: Make sure you do not accidentally misspell the category name. I can't tell you how many times I have misspelled the word "Appearance" when using it as a Category. Instead, I've typed "Apperance" and guess what the property browser shows? That's right?two categories, "Appearance" and "Apperance."
Description
This attribute determines the description of the property that is displayed at the bottom of the property browser. It provides simple instructions for the programmer that uses your Web control.
DefaultValue
Contrary to what the name of this attribute makes you assume, it does not determine what gets stored in the property by default. As you saw when I described the Property statement, you're setting the default value for the ViewState variable in your get accessor. This attribute lets Visual Studio know what the default value of the property will be for two main reasons. The first is that it lets the property browser know whether to display the value of the property in bold letters or not. If you haven't noticed before, when you drop a control on a Web form and refer to the property browser, all the property values are displayed in normal font until you change their values, when they are then displayed in bold letters. This let's you, the programmer, know that this value is not the default for the property. The second reason is that only properties with non-default values will appear in the ASPX tags that define your Web control on a Web form.
Bindable
This attribute determines if this property will participate in data binding functionality whereas the data source for the property can be defined in an expression. This topic is beyond the scope of this article, but if you want further information, look at the Data category of a typical Web control in the property browser and click on the DataBindings value, then click the Help button for a full explanation.
This is the technique for creating properties that you'll see in many of the properties going forward in all your Web controls, so this will be the only place where I will get into a lot of detail for Web control properties.
Rendering
Now that you've added your custom property to the Web control, you need to tell the control what to do with it. Remember that your goal is to check the content of this property and if it is other than an empty string, you want a confirmation dialog to pop up and let the user confirm or cancel the postback that the button will initiate. When you create custom Web controls, it's important to understand what the eventual HTML result should be remember I stated earlier that Web controls are essentially code generators that render standard HTML. The ConfirmationButton control, like its Button base class, renders a standard HTML input tag that looks something like this.
<input type="submit"... />
The rest of the tag is not important at this time. What is important is what you would need to add to such an HTML tag if you wanted a confirmation pop up, and that is some Jscript code in its 'onclick' event.
<input type="submit" onclick="if(!confirm('Are you sure?') return false; "... />
This Jscript code will simply pop up a confirmation message asking "Are you sure?" then if the user clicks Cancel, the code returns a false. This has the effect of canceling the submission of the form originally intended by pressing the button, and effectively canceling the postback. Now you need to get this code into your Web control. You do this by overriding the AddAttributestoRender method. This method adds tag attributes to the rendered HTML tag your control will eventually produce.
In VB.NET:
Protected Overrides Sub AddAttributesToRender( _ ByVal writer As HtmlTextWriter) If Me.ConfirmationMessage <> _ String.Empty Then writer. _ AddAttribute( _ HtmlTextWriterAttribute. _ Onclick, "if(!confirm('" & _ Me.ConfirmationMessage.Replace( _ "'", "\'") + "')) return false;") End If MyBase.AddAttributesToRender(writer) End Sub
In C#:
protected override void AddAttributesToRender( HtmlTextWriter writer) { if(this.ConfirmationMessage != String.Empty) { writer.AddAttribute( HtmlTextWriterAttribute.Onclick, "if(!confirm('" + this.ConfirmationMessage. Replace("'", "\'") + "')) return false;"); } base.AddAttributesToRender(writer); }
The argument of type HtmlTextWriter is used to write to the HTML rendering engine and you will see much more of this later. One of the methods of this type allows you to add an attribute to the HTML tag that will later get rendered. As you can see, you're adding an 'onclick' attribute and providing the appropriate Jscript to perform the task. The code uses the 'Replace' statement to replace any single ticks with ones preceded by a backslash. This is standard Jscript syntax.
I've twice mentioned the tag that is rendered. The Button class code has decided this for you so it is not discussed in detail here. You're taking it on faith that Microsoft's Button control renders an 'input' tag.
That's it for the ConfirmationButton class. If you compile it you can add it to your toolbox and use it on a Web form. When you drag it onto a Web form and examine the property browser, you'll notice the ConfirmationMessage property under the Behavior category. Set this property to a value of "Are you sure?", run the Web form and press the button, and you should see something like Figure 1.
If you click Yes you should see the page postback as normal, but clicking "No" will simply make the pop up disappear and you'll notice that the postback will be cancelled. (You can tell if the postback occurs by looking at the bottom status bar of the browser.)
With very little work, you could have your button decide whether or not to use the ConfirmationMessage property based on another Boolean property called Confirm; instead of depending on the existence of a message text.
The assembly, or DLL file, that this class compiled into can be distributed to any Web applications and used freely. This is where the extreme reusability factor of custom Web controls comes in. In fact, once this control exists in your toolbox, dragging it onto a Web form will automatically add the reference to its DLL to your ASP.NET application.
Before I wrap this up, I want touch on one other thing that will apply to all Web controls. I'm referring to the control's tag prefix. All of you have used the Visual Studio Web controls already, and I'm sure you've noticed the ASPX code that gets generated when you drag something like a textbox onto a Web form.
<asp:TextBox id="TextBox1" runat="server" ... />
The "asp:" in this case is called the tag prefix. By default, .NET uses "cc1" for your new control but you're going to change it to something else. Since my company name is InfoTek Consulting Group, I tend to use "icg" as the tag prefix for all of my Web controls. I'll place the definition for a tag prefix in the AssemblyInfo file and the declaration looks like this.
[assembly: TagPrefix("InfoTek.Web.UI.WebControls", "icg")]
For VB.NET projects, the [ and ] brackets are replaced with < and >. The first argument specifies the custom control's namespace and the second argument specifies the tag prefix you want to use. By adding this attribute to the AssemblyInfo file, your ConfirmationButton control takes this form.
<icg:ConfirmationButton id="ConfirmationButton1" runat="server" ... />
Congratulations. You've just developed a very simple, yet useful Inherited Web control. You can use the technique I showed you here on any type control and it is only limited by your own creativity. For practice, try creating another Inherited control but make it extend a Textbox control to convert its contents to upper or lower case when it loses focus. Here's a hint: the Jscript "onblur" event gets fired when the control loses focus. I'll leave the rest to you.
The FormField Control
I'll use the FormField control as an example of a Rendered custom Web control. You'll find this control to be extremely useful in any Web application and you'd very likely use it heavily within a single page. As I explained before, it is for that reason that I've chosen to develop this control as a Rendered Web control, which is one that directly writes to the HTML rendering engine; making the control very efficient. This control will serve well in any Web form (or other Web controls) where data-entry is performed. The control replaces the need to drop a separate label, textbox, and sometimes a button on Web forms several times in order to cover all fields used for data-entry. The label and textbox are common everywhere, but the button is an added extra for our control. The control can display a button to the left of the textbox. This button will raise an event to the code-behind that can be trapped by whatever page uses the control. This feature, along with some interesting properties that we'll go over soon, will give the FormField control some really cool functionality. Figure 2 shows what the finished control will look like in a standard layout.
Now you can start to create the class that will become your FormField control. This class will inherit from the System.Web.UI.WebControls.WebControl class which is one of two classes that [non-inherited] custom Web controls inherit from. The other is the Control class but it does not provide the styling properties that you want for your control.
In VB.NET:
Public Class FormField Inherits WebControl End Class
In C#:
public class FormField : WebControl { }
I'll discuss styling later but for now your next step is to add some properties to your control.
Properties
In the interest of space, I will not go into the property details here but the final downloadable code has every property full documented, specifying the category they fall into and their description, which are both provided to each of the properties by way of the attributes I wrote about earlier. The properties that you'll add to this control will determine how the control renders its contents and what kind of behavior it provides to the Web form using it. At the end of this section, I'll give you a quick list of functionality the final control has. Details on all of this is just too much for this article, but I will tell you that in the process of adding functionality to this and the control that follows, I'll demonstrate a variety of features you can give your Web controls. This should give you a good reference to use when building more controls in the future. Table 1 lists three properties that I'll refer to during the development that follows.
Since this is going to be a Rendered control, the next step is to jump right into the method that provides the rendering engine that you'll write to.
Rendering
The Render method, which you need to override, has only one argument of the type System.Web.UI.HtmlTextWriter. This argument, which you'll call output, is the entry point into the Html rendering engine. To write HTML (or any other) text using this object, you could use this snippet.
output.write("<input type='text' id='txt1'>");
This code would write out the text between the quotes directly without any other process in between. This kind of direct access makes rendered controls very efficient for rendering speed. While your control will write out using this object, your control won't write literal text as shown in the code snippet above. The HtmlTextWriter object also provides many methods that help you create HTML, and that's what you'll take advantage of.
Speaking of HTML, now's a good time to describe what kind of HTML you want to render for the FormField control. Remember the FormField control will be made up of three parts: a caption, a textbox, and a button. Essentially, the HTML will give the control a <span> tag, an <input type='text'> tag, and an <input type='submit'> tag. I'll walk you through that and then we'll build upon it. Listing 1 shows the VB .NET code that will render the HTML for the three sections of your custom control in its simplest form. Listing 2 shows the same process in C#.
Let's look at a piece of HTML that will help you analyze this code:
<span id="ctlFormField1_Caption" name="ctlFormField1:Caption">Enter Name:</span>
This is essentially the piece of code that the first five code lines of the Render method will create. You want the label to use the "span" tag for the caption of the FormField control. This basic HTML tag contains literal text and can apply any styling you desire to that text. If you note the third, fourth, and fifth line in the code, you're using the RenderBeginTag method of the HtmlTextWriter object to render a "span" tag. Then you'll render the caption text from the Caption property, then you'll close the tag using the RenderEndTag method. Note that there you don't need to specify which tag you need to close because the rendering engine automatically keeps track of that. The first two lines in the code use the AddAttribute method of the HtmlTextWriter object to add custom attributes that will get rendered with the "span" tag. In the sample HTML above, the "span" tag contain an id and name attribute. These attributes are added using the AddAttribute method in conjunction with an HtmlTextWriterAttribute enum value. Every call to AddAttribute gets "stacked" and the attributes all get added to whatever is the next tag rendered with RenderBeginTag. After that the attribute stack clears up and the task starts again. You can see this demonstrated in the previous code with the id and name attributes that get added to the three tags being rendered. The id and name attributes lead this article to an important topic for rendered controls naming.
Naming
When a programmer drops your newly created FormField control on a Web form, the first thing they usually do is give it a name, such as fldName. The rendered control, however, is made up of several HTML tags that would normally each have their own identifier. Since you are manually rendering the entire HTML that your custom Web control is composed of, it is obviously up to you to provide those HTML tags with their appropriate identifiers. My code examples will follow an ASP.NET naming convention that all of ASP.NET's out-of-the-box Web controls use. The two attributes you have to give to each of the HTML tags are id and name. The values that these attributes should contain consist of the name of the encapsulating control (which is your Web control) followed by whatever name you want.
It goes without saying that this name should make sense if you read the rendered HTML from the browser. The delimiter between the two should be a ":" for the name attribute and a "_" for the id attribute. You have no way of knowing what a developer will name your control when you use the UniqueID property of the Control class, which is ultimately the class your control inherits from. Following this naming convention, you want to keep the code easy for any programmer to understand and allow your custom Web controls to maintain the same standards that Microsoft and other control vendors use. Note in the code that the textbox control retains the same name as your Web control (UniqueID); no ":childname" or "_childname" is concatenated. The reason for this has to do with making the Web control compatible with client-side validators and will be explained later.
What do we have so far?
Now I'd like you to compile your control and see what it looks like in a designer. I'll leave it up to you to have an ASP.NET Web application ready to go with a test page so you can test your controls. Add the control to the toolbox by browsing for the compiled assembly. If you drag your new control onto a Web form, you should get something that looks like Figure 3.
I think that looks pretty good with very little effort. Of course, you should fix a couple of details before you add any bells and whistles. First of all, I'll show you how to fix the control visually so it looks cleaner. You can simply add a space inbetween the caption and textbox and between the textbox and button. If you recall the last code snippet, you used three RenderBeginTags and RenderEndTags to render the three HTML tags. Now you need to add code to render a space character after the RenderEndTag that corresponds to the "span" tag and after the RenderEndTag that corresponds to the "input" tag for the textbox. You can use the Write method of the HtmlTextWriter object. In this method call you can send any literal text you want; in this case you'll send " ", which is the HTML control code for a space character. In C# your code would look like this.
output.Write(" ");
For VB.NET code you just remove the trailing semi-colon. If you recompile your control and look at your Web form now you'll see the space between each of the elements of your control. If you modify the properties of the control a little bit, you'll see the Caption and ButtonCaption in the Appearance category of the property browser. Change the Caption to "Enter name:" and the ButtonCaption to "search" and you should see something like what's in Figure 4.
I think by now you can see how useful this can be. In fact, at this point you can use the control on a Web form and see it in action. The Text property can retrieve or set the textbox's value from the form's code-behind page. But for real versatility, you still have a little more work to do.
To Be or Not Be
It's pretty obvious that few if not most of the instances of this control that you would use in an actual Web form don't need the button to the right of the textbox. You could take all the code in the Render method that corresponds to the button, including the space character you rendered before it, and place it around a condition statement that checks a new Boolean property called ButtonVisible. This lets you toggle the visibility of the button on your FormField control. However, in the downloadable code for this article, you'll notice that I don't use a ButtonVisible property. Instead, I've added a little more versatility with a property called ButtonLocation which is an enum value with three possibilities: Right, Left, or Hide. Using this approach I can place the button either to the right or left of the textbox, or I can hide it altogether. The logic to achieve this functionality involves creating conditions for rendering the HTML in the Render event and doing things in a different order depending on certain property values. I won't go into those details here but I suggest you review the downloadable code. You'll find this and plenty of other added functionality.
In the second part of this article, I'll show you three more things you can do to this control to make it really useful. First I'll show you how to add the ability to capture an event when the button is clicked or when the text in the textbox changes. In addition, I'll show you how to add basic styling, and I'll show you how to allow each element that makes up your control resize appropriately. In the mean time, feel free to read ahead in the code samples. | https://www.codemag.com/article/0509051 | CC-MAIN-2019-13 | refinedweb | 5,619 | 60.95 |
CircleCI has recently published a very useful post “Why we’re no longer using Core.typed” that raises some important concerns w.r.t. Typed Clojure that in their particular case led to the cost overweighting the benefits. CircleCI has a long and positive relation to Ambrose Bonnaire-Sergeant, the main author of core.typed, that has addressed their concerns in his recent Strange Loop talk “Typed Clojure: From Optional to Gradual Typing” (gradual typing is also explained in his 6/2015 blog post “Gradual typing for Clojure“). For the sake of searchability and those of us who prefer text to video, I would like to summarise the main points from the response (spiced with some thoughts of my own).
Disclaimer: All the useful information comes from Ambrose. All the errors and misinterpretations, if any, are mine.
The Concerns
(It should be noted that CircleCI has quite a large codebase with ~ 90 typed namespaces.)
- Slow checking – If you work on multiple files, you need to re-scan them all which takes very long time on a large project
- Mixing of typed and untyped code and thus the use of types with :no-check leads to weakened guarantees; both inside one’s own codebase and when interacting with untyped libraries (see #4).
- Some expressions cannot by typed (get-in, …) and it is difficult to distinguish whether an error is caused by core.typed’s limitations or an actual defect in the code being checked
- It’s costly to maintain type for untyped 3rd-party libraries
- It is difficult to find out the right type signature for some expressions
The Solutions
Summary: The situation is already better and will be much better when gradual typing is fully finished.
#1 Slow checking – this is already solved by the Typed REPL and the caching built into require / load.
Ambrose Bonnaire-Sergeant (ABS): But the Typed REPL is still work in progress, “In particular, it doesn’t play nicely with many tools that use the current namespace to evaluate tools-specific things (eg. autocompletions). […] it’s undocumented and unproven” though the require/load caching might be split out and used separately as it doesn’t suffer from these problems.
#2 Mixing of typed and untyped code and thus lack of compile-time guarantees – this will be solved by gradual typing when finished by adding runtime checks to these types (and adding runtime checks to ensure that untyped code cannot pass illegal values to typed code)
#3 Expressions that are impossible / hard to type – I don’t think this has been addressed in the talk, though I have seen in the Google Group that the community continually thinks about ways to be able to type more Clojure idioms. My thoughts: There should be a first-class support for these, i.e. a well known, well supported and easy to use way to work around these. Something like the solution for external untyped code where we provide our own type signature and mark it with “:no-check”. (Though I obviously have no idea what this solution for the particular problem of type uncheckable expressions would be.) Also, it should not be impossible to modify the error reporting to clearly distinguish errors caused by core.typed’s limitations and those by defects in the code being checked.
ABS: “The most significant work on this front (ie. supporting more idioms at compile-time) has been incremental since 2013. Overhauls are needed for the challenges CircleCI use as examples.”
#4 Cost of maintaining type signatures for 3rd party libraries – this will be still costly but much more valuable and reliable as these types will be turned into runtime guarantees. My thoughts: This is no different from using Prismatic Schema, there you too need to check that external code conforms to your contracts.
ABS: One interesting idea is to convert existing Schema annotations to core.typed for libs that have them.
#5 The difficulty of finding out the right type signatures of more complex functions – this hasn’t been addressed. My thoughts: Making it easier to derive type signature is certainly an interesting are for exploration though likely outside of the main focus now. It would be great to run a function with few examples of its usage through some magical box and get back a type signature for it :-)
ABS: “I’m personally mostly interested in using compile-time data to infer types at the moment, which would again require some overhauls if at all possible. Your suggestion has been successfully used in practice though, see DRuby. I would like to know if this approach works on typical Clojure code. IME types in many Clojure programs aren’t especially polymorphic or higher-order when crossing function boundaries, functions often take a map or some other simple value, so there’s probably something worth investigating.”
Conclusion
Core.typed is relevant and useful in many cases. With the progress towards Gradual Typing it will become even much more powerful and useful on mixed typed-untyped code based.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/an-answer-to-circlecis-why-were-no-longer-using-co | CC-MAIN-2018-09 | refinedweb | 847 | 57.91 |
User Details
- User Since
- Jan 5 2020, 11:18 PM (11 w, 5 d)
Today
Yesterday
Updated to handle nested namespaces, exclude C linkages functions, and made check language specific. (:
Thu, Mar 26
Won't deleting the files in the AOR directory cause the make check on the buildbots to fail? It might make sense to leave the files and delete them once all functions have been migrated?
Wed, Mar 25
Tue, Mar 24
Mon, Mar 23
Sat, Mar 21
Fri, Mar 20
Thanks!
Wed, Mar 18
Tue, Mar 17
Addressed comments. (:
Add comment explaining why we wait on pid.
Mon, Mar 16
Thu, Mar 12
Wed, Mar 11
Changed to install ninja from apt instead of source.
Mocked the header files so that we don't experience failures due to differences in systems. Mind taking a quick look @aaron.ballman?
Tue, Mar 10
Addressed @Eugene.Zelenko comments.
Addressed @aaron.ballman comments (:
Mon, Mar 9
This looks ready to me.
nit: I'd maybe add some comments explaining the pipe trick since it might not be overtly obvious what's happening here on first glance. (:
Fri, Mar 6
Thanks for the heads up phosek, I removed the check from fuchsia's directory.
Also addressed Eurgene.Zelenko's comments. (:
Thanks for the suggestions, the general check sounds like a great idea. I can see the use case for this as it can be used by anyone. I took the time to port out fuchsia's check and flesh out the user facing documentation. Here is the patch for that D75786.
Thu, Mar 5
Wed, Mar 4
Tue, Mar 3
Mon, Mar 2
Done.
Also would it be wise to add a .clang-tidy file to libc/ to enable this module for that subdirectory?
Yes, this will be done in a separate patch. Thanks for pointing it out!
Fri, Feb 28
Feb 26 2020
Feb 23 2020
Feb 22 2020
Feb 21 2020
Removed most dependencies on system libc headers and integrated changes requested by Kostya. | https://reviews.llvm.org/p/PaulkaToast/ | CC-MAIN-2020-16 | refinedweb | 334 | 83.25 |
Python is a living language — under constant development to keep up with the times. The Python Software Foundation is not just making additions to the standard library and to the reference implementation CPython, but also introducing new features and refinements to the language itself.
For instance, Python 3.8 introduced a new syntax for in-line assignments (the “walrus operator”) that makes certain operations more concise. Another newly approved syntax improvement, pattern matching, will make it easier to write code that evaluates for one of many possible cases. Both of these features were inspired by their presence and utility in other languages.
And they’re only two of a slew of useful features that could be added to Python to make the language more expressive, more powerful, more suited to the modern programming world. What else might we wish for? Here are four more language features that could add something of real value to Python — two we might actually get, and two we probably won’t.
True constants
Python doesn’t really have the concept of a constant value. Today, constants in Python are mostly a matter of convention. Using a name that’s in all-caps and snake case — e.g.,
DO_NOT_RESTART — is a hint that the variable is intended to be a constant. Similarly, the
typing.Final type annotation provides a hint to linters that an object should not be modified, but it doesn’t enforce that at runtime.
Why? Because mutability is deeply ingrained in Python’s behaviors. When you assign a value to a variable — e.g.,
x=3 — you’re creating a name in the local namespace,
x, and pointing it at an object in the system that has the integer value
3. Python assumes at all times that names are mutable — that any name could point to any object. That means that every time a name is used, Python goes to the trouble of looking up what object it’s pointing at. This dynamism is one of the chief reasons Python runs more slowly than some other languages. Python’s dynamism offers great flexibility and convenience, but it comes at the cost of runtime performance.
One advantage of having true constant declarations in Python would be some reduction in the frequency of object lookups that take place during runtime, and thus better performance. If the runtime knows ahead of time that a given value never changes, it doesn’t have to look up its bindings. This could also provide an avenue for further third-party optimizations, like systems that generate machine-native code from Python apps (Cython, Nuitka).
However, true constants would be a major change, and most likely a backward incompatible change. It would also be up for debate if constants would come by way of new syntax — for instance, the as-yet-unused
$ symbol — or as an extension of Python’s existing way to declare names. Finally, there is the larger, philosophical question of whether or not true constants make sense in a language where dynamism has been a big part of the appeal.
In short, it’s possible we’ll see true constants in Python, but it would be a major breaking change.
True overloading and generics
In many languages, multiple versions of the same function can be written to work with different kinds of input. For instance, a
to_string() function could have different implementations for converting from integers, floating-point numbers, or other objects — but they would share the same name for the sake of convenience. “Overloading,” or “generics,” make it easier to write robust software, since you can write generic methods for common processes rather than use a method specifically for a given type.
Python does let you use one function name do the work of many, but not by defining multiple instances of a function. You can define a name only once in a given scope and bind it to only a single object at a time, so you can’t have multiple versions of a single function under the same name.
What Python developers typically do to work around this is use built-ins like
isinstance() or
type() to determine the type of variable submitted to a function, then take action based on the type. Sometimes this involves dispatching to a type-specific version of a function under the hood. But this approach makes it hard for other developers to extend your function unless you go out of your way to make it extensible — for instance, by dispatching to methods within a class, which could be subclassed.
PEP 3124, advanced in April 2007, proposed a mechanism for decorating functions to indicate they could be overloaded. The proposal was deferred rather than being rejected outright — meaning the idea was fundamentally sound, but the time wasn’t right to implement it. One factor that might speed the adoption of overloading in Python — or cause the idea to be ditched entirely — is the implementation of the newly proposed pattern matching system.
In theory, pattern matching could be used under the hood to handle overload dispatch. However, pattern matching could also be given as a rationale for not implementing generics in Python, since it already provides an elegant way to dispatch operations based on type signatures.
So we might get true overloading in Python one day, or its advantages might be superseded by other mechanisms.
Tail recursion optimizations
Many language compilers employ tail recursion optimizations, where functions that call themselves don’t create new stack frames in the application, and thus risk blowing up the stack if they run for too long. Python doesn’t do this, and in fact its creators have consistently come out against doing so.
One reason is that much of Python, from the inside out, uses iteration rather than recursion — generators, coroutines, and so on. In this case, it means using a function with a loop and a stack structure instead of a recursive mechanism. Each call of the loop can be saved into a stack to create a new recursion, and popped off the stack when the recursion finishes.
Python developers are encouraged to use these patterns instead of recursion, so there seems little hope for recursion optimizations. The chances here are not likely at all, as Python’s idioms support other solutions.
Multiline lambdas
Lambdas, or anonymous functions, made it into Python only after some resistance on the part of language creator Guido van Rossum. As Python lambdas exist now, they’re highly constrained: They only allow you to use a single expression (essentially, anything to the right of an equals sign in an assignment operation) as the function body. If you want a full block of statements, just break them out and make an actual function from them.
The reason comes down to the design of the language as van Rossum sees it. As van Rossum wrote in 2006, “I find any solution unacceptable that embeds an indentation-based block in the middle of an expression. Since I find alternative syntax for statement grouping (e.g. braces or begin/end keywords) equally unacceptable, this pretty much makes a
- Virtualenv and venv: Python virtual environments explained
- Python virtualenv and venv do’s and don’ts
- Python threading and subprocesses explained
- How to use the Python debugger
- How to use timeit to profile Python code
- How to use cProfile to profile Python code
- How to | https://www.infoworld.com/article/3566382/4-powerful-features-python-is-still-missing.amp.html | CC-MAIN-2021-49 | refinedweb | 1,230 | 58.72 |
Did I find the right examples for you? yes no Crawl my project Python Jobs
Bucket keys to be compacted are placed on a sorted set. The compactor needs to atomically pop one bucket key off the set. This can be done with a lock--entailing the use of a lock key and various timeout mechanisms--or by evaluating a Lua script, which is probably the best way. Unfortunately, Lua scripts are not supported prior to client version 2.7.0 or server version 2.6.0. This class provides an abstraction around these two methods, simplifying compactor().
src/t/u/turnstile-HEAD/tests/unit/test_compactor.py turnstile(Download)
def test_init(self): gbk = compactor.GetBucketKey({}, 'db') self.assertEqual(gbk.db, 'db') self.assertEqual(gbk.key, 'compactor')
def test_init_altconf(self): gbk = compactor.GetBucketKey({ 'compactor_key': 'alt_compactor', 'max_age': '60', 'min_age': '5',
def test_call(self, mock_get, mock_debug, mock_sleep, mock_time): db = mock.Mock() gbk = compactor.GetBucketKey({}, db) result = gbk() | http://nullege.com/codes/search/turnstile.compactor.GetBucketKey | CC-MAIN-2021-43 | refinedweb | 154 | 53.88 |
Indicate Device State with LEDs
Most AVS products differentiate themselves with unique styling features, but they all speak the same design language to ensure customers can easily interact with them. When you build your first Alexa-enabled device, you can follow AVS' UX Guidelines to help your customers understand what's happening. Most Alexa-enabled products use LEDs to communicate the "Attention State" of the device.
As you can see from the above table, attention state guidelines include Blue LEDs when Alexa recognizes the Wake Word (listening state), or Red LEDs when the user has turned off the microphones (privacy mode). In this workshop, we’ll use the AVS Device SDK to implement visual indicators of device state into your product.
Get your required hardware
If you've already got some LEDs and resistors laying around, you can use those. For this tutorial, we're using the GPIO Breakout Bundle from CanaKit, available from our wishlist here.
Hook up your LEDs to the Raspberry Pi
Plug your ribbon cable into the black breakout board and install it in the breadboard as shown in the below picture. Using two jumper wires, attach them to the pins labeled 17 and 18 on the black breakout PCB. You can do this by plugging into the same row as them on the breadboard.
Install two resistors from your kit with color bands red/red/black/gold as shown. This corresponds to 220 ohms which is enough to push some current through the LEDs without exploding them. Make sure the resistors straddle the middle of the breadboard.
Grab a Red LED and a Blue LED from your kit. Notice how one of the legs is shorter than the other? Put the short leg in the furthest right row on your breadboard - this should correspond to the negative terminal (GND) on the breakout board, marked as negative on both the black PCB and shown as a blue row on the breadboard.
The long leg of the LED should connect through a resistor to your jumper wires - wire pin 17 to your Red LED's resistor, and pin 18 to your Blue LED's resistor.
Check your connections against the picture carefully before plugging the other end of the ribbon cable into the Raspberry Pi's header. Ensure your ribbon cable is aligned squarely on the Pi's header without any pins offset or sticking out.
Modify the AVS Device SDK to implement the Attention System
Now let's write some software to activate the right pins on the Pi. In File Explorer, navigate to the folder home/pi/avs-device-sdk/SampleApp/src
We need to add the WiringPi library to the SampleApp project so that we can control output pins on our Raspberry Pi. Open the file “main.cpp” and add the include header statement at the top of the file as shown.
#include <wiringPi.h>
We will also initialize the library using the function “wiringPiSetup” and set up the GPIOs we plan on using to drive our Red and Blue LEDs. The Pi makes a couple dozen pins available for use, but for now, let's reserve two output pins for our tutorial. Still in main.cpp, scroll down to the
int main function and add the following code:
wiringPiSetup () ; pinMode (0, OUTPUT) ; pinMode (1, OUTPUT) ;
Save your text file and close it. Still in the home/pi/avs-device-sdk/SampleApp/src folder, open the file “UIManager.cpp”. We need to include the WiringPi.h header file by adding
#include <wiringPi.h> at the top of your file:
If you've already done the Indicate Device State with Sounds tutorial, these next steps will be familiar to you - we're going to find the hooks for Privacy Mode and Listening state, then add the output commands to them to drive our LEDs.
In your
UIManager::microphoneOff() function, add
digitalWrite (0, HIGH); to turn on the Red LED when your microphone is turned off:
Scroll down a bit to the
UIManager::microphoneOn() function and add
digitalWrite (0, LOW); to turn the Red LED back off when you exit Privacy Mode.
One more thing! We need to initialize that LED so it's not in an indeterminate state on startup. Scroll up to the
UIManager::printWelcomeScreen() function and add
digitalWrite (0, LOW); inside the brackets as shown.
Now let's add the Blue LED for Alexa's Listening state. Near the bottom of the file in the
printState() function, where it says
case DialogUXState::LISTENING:, add
digitalWrite (1, HIGH); to drive the Pi's GPIO pin to 3.3V and turn on our LED.
Of course, we've got to switch the LED off when Alexa isn't listening. Let's do this by adding a
digitalWrite (1, LOW); to states
IDLE and
THINKING. When you're finished, your states should include the code as shown below. Don't forget to save before closing.
For the final step, you'll need to make a change to the CMake file in order for the project to use the WiringPi library when you rebuild the Sample App. Open the file “CMakeLists.txt” from the same home/pi/avs-device-sdk/SampleApp/src folder and add
target_link_libraries(SampleApp "-lwiringPi") at the bottom of the file as shown.
Rebuild the Sample app
Open a terminal and input the following command to rebuild the Sample App to implement the changes you just made:
cd /home/pi/build/SampleApp sudo make
You'll also need to re-input your credentials after adding wiringPi to the CMake file. You can do this by re-doing the initial install step. This should only take 30 seconds, and won't require you to re-authenticate.
cd /home/pi/ sudo bash setup.sh config.json [-s 1234]
Restart your Sample App by initiating the startsample.sh script in a terminal:
cd /home/pi/ sudo bash startsample.sh
Now, say "Alexa" - you should see the Blue LED light up to indicate when your device is in the "Listening" state. Toggle your device in and out of Privacy Mode by typing "m" and hitting "return" in your Sample App - your Red LED should indicate the state.
The AVS Device SDK allows you to easily implement the Attention State system in hardware. Stay tuned for more advanced tutorials where we'll follow AVS's UX Guidelines to implement animations for
THINKING and
SPEAKING states, as well as indicators for Notifications, Error States, and more.
Be the first to know when new tutorials are released by signing up for our Voice Mail Newsletter for professional AVS developers on the Alexa Voice Service Developer Portal.
If you make something awesome - please share it! Share a link via the feedback button. | https://developer.amazon.com/pt-br/docs/alexa-voice-service/indicate-device-state-with-leds.html | CC-MAIN-2019-39 | refinedweb | 1,120 | 70.13 |
>hakan...@ohsu.edu said: >>. > richard.ell...@gmail.com said: > NFS v4 or DFS (or even clever sysadmin + automount) offers single namespace > without needing the complexity of NFSv4.1, lustre, glusterfs, etc.
Advertising
Been using NFSv4 since it showed up in Solaris-10 FCS, and it is true that I've been clever enough (without automount -- I like my computers to be as deterministic as possible, thank you very much :-) for our NFS clients to see a single directory-tree namespace which abstracts away the actual server/location of a particular piece of data. However, we find it starts getting hard to manage when a single project (think "directory node") needs more space than their current NFS server will hold. Or perhaps what you're getting at above is even more clever than I have been to date, and is eluding me at the moment. I did see someone mention "NFSv4 referrals" recently, maybe that would help. Plus, believe it or not, some of our customers still insist on having the server name in their path hierarchy for some reason, like /home/mynfs1/, /home/mynfs2/, and so on. Perhaps I've just not been persuasive enough yet (:-). richard.ell...@gmail.com said: > Don't forget about backups :-) I was hoping I could get by with telling them to buy two of everything. Thanks and regards, Marion _______________________________________________ zfs-discuss mailing list zfs-discuss@opensolaris.org | https://www.mail-archive.com/zfs-discuss@opensolaris.org/msg50729.html | CC-MAIN-2017-04 | refinedweb | 234 | 58.42 |
Picture this: you have a perfectly good function component, and then one day, you need to add a lifecycle method to it.
Ugh.
“Maybe I can work around it somehow?” eventually turns to “oooook FINE I’ll convert it to a class.”
Cue the
class Thing extends React.Component, and copy-pasting the function body into
render, and then fixing the indentation, and then finally adding the lifecycle method.
The
useEffect hook gives you a better way.
With
useEffect, you can handle lifecycle events directly inside function components. Namely, three of them:
componentDidMount,
componentDidUpdate, and
componentWillUnmount. All with one function! Crazy, I know. Let’s see an example.
import React, { useEffect, useState } from 'react'; import ReactDOM from 'react-dom'; function LifecycleDemo() { // It takes a function useEffect(() => { // This gets called after every render, by default // (the first one, and every one after that) console.log('render!'); // If you want to implement componentWillUnmount, // return a function from here, and React will call // it prior to unmounting. return () => console.log('unmounting...'); }) return "I'm a lifecycle demo"; } function App() { // Set up a piece of state, just so that we have // a way to trigger a re-render. const [random, setRandom] = useState(Math.random()); // Set up another piece of state to keep track of // whether the LifecycleDemo is shown or hidden const [mounted, setMounted] = useState(true); // This function will change the random number, // and trigger a re-render (in the console, // you'll see a "render!" from LifecycleDemo) const reRender = () => setRandom(Math.random()); // This function will unmount and re-mount the // LifecycleDemo, so you can see its cleanup function // being called. const toggle = () => setMounted(!mounted); return ( <> <button onClick={reRender}>Re-render</button> <button onClick={toggle}>Show/Hide LifecycleDemo</button> {mounted && <LifecycleDemo/>} </> ); } ReactDOM.render(<App/>, document.querySelector('#root'));
Try it out in CodeSandbox.
Click the Show/Hide button. Look at the console. It prints “unmounting” before it disappears, and “render!” when it reappears.
Now, try the Re-render button. With each click, it prints “render!” and it prints “umounting”. That seems weird…
Why is it “unmounting” with every render?
Well, the cleanup function you can (optionally) return from
useEffect isn’t only called when the component is unmounted. It’s called every time before that effect runs – to clean up from the last run. This is actually more powerful than the
componentWillUnmount lifecycle because it lets you run a side effect before and after every render, if you need to.
Not Quite Lifecycles
useEffect runs after every render (by default), and can optionally clean up for itself before it runs again.
Rather than thinking of
useEffect as one function doing the job of 3 separate lifecycles, it might be more helpful to think of it simply as a way to run side effects after render – including the potential cleanup you’d want to do before each one, and before unmounting.
Prevent useEffect From Running Every Render
If you want your effects to run less often, you can provide a second argument – an array of values. Think of them as the dependencies for that effect. If one of the dependencies has changed since the last time, the effect will run again. (It will also still run after the initial render)
const [value, setValue] = useState('initial'); useEffect(() => { // This effect uses the `value` variable, // so it "depends on" `value`. console.log(value); }, [value]) // pass `value` as a dependency
Another way to think of this array: it should contain every variable that the effect function uses from the surrounding scope. So if it uses a prop? That goes in the array. If it uses a piece of state? That goes in the array.
Only Run on Mount and Unmount
You can pass the special value of empty array
[] as a way of saying “only run on mount and unmount”. So if we changed our component above to call
useEffect like this:
useEffect(() => { console.log('mounted'); return () => console.log('unmounting...'); }, []) // <-- add this empty array here
Then it will print “mounted” after the initial render, remain silent throughout its life, and print “unmounting…” on its way out.
This comes with a big warning, though: passing the empty array is prone to bugs. It’s easy to forget to add an item to it if you add a dependency, and if you miss a dependency, then that value will be stale the next time
useEffect runs and it might cause some strange problems.
Focus On Mount
Sometimes you just want to do one tiny thing at mount time, and doing that one little thing requires rewriting a function as a class.
In this example, let’s look at how you can focus an input control upon first render, using
useEffect combined with the
useRef hook.
import React, { useEffect, useState, useRef } from "react"; import ReactDOM from "react-dom"; function App() { // Store a reference to the input's DOM node const inputRef = useRef(); // Store the input's value in state const [value, setValue] = useState(""); useEffect( () => { // This runs AFTER the first render, // so the ref is set by now. console.log("render"); // inputRef.current.focus(); }, // The effect "depends on" inputRef [inputRef] ); return ( <input ref={inputRef} value={value} onChange={e => setValue(e.target.value)} /> ); } ReactDOM.render(<App />, document.querySelector("#root"));
At the top, we’re creating an empty ref with
useRef. Passing it to the input’s
ref prop takes care of setting it up once the DOM is rendered. And, importantly, the value returned by
useRef will be stable between renders – it won’t change.
So, even though we’re passing
[inputRef] as the 2nd argument of
useEffect, it will effectively only run once, on initial mount. This is basically “componentDidMount” (except the timing of it, which we’ll talk about later).
To prove it, try out the example. Notice how it focuses (it’s a little buggy with the CodeSandbox editor, but try clicking the refresh button in the “browser” on the right). Then try typing in the box. Each character triggers a re-render, but if you look at the console, you’ll see that “render” is only printed once.
Fetch Data With useEffect
Let’s look at another common use case: fetching data and displaying it. In a class component, you’d put this code in the
componentDidMount method. To do it with hooks, we’ll pull in
useEffect. We’ll also need
useState to store the data.
It’s worth mentioning that when the data-fetching portion of React’s new Suspense feature is ready, that’ll be the preferred way to fetch data. Fetching from
useEffect has one big gotcha (which we’ll go over) and the Suspense API is going to be much easier to use.
Here’s a component that fetches posts from Reddit and displays them:
import React, { useEffect, useState } from "react"; import ReactDOM from "react-dom"; function Reddit() { // Initialize state to hold the posts const [posts, setPosts] = useState([]); // effect functions can't be async, so declare the // async function inside the effect, then call it useEffect(() => { async function fetchData() { // Call fetch as usual const res = await fetch( "" ); // Pull out the data as usual const json = await res.json(); // Save the posts into state // (look at the Network tab to see why the path is like this) setPosts(json.data.children.map(c => c.data)); } fetchData(); }); // <-- we didn't pass a value. what do you think will happen? // Render as usual return ( <ul> {posts.map(post => ( <li key={post.id}>{post.title}</li> ))} </ul> ); } ReactDOM.render( <Reddit />, document.querySelector("#root") );
You’ll notice that we aren’t passing the second argument to
useEffect here. This is bad. Don’t do this.
Passing no 2nd argument causes the
useEffect to run every render. Then, when it runs, it fetches the data and updates the state. Then, once the state is updated, the component re-renders, which triggers the
useEffect again. You can see the problem.
To fix this, we need to pass an array as the 2nd argument. What should be in the array?
Go ahead, think about it for a second.
…
…
The only variable that
useEffect depends on is
setPosts. Therefore we should pass the array
[setPosts] here. Because
setPosts is a setter returned by
useState, it won’t be recreated every render, and so the effect will only run once.
Fun fact: When you call
useState, the setter function it returns is only created once! It’ll be the exact same function instance every time the component renders, which is why it’s safe for an effect to depend on one. This fun fact is also true for the
dispatch function returned by
useReducer.
Re-fetch When Data Changes
Let’s expand on the example to cover another common problem: how to re-fetch data when something changes, like a user ID, or in this case, the name of the subreddit.
First we’ll change the
// 1. Destructure the `subreddit` from props: function Reddit({ subreddit }) { const [posts, setPosts] = useState([]); useEffect(() => { async function fetchData() { // 2. Use a template string to set the URL: const res = await fetch( `{subreddit}.json` ); const json = await res.json(); setPosts(json.data.children.map(c => c.data)); } fetchData(); // 3. Re-run this effect when `subreddit` changes: }, [subreddit, setPosts]); return ( <ul> {posts.map(post => ( <li key={post.id}>{post.title}</li> ))} </ul> ); } // 4. Pass "reactjs" as a prop: ReactDOM.render( <Reddit subreddit='reactjs' />, document.querySelector("#root") );
This is still hard-coded, but now we can customize it by wrapping the
function App() { // 2 pieces of state: one to hold the input value, // another to hold the current subreddit. const [inputValue, setValue] = useState("reactjs"); const [subreddit, setSubreddit] = useState(inputValue); // Update the subreddit when the user presses enter const handleSubmit = e => { e.preventDefault(); setSubreddit(inputValue); }; return ( <> <form onSubmit={handleSubmit}> <input value={inputValue} onChange={e => setValue(e.target.value)} /> </form> <Reddit subreddit={subreddit} /> </> ); } ReactDOM.render(<App />, document.querySelector("#root"));
Try the working example on CodeSandbox.
The app is keeping 2 pieces of state here – the current input value, and the current subreddit. Submitting the input “commits” the subreddit, which will cause
btw: Type carefully. There’s no error handling. If you type a subreddit that doesn’t exist, the app will blow up. Implementing error handling would be a great exercise though! ;)
We could’ve used just 1 piece of state here – to store the input, and send the same value down to
The
useState at the top might look a little odd, especially the second line:
const [inputValue, setValue] = useState("reactjs"); const [subreddit, setSubreddit] = useState(inputValue);
We’re passing an initial value of “reactjs” to the first piece of state, and that makes sense. That value will never change.
But what about that second line? What if the initial state changes? (and it will, when you type in the box)
Remember that
useState is stateful (read more about useState). It only uses the initial state once, the first time it renders. After that it’s ignored. So it’s safe to pass a transient value, like a prop that might change or some other variable.
A Hundred And One Uses
The
useEffect function is like the swiss army knife of hooks. It can be used for a ton of things, from setting up subscriptions to creating and cleaning up timers to changing the value of a ref.
One thing it’s not good for is making DOM changes that are visible to the user. The way the timing works, an effect function will only fire after the browser is done with layout and paint – too late, if you wanted to make a visual change.
For those cases, React provides the
useMutationEffect and
useLayoutEffect hooks, which work the same as
useEffect aside from when they are fired. Have a look at the docs for useEffect and particularly the section on the timing of effects if you have a need to make visible DOM changes.
This might seem like an extra complication. Another thing to worry about. It kinda is, unfortunately. The positive side effect of this (heh) is that since
useEffect runs after layout and paint, a slow effect won’t make the UI janky. The down side is that if you’re moving old code from lifecycles to hooks, you have to be a bit careful, since it means
useEffect is almost-but-not-quite equivalent to
componentDidUpdate in regards to timing.
Try Out useEffect
You can try
useEffect on your own in this hooks-enabled CodeSandbox. A few ideas…
- Render an input box and store its value with
useState. Then set the
document.titlein an effect. (like Dan’s demo from React Conf)
- Make a custom hook that fetches data from a URL
- Add a click handler to the document, and print a message every time the user clicks. (don’t forget to clean up the handler!)
If you’re in need of inspiration, here is Nik Graf’s Collection of React Hooks – currently at 88 and counting! Most of them are simple to implement on your own. (like
useOnMount, which I bet you could implement based on what you learned in this post!) | https://daveceddia.com/useeffect-hook-examples/ | CC-MAIN-2019-47 | refinedweb | 2,170 | 57.06 |
Hans,
Geronimo doesn't use JNDI names the way you're trying to use them.
The only thing we use an EJB's JNDI name for is for remote clients to
access. I'm assuming the lookup code you posted was from a web
component or something else running on the server side. You should
declare an ejb-ref in your J2EE deployment descriptor, either with an
ejb-link there or with a Geronimo deployment plan that also contains
an ejb-ref and points to a specific ejb. Then the lookup should use
the java:comp/env namespace.
So, for example: | http://mail-archives.apache.org/mod_mbox/geronimo-user/200511.mbox/raw/%3C74e15baa0511240749o78b94f17k78715dc7193b9984@mail.gmail.com%3E/ | CC-MAIN-2015-18 | refinedweb | 101 | 72.87 |
rad_attach_pid, rad_bind_pid - Attaches or binds a process
to a Resource Affinity Domain by process ID (libnuma
library)
#include <numa.h>
int rad_attach_pid(
pid_t pid,
radset_t radset,
ulong_t flags ); int rad_bind_pid(
pid_t pid,
radset_t radset,
ulong_t flags );
Identifies the process to be attached or bound to the
specified set of Resource Affinity Domains (RADs). Specifies
the RAD set to which the process will be attached or
bound. Specifies options (a bit mask) that affect the
attachment or binding operation. See DESCRIPTION for
details.
The rad_attach_pid() function attaches the process specified
by pid to the set of RADs specified by radset.
The rad_bind_pid() function binds the process specified by
pid to the set of RADs specified by radset.
While both functions assign a "home" RAD for the process,
an attach operation allows remote execution on other RADs
while a bind operation restricts execution to the "home"
RAD. For both functions, if the pid argument is NULL, the
call is self-directed. That is, the function behaves as
if pid identified the calling process.
The memory allocation policy for the process will be set
to MPOL_THREAD. The home RAD for the process will be
selected by the system scheduler from among the RADs
included in radset and will be based on current system
load balance and the flags argument. The overflow set
(mattr_radset) for the process will be set to radset. If
the process has multiple threads, then any of those
threads that have inherited the process's default memory
allocation policy will be attached or bound by using the
same new memory allocation policy as used for the process
that contains them.
The threads of the specified process will be scheduled on
one of the CPUs associated with the selected RAD, except
for threads that have been explicitly bound to some other
processor. The CPU will be selected by the scheduler from
among those CPUs associated with the selected RAD in the
process's partition. (This partition might not be the
same as the caller's partition if the caller has appropriate
privilege.) The selection will be determined by the
The following options are defined for the flags argument:
Any processes later forked by the specified process can be
assigned to any RAD on the system, and might not inherit
its parent's home RAD assignment; that is, the child processes
might not be assigned to the same home RAD as the
parent. This allows the system to assign a home RAD to the
child process depending on available resources.
Normally, child processes do inherit the assignments
and attributes of the parent process.
By default, processes that are later forked by the
process specified in a rad_attach_pid() or
rad_bind_pid() call inherit the RAD assignment of
their parent. The requested attachments or bindings
are mandatory. If this option is not set, the
system will consider the request to be a "hint" and
may take no action for the specified process or, if
applicable, any child processes that the specified
process contains. The process has small memory
requirements, so the system should favor (for the
home RAD) those RADs with light CPU loads, independent
of their available memory. The process has
large memory requirements, so the system should
favor (for the home RAD) those RADs with more
available memory, independent of their CPU loads.
Arrange for existing memory of the process to be
migrated to the new home RAD. If RAD_MIGRATE is
omitted, only newly allocated pages will be allocated
on the new home RAD. Existing pages will
migrate if or when they experience a high rate of
remote cache misses. Migration will occur only for
pages in memory objects that have inherited the
process's default memory allocation policy. Wait
for the requested memory migration to be completed.
Effectively, this specifies "migrate now!".
If the caller does not have partition administration privilege
and if pid is not in the caller's partition, or if
the radset argument contains RADs that are not in the
caller's partition, an error will be returned.
The value for the radset argument could be obtained from a
prior call to nloc() that assigned or migrated the process
to a RAD close or closer to a particular resource. When
obtained this way, radset will contain only the RADs in
the caller's partition at the time of the nloc() call. The
partition configuration could change between a call to
nloc() and a subsequent call to rad_attach_pid() or
rad_bind_pid(), resulting in an error. This error is not
likely to occur often, but a robust application should
handle it.
Success. Failure. In this case, the functions set errno
to indicate the error.
If either of these functions fail,
errno is set to one of the following values for the condition
specified: RAD_INSIST and RAD_MIGRATE were specified
and the specified process cannot be migrated for some reason.
For example, memory is wired (locked) on the process's
current RAD. The radset argument points to an
invalid address. One or more of the RADs in the radset
argument or options in the flags argument are invalid.
RAD_INSIST and RAD_MIGRATE were specified and the
specified process cannot be migrated because insufficient
memory exists on the specified RAD set. The real or
effective user ID of the caller does not match the real or
effective user ID of the specified process, or the caller
does not have appropriate privileges to assign processes
to RADs. The process specified by pid does not exist.
Functions: nloc(3), rad_detach_pid(3)
rad_attach_pid(3) | http://nixdoc.net/man-pages/Tru64/man3/rad_attach_pid.3.html | CC-MAIN-2020-10 | refinedweb | 915 | 59.64 |
CI/CD With Kubernetes and Helm
CI/CD With Kubernetes and Helm
DevOps, cloud, and microservices come together for a continuously iterable environment.
Join the DZone community and get the full member experience.Join For Free
In this blog, I will be discussing the implementation of CI/CD pipeline for microservices which are running as containers and being managed by Kubernetes and Helm charts
Note: Basic understanding of Docker, Kubernetes, Helm, and Jenkins is required. I will discuss the approach but will not go deep into its implementation. Please refer to the original documentation for a deeper understanding of these technologies.
Before we start designing the pipeline for Kubernetes- and Helm-based architecture, a few things we need to consider are
1. Helm chart code placement
2. Versioning strategy
3. Number of environments
4. Branching model
Helm Chart Code Placement
Helm chart code can be placed along with code or in a separate repo but we ended up keeping it along with the source code. Points which made us inclined to this decision were:
a. If a developer adds some variables which need to be updated in Kubernetes config map or secrets, this knowledge needs to be shared with the owner of the Helm chart repo. This will add make things complex and buggy.
b. CI scripts also need to be updated so they can interact with a different repo either by lambda functions or some other ways.
c. Microservices tends to grow into very large numbers and if we keep the Helm chart in a separate repo we would be requiring 2 times the original repos.
Versioning Strategy
Whenever a new build is deployed in preview or a review version has to be updated so it can reflect what new things it have. Apart from many options we have, like using a third-party service or custom scripts for this, we can also use a simple yet powerful feature of git, the describe command.
This command finds the most recent tag and returns it, adding a suffix on it with the number of commits done after the tag and the abbreviated object name of the most recent commit.
For example, if the tag is 1.0.1 and we have 10 commits after it,
git describe will return us 1.0.1–10-g1234.
This number can be used as a version.
Environments
We need multiple environments which can be used for development, testing, staging, production, etc., for shipping our code to production. While we can create as many environments as we need, they do make the development life cycle bit complex. Generally, three types of environments work in most cases, so unless there is a very specific need, we should stick with them.
a. Preview environment, where developers can quickly deploy and test their changes before raising a pull request for review.
b. Staging environment, a pre-production environment where the reviewer hosts the changes for final review with different stakeholders.
c. Production environment, as the name suggests, where our running build lives.
In Kubernetes, there is the concept of namespaces, which can give us isolated environments within the same cluster, similar to what VPC does in AWS. So instead of creating a different cluster for each environment, we can use the same cluster and separate them by namespaces (for production we can have a separate one).
Branching Model
The Vincent Gitit flow model has been widely used in industry, we would be listing out few details in considering this model.
Below is the diagram for flow which has been explained in detail:
Stage 1: Developer makes changes in the code and tests it locally and then pushes the changes to git which will trigger the Jenkins pipeline. The pipeline will build the new Docker image and tags it with the output of the
git describe command and pushes it to Docker registry. The same tag is updated in the Helm chart code and a new chart is pushed as well into the chart museum. In the end, the deployment is done by running the Helm update command against the new version of the chart by Jenkins in development namespace. Finally, the developer raises the pull request.
Stage 2: A reviewer reviews the code and accepts it once he is satisfied and merges into the release branch. The version number also gets updated by removing all the suffixes by incrementing it based on its major/minor/path release, a new Docker image is created with a new tag, and the Helm chart also gets updated with the same tag. Jenkins then runs the Helm update in a staging namespace so it can be verified by others.
Stage 3: If staging looks good, the last step would be to push the changes into production. The Jenkins pipeline won't create new image or chart in this case, it only uses the last Helm chart version and updates the chart in the production namespace.
Overall, CI/CD in general is a complex exercise if the goal is to make it fully automated. Though this article doesn’t cover a lot of details, the information shared can be useful while designing your pipeline. If you are doing this over AWS, spot instances can be used for the preview environment to make it cost-effective.
Published at DZone with permission of Gaurav Vashishth . See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/cicd-with-kubernetes-and-helm?fromrel=true | CC-MAIN-2020-16 | refinedweb | 916 | 60.75 |
Nuno Carvalho wrote: > I am trying to compile one file called xpriv.c which belongs to the >Radio Track install ! Unfortunally i get that: > >------------------------------------------------ >xpriv.c: In function `give_up_root`: >xpriv.c:30: `uid_t` undeclared (first use this function) >xpriv.c:30: (Each undeclared identifier is reported only once >xpriv.c:30: for each function it appears in.) >xpriv.c:30: parse error before `uid` >xpriv.c:33: `uid` undeclared (first use this function) > >----------------------------------------------- > > The code is: > >--------------------------- >#include <unistd.h> >#include <stdio.h> > >int give_up_root(void) >{ > /* get the real uid and give up root */ > uid_t uid; > int err; > > uid=getuid(); > err=seteuid(uid); > return (err); >} >----------------------------- > >What is going wrong ? > > uid_t structure isn`t already defined ? You also need: #include <sys/types.h> I am reporting this as a bug in the manpage of getuid and setuid against manpages-dev. -- -- To UNSUBSCRIBE, email to debian-user-request@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org | http://lists.debian.org/debian-user/1998/06/msg00417.html | CC-MAIN-2013-48 | refinedweb | 159 | 63.25 |
- You can use XML::Simple or any other XML generation tool you want. Just make sure you specify the SOAP::Data- type( xml )- value($myXML). Thom EdenMessage 1 of 4 , Aug 1, 2007View Source
- ... Something like this: $soap = SOAP::Data- name(subscr= SOAP::Data- value( SOAP::Data- name(tag1= 10)- attr({ n1:id = 1}),Message 2 of 4 , Aug 2, 2007View Source
> Sorry.... I'm new to SOAP, hacking my way through this... I can't rename it,Something like this:
> so how do I qualify it into a namespace?
$soap = SOAP::Data->name(subscr=>\SOAP::Data->value(
SOAP::Data->name(tag1=>10)->attr({'n1:id'=>1}),
SOAP::Data->name(tag2=>20)->attr({'n2:id'=>2})
)
)->attr({'xmlns:n1'=>"myns1",
'xmlns:n2'=>"myns2",});
Here the namespaces myns1 and myns2 are declared within the parent
element subscr with prefixes n1 and n2. Id attributes are then
qualified into these namespaces.
HTH
--
Radek
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/5989?o=1&xm=1&l=1 | CC-MAIN-2014-10 | refinedweb | 164 | 57.47 |
NAME
vm_map_wire, vm_map_unwire - manage page wiring within a virtual memory map
SYNOPSIS
#include <sys/param.h> #include <vm/vm.h> #include <vm/vm_map.h> int vm_map_wire(vm_map_t map, vm_offset_t start, vm_offset_t end, int flags); int vm_map_unwire(vm_map_t map, vm_offset_t start, vm_offset_t end, int flags);
DESCRIPTION
The vm_map_wire() function is responsible for wiring pages in the range between start and end within the map map. Wired pages are locked into physical memory, and may not be paged out as long as their wire count remains above zero. The vm_map_unwire() function performs the corresponding unwire operation. The flags argument is a bit mask, consisting of the following flags: If the VM_MAP_WIRE_USER flag is set, the function operates within user address space. If the VM_MAP_WIRE_HOLESOK flag is set, it may operate upon an arbitrary range within the address space of map. If a contiguous range is desired, callers should explicitly express their intent by specifying the VM_MAP_WIRE_NOHOLES flag.
IMPLEMENTATION NOTES
Both functions will attempt to acquire a lock on the map using vm_map_lock(9) and hold it for the duration of the call. If they detect MAP_ENTRY_IN_TRANSITION, they will call vm_map_unlock_and_wait(9) until the map becomes available again. The map could have changed during this window as it was held by another consumer, therefore consumers of this interface should check for this condition using the return values below.
RETURN VALUES
The vm_map_wire() and vm_map_unwire() functions have identical return values. The functions return KERN_SUCCESS if all pages within the range were [un]wired successfully. Otherwise, if the specified range was not valid, or if the map changed while the MAP_ENTRY_IN_TRANSITION flag was set, KERN_INVALID_ADDRESS is returned.
SEE ALSO
mlockall(2), munlockall(2), vm_map(9)
AUTHORS
This manual page was written by Bruce M Simpson 〈bms@spc.org〉. | http://manpages.ubuntu.com/manpages/intrepid/man9/vm_map_unwire.9freebsd.html | CC-MAIN-2015-11 | refinedweb | 293 | 62.38 |
I started thinking about doing a “Remedial .NET” series shortly after wrapping up our popular “Remedial XML” series. But then I realized that Builder.com already has loads of content covering most everything you need to know to get cooking with .NET. So rather than reinvent the wheel, I thought I should just point you to a selection of articles already available on the subject.
Getting the background info you need
The fact that the .NET framework is completely and truly object-oriented is exciting for some and intimidating for others. Getting a handle on how .NET works depends on understanding how object-oriented programming works on the .NET platform. “The .NET Common Programming Model (CPM)” will give you a helpful overview of how .NET supports object-oriented programming (OOP).
Of equal importance is working out what .NET assemblies are, how they work, and how they affect deployment. These pseudo libraries are the fundamental elements of component programming on the .NET platform, and they represent a healthy step up from how things worked in the old COM world. “What’s in a namespace?” provides a nice introduction to assemblies from a programming standpoint.
You’ll probably also have some technical questions about the nuts and bolts of .NET as an application platform itself. While there’s not much about it that’s truly new, it is definitely different from what most Windows developers have seen before. I recommend a doubleheader here:
- The programs you write using the .NET platform aren’t compiled directly to native machine code. There’s an intermediate step. Read “Check under the MSIL hood to see how the CLR is running” for more about how this Intermediate Language works, and why you should care.
- “.NET compilation demystified” will give you details on how Common Language Runtime works and how your application’s Intermediate Language code is actually executed.
Which language should you choose?
Quite a bit has been said about the range of language choices available for .NET programmers. Usually, this discussion boils down to, “Should I learn VB.NET or C#?” and there are lots of misconceptions about both languages. In reality, it’s a silly question, as the choice is more a matter of personal taste. Check out “Where will the Visual Basic 6.0 developers go?” and “The three-million-programmer question: VB.NET or C#?” for some straight information before you choose one over the other.
If you do decide you’d like to try your hand at C#, you might want to have a look at our C# crash course. No shame there; I think I’m starting to favor it too.
Actually, your language choices aren’t limited to just C# or VB.NET. A number of other options are available, as I explain in “A hitchhiker’s guide to alternate .NET languages.”
Some practical instruction
Turning to more practical matters, it’s almost a certainty that you won’t be developing all new .NET applications from the start. You’ll likely find yourself instead needing access to “legacy” COM functionality you created for a previous project. Luckily, Microsoft knew this would be an issue, and it built in a way to leverage existing COM applications. In fact, not only can you use COM components from a .NET application, as explained in “COM and .NET interoperability,” but you can call code in a .NET assembly from a COM application. “Using .NET assemblies with COM” provides details on that half of the picture.
One area that’s sure to baffle VB6 developers is error handling—or more appropriately, exception handling. You’ll need to get a handle on that (no pun intended) before you get very far in your new .NET life. To help you out there, I’ve got two articles to offer:
- “.NET exceptions for the exceptionally challenged”
- “Ten tips for handling .NET exceptions exceptionally”
Further individual study required
These Builder.com articles should give you the background you’ll need to continue in your .NET self-education. Stay tuned to Builder.com for more great .NET articles, and be sure to send us an e-mail if you'd like us to cover a particular area.
What was your favorite?
Do you have a favorite .NET article on Builder.com? If so, send the editors an e-mail and let them know which one it is. | http://www.techrepublic.com/article/get-up-and-running-with-a-host-of-net-articles/ | CC-MAIN-2017-30 | refinedweb | 728 | 68.67 |
This
In ASP.NET, applications are said to be running side by side when they are
installed on the same computer, but use different versions of the .NET
Framework.
Yes! VS.Net works with the .Net framework, while VS6.0 works with MFC or the
Windows API directly, for the most part. They can be installed and run on the
same machine without any considerations.
Manifest.Namespace is an abstract container providing context for the items it
holds and allows disambiguation of items having the same name (residing in
different namespaces. It can also be said as a context for identifiers. So
under a namespace you can have multiple assemblies.
DataSet is logical represantation of database so it can store as much as
database.
Deployment - It is the procedure to deploy Web Application on the Server. You
can deploy .Net application by simply Xcopy and create virtual directory for
the application on server.
Dataset is an in-memory representation of a database. Its stored no where but in
memory. Goes off when GC stats after a littl sleep.
When a class is not provided with full functionalitythen it is declared as
abstract.it doesn't support instance creation as well as it cannot be
overridable to child class.interface is a colection of methods only without
functionality.interface is 90% same as abstract class.
if you install biztalk server it provides Biztalk Project in the project types
like webproject, windows project, console project. We use rest of the products
of the Biztalk like adapters and all those thing and use them in .net.
Marshaling performs the necessary conversions in data formats between managed
and unmanaged code.CLR allows managed code to interoperate with unmanaged code
usining COM Marshaler(Component of CLR)..
.NET Compact Framework come with CLR which perform automatic garbage collector
to free the memory without using destector(perform garbage collector when is
declear) | http://www.megasolutions.net/qs/Net_2005_Interview_Questions.aspx | CC-MAIN-2015-14 | refinedweb | 315 | 59.19 |
In this tutorial, we will be creating a Hello World widget, displaying only Hello World.
Step 1
We create a python file called helloworld.py and open it with an editor.
Step 2: imports
We need to import wxPython to create the user interface of the widget, and ofcourse we will have to import the widgets module of Tribler:
import wx
import Tribler.Core.Widgets.widgets as widgets
Step 3: General information
We will now describe our widget in the module.
__name__ = "Hello World"
__author__ = "My Name"
__version__ = "0.1"
__description__ = "Displays Hello world"
width = 200
height = 200
We create a widget of 200x200 with title Hello World.
Step 4: create the widget class with user interface
class widget(widgets.tribler_widget):
def __init__(self, *args, **kw):
widgets.tribler_widget.__init__(self, *args, **kw)
self.helloWorld = wx.StaticText(self, -1, "Hello World", (0,0), (-1,-1))
First, we create a class called widget which extends the tribler_widget from the Tribler widget module. The __init__ function is standard and calls the base class init.
Lastly, we create a text with 'Hello World' at location (0,0) using wxPython.
That's it!
Start Tribler, click the "Add widget" button and go to the "insert widget" tab. Select your widget file and select "Debug Widget".
Test your widget, before inserting it into the repository!
You can also download the source from this page. | https://www.tribler.org/WidgetDeveloping/Tutorials/HelloWorld/ | CC-MAIN-2019-43 | refinedweb | 228 | 67.35 |
It says that i didn't change it from a comment
public class Generalizations {
public static void main(String[] args) {
//yuio
boolean isComplete=true;
int awesomeLevel=121;
int epicLevel= awesomeLevel*2;
System.out.println(epicLevel);
}
}
What are the instructions?
5.) instruction says this @konaesan "5.Uncomment the last line so that the console prints out the value of epicLevel."
Lesson Link here:
However @fenrirthenorselupus it would seem your code is correct just by my glancing at my code compared to your code. I don't see no comment in your last line so as long as you passed each step individually you should be fine... the only thing I can think of is maybe get rid of the
at the top of your code as maybe thats playing with the lesson sensors?
In fact I just pasted your whole code into my lesson after resetting it and it passed so maybe refresh browser? or reset lesson and copy and paste it in? something of that nature | https://discuss.codecademy.com/t/please-help-me-with-1-12-step-5-please-debug-my-code/75271 | CC-MAIN-2017-43 | refinedweb | 166 | 72.56 |
See also: IRC log
<trackbot> Tracking ISSUEs and ACTIONs from
<paulc> test
<cferris> pong
<paulc> Chris: Your new agenda at
<paulc> does not mention DavidO's email for Action 28 ()
chris, tony nadalin is with me on the chat
<cferris> scribe: maryann
scribe + nadalin
<asir> Scribe: Maryann Hondo
<asir> ScribeNick: maryann
<paulc> Minutes for Aug 16 and Aug 23 are both approved unanimously.
<paulc> Discount room rate is available until Sept 1.
asir is providing update on editors action items
<paulc> All editorial work except 3605 is done.
<paulc> fsasaki(fsasaki@128.30.52.28)] tracker for the editors is ready now, except trackbot, that will be ready tomorrow. See
editors request an update on tracker
<asir> We completed all the editorial actions from last week conference call except issue 3605
<asir> We are still waiting for the terminology extraction XSLT (DaveO is on point
<asir> We are planning to provide a drop (editors' drafts) for the F2F meeting. Our target is Tuesday Sept 5 at Noon
<asir> We spend a lot of time tracking our actions. We'll appreciate an ETA for our tracker tool
asir: there is a link to general tracker but not specific tracker
<fsasaki> I'll make the HP update
asir: 3605 is in progress
toufic: was working on 3605, there were 2 corrections
... only one has been updated
... will complete today
paul: what is eta for next editors draft
<paulc> ETA for editor's draft is Tue Sep 5 for F2F meeting
asir: targetted for 5th
chris: logistics from f2f
dave: working on logistics update
chris: new date?
paul: by the f2f
... new date for update sept 12th
chris: action 48 update?
... action 59 no update
... there is another action about new members which has been fixed
... paul push to next week to complete
... item 63 this belongs on the editors list
Resolution: close 63
chris: action 67
glen: is the issue done?
paul: this is to update the issue
glen: yes
RESOLUTION: glen said its done, to send update to paul
paul: close this action with a pointer to the item
<GlenD>
chris: done with action items
chris: opportunitiy to bring up new issues
... so that when we reach the F2f we can resolve them
<scribe> topic : issue 3590
daveO: 3590 as an editor, added attribute extensibility
... did a rewrite in bugzilla for all extensibility places
... numbered the extension points 1 to 8
... children are treated as assertions
... 6,7,8 more interesting, started looking at policy reference
... looked at some of the other inclusion mechanisms
... all allow element extensibility and listed types
... we can''t predict, but there are cases where others have chosen to have element extensibility
... add element extensibility to policy reference
... normalization rules would need to be updated if we did attribute extensibility
... not sure what the treatment would be
<bijan> +1
daveO: unless we can come up with a reasonable solution i propose no change on all and exactly one
<bijan> I think think exactlyone and all shouldn't be touched
daveO: propose text be made consistent with the schema
<Ashok> i agree with Bijan ... what are semantics of allowing extensibility on them
daveO: number 4 is the significant item
... slight change to the notation section
chris: questions on dave's issue?
... hopefully we can address this in time for the f2f
paul: or next week
... trying to get an idea of what to put on next week's agenda
daveO: there is a firm proposal
... add eleement extensibility
... asir may have some pushback
asir: agree this is ready for next week
... agree with 1,2, 3
... some pushback on 4
daveO: people use other namespace....
chris: can we respond on the list?
daveO: ok we'll keep emailing
<paulc> David's proposal is in
paul: proposal is in comment 2 in bugzilla
<asir> Response from Asir is at
daveO: its also in the email archive
<cferris>
<paulc> Update proposal for 3590 is in
ashok: if we agree people want to attach policies to components then we have to specify how wsdl 1.1 is referred
to
... this is in the email sent
<cferris>
ashok: do we agree that we refer to wsdl 1.1 components using external attachment?
chris: does everyone understand the issue?
<asir> I am not aware of any discussion thread
paul: is there a email thread?
ashok: yes
umit: why are we discussing the issue?
<asir> Response from Asir is at
umit: do we need a proposal? maybe ashok should publish it again so people can review
... with proposed changes
chris: ashok, reply?
ashok: does the WG want this written up?
daveO: i'd like to see something that shows capabilities that can't be expressed with what we have now
ashok: there is no way to refer to WSDL 1.1 definition
daveO: not sure why you attach policy to a definition
ashok: component?
daveO: the root element?
... the definition in not a policy subject
asir: 2 points,
... what is the interop issue?
... the uri and fragment identifiers, the domain expression is an XML element
<umit> I was not aware that we are limiting our selves to interop only. WS-Policy is a general framework.
chris: lets take this to the list
... ashok, we need a more concrete proposal
pauL; action for F2F
<scribe> ACTION: ashok to present proposal at F2F [recorded in]
<trackbot> Created ACTION-75 - Present proposal at F2F [on Ashok Malhotra - due 2006-09-06].
chris: action 3602
dan: an assertion whose type is part of vocabulary.........(reading from the bug)
<cferris>
dan: one interpretation is that I'm explicitly prohibited
... outlined an example where that breaks down
... there could be legitimate cases that can be prohibitied by the framework
... we proposed to remove the text
chris: any questions?
<cferris>
chris: item 3604
bijan: haven't heard from any one else on list
... can re-iterate but doesn't like the section
chris: proposal is to nuke goals section?
bijan: yes
<asir> +1 to Bijan
chris: yakov??
... anyone object?
<scribe> ACTION: editors to remove [recorded in]
<asir> ACTION: Editors to implement the resolution for issue 3604 [recorded in]
<trackbot> Created ACTION-76 - Remove [on Editors - due 2006-09-06].
<trackbot> Created ACTION-77 - Implement the resolution for issue 3604 [on Editors - due 2006-09-06].
paul: trying to find the thread in archive
chris: bijan is there a thread?
asir: there is email in the archive
<asir> it is at
RESOLUTION: resolved 3604 closed as proposed
<bijan> Head of the thread:
chris: frederick?
paul: frederick is on holiday
<paulc>
<scribe> ACTION: chris to ping frederick about proposal [recorded in]
<trackbot> Sorry, couldn't find user - chris
oops
<bijan> maryann: updated bugzilla with ref to mail to WSA group
<bijan> maryann: One response to the WSA list, but unsure how to followup and coordinate
<bijan> cferris: do we take it to the CG?
<GlenD> Marc's mail:
<bijan> <some chatter to try to find the message; glend saves the day>
<bijan> <bugzilla woes>
paul: maryann will have to do the change again
<scribe> ACTION: maryann update bug again [recorded in]
<trackbot> Created ACTION-78 - Update bug again [on Maryann Hondo - due 2006-09-06].
asir: why does this have to go to the coordination group?
<scribe> ACTION: paul & chris to bring up to CG [recorded in]
<trackbot> Created ACTION-79 - & chris to bring up to CG [on Paul Cotton - due 2006-09-06].
ACTION 6= paul & chris to bring up 3619
<cferris> ACTION 6= paul & chris to bring up 3619 on next wek's WSCG call
glen: we don't explain how to put policy into EPRs
chris: is this for this group?
glen: yes
chris: glen can you propose language?
glen: sure
... we should have discussion first
<Ashok> I agree it shd be in the spec
paul: this is out of scope
... this is V next
glen: i don't get that from the charter
paul: there is an explicit list in the charter
daveO: notwithstanding the charter discussion, the risk of not doing this is that others may do it in a different
way
... another can put it somewhere else, and then you don't have interoperability
... might be able to do something simple
... and might prevent potential interop issues
<GlenD> I do believe a solution to this can actually be pretty simple (esp. in that EPRs :: Endpoint Subjects), and agree with Dave's interop concerns.
daveO: since addressing got done before us it seems we should do somthing here
ashok: i remember that ws-addressing spec lets you attach policies to an epr
... how can they speak about it and that's out of scope for us?
chris: not agreement on out of scope
... we take this discussion to list and discuss in time for F2F
... might involve some technical work and we don't want to wait
umit: in favor of daveO
... lets move on and we need to figure out if metatdata is about epr or target and the use cases need to be clearly explained
asir: interesting work, but think its a major piece of work and is out of scope in the charter
<danroth>
asir: metadata exchange is the correct technical solution
<GlenD> Even if MEX was in-scope for us, I don't think it actually does a good enough job of specing this either.
<GlenD> But that's another discussion :)
daveO: where metadata (policy is one type) is defined, there are several places
... 3 types or places where specification of metadata is done, pros and cons in each
... wsdl- can't go back, schema- can't go back
chris: varying opinions on this issue
<asir> charter says, 'If some function, mechanism or feature is not mentioned here, and it is not mentioned in the Scope of Work section either, then it will be deemed to be out of scope.
chris: suggest we take this to the list
paul: is there a thread?
glen: no thread, bug in bugzilla
... i will start a thread
paul: point thread back to bugzilla
chris: thread on in scope
<scribe> ACTION: paul to start in scope for 3620 [recorded in]
<trackbot> Created ACTION-80 - Start in scope for 3620 [on Paul Cotton - due 2006-09-06].
<scribe> ACTION: glen to provide proposal for 3620 by next call [recorded in]
<trackbot> Created ACTION-81 - Provide proposal for 3620 by next call [on Glen Daniels - due 2006-09-06].
<cferris>
bijan: formal semantics would help resolve some of the issues we are discussing
chris: questions?
... is there a thread?
bijan: will take it to the list
paul: it is an action
chris: dig up thread?
<paulc>
<bijan> Thanks paulc
<cferris>
bijan: extensions to wsdl
paul: took action on this
... this is on the agenda of the next call
... i'm going to ask that group to review our working draft
bijan: i thought we were referencing their spec
paul: not in the charter
bijan: thought it was an oversight
paul: what do we need to do?
bijan: attachment allows for inline and out of band for wsdl
chris: we will discuss this at the CG group
paul: we can attach policies to wsdl, they can attach to wsdl, why does this require coordination?
bijan: there is overlap
paul: many flowers bloom
... its not in the liasson section of the charter
<scribe> ACTION: chris and paul to track & report on WG CG for 3623 [recorded in]
<trackbot> Sorry, couldn't find user - chris
paul: 3639
ashok: we'd like to figure out which of the possible alternatives is being followed
<paulc>
ashok: no way to do this, have to check all possible alternative
... should be able to tell from a message
<paulc> and
ashok: we need to work on this
<paulc> 1. An algorithm to select a single alternative if more than one alternative in the two policies matches
paul: both bug and the mail include 3 questions
<paulc> 2. A mechanism to indicate the selected alternative
<paulc> 3. An ability for the message to indicate the policy alternative it is following
<paulc> 3 is the same as 2
paul: are 2 & 3 different?
<paulc> Just 2 questions.
paul: need clarification?
... there is not a concrete proposal
dan: when you says it seems wrong?
ashok: its extra work
dan: do you wan the optimization?
ashok: yes
... its fundamental to how we use policy
bijan: when we say alternative, do you mean the normal form?
ashok: yes
bijan: isn't only one alternative allowed?
ashok: taking two policies and matching to find one complete alternative in common
... you can have one or more match
chris: might have A & B as provider, consumer has A
... consumer needs to indicate A selected
... no formal proposal
... ashok, can you produce a formal proposal?
<bijan> I wonder if just sending a policy that contains the selected alternative would do the job
ashok: is this an item we will pursue?
chris: is there email?
paul: yes
chris: respond to thread
... look for consensus next week
ashok: want a policy assertion that adds something to message
... we then want the message logged, and you want timestamp then log
... no way to order in the framework, although WS-Security policy does this for signing/encryption
... some mechanism is often useful
asir: security policy does not order assertion
... it has assertions that order the runtime behavior
daveO: lots of comments.....
... toward the end of the meeting, i've been trying to address one of the editor items and i'm confused about what is to be done
chris: we'll add to the agenda
... ashok?
ashok: didn't understand the point about runtime
... many of the assertions are applied at runtime, asir was trying to make a fine distinction that eludes me
<bijan> I think asir and I were making a similar point; asir on the call, me in the email thread
chris: "l" and "m" are probably major issues, encourage getting this on the list
paul: it would help, people are referring to another spec.
<Ashok> bijan, I guess I didn't understand either of you ...
paul: ashok should show where this inference is
daveO: working on this
... making progress, hopefully have it for the f2f
... if can't get a resolution, will manually insert
chris: back to 3604
paul: bijan suggested replacing the entire section
daveO: except that there's a term defined there
bijan: i don't recall this
daveO: doc has changed since you first wrote this up
... we can remove this section and do the termdef later
... but that means more work
chris: open a new issue
<bijan> I'm looking for the definition
<scribe> ACTION: editors to attempt to make doc consistent and need to deal with "cascading delete" [recorded in]
<bijan> Oh, I see, in the editors draft
<trackbot> Created ACTION-82 - Attempt to make doc consistent and need to deal with \"cascading delete\" [on Editors - due 2006-09-06].
<bijan> Isn't it also defined in section 4?
chris: out of scope issues
<whenry> YEs, closed
paul: bug is closed
<whenry>
RESOL
<whenry> Woops, sorry
RESOLUTION: 3592 closed with no action\
<cferris>
<cferris> ashok's email from this morning
ashok: want one paragraph to say something like domain expressions can be used to refer to wsdl 1.1 components,
whatever and followup with one non-trivial example
... there seems to be a lot of pushback on this
paul: is there missing text?
ashok: I thought editors could wordsmith
paul: message 164
ashok: just making spec clearer for reader
asir: i had an issue to raise issues on mailing list
... but i don't see answers
paul: you two are talking past each other
ashok: what is the problem with putting in this paragraph?
paul: you are not participating in the dialog
chris: we need to have continued discussion and ask that ashok address the quetions asir raised and asir address ashok's question?
ashok: its a small thing
paul: the chairs are trying to get both sides to answer the questions
dan: i have a question, i'm confused
... we had a section with a domain expression and we removed it
... why are we going back?
ashok: it would be nice to add a quasi-real example
umit: this came from the eprs
... what are other possible examples? that are illlustrative?
... question to asir, what is your concern?
... i'm not following your question
... jms is domain specific, and it can be a policy subject, so is the objection to the jms domain?
... that was the spirit of the example
chris: it could be that the example is contentious not because of the policy example
... use an example other than jms
ashok: that isn't part of the example
chris: asir's email is asking about jms
... it could be that no one wants a jms example in there, might we come up with a less controversial example
ashok: that's not what we're asking for
chris: the idea is to engage in a dialog to address the questions in the previous mail
<GlenD> /me I think it's
chris: semantics of specific interaction
<cferris>
glen: spec states that the semantics of intersection can be determined by extension specific extensions
... if your processor does not understand, you can't know if the intersection will work and this is a problem for generic tooling
... proposed a number of solutions
<bijan> +1 to option 2
glen: there has been some discussion on list
dan: clarification, interop issues, because of lack of metadata, can you elaborate?
chris: please do this in email
umit: this could eliminate a custom response
glen: yes this would be at a qname level it could affect other things
umit: no opt out in #2
<cferris>
umit: was sick last week, not much progress
... its about chosing alternatives
... we don't have a deadline to produce this, and won't be able to deliver it quickly because going on vacation
paul: worded how?
umit: will write this as part of guidelines for handling optionality
... identify the things to watch out for, and that's the minimum
<scribe> ACTION: umit to draft proposal for primer to address 3577 [recorded in]
<trackbot> Created ACTION-83 - Draft proposal for primer to address 3577 [on Umit Yalcinalp - due 2006-09-06].
paul: need to change due date
<bijan> Hmm. I just noticed we skipped one of my new issues :)
<bijan> J
<bijan> j)
<cferris> which one? I thought we did both
<bijan> there were 3 :()
<bijan> ':)
<cferris> lol
<bijan> j) Policy assertion equivalence and generality, Bijan Parsia
<bijan>
<cferris> sorry
<bijan> S'ok
<bijan> Three ina row is too much :) | http://www.w3.org/2006/08/30-ws-policy-minutes.html | CC-MAIN-2015-18 | refinedweb | 3,097 | 69.72 |
内存:64 时间:1
题目描述
Smith Numbers
While skimming his phone directory in 1982, mathematician Albert Wilansky noticed that the telephone number of his brother-in-law H. Smith had the following peculiar property: The sum of the digits of that number was equal to the sum of the digits of the prime factors of that number. Got it? Smith’s telephone number was 493-7775. This number can be written as the product of its prime factors in the following way:
4937775 = 3 . 5 . 5 . 65837
The sum of all digits of the telephone number is 4 + 9 + 3 + 7 + 7 + 7 + 5 = 42, and the sum of the digits of its prime factors is equally 3 + 5 + 5 + 6 + 5 + 8 + 3 + 7 = 42. Wilansky named this type of number after his brother-in-law: the Smith numbers.
As this property is true for every prime number, Wilansky excluded them from the definition. Other Smith numbers include 6,036 and 9,985.
Wilansky was not able to find a Smith number which was larger than the telephone number of his brother-in-law. Can you help him out?
输入
The input consists of several test cases, the number of which you are given in the first line of the input. Each test case consists of one line containing a single positive integer smaller than 109.
输出
For every input value n, compute the smallest Smith number which is larger than n and print it on a single line. You can assume that such a number exists.
样例输入
1
4937774
样例输出
4937775
提示
代码如下
#include <iostream> #include <set> #include <cmath> using namespace std; int work(int n) { int c=0,t; for(int i=2; i*i<=n; ++i) { while(n%i==0) { n/=i; t=i; while(t) { c+=t%10; t/=10; } } } if(n>1&&c) { while(n) { c+=n%10; n/=10; } } return c; } int main() { int T; int n; cin>>T; while(T--) { cin>>n; for(int i=n+1;; ++i) { int t=i; int c=0; while(t) { c+=t%10; t/=10; } t=work(i); if(t==c) { cout<<i<<endl; break; } } } return 0; }
代码来源于互联网,仅供参考!
评论
评论功能已经关闭! | https://www.imlhx.com/posts/745.html | CC-MAIN-2019-22 | refinedweb | 365 | 75.84 |
README
bytes-to-co2bytes-to-co2
Calculate the co2 footprint (carbon dioxide released to the atmosphere)
of transmitting an
x amount of bytes over the internet.
Transport data over the internet requires energy (datacenters, repeaters, switches, etc) and how this energy is different from country to country. This is known as the "co2 emission intensity". Each country have different ways to produce electricity (solar, wind, coal, diesel, nuclear, etc) and each one of these releases to the atmosphere a different amount of Carbon Dioxide (co2), therefore transmit x amount of data will release a y amount of co2.
This module gets the information from Electricity Maps through the co2-data library. I downloaded the results every hour for 1 day and averaged them by country.
UsageUsage
Install the package using
yarn add bytes-to-co2
or
npm install bytes-to-co2
Import the library and call the function as shown:
import { bytesToCo2 } from "bytes-to-co2"; const uk = bytesToCo2({ byteSize: 1000000, country: 'GB' }); // 0.35021555843286817 const sweden = bytesToCo2({ byteSize: 1000000, country: 'SE' }); // 0.06411629304105701 const spain = bytesToCo2({ byteSize: 1000000, country: 'ES' }); // 0.4461472854018211 const world = bytesToCo2({ byteSize: 1000000, country: 'ZZ' }); // 0.539680558728753
ContributingContributing
If anything in the way I'm calculating the footprint looks odd to you, please feel free to open an issue or PR. Any feedback or improvements in the way the co2 is calculated is welcomed.
Special thanksSpecial thanks
- Wholegrain Digital: They were very helpful explaining to me how the carbon calculation is made.
- Electricity Map: They were very kind in allowing me to use their data for this library. Without them I would still be using very outdated data. | https://www.skypack.dev/view/bytes-to-co2 | CC-MAIN-2021-49 | refinedweb | 273 | 54.12 |
This lab provides practice scenarios to help prepare you for the Certified Kubernetes Administrator (CKA) Exam. You will be presented with tasks to complete, as well as server(s) and/or an existing Kubernetes cluster to complete them in. You will need to use your knowledge of Kubernetes to successfully complete the provided tasks, much like you would on the real CKA exam. Good luck!
Learning Objectives
Successfully complete this lab by achieving the following learning objectives:
- Drain Worker Node 1
- Drain the
acgk8s-worker1node.
Note: Note that you may run into issues with this process. If you do, use the appropriate command-line flags to force the drain process to proceed anyway.
- Create a Pod That Will Only Be Scheduled on Nodes with a Specific Label
- Add the
disk=fastlabel to the
acgk8s-worker2Node.
- Create a pod called
fast-nginxin the
devnamespace that will only run on nodes with this label. Use the
nginximage for this pod. | https://acloudguru.com/hands-on-labs/certified-kubernetes-administrator-cka-practice-exam-part-6 | CC-MAIN-2021-43 | refinedweb | 158 | 54.42 |
This Tutorial on Copying and Cloning of Arrays Discusses the Various Methods to Copy an Array in Java:
Here we will discuss the copy operation of Java arrays. Java provides various ways in which you can make copies of array elements. As we know, in Java, arrays can contain elements either of primitive types or objects or references.
While making copies of primitive types, the task is rather easy but when it comes to objects or references, you need to pay attention as to whether the copy is deep or shallow.
=> Take A Look At The Java Beginners Guide Here.
Shallow copy makes a copy of the element. It is not a problem when primitive data types are involved. But when references are involved, a shallow copy will just copy the value and not the underlying information.
Thus, even though you have made copies of elements, a change in one copy will reflect in other copy too as the memory locations are shared. To prevent this, you need to go for a deep copy in which the memory locations are not shared.
What You Will Learn:
Copy And Clone Java Arrays
Java allows you to copy arrays using either direct copy method provided by java.util or System class. It also provides a clone method that is used to clone an entire array.
In this tutorial, we will discuss the following methods of Copying and Cloning Arrays.
- Manual copying using for loop
- Using System.arraycopy()
- Using Arrays.copyOf()
- Using Arrays.copyOfRange()
- Using Object.clone()
Let’s Explore!!
Manual Copying Using For Loop
Normally when we copy variables, for example, a and b, we perform the copy operation as follows:
a=b;
It is not going to work correctly if we apply the same method to arrays.
Let’s see a programming example.
public class Main { public static void main(String[] args) { int intArray[] = {12,15,17}; //print original intArray System.out.println("Contents of intArray[] before assignment:"); for (int i=0; i<intArray.length; i++) System.out.print(intArray[i] + " "); // Create an array b[] of same size as a[] int copyArray[] = new int[intArray.length]; // intArray is assigned to copyArray; so references point to same location copyArray = intArray; // change element of copyArray copyArray[1]++; //print both arrays System.out.println("\nContents of intArray[]:"); for (int i=0; i<intArray.length; i++) System.out.print(intArray[i] + " "); System.out.println("\nContents of copyArray[]:"); for (int i=0; i<copyArray.length; i++) System.out.print(copyArray[i] + " "); } }
Output:
In the above program, there are two arrays i.e. intArray and copyArray. The task is to copy the contents of the intArray to copyArray. To do this, the statement copyArray = intArray is introduced. What is done here is the references of the array are assigned. Hence this is not actual copying.
As a result of the above statement, the memory location of the intArray is shared by the copyArray as well. Now when the copyArray element is incremented, that change is reflected in the intArray too. This is shown in the output.
To overcome this problem, we employ a method of copying the array using for loop. Here, each element of the original array is copied to the new array using a for loop.
This program is shown below.
public class Main { public static void main(String[] args) { int intArray[] = {12,15, 17}; // define an array copyArray to copy contents of intArray int copyArray[] = new int[intArray.length]; // copy contents of intArray to copyArray for (int i=0; i<intArray.length; i++) copyArray[i] = intArray[i]; // update element of copyArray copyArray[0]++; //print both arrays System.out.println("intArray[] elements:"); for (int i=0; i<intArray.length; i++) System.out.print(intArray[i] + " "); System.out.println("\n\ncopyArray[] elements:"); for (int i=0; i<copyArray.length; i++) System.out.print(copyArray[i] + " "); } }
Output:
Here we have modified the previous program to include for loop and inside for loop, we assign each element of intArray to the corresponding element of copyArray. This way, the elements are actually copied. So when one array is modified, the changes do not reflect in another array.
Using System.arraycopy()
Java’s System class has a method called “ArrayCOpy” that allows you to copy elements of one array to another array.
The general prototype of this method is as follows:
public static void arraycopy( Object src_array, int src_Pos,Object dest_array, int dest_Pos, int length )
Here,
- src_array => Source array from where the contents are to be copied.
- src_Pos => The position in the source array from where copying will start.
- dest_array => Destination array to which elements are to be copied.
- dest_Pos => Starting position in the destination array for the elements to be copied.
- length => Length of the array to be copied.
Let’s understand this method with an example.
class Main { public static void main(String[] args) { //declaring a source array char[] src_array = { 'S','o','f','t','w','a','r','e','T','e','s','t','i','n','g','H','e','l','p'}; char[] dest_array = new char[19]; System.arraycopy(src_array, 0, dest_array, 0,19); System.out.println("Source array:" + String.valueOf(src_array)); System.out.println("Destination array after arraycopy:"+ String.valueOf(dest_array)); } }
Output:
In the above program, we use the ‘arraycopy’ method to copy an array to another array. You can see the call to the arraycopy method. We copy the source array from the beginning (0th location) and copy the entire array.
Lastly, we display the string equivalent of both the source as well as destination arrays.
With the arraycopy method, you can copy even partial arrays as it takes the start and end element positions as arguments. This method makes a shallow copy of array elements.
Using Arrays.copyOf()
The method Arrays.copyOf() internally makes use of the System.arraycopy () method. Though it is not as efficient as arraycopy, it can be used to copy full or partial array just like the arraycopy method.
‘copyOf()’ method is a part of the java.util package and belongs to the “Arrays” class.
The prototype of this method is as follows:
public static int[] copyOf(int[] original_array,int newLength)
Where,
- original: The array to be copied to the new array.
- newLength: The length of the copied array to be returned.
Thus, this method makes a copy of the array provided in the first argument to the specified length by truncating or padding the length with 0 to the new array. This means if the length of the copied array is more than the original array, 0s are replaced for the remaining elements.
The program given below shows an example of the copyOf method.
importjava.util.Arrays; public class Main { public static void main(String args[]) { // define original array int[] even_Array = new int[] {2,4,6,8}; System.out.println("Original Array:" + Arrays.toString(even_Array)); // copying array even_Array to copy_Array int[] copy_Array = Arrays.copyOf(even_Array,5); System.out.println("Copied Array:" + Arrays.toString(copy_Array)); // assign value to unassigned element of copied array copy_Array[4] = 10; System.out.println("Copied and modified Array:" + Arrays.toString(copy_Array)); } }
Output:
In the above program, we copy the even_Array of length 4 by using the copyOf method. The second argument provided is 5. Hence, the new copied array has 5 elements in it. The first four are the same as the original array and the fifth element is 0 as copyOf pads it because the length of the original array is less than that of the new array.
Using Arrays.copyOfRange()
The method Arrays.copyOfRange() is specifically used when you want to copy partial arrays. Like copyOf() method, this method also internally makes use of System.arraycopy() method.
The prototype of Arrays.copyOfRange() method is as follows:
public static short[] copyOfRange(short[] original, int from, int to)
where,
- original: The array from which a range is to be copied.
- from: Initial index of the range to be copied, inclusive.
- to: The final index of the range to be copied, exclusive.
An example implementation of the copyOfRange method is shown below.
import java.util.Arrays; class Main { public static void main(String args[]) { int intArray[] = { 10,20,30,40,50 }; // to index is within the range int[] copyArray = Arrays.copyOfRange(intArray, 2, 6); System.out.println("Array copy with both index within the range: " + Arrays.toString(copyArray)); //to index is out of range int[] copy1 = Arrays.copyOfRange(intArray, 4, intArray.length + 3); System.out.println("Array copy with to index out of range: " + Arrays.toString(copy1)); } }
Output:
Using Object.clone()
Java array internally implements a Cloneable interface and thus it is easy to clone a Java array. You can clone one-dimensional as well as two-dimensional arrays. When you clone one-dimensional array, it makes a deep copy of array elements which is copying the values.
On the other hand, when you clone two dimensional or multi-dimensional arrays, a shallow copy of elements is made i.e. only references are copied. This cloning of arrays is done by the ‘Clone ()’ method provided by the arrays.
A deep copy of 1-D arrays as a result of cloning can be represented as below:
Now let’s implement the 1-D array cloning in a Java program.
class Main { public static void main(String args[]) { int num_Array[] = {5,10,15,20,25,30}; int clone_Array[] = num_Array.clone(); System.out.println("Original num_Array:"); for (int i = 0; i <num_Array.length; i++) { System.out.print(num_Array[i]+" "); } System.out.println(); System.out.println("Cloned num_Array:"); for (int i = 0; i <clone_Array.length; i++) { System.out.print(clone_Array[i]+" "); } System.out.println("\n"); System.out.print("num_Array == clone_Array = "); System.out.println(num_Array == clone_Array); } }
Output:
As you can see in the output, the expression to check for equality of both the arrays returns false. This is because the cloning of one-dimensional array results in deep copy wherein the values are copied to a new array and not merely references.
Frequently Asked Questions
Q #1) How to make a copy of an array in Java?.
Q #2) How do you assign one array to another?
Answer: You can assign the array to another using a simple assignment operator (=). You have to ensure that the two arrays are of the same data type and have an identical dimension.
Q #3) What is a Shallow copy and Deep copy?
Answer: In shallow copy, only the attributes of objects or arrays in question are copied. So any changes to the copied array will reflect in the original. Java cloning is an example of a shallow copy.
A deep copy is the one wherein we need a complete copy of the object so that when we clone or copy that object, it is an independent copy. When primitive or built-in types are involved, there is no difference between the shallow and deep copy.
Q #4) What does an Array Clone do?
Answer: The cloning method of arrays is used to copy the attributes of one object to another. It uses a shallow copy for doing this.
Q #5) Can you store an Array in an Array?
Answer: Arrays can contain arrays provided with the elements that are of the same type (primitive or object). This means you cannot store an integer array in a string array.
Conclusion
In this tutorial, we explored copy array and clone arrays in Java. We have seen various methods/approaches to copy and clone an array.
Note that most of these methods implement a shallow copy. For primitive data types, shallow and deep copy does not differ. But when an array contains objects or references, the programmer needs to implement a deep copy as per the requirements.
In our subsequent tutorials, we continue to explore more about Java arrays.
=> Read Through The Easy Java Training Series. | https://www.softwaretestinghelp.com/java-copy-array/ | CC-MAIN-2021-17 | refinedweb | 1,955 | 59.09 |
Hi, I am trying to make a simple program which is meant to resemble a hospital patient database.
My aim is that when the user types in a patients name, I want it to type a couple of messages using methods which I have done successfully, for example "searching database". I want the program to then search an array for the patients name, if found, print the patients details/medical history onto the screen.
Here is a sample of what I have got so far, I have been watching tutorials off youtube for guidance:
Code :
public class HospitalOne { private String patient; public void setPatient(String name) { patient = name; } public String getPatient() { return patient; } public void details() { System.out.printf("Patient: %s", getPatient()); } public void searching() { System.out.println(""); System.out.println("Searching Database..."); } public String[][] patName[100][10]; patName[0][0] = "Name: Stephen Myhill"; patName[0][1] = "Location: Chesterfield"; patName[0][2] = "D.O.B: 31/2/80"; patName[0][3] = "Medical History:"; }
Code :
import java.util.Scanner; class MainProgram { public static void main(String[] args) { Scanner input = new Scanner(System.in); HospitalOne col = new HospitalOne(); System.out.println("Enter the name of the patient: "); String tempP = input.nextLine(); col.setPatient(tempP); col.details(); col.searching(); if (col.patName[][].equalsIgnoreCase(tempP)) { } } }
I am getting all kinds of errors but cannot identify and fix them. I don't even know if I am doing this correctly.
My questions are:-
- Where should I be creating my array?
- Should I have a multi-dim array or should I use arrayList?
Thank you for your time.
Regards,
SS.
P.S - bit off topic, but what is wrong with java-forums.org (It is not loading for me anymore. I had a PM from some dude, replied and since then cannot get back on there). | http://www.javaprogrammingforums.com/%20object-oriented-programming/10684-where-do-i-create-my-array-printingthethread.html | CC-MAIN-2014-15 | refinedweb | 300 | 57.87 |
PSLab has the capability to perform a variety of experiments. The PSLab Android App and the PSLab Desktop App have built-in support for about 70 experiments. The experiments range from variety of trivial ones which are for school level to complicated ones which are meant for college students. However, it is nearly impossible to support a vast variety of experiments that can be performed using simple electronic circuits.
So, the blog intends to show how PSLab can be efficiently used for performing experiments which are otherwise not a part of the built-in experiments of PSLab. PSLab might have some limitations on its hardware, however in almost all types of experiments, it proves to be good enough.
Identifying the requirements for experiments
- The user needs to identify the tools which are necessary for analysing the circuit in a given experiment. Oscilloscope would be essential for most experiments. The voltage & current sources might be useful if the circuit requires DC sources and similarly, the waveform generator would be essential if AC sources are needed. If the circuit involves the use and analysis of data of sensor, the sensor analysis tools might prove to be essential.
- The circuit diagram of any given experiment gives a good idea of the requirements. In case, if the requirements are not satisfied due to the limitations of PSLab, then the user can try out alternate external features.
Using the features of PSLab
- Using the oscilloscope
- Oscilloscope can be used to visualise the voltage. The PSLab board has 3 channels marked CH1, CH2 and CH3. When connected to any point in the circuit, the voltages are displayed in the oscilloscope with respect to the corresponding channels.
- The MIC channel can be if the input is taken from a microphone. It is necessary to connect the GND of the channels to the common ground of the circuit otherwise some unnecessary voltage might be added to the channels.
- Using the voltage/current source
- The voltage and current sources on board can be used for requirements within the range of +5V. The sources are named PV1, PV2, PV3 and PCS with V1, V2 and V3 standing for voltage sources and CS for current source. Each of the sources have their own dedicated ranges.
- While using the sources, keep in mind that the power drawn from the PSLab board should be quite less than the power drawn by the board from the USB bus.
- USB 3.0 – 4.5W roughly
- USB 2.0 – 2.5W roughly
- Micro USB (in phones) – 2W roughly
- PSLab board draws a current of 140 mA when no other components are connected. So, it is advisable to limit the current drawn to less than 200 mA to ensure the safety of the device.
- It is better to do a rough calculation of the power requirements in mind before utilising the sources otherwise attempting to draw excess power will damage the device.
- Using the Waveform Generator
- The waveform generator in PSLab is limited to 5 – 5000 Hz. This range is usually sufficient for most experiments. If the requirements are beyond this range, it is better to use an external function generator.
- Both sine and square waves can be produced using the device. In addition, there is a feature to set the duty cycle in case of square waves.
- Sensor Quick View and Sensor Data Logger
- PSLab comes with the built in support for several plug and play sensors. The support for more sensors will be added in the future. If an experiment requires real time visualisation of sensor data, the Sensor Quick View option can be used whereas for recording the data for sensors for a period of time, the Sensor Data Logger can be used.
Analysing the Experiment
- The oscilloscope is the most common tool for circuit analysis. The oscilloscope can sample data at very high frequencies (~250 kHz). The waveform at any point can be observed by connecting the channels of the oscilloscope in the manner mentioned above.
- The oscilloscope has some features which will be essential like Trigger to stabilise the waveforms, XY Plot to plot characteristics graph of some devices, Fourier Transform of the Waveforms etc. The tools mentioned here are simple but highly useful.
- For analysing the sensor data, the Sensor Quick View can be paused at any instant to get the data at any instant. Also, the logged data in Sensor Data Logger can be exported as a TXT/CSV file to keep a record of the data.
Additional Insight
- The PSLab desktop app comes with the built-in support for the ipython console.
- The desired quantities like voltages, currents, resistance, capacitance etc. can also be measured by using simple python commands through the ipython console.
- A simple python script can be written to satisfy all the data requirements for the experiment. An example for the same is shown below.
This is script to produce two sine waves of 1 kHz and capturing & plotting the data.
from pylab import * from PSL import sciencelab I=sciencelab.connect() I.set_gain('CH1', 2) # set input CH1 to +/-4V range I.set_gain('CH2', 3) # set input CH2 to +/-4V range I.set_sine1(1000) # generate 1kHz sine wave on output W1 I.set_sine2(1000) # generate 1kHz sine wave on output W2 #Connect W1 to CH1, and W2 to CH2. W1 can be attenuated using the manual amplitude knob on the PSlab x,y1,y2 = I.capture2(1600,1.75,'CH1') plot(x,y1) #Plot of analog input CH1 plot(x,y2) #plot of analog input CH2 show()
References
- Read more about how to perform experiments using PSLab –
- Examples of writing python scripts for the desktop app to perform experiments – | https://blog.fossasia.org/performing-custom-experiments-with-pslab/ | CC-MAIN-2022-21 | refinedweb | 943 | 62.48 |
I tried to use FFT functions in a project and included the header file required for CMSIS library, arm_math.h.
But I found one issue.
Including “arm_math.h” in a project disables systick function which is used as timer event implementing source for the project.
#define __CMSIS_GENERIC /* disable NVIC and Systick functions */
If this is removed, any of CMSIS library functions becomes unavailable.
What is the reason of disabling Systick function?
Is there any work-around way on the issue?
Hello simplehhan,
The CMSIS DSP library (arm_math.h) includes the CMSIS core files (e.g.: core_cm4.h) but uses only the core generic part of the CMSIS core file. The core dependant part, which contains also the SysTick defines is not used and therefore not included. This is achieved with setting the define __CMSIS_GENERIC.
The CMSIS core file is also included in the device specific header file, which needs to be CMSIS compliant (as an example you can check file .\Keil\ARM\Device\ARM\ARMCM4\Include\ARMCM4.h)
If you want to use the CMSIS DSP library together with the CMSIS SysTick definitions you need to include both arm_math.h and CMSIS compliant device header file.
e.g.:
#include "arm_math.h"
#include "ARMCM4.h"
Best Regards,
Martin Guenther
Now somehow Systick function is working. I don't know what was different in the first.
In my source code, arm_math.h is included and ARM_MATH_CM4 is defined in complier preprecessor setup.
Is that the point of the your comment - just including arm_math.h and armcm4.h will resolve the issue?
"arm_math.h" included has the following codes.
The function of the code below, what I expect is, to disable Systick if arm_math.h is included.
#ifndef _ARM_MATH_H
#define _ARM_MATH_H
#define __CMSIS_GENERIC /* disable NVIC and Systick functions */
#if defined (ARM_MATH_CM4)
#include "core_cm4.h"
...
#else
#include "ARMCM4.h"
#warning "Define either ARM_MATH_CM4 OR ARM_MATH_CM3...By Default building on ARM_MATH_CM4....."
#endif /* I (Hyeong Han) think this is the end of block end starting with #ifdef_ARM_MATH_H, which means
the definition code enabling Systick functions is not entered if arm_math.h is included in the source code.
#undef __CMSIS_GENERIC /* enable NVIC and Systick functions */
#include "string.h"
#include "math.h"
#ifdef __cplusplus
extern "C"
{
#endif
Thank you Martin for your support.
Best regards,
Hyeong Han | https://community.arm.com/thread/4644 | CC-MAIN-2016-36 | refinedweb | 379 | 61.33 |
Important: Please read the Qt Code of Conduct -
How to calculate the collision (if boxes overlap)
Dear readers,
can someone tell me how to write my IF statement on the bottom of my code?
I want it to return a number i.e. what side there is collision on.
@#include <SFML/System.hpp>
#include <SFML/Graphics.hpp>
#include <SFML/Window.hpp>
#include <SFML/Audio.hpp>
#include <SFML/Config.hpp>
#include "collision.h"
/*
Collision Detection Function
1 = top
2 = right
3 = bottom
4 = left
*/
int collisionCheck( sf::Sprite player, sf::Sprite object )
{
// get the position and dimensions of the player sprite
int pPosX = player.GetPosition().x;
int pPosY = player.GetPosition().y;
int pSizeX = player.GetSize().x;
int pSizeY = player.GetSize().y;
// get the position and dimensions of the object sprite int oPosX = object.GetPosition().x; int oPosY = object.GetPosition().y; int oSizeX = object.GetSize().x; int oSizeY = object.GetSize().y; // check for collision on the right side of the player if( (pPosX + pSizeX) >= oPosX && (pPosX + pSizeX) <= (oPosX + oSizeX) && (pPosY + pSizeY) >= oPosY && pPosY <= (oPosY + oSizeY ) ){
return 2;
}
// check for collision on the left side of the player if( (pPosX - 2) <= (oPosX + oSizeX) && (pPosY + pSizeY) >= oPosY && pPosY <= (oPosY + oSizeY ) ){
return 4;
}
// check for collision on the top side of the player if( ){
return 1;
}
// check for collision on the bottom side of the player if( ){
return 1;
}
}
int main()
{
// create a new window 800x600 resolution 32 bit
sf::RenderWindow App(sf::VideoMode(800, 600, 32), "LittleBigGame");
// draw a cube (red) sf::Image player; player.Create(50, 50, sf::Color::Red); sf::Texture playerTexture; playerTexture.LoadFromImage(player); sf::Sprite playerSprite(playerTexture); playerSprite.SetX(500); playerSprite.SetY(410); // draw a wall (blue) sf::Image wall; wall.Create(350, 50, sf::Color::Blue); sf::Texture wallTexture; wallTexture.LoadFromImage(wall); sf::Sprite wallSprite(wallTexture); wallSprite.SetX(150); wallSprite.SetY(410); // draw a wall (yellow) sf::Image wall2; wall2.Create(50, 350, sf::Color::Yellow); sf::Texture wallTexture2; wallTexture2.LoadFromImage(wall2); sf::Sprite wallSprite2(wallTexture2); wallSprite2.SetX(500); wallSprite2.SetY(110); // timer so every computer has the same speed sf::Clock clock; while( App.IsOpened()) { // handle events sf::Event Event; while (App.PollEvent(Event)) { switch(Event.Type) { // als het scherm gesloten word case sf::Event::Closed: App.Close(); break; } } if(clock.GetElapsedTime() >= 33) { // execute stuff if(sf::Keyboard::IsKeyPressed(sf::Keyboard::Up)) {
int check = collisionCheck( playerSprite, wallSprite2 );
if(check != 1){
playerSprite.Move( 0, -6 );
}
}
if(sf::Keyboard::IsKeyPressed(sf::Keyboard::Right)) {
int check = collisionCheck( playerSprite, wallSprite2 );
if(check != 2){
playerSprite.Move( +6, 0 );
}
}
if(sf::Keyboard::IsKeyPressed(sf::Keyboard::Down)) { playerSprite.Move( 0, +6 ); } if(sf::Keyboard::IsKeyPressed(sf::Keyboard::Left)) {
int check = collisionCheck( playerSprite, wallSprite2 );
if(check != 4){
playerSprite.Move( -6, 0 );
}
}
// reset the clock clock.Reset(); } // clean screen App.Clear(); /* draw sprites which will be BEHIND of our player */
//App.Draw(wallSprite);
App.Draw(wallSprite2);
// draw the player sprite
App.Draw(playerSprite);
/* draw sprites which will be IN FRONT of our player */ // display the window App.Display(); } // end while App is opened return EXIT_SUCCESS;
}
@
Hope someone can inform me on how to do this :)
P.S. I am using SFML but that doesnt matter much, as this is about how to calculate it and the above code should be clear to anyone :)
The code I have above seems to work for left and right, but I cant get the top and bottom ones working... please help me out ; ;
- tobias.hunger Moderators last edited by
What does this have to do with Qt? Maybe you should try a forum specializing on the library you are actually using?
Not much with Qt but its just a common question about how to make a sum, nothing to do with a library or what so ever. I am asking how to make the sum, not how to use a part of the library.
The sum is just normal C++ like doing math that's all.
To get back to Qt: Qt has classes like QRect and QRegion that support methods like intersects() that might be of help.
Thank you very much Andre, I will look those functions up, that might be just what I need :)
Aside from that... my question was just a normal C++ question and yes, I asked it on the forums from the makers
of this library but all people there are to "cocky" to explain me this simple formula so I tought, lets try the Qt forums
since I have had a lot of tips, and explenations from nice people on here :)
Anyways, I will look at the functions but still, can anyone explain me the formula? :P
Its not really any sort of formula. If this was in 3D you would need formulas, but this are just basic comparisons. I'd suggest you draw two boxes in a coordinate system on a paper and do the comparisons for left and right (as already exist in the code above) by hand to get a better understanding of what's happening. | https://forum.qt.io/topic/13844/how-to-calculate-the-collision-if-boxes-overlap | CC-MAIN-2021-21 | refinedweb | 824 | 57.98 |
Introducing stax-ex project
We've been using StAX API more and more lately. Its support is added in JAXB a while ago, JAX-WS RI has started using it internally, and now with the rearchitecture, the JAX-WS RI is using StAX API more extensively internally.
As a part of the rearchitecture, Paul Sandoz and I were talking about the need of a few small extensions to the StAX API.
For example, we'd like to handle MTOM below the StAX API, since it's a kind of XML encoding. But when we do that, we'd like to acecss the binary data more efficiently than turning it into base64-encoded text. This also allows FastInfoset to pass through its binary data to client applications, like JAXB.
Another area that needs a fix is the poorly designed NamespaceContext. What we needed in the Java platform was a very simple NamespaceResolver (that just resolves prefixes to URLs and useful for XPath parsing and etc), and then more comprehensive NamespaceContext that exposes complete hierarchy of in-scope namepsace bindings for parsers. But today's NamespaceContext tried to be both at the same time, and it ended up neither. The consequence of this for StAX is that there's currently no way to enumerate the complete in-scope namespace bindings, and this hurts.
This discussion initially started in the dev@jax-ws.dev.java.net list, but we eventually realized that it needs to get its own home, so that components like SJSXP, JAXB, FI can all refer to it and make use of it.
So today we created the stax-ex project for this purpose. The project hosts a few interfaces that inherit the interfaces defined in JSR-173. So in some sense it works like optional add-on to the JSR-173 interface.
Javadoc is also available, and jar files can be downloaded from the Maven java.net repository, thanks to the maven java.net plugin.
- Login or register to post comments
- Printer-friendly version
- kohsuke's blog
- 1391 reads | https://weblogs.java.net/node/234622/atom/feed | CC-MAIN-2015-14 | refinedweb | 340 | 62.17 |
This add-on is operated by 84codes AB
Cloud based MQTT broker, pub/sub for mobile applications and sensors
CloudMQTT
Table of Contents
CloudMQTT is an add-on for providing a MQTT broker to your application(s).
MQTT is a lightweight pub/sub protocol, especially suited for low processor/bandwidth units like sensors and built-in system, but also suited for fast communication within applications.
CloudMQTT is exposed through the MQTT protocol for which there are supported client in Java, C, Python, Node.js, Ruby, Objective-C etc.
Provisioning the add-on
Cloud CLOUDMQTT_URL=value.
Use Foreman to configure, run and manage process types specified in your app’s Procfile. Foreman reads configuration variables from an .env file. Use the following command to add the CLOUDMQTT_URL values retrieved from heroku config to
.env.
$ heroku config -s | grep CLOUDMQTT_URL >> .env $ more .env
Credentials and other sensitive configuration values should not be committed to source-control. In Git exclude the .env file with:
echo .env >> .gitignore.
Service setup
A MQTT server can be installed for use in a local development environment. Typically this entails installing a MQTT compatible server like Mosquitto and pointing the CLOUDMQTT_URL to this local service.
Your CLOUDMQTT_URL can then be subsituted with
mqtt://localhost:1883.
Using with Ruby
Currently the most mature client library for Ruby is the synchronous ruby-mqtt, the async em-mqtt as yet to support user/password before it’s usable with CloudMQTT.
First you need to add
mqtt as a dependency to your
Gemfile and execute
bundle install. In the following code snippet you can see you how can publish and subscribe. Note that the client is synchronous so you have to use threads if you want to subscribe and do other things at the same time.
require 'mqtt' require 'uri' # Create a hash with the connection parameters from the URL uri = URI.parse ENV['CLOUDM: github.com/CloudMQTT/ruby-mqtt-example
Worth noting is that the client does not yet support other QoS levels than 0, ie. no publish acknowledge or redelivery.
Using with Node.js
A good javascript MQTT library is MQTT.js. Add
mqtt to your
package.json file. Then a simple example could look like this:
var mqtt = require('mqtt'), url = require('url'); // Parse var mqtt_url = url.parse(process.env.CLOUDMQTT_URL || 'mqtt://localhost:1883'); var auth = (mqtt_url.auth || ':').split(':'); // Create a client connection var client = mqtt.createClient(mqtt_url.port, mqtt_url.hostname, { username: auth[0], password: auth[1] }); client.on('connect', function() { // When connected // subscribe to a topic client.subscribe('hello/world', function() { // when a message arrives, do something with it client.on('message', function(topic, message, packet) { console.log("Received '" + message + "' on '" + topic + "'"); }); }); // publish a message to a topic client.publish('hello/world', 'my message', function() { console.log("Message is published"); client.end(); // Close the connection when published }); });
A full sample web app which uses MQTT.js, Express.js and SSE to deliver messages from and to a web browser is available here: github.com/CloudMQTT/mqtt-sse and can be tested out here at mqtt-sse.herokuapp.com.
Using with Python
The most feature complete MQTT client for Python is Mosquitto. Below you see an sample app which both publish and subscribes to CloudMQTT.
import mosquitto, os, urlparse # Define event callbacks() # Assign event callbacks mqttc.on_message = on_message mqttc.on_connect = on_connect mqttc.on_publish = on_publish mqttc.on_subscribe = on_subscribe # Uncomment to enable debug messages #mqttc.on_log = on_log # Parse CLOUDMQTT_URL (or fallback to localhost) url_str = os.environ.get('CLOUDMQTT_URL', 'mqtt://localhost:1883') url = urlparse.urlparse(url_str) # Connect mqttc.username_pw_set(url.username, url.password) mqttc.connect(url.hostname, url.port) # Start subscribe, with QoS level 0 mqttc.subscribe("hello/world", 0) # Publish a message mqttc.publish("hello/world", "my message") # Continue the network loop, exit when an error occurs rc = 0 while rc == 0: rc = mqttc.loop() print("rc: " + str(rc))
The full code can be seen at github.com/CloudMQTT/python-mqtt-example.
Using with Java
The by far best MQTT client for Java/JVM is Paho. Please email support@cloudmqtt.com if you need help to get started.
Dashboard.
Migrating between plans
Removing the add-on
CloudMQTT can be removed via the CLI.
This will destroy all associated data and cannot be undone!
$ heroku addons:remove cloudmqtt -----> Removing cloudmqtt from sharp-mountain-4005... done, v20 (free)
Support
All CloudMQTT support and runtime issues should be submitted on of the Heroku Support channels. Any non-support related issues or product feedback is welcome at support@cloudmqtt.com.
Keep reading
Your feedback has been sent to the Dev Center team. Thank you. | https://devcenter.heroku.com/articles/cloudmqtt | CC-MAIN-2015-11 | refinedweb | 759 | 52.46 |
numpy has been the rage for quite sometime within the python community and I have yet to find a nice write up that really compares the performance of using numpy vs regular python lists to get specific tasks done so I decided on writing up a quick and dirty comparison.
First lets simply compare the amount of memory required to store 1 million integers in a python list vs 1 million integers in a numpy array:
from memory_profiler import profile @profile def main(): data = [0 for _ in range(0, 1000000)] if __name__ == '__main__': main()
and in numpy:
from memory_profiler import profile import numpy as np @profile def main(): data = np.zeros(1000000) if __name__ == '__main__': main()
We used the neat little library memory_profiler
to take care of profiling memory usage for the
main() function and in this one
little test we can already see the tremendous benefits of using numpy over
traditional python lists:
> python 1_million_python_list.py Filename: 1_million_python_list.py Line # Mem usage Increment Line Contents ================================================ 3 13.1 MiB 0.0 MiB @profile 4 def main(): 5 21.0 MiB 7.9 MiB data = [ 0 for _ in range(0, 1000000)] 2017-04-02 15:25:09 $?=0 pwd=/home/rlgomes/workspace/python3/numpy venv=env duration=65.931s
vs
> python 1_million_numpy_array.py Filename: 1_million_numpy_array.py Line # Mem usage Increment Line Contents ================================================ 5 26.1 MiB 0.0 MiB @profile 6 def main(): 7 26.4 MiB 0.3 MiB data = np.zeros(1000000) 2017-04-02 15:23:57 $?=0 pwd=/home/rlgomes/workspace/python3/numpy venv=env duration=.290s
Which in terms of time to execute numpy is 227x faster and in terms of memory usage it is 26x more efficient.
Now what if we assume the list already exists in memory and we don’t care about the time it took to load it or even how much space it occupies what happens when we try to calculate statistics over the million integers:
import timeit import random import numpy as np python_list = [random.random() for _ in range(0, 1000000)] numpy_array = np.array(python_list) elapsed = timeit.timeit('sum(python_list)', number=100, globals=locals()) print('%.5fs for python sum(list)' % elapsed) elapsed = timeit.timeit('np.sum(numpy_array)', number=100, globals=locals()) print('%.5fs for python numpy.sum(array)' % elapsed) elapsed = timeit.timeit('max(python_list)', number=100, globals=locals()) print('%.5fs for python max(list)' % elapsed) elapsed = timeit.timeit('np.amax(numpy_array)', number=100, globals=locals()) print('%.5fs for python numpy.amax(array)' % elapsed)
The above gives the following output on my machine:
> python million_integer_stats.py 0.56150s for python sum(list) 0.06869s for python numpy.sum(array) 2.05134s for python max(list) 0.06859s for python numpy.amax(array)
Which puts numpy at 8x faster at calculating the sum of a million integers and at 29.9x faster at calculating the maximum value.
Any other comparisons will surely continue to show just how much more efficient numpy is at statistical analysis and there are a ton of more functionality the library has to offer from operating on matrices to fitting polynomials to existing data.
The main thing to takeaway from this post is to use numpy when you are doing any kind of math over hundreds of thousands of numbers as it will perform much better and remove the need for coding up your own statistical functions. | http://rlgomes.github.io/work/python/numpy/python3/2017/04/02/15.11-is-numpy-really-that-much-faster.html | CC-MAIN-2017-26 | refinedweb | 563 | 68.26 |
Lollypop no longer launches. Can’t listen to my music. I am on Gnome edition, testing. Anyone else experience this issue?
Works fine for me. What is the output running it from the terminal?
Traceback (most recent call last):
File “/usr/bin/lollypop”, line 45, in
from lollypop.application import Application
File “/usr/lib/python3.8/site-packages/lollypop/application.py”, line 18, in
gi.require_version(“Handy”, “1”)
File “/usr/lib/python3.8/site-packages/gi/init.py”, line 132, in require_version
raise ValueError(‘Namespace %s not available for version %s’ %
ValueError: Namespace Handy not available for version 1
When an attempt to launch lollypop from terminal, I get the above text.
What other info can I provide? I forget how to create a good thread.
It’s an issue with the version of libhandy that’s in the repos:
You can install this AUR package; if you have the older version, it won’t even replace it (the older version), but just sit alongside it.
Aha, that’s why I’m not having the same issue. I have
libhandy1 installed as a dependency for other packages.
FYI, there’s a Arch bug report open for it:
FS#68054 - [lollypop] Need libhandy1
Yeah,
libhandy 1.0.0 is in the Arch testing repo right now.
So it’s coming. But until then, that AUR package should work as a temporary fix.
Thank you both @Yochanan and @jabber
That is the solution then? To wait for the version of libhandy package. Or install it from AUR for the temporary fix.
Think I will wait. Will just listen to Spotify in the meantime.
I suppose you could downgrade Lollypop.
This topic was automatically closed 15 days after the last reply. New replies are no longer allowed. | https://forum.manjaro.org/t/lollypop-no-longer-launches-after-last-testing-update/29524 | CC-MAIN-2020-45 | refinedweb | 294 | 70.5 |
Writing or reading from/to a specific source might be acceptable for specialized applications. In spite of that, typically we should separate the way our program reads and writes from the actual input and output device; therefore, this avoids us the urge of directly address specific approaching for each different device (changing our program for each screen or disk in the market) or only use a limited amount of screens or disks that we happen to like,and by this, limiting our clients.
Nowadays, operating systems separate the details of handling specific I/O devices into device drivers allowing programs to access through a generic I/O library which encourage a better development.
The C++ Standard provides I/O libraries to define output and input for every built-in type, but in this article we’re going to focus on 3 specific stream libraries:
- fstream for reading from a file and writing to a file in the same stream.
- ofstream converts objects in memory into streams of byte and writes them to the file.
- ifstream takes streams of bytes from the file and composes objects from it.
Write a File steps
The following snippet shows the basic- recommend steps to write a file using C++:
//Header for I/O file streams using namespace std; int main(){ //1. Name it (or create it) for writing ofstream ofs ("test.txt",ios_base::out); //2.- Open it if(ofs.is_open()){ //3.- Write out objects ofs<<"Hello C++"<<endl; for(unsigned int i=0;i<5;i++) ofs<<"Paragraph: "<<i<<endl; //4.-Close ofs.close(); } //0. Hold errors. else{ //Error //... } return 0; }
Read a file steps
The coming snippet shows the basic- recommend steps to read a file using C++, this program reads the content of the file and shows the same content in the output console window:
//... Read it and show it //Read //1. Know its name ifstream ifs{"test.txt",ios_base::in}; if(!ifs){ cout<<"can open input file text.txt"<<endl;} else{ //2. Open it string temp; temp.clear(); //3. Read in the characters while(ifs){ ifs>>temp; cout<<temp; } } //4. Close (Implicitly closed) //ifs.close();
Exercises
The next exercise creates a file of data in a specific format <hour (0-23), temperature (ºC)>.
The row (1) demonstrates how to create a file with a specific version and mode (in this case, for writing the file).
The row (2) shows how to handle errors (displaying a error message and ending the program). Various C++ authors recommend to check for errors just after the creation/opening of the file.
The (3) represents the format we’re writing in the file <12 56C>.
int main(int argc, char *argv[]) { ofstream os{"raw_temps.txt",ios_base::out}; //(1) if(!os.is_open()){ //(2) //hold error cout<<"The file produces a error"<<endl; return 1; } bool cont_flag=true; cout<<"[Welcome to the Temperature File creator]"<<endl; do{ int tempHour; int tempTemp=0; char cDecision; cout<<"Type the hour (0 to 23): "<<endl; cin>>tempHour; cout<<"Type the temperature (ºC): "<<endl; cin>>tempTemp; os<<tempHour<<'\t'<<tempTemp<<'C'<<'\n'; //(3) cout<<"Do you want to add another pair? Y-N "<<endl; cin>>cDecision; cont_flag=(cDecision== 'Y' || cDecision== 'y')?true:false; }while(cont_flag); return 0; }
So far, that’s it! I hope you enjoy this article. | http://gearstech.com.mx/blog/2019/03/15/cio-stream/ | CC-MAIN-2021-31 | refinedweb | 546 | 63.29 |
>> check query for 0 sum
Suppose we have an array A with n elements, the elements are in range -1 to 1. And have another array of pairs for m queries Q like Q[i] = (li, ri). The response to the query will be 1 when the elements of array a can be rearranged so as the sum Q[li] + ... + Q[ri] = 0, otherwise 0. We have to find answers of all queries.
So, if the input is like A = [-1, 1, 1, 1, -1]; Q = [[1, 1], [2, 3], [3, 5], [2, 5], [1, 5]], then the output will be [0, 1, 0, 1, 0]
Steps
To solve this, we will follow these steps −
n := size of A m := size of Q z := 0 for initialize , i := 0, when i < n, update (increase i by 1), do: z := z + (1 if a < 0, otherwise 0) if z > n - z, then: z := n - z for initialize i := 0, when i < m, update (increase i by 1), do: l := Q[i, 0] r := Q[i, 1] print 1 if (((r - l) mod 2 is 1 and (r - l + 1) / 2) <= z), otherwise 0
Example
Let us see the following implementation to get better understanding −
#include <bits/stdc++.h> using namespace std; void solve(vector<int> A, vector<vector<int>> Q){ int n = A.size(); int m = Q.size(); int z = 0; for (int a, i = 0; i < n; ++i) z += a < 0; if (z > n - z) z = n - z; for (int i = 0; i < m; i++){ int l = Q[i][0]; int r = Q[i][1]; cout << (((r - l) % 2 && (r - l + 1) / 2) <= z) << ", "; } } int main(){ vector<int> A = { -1, 1, 1, 1, -1 }; vector<vector<int>> Q = { { 1, 1 }, { 2, 3 }, { 3, 5 }, { 2, 5 }, { 1, 5 } }; solve(A, Q); }
Input
{ -1, 1, 1, 1, -1 }, { { 1, 1 }, { 2, 3 }, { 3, 5 }, { 2, 5 }, { 1, 5 } }
Output
1, 0, 1, 0, 1,
- Related Questions & Answers
- Check for Subarray in the original array with 0 sum JavaScript
- Check for Valid hex code - JavaScript
- 8085 Program to check for two out of five code
- Can you check for both a blank string and 0 in one condition with a single MySQL query?
- C++ code to check triangular number
- Program to check for two out of five code in 8085 Microprocessor
- C++ code to check reengagements can be done so elements sum is at most x
- Query MySQL with unicode char code?
- C++ code to find card spread way to make all sum equal for each player
- MongoDB query to sum specific fields
- How can we check for NULL in a MySQL query?
- Check for Palindrome after every character replacement Query in C++
- C++ code to process query operation on binary array
- How to front pad zip code with “0” in MySQL?
- Optimization Tips for C# Code
Advertisements | https://www.tutorialspoint.com/cplusplus-code-to-check-query-for-0-sum | CC-MAIN-2022-33 | refinedweb | 476 | 69.79 |
AUTHENTICATE(3) BSD Programmer's Manual AUTHENTICATE(3)
auth_approval, auth_cat, auth_checknologin, auth_mkvalue, auth_userchallenge, auth_usercheck, auth_userokay, auth_userresponse, auth_verify - simplified interface to the BSD Authentication system
#include <login_cap.h> #include <bsd_auth.h> int auth_userokay(char *name, char *style, char *type, char *password); auth_session_t * auth_userchallenge(char *name, char *style, char *type, char **challengep); auth_session_t * auth_usercheck(char *name, char *style, char *type, char *password); int auth_userresponse(auth_session_t *as, char *response, int more); int auth_approval(auth_session_t *as, struct login_cap *lc, char *name, char *type); int auth_cat(char *file); void auth_checknologin(struct login_cap *lc); char * auth_mkvalue(char *value); auth_session_t * auth_verify(auth_session_t *as, char *style, char *name, ...);
These functions provide a simplified interface to the BSD Authentication system (see bsd_auth(3)). The auth_userokay() function provides a single function call interface. Provided with a user's name in name, and an op- tional style, type, and password, the auth_userokay() function returns a simple yes/no response. A return value of 0 implies failure; a non-zero return value implies success. If style is not NULL, it specifies the desired style of authentication to be used. If it is NULL then the de- fault style for the user is used. In this case, name may include the desired style by appending it to the user's name with a single colon (':') as a separator. If type is not NULL then it is used as the authen- tication type (such as "auth-myservice"). If password is NULL then auth_userokay() operates in an interactive mode with the user on standard input, output, and error. If password is specified, auth_userokay() operates in a non-interactive mode and only tests the specified pass- words. This non-interactive method does not work with challenge-response authentication styles. The auth_usercheck() function operates the same as the auth_userokay() function except that it does not close the BSD Authentication session created. Rather than returning the status of the session, it returns a pointer to the newly created BSD Authentication session. The auth_userchallenge() function takes the same name, style, and type arguments as does auth_userokay(). However, rather than authenticating the user, it returns a possible challenge in the pointer pointed to by challengep. The return value of the function is a pointer to a newly created BSD Authentication session. This challenge, if not NULL, should be displayed to the user. In any case, the user should provide a password which is the response in a call to auth_userresponse(). In addition to the password, the pointer returned by auth_userchallenge() should be passed in as as and the value of more should be non-zero if the program wishes to allow more attempts. If more is zero then the session will be closed. The auth_userresponse() function closes the BSD Authentication session and has the same return value as auth_userokay(). The auth_approval() function calls the approval script for the user of the specified type. The string "approve-" will be prepended to type if missing. The resulting type is used to look up an entry in /etc/login.conf for the user's class. If the entry is missing, the gener- ic entry for "approve" will be used. The name argument will be passed to the approval program as the name of the user. The lc argument points to a login class structure. If it is NULL then a login class structure will be looked up for the class of user name. The auth_approval() function re- turns a value of 0 on failure to approve the user. Prior to actually calling the approval script, the account's expiration time, the associated nologin file, and existence of the account's home directory (if requirehome is set for this class) are checked. Failure on any of these points causes the auth_approval() function to return a value of 0 and not actually call the approval script. The auth_cat() function opens file for reading and copies its contents to standard output. It returns 0 if it was unable to open file and 1 other- wise. The auth_checknologin() function must be provided with a pointer to a lo- gin class. If the class has a "nologin" entry defined and it points to a file that can be opened, the contents of the file will be copied to stan- dard output and exit(3) will be called with a value of 1. If the class does not have the field "ignorenologin" and the file /etc/nologin exists its contents will be copied to standard output and exit(3) will be called with a value of 1. The auth_verify() function is a front end to the auth_call(3) function. It will open a BSD Authentication session, if needed, and will set the style and user name based on the style and name arguments, if not NULL. Values for the style and user name in an existing BSD Authentication ses- sion will be replaced and the old values freed (if the calling program has obtained pointers to the style or user name via auth_getitem(3), those pointers will become invalid). The variable arguments are passed to auth_call() via the auth_set_va_list(3) function. The, possibly created, BSD Authentication session is returned. The auth_getstate(3) or auth_close(3) function should be used to determine the outcome of the au- thentication request. The auth_mkvalue() function takes a NUL-terminated string pointed to by value and returns a NUL-terminated string suitable for passing back to a calling program on the back channel. This function is for use by the lo- gin scripts themselves. The string returned should be freed by free(3) when it is no longer needed. A value of NULL is returned if no memory was available for the new copy of the string.
auth_subr(3), getpwent(3), pw_dup(3)
The auth_approval(), auth_usercheck(), auth_userokay(), and auth_userchallenge() functions call getpwnam(3) or getpwuid(3), overwrit- ing the static storage used by the getpwent(3) family of routines. The calling program must either make a local copy of the passwd struct pointer via the pw_dup(3) function or, for auth_approval() and auth_usercheck() only, use the auth_setpwd(3) function to copy the passwd struct into a BSD Authentication session structure which can then be passed to auth_approval() or auth_usercheck(). MirOS BSD #10-current March 26,. | http://mirbsd.mirsolutions.de/htman/i386/man3/auth_mkvalue.htm | crawl-003 | refinedweb | 1,031 | 52.7 |
in reply to Re^3: How to write Perl Client that uses WCF TCP/IP Servicein thread How to write Perl Client that uses WCF TCP/IP Service
Hello
PerlApprentice, whom you asked a question, hasn't been here in 2 months, 3 weeks.
The node to which you replied is 2 months, 3 weeks old.
Welcome, see The Perl Monks Guide to the Monastery, see How do I post a question effectively?, Where should I post X?
My advice, see links and links I post in SOAP::Lite - UNKNOWN ARGUMENT, Suppressing nil attribute in empty SOAP tag,
import wsdl wizard, helpful for debugging your SOAP::Lite calls, example of using/installing SOAP::Simple, which has better WSDL support that SOAP::Lite, and its simpler than XML::Compile, on which it is built.
Deep frier
Frying pan on the stove
Oven
Microwave
Halogen oven
Solar cooker
Campfire
Air fryer
Other
None
Results (322 votes). Check out past polls. | http://www.perlmonks.org/?node_id=943055 | CC-MAIN-2016-26 | refinedweb | 158 | 66.98 |
Forum Index
I just had a discussion with Walter, Andrei and Ali about open methods. While Andrei is not a great fan of open methods, he likes the idea of improving D to better support libraries that extend the language - of which my openmethods library is just an example. Andrei, correct me if I misrepresented your opinion in this paragraph.
Part of the discussion was about a mechanism to add user-defined per-object or per-class metadata (there's another part that I will discuss in another thread).
Andrei's initial suggestion is to put it in the vtable. If we know the initial size of the vtable, we can grow it to accommodate new slots. In fact we can already do something along those lines...sort of:
import std.stdio;
class Foo {
abstract void report();
}
class Bar : Foo {
override void report() { writeln("I'm fine!"); }
}
void main() {
void*[] newVtbl;
auto initVtblSize = Bar.classinfo.vtbl.length;
newVtbl.length = initVtblSize + 1;
newVtbl[0..initVtblSize] = Bar.classinfo.vtbl[];
newVtbl[initVtblSize] = cast(void*) 0x123456;
byte[] newInit = Bar.classinfo.m_init.dup;
*cast(void***) newInit.ptr = newVtbl.ptr;
Bar.classinfo.m_init = newInit;
Foo foo = new Bar();
foo.report(); // I'm fine!
writeln((*cast(void***)foo)[initVtblSize]); // 123456
}
This works with dmd and gdc, not with ldc2. But it gives an idea of what the extension would like.
A variant of the idea is to allocate the user slots *before* the vtable and access them via negative indices. It would be faster.
Of course we would need a thread safe facility that libraries would call to obtain (and release) slots in the extended vtable, and return the index of the allocated slot(s). Thus a library would call an API to (globally) reserve a new slot; then another one to grow the vtable of the classes it targets (automatically finding and growing all the vtables is unfeasible because nested classes are not locatable via ModuleInfo).
Walter also reminded me of the __monitor field so I played with it too. Here is prototype of what per-instance user defined slots could look like.
import std.stdio;
class Foo {
}
void main() {
byte[] init;
init.length = Foo.classinfo.m_init.length;
init[] = Foo.classinfo.m_init[];
(cast(void**) init.ptr)[1] = cast(void*) 0x1234;
Foo.classinfo.m_init = init;
Foo foo = new Foo();
writeln((cast(void**) foo)[1]); // 1234 with dmd and gdc, null with ldc2
}
This works with dmd and gdc but not with ldc2.
This may be useful for implementing reference-counting schemes, Observers, etc.
In both cases I use the undocumented 'm_init' field in ClassInfo. The books and docs do talk about the 'init' field that is used to initialize structs, but I have found no mention of 'm_init' for classes. Perhaps we could document it and make it mandatory that an implementation uses its content to pre-initialize objects.
Also here I am using the space reserved for the '__monitor' hidden field. This is a problem because 1/ it will go away some day 2/ it is only one word. Granted, that word could store a pointer to a vector of words, where user-defined slots would live; but that would be at the cost of performance.
Finally, note that if you have per-instance user slots and a way of automatically initializing them when an object is created, then you also have per-class user-defined metadata: just allocate a slot in the object, and put a pointer to the data in it.
Please send in comments, especially if you are a library author and have encountered a need for this kind of thing. Eventually the discussion may lead to the drafting of a DIP.
I realize that I focused too much on the how, and not enough on the why.
By "metadata" I mean the data that is "just there" in any object, in addition to user defined fields.
An example of per-class metadata is the pointer to the the virtual function table. It is installed by the compiler or the runtime as part of object creation. It is the same for all the instances of the same class.
Just like virtual functions, my openmethods library uses "method tables" and needs a way of finding the method table relevant to an object depending on its class. I want the library to work with objects of any classes, without requiring modifications to existing classes. Thus, there is a need to add that information to any object, in an orthogonal manner. Openmethods has two ways of doing this (one actually hijacks the deprecated 'deallocator' field in ClassInfo) but could profit from the ability to plant pointers right inside objects.
Examples of per-object metadata could be: a reference count, a time stamp, an allocator, or the database an object was fetched from. | https://forum.dlang.org/thread/aubnuulcntgsjqistkok@forum.dlang.org | CC-MAIN-2018-17 | refinedweb | 796 | 64.3 |
In Python, with Matplotlib, how can a scatter plot with empty circles be plotted? The goal is to draw empty circles around some of the colored disks already plotted by
scatter()
From the documentation for scatter:
Optional kwargs control the Collection properties; in particular: edgecolors: The string ‘none’ to plot faces with no outlines facecolors: The string ‘none’ to plot unfilled outlines
Try the following:
import matplotlib.pyplot as plt import numpy as np x = np.random.randn(60) y = np.random.randn(60) plt.scatter(x, y, s=80, facecolors='none', edgecolors='r') plt.show()
Note: For other types of plots see this post on the use of
markeredgecolor and
markerfacecolor. | https://codedump.io/share/snaLP2lqVaug/1/how-to-do-a-scatter-plot-with-empty-circles-in-python | CC-MAIN-2017-04 | refinedweb | 112 | 54.73 |
2.12. Plugin Infrastructure in Buildbot¶
New in version 0.8.11.
Plugin infrastructure in Buildbot allows easy use of components that are not part of the core. It also allows unified access to components that are included in the core.
The following snippet
from buildbot.plugins import kind ... kind.ComponentClass ...
allows to use a component of kind
kind.
Available
kinds are:
worker
- workers, described in Workers
changes
- change source, described in Change Sources.
Note
If you are not very familiar with Python and you need to use different kinds of components, start your
master.cfg file with:
from buildbot.plugins import *
As a result, all listed above components will be available for use.
This is what sample
master.cfg file uses.
2.12.1. Finding Plugins¶
Buildbot maintains a list of plugins at.
2.12.2. Developing Plugins¶
Distribute a Buildbot Plug-In contains all necessary information for you to develop new plugins. Please edit to add a link to your plugin!
2.12. | https://buildbot.readthedocs.io/en/v0.9.14/manual/plugins.html | CC-MAIN-2018-09 | refinedweb | 165 | 61.22 |
BC(1) NetBSD General Commands Manual BC(1)Powered by man-cgi (2021-06-01). Maintained for NetBSD by Kimmo Suominen. Based on man-cgi by Panagiotis Christias.
NAME
bc -- arbitrary precision calculator language
SYNOPSIS
bc [-hilqsvw] [long-options] [file ...]
DESCRIPTION
bc is a language that supports arbitrary precision numbers with interac- tive docu- ment describes the language accepted by this processor. Extensions will be identified as such. OPTIONS -h, --help Print the usage and exit. -i, --interactive Force interactive mode. -l, --mathlib Define the standard math library. -q, --quiet Quiet mode. -s, --standard Process exactly the POSIX bc language. -v, --version Print the version number and copyright and quit. -w, --warn Give warnings for extensions to POSIX bc. NUMBERS The most basic element in bc is the number. Numbers are arbitrary preci- sion numbers. This precision is both in the integer part and the frac- tional part. All numbers are represented internally in decimal and all computation is done in decimal. (This version of bc truncates results from divide and multiply operations.) There are two attributes of num- bers, the length and the scale. The length is the total number of sig- nificant decimal digits in a number and the scale is the total number of decimal digits after the decimal point. For example: under- scores. Comments in bc start with the characters ``/*'' and end with the charac- ters ``*/''. 36. (Base values greater than 16 are an exten- sion.) Assigning a value outside this range to ibase will result in a value of 2 or 36. Input numbers may contain the characters 0-9 and A-Z. ``ZZZ'' repre- sent vari- able and then the variable is incremented by one. var -- The result of the expression is the value of the vari- able and then the variable is decremented by one. expr + expr The result of the expression is the sum of the two expressions. expr - expr The result of the expression is the difference of the two expressions. expr * expr The result of the expression is the product of the two compute a-(a/b)*b to the scale of the maximum of scale + scale(b) and scale(a). If scale is set to zero and both expressions are integers this expression is the integer remainder function. expr ^ expr the exponent is negative. If the exponent is posi- tive, the scale of the result is the minimum of the scale of the first expression times the value of the exponent and the maximum of scale and the scale of the first expression. (e.g. scale(a^b) = min(scale(a)*b, max( scale, scale(a))).) It should be noted that expr^0 will always return the value of 1. ( expr ) This alters the standard precedence to force the evalua- tion of the expression. var = expr The variable is assigned the value of the expression. var <op>= expr This is equivalent to var = var <op> expr with the exception that the ``var'' part is evaluated only once. This can make a difference if ``var'' is an array. Relational expressions are a special kind of expression that always eval- uate to 0 or 1, 0 if the relation is false and 1 if the relation is true. These may appear in any legal expression. (POSIX bc requires that rela- tional expressions are used only in if, while, and for statements and that only one relational test may be done in them.) The relational oper- ators are: expr1 < expr2 The result is 1 if expr1 is strictly less than expr2. expr1 <= expr2 The result is 1 if expr1 is less than or equal to expr2. expr1 > expr2 The result is 1 if expr1 is strictly greater than expr2. expr1 >= expr2 The result is 1 if expr1 is greater than or equal to expr2. expr1 == expr2 The result is 1 if expr1 is equal to expr2. expr1 != expr2 The result is 1 if expr1 is not equal to expr2. Boolean operations are also legal. (POSIX bc does NOT have boolean oper- ations.) The result of all boolean operations are 0 and 1 (for false and true) as in relational expressions. The boolean operators are: !expr The result is 1 if expr is 0. expr && expr The result is 1 if both expressions are non-zero. expr || expr The result is 1 if either expression is non-zero. The expression precedence is as follows: (lowest to highest) 1. || operator, left associative 2. && operator, left associative 3. ! operator, nonassociative 4. Relational operators, left associative 5. Assignment operator, right associative 6. + and - operators, left associative 7. *, / and % operators, left associative 8. ^ operator, right associative 9. unary - operator, nonassociative 10. ++ and -- operators, nonassociative This precedence was chosen so that POSIX compliant bc programs will run correctly. This will cause the use of the relational and logical opera- tors assign- ment operators. There are a few more special expressions that are provided in bc. These have to do with user defined functions and standard functions. They all appear as ``name (parameters)''. See the section on functions for user defined functions. The standard functions are: length (expression) The value of the length function is the number of significant digits in the expression. read () The read function (an extension) will read a number from the standard input, regardless of where the function occurs. Beware, this can cause problems with the mixing of data and program in the standard input. The best use for this function is in a pre- viously written program that needs input from the user, but never allows program code to be input from the user. The value of the read function is the number read from the standard input using the current value of the variable ibase for the conver- sion base. scale (expression) The value of the scale function is the number of digits after the decimal point in the expression. sqrt (expression) The value of the sqrt function is the square root of the expression. If the expression is negative, a run time error is generated. STATEMENTS Statements (as in most algebraic languages) provide the sequencing of expression evaluation. In bc statements are executed ``as soon as possible''. Execution happens when a newline in encountered and there is one or more complete statements. Due to this immediate execution, new- lines brackets ([]) are optional parts of the statement.) expression This statement does one of two things. If the expres- sion starts with <variable> <assignment> ..., it is con- sidered to be an assignment statement. If the expres- sion is not an assignment statement, the expression is evaluated and printed to the output. After the number is printed, a newline is printed. For example, ``a=1'' is an assignment statement and ``(a=1)'' is an expres- sion that has an embedded assignment. All numbers that are printed are printed in num- bers last.) string The string is printed to the output. Strings start with a double separated by commas. Each string or expres- sion is printed in the order of the list. No terminat- ing newline is printed. Expressions are evaluated and their value is printed and assigned to the variable last. Strings in the print statement are printed to the output and may contain special characters. Special characters start with the backslash character (\). The special characters (expression) statement1 [else statement2] The if statement evaluates the expression and executes statement expression is non-zero. It evaluates the expression before each execution of the statement. Termination of the loop is caused by a zero expression value or the execution of a break statement. for ([expression1]; [expression2]; [expression3]) statement The for statement controls repeated execution of the statement. Expression1 is evaluated before the loop. Expression2 is evaluated before each execution of the statement. If it is non-zero, the statement is evalu- ated. If it is zero, the loop is terminated. After each execution of the statement, expression3 is evalu- ated before the reevaluation of expression2. If expression1 or expression3 are missing, nothing is eval- uated at the point they would be evaluated. If expression2 is missing, it is the same as substituting the value 1 for expression2. (The optional expressions are an extension. POSIX bc requires all three expres- sions.) itera- tion. halt The halt statement (an extension) is an executed state- ment. PSEUDO STATEMENTS These statements are not statements in the traditional sense. They are not executed statements. Their function is performed at "compile" time. longer warranty notice. This is an extension. FUNCTIONS Functions provide a method of defining a computation that can be executed later. Functions in bc always compute a value and return it to the caller. Function definitions are "dynamic" in the sense that a function is undefined until a definition is encountered in the input. That defi- nition defini- tion, zero or more parameters are defined by listing their names sepa- rated func- tion sep- arated by semicolons or newlines. Return statements cause the termina- tion of a function and the return of a value. There are two versions of the return statement. The first form, ``return'', returns the value 0 to the calling expression. The second form, ``return (expression)'', com- putes the value of the expression and returns that value to the calling expression. There is an implied ``return (0)'' at the end of every func- tion. execu- tion num- ber of newlines before and after the opening brace of the function. For example, the following definitions are legal. define d (n) { return (2*n); } define d (n) { return (2*n); }. state- ment is zero, the zero is printed. For px (1), no zero is printed because the function is a void function. Also, call by variable for arrays was added. To declare a call by vari- able array, the declaration of the array parameter in the function defi- nition looks like ``name []''. The call to the function remains the same as call by value arrays. MATH LIBRARY If bc is invoked with the -l option, a math library is preloaded and the default scale is set to 20. The math functions will calculate their results to the scale set at the time of their call. The math library defines the following functions:.
ENVIRONMENT the environment arguments are processed before any com- mand line argument files. This allows the user to set up "standard" options and files to be processed at every invocation of bc. The files in the environment vari- ables would typically contain function definitions for functions the user wants defined every time bc is run. BC_LINE_LENGTH This should be an integer specifying the number of char- acters in an output line for numbers. This includes the backslash and newline characters for long numbers.
EXAMPLES
In /bin/sh, the following will assign the value of pi to the shell vari- able); } EDITLINE OPTIONS bc is compiled using the editline(3) library. This allows the user to do editing of lines before sending them to bc. It also allows for a history of previous lines typed. This adds to bc one more special variable. This special variable, history is the number of lines of history retained. The default value of -1 means that an unlimited number of his- tory lines are retained. Setting the value of history to a positive num- ber restricts the number of history lines to the number given. The value of 0 disables the history feature. For more information, read the user manual for the editline(3) library. stan- dard environment This version does not conform to the POSIX standard in the processing of the LANG environment variable and all environment variables starting with LC_. names Traditional and POSIX bc have single letter names for functions, variables and arrays. They have been extended to be multi-character names that start with a letter and may contain letters, numbers and the under- score expres- sion. array parameters POSIX bc does not (currently) support array parameters in full. The POSIX grammar allows for arrays in func- tion definitions, but does not provide a method to spec- ify an array as an actual parameter. (This is most likely an oversight in the grammar.) Traditional imple- mentations of bc have only call-by-value array parame- ters. function format POSIX bc requires the opening brace on the same line as the define key word and the auto statement on the next line. =+, =-, =*, =/, =%, =^ POSIX bc does not require these "old style" assignment operators to be defined. This version may allow these "old style" assignments. Use the limits statement to see if the installed version supports them. If it does support the "old style" assignment operators, the state- ment ``a =- 1'' will decrement a by 1 instead of setting a to the value -1. spaces in numbers Other implementations of bc allow spaces in numbers. For example, ` interactive execution code will invalidate the current execution block. The execution block is terminated by an end of line that appears after a complete sequence of state- ments. For example, a = 1 b = 2 has two execution blocks and { a = 1 b = 2 } has one execution block. Any runtime error will termi- nate the execution of the current execution block. A runtime warning will not terminate the current execution block. interrupts During an interactive session, the SIGINT signal (usu- ally generated by the control-C character from the ter- minal) will cause execution of the current execution block to be interrupted. It will display a "runtime" error. dis- tributed..
DIAGNOSTICS
If any file on the command line can not be opened, bc will report that the file is unavailable and terminate. Also, there are compile and run time diagnostics that should be self-explanatory.
HISTORY
This man page documents bc version nb1.0.
AUTHORS
Philip A. Nelson <phil@NetBSD.org> ACKNOWLEDGEMENTS The author would like to thank Steve Sommars for his extensive help in testing the implementation. Many great suggestions were given. This is a much better product due to his involvement.
BUGS
Error recovery is not very good yet. NetBSD 9.1 April 16, 2017 NetBSD 9.1 | https://man.netbsd.org/NetBSD-9.1/bc.1 | CC-MAIN-2021-49 | refinedweb | 2,334 | 58.48 |
"ctypes is an advanced FFI ... package for Python ...", according to its home page.
Python 2.5 includes ctypes.
CTypes FAQs
FAQ: How do I copy bytes to Python from a ctypes.Structure?
def send(self): return buffer(self)[:]
FAQ: How do I copy bytes to a ctypes.Structure from Python?
def receiveSome(self, bytes): fit = min(len(bytes), ctypes.sizeof(self)) ctypes.memmove(ctypes.addressof(self), bytes, fit)
FAQ: Why should I fear using ctypes.memmove?
ctypes.memmove emulates the memmove of C with complete faithfulness. If you tell memmove to copy more bytes than sizeof(self), you will overwrite memory that you do not own, with indeterminate consequences, arbitrarily delayed.
FAQ: How do I start or stop reversing the bytes of each field?
To decide the byte order of your fields, derive your class of _fields_ from the ctypes.LittleEndianStructure class or from the ctypes.BigEndianStructure class rather than deriving your class from the local native ctypes.Structure class.
FAQ: How do I change the byte length of a ctypes.Structure?
Declare the max length and allocate that much memory, but then copy less than all of the memory allocated. You change the byte length that you use, not the byte length that ctypes.sizeof reports. For example:
class MaxByteString(ctypes.Structure): _fields_ = [('bytes', 0xFF * ctypes.c_ubyte)]
FAQ: How do I say memcpy?
ctypes.memmove
(Remember the difference in argument order between memmove and memcpy.)
FAQ: How do I say offsetof?
def offsetof(self, field): return ctypes.addressof(field) - ctypes.addressof(self)
That example of an offsetof method works when self is an instance of the structure, the field is a member of self, and the field is an instance of an array class or structure class.
The offsetof macro of C works more often. The C macro works for any type of field and works when you have the class but no instance. Imitating the C macro with Python ctypes is possible but tedious: you begin by fetching the _pack_ of the self.__class__ and the ctypes.sizeof for each of the _fields_.
FAQ: How do I say uchar?
ctypes.c_ubyte
FAQ: How do I say ((void *) -1)?
INVALID_HANDLE_VALUE = ctypes.c_void_p(-1).value
FAQ: How do I contribute to this CTypes FAQ?
FAQ: Why is the documentation for ctypes so utterly worthless and devoid of useful examples?
1. Learn how to edit this Wiki page at:
2. Post newbie CTypes questions into comp.lang.python, per the remarkable invitation of:
from:
Don't be surprised to have your questions answered by the original ... The best thing about comp.lang.python is how "newbie-friendly" the mail group is. You can ask any question and never get a "RTFM" thrown back at you.
FAQ: How do I learn Python CTypes, after learning C and Python?
1. Read a version of the CTypes tutorial:
The search once upon a time did find
2. Download and locally search a version of the CTypes tutorial.
The search once upon a time did find
3. Read other CTypes wiki's:
The search once upon a time did find
FAQ: How do I convert between c declarations (in .h files) to ctypes declarations?
It would be nice if you could define a struct only once. Is there any module that can parse .h files structs? It seems like it should be here.
The only such parse I know of is part of the python SWIG package (). I imagine that it could become involved to write a far more lean and concise parser without ending up relying on a full fledge C compiler like this (unless you wanted to define only a subset of valid C headers for which to write your parser for). | https://wiki.python.org/moin/ctypes?highlight=(CategoryFaq) | CC-MAIN-2016-50 | refinedweb | 619 | 77.13 |
Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 16, 2009 12:26 PM
I'm excited about the new module implementation, but I have some questions about the way it's getting implemented in Cairngorm 3 so far. It would seem that the way InsyncContext is architected, the modules are hard coded in the app. The way I've implemented modules in the past was to make them completely data-driven, so I can make the available modules user profile-driven.
I'm interested in anyone's thoughts on how this could be done with Cairngorm 3. Maybe some sort of Factory class that provides a new ParsleyModuleDescriptor with passed parameters for the module URL, etc?
Jeff
1. Re: Cairngorm 3 & Dynamic ModulesTom Sugden Nov 18, 2009 6:10 AM (in response to Jeff Battershall)
Hi Jeff,
That's a good question and I think modules are a good candidate for runtime configuration. We've also seen the requirement you mention, where different users are entitled to access different modules. Also, it can be desirable to release new modules into production without needing to redeploy the shell application that loads them.
So how could this be implemented? Well both Spring ActionScript and Parsley provide a means of configuring a context at runtime from an XML file. With both frameworks this system is commonly used for configuring the Flex logging framework, but the same mechanism could be used for configuring module descriptors. What is needed is a class for describing the module and a component for loading and rendering it.
Re. the Cairngorm module library, it's early days. This is one of the more experimental parts of Cairngorm and its design is very much up for debate. The intention is to provide some features not included in the Flex SDK, such as:
- Abstraction of the loading process, so that modules, compiled stylesheets, sub-applications and other asynchronous loading processes can be handled in the same way.
- Support for progress and error states that display while the module is loading and if an error occurs.
- Runtime configuration of modules through declarative descriptors, placed in XML or MXML (your requirement).
- Lazy loading of modules in response to messages.
At the same time, features of the standard Flex ModuleLoader shouldn't be lost, such as the ability to declare multiple instances of the same module at different parts of the UI.
Do these requirements resonate with you? Do you have thoughts on how you'd like to see them solved? Have you solved them before? It would be nice to stay as close to convention as possible.
Best,
Tom
2. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 18, 2009 6:50 AM (in response to Tom Sugden)
Thanks Tom,
The way I solved this previously was via a LoadModule event, which was
derived from the profile downloaded for the logged in user. This was based
on a simple "there can be only one" module of a given type. The module was
loaded into a ViewStack and as part of its instantiation, loaded necessary
data assets. The issue I ran into as the unloading of the module. Some
modules would load/unload with no memory leaks a some wouldn't. I spent
quite a bit of time with the profiler trying to figure out why. Binding
would seem to be a possible liability as when using quick and dirty MXML
binding, the references were hard.
I was very excited to read the Parsley docs about modules and contexts. If
the context that the module is relying on is destroyed on unload, that would
seem to address the issues with the module not unloading cleanly, so long as
the view is bound to model objects with weak references.
In the case of Cairngorm and Parsley, injection should be used instead, but
the question becomes how to perform that injection in ActionScript vs. MXML.
That's what leads me toward a global Module factory class that would deliver
a fully configured module based upon a descriptor and add that to the
appropriate landmark in the view hierarchy.
Jeff
3. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 19, 2009 11:43 AM (in response to Jeff Battershall)
I decided to throw caution to the winds and attempt to port an existing application over to CG3 with modules - but with the modules hard-coded like they are in the sample CG3 app. I'm treating this as sort of a tutorial to get up to speed with CG3 and Parsley. I've been able to get the modules to load correctly but run aground on commands. Events fired in the module's context don't seem to want to communicate to Comands declared in the same context. Basically the messaging 'sub-system' isn't working and I haven't been able to figure out what I'm doing wrong.
Any advice appreciated.
Jeff
4. Re: Cairngorm 3 & Dynamic ModulesAlex Uhlmann Nov 19, 2009 11:48 AM (in response to Jeff Battershall)
Are you using parsley 2.1 and the command and module libs from trunk?
If not, please try that.
Sent from my iPhone
5. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 19, 2009 12:10 PM (in response to Alex Uhlmann)
Thanks Alex,
I believe that I've got all the dependencies downloaded from the trunk and in the build path:
Module
Navigation
Integration
IntegrationRPC
Task
parsley-complete-flex3-2.1.0.swc
spicelib-complete-flex-2.1.0.swc
Here's the PM that is dispatching the event:
ilc.modules.timeline.presentation
{
import flash.events.EventDispatcher;
import ilc.modules.timeline.GetDecadesEvent;
[
Event(name="getDecades", type="ilc.modules.timeline.GetDecadesEvent")]
[ManagedEvents(
"getDecades")]
public class TimelinePM extends EventDispatcher
{
public function TimelinePM()
{
getDecades();
}
public function getDecades():void
{
dispatchEvent(
new GetDecadesEvent());
}
}
}
Here's the Command class:
package
{
import ilc.modules.timeline.GetDecadesEvent;
import ilc.modules.timeline.domain.Decades;
import mx.collections.ArrayCollection;
import mx.rpc.AsyncToken;
import mx.rpc.events.FaultEvent;
import mx.rpc.events.ResultEvent;
import mx.rpc.remoting.mxml.RemoteObject;
public class GetDecadesCommand
{
public function GetDecadesCommand()
{
}
[MessageDispatcher]
public var dispatcher:Function;
[Inject]
public var decades:Decades;
[Inject(id=
'remoteService')]
public var service:RemoteObject;
[Command]
public function execute(event:GetDecadesEvent):AsyncToken
{
trace('getting decades');
return service.getDecades() as AsyncToken;
}
[CommandResult(selector=
"getDecades")]
public function serviceResult(event:ResultEvent):void
{
decades.decades =
new ArrayCollection(event.result as Array);
}
[CommandFault(selector=
"getDecades")]
public function showError(fault:FaultEvent):void
{
trace(fault.toString());
}
}
}
Do you see anything obvious?
Jeff
6. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 19, 2009 12:36 PM (in response to Jeff Battershall)
I'm beginning to think there's something up with the way I'm creating the child contexts - as my PM that should have been injected into one of the views in the heirarchy, but it's null. It should be as simple as creating a new config,mxml in the module itself - should it not?
Jeff
7. Re: Cairngorm 3 & Dynamic ModulesAlex Uhlmann Nov 19, 2009 2:28 PM (in response to Jeff Battershall)
You also need to wire your view to Parsley with i.e. a configureIOC event or the Configure tag. Check out the Parsley docs for that. Have you checked out the InsyncModularExtended projects for an example using the C3 module library? There's also a tutorial on that in our guidelines.
8. Re: Cairngorm 3 & Dynamic ModulesTom Sugden Nov 20, 2009 12:52 AM (in response to Jeff Battershall)
I think that messaging is normally scoped to a context hierarchy, so in order for a message sent in the context of a module to be received by handlers in another context, those contexts must be within a hierarchy. In other words, when you build the context for your module, Parsley needs to be able to find the parent context too. It tries to do this automatically with Parsley 2.1, but in some cases it cannot succeed, so you need to specify the parent context manually via the FlexContextBuilder.build() method.
9. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 20, 2009 3:33 AM (in response to Tom Sugden)
Thanks Tom (and Alex),
I'll check the Parsley docs on how to refer to the parent context. In the case of modules it would seem to me that the model objects would want to live exclusively in the context of the module - am I right?
What I am noticing in the case of my module is that the context is available at the module view heirarchy level but not available in any child views, even if I have included them in the context definition and have fired off the 'configureIOC' event from within the view. In the app I am converting, the module contains another view which I am injecting my PM into, and the PM keeps coming back null.
Even still, though, when I am firing my event which should result in command class getting executed in my module, the messaging does not appear to be working.
Jeff
10. Re: Cairngorm 3 & Dynamic ModulesTom Sugden Nov 20, 2009 3:47 AM (in response to Jeff Battershall)
Maybe you're using the wrong view wiring event? In Parsley 2.0 it was called "configureIOC", but in Parsley 2.1 the ViewConfigurationEvent,CONFIGURE_VIEW is used, or the <Configure> tag.
11. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 20, 2009 3:49 AM (in response to Jeff Battershall)
Tom,
A further development: if I use the MessageHandler metatag instead of Command, it works (at least in the immediate context of the module). Am I correct that the Command extensions are in the Integration.swc? I checked out the integration libraries have referenced the Integration.swc from my test project. Is there another dependency that I'm missing?
Jeff
12. Re: Cairngorm 3 & Dynamic ModulesAlex Uhlmann Nov 20, 2009 4:05 AM (in response to Jeff Battershall)
Jeff,
please have a look into the InsyncModularExtended sample applications. They also use Commands, the IntegrationRPC library and show you how to set it up. It works there but I'd be interested if there's anything different in your scenario that would make the integration library fail. Maybe you're missing to initialize the integration RPC library in your entry level MXML. Check out the Insync samples. They have commands in the contacts module.
13. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 20, 2009 4:29 AM (in response to Alex Uhlmann)
Alex,
I did not have the integration libs tag in my module - only at the top level of the app That was it!!!!!! - thank you and Tom for hanging in there on this.
However, I'm still not getting my model injected into a view child of my module. I switched to ViewConfigurationEvent.CONFIGURE_VIEW to ensure that there was nothing up with my spelling but no, my child view still isn't getting injected with my model, even though I have the view in my context definition.
Jeff
14. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 20, 2009 12:01 PM (in response to Jeff Battershall)
Tom, Alex,
Although my understanding of C3 is progressing, it is far from what it should be. The most frustrating part of this is getting the data retrieved by a Command back into the model. It appears that the PM model that I might be injecting is not singleton or possibly may exist in more than one scope. I've tried passing a reference to my target model in a Managed event, but so far no dice. Almost makes me wish for a ModelLocator.
Jeff
15. Re: Cairngorm 3 & Dynamic ModulesAlex Uhlmann Nov 20, 2009 12:14 PM (in response to Jeff Battershall)
16. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 20, 2009 12:18 PM (in response to Alex Uhlmann)
Alex,
I have been into the Parsley docs quite a bit. Just gotta keep pushing until I fully assimilate the pattern of development.
Jeff
17. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 21, 2009 4:11 AM (in response to Alex Uhlmann)
Alex,
I admit to being completely confused at this point. I fire off a managed
event which invokes a Command, which returns an array, everything seems to
be fine, but the reference if the model object that's being updated is not
the one I referred to. It's almost as if the context and my normal Flex
view aren't talking to each other. It's like the context and view are in
parallel universes. And I still have the issue where subviews aren't
getting processed by the context upon instantiation.
I'm quite sure I am **** something wrong - I'm comparing my source code
against the sample insync extended app and trying to identify where I've
gone astray...
Jeff
18. Re: Cairngorm 3 & Dynamic ModulesAlex Uhlmann Nov 21, 2009 6:20 AM (in response to Jeff Battershall)
Have you looked into the Parsley forum post I pointed out in my last post (discussion between Xavi and Jens). Your problem sounds familiar. Also, make sure you understand Parsley's messaging scope feature.
19. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Nov 23, 2009 10:15 AM (in response to Alex Uhlmann)
Alex, Tom,
I'm making some incremental headway but it is slow going as I try to figure out whether the behavior I'm seeing is the result of my inexperience with Parsley or possible bugs in the framework itself. Here's what I've been able to accomplish so far:
I've got my shell app working with modules being loaded on demand - not dynamic modules mind you - but hard coded at this point. Works fine.
I've also been able to get my Commands to send results back to the view - this works but things go awry when I have a subview to the module. I can inject the child view, but the child view keeps getting added/removed/added into the context and my RPC calls are duplicated. In the end, the bindings from my model to the child view's components don't get executed.
Jeff
20. Re: Cairngorm 3 & Dynamic ModulesAlex Uhlmann Nov 23, 2009 10:34 PM (in response to Jeff Battershall)
Hi Jeff,
sorry but I have to repeat myself. Have you read the Parsley forum post between Xavi and Jens I pointed to above? There are also more threads on that topic in the Parsley forum if you search for it. Your problem sounds exactly like that. It's caused by multiple addedToStage events. There are workarounds.
21. Re: Cairngorm 3 & Dynamic ModulesAlberto Alcaraz Jan 19, 2010 5:05 AM (in response to Jeff Battershall)
UPDATED: Ok, doesn't mind. I just understand that you were simply meaning to put the tag <cairngorm:CairngormIntegrationLib /> inside the module.
Thanks anyway!!
Hi Jeff, I'm having the exact same problem in a project; if I change [Command] by [MessageHandler] it works correctly.
The problem is that I have the integrationlib.swc added to the module. What do you mean with "I did not have the integration libs tag in my module"?
Thanks!
22. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Jan 22, 2010 7:29 AM (in response to Tom Sugden)
Tom,
I was wondering if you've been working on runtime module configuration and if there's been progress in that area.
Jeff
23. Re: Cairngorm 3 & Dynamic ModulesTom Sugden Jan 22, 2010 8:22 AM (in response to Jeff Battershall)
Hi Jeff,
I haven't had time to work on this library myself but have a few opinions about how it should change. It's being used on a few other Adobe projects, so I expect some improvements based on their experiences.
I'd like the design to change in this way:
1. Module loading abstracted by an IContent (or something) interface instead of overloading IModuleInfo. Then different implementations can be used for modules, modules with compiled CSS, or other configurations. Some projects need to use special authentication services before loading each module, so that kind of customization could be hidden behind the interface. Instances of these classes could be declared in MXML or XML config files, with the latter being suitable for dynamic modules.
2. Change the ViewLoader component so it works for multiple instances of the same module, and so the behavior is more similar to ModuleLoader in Flex.
3. Write a new stack container for modules, so that only a single loading-overlay and error-state child is present. If you place 10 ViewLoaders in a view stack, you end up with a loading overlay child for each module.
Best,
Tom
24. Re: Cairngorm 3 & Dynamic ModulesTom Sugden Jan 22, 2010 8:29 AM (in response to Jeff Battershall)
Hi Jeff,
Did you notice that commands have been incorporated into the Parsley 2.2 release candidate 2:
Jens Halm made some definite improvements over the implementation in Cairngorm 3, so I'd recommend having a look at those. The short-lived command objects are particularly nice. It looks like the Parsley 2.2 developer manual is still in progress, but a description of the new command features can be found here:
Best,
Tom
25. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Jan 22, 2010 8:36 AM (in response to Tom Sugden)
Thanks, Tom.
My particular use case is pretty darned simple - user profile maps to
available modules and navigation reflects this. When navigated to for the
first time, the module is loaded into a view stack. In my app, there's only
area that loads modules. So what we really have is a shell application with
modules loaded on demand.
I guess my question is, would a simpler approach be more appropriate for my
app? Conceivably it could be as simple is sending a message that a certain
module should be loaded (with it's url, etc), and the selectedIndex of the
ViewStack changed accordingly.
Jeff
26. Re: Cairngorm 3 & Dynamic ModulesJeff Battershall Jan 22, 2010 11:08 AM (in response to Tom Sugden)
Tom,
That looks great! Nice collaboration between you and Jens!
Jeff | https://forums.adobe.com/message/2416535 | CC-MAIN-2015-14 | refinedweb | 3,010 | 61.77 |
How Dependent Haskell Can Improve Industry Projects
Dependent types are a hot topic in the Haskell community. Many voices advocate for adding dependent types to Haskell, and a lot of effort is being invested towards that end. At the same time, the sceptics raise various concerns, one being that dependent types are more of a research project than a tool applicable in industrial software development.
That is, however, a false dichotomy. While dependent types are still a subject of active research, we already have a good enough understanding of them to see how to apply them in real-world situations and reap immediate benefits.
In this post, we show that Dependent Haskell can be used to simplify and improve the code in a large production codebase. The subject of our case study is Morley: an implementation of the smart contract language of the Tezos blockchain.
Getting started with Dependent Haskell
Dependent types are a broad concept. If you take a close look at the existing dependently typed languages, such as Agda, Coq, and Idris, you will find that their type systems all have certain unique characteristics. In that regard, Haskell is also about to get its own flavour of dependent types, designed to fit as well as possible with the rest of the language, rather than blindly imitate prior work.
Most of the theory has been developed in Adam Gundry’s “Type Inference, Haskell and Dependent Types” and then Richard Eisenberg’s “Dependent Types in Haskell: Theory and Practice”. The reader is encouraged to get acquainted with these bodies of work, especially the latter (since it’s more recent).
As a more casual reading, there is the GHC GitLab Wiki Page on the topic and GHC Proposal #378. While not as extensive, these resources provide a solid starting point for learning about the proposed design of Dependent Haskell.
Finally, for the sake of completeness, we shall now recap the most important concepts here: quantifiers, dependence, visibility, and erasure.
Quantifiers and their attributes
When we say “quantifier”, we mean the part of a type that corresponds to a place in a term where you could, if you wanted, introduce a variable (source: comment 775422563). As usual, the definition alone is rather hard to grasp, so let’s make things concrete with a couple of examples.
Consider the lambda abstraction
\a -> a == 0. The corresponding type is
Int -> Bool. We can split both of these as follows:
The part of the type corresponding to the term-level binder is
Int -> ..., so that’s the quantifier.
Note that the lambda abstraction may be obscured by other language features, such as pattern matching. For example:
f :: Int -> Bool f 0 = True f _ = False
Even though there’s no explicit lambda abstraction or variable here, we still call the
Int -> ... part of the type a quantifier. For this discussion, we look at terms through the lens of their desugaring. The above code snippet is really equivalent to:
f :: Int -> Bool f = \a -> case a of 0 -> True _ -> False
And now the lambda abstraction is explicit. Read our article about Haskell desugaring to learn more about this transformation.
Another example of a quantifier is
ctx => ..., which introduces a class constraint. If you are not familiar with dictionary passing, then it might not be apparent how constraints are related to variables and quantification. And what makes it tricky is that we can’t do a transformation to make this apparent in surface Haskell, we need to look at the internal language of GHC called Core. Fortunately, this is also covered by the aforementioned article about desugaring.
In short, a constrained function of type
Num a => a -> a is equivalent to a function of type
NumDict a -> a -> a, where
NumDict is a record containing implementations of numeric methods such as
(+),
(*),
negate, and so on.
That’s also where the concept of visibility comes into play. We call
a -> a visible quantifier because at the term level, both abstraction and application are explicit. On the other hand,
ctx => ... is an invisible quantifier, as the term-level class method dictionaries are hidden from the programmer.
In Haskell, as specified by the Haskell 2010 Report, these are the only two quantifiers. However, with the
ExplicitForAll extension, we get another one:
forall a. ... . Just as the double arrow,
forall is an invisible quantifier, so it may be hard to see why it’s a quantifier at all (and it is also covered by the article on desugaring). However, with the
TypeApplications extension, you can override the visibility at use sites. Instead of writing
map (>0), one can write
map @Int @Bool (>0). The three inputs correspond to the first three quantifiers in the type of map.
map :: forall a. -- e.g. @Int forall b. -- e.g. @Bool (a -> b) -> -- e.g. (>0) ([a] -> [b])
This leaves us with three quantifiers in today’s Haskell, which are either visible or invisible:
Visibility, however, is not the only attribute we care about. Another one is erasure. We call a quantifier retained (as opposed to erased) if it is possible to pattern match on the variable it introduces.
For example, the following code is not valid:
evil_id :: forall a. a -> a evil_id x = case a of -- Nope! Int -> 42 _ -> x
In the above code snippet, the idea is that
evil_id would behave mostly as
id, but return
42 when applied to an
Int. That is not possible, though, as the type variable
a is not available for case analysis. We, therefore, call
forall an erased quantifier. Since erased arguments cannot be subjected to case analysis, they never affect which code branch is taken.
Erased arguments are not passed at runtime.
Also, since class method dictionaries are passed at runtime, we say that
ctx => ... is a retained quantifier. Naturally, the data contained in the dictionary is available for case analysis.
For example,
evil_id can be implemented by utilising the
Typeable class:
import Type.Reflection import Data.Type.Equality evil_id :: forall a. Typeable a => a -> a evil_id x = case testEquality (typeRep @a) (typeRep @Int) of Just Refl -> 42 Nothing -> x
Finally, let us discuss dependence. A quantifier is considered dependent if the variable it introduces can be mentioned in the rest of the type.
For example, ordinary functions are not dependent:
f :: Bool -> ... -- x cannot be used here f = \x -> ...
On the other hand,
forall is a dependent quantifier:
f :: forall x. ... -- x can be used here f = ...
This means that the value taken by a dependent variable can affect the rest of the type. The type of
(+) @Int is
Int -> Int -> Int, whereas the type of
(+) @Double is
Double -> Double -> Double.
Let us conclude this subsection with a summary of the quantifiers available today and their attributes:
New quantifiers of Dependent Haskell
You may notice that the quantifier table has quite a few missing rows. What about visible erased dependent quantification, or invisible retained dependent quantification, and so on?
The main focus of Dependent Haskell is adding the most powerful form of quantification that would be simultaneously retained and dependent. We shall call the new quantifier
foreach. Visibility is not that important, so the plan is to offer both the visible and the invisible variation. And while we’re at it, we might as well throw visible erased dependent quantification into the mix.
The new quantifiers would provide a more principled replacement to some current practices, including
Proxy,
Typeable,
TypeRep,
Sing, and
SingI. That is precisely what we are about to explore since Tezos Morley happens to make use of these definitions.
Specifically, we will perform the following (mostly mechanical) transformations:
As a result, the code shall become more laconic and easier to maintain. We will no longer require the
singletons package, which defines
Sing and
SingI, since we use all these new quantifiers instead of intricate machinery from
singletons.
How dependent types can help industry
In this section, we discuss several examples of how one can simplify industrial code that makes use of advanced types by means of Dependent Haskell. Programmers already simulate dependent types (e.g., using singletons) in their projects for miscellaneous purposes. We show what such projects might look like if we can get rid of those simulacrums in favour of real dependent types.
Our case study is Morley, which is a part of Tezos.
Tezos is a blockchain system with a proof-of-stake consensus algorithm. Michelson is a functional smart contract language for the Tezos blockchain. It’s a stack-based language with strong typing, and it is inspired by such functional languages as ML and Scheme. There’s also a formal description of the Michelson operational semantics available at.
Morley is a set of tools for writing Michelson smart contracts. The word ‘Morley’ is a bit overloaded since it refers to the Haskell package, the smart contract language that extends Michelson, and the same-named framework. The package consists of the Morley interpreter implementation and the type checker.
Getting rid of
singletons
singletons
singletons is a library that emulates dependent types in Haskell. You can learn more about it from its README and the paper that introduced the library: “Dependently Typed Programming with Singletons” by Richard Eisenberg and Stephanie Weirich.
We assume that constructions such as the
Sing type family and the
SingI class are already known to the reader. Otherwise, one may have a glance at the documentation.
Example 1:
T and
getWTP
Tand
getWTP
Let us have a look at the data type
T from
morley:
data T = TKey | TUnit | TSignature | TChainId | TOption T | TList T | TSet T | TOperation | TContract T | TPair T T | TOr T T | TLambda T T | TMap T T | TBigMap T T | TInt | TNat | TString | TBytes | TMutez | TBool | TKeyHash | TBls12381Fr | TBls12381G1 | TBls12381G2 | TTimestamp | TAddress | TNever
This is a regular ADT that describes the types of Michelson values. If we wanted to verify that a type is well-formed, we could implement a predicate:
isWellFormed :: T -> Bool
However, using a mere
Bool means we don’t have any evidence that validation succeeded. Instead,
morley defines the
getWTP of the following type:
getWTP :: forall (t :: T). (SingI t) => Either NotWellTyped (Dict (WellTyped t))
If the input type
t :: T is well-formed, the function produces
Right with the evidence. Otherwise, it fails and returns
Left. Notably, the type of evidence
WellTyped t refers to the value of
t. That is why we had to employ the elaborate construction
forall t. SingI t => instead of adding a simple function parameter
T ->.
Quite a few complications arise from this. Firstly, we need to generate singletons for
T:
$(let singPrefix, sPrefix :: Name -> Name singPrefix nm = mkName ("Sing" ++ nameBase nm) sPrefix nm = mkName ("S" ++ nameBase nm) in withOptions defaultOptions{singledDataConName = sPrefix, singledDataTypeName = singPrefix} $ concat <$> sequence [genSingletons [''T], singDecideInstance ''T]
This snippet of Template Haskell generates
SingT, which is the singleton type for
T, and also
SingI and
SDecide instances:
data SingT t where STUnit :: SingT TUnit STSignature :: SingT TSignature ... STPair :: SingT t1 -> SingT t2 -> SingT (TPair t1 t2) ... -- and so on for each constructor of T.
Secondly, the implementation of
getWTP now has to work with
SingT values instead of plain
T values.
Let’s have a look at one of the branches of
getWTP that handles
STPair:
getWTP :: forall t. (SingI t) => Either NotWellTyped (Dict (WellTyped t)) getWTP = case sing @t of ... STPair s1 s2 -> withSingI s1 $ withSingI s2 $ fromEDict (getWTP_ s1) $ fromEDict (getWTP_ s2) $ Right Dict … getWTP_ :: forall t. Sing t -> Either NotWellTyped (Dict (WellTyped t)) getWTP_ s = withSingI s $ getWTP @t
TPair is a constructor of
T that corresponds to the Michelson tuple data type, and
STPair a b is the corresponding singleton. This function has the
SingI constraint, and here we pattern match on
sing @t, where
t is a type variable of kind
T.
With Dependent Haskell, we are getting the
foreach quantifier, which could be used to simplify all the above. The type of
getWTP would become:
getWTP :: foreach (t :: T). Either NotWellTyped (Dict (WellTyped t))
And the implementation of
getWTP could pattern match on regular
T values rather than
SingT:
getWTP @(TPair s1 s2) = fromEDict (getWTP @s1) $ fromEDict (getWTP @s2) $ Right Dict
Moreover, we no longer need the
getWTP_ helper function, which was used for recursive calls. Instead, we simply use a visibility override
@.
Example 2:
Peano and
UpdateN
Peanoand
UpdateN
Here’s another example. In
morley we need both term-level and type-level natural numbers to index the elements on the stack of the stack machine.
At the moment, we have the classic data type that defines natural numbers inductively à la Peano:
data Peano = Z | S Peano
And with the
DataKinds language extension, we can use it at the type level, as we do in the type of instructions for the stack machine:
data Instr (inp :: [T]) (out :: [T]) where … UPDATEN :: forall (ix :: Peano) (val :: T) (pair :: T) (s :: [T]). ConstraintUpdateN ix pair => PeanoNatural ix -> Instr (val : pair : s) (UpdateN ix val pair ': s) ...
But in addition to the type-level variable
ix :: Peano, we also need to mirror it at the term level to pattern match on it. That is the purpose of the
PeanoNatural ix field.
Typically, one would use a singleton type for this purpose:
data SingNat (n :: Nat) where SZ :: SingNat 'Z SS :: !(SingNat n) -> SingNat ('S n)
But we take it one step further. There are plenty of situations when we need to convert this singleton value to a natural number represented as non-inductive
Natural.
The straightforward solution is to utilise a conversion function like the following one:
toPeano :: SingNat n -> Natural toPeano SZ = 0 toPeano (SS n) = 1 + (toPeano n)
Such a conversion is at runtime, and it would be inefficient to invoke it repeatedly. Instead of such a conversion, we define the
PeanoNatural data type that caches the result of such conversion next to the singleton.
data PeanoNatural (n :: Peano) = PN !(SingNat n) !Natural
Of course, we don’t want to make an element of
PeanoNatural from an arbitrary pair of
SingNat n and
Natural. We would like to have an invariant that might be formulated as
PN s k :: PeanoNatural n iff
k = toPeano s. We formalise this idea by introducing pattern synonyms
Zero and
Succ.
data MatchPS n where PS_Match :: PeanoNatural n -> MatchPS ('S n) PS_Mismatch :: MatchPS n matchPS :: PeanoNatural n -> MatchPS n matchPS (PN (SS m) k) = PS_Match (PN m (k - 1)) matchPS _ = PS_Mismatch pattern Zero :: () => (n ~ 'Z) => PeanoNatural n pattern Zero = PN SZ 0 pattern Succ :: () => (n ~ 'S m) => PeanoNatural m -> PeanoNatural n pattern Succ s <- (matchPS -> PS_Match s) where Succ (PN n k) = PN (SS n) (k+1)
Those patterns cover all possible cases, but GHC can’t figure it out on its own, so we have to use the
COMPLETE pragma to avoid the
incomplete-patterns warnings.
With Dependent Haskell, we could rewrite
PeanoNatural to avoid the use of singletons:
data PeanoNatural (n :: Peano) where PN :: foreach !(n :: Peano) -> !Natural -> PeanoNatural n
Interestingly, this reveals a need for strict
foreach – a topic not previously discussed in the literature, so it’s worth investigating separately.
Back to the
UPDATEN constructor of
Instr:
UPDATEN :: forall (ix :: Peano) (val :: T) (pair :: T) (s :: [T]). ConstraintUpdateN ix pair => PeanoNatural ix -> Instr (val : pair : s) (UpdateN ix val pair ': s)
UPDATEN is the instruction for the stack machine to update the n-th node of a given right-combed pair on top of the stack.
Here,
UpdateN is a type-level list operation:
type family UpdateN (ix :: Peano) (val :: T) (pair :: T) :: T where UpdateN 'Z val _ = val UpdateN ('S 'Z) val ('TPair _ right) = 'TPair val right UpdateN ('S ('S n)) val ('TPair left right) = 'TPair left (UpdateN n val right)
In Dependent Haskell, we can redefine
UpdateN as a term-level function:
updateN :: Peano -> T -> T -> T updateN Z val _ = val updateN (S Z) val (TPair _ right) = TPair val right updateN (S (S n)) val (TPair left right) = TPair left (updateN n val right)
Getting rid of
Proxy
Proxy
In Morley, we use phantom labels to identify arithmetic and other algebraic operations:
data Add -- addition data Sub -- subtraction data Mul -- multiplication data And -- conjunction data Or -- disjunction data Xor -- exclusive disjunction ...
Then to implement these operations, we have a class called
ArithOp:
class (Typeable n, Typeable m) => ArithOp aop (n :: T) (m :: T) where type ArithRes aop n m :: T evalOp :: proxy aop -> Value' instr n -> Value' instr m -> Either (ArithError (Value' instr n) (Value' instr m)) (Value' instr (ArithRes aop n m))
The
aop type variable stands for one of the aforementioned operations. The
n and
m type variables stand for the input types of the operation, and a single operation can be overloaded to work on various inputs.
instance ArithOp 'Add TInt TInt -- addition of Ints instance ArithOp 'Add TNat TNat -- addition of Nats ... instance ArithOp 'And TBool TBool -- logical `and` instance ArithOp 'And TNat TNat -- bitwise `and`
The
ArithRes type family specifies the type of the result:
instance ArithOp 'Add TInt TInt where type ArithRes 'Add TInt TInt = TInt ... -- offsetting a timestamp by a given number of seconds instance ArithOp 'Add TInt TTimestamp where type ArithRes 'Add TInt TTimestamp = TTimestamp ...
Finally, we have the
evalOp method which actually implements the operation at the term level:
instance ArithOp Or 'TNat 'TNat where type ArithRes Or 'TNat 'TNat = 'TNat evalOp _ (VNat i) (VNat j) = Right $ VNat (i .|. j)
You will notice that
evalOp ignores its first argument, which is a proxy value. Its only role is to specify the operation at the use site.
let k = evalOp (Proxy :: Proxy Add) n m
The problem with
Proxy is that it’s a value passed at runtime, so it incurs a certain amount of overhead. The optimiser can’t always get rid of it. Another problem is that constructing it at use sites introduces syntactic noise and makes the API less convenient. We would rather write
evalOp Add than
evalOp (Proxy :: Proxy Add).
What if we simply removed it? Like so:
class ... => ArithOp aop (n :: T) (m :: T) where ... evalOp -- removed the (proxy aop ->) parameter :: Value' instr n -> Value' instr m -> Either (ArithError (Value' instr n) (Value' instr m)) (Value' instr (ArithRes aop n m))
Then we would solve both problems: no input to pass at runtime, and at use sites we could simply write
evalOp @Add. But the cost is that we’d introduce a new problem: the
aop type variables would become ambiguous. That is permitted if the
AllowAmbiguousTypes extension is enabled, but it leads to major deterioration of error messages if one forgets to specify the ambiguous type variable at the use site.
One of the quantifiers of Dependent Haskell offers a better solution. The visible
forall a -> quantifier is mostly equivalent to a regular
forall a., but the type variable must be always specified at use sites and is never ambiguous.
The type we want for
evalOp is:
evalOp :: forall instr n m. forall aop -> ArithOp aop n m => Value' instr n -> Value' instr m -> Either (ArithError (Value' instr n) (Value' instr m)) (Value' instr (ArithRes aop n m))
For variables we want the compiler to infer at use sites, we use the ordinary quantifier
forall instr n m.; but for
aop, which must be specified explicitly at the use site, we use
forall aop ->.
We could apply the same trick to other functions which involve this variable. For example, in today’s Morley there’s a wrapper around
evalOp that operates on values from the stack of the stack machine.
runArithOp :: (ArithOp aop n m, EvalM monad) => proxy a
With the visible forall, we would rewrite it as follows:
runArithOp :: forall aop -> (ArithOp aop n m, EvalM mon
The
runArithOp function evaluates arithmetic operations and either succeeds or fails. The first argument is
proxy aop, which specifies the arithmetic operation itself. The function uses
evalOp, a method of the type class
ArithOp with the associated type
ArithRes that represents the resulting type of operation.
This change cascades downstream to other functions that make use of
runArithOp. For example, the
runInstrImpl function has the following equation:
type InstrRunner m = forall inp out. Instr inp out -> Rec StkEl inp -> m (Rec StkEl out) runInstrImpl :: EvalM m => InstrRunner m -> InstrRunner m … runInstrImpl _ OR (l :& r :& rest) = (:& rest) <$> runArithOp (Proxy @Or) l r
Instead of
Proxy @Or, we would simply write
Or here:
runInstrImpl _ OR (l :& r :& rest) = (:& rest) <$> runArithOp Or l r
Term-level functions instead of type families
Example 1:
Drop and
Take
Dropand
Take
In Dependent Haskell, we will be able to use functions at the type level. In particular, we can get rid of type families in favour of usual term-level functions.
For example, we may replace the following type families with the corresponding functions on lists:
type family Drop (n :: Peano) (s :: [k]) :: [k] where Drop 'Z s = s Drop ('S _) '[] = '[] Drop ('S n) (_ ': s) = Drop n s type family Take (n :: Peano) (s :: [k]) :: [k] where Take 'Z _ = '[] Take _ '[] = '[] Take ('S n) (a ': s) = a ': Take n s
These type families are from
morley as well. We replace
Drop and
Take with their usual term-level counterparts:
drop :: Peano -> [a] -> [a] drop Z l = l drop _ [] = [] drop (S n) (_ : s) = drop n s take :: Peano -> [a] -> [a] take Z _ = [] take _ [] = [] take (S n) (x : xs) = x : take n s
In addition to operations on lists, we use type families to enforce invariants:
type family IsLongerThan (l :: [k]) (a :: Peano) :: Bool where IsLongerThan (_ ': _) 'Z = 'True IsLongerThan (_ ': xs) ('S a) = IsLongerThan xs a IsLongerThan '[] _ = 'False type LongerThan l a = IsLongerThan l a ~ 'True
The
IsLongerThan is a binary predicate that is true iff the length of a list is greater than a given natural number.
One may reformulate this piece of code in Dependent Haskell as follows:
isLongerThan :: [a] -> Peano -> Bool isLongerThan xs n = length xs > n -- longerThan :: [a] -> Peano -> Constraint longerThan l n = isLongerThan l n ~ True
Example 2:
IsoValue and
ToT
IsoValueand
ToT
Now we have a look at the
IsoValue type class. This type class defines a mapping from Haskell types to Michelson ones using an associate type:
class (WellTypedToT a) => IsoValue a where -- | Type function that converts a regular Haskell type into a @T@ type. type ToT a :: T type ToT a = GValueType (G.Rep a)
In Dependent Haskell, we can replace associated types with methods (assuming #267 to control visibility):
class (WellTypedToT a) => IsoValue a where toT a :: T
We can further translate type families that make use of
ToT. For example, currently, we have
ToTs that applies
ToT to a list of types:
type family ToTs (ts :: [Type]) :: [T] where ToTs '[] = '[] ToTs (x ': xs) = ToT x ': ToTs xs
With DH, one could simply write
map toT.
Example 3:
DUPN
DUPN
Now let us consider an example directly related to the Morley stack machine.
Recall that
Instr is the data type that represents the stack machine instructions, such as
UPDATEN or
DUPN:
data Instr (inp :: [T]) (out :: [T]) where ... DUPN :: forall (n :: Peano) inp out a. (ConstraintDUPN n inp out a) => PeanoNatural n -> Instr inp out ...
As with
UPDATEN discussed in an earlier section, we rewrite
DUPN to use
foreach instead of
PeanoNatural:
data Instr (inp :: [T]) (out :: [T]) where ... DUPN :: foreach (n :: Peano) -> forall inp out a. (ConstraintDUPN n inp out a) => Instr inp out ...
We have already discussed this transformation, and now we are interested in something else: the
ConstraintDUPN constraint.
type ConstraintDUPN n inp out a = ConstraintDUPN' T n inp out a type ConstraintDUPN' kind (n :: Peano) (inp :: [kind]) (out :: [kind]) (a :: kind) = ( RequireLongerOrSameLength inp n , n > 'Z ~ 'True , inp ~ (Take (Decrement n) inp ++ (a ': Drop n inp)) , out ~ (a ': inp) )
Let’s focus on this line in particular:
inp ~ (Take (Decrement n) inp ++ (a ': Drop n inp))
There are four type families involved here:
Take,
Drop,
Decrement, and
++. We already discussed
Take and
Drop.
Decrement is defined as follows:
type family Decrement (a :: Peano) :: Peano where Decrement 'Z = TypeError ('Text "Expected n > 0") Decrement ('S n) = n
Once again, in Dependent Haskell, we don’t need to replicate arithmetic operations at the type level as type families. So we replace the capital ‘D’ with the lowercase one in
Decrement.
decrement :: Peano -> Peano decrement Z = error "Expected n > 0" decrement (S n) = n
The call to
TypeError can be translated to the familiar term-level
error. As to
Take and
Drop, we have demonstrated their translation above. As to
++, currently we use one from the
vinyl library.
type family (as :: [k]) ++ (bs :: [k]) :: [k] where '[] ++ bs = bs (a ': as) ++ bs = a ': (as ++ bs)
In Dependent Haskell, we could use the term-level one from
base. So, we redefine
ConstraintDUPN' in the following way:
type ConstraintDUPN' kind (n :: Peano) (inp :: [kind]) (out :: [kind]) (a :: kind) = ( RequireLongerOrSameLength inp n , n > 'Z ~ 'True , inp ~ (take (decrement n) inp ++ (a ': drop n inp)) , out ~ (a ': inp) )
Future of Dependent Haskell
We hope this article explains how these changes in the language will allow writing a more transparent Haskell code. In particular, these changes are of interest in those situations when we would like to guarantee safety at the type level.
Let us quickly discuss further steps for Dependent Haskell. At the time of writing, the proposal with design for dependent types has been recently accepted. That’s quite a remarkable achievement since this topic was rather controversial amongst the Haskell community. But we still have plenty to do to ‘materialise’ these enhancements that look fairly speculative since full-fledged dependent types in Haskell are not ready yet.
Right now, we are working on such issues as enabling visible
foralls and binding type variables in functions. These are examples of issues the solution of which makes dependent types in Haskell a bit closer. However, introducing foreach quantifiers requires extensive research since we have only a design sketch at the moment.
How to participate?
The GHC developers community is always open for new enthusiasts, so some readers of this post might want to participate in Dependent Haskell development.
If so, you may have a look at ghc.dev. This page contains basic commands for building and debugging GHC. See also the GHC chapter by Simon Marlow and Simon Peyton Jones in ‘The Architecture of Open Source Applications’. We also recommend overviewing the GHC list of issues for newcomers.
Feel free to contact the authors – Vladislav Zavialov or Danya – on Twitter if any of this sounds interesting to you.
- Getting started with Dependent Haskell
- How dependent types can help industry
- Future of Dependent Haskell
| https://serokell.io/blog/how-dependent-haskell-can-improve-industry-projects | CC-MAIN-2021-43 | refinedweb | 4,435 | 59.03 |
APACHE PORTABLE RUNTIME (APR) LIBRARY STATUS: -*-text-*-
Last modified at [$Date: 2002/06/28 23:15:19 $]
Release:
0.9.0-dev : in progress:
* Must namespace protect all include/apr_foo.h headers. Jon Travis
has especially observed these including apr and Apache-1.3.
Message-ID: <20020128100116.A4288@covalent.net>
(Those problems have been fixed, but it is a good example of
what to look for.) (?),
nuint8, nuint16, NGetLo8, NGetHi8,
HIBYTE, LOBYTE)
apr.hw (NO_USE_SIGACTION)
*?
* Almost every API in APR depends on pools, but pool semantics
aren't a good match for a lot of applications. We need to find
a way to support alternate allocators polymorphically without
a significant performance penalty.
* extract the MAJOR version from apr_version.h and pass it to
libtool for use in applying version numbers to the shared
libraries.
CURRENT VOTES:
* apr_time_t has proven to be a performance problem in some key
apps (like httpd-2.0) due to the need for 64-bit division to
retrieve the seconds "field." Alternatives that have been
discussed on dev@apr are:
1) Keep the 64-bit int, but change it to use binary microseconds
(renaming the function to get rid of apr_time_t vs time_t confusion,
and having macros to convert BUSEC to USEC and back if need be)
+1: BrianP, Cliff, Brane, rbb, Jim, Thom
2) Add a separate data type (and supporting functions) for seconds only
-0: Cliff, Brane, rbb, Jim
3) Replace it with a struct with separate fields for sec and usec
-0: BrianP, Cliff, Brane, rbb, Thom
*:
*:
*
Update: Apache deals with this itself, though it might be nice
if APR could do something.
* Build scripts do not recognise AIX 4.2.1 pthreads
Justin says: "Is this still true?"
*>. | http://mail-archives.apache.org/mod_mbox/apr-dev/200207.mbox/%3C200207040345.g643jIQ31497@Boron.MeepZor.Com%3E | CC-MAIN-2017-30 | refinedweb | 286 | 69.52 |
Docker Tutorial for Beginners Linux -.
This entry-level guide will tell you why and how to Dockerize your WordPress projects.
This entry-level guide will tell you why and how to Dockerize your WordPress projects.
Docker: Enterprise Container Platform. Docker is a set of coupled software-as-a-service and platform-as-a-service products that use operating-system-level virtualization to develop and deliver software in packages called containers.
Docker: Enterprise Container Platform. Docker is a set of coupled software-as-a-service and platform-as-a-service products that use operating-system-level virtualization to develop and deliver software in packages called containers.Introduction
Since its open source launch in 2013, Docker became one of the most popular pieces of technology out there. A lot of companies are contributing, and a huge amount of people are using and adopting it. But why is it so popular? What does it offer that was not there before? In this blog post we want to dive deeper into the internals of Docker to understand how it works.
The first part of this post will give a quick overview about the basic architectural concepts. In the second part we will introduce four main functionalities that form the foundation for isolation in Docker containers: 1) cgroups, 2) namespaces, 3) stackable image-layers and copy-on-write, and 4) virtual network bridges. In the third section there will be a discussion about opportunities and challenges when using containers and Docker. We conclude by answering some frequently asked questions about Docker.Basic Architecture
"Docker is an open-source project that automates the deployment of applications inside software containers." - Wikipedia
People usually refer to containers when talking about operating system level virtualization. Operating system level virtualization is a method in which the kernel of an operating system allows the existence of multiple isolated application instances. There are many implementations of containers available, one of which is Docker.
**Docker **launches containers based off of images. An image is like a blueprint, defining what should be inside the container when it is being created. The usual way to define an image is through a Dockerfile. A Dockerfile contains instructions on how to build your image step by step (don't worry you will understand more about what is going on internally later on). The following Dockerfile, for example, will start from an image containing OpenJDK, install Python 3 there, copy the
requirements.txt inside the image and then install all Python packages from the requirements file.
FROM openjdk:8u212-jdk-slim RUN apt-get update \ && apt-get install -y --no-install-recommends \ Python3=3.5.3-1 \ Python3-pip=9.0.1-2+deb9u1 \ && rm -rf /var/lib/apt/lists/* COPY requirements.txt requirements.txt RUN pip3 install --upgrade -r requirements.txt
Images are usually stored in image repositories called Docker registries. Dockerhub is a public Docker registry. In order to download images and start containers you need to have a Docker host. The Docker host is a Linux machine which runs the Docker daemon (a daemon is a background process that is always running, waiting for work to be done).
In order to launch a container, you can use the Docker client, which submits the necessary instructions to the Docker daemon. The Docker daemon is also talking to the Docker registry if it cannot find the requested image locally. The following picture illustrates the basic architecture of Docker:
What is important to note already is that Docker itself does not provide the actual containerization but merely uses what is available in Linux. Let's dive into the technical details.Container Isolation
Docker achieves isolation of different containers through the combination of four main concepts: 1) cgroups, 2) namespaces, 3) stackable image-layers and copy-on-write, and 4) virtual network bridges. In the following sub sections we are going to explain these concepts in detail.
The Linux operating system manages the available hardware resources (memory, CPU, disk I/O, network I/O, ...) and provides a convenient way for processes to access and utilize them. The CPU scheduler of Linux, for example, takes care that every thread will eventually get some time on a CPU core so that no applications are stuck waiting for CPU time.
Control groups (cgroups) are a way to assign a subset of resources to a specific group of processes. This can be used to, e.g., ensure that even if your CPU is super busy with Python scripts, your PostgreSQL database still gets dedicated CPU and RAM. The following picture illustrates this in an example scenario with 4 CPU cores and 16 GB RAM.
All Zeppelin notebooks started in the zeppelin-grp will utilize only core 1 and 2, while the PostgreSQL processes share core 3 and 4. Same applies to the memory. Cgroups are one important building block in container isolation as they allow hardware resource isolation.
While cgroups isolate hardware resources, namespaces isolate and virtualize system resources. Examples of system resources that can be virtualized include process IDs, hostnames, user IDs, network access, interprocess communication, and filesystems. Let's first dive into an example of process ID (PID) namespaces to make this more clear and then briefly discuss other namespaces as well.
The Linux operating system organizes processes in a so called process tree. The tree root is the first process that gets started after the operating system is booted and it has the PID 1. As only one process tree can exist and all other processes (e.g. Firefox, terminal emulators, SSH servers) need to be (directly or indirectly) started by this process. Due to the fact that this process initializes all other processes it is often referred to as the init process.
The following figure illustrates parts of a typical process tree where the init process started a logging service (
syslogd), a scheduler (
cron) and a login shell (
bash):
1 /sbin/init +-- 196 /usr/sbin/syslogd -s +-- 354 /usr/sbin/cron -s +-- 391 login +-- 400 bash +-- 701 /usr/local/bin/pstree
Inside this tree, every process can see every other process and send signals (e.g. to request the process to stop) if they wish. Using PID namespaces virtualizes the PIDs for a specific process and all its sub processes, making it think that it has PID 1. It will then also not being able to see any other processes except its own children. The following figure illustrates how different PID namespaces isolate the process sub trees of two Zeppelin processes.
1 /sbin/init | + ... | +-- 506 /usr/local/zeppelin 1 /usr/local/zeppelin +-- 2 interpreter.sh +-- 3 interpreter.sh +-- 511 /usr/local/zeppelin 1 /usr/local/zeppelin +-- 2 java
Another use case for namespaces is the Linux filesystem. Similar to PID namespaces, filesystem namespaces virtualize and isolate parts of a tree - in this case the filesystem tree. The Linux filesystem is organized as a tree and it has a root, typically referred to as
/.
In order to achieve isolation on a filesystem level, the namespace will map a node in the filesystem tree to a virtual root inside that namespace. Browsing the filesystem inside that namespace, Linux does not allow you to go beyond your virtualized root. The following drawing shows part of a filesystem that contains multiple "virtual" filesystem roots inside the
/drives/xx folders, each containing different data.
Besides the PID and the filesystem namespaces there are also other kinds of namespaces. Docker allows you to utilize them in order to achieve the amount of isolation you require. The user namespace, e.g., allows you to map a user inside a container to a different user outside. This can be used to map the root user inside the container to a non-root user outside, so the process inside the container acts like an admin inside but outside it has no special privileges.Stackable Image Layers and Copy-On-Write
Now that we have a more detailed understanding of how hardware and system resource isolation helps us to build containers, we are going to take a look into the way that Docker stores images. As we saw earlier, a Docker image is like a blueprint for a container. It comes with all dependencies required to start the application that it contains. But how are these dependencies stored?
Docker persists images in stackable layers. A layer contains the changes to the previous layer. If you, for example, install first Python and then copy a Python script, your image will have two additional layers: One containing the Python executables and another one containing the script. The following picture shows a Zeppelin, an Spring and a PHP image, all based on Ubuntu.
In order not to store Ubuntu three times, layers are immutable and shared. Docker uses copy-on-write to only make a copy of a file if there are changes.
When starting a container-based on an image, the Docker daemon will provide you with all the layers contained in that image and put it in an isolated filesystem namespace for this container. The combination of stackable layers, copy-on-write, and filesystem namespaces enable you to run a container completely independent of the things "installed" on the Docker host without wasting a lot of space. This is one of the reasons why containers are more lightweight compared to virtual machines.
Now we know ways to isolate hardware resources (cgroups) and system resources (namespaces) and how to provide each container with a predefined set of dependencies to be independent from the host system (image layers). The last building block, the virtual network bridge, helps us in isolating the network stack inside a container.
A network bridge is a computer networking device that creates a single aggregate network from multiple communication networks or network segments. Let's look at a typical setup of a physical network bridge connecting two network segments (LAN 1 and LAN 2):
Usually we only have a limited amount of network interfaces (e.g. physical network cards) on the Docker host and all processes somehow need to share access to it. In order to isolate the networking of containers, Docker allows you to create a virtual network interface for each container. It then connects all the virtual network interfaces to the host network adapter, as shown in the following picture:
The two containers in this example have their own
eth0 network interface inside their network namespace. It is mapped to a corresponding virtual network interfaces
veth0 and
veth1 on the Docker host. The virtual network bridge
docker0 connects the host network interface
eth0 to all container network interfaces.
Docker gives you a lot of freedom in configuring the bridge, so that you can expose only specific ports to the outside world or directly wire two containers together (e.g. a database container and an application which needs access to it) without exposing anything to the outside.
Taking the techniques and features described in the previous sub sections, we are now able to "containerize" our applications. While it is possible to manually create containers using cgroups, namespaces, virtual network adapters, etc., Docker is a tool that makes it convenient and with almost no overhead. It handles all the manual, configuration intensive tasks, making containers accessible to software developers and not only Linux specialists.
In fact there is a nice talk available from one of the Docker engineers where he demonstrates how to manually create a container, also explaining the details we covered in this sub section.Opportunities and Challenges of Docker
By now, many people are using Docker on a daily basis. What benefits do containers add? What does Docker offer that was not there before? In the end everything you need for containerizing your applications was already available in Linux for a long time, wasn't it?
Let's look at some opportunities (not an exhaustive list of course) that you have when moving to a container-based setup. Of course there are not only opportunities, but also challenges that might give you a hard time when adopting Docker. We are also going to name a few in this section.
Docker enables DevOps. The DevOps philosophy tries to connect development and operations activities, empowering developers to deploy their applications themselves. You build it, you run it. Having a Docker based deployment, developers can ship their artifacts together with the required dependencies directly without having to worry about dependency conflicts. Also it allows developers to write more sophisticated tests and execute them faster, e.g., creating a real database in another container and linking it to their application on their laptop in a few seconds (see Testcontainers).
Containers increase the predictability of your deployment. No more "runs on my machine". No more failing application deployments because one machine has a different version of Java installed. You build the image once and you can run it anywhere (given there is a Linux Kernel and Docker installed).
High adoption rate and good integration with many prominent cluster managers. One big part about using Docker is the software ecosystem around it. If you are planning to operate at scale, you won't get around using one or the other cluster manager. It doesn't matter if you decide to let someone else manage your deployment (e.g. Google Cloud, Docker Cloud, Heroku, AWS, ...) or want to maintain your own cluster manager (e.g. Kubernetes, Nomad, Mesos), there are plenty of solutions out there.
Lightweight containers enable fast failure recovery or auto-scaling. Imagine running an online shop. During Christmas time, people will start hitting your web servers and your current setup might not be sufficient in terms of capacity. Given that you have enough free hardware resources, starting a few more containers hosting your web application will take only a few seconds. Also failing machines can be recovered by just migrating the containers to a new machine.
Containers give a false sense of security. There are many pitfalls when it comes to securing your applications. It is wrong to assume that one way to secure them is to put them inside containers. Containers do not secure anything, per se. If someone hacks your containerized web application he might be locked into the namespaces but there are several ways to escape this depending on the setup. Be aware of this and put as much effort into security as you would without Docker.
Docker makes it easy for people to deploy half baked solutions. Pick your favorite piece of software and enter its name it to the Google search bar, adding "Docker". You will probably find at least one if not dozens of already publicly available images containing your software at Dockerhub. So why not just execute it and give it a shot? What can go wrong? Many things can go wrong. Things happen to look shiny and awesome when put into containers and people stop paying attention to the actual software and configuration inside.
The fat container anti-pattern results in large, hard-to-manage deployment artifacts. I have seen Docker images which require you to expose more than 20 ports for different applications inside when a the container. The philosophy of Docker is that one container should do one job and you should rather compose them instead of making them heavier. If you end up putting all your tools together in one container you lose all the advantages, might have different versions of Java or Python inside and end up with a 20 GB, unmanageable image.
Deep Linux knowledge might still be required to debug certain situations. You might have heard your colleague saying that XXX does not work with Docker. There are multiple reasons why this could happen. Some applications have issues running inside a bridged network namespace if they do not distinguish properly between the network interface they bind to and the one they advertise. Another issue can be related to cgroups and namespaces where default settings in terms of shared memory are not the same as on your favorite Linux distribution, leading to OOM errors when running inside containers. However, most of the issues are not actually related to Docker but to the application not being designed properly and they are not that frequent. Still they require some deeper understanding of how Linux and Docker works which not every Docker user has.Frequently Asked Questions
Without diving too much into details about the architecture of virtual machines (VMs), let us look at the main difference between the two on a conceptual level. Containers run inside an operating system, using kernel features to isolate applications. VMs on the other hand require a hypervisor which runs inside an operating system. The hypervisor then creates virtual hardware which can be accessed by another set of operating systems. The following illustration compares a virtual machine based application setup and a container-based setup.
As you can see, the container-based setup has less overhead as it does not require an additional operating system for each application. This is possible because the container manager (e.g. Docker) uses operating system functionality directly to isolate applications in a more lightweight fashion.
Does that mean that containers are superior to virtual machines? It depends. Both technologies have their use cases and it sometimes even make sense to combine them, running a container manager inside a VM. There are many blog posts out there discussing the pros and cons of both solutions so we're not going to go into detail right now. It is important to understand the difference and to not see containers as some kind of "lightweight VM", because internally they are different.
Looking at the definition of containers and what we've learned so far, we can safely say that it is possible to use Docker to deploy isolated applications. By combining control groups and namespaces with stackable image layers and virtual network interfaces plus a virtual network bridge, we have all the tools required to completely isolate an application, possibly also locking the process in the container. The reality shows that it's not that easy though. First, it needs to be configured correctly and secondly, you will notice that completely isolated containers don't make a lot of sense most of the time.
In the end your application somehow needs to have some side effect (persisting data to disk, sending packets over the network, ...). So you will end up breaking the isolation by forwarding network traffic or mounting host volumes into your filesystem namespace. Also it is not required to use all available namespace features. While the network, PID and filesystem namespace features are enabled by default, using the user ID namespace requires you to add extra configuration options.
So it is false to assume that just by putting something inside a container makes it secure. AWS, e.g., uses a lightweight VM engine called Firecracker for secure and multi-tenant execution of short-lived workloads.
Some people argue that containers increase stability because they isolate errors. While this is true to the extent that properly configured namespaces and cgroups will limit side effects of one process going rogue, in practice there are some things to keep in mind.
As mentioned earlier, containers do only contain if configured properly and most of the time you want them to interact with other parts of your system. It is therefore possible to say that containers can help to increase stability in your deployment but you should always keep in mind that it does not protect your applications from failing.Conclusion
Docker is a great piece of technology to independently deploy applications in a more or less reproducible and isolated way. As always, there is no one-size-fits-all solution and you should understand your requirements in terms of security, performance, deployability, observability, and so on, before choosing Docker as the tool of your choice. | https://morioh.com/p/467f4284082b | CC-MAIN-2020-10 | refinedweb | 3,304 | 54.02 |
TIMER_GETOVERRUN(2) Linux Programmer's Manual TIMER_GETOVERRUN(2)
timer_getoverrun - get overrun count for a POSIX per-process timer
#include <time.h> int timer_getoverrun(timer_t timerid); Link with -lrt. Feature Test Macro Requirements for glibc (see feature_test_macros(7)): timer_getoverrun(): _POSIX_C_SOURCE >= 199309L.
On success, timer_getoverrun() returns the overrun count of the specified timer; this count may be 0 if no overruns have occurred. On failure, -1 is returned, and errno is set to indicate the error.
EINVAL timerid is not a valid timer ID.
This system call is available since Linux 2.6.
POSIX.1-2001, POSIX.1-2008..).
See timer_create(2).
clock_gettime(2), sigaction(2), signalfd(2), sigwaitinfo(2), timer_create(2), timer_delete(2), timer_settime(2), signal(7), time(7)
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2020-12-21 TIMER_GETOVERRUN(2)
Pages that refer to this page: sigaction(2), syscalls(2), timer_create(2), timer_delete(2), timerfd_create(2), timer_settime(2), ualarm(3), usleep(3), signal-safety(7), system_data_types(7) | https://man7.org/linux/man-pages/man2/timer_getoverrun.2.html | CC-MAIN-2021-04 | refinedweb | 185 | 57.87 |
02 May 2008 17:06 [Source: ICIS news]
Correction: In the ICIS news story headlined “INSIGHT: Manager rankings - Hambrecht on top” dated 2 May, 2008, please read in the seventh paragraph …Cologne-based monthly financial magazine… instead of …Dubai-based monthly financial magazine…
?xml:namespace>
And please read in the ninth paragraph …judged under a system established by… instead of ...judged by… A corrected story follows.
By Dede Williams
In the halcyon days of ?xml:namespace>
But years of corporate downsizing, restructuring and general belt-tightening have changed this perception dramatically.
Many “ordinary” employees are now convinced that their jobs - and their pay - are being sacrificed on the altar of shareholder value, while the top brass is allowing itself ever heftier bonuses and stock options.
Stock market-listed companies are now required to publish individual board member remuneration in their annual reports, but the jury is still out on whether the current discontent reflects the “envy” predicted by executives, including BASF’s CEO Jürgen Hambrecht, two years ago.
Some whose pay is in public view take it with a grain of salt. “Just so you won’t have to search for it - my salary is on page 28 of the financial report,” Linde CEO Wolfgang Reitzle said at the start of this year’s annual results press conference.
Arousing perhaps greater concern than actual pay numbers, and adding another dimension to the discussion over whether managers deserve what they take home, is a wave of corporate corruption scandals, none affecting chemical companies.
Against this background, Cologne-based monthly financial magazine Capital sought to gather fresh evidence of how well the country’s managers actually perform, or at least are seen to perform by sector professionals.
The magazine asked 90 hand-picked German and international analysts and consultants to rate the CEOs of the DAX stock market index of 30 blue-chip companies on the scale of 1 (excellent) to 6 (failure) used in the German school system.
The managers were judged under a system developed by Kienbaum Management Consultants on the basis of competence, personality and communication skills. The “bottom line” was that not one was worth an “A” (1 to 1.5), but the “best” CEO, the survey found, was head of a chemical company, in fact, the chemical company
BASF’s Jürgen Hambrecht topped the list with a rating of 1.88 (roughly, a B+). The average was 2.54, and three CEOs of the four chemical-related companies in the DAX 30 index topped that notch.
Number four, at 2.13, was Wolfgang Reitzle, number 11 Bayer chairman Werner Wenning at 2.25. Number 22, Karl-Ludwig Kley, head of Merck for only a year, weighed in 2.65. This, Capital said, reflected the company’s weaker earnings performance.
Hambrecht’s ranking contrasts with a 2003 Capital study comparing performance and pay. In that one, which took a different approach, had a different mix of players and didn’t award school marks, he ranked 13th.
Some analysts quoted by the magazine said Hambrecht’s drive to make BASF more independent of chemical cycles through the acquisitions of Engelhard and Degussa’s construction chemical business had lifted his status. Reitzle and Wenning also scored with acquisitions.
Hambrecht won best marks on several important points, notably BASF’s financial performance and his long-term perspective. The overriding factor, the magazine said, was his positive demeanour.
It quotes one survey participant as saying that the chemical executive “is always convincing” at analyst conferences. Stressing the importance of a company’s investor relations department in attracting and keeping investors, the analyst added that BASF has one of the largest and most active staffs.
Hambrecht also achieved the top ranking in supporting and promoting future managerial talent.
Survey participants were invited to express their thoughts on whether executives really earn their keep. The general view was that most do.
More than 97% thought Deutsche Bank CEO Josef Ackermann, who in the past has routinely placed at the top of such surveys but is publicly perceived as overpaid, deserved his annual €14m packet.
The Swiss native pulls in nearly twice Hambrecht’s €7.5m, which more than 90% of the respondents felt he deserved, and almost four times Werner Wenning’s €3.6m annual remuneration at Bayer.
With €8.2m, Linde’s Reitzle earns more than any chemical executive, a situation thought to reflect his international experience and previous positions in the automotive sector. Majority privately owned Merck does not publish individual salaries.
Up to now, the question of whether German managers earn too much has been mainly a national one, but another recent survey that found German supervisory board members to earn considerably less than
In view of the German system of separate managing and supervisory boards, a true comparison is not possible. However, a table of executive board earnings for the new Linde Group for 2007 - the first after full-year consolidation of BOC - may provide some hints.
It shows proportionately higher remuneration for board members coming from BOC compared with board members from pre-merger Linde. The report explains that the figure “includes emoluments provided by BOC | http://www.icis.com/Articles/2008/05/02/9121237/corrected-insight-manager-rankings-hambrecht-on-top.html | CC-MAIN-2014-41 | refinedweb | 857 | 52.19 |
View Complete Post
I would like to take WCF Data Service and produce JSON output to consume on various mobile apps.
Can anyone give me a how-to on the JSON part? I.e., what is different than normal XML outut?
Thanks.
Windows Presentation Foundation has a rich data binding system that includes flexible support for business data validation. We take a look at implementing some complex data input validation scenarios that include customized data errors for users.
Brian Noyes
MSDN Magazine June 2010
Here John Papa demonstrates how to build a Silverlight 2 user interface that communicates through WCF to interact with business entities and a database.
John Papa
MSDN Magazine September 2008" chnage data of input type with readonly attribute Hi
I have webform where i have different input type as text controls with a readonly attribute. I have option to edit the data so through javascript i remove the readonly attribute through javascript, here's the code
function EditBillAddress()
{
MERGE using jQuery with a WCF Data Service I'm a bit new to the whole asp.net thing so this is probably a silly question, but here it goes:
I have created a WCF Data service based based on a ADO.net Entity--it's very basic:
namespace raid{ public class allPeopleDataService : DataService< raid.raidEntities > { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } }}
I've been successfully retrieving data using jquery's '$.getJSON' and inserting data using a modified $.ajax:
$.ajax({url: "/allPeopleDataService.svc/lists",dataType: 'json',contentType: 'application/json',data: $jsonObj,processData: false,type: 'POST'});
I was reading the post at and trying to use the MERGE method to update parts of the data with the following jquery
jsont = { "__metadata": { "uri": "/allPeopleDataService.svc/lists(1)", "type":"raidModel.list" }, "list_name": "list blah blah" }; $.ajax({ url: "/allPeopleDataService.svc/lists(1)", dataType: 'json', All data in class properties disappear across WCF service Hi,
I have a class that i share betwee silverlight client and wcf service.
I populate properties of the class with real data from the client, to be used in the server.
I use debugger to step through the code when the call gets to the server, all the data ara gone and are replaced with null. All DateTime value change to 1/1/1001.
But if pass primitive types such as int, string, no problem.
This is happening only on one machine, but not on ther machine.
Can someone tell me what is going on here?
Thanks in advance Customizing pages with Asp.Net Dynamic Data Domain Service Web Applications? Hola,
It used to be so easy creating new folder under CustomPages with the (Entity/Table name) and copying on it all base default PageTemplates (Details, Insert, Edit, List and ListDetails) and you will get access to those pages only for the specific table (this was the default automatic behavior of the Dynamic Data framework)
When it comes to be using Asp.Net Dynamic Data Domain Service Web Applications it seems that we need to learn a new way of doing this because it is just simply different and you got a lot of errors when compiling your application if you do this the old way? What might be the new way, uh?
HOW DO WE DO THIS NOW ... I MEAN ... CREATING DEFAULT CUSTOM PAGES FOR SPECIFIC TABLES (Details, Insert, Edit, List and ListDetails) with Asp.Net Dynamic Data Domain Service Web Applications?
The problem arises when the definition for your newly created classes is absent from the automatically generated code when you created your domain service !!!!! .
If you simply copy the PageTemplates as they are, you got a duplicated error message because the new .designer file (automatically generated) conains the definition of the class you are copying (not the newer one) so even if you rename your .aspx file the result is not what you would normally expect (the .designer class is not regenerated after renaming).
The .designer file keeps the domain service definition and if y Service Operation - Get proper data when browsing the service, but not when calling the service from Hello,
I recently implemented a service operation in an attempt to pass some filter parameters to a data service (a user guid and an organization, actually, for impersonation of a user on the service's call to the application).
I successfully added the service operation to the data service, and I can browse the service and see the data filtered as I expected.
However, when I call the service operation from the Silverlight client, I don't get any results, even though I should. Am I missing something? Code below if it's helpful. Thanks!
Service Operation Definition:
[WebGet]
public IEnumerable<opportunity> filteredopps(string userid, string org)
{
/);
Documentation on impersonation of Windows LiveID with data service? Hi There,
I'm not finding any documentation on impersonating a Windows LiveID over a WCF data service. I can find documentation on Windows Auth. For example, this article is great, but its first step is "Configure Your Service to Use Windows Authentication":. I'm not sure how to do this when I'm working with the LIVEID.
Alternatively, and even preferably, if there's a way I can manipulate the connection string the data service is using from the WCF endpoint, that would be ideal. I need to grab some info about the user in the Silverlight app that's using the data service,
and somehow impersonate that user. The user is a Windows LiveID, which is what I have hard-coded in my connection string in my web.config.
Thanks!
Web: User Profiles Service Application and Import of SharePoint 2007 SSP data I have setup a test SharePoint 2010 Farm. I will be using this as a test upgrade of a current live SharePoint 2007 Farm. The database attach method will be used.
I have replicated the web application and AAM settings of the SharePoint 2007 Farm to the SharePoint 2010 Farm and have made the 2010 Farm a DC in a new Forest. I don't want to join this to the current domain at the moment. It also has SQL server 2005 with
SP3 and cumulative update 3 installed.
I have just setup the User Profiles Serice Application and when I go to Manage it, I get this.
Error
An unexpected error has occurred.
Troubleshoot issues with Microsoft SharePoint Foundation.
Correlation ID: a1760e87-372f-4711-afac-3ceba34bc599
Date and Time: 8/31/2010 4:22:56 PM
I have verified and configured the following.
Created the Managed Metadata Service. The status is started via Service Applications and Services on Server.
Created the User Profiles Service Application and ensured status is started via Service Applications. I started the User Profile Service and User Profile Synchronisation Service via Manage Services on Server.
<
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/19940-web-service-and-json-input-data.aspx | CC-MAIN-2018-17 | refinedweb | 1,150 | 54.22 |
didn't see this covered by any FAQ. Seems like it would come up
often:
I want to build two builds from the same source tree. The only
difference between the builds is one file, Foo/Bar.py (module
Foo.Bar). I would like one file to have one version of Bar.py
and another build to have another. Bar.py pulls in many other
modules and I want each build to contain only the modules necessary.
I was thinking one way to do this might be to move Foo into
alt1/Foo and alt2/Foo and keep two copies of it. The big headach
here is that there are other files in Foo than Bar (but I could
reorganize my source tree to accomodate if I had to). Then
I could use PYTHONPATH to choose which one when running py2exe.
Another option would be to just replace that one file before making
one build, and then replace it again before making another build.
Is there a more elegant solution here? Can I override one module
from the command line or the setup.py script?
I don't think I can just use an "if" that chooses between them
because py2exe will try to find all the imports from both of the
if alterantives, right?
Tim Newsham
On 9/25/07 5:48 AM, "Larry Bates" <larry.bates@...> wrote:
> Hello Jimmy,
>
> Sure is good to see you post here again. Some of us were getting a little
> worried about you ;-). Glad to have you back.
>
> -Larry Bates
Sorry I've been so quiet for so long... I've been so busy with a couple of
concurrent projects that I haven't been able to do much else, especially on
the computer. That's subsided for now, so hopefully I'll stick around for a
bit. Thanks for the concern. :)
I did a little work on py2exe.org over the weekend. There had been a
terrible slow down recently. Strangely it cleared itself up just a few hours
before I started the work (maybe the hosting provider fixed something on
their end?). I upgraded to the latest stable version of MoinMoin, fixed
email sending so that it works again if you watch a page or forget your
password, and added a feed icon to the "RecentChanges" page in case you want
to watch it in your feed reader (Thomas had already added a link for this on
the front page as well).
Regards,
Jimmy
Jimmy Retzlaff wrote:
>
>
>
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Microsoft
> Defy all challenges. Microsoft(R) Visual Studio 2005.
>
Hello Jimmy,
Sure is good to see you post here again. Some of us were getting a little
worried about you ;-). Glad to have you back.
-Larry Bates
>}'.
Mark}'
I post you a scratch of my code:
class MSOutlook(object):
def __init__(self, called_in_thread):
self.outlookFound = 0
self._called_in_thread = called_in_thread
if called_in_thread:
pythoncom.CoInitialize()
try:
----- Error here -----
self.oOutlookApp =
win32com.client.gencache.EnsureDispatch("Outlook.Application")
----------------------
self.outlookFound = 1
except:
err_msg = traceback.format_exc()
print "MSOutlook: unable to load Outlook. Errors are:"
print err_msg
return None
oOutlook = MSOutlook(False)
Trying to find some help in the net I also found that when i compile the
.exe using the setup.py file I have to specify an option like this (this
example was for Excel, but it think it should work even for Outlook)
python setup.py py2exe --progid "Outlook.Application"
So I've tried to do this, but py2exe doesn't compiles this and tells me
that --progid isn't a valid option...
So have you any suggest for me?
thanks a lot for your help...
Michele
Tony Cappellini schrieb:
>>Can it be that cygwin.dll is pulled in by some binary dependency?
>>Is there any extension (.dll or .pyd) that depends on cygwin.dll?
Hard to say. Cygwin is in the path, but I don't know of anything in
our framework that explicitly uses it, especially since I can make &
run the same project on my laptop, which doesn't have Cygwin.
>>I think you should use the "dll-excludes" option. "excludes" is for
Python modules.
Hmm,somehow I didn't see that option. I'll try that.
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/py2exe/mailman/py2exe-users/?style=flat&viewmonth=200709&viewday=25 | CC-MAIN-2017-34 | refinedweb | 743 | 76.72 |
version 2.213
use;
The Email:: namespace was begun as a reaction against the increasing complexity and bugginess of Perl's existing email modules.
Email::* modules are meant to be simple to use and to maintain, pared to the bone, fast, minimal in their external dependencies, and correct. is another name (and the preferred one) for
header.
This is another name (and the preferred one) for
header_set. another name (and the preferred one) for
header_pairs.
Returns the body text of the mail.
Sets the body text of the mail.
Returns the mail as a string, reconstructing the headers.
This method returns the type of newline used in the email. It is an accessor only.
This returns the class used, by default, for header objects, and is provided for subclassing. The default default is Email::Simple::Header.
Email::Simple handles only RFC2822 formatted messages. This means you cannot expect it to cope well as the only parser between you and the outside world, say for example when writing a mail filter for invocation from a .forward file (for this we recommend you use Email::Filter anyway).
This software is copyright (c) 2003 by Simon Cozens.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~rjbs/Email-Simple-2.213/lib/Email/Simple.pm | CC-MAIN-2018-17 | refinedweb | 218 | 67.35 |
Pytrack and.or Deep Sleep break terminal output ?
Guy,
I'm trying a new tracker code with Lopy and Pytrack + sleep and wake interrupt, but I'm suck with console output problem, so I restarted from library example code
accelerometer_waketo reproduce the issue.
I modified a little to have some print and wait to be able to connect terminal (because deep sleep shut down usb device seen in windows)
Here the code
import machine from machine import UART from pytrack import Pytrack from LIS2HH12 import LIS2HH12 from network import Bluetooth from network import WLAN import pycom import time import os uart = UART(0, 115200) os.dupterm(uart) WLAN().deinit() Bluetooth().deinit() # Wait for terminal to connect print("Wait for sync ", end='') for x in range(0, 10): print("#", end='') time.sleep(1) print(" OK!", flush=True) rst = machine.reset_cause() print("starting, Reset=", rst) pycom.heartbeat(False) print("stage 1", flush=True) py = Pytrack() print("stage 2", flush=True) print("Wakeup reason: ", flush=True) print(py.get_wake_reason()) print("; Aproximate sleep remaining: ", flush=True) print(py.get_sleep_remaining()) print(" sec", flush=True) #) # wait 15 second before deep sleep to be able to upload print("Wait for update ", end='', flush=True) for x in range(0, 15): print(".", end='', flush=True) time.sleep(1) print(" OK!", flush=True) # go to sleep for 30 seconds max if no accelerometer interrupt happens print("Go to deep sleep...") py.setup_sleep(30) py.go_to_sleep()
The fact is that the program works pretty fine (I know this because 15 seconds after printing stage 1 the device goes to deep sleep (windows beep) and wake up after 30s of deep sleep (windows beep agin and device shows up in device manager), Then I reconnect and got display until stage 1 again
Here the output of terminal (I manually disconnect when entering deep sleep and reconnect after deep sleep wake)
What I am missing ? Looks like some system call break the dupterm, I tried to reset console after stage 1 but same things.
py = Pytrack() uart = UART(0, 115200) os.dupterm(uart) print("stage 2", flush=True)
I'm using latest firmware as today 1.17.3.b1 and latest pymakr but also tried with another terminal, same thing
Any idea of what's going wrong there ?
The
calibrate_rtc()in library
pycoprocbreak the console output, if I comment this call from
get_sleep_remaining()No problem
Connecting on COM24... # OK! starting, Reset= 0 stage 1 stage 2 Wakeup reason: 4 ; Aproximate sleep remaining: calibrate_rtc 0 sec stage 4 Wait for update .............
may be the deal is out there
def calibrate_rtc(self): # the 1.024 factor is because the PIC LF operates at 31 KHz # WDT has a frequency divider to generate 1 ms # and then there is a binary prescaler, e.g., 1, 2, 4 ... 512, 1024 ms # hence the need for the constant self._write(bytes([CMD_CALIBRATE]), wait=False) self.i2c.deinit() Pin('P21', mode=Pin.IN) pulses = pycom.pulses_get('P21', 100) self.i2c.init(mode=I2C.MASTER, pins=(self.sda, self.scl)) idx = 0
@jcaron Absolutely, thanks for pointing this out
I created a
boot.py
from machine import UART import os uart = UART(0, 115200) os.dupterm(uart)
of course removed this line from main.py
it's far better but console still looks up after call to
py.get_sleep_remaining()
I believe there is a default boot.py which already contains the dupterm stuff on a fresh module, not sure if having that both in the boot.py and the main.py may be an issue?
I took a new Fresh Lopy / Pytrack, updated firmware on both devices and flashed same code
Same result
@jcaron I do not have
boot.pyfor this sample issue (I wanted the sample to be simple)
has suggested, I've done the changes, same thing.
@charly86 I believe so, yes. What do you have in your boot.py? Have you tried clearing the filesystem completely and re-uploading all the files? Sometimes you end up with old files which result in weird stuff like this.
Have you tried adding a pause just after the stage 1 print + another print before instantiating the Pytrack object? Just to see if it's timing-related or directly related to the connection to the Pytrack...
Is the Pytrack firmware upgraded to the latest version? | https://forum.pycom.io/topic/3075/pytrack-and-or-deep-sleep-break-terminal-output/2 | CC-MAIN-2019-39 | refinedweb | 715 | 65.83 |
OpenMP brings the power of multiprocessing to your C, C++, and Fortran programs.
Open.
Scope
In shared memory programming multiple CPUs can access the same variables. This makes the program more efficient and saves copying. In some cases, each thread needs its own copy of the variables – such as the loop variables in parallel for() loops.
Clauses specified in OpenMP directives (see the descriptions Table 1) define the properties of these variables.
You can append clauses to the OpenMP #pragma, for example:
#pragma omp parallel for shared(x, y) private(z)
Errors in shared()/private() variable declarations are some of the most common causes of errors in parallelized programming.
Reduction
Now you now know how to create threads and distribute the workload over multiple threads. However, how can you get all the threads to work on a collated result – for example, to total the values in an array? reduction() (Listing 2) handles this.
Listing 2: reduction()
01 a = 0 ; b = 0 ; 02 #pragma omp parallel for private(i) shared(x, y, n) reduction(+:a, b) 03 for (i=0; i<n; i++) { 04 a = a + x[i] ; 05 b = b + y[i] ; 06 }
The compiler creates a local copy of each variable in reduction() and initializes it independently of the operator (e.g., 0 for “+”, 1 for “*”). If, say, three threads are each handling one third of the loop, the master thread adds up the subtotals at the end.
Who is Faster?
Debugging parallelized programs is an art form in its own right. It is particularly difficult to find errors that do not occur in serial programs and do not occur regularly in parallel processing. This category includes what are known as race conditions: different results on repeated runs of a program with multiple blocks that are executed parallel to one another, depending on which thread is fastest each time. The code in Listing 3 starts by filling an array in parallel and then goes on to calculate the sum of these values in parallel.
Listing 3: Avoiding Race Conditions
01 #ifdef _OPENMP 02 #include <omp.h> 03 #endif 04 #include <stdio.h> 05 int main() { 06 double a[1000000]; 07 int i; 08 #pragma omp parallel for 09 for (i=0; i<1000000; i++) a[i]=i; 10 double sum = 0; 11 #pragma omp parallel for shared (sum) private (i) 12 for ( i=0; i < 1000000; i++) { 13 #pragma omp critical (sum_total) 14 sum = sum + a[i]; 15 } 16 printf("sum=%lf\n",sum); 17 }
Without the OpenMP #pragma omp critical (sum_total) statement in line 13, the following race condition could occur:
- Thread 1 loads the current value of sum into a CPU register.
- Thread 2 loads the current value of sum into a CPU register.
- Thread 2 adds a[i+1] to the value in the register.
- Thread 2 writes the value in the register back to the sum variable.
- Thread 1 adds a[i] to the value in the register.
- Thread 1 writes the value in the register to the sum variable.
Because thread 2 overtakes thread 1 here, thus winning the “race,” a[i+1] would not be added correctly. Although thread 2 calculates the sum and stores it in the sum variable, thread 1 overwrites it with an incorrect value.
The #pragma omp critical statement makes sure that this does not happen. All threads execute the critical code, but only one at any time. The example in Listing 3 thus performs the addition correctly without parallel threads messing up the results. For elementary operations (e.g., i++) #pragma omp atomic will atomically execute a command. Write access to shared() variables also should be protected by a #pragma omp critical statement. multiprocessor.):'s_law
The Author
Wolfgang Dautermann is a system administrator who has tamed many flavors of Linux and various Unix systems – including Solaris, Irix, and Tru64. | http://www.admin-magazine.com/HPC/Articles/Programming-with-OpenMP | CC-MAIN-2015-48 | refinedweb | 641 | 61.77 |
table of contents
NAME¶
perf
Given a list of parameters, perf_event_open() returns a file descriptor,¶
The pid and cpu arguments allow specifying which process and CPU to monitor:
- pid == 0 and cpu == -1
- This measures the calling process/thread on any CPU.
- pid == 0 and cpu >= 0
- This measures the calling process/thread only when running on the specified CPU.
- pid > 0 and cpu == -1
- This measures the specified process/thread on any CPU.
- pid > 0 and cpu >= 0
- This measures the specified process/thread only when running on the specified CPU.
- pid == -1 and cpu >= 0
- This measures all processes/threads on the specified CPU. This requires CAP_PERFMON (since Linux 5.8) or CAP_SYS_ADMIN capability or a /proc/sys/kernel/perf_event_paranoid value of less than 1.
- pid == -1 and cpu == -1
- This setting is invalid and will return an error.
When pid is greater than zero, permission to perform this system call is governed by CAP_PERFMON (since Linux 5.9) and a ptrace access mode PTRACE_MODE_READ_REALCREDS check on older Linux versions; see ptrace(2).:
- available only for system-wide events and may therefore require extra permissions. */
use_clockid : 1, /* use clockid for time fields */
context_switch : 1, /* context switch data */
write_backward : 1, /* Write ring buffer from end
to beginning */
namespaces : 1, /* include namespaces data */
ksymbol : 1, /* include ksymbol events */
bpf_event : 1, /* include bpf events */
aux_output : 1, /* generate AUX records
instead of events */
cgroup : 1, /* include cgroup events */
text_poke : 1, /* include text poke events */
__reserved_1 : 30; __reserved_2; /* align to u64 */ };
The fields of the perf_event_attr structure are described in more detail below:
- PERF_TYPE_HARDWARE
- This indicates one of the "generalized" hardware events provided by the kernel. See the config field definition for more details.
- PERF_TYPE_SOFTWARE
- This indicates one of the software-defined events provided by the kernel (even if no /sys/bus/event_source/devices. In each subdirectory there is a type file whose content is an integer that can be used in the type field. For instance, /sys/bus/event_source/devices/cpu/type contains the value for the core CPU PMU, which is usually 4.
-.
-
- Cache accesses. Usually this indicates Last Level Cache accesses but this may vary depending on your CPU. This may include prefetches and coherency.
- PERF_COUNT_HW_BUS_CYCLES
- Bus cycles, which can be different from total cycles.
- PERF_COUNT_HW_STALLED_CYCLES_FRONTEND (since Linux 3.0)
- Stalled cycles during issue.
- PERF_COUNT_HW_STALLED_CYCLES_BACKEND (since Linux 3.0)
- Stalled cycles during retirement.
- PERF_COUNT_HW_REF_CPU_CYCLES (since Linux Linux (since Linux 2.6.33)
- This counts the number of alignment faults. These happen when unaligned memory accesses happen; the kernel can handle these but it reduces performance. This happens only on some architectures (never on x86).
- PERF_COUNT_SW_EMULATION_FAULTS (since Linux 2.6.33)
-.
config = (since Linux 3.
- sample. They will be recorded in a ring-buffer, which is available to user space using mmap(2). The order in which the values are saved in the sample are documented in the MMAP Layout subsection below; it is not the enum perf_event_sample_format order.
- PERF_SAMPLE_IP
- Records instruction pointer.
- PERF_SAMPLE_TID
- Records the process and thread IDs.
- PERF_SAMPLE_TIME
- Records a timestamp.
- PERF_SAMPLE_ADDR
- Records an address, if applicable.
- PERF_SAMPLE_READ
- Record counter values for all events in a group, not just the group leader.
- PERF_SAMPLE_CALLCHAIN
- Records the callchain (stack backtrace).
- PERF_SAMPLE_ID
- Records a unique ID for the opened event's group leader.
- PERF_SAMPLE_CPU
- Records CPU number.
- PERF_SAMPLE_PERIOD
- Records the current sampling period.
- PERF_SAMPLE_STREAM_ID
- Records a unique ID for the opened event. Unlike PERF_SAMPLE_ID the actual ID is returned, not the group leader. This ID is the same as the one returned by PERF_FORMAT_ID.
- values in the process before the kernel was called).
- PERF_SAMPLE_STACK_USER (since Linux 3.7)
- Records the user level stack, allowing stack unwinding.
- PERF_SAMPLE_WEIGHT (since Linux 3.10)
- Records a hardware provided weight value that expresses how costly the sampled event was. This allows the hardware to highlight expensive events in a profile.
- PERF_SAMPLE_DATA_SRC (since Linux 3.10)
- Records the data source: where in the memory hierarchy the data associated with the sampled instruction came from. This is available only if the underlying hardware supports this feature.
- PERF_SAMPLE_IDENTIFIER (since Linux 3.12)
- Places the SAMPLE_ID value in a fixed position in the record, either at the beginning (for sample events) or at the end (if a non-sample event).
-).
- The PERF_SAMPLE_IDENTIFIER setting makes the event stream always parsable by putting SAMPLE_ID in a fixed location, even though it means having duplicate SAMPLE_ID values in records.
- PERF_SAMPLE_TRANSACTION (since Linux 3.13)
- Records reasons for transactional memory abort events (for example, from Intel TSX transactional memory.
- PERF_SAMPLE_PHYS_ADDR (since Linux 4.13)
- Records physical address of data like in PERF_SAMPLE_ADDR.
- PERF_SAMPLE_CGROUP (since Linux 5.7)
- Records (perf_event) cgroup ID of the process. This corresponds to the id field in the PERF_RECORD_CGROUP event.
- read_format
- This field specifies the format of the data returned by read(2) on a perf_event_open().
- When creating an event group, typically the group leader is initialized with disabled set to 1 and any child events are initialized with disabled set to 0. Despite disabled being 0, the child events will not start until the group leader is enabled.
- inherit
- The inherit bit specifies that this counter should count events of child tasks as well as the task specified. This applies only to new children, not to any existing children at the time the counter is created (nor to any new children of existing children).
- Inherit does not work for some combinations of read_format values, such as PERF_FORMAT_GROUP.
- pinned
- The pinned bit specifies that the counter should always be on the CPU if at all possible. It applies only.
- Note that many unexpected situations may prevent events with the exclusive bit set from ever running. This includes any users running a system-wide measurement as well as any kernel use of the performance counters (including the commonly enabled NMI Watchdog Timer interface).
- running the idle task. While you can currently enable this for any event type, it is ignored for all but software events.
- mmap
- The mmap bit enables.
- freq
- If this bit is set, then sample_frequency not sample_period is used when setting up the sampling interval.
- inherit_stat
- This bit enables saving of event counts on context switch for inherited tasks. This is meaningful only if the inherit field is set.
- enable_on_exec
- If this bit is set, a counter is automatically enabled after a call to exec(2).
- task
- If this bit is set, then fork/exit notifications are included in the ring buffer.
- watermark
- If set, have an overflow notification happen when we cross the wakeup_watermark boundary. Otherwise, overflow notifications happen after wakeup_events samples.
- precise_ip (since Linux the counterpart of the mmap field. This enables generation of PERF_RECORD_MMAP samples for mmap(2) calls that do not have PROT_EXEC set (for example data and SysV shared memory).
- sample_id_all (since Linux 2.6.38)
- If set, then TID, TIME, ID, STREAM_ID, and CPU can additionally.
- write_backward (since Linux 4.6)
- This causes the ring buffer to be written from the end to the beginning. This is to support reading from overwritable ring buffer.
- namespaces (since Linux 4.11)
- This enables the generation of PERF_RECORD_NAMESPACES records when a task enters a new namespace. Each namespace has a combination of device and inode numbers.
- ksymbol (since Linux 5.0)
- This enables the generation of PERF_RECORD_KSYMBOL records when new kernel symbols are registered or unregistered. This is analyzing dynamic kernel functions like eBPF.
- bpf_event (since Linux 5.0)
- This enables the generation of PERF_RECORD_BPF_EVENT records when an eBPF program is loaded or unloaded.
- auxevent (since Linux 5.4)
- This allows normal (non-AUX) events to generate data for AUX events if the hardware supports it.
- cgroup (since Linux 5.7)
- This enables the generation of PERF_RECORD_CGROUP records when a new cgroup is created (and activated).
- text_poke (since Linux 5.8)
- This enables the generation of PERF_RECORD_TEXT_POKE records when there's a changes to the kernel text (i.e., self-modifying code).
- as 1.
- bp_type (since Linux.
The values can be combined via a bitwise or, but the combination of HW_BREAKPOINT_R or HW_BREAKPOINT_W with HW_BREAKPOINT_X is not allowed.
- bp_addr (since Linux 2.6.33)
-)
- config2 is a further extension of the config1 field.
- branch_sample_type (since Linux 3.4)
- If PERF_SAMPLE_BRANCH_STACK is enabled, then this specifies what branches to include in the branch record.
- The first part of the value is the privilege level, which is a combination of one of the values listed below. If the user does not set privilege level explicitly, the kernel will use the event's privilege level. Event and branch privilege levels do not have to match.
- PERF_SAMPLE_BRANCH_USER
- Branch target is in user space.
- PERF_SAMPLE_BRANCH_KERNEL
- Branch target is in kernel space.
- PERF_SAMPLE_BRANCH_HV
- Branch target is in hypervisor.
- PERF_SAMPLE_BRANCH_PLM_ALL
- A convenience value that is the three preceding values ORed together.
In addition to the privilege value, at least one or more of the following bits must be set.
- PERF_SAMPLE_BRANCH_ANY
- Any branch type.
- PERF_SAMPLE_BRANCH_ANY_CALL
- Any call branch
Once a perf_event_open(), the error ENOSPC results.
Here is the layout of the data returned by a read:
- *
- If PERF_FORMAT_GROUP was specified to allow reading all events in a group at once::
struct read_format {
u64 value; /* The value of the event */
u64 time_enabled; /* if PERF_FORMAT_TOTAL_TIME_ENABLED */
u64 time_running; /* if PERF_FORMAT_TOTAL_TIME_RUNNING */
u64 id; /* if PERF_FORMAT_ID */ };
The values read are as follows:
- nr
- The number of events in this file descriptor.
When using perf_event_open() in sampled mode, asynchronous events (like counter overflow or PROT_EXEC mmap tracking) are logged into a ring-buffer. This ring-buffer is created and accessed through mmap(2). the fields in the perf_event_mmap_page structure in more detail:
- version
- Version number of this structure.
- compat_version
- The lowest version this is compatible with.
- lock
- A seqlock for synchronization.
- index
- A unique hardware counter identifier.
- offset
- When using rdpmc for reads this offset value must be added to the one returned by rdpmc to get the current total event count.
- time_enabled
- Time the event was active.
- time_running
- Time the event was running.
- cap_usr_time / cap_usr_rdpmc / cap_bit0 (since Linux 3.4)
- There was a bug in the definition of cap_usr_time and cap_usr_rdpmc from Linux 3.4 until Linux 3.11. Both bits were defined to point to the same location, so it was impossible to know if cap_usr_time or cap_usr_rdpmc were actually set.
- Starting with)
-);
- cap_user_time (since Linux 3.12)
- This bit indicates the hardware has a constant, nonstop timestamp counter (TSC on x86).
- cap_user_time_zero (since Linux 3.12)
- Indicates the presence of time_zero which allows mapping timestamp values to the hardware clock.
- & (((u64 & (((u64)1 << time_shift) - 1); timestamp = time_zero + quot * time_mult +
((rem * time_mult) >> time_shift);
- data_head
- This points to the head of the data section. The value continuously increases, it does not wrap. The value needs to be manually wrapped by the size of the mmap buffer before accessing the samples.
- On SMP-capable platforms, after reading the data_head value, user space should issue an rmb().
- data_tail
- When the mapping is PROT_WRITE, the data_tail value should be written by user space to reflect the last read data. In this case, the kernel will not overwrite unread data.
-ap(2)-ing record (that follows the header) depend on the type selected as shown.
- PERF_RECORD_MMAP
- The MMAP events record the PROT_EXEC mappings so that we can correlate user-space IPs to code. They have the following structure:
struct {
struct perf_event_header header;
u32 pid, tid;
u64 addr;
u64 len;
u64 pgoff;
char filename[]; };
- PERF_RECORD_LOST
- This record indicates when events are lost.
struct {
struct perf_event_header header;
u64 id;
u64 lost;
struct sample_id sample_id; };
- PERF_RECORD_COMM
- This record indicates a change in the process name.
struct {
struct perf_event_header header;
u32 pid;
u32 tid;
char comm[];
struct sample_id sample_id; };
- PERF_RECORD_EXIT
- This record indicates a process exit event.
struct {
struct perf_event_header header;
u32 pid, ppid;
u32 tid, ptid;
u64 time;
struct sample_id sample_id; };
- PERF_RECORD_THROTTLE, PERF_RECORD_UNTHROTTLE
- This record indicates a throttle/unthrottle event.
struct {
struct perf_event_header header;
u64 time;
u64 id;
u64 stream_id;
struct sample_id sample_id; };
- PERF_RECORD_FORK
- This record indicates a fork event.
struct {
struct perf_event_header header;
u32 pid, ppid;
u32 tid, ptid;
u64 time;
struct sample_id sample_id; };
- PERF_RECORD_READ
- This record indicates a read event.
struct {
struct perf_event_header header;
u32 pid, tid;
struct read_format values;
struct sample_id sample_id; };
- */
u64 phys_addr; /* if PERF_SAMPLE_PHYS_ADDR */
u64 cgroup; /* if PERF_SAMPLE_CGROUP */ };
- sample_id
- If PERF_SAMPLE_IDENTIFIER is enabled, a 64-bit unique ID is included. This is a duplication of the PERF_SAMPLE_ID id value, but included at the beginning of the sample so parsers can easily obtain the value.
- ip
- If PERF_SAMPLE_IP is enabled, then a 64-bit instruction pointer value is included.
- pid, tid
- If PERF_SAMPLE_TID is enabled, then a 32-bit process ID and 32-bit thread ID are included.
- time
- If PERF_SAMPLE_TIME is enabled, then a 64-bit timestamp is included. This is obtained via local_clock() which is a hardware timestamp if available and the jiffies value if not.
- addr
- If PERF_SAMPLE_ADDR is enabled, then a 64-bit address is included. This is usually the address of a tracepoint, breakpoint, or software event; otherwise the value is 0.
- id
- If PERF_SAMPLE_ID is enabled, a 64-bit unique ID is included. If the event is a member of an event group, the group leader ID is returned. This ID is the same as the one returned by PERF_FORMAT_ID.
- stream_id
- If PERF_SAMPLE_STREAM_ID is enabled, a 64-bit unique ID size is included followed by an array of 8-bit values of length size. The values are padded with 0 to have 64-bit alignment.
- This RAW record data is opaque with respect to the ABI. The ABI doesn't make any promises with respect to the stability of its content, it may vary depending on event, hardware, and kernel version.
- bnr, lbr[bnr]
- If PERF_SAMPLE_BRANCH_STACK is enabled, then a 64-bit value indicating the number of records is included, followed by bnr perf_branch_entry structures which each include the fields:
- from
- This indicates the source instruction (may not be a branch).
- to
- The branch target.
- mispred
- The branch target was mispredicted.
- predicted
- The branch target was predicted.
- in_tx (since Linux 3.11)
- The branch was in a transactional.
- weight
- If PERF_SAMPLE_WEIGHT is enabled, then a 64-bit value provided by the hardware is recorded that indicates how costly the event was. This allows expensive events to stand out more clearly in profiles.
- data_src
- If PERF_SAMPLE_DATA_SRC is enabled, then a 64-bit value is recorded that is made up of the following fields:
- PERF_MEM_OP_NA
- Not available
- PERF_MEM_OP_LOAD
- Load instruction
- PERF_MEM_OP_STORE
- Store instruction
- PERF_MEM_OP_PFETCH
- Prefetch
- PERF_MEM_OP_EXEC
- Executable code
- mem_lvl
- Memory hierarchy level hit or miss, a bitwise combination of the following, shifted left by PERF_MEM_LVL_SHIFT:
- PERF_MEM_LVL_NA
- Not available
- PERF_MEM_LVL_HIT
- Hit
- PERF_MEM_LVL_MISS
- Miss
- PERF_MEM_LVL_L1
- Level 1 cache
- PERF_MEM_LVL_LFB
- Line fill buffer
- PERF_MEM_LVL_L2
- Level 2 cache
- PERF_MEM_LVL_L3
- Level 3 cache
- PERF_MEM_LVL_LOC_RAM
- Local DRAM
- PERF_MEM_LVL_REM_RAM1
- Remote DRAM 1 hop
- PERF_MEM_LVL_REM_RAM2
- Remote DRAM 2 hops
- PERF_MEM_LVL_REM_CCE1
- Remote cache 1 hop
- PERF_MEM_LVL_REM_CCE2
- Remote cache 2 hops
- PERF_MEM_LVL_IO
- I/O memory
- PERF_MEM_LVL_UNC
- Uncached memory
-:
- PERF_MEM_TLB_NA
- Not available
- PERF_MEM_TLB_HIT
- Hit
- PERF_MEM_TLB_MISS
- Miss
- PERF_MEM_TLB_L1
- Level 1 TLB
- PERF_MEM_TLB_L2
- Level 2 TLB
- PERF_MEM_TLB_WK
- Hardware walker
- PERF_MEM_TLB_OS
- OS fault handler
- transaction
- If the PERF_SAMPLE_TRANSACTION flag is set, then a 64-bit field is recorded describing the sources of any transactional memory aborts.
- The field is a bitwise combination of the following values:
- PERF_TXN_ELISION
- Abort from an elision type transaction (Intel-CPU-specific).
- PERF_TXN_TRANSACTION
- Abort from a generic transaction.
- PERF_TXN_SYNC
- Synchronous abort (related to the reported instruction).
- PERF_TXN_ASYNC
- Asynchronous abort (not related to the reported instruction).
- PERF_TXN_RETRY
- Retryable abort (retrying the transaction may have succeeded).
- PERF_TXN_CONFLICT
- Abort due to memory conflicts with other threads.
- PERF_TXN_CAPACITY_WRITE
- Abort due to write capacity overflow.
- PERF_TXN_CAPACITY_READ
- Abort due to read capacity overflow.
- In addition, a user-specified abort code can be obtained from the high 32 bits of the field by shifting right by PERF_TXN_ABORT_SHIFT and masking with the value PERF_TXN_ABORT_MASK.
- abi, regs[weight(mask)]
- If PERF_SAMPLE_REGS_INTR_intr attr field. The number of values is the number of bits set in the sample_regs_intr bit mask.
- phys_addr
- If the PERF_SAMPLE_PHYS_ADDR flag is set, then the 64-bit physical address is recorded.
- cgroup
- If the PERF_SAMPLE_CGROUP flag is set, then the 64-bit cgroup ID (for the perf_event subsystem) is recorded. To get the pathname of the cgroup, the ID should match to one in a PERF_RECORD_CGROUP .
-; };
- PERF_RECORD_LOST_SAMPLES (since Linux 4.2)
- When using hardware sampling (such as Intel PEBS) this record indicates some number of samples that may have been lost.
struct {
struct perf_event_header header;
u64 lost;
struct sample_id sample_id; };
-.
- PERF_RECORD_NAMESPACES (since Linux 4.11)
- This record includes various namespace information of a process.
struct {
struct perf_event_header header;
u32 pid;
u32 tid;
u64 nr_namespaces;
struct { u64 dev, inode } [nr_namespaces];
struct sample_id sample_id; };
- pid
- is the process ID
- tid
- is the thread ID
- nr_namespace
- is the number of namespaces in this record
- Each namespace has dev and inode fields and is recorded in the fixed position like below:
- NET_NS_INDEX=0
- Network namespace
- UTS_NS_INDEX=1
- UTS namespace
- IPC_NS_INDEX=2
- IPC namespace
- PID_NS_INDEX=3
- PID namespace
- USER_NS_INDEX=4
- User namespace
- MNT_NS_INDEX=5
- Mount namespace
- CGROUP_NS_INDEX=6
- Cgroup namespace
- PERF_RECORD_KSYMBOL (since Linux 5.0)
- This record indicates kernel symbol register/unregister events.
struct {
struct perf_event_header header;
u64 addr;
u32 len;
u16 ksym_type;
u16 flags;
char name[];
struct sample_id sample_id; };
- addr
- is the address of the kernel symbol.
- len
- is the length of the kernel symbol.
- ksym_type
- is the type of the kernel symbol. Currently the following types are available:
- PERF_RECORD_KSYMBOL_TYPE_BPF
- The kernel symbol is a BPF function.
- PERF_RECORD_BPF_EVENT (since Linux 5.0)
- This record indicates BPF program is loaded or unloaded.
struct {
struct perf_event_header header;
u16 type;
u16 flags;
u32 id;
u8 tag[BPF_TAG_SIZE];
struct sample_id sample_id; };
- PERF_BPF_EVENT_PROG_LOAD
- A BPF program is loaded
- PERF_BPF_EVENT_PROG_UNLOAD
- A BPF program is unloaded
- PERF_RECORD_CGROUP (since Linux 5.7)
- This record indicates a new cgroup is created and activated.
struct {
struct perf_event_header header;
u64 id;
char path[];
struct sample_id sample_id; };
- id
- is the cgroup identifier. This can be also retrieved by name_to_handle_at(2) on the cgroup path (as a file handle).
- path
- is the path of the cgroup from the root.
- PERF_RECORD_TEXT_POKE (since Linux 5.8)
- This record indicates a change in the kernel text. This includes addition and removal of the text and the corresponding length is zero in this case.
struct {
struct perf_event_header header;
u64 addr;
u16 old_len;
u16 new_len;
u8 bytes[];
struct sample_id sample_id; };
Overflow handling¶.
-).
- PERF_EVENT_IOC_REFRESH
- Non-inherited overflow counters can use this to enable a counter for a number of overflows specified by the argument, after which it is disabled. Subsequent calls of this ioctl add the argument value to the current count. file descriptor rather than the default one. The file descriptors must all be on the same CPU.
- The argument specifies the desired file descriptor, or -1 if output should be ignored.
- PERF_EVENT_IOC_SET_FILTER (since Linux 2.6.33)
- This adds an ftrace filter to this event.
- The argument is a pointer to the desired ftrace filter.
- PERF_EVENT_IOC_ID (since Linux 3.12)
-_PERFMON (since Linux 5.8) or_PERFMON (since Linux 5.8) or
Files in /proc/sys/kernel/
- /proc/sys/kernel/perf_event_paranoid
- The perf_event_paranoid file can be set to restrict access to the performance counters.
- 2
- allow only user-space measurements (default since Linux 4.6).
- 1
- allow both kernel and user measurements (default before Linux 4.6).
-.
RETURN VALUE¶
perf_event_open() returns the new file descriptor, or -1 if an error occurred (in which case, errno is set appropriately).
ERRORS¶_PERFMON (since Linux 5.8) or.
- EINTR
- Returned when trying to mix perf and ftrace handling for a uprobe.
-_PERFMON (since Linux 5.8) or CAP_SYS_ADMIN permissions (or a more permissive perf_event paranoid setting). This includes setting a breakpoint on a kernel address, and (since Linux 3.13) setting a kernel function-trace tracepoint.
VERSION¶
perf_event_open() was introduced in Linux 2.6.31 but was called perf_counter_open(). It was renamed in Linux 2.6.32.
CONFORMING TO¶
This perf_event_open() system call Linux-specific and should not be used in programs intended to be portable.
NOTES¶
Glibc does not provide a wrapper for this system call; call it using syscall(2). See the example below.
The official way of knowing if perf_event_open() support is enabled is checking for the existence of the file /proc/sys/kernel/perf_event_paranoid.
CAP_PERFMON capability (since Linux 5.8) provides secure approach to performance monitoring and observability operations in a system according to the principal of least privilege (POSIX IEEE 1003.1e). Accessing system performance monitoring and observability operations using CAP_PERFMON rather than the much more powerful CAP_SYS_ADMIN excludes chances to misuse credentials and makes operations more secure. CAP_SYS_ADMIN usage for secure system performance monitoring and observability is discouraged in favor of the CAP_PERFMON capability.
BUGS¶
The F_SETOWN_EX option to fcntl(2) is needed to properly get overflow signals in threads. This was introduced in Linux 2.6.32..
EXAMPLES¶
The following is a short example that measures the total instruction count of a call to printf(3).
(pe));
pe.type = PERF_TYPE_HARDWARE;
pe.size = sizeof(pe);(count));
printf("Used %lld instructions\n", count);
close(fd); }
SEE ALSO¶
perf(1), fcntl(2), mmap(2), open(2), prctl(2), read(2)
Documentation/admin-guide/perf-security.rst in the kernel source tree
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at | https://manpages.debian.org/testing/manpages-dev/perf_event_open.2.en.html | CC-MAIN-2022-21 | refinedweb | 3,532 | 57.87 |
This lab should be done with a partner of your choosing.
The setup procedure for this lab will be similar to previous labs. First, both you and your partner should run setup31 to grab the starting point code for this assignment. Suppose users molly and tejas which to work together. Molly (mdanner1) can start by running
[~]$ setup31 labs/08 tdanner1Once the script finishes, Tejas (tdanner1) should run
[~]$ setup31 labs/08 mdanner1
For the next step only one partner should copy over the starting code
[~]$ cd ~/cs31/labs/08 [08]$ cp -r ~lammert/public/cs31/labs/08/* ./ [08]$ ls Makefile parsecmd.c parsecmd.h tester.cNow push the changes to your partner
[08]$ git add * [08]$ git commit -m "lab 8 start" [08]$ git pushYour partner can now pull the changes. In this case if Tejas wishes to get files Molly pushed, he would run
[~]$ cd ~/cs31/labs/08 [08]$ git pull
// This is the function you used in the shell lab to convert // the comandline string into an array of strings: one per command // line argument. This version uses a fixed-size max length comdline // and argv list. int parse_cmd(const char *cmdline, char **argv); // this is a slightly different version of a command line parsing // function: it dynamically allocates space for the argv list of // strings that it returns to the caller. The bg value is now // passed-by-reference and this functions sets it to either 0 or 1 // depending on if the command line has a & in it or not char **parse_cmd_dynamic(const char *cmdline, int *bg);More details about each of these are described in the "Details and Requirements" section below.
gcc -g -o tester tester.c parsemd.o
#include "parsecmd.h"parsecmd.h contains two function prototypes for the functions you will implement in parsecmd.c.
$ cat foo.texThese functions will be passed the string:
"cat foo.tex\n"And will parse the command line string into the argv array:
argv [0] ---->"cat" argv [1] ---->"foo.tex" argv [2] ----| (NULL)The main difference between the two functions is that the first uses a single statically declared char array into which will be each each argv[i] string, and the second function dynamically allocates space for both the argv array and for each string of command line argument.
/* * parse_cmd - Parse the command line and build the argv array. * cmdline: the command line string entered at the shell prompt * (const means that the function will not modify the cmdline string) * argv: an array of size MAXARGS of char * * parse_cmd will initialize its contents from the passed * cmdline string. * returns: non-zero if the command line includes &, to * run in the background, or zero if not */ int parse_cmd(const char *cmdline, char *argv[]);This function will initialize the passed argv array to point into substrings that it creates in a global char buffer (initialized to a copy of the passed command line string). The buffer is already declared as static global char array in parsecmd.c:
static char cmdline_copy[MAXLINE];The parse_cmd function will:
For example, if the command line entered is the following
$ ls -l -a &The command line string associated with this entered line is:
" ls -l -a &\n"the copy of it in the cmdline_copy buffer looks like:
cmdline_copy 0 | ' ' | 1 | ' ' | 2 | 'l' | 3 | 's' | 4 | ' ' | 5 | ' ' | 6 | '-' | 7 | 'l' | 8 | ' ' | 9 | ' ' | 10 | '-' | 11 | 'a' | 12 | ' ' | 13 | '&' | 14 | '\n'| 15 | '\0'|Your function will TOKENIZE this string and set each argv array bucket to point into the start of its associated token string in the char buffer (cmdline_copy array):
0 1 2 3 ------------------------ argv | * | * | * | * | ---|-----|-----|-----|-- cmdline_copy 0 | ' ' | | | | | 1 | ' ' | | | | | 2 | 'l' |<---------- | | ---- 3 | 's' | | | (NULL) 4 | '\0'| | | 5 | ' ' | | | 6 | '-' |<---------------- | 7 | 'l' | | 8 | '\0'| | 9 | ' ' | | 10 | '-' |<----------------------- 11 | 'a' | 12 | '\0'| 13 | '&' | 14 | '\n'| 15 | '\0'|Note the changes to the cmdline_copy string contents and the assignment of argv bucket values into different starting points in the char buffer. Printing out the argv strings in order will list the
ls -l -aThe function should return 1 if there is an ampersand in the command line or 0 otherwise (so, 1 in the above example)
/* * parse_cmd_dynamic - parse the passed command line into an argv array * * cmdline: the command line string entered at the shell prompt * (const means that this function cannot modify cmdline) * bg: sets the value pointed to by bg 1 if command line is run in * background, 0 otherwise (a pass-by-reference parameter) * * returns: a dynamically allocated array of strings, each element * stores a string corresponding to a command line argument * (the caller is responsible for freeing the returned * argv list). */ char **parse_cmd_dynamic(const char *cmdline, int *bg);This function will find tokens much like the previous version. However, it must also determine how many tokens are in the cmdline string, malloc EXACTLY the right number of argv buckets for the particular cmdline string (remember an extra bucket at the end for NULL), and then fore each token it will malloc up exactly enough space for a a char array to store the string corresponding to a command line argument (remember an extra bucket for the terminating '\0' character).
For example, if the cmdline string is:
" ls -l -a \n"This function will malloc up an argv array of
// local var to store dynamically allocated args array of strings char **args; args --------->[0]-----> "ls" [1]-----> "-l" [2]-----> "-a" [3]-----| (NULL)Your function cannot modify the cmdline string that is passed in to it But, you may malloc up space for a local copy of the cmdline string to tokenize if this helps. If you do this, however, your function must free this copy before it returns; the returned args list should not point into this copy like the parse_cmd function does, but each command line argument should be malloced up separately as a distinct string of exactly the correct size).
This function is more complicated to implement and will likely
require at least more than a single passes through the chars
of the command line string.
cat foo.txt blah.txt & cat foo.txt blah.txt& cat foo.txt blah.txt &TEST that your code works for command lines with any amount of whitespace between command line arguments
strlen, strcpy, strchr, strstr, isspaceHere is an example of using strstr and modifying a string to create a substring:
int i; char *ptr, *str; str = malloc(sizeof(char)*64); if(!str) { exit(1); } ptr = strcpy(str, "hello there, how are you?"); if(!ptr) { exit(1); } ptr = strstr(str, "how"); if(ptr) { printf("%s\n", ptr); // prints: how are you? ptr[3] = '\0'; printf("%s\n", ptr); // prints: how } else { printf("no how in str\n"); }strstr may or may not be useful in this assignment, but you will need to create token strings in a way that has some similarities to this example.
"hello there & how are you?"gets parsed into an argv list as:
argv[0]---->"hello" argv[1]---->"there" argv[2]----| (NULL)
(gdb) display ptr (gdb) display i (gdb) display buffer
To submit your code, simply commit your changes locally using git add and git commit. Then run git push while in the labs/08 directory. Only one partner needs to run the final push, but make sure both partners have pulled and merged each others changes. See the section on using a shared repo on the git help page. | https://web.cs.swarthmore.edu/~adanner/cs31/f14/Labs/lab08.php | CC-MAIN-2022-33 | refinedweb | 1,222 | 58.55 |
Unity 2017.4.15Mettre à jour mainteant
Vous découvrez Unity ? Commencez !
Notes sur la version
Fixes
2D: Fixed tiled animated Sprites glitch when iterating over frames. (1076834, 1093240)
Android: Fixed crash in "AudioManager::ShutdownReinitializeAndReload" with Bluetooth headset pairing. (1086597)
Android: Fixed unpredictable ordering with FixedUpdate. (1071756)
IL2CPP: Fixed COM Objects representing Windows.Foundation.IAsyncAction and Windows.Foundation.IAsyncOperation getting destroyed after converting them to System.Threading.Tasks.Task via "AsTask" extension method. (1086209)
IL2CPP: Support Marshal.SizeOf for types with a generic base class when the base class does not use the generic type in any field. (1083239)
iOS: Fixed a crash in iOS 7 due to the use of [UIScreen coordinateSpace]. (1050777, 1093249)
iOS: Fixed an issue where the namespace UnityEditor.iOS.Xcode was not found when running the Editor in batch mode. (1018716, 1082694)
iOS: Fixed screen.safeArea not reported correctly when orientation is changed. (1028312, 1044173)
Physics: Fixed a crash when setting a too small size to Terrain size. (1048878, 1079802)
Physics: Fixed an issue where colliders without physics material don't return correct default material in Physics Settings. (1058082, 1080052)
Physics: Fixed an issue where mass properties are not correctly updated when changing collider scale. (1024453, 1079803)
Physics: Fixed an issue where transform to CharacterController in OnControllerColliderHit is ignored. (1005564, 1080047)
Scripting: Fixed crash with message box "GetThreadContext failed". (1082246)
Scripting Upgrade: Fixed hang when running tests in Editor. (971923)
Shaders: Fixed not able to load shaders from AssetBundles that were created in 2017.2. (1096788)
Revision: 5d485b4897a7
Changeset: 5d485b4897a7 | https://unity3d.com/fr/unity/whatsnew/unity-2017.4.15 | CC-MAIN-2020-16 | refinedweb | 254 | 61.02 |
On 5/20/2012 9:33 PM, Guido van Rossum wrote: > Generally speaking the PEP is a beacon if clarity. But I stumbled > about one feature that bothers me in its specification and through its > lack of rationale. This is the section on Dynamic Path Computation: > (). > The specification bothers me because it requires in-place modification > of sys.path. Does this mean sys.path is no longer a plain list? I'm > sure it's going to break things left and right (or at least things > will be violating this requirement left and right); there has never > been a similar requirement (unlike, e.g., sys.modules, which is > relatively well-known for being cached in a C-level global variable). > Worse, this apparently affects __path__ variables of namespace > packages as well, which are now specified as an unspecified read-only > iterable. (I can only guess that there is a connection between these > two features -- the PEP doesn't mention one.) Again, I would be much > happier with just a list. sys.path would still be a plain list. It's the namespace package's __path__ that would be a special object. Every time __path__ is accessed it checks to see if it's parent path has been modified. The parent path for top level modules is sys.path. The __path__ object detects modification by keeping a local copy of the parent, plus a reference to the parent, and compares them. >. > >>From my POV, this is the only show-stopper for acceptance of PEP 420. > (That is, either a rock-solid rationale should be supplied, or the > constraints should be removed.) I don't have a preference on whether the feature stays or goes, so I'll let PJE give the use case. I've copied him here in case he doesn't read python-dev. Now that I think about it some more, the motivation is probably to ease the migration from setuptools, which does provide this feature. Eric. | https://mail.python.org/pipermail/python-dev/2012-May/119573.html | CC-MAIN-2018-05 | refinedweb | 329 | 65.32 |
If you’re an Android developer, then you may have heard good things about RxJava, a popular open-source implementation of the ReactiveX library that brings reactive programming to the Java Virtual Machine (JVM).
RxJava is designed to take the pain out of working with asynchronous streams of data—although as you'll see, RxJava's definition of "data" is pretty broad. Because RxJava is a JVM-compatible library, you can use it on a wide range of platforms, but in this series I’ll show you how to use RxJava 2 for Android development.
By the end of this series you’ll have mastered all the RxJava 2 essentials so you can start creating highly reactive apps that can process a wide range of synchronous and asynchronous data—all using code that’s more concise and manageable than you’d typically be able to achieve with Java alone.
In addition to providing an introduction for RxJava newcomers, if you’re a RxJava 1 veteran who’s looking to make the leap to RxJava 2, then this series will help make this transition as smooth as possible. While upgrading to the latest version of a library may not sound like a big deal, RxJava 2 isn't your typical update—it's a complete rewrite of RxJava. With so much change it's easy to get confused, so taking some time to familiarize yourself with RxJava 2 from a beginner's perspective could save you a lot of time and frustration in the long run.
In this first post, I'll be covering what RxJava is and the key benefits it offers Android developers. We'll also take an in-depth look at the core components of any RxJava project:
Observers,
Observables, and subscriptions. By the end of this tutorial, you'll have created a simple "Hello World"-style application that includes all of these core components.
The other major building blocks of RxJava are operators, so in part two I’ll be exploring the different ways you can use operators to transform, combine, filter and manipulate your application's data.
In the final installment, we'll move beyond the core RxJava library and take a look at RxAndroid, an entire library that's packed with all the Android-specific extensions you’ll need to unlock the full potential of reactive programming for Android.
We have lots to cover, so let's start with the essentials:
What Is RxJava, Anyway?
RxJava is a library that lets you create applications in the reactive programming style. At its core, reactive programming provides a clean, efficient way of processing and reacting to streams of real-time data, including data with dynamic values.
These data streams don't necessarily have to take the form of traditional data types, as RxJava pretty much treats everything as a stream of data—everything from variables to properties, caches, and even user input events like clicks and swipes.
The data emitted by each stream can either be a value, an error, or a "completed" signal, although you don’t necessarily have to implement the last two. Once you've created your data-emitting streams, you combine them with reactive objects that consume and then act on this data, performing different actions depending on what the stream has emitted. RxJava includes a whole bunch of useful operators to work with streams, making it easy to do things like filtering, mapping, delaying, counting, and much more.
To create this workflow of data streams and objects that react to them, RxJava extends the Observer software design pattern. Essentially, in RxJava you have
Observable objects that emit a stream of data and then terminate, and
Observer objects that subscribe to
Observables. An
Observer receives a notification each time their assigned
Observable emits a value, an error, or a completed signal.
So at a very high level, RxJava is all about:
- Creating an
Observable.
- Giving that
Observablesome data to emit.
- Creating an
Observer.
- Assigning the
Observerto an
Observable.
- Giving the
Observertasks to perform whenever it receives an emission from its assigned
Observable.
Why RxJava?
Learning any new technology requires time and effort, and as a data-oriented library, RxJava isn't always the easiest API to get to grips with.
To help you decide whether learning RxJava is worth the initial investment, let's explore some of the key benefits of adding the RxJava library to your Android projects.
More Concise, Readable Code
Code that's complex, verbose and difficult to read is always bad news. Messy code is more prone to bugs and other inefficiencies, and if any errors do occur then you’re going to have a much tougher time tracking down the source of these errors if your code is a mess.
Even if your project does build without any errors, complex code can still come back to haunt you—typically when you decide to release an update to your app a few months down the line, boot up your project, and are immediately confronted with a wall of tangled, confusing code!
RxJava simplifies the code required to handle data and events by allowing you to describe what you want to achieve, rather than writing a list of instructions for your app to work through. RxJava also provides a standard workflow that you can use to handle all data and events across your application—create an
Observable, create an
Observer, assign the observable to that observer, rinse and repeat. This formulaic approach makes for very straightforward, human-readable code.
Multithreading Made Easy
Modern Android applications need to be able to multi-task. At the very least, your users will expect to be able to continue interacting with your app's UI while your application is performing some work in the background, such as managing a network connection, downloading a file, or playing music. The problem is that Android is single-threaded by default, so if your app is ever going to multi-task successfully then you'll need to create some additional threads.
Out of the box, Android does provide a number of ways of creating additional threads, such as services and
IntentServices, but none of these solutions are particularly easy to implement, and they can quickly result in complex, verbose code that’s prone to errors.
RxJava aims to take the pain out of creating multi-threaded Android apps, by providing special schedulers and operators. These give you an easy way of specifying the thread where work should be performed and the thread where the results of this work should be posted. RxJava 2.0 includes a number of schedulers by default, including
Schedulers.newThread, which is particularly useful as it creates a new thread.
To change the thread where work is performed, you just need to change where an observer subscribes to an observable, using the
subscribeOn operator. For example, here we’re creating a new thread and specifying that the work should be performed on this new thread:
observable.subscribeOn(Schedulers.newThread())
Another long-standing problem with multithreading on Android is that you can only update your app's UI from the main thread. Typically, whenever you need to post the results of some background work to your app's UI, you have to create a dedicated
Handler.
Once again, RxJava has a much more straightforward solution. You can use the
observeOn operator to specify that an Observable should send its notifications using a different scheduler, which essentially allows you to send your Observable’s data to the thread of your choice, including the main UI thread.
This means that with just two lines of code, you can create a new thread and send the results of work performed on this thread to Android's main UI thread:
.subscribeOn(Schedulers.newThread()) .observeOn(AndroidSchedulers.mainThread())
Okay, so technically we're cheating a little here, as
AndroidSchedulers.mainThread is only available as part of the RxAndroid library, which we won’t be looking at until part three. However, this example does give you a glimpse into RxJava and RxAndroid’s power to simplify an area of Android development that's known for being overly complicated.
Increased Flexibility
Observables emit their data in a way that completely hides the way that data was created. Since your observers can't even see how the data was created, you're free to implement your
Observables in whatever way you want.
Once you've implemented your
Observables, RxJava provides a huge range of operators that you can use to filter, merge and transform the data that’s being emitted by these
Observables. You can even chain more and more operators together until you've created exactly the data stream your application needs.
For example, you could combine data from multiple streams, filter the newly merged stream, and then use the resulting data as the input for a subsequent data stream. And remember that in RxJava pretty much everything is treated as a stream of data, so you can even apply these operators to non-traditional "data", such as click events.
Create More Responsive Apps
Gone are the days when an app could get away with loading a page of content and then waiting for the user to tap the Next button. Today, your typical mobile app needs to be able to react to an ever-growing variety of events and data, ideally in real time. For example, your typical social networking app needs to be constantly listening for incoming likes, comments and friend requests, while managing a network connection in the background and responding immediately whenever the user taps or swipes the screen.
The RxJava library was designed to be able to manage a wide range of data and events simultaneously and in real time, making it a powerful tool for creating the kind of highly responsive applications that modern mobile users expect.
Adding RxJava to Android Studio
If you've decided that RxJava does have something to offer your Android development practice, then the first step to becoming an RxJava master is adding the library to your project.
Create a new Android Studio project with the settings of your choice, and then open your module-level build.gradle file and add the latest version of
io.reactivex.rxjava2:rxjava as a dependency.
At the time of writing, RxJava 2.0.5 was the most recent release, so my build.gradle file looks like' compile 'io.reactivex.rxjava2:rxjava:2.0.5' }
When prompted, click Sync now.
Next, open your
MainActivity file and add the imports you'll need to start working with the core RxJava features:
import io.reactivex.Observable; import io.reactivex.ObservableEmitter; import io.reactivex.ObservableOnSubscribe; import io.reactivex.Observer; import io.reactivex.disposables.Disposable;
If you're migrating from RxJava 1, then these imports might not be what you were expecting, as RxJava 1 used a completely different package name (
rx to be precise).
However, this isn't an arbitrary name change: the different package names give you the option to use RxJava 1 and RxJava 2 side by side in the same project. If you’re halfway through a project that uses RxJava 1, then you can add the RxJava 2 library and start using the updated 2 features immediately, without breaking any of your RxJava 1 code.
If you're starting your RxJava journey with version 2, then just be aware that if you encounter any RxJava tutorials or code using the
rx package name, then this is RxJava 1 code and is unlikely to be compatible with the version 2 library.
The Building Blocks of RxJava
So far, we've only looked at RxJava at a very high level. It's time to get more specific and take an in-depth look at two of the most important components that’ll crop up time and time again throughout your RxJava work:
Observers and
Observables.
By the end of this section, not only will you have a solid understanding of these two core components, but you'll have created a fully functioning application consisting of an
Observable that emits data and an
Observer that reacts to these emissions.
Create an
Observable
An
Observable is similar to an
Iterable in that, given a sequence, it'll iterate through that sequence and emit each item, although
Observables typically don't start emitting data until an
Observer subscribes to them.
Each time an
Observable emits an item, it notifies its assigned
Observer using the
onNext() method. Once an
Observable has transmitted all of its values, it terminates by calling either:
onComplete: Called if the operation was a success.
onError: Called if an
Exceptionwas thrown.
Let's look at an example. Here, we're creating an Observable that emits the numbers 1, 2, 3 and 4, and then terminates.
Observable<Integer> observable = Observable.create(new ObservableOnSubscribe<Integer>() { @Override public void subscribe(ObservableEmitter<Integer> e) throws Exception { //Use onNext to emit each item in the stream// e.onNext(1); e.onNext(2); e.onNext(3); e.onNext(4); //Once the Observable has emitted all items in the sequence, call onComplete// e.onComplete(); } } );
Note that in this example I’m spelling out exactly what’s happening, so don't let the amount of code put you off! This is much more code than you’ll typically use to create
Observables in your real-life RxJava projects.
Create an Observer
Observers are objects that you assign to an
Observable, using the
subscribe() operator. Once an
Observer is subscribed to an
Observable, it’ll react whenever its
Observer emits one of the following:
onNext: The
Observablehas emitted a value.
onError: An error has occurred.
onComplete: The
Observablehas finished emitting all its values.
Let’s create an
Observer that's subscribed to our 1, 2, 3, 4
Observable. To help keep things simple, this
Observer will react to
onNext,
onError and
onComplete by printing a message to Android Studio’s Logcat Monitor:
Observer<Integer> observer = new Observer<Integer>() { @Override public void onSubscribe(Disposable d) { Log.e(TAG, "onSubscribe: "); } @Override public void onNext(Integer value) { Log.e(TAG, "onNext: " + value); } @Override public void onError(Throwable e) { Log.e(TAG, "onError: "); } @Override public void onComplete() { Log.e(TAG, "onComplete: All Done!"); } }; //Create our subscription// observable.subscribe(observer);
Open Android Studio's Logcat Monitor by selecting the Android Monitor tab from the bottom of the Android Studio window (where the cursor is positioned in the screenshot below) and then selecting the Logcat tab.
To put this code to the test, either attach your physical Android device to your development machine or run your project on a compatible AVD. As soon as your app appears onscreen, you should see the data being emitted by your Observable.
Creating an Observable With Less Code
Although our project is emitting data successfully, the code we're using isn't exactly concise—particularly the code we're using to create our
Observable.
Thankfully, RxJava provides a number of convenience methods that allow you to create an
Observable using much less code:
1. Observable.just()
You can use the
.just() operator to convert any object into an
Observable. The result
Observable will then emit the original object and complete.
For example, here we're creating an
Observable that'll emit a single string to all its
Observers:
Observable<String> observable = Observable.just("Hello World!");
2.
Observable.from()
The
.from() operator allows you to convert a collection of objects into an observable stream. You can convert an array into an
Observable using
Observable.fromArray, a
Callable into an
Observable using
Observable.fromCallable, and an
Iterable into an
Observable using
Observable.fromIterable.
3.
Observable.range()
You can use the
.range() operator to emit a range of sequential integers. The first integer you provide is the initial value, and the second is the number of integers you want to emit. For example:
Observable<Integer> observable = Observable.range(0, 5);
4.
Observable.interval()
This operator creates an
Observable that emits an infinite sequence of ascending integers, with each emission separated by a time interval chosen by you. For example:
Observable<Long> observable = Observable.interval(1, TimeUnit.SECONDS)
5.
Observable.empty()
The
empty() operator creates an
Observable that emits no items but terminates normally, which can be useful when you need to quickly create an
Observable for testing purposes.
Observable<String> observable = Observable.empty();
Conclusion
In this article, we covered the fundamental building blocks of RxJava.
At this point, you know how to create and work with
Observers and
Observables, and how to create a subscription so that your
Observables can start emitting data. We also briefly looked at a few operators that allow you to create a range of different
Observables, using much less code.
However, operators aren't just a handy way of cutting down on the amount of code you need to write! Creating an
Observer and an
Observable is simple enough, but operators are where you really begin to see what's possible with RxJava.
So in the next post, we'll explore some of RxJava's most powerful operators, including operators that can finally make multi-threading on Android a pain-free experience. Stay tuned to learn the real power of the RxJava library.
In the meantime, check out some of our other posts and courses about Android development here on Envato Tuts+!
- Android StudioCoding Functional Android Apps in Kotlin: Getting Started
- AndroidCoding an Android App With Flutter and Dart
- Android SDKWhat's New in Firebase? Updates From the Firebase Dev Summit
- AndroidMigrate an Android App to Material Design
Observable dataflow diagrams are from the ReactiveX documentation, and are licensed under Creative Commons Attribution 3.0 License. | http://esolution-inc.com/blog/getting-started-with-rxjava-20-for-android--cms-28345.html | CC-MAIN-2019-18 | refinedweb | 2,932 | 51.18 |
In this tutorial, we will be continuing from where we left off with the “hello world” application. This time adding a graphical user interface (GUI) and a “toast”. The GUI will consist of a button, textbox and a label. The “toast” will be issued onto the screen when the button is pressed.
Some may wonder what a toast is. Well, for non-programmers, a toast is a text notification that for the most part is used only to display an error on the screen (I am a big fan of using toasts instead of an alert on the screen as its less intrusive). For this article we will use a toast to display a message on the screen that will take the text in the textbox and issue a “Hello Greg” onto the bottom of the screen. After this article completed you will be able to successfully make toast commands, design the layout of the hello world program, and pull text from a textbox.
We are going to start off by copy our existing Hello World project so that we can use the original in every way but have two separate projects to show the difference and both can be used as references. To do this we will right click on the root of our HelloWorld project in the right hand pane (Navigation Explorer), navigate to copy (not Copy Qualified Name) and click it. Then find a blank space in the Navigation Explorer, right click again and click paste. You will be asked to supply a new name for this project and whether to use the default location. We will name the new project ImprovedHelloWorld and we will leave the checkbox checked that says “use default location”. Press OK and the new project will be generated from the old one.
The first thing we are going to accomplish is changing the strings.xml file to add another node under app_name. We will do this by copying the node above it and pasting the copied material directly under the last </string> element. Then we will change the name of the string to press and in between we will write Press Me!. Next we will alter the hello node and change the text to say Enter Your Name Here: instead of Hello Android, Hello World!. This being accomplished we now need to design the GUI (Graphical User Interface).
To do this navigate to main.xml and we are going to go over what everything does up to this point. We first off have a node called LinearLayout which essentially creates a space for adding objects such as textboxes, buttons and the like and will format the layout for us. So LinearLayout will organize one thing right after the other in a one column and one row type of deal. Next we have a TextView which in any other label we could call a label. Now to go over what all of the parameters are in the nodes we just mentioned. android:layout_width & android:layout_height are used to determine what will happen to an object when it is used within a layout. There are two options when using this and they are fill_parent or wrap_content. fill_parent will do exactly as it states, it will size the object so that it will fill the screen either vertically or horizontally. wrap_content will format the object to expand or shrink to the size of the content displayed within. Both of these variables can be used in many different objects including but not limited to Layouts, Text Views, Text Boxes, and Buttons. android:text is used in certain objects like TextViews and TextBoxes to display text to the user. As of right now, we are presenting the user with text but calling it from strings.xml instead of entering the text right in the node itself. To reference strings.xml all that is needed is to put @string/press, where press is the name of your variable, inside the quotations.
Now that we are familiar with the terms, we will need to modify this to first house a label, textbox and finally a button. To do this we will simply add a textbox and button since we already took care of the label in the string.xml. To add a Textbox we will start on a new line under ending of the <TextView /> node. Just to be clear I will add code inline and explain why we are adding it afterwards. <EditText android:id=”@+id/helloName” android:layout_width=”fill_parent” android:layout_height=”wrap_content” />. EditText will be our textbox in this instance. Also when giving items an ID it is best to follow these practises of adding @+id/ before your variable name which makes it possible to tie into your .java file later. Next we will add <Button android:id=”@+id/go” android:layout_width=”fill_parent” android:layout_height=”wrap_content” android:text=”@string/press” /> directly underneath the ending of our EditText node. Notice we are referencing the string.xml and calling the node that says Press Me! which will appear on our button now. If you were to run this project now you would be able to see the layout of the program we just made but we are unable to get it to do anything except enter text in the textbox.:
import android.widget.Button; import android.widget.EditText;
After that we will need to include four more imports, the first being for event listen to add to our button, the second will be for the toast that we will call when the event handler runs, the third being the context of the application and the fourth to get the view of the application and handle the layout and interaction. These imports can be added under the previous ones and will look like this:
import android.view.View.OnClickListener; import android.widget.Toast; import android.content.Context; import android.view.View;
After these are added to your imports we are ready to get into coding the event handler for our button and the onCreate functions, which is called when the program is started. To make things easier and to complement the screenshot, I will post the rest of the code and explain what the important lines are doing and why we are using them.
public class HelloMain extends Activity { EditText helloName;
We are creating a reference to our textbox above any function so that it only has to be declared once but instantiated many times if need); }
Above we capture the button from the layout using a variable. With this variable we are going to assign it an onClick Event Handler as shown on the last line above. Below we are creating the Event Handler for it to be hooked in above. After creating this function it will be able to pull the text from the TextBox and display it with static text.
//();
The above code will take Context (the facet to our applications enviroment) and and add it to our Toast along with our dynamic CharSequence text and the length the Toast will appear onscreen, which in this case we want it to be longer. It is key to note how to make a Toast as it is more efficient that popping up textboxes to the user as well as it is less distracting.
} catch (Exception ex) { Context context = getApplicationContext(); CharSequence text = ex.toString() + "ID = " + id; int duration = Toast.LENGTH_LONG; Toast toast = Toast.makeText(context, text, duration); toast.show(); } } }; }
The last thing we are doing for this function is putting all the important stuff mentioned above into a try catch statement which will try our important code and if there is an error it will display a Toast letting us know there was an error and a message about that error. For functions such as these is it crucial to have precautions in place to catch errors and not have a program force close. It is important to put the user first in thinking about UI and any error messages that might occur. If an error somehow sneaks into your program try catch statements will catch the error and make it “cute and fuzzy” for the user.
Top half of code:
Bottom half of code, elapsed by previous view of code:
After we have coded the main content for our .java file, we can now proceed to run the application and view our completed Improved Hello World program. Notice that when you press the button and your textbox has not text in it that the program will still function correctly. This is a good feature to have so that you don’t start seeing Toasts containing error messages. The completed product should look like this when the button is pressed:
This would conclude our Improved Hello World example but the learning is far from over. Next post we will examine Databases and a look into some simple queries as well as building a database from the ground up. As always, if you have any problems with coding this article, feel free to leave a comment and I will assist in any way possible! If you can’t wait for the next post you can read up on databases before the next posting. Until next time, Happy Hacking!
Continue on to Part3: Introduction to Databases
I would love to do these tutorials, but for some reason your images are scaled down and don’t have larger versions. Any chance you could update this one, and the hello world tutorial with links to larger screenshots?
@Abyss Knight. Right click on image and copy link location. Paste in new window or download etc etc.
Nice tutorial. Not my cup of tea but appreciated nonetheless
@Abyss Knight – Pictures of code have been adjusted so you just have to click on them and they will enlarge. Hope this helps and happy hacking!
after all , “this is hack a day” not “tutorial a day”
@somebody:Most of the hacks on here come with a tutorial of some kind. Not to mention Android is open source which is totally about hacking.
Really? I thought it was “jackass comment a day.” Thanks for the articles, now I just need to ditch my VX9100 and get a worthy phone.
@Luke, you win.
I really like these tutorials. Too bad I don’t have an Android phone. :( I hope more tutorials for other topics come soon!
@:D – If you are interested in Android development then you can use the emulator that comes with the AndroidSDK until you convert to Android :)
While this may not be a hack within itself, it is a tutorial helping get non-phone-programming hackers get into the realm of taking advantage of these great new devices. Just by posting this, some reader is going to follow along, get inspired, and put together some new hacks that can interact with Android phones and utilize their features (easy access to bluetooth, wifi, compass, accelerometer) so who’s to say this sort of guide doesn’t belong here? Granted it doesn’t line up with the idea of “a new hack each day” blog but I feel this is definitely a great addition to this resource.
Two recommendations:
1. You really should indent your code properly
2. Rather than creating the OnClickListener a separate class, you should embed it withing the class. For example:
myButton.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
/*Do Stuff*/
}
)};
(I’m sorry if the tabbing is wrong on here…I’m not sure how wp.com sanatises it’s comments).
@Leif Andersen
Code formatting is one of the things when programming I look out for the most. Unfortunately WordPress doesn’t feel the same way as I do. I will include well formatted text documents for the next post :)
@leif, you should really indent your code properly.
These tutorials are gr8t. Keep them coming. As an analog n mci guy, I am interested in building accessories for these. Could you please do a tutorial on rs232 comm with android. So then android can talk to arduino or something.
I dont get the whole “not a hack” thing.
a load of stuff on HaD isnt a hack… a hack being repurposing hardware in a way that wasnt facilitated by the designer.
1)Nixie tubes are for displaying numbers on, nixie sudoku uses tubes to display number…not a hack.
2)Android dev – android designed so that people can write their own apps – not a hack.
3)Capacitor bank – stores a lot of charge – not a hack
4)Servo control board – programming an AVR to do stuff …like it was made to do – not a hack.
The site is probabily 50% hacks – and the rest is just normal design. give the “not a hack” stuff a break
@steaky: this thing has been discussed a lot; i don’t want to be rude, but if you don’t like the site, just don’t visit HaD.
Sure that hack links are the most interesting, but there are lots of people who also are interested in programming, and all that stuff.
And also take in mind that if they don’t publish hack articles maybe because simply there aren’t any.
Do you want hacks? then, do one yourself and send it here. I’m sure we all apreciate a lot. ;)
Are there any good UI editors available yet? The one that comes with the Eclipse plugin is atrocious, and DroidDraw seems to be quite limited as well.
In your code you are referencing @string, but shouldn’t it really be @strings? The file name is strings.xml. Or am I missing that the SDK just knows that you’re referring to the original strings XML file?
Jack,
You misinterpret what I was saying. I like the site – sure I hope there would be more hacking as it is interesting seeing how people modify kit etc, but at the same time I like seeing what people make too.
I cant stand it when people say “not a hack” and that was the ponit I was making – hence the give it a break.
also, I do work on my own projects – currently a USB PC fan controller, car stereo based erm.. stereo (?) etc, as well as a load of programming – plus I write articles here and here
I followed the tutorial and came up with errors on the build. Certain sections above do not accurately indicate which file code is being added to (paragraph 6&7). Nor does this tutorial ever mention which editor to open these files with (Java editor, text editor, xml editor). Here’s my code build in HelloMain.java and there’s errors so it will not build in the app.
package com.jspencersworld.helloworld;
import android.app.Activity;
import android.os.Bundle;
import android.widget.Button;
import android.widget.EditText;
import android.view.View.OnClickListener;
import android.widget.Toast;
public class HelloMain extends Activity {
EditText helloN();
}
catch (Exception ex)
{
Context context = getApplicationContext();
CharSequence text = ex.toString() + “ID = ” + id;
int duration = Toast.LENGTH_LONG;
Toast toast = Toast.makeText(context, text, duration);
toast.show();
}
}
};
}
public class HelloMain extends Activity {
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
}
}
Jesus f’in christ! Cant we all settle down with the stupid “this isn’t a hack” comments. Don’t like something, fuck off then. Read something else. Or, holy shit imagine this, come up with your own hack of some kind. I swear some of you retards must equate reading a hack on hack-a-day with actually doing it yourself. Your not even equivalent to a fucking script kiddie. at least they actually do something with the shit they read on the internet.
Thanks for this, I just got told yesterday I needed to learn to write android apps asap. Yours was the first tutorial I tried and (after a few dumb mistakes on my part) – it all worked – most of all because of your clarity and detail, I know what it all does and why.
Excellent article, I can’t wait for the next. The google docs seemed to gloss over the fundamental of “how to integrate a button” and just wanted to show you every type of layout available. Great for when you’re designing, as reference, but not so much for trying to build simple applications for demonstrative purposes.
Good post Greg.
If you follow this post though make sure to add those 2 lines in your import section otherwise you won’t be able to run the application. (here’s my full list of import)
import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;
thanks again Greg! keep’em going!
@John
I also came up with errors following the tutorial. I looked at the “top half” code screenshot and saw that there were two import statements that seem to have been forgotten about in the tutorial text.
~Line 6: import android.content.Context;
~Line 8: import android.view.View;
I added these two statements in and it compiled with no errors, and ran as intended.
Nice tutorial. Cant wait for 3D game programming tutorial in android :)
And yah, can we get some:
int i;
if comment[i]=”not a hack”
{
remove_comment(i); //how do we tab in our comments
}
Or maybe a filter we can enable. So we dont waste time and energy reading stupid peoples responses.
I like how you shown how to toast (great name by itself as well. Like how people say “google” it instead of generic search it. But I digress).
What I don’t like is how overly complicated this seems for coding compared to regular java. I’m not a programmer by trade, but am fairly competent in knowing how to work with java/javascript/bash, etc. Comparing this to the equivalent java code for a pc, and its just complicated for no reason.
ANyway, what I would like to see is a barebones project for downloading a webpage and then grepping through it, either from memory or from a cache file, in android. Then I could modify it from there.
I love this stuff, it just so happens to be that I am learning it at the same time you are doing tutorials. And it looks like you are about to catch up with me.
One thing that would help is syntax highlighting and all that jazz that makes code easier to look at.
I’ve been read HaD for a long time, so don’t think i’m coming here just for this, I still expect good hacks along with the tutorial.
@phishiphree,
Who are you directing that tirade towards. I clearly explained that I like the site, I appreciate the “non-hacks” and that I do my own projects – plus included evidence of such. Just because I dont go shouting about them and handing everything I do to the tips line doesnt mean I dont do projects – It just means they are targeted towards a different audience.
@admin, is there a way to enable voting on the comments so that they become self-moderating.
@John – not sure if you figured it out by now, but the article is missing a few imports – the images of the code are correct, but the steps are missing android.content.Context and android.view.View. The complete list of imports you need is:
import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;
Aside from that, the rest of the article seems correct and I was able to create the app as described and run it on the Android emulator.
@John – Please give me an email at greg@hackaday.com to let me know the issues and we will work through this and get you up an running!
@Everyone
There has been an error in the tutorial which would have made it so you couldn’t compile it is now fixed but i will show you before and afetr
Before:
import android.view.View.OnClickListener;
import android.widget.Toast;
After:
import android.view.View.OnClickListener;
import android.widget.Toast;
import android.content.Context;
import android.view.View;
@John Spencer:
I agree, it is slightly confusing on just reading the text, but if you look at the screenshots everything is perfectly clear.
As you worked out, the code does need to be edited in HelloMain.java. I just double-clicked it to invoke the default editor.
Your code isn’t working for following reasons I think:
1) you need to add 2 more imports:
import android.content.Context;
import android.view.View;
2) all the changes should be made in the existing public class HelloMain – you’ve ended up with 2 of them.
@Nick – Sorry about not replying to your comment earlier, the noise was getting in the way of answering your question. It would appear that at first glance this wouldn’t make sense because main.xml and strings.xml have no relation. They need a common ground on which to reference each other and I have avoided mentioning this class as it could completely ruin any project your working on if you dont know how to modify it correctly. The file R.java located in the Root of your project under gen/ contains all the information to pull your application together and make it work. You will notice that one of the classes here is layout and one is string. Layout references main.xml and the string class references all of our string entries. Because of R.java these 2 xml files are able to communicate even though the XML file strings.xml doesnt use @string anywhere in it. Hopefully this clears that up and that was actually a great question. Any more feel free to ask and I will hopefully have an answer!
@Greg – Thanks! That makes sense… not to have resources set in stone and there to actually be a link file, it was just a little unclear that there was a reference file that linked out to all of these other files. I suppose if you really wanted, you could rename @string to be something shorter, but it probably just muds up the ability for others to read your code.
I’ve made a Hello World app off the Android Developers walk-through, but never really went into understanding /how/ the whole deal works together.
What Marco said — is there a good graphical editor for doing UIs? I like that the UI is defined in a static XML file, not assembled in procedural code… more like a markup language, which is nice. But some kind of WYSIWYG would be handy, and I didn’t see any references for that yet.
@Nick, @Greg: The “R” class is like duct tape and “the force”; it’s an unseen entity that holds the universe together! :-)
Love the tutorials thus far. One note: Starting off with the process of making a copy of the last “Hello World” project really threw me off — I ended up screwing up a bunch of things and having to start the whole project over from scratch. Might’ve just been my programming inexperience, but still.
Second thing — I had to create a brand-new Android VM to run this properly. Not sure why. Anyone have any ideas?
Greg, Something wrong with my set up? When I create the String resouce to be used as text on the button of the Improved Hello World, things seem to be OK, However, using it is another matter, an error is always generated indicating –
error: Error: No resource found that matches the given name (at ‘text’ withvalue ‘@string/press’).main.xml /HelloWorld/res/layout line 17 Android AAPT Problem
if I hardcode the text in no such error is generated. Any idea?
Ramon
@Saragon:
For your second thing: I’ve had similar issues where sometimes the emulator ignores the current project, and insists on running *the last* project I was working on. Usually, cleaning the current project makes the problem go away:
– Select Project>Clean…
– Choose “Clean projects selected below”
– Select the project you want to clean.
– Check “Start a build immediately”.
– Select “Build only the selected projects”.
– Click OK.
That worked for me, YMMV.
@Everyone Thank you for the helpful tips. I apologize as I’m pretty green to this. I also copy and pasted from the tutorial which meant I needed to go back and adjust my quotations and such. I worked myself down from 8 errors to 2 last night before going to bed.
@Greg I’ll email you later tonight if I get the chance. I also want to tell you how empowering this Tutorial is to a new user. Thank you very much! And thank you for making yourself available to help!
Thanks to the Author and thanks to all that posted the missing imports!
This was fun and I look forward to the next one.
@Greg,
I noticed that if I copy your code verbatim the quotes are seen as something else, not sure what.
Anyway, I found that deleting them and typing them in fixes the problem. Must be some weird copy paste bug? I am using chrome on win7.
Hi Greg,
I’ve copied the program exactly as you’ve put it up, except for two small changes. I named my Activity “HelloWorldMain” and the target is Android 2.2.
Now, I get errors whenever I run the app. The errors occur at lines where R.id is used, and the error thrown by eclipse is “R.id cannot be resolved”. I checked the autogenerated R.java file, and there seems to be no id field. I’ve pasted its contents below.
/* AUTO-GENERATED FILE. DO NOT MODIFY.
*
* This class was automatically generated by the
* aapt tool from the resource data it found. It
* should not be modified by hand.
*/
package com.testworld.helloworld;;
}
}
What am I doing wrong?
I have two lines with errors.
-> Button button = (Button)findViewById(R.id.go);
go cannot be resolved or is not a field
and
-> helloName = (EditText)findViewById(R.id.helloName);
helloName cannot be resolved or is not a field
Any ideas?
Thanks,
John
Found my answer. If you copied the code from this page, go to the main.xml and replace the quotation marks. :) Now to build!
again thanks for the tutorial, I can’t wait to try more. also thanks @john I was having the same issue as you
@Apoorva
You are getting the R.id error because you copied the main.xml content from here directly. As the contents of R.java file is auto generated, it depends on what you type in the xml file and copying usually creates the problem.
To solve this issue, open the main.xml file again. Click wherever you have used “@+id/”. Press Ctrl+space and select the value again from the list it pops up.
OR
Delete the id value and type the same thing again.
Hope this helps.
i am getting the R.id error also, and i didnt copy and paste the text :[. anyone know why?
at this section:
:”
I found confusing, because this section, I think, means to refer to the “HelloMain.java” file, but doesn’t seem to explain this transition from editing the “main.xml” to the “HelloMain.java” file. That is, I first assumed that the above quoted commands for adding imports to the “imports secton at the top” where meant in the “main.xml” file. Kinda confusing…
Otherwise awesome though, thanks!
Nice Tutorial,specially for beginner. I appreciate. But can this be scale down to Dialogbox instead of using Toast?
Thank you for the great tutorials. They are helping a lot in helping me to learn how to use this Android dev environment.
For the folks that keep getting the r.id errors, make sure that you modify the right “HelloMain.java” file. I got the error at first as well, and realized that I added the code to the HelloMain from the first HelloWorld app. A good mistake to learn from though. Good to know that Eclipse keeps files open from multiple projects at the same time. :) | http://hackaday.com/2010/07/19/android-development-101-part-2improved-hello-world/?like=1&source=post_flair&_wpnonce=18b5244a97 | CC-MAIN-2014-35 | refinedweb | 4,735 | 74.19 |
Hello and Welcome to another edition of TWITTER RESPONDS! From a post announcement on twitter
Our tools can kill maintainability. My thoughts on how Frameworks can be a negative to our products.
Tool Impact On Developer Discipline - Frameworks
there was a question posed on twitter about how to get data from the object.
Do you have any examples on how you implement web requests in an OO world now? I follow your content for a while now and try to write objects like you advocate, but the moment i need to make an outgoing webrequest, or have to return an object back to the controller, i seem to need to expose the internal data again somehow. be it by providing at least some getters or a toMap method so that jackson/gson/spring can do something.
I do not have an example... but now I do! I love getting twitter questions.
This is a very frequent question I get about the practices I use. "If we never return our data, how do we return our data?"
A lot like the post about frameworks being destructive to our code - tools like jackson/gson/spring/Json.Net are destructive. These kinda of tools are exceptionally destructive to our object oriented code because they do many things. They do things FOR us; which is great when it is done in an object oriented fashion. Few do, so few work well.
All of the JSON convertors tend to require some fashion of properties and reflection to accomplish the task. This bleeds into our code, forcing properties. Forcing our code to know about this tool.
It's destructive.
Before we worry about reinventing JSON parsing; I use JSON.NET for it, I'm not writing my own. I'll reinvent a few things; that's not one of them. I don't use all of the "make it quick" features because they are aligned with a data centric view. If we want to be doing Object Oriented Programming, we MUST... MUST! interact with objects.
OK, let's get to the focus of the post - How do we return the data?
There's kinda two questions here, serialization and a model. They have similar answers. Let's look at the first one!
An outgoing webrequest
I'm going to assume this is an API of some fashion returning a json string.
What's the behavior? What do we want to have happen?
I VERY heavily ascribe to looking at just the behavior of the system for it doing what I want. Not much into the implementation details, within the constraints of the MicroObject practices. These practices make the code maintainable; so I keep those. It does include "No Getters" so we can't use the default annotations to use the auto serializers.
It depends on where we're strarting from. If we're in a legacy system; I'd deal wtith this level last. Use discipline to remove all use, and continue to let the tool use the pieces.
public class WithPublicProperties { public string Value1 { get; set; } public string Value2 { get; set; } } [TestMethod] public void PublicPropertiesSerialized() { WithPublicProperties withPublicProperties = new WithPublicProperties {Value1 = "MyName", Value2 = "SomethingElse"}; string actual = JsonConvert.SerializeObject(withPublicProperties, Formatting.None); Assert.AreEqual("{\"Value1\":\"MyName\",\"Value2\":\"SomethingElse\"}", actual); }
Is this the best state? No. If the properties aren't used anywhere except JSON parsing... it's better than using the raw data allover.
Because it's good to demonstrate our failures... I tried the same class/test as above with private properties; Newtonsoft didn't serialize them... yay? I thought it might.
Preferred way
My preferred way is to answer, "What's the behavior?". The answer is - Serialization. We want the object to be serialized for transmission across the wire.
How do we interact with a class when we want a behavior?
A method.
This was mentioned in the twitter comment. Instead of "ToMap", I'd use a method named for what we want -
Serialize. What's the full signature? Whatever works best. The one I use most often is
void Serialize(ISerialization serialization). This enables the caller to determine the actual form of serialization. JSON, XML, BSON... whatever - It just needs to implement your
ISerialization and it's all good.
In my C# world, this is usually an adapter/facade to the Newtonsoft JObject.
public sealed class NewtonsoftJsonSerialization : ISerialization { private readonly JObject _jObject; public NewtonsoftJsonSerialization() : this(new JObject()) { } private NewtonsoftJsonSerialization(JObject jObject) => _jObject = jObject; public void Add(string key, string value) => _jObject.Add(key, value); public void Add(string key, int value) => _jObject.Add(key, value); }
The class that needs to serialize does something like this:
public interface IMySerializable { void Serializable(ISerialization serialization); } public sealed class WithSerialization : IMySerializable { private readonly string _val1; private readonly string _val2; public WithSerialization(string val1, string val2) { _val1 = val1; _val2 = val2; } public void Serializable(ISerialization serialization) { serialization.Add("Value1", _val1); serialization.Add("Value2", _val2); } }
I don't have "ONE WAY" to do this. I have a number of examples from previous projects; but that's what THAT code needed. This code might need something different. I'll use them as guides, but I'll re-implement these classes in every project that needs them. It's so easy to re-create the adapters to exactly how the code needs it, any generalization isn't worth it. You'd just end up with the full API of JObject. (I do cheat and copy/paste w/Edits. I'm lazy)
Serialization becomes recursive. If it's not raw data, we call
Serialization on the object we hold. Keep doing that until every object has serialized themself; then you have a full serialization.
This gives the control back to the object. If we want any data manipulation prior to serialization - How do we do that with any of these tools AND NOT in for the code? When the object controls it's own serialization; done and done - It simply doesn't.
I can hear the shouting about annotations and attributes - Yes. Then the class is tightly coupled to whatever JSON tool you use. If you also serialize to XML; then the XML tool; and BSON... It never ends. Your class becomes so complicated because your endpoints need something? Sounds like a vast violation of good architecture. :)
Let the object serialize itself.
There is some additional complexity this way. For databags; it doesn't make much sense. For ACTUAL objects; they normally have no access to the data to be serialized; or the resulting JSON is nested and more complex than it should be. Your JSON definition becomes coupled to your code's class coupling structure... how messed up is that? And naming; unless you start to tightly couple to your serialization tool...
Letting the object serialize itself decouples from the serialization tool. The tool is destructive when it's invasive in the code.
OK - Next Question
To the Controller
First - I think MVC is horrible. Any of these Model/Controller patterns are ANTI-OOP. OOP doesn't use models. Classes ending in '-er' are an anti-pattern. These UI patterns FORCE us to write non-object oriented code. Which works great when you're not writing good OO.
Other ways to interact with the UI were some of the early thoughts I wrote for the blog The Hotel Pattern. This came from a MASSIVE mis-understanding of the MVVM pattern. But I liked it. I don't use it. It's better than MVC (IMO), but ... not stricly what I ended up with in the Premera Windows Store app. Similar though. I write about what I think the UI layer should do here - Hint: It should do nothing. :) That's what my UI did for the Premera App.
I know that most of the software world is hooked on these anti-patterns for the UI (that's right, I said anti-pattern.) They work GREAT for bad OO code that involves databags. Especially databinding... which you should never do.
BUT - That being said - We don't always have a choice. We need to play nice with the UI side and we want our code to be good object oriented code... what do we do?
We serialize. wait... we're in terrible controller land... we model.
It's back to behavior. What do we want to accomplish? Having a databag representation of our object. Let the object control the behavior we want. The behavior we want is to generate a databag representation of our object. Let the object do it.
Another key is that the databag is a REPRESENTATION of the object; not the object itself. When we blend the data representation with the abstract object, the system becomes complicated. It's harder to understand, use, and maintain. Decoupling the data representation from the behavior representation help kepe the complexity of the system down.
public sealed class ADatabag { public string UiNamingValue { get; set; } public string DecoupledNames { get; set; } } public sealed class AGoodObject : IToModel<ADatabag> { private readonly string _val1; private readonly string _val2; public ADatabag ToModel() => new ADatabag{UiNamingValue = _val1, DecoupledNames = _val2}; } public interface IToModel<out T> { T ToModel(); }
This does add abstraction to the system. Abstraction is how we protect components from change elsewhere. If it makes sense to change the name of the field the UI interacts with; we modify
ADataBag. Our tools will update
AGoodObject. No where else in the system has to know. Only 2 files change. Our abstraction keeps the our scope of change exceptionally narrow. I don't think it could be any narrower.
The abstraction, the decoupling, from what the UI interacts with frees the UI side to make changes as appropriate without the need to worry about the impact to the reset of the system.
Summary
I hope that helps demonstrate some of the ways I approach providing the data to exits the 'system'. | https://quinngil.com/2019/08/29/microobjects-data-out/ | CC-MAIN-2020-29 | refinedweb | 1,637 | 59.7 |
nfc_hce_is_aid_registered()
Check if an AID is registered.
Synopsis:
#include <nfc/nfc.h>
NFC_API nfc_result_t nfc_hce_is_aid_registered(const uint8_t *aid, size_t aid_len, bool *is_registered)
Since:
BlackBerry 10.3.0
Arguments:
- aid
Pointer to the buffer containing the AID.
- aid_len
Length of the AID.
- is_registered
A pointer to a boolean value that is set when the function returns NFC_RESULT_SUCCESS. This value is set to true if the calling application has already registered the AID, false otherwise.
Library:libnfc (For the qcc command, use the -l nfc option to link against this library)
Description:
An application uses this function to check if it has already registered the specified AID.
Feature set 1 doesn't support this function. When called in feature set 1, the function returns NFC_RESULT_UNSUPPORTED_API.
Returns:
- NFC_RESULT_INVALID_PARAMETER: A parameter is invalid.
- NFC_RESULT_SERVICE_CONNECTION_ERROR: The application is not connected to the NFC system.
- NFC_RESULT_UNSUPPORTED_API: This function is not supported in this feature set.
Last modified: 2015-04-16
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.nfc.lib_ref/topic/nfc_hce_is_aid_registered.html | CC-MAIN-2019-35 | refinedweb | 170 | 52.87 |
At Nodes we use Deep Learning to classify thousands of user-generated feedback messages across our digital solutions. We do this to create actionable insights for our clients, giving them a way to decipher large amounts of data that otherwise would be very expensive to manually label.
You should read this if you are interested in
- Machine Learning in production
- Natural Language Processing
- TensorFlow using Keras
- Hand-drawn illustrations of questionable aesthetic quality
What’s this about?
At Nodes we collect large amounts of user generated data through our feedback module. This is an optional module that some of our clients choose to activate in the applications we make for them. The module can be set to prompt users to provide feedback at specific times, like after a period of sustained use, at a certain date, or when some use-pattern emerges.
Since 2017 this module has been active in several of our applications, and since then 42000 feedback messages have made their way through to our system, and this number keeps increasing every day.
The end-goal of this solution is to offer our clients a detailed overview of how their applications are doing, as phrased by the users. Ultimately, this allows for a data-driven approach to development, where product owners can prioritise the bugs that are most frequency reported, and as solutions are released they can monitor the development of certain kinds of user-reported issues.
While this solution requires several parts, this blogpost will focus on machine learning. Because the data is text messages, we will be looking at how to process text data with Natural Language Processing. The essence of NLP is to make words, sentences or entire documents into vectors so we can apply standard machine learning methods to them.
Why are we doing this?
Our primary focus with this project is to empower our clients with valuable and actionable insights based on the messages users provide. To achieve this we identified two important questions our solution needed to address:
- What is the core content of this message?
- What kind of specific functionality - if any - is this message about?
In our analysis of the data, which I will expand on later, we identified four different kinds of messages: bugs, usability, update and feedback.
Bugs are usually the messages that underline that some functionality is not working as intended, or that an interaction with the application was unsuccessful.
Usability messages are often directed at how users find interaction unintuitive, or can’t find certain functionality where they expected it.
Update messages are characterised by users pointing out that a functionality used to work and that a recent update to the solution broke this functionality, or made is less user-friendly.
Feedback, are the rare messages where users take their time and suggest new functionality or changes to the solution.
Take these example sentences:
As you can tell, both sentences have aspects of bugs and usability issues. Because of the nature of these types of multifaceted messages, it makes a lot of sense to allow for multiple labels.
For example, a bug can come around as a consequence of an update, so it seems appropriate to have both the bug and update label. In machine learning lingo, when something we want to classify has several labels, we call it a multi-label problem.
As you can imagine, reading, understanding and labelling thousands of messages is a daunting manual task. So, instead of spending precious resources on manual labour, why not let a machine do the work?
The solution (Yes, its deep learning)
Since this is a ML post, I wont go through the details of production, but quickly summarise the core of the infrastructure.
In short, messages come in from all our different solutions and are stored in our backend (NStack). For each message, NStack sends the free-form message to the ML API, that lives in a AWS Lambda function, where the four steps below are executed.
Detects the input language.
Translates the message to english.
Returns the translated message, the predicted label and any keywords that pertain to functionality.
This is stored in NStack and used as data in a dashboard.
The API and ML model lives in the cloud as a serverless AWS Lambda function. How to deploy models like this on AWS Lambda will be the focus of a later blog-post, so stay tuned.
Using semi-supervised clustering to generate labels
Our first problem is that the data was not annotated. This means that we dont know what categories the messages belong to. This information is essential, because that’s the data we want to use to train our model.
To solve this problem, we used a simple approach of generating labels based clustering and then confirming edge cases manually. Hence the name semi-supervised because the labels are generated part supervised (manually) and part unsupervised (clustering).
Before we can cluster anything, we need to turn our messages into something our clustering algorithms can understand. We often call this step preprocessing.
Preprocessing
With NLP problems, preprocessing can be quite different depending on what you’re trying to achieve. It’s beyond the scope of this post to explain all the nuances, but I encourage anyone interesting in starting with NLP to check out this excellent guide to spaCy a NLP framework for Python.
Let’s get get into it!
To do simple clustering, we need to complete three key steps:
- Clean each message
- Remove stop words in each sentence
- Tokenise the sentences
Each of those steps can be illustrated with simple examples.
Consider this made-up sentence:
“I cant find the bøtton that closes my 5,, tabs when they are OPEN!!”
Cleaning the text is basically about standardisation. Removing superfluous characters such as
!/&#, numbers, extra spaces, making everything lowercase, etc. So the sentence becomes:
“i cant find the btton that closes my tabs when they are open”
Stop words are words like “is”, “an”, “the” etc. They often bring little information to our model, so we remove them. After removing stop words, the sentence would look like:
“i cant find btton closes my tabs open”
Tokenization is the task of chopping up a sentence into pieces called tokens. For our sentence, there are eight unique tokens because there are eight different words after removing the stop words. If the same word appears twice, it will be counted in the hash table with frequency two.
Before we can use clustering, we apply a method called TFIDF (term frequency–inverse document frequency). This method is a product of how frequent a word is in a document, multiplied with how unique (inverse frequency) it is in the entire collection of messages.
Using the TFIDF distribution, we apply a simple K-Means clustering algorithm to identify the clusters. As illustrated below, we’re trying to get our algorithm to draw a sensible squiggly line that separates messages in a manner that is semantically meaningful.
After some experimentation with defining the clusters and manual inspection of sentences, we decide our four main clusters are bug, usability, update and feedback.
As an example of what we found in the clusters, consider the sentence
“I dont like the new drop down menu.”
This sentence is semantically different from
“The new drop down menu does not work”
because it allures to usability or design issue (the user does not like the design) versus a bug (the menu does not work).
The rest of the effort here is to manually go through most of the sentences where clusters overlap and manually decide on their labels.
We manually went through approximately 1000 sentences and labelled them manually. This means that we labelled less than 2 percent of our data manually and 98% automatically.
The Deep Neural Network
After the step above, we have the labels we need to train our machine learning model.
Now you might ask: Why not just use the unsupervised method to classify future messages as well?
Well, we can’t be bothered to run our semi-supervised approach every single time a new message comes into our system. The reason we cant be bothered to do this is because K-Means clustering is a greedy algorithm with a time complexity of , which is computer science language for slow.
Also, we manually tagged a lot of messages, so we would like our machine learning model to: a) Learn the patterns that our KMeans clustering found but still be computationally cheap to run inference on. b) Be able to learn from the messages we tagged manually.
To achieve this we use a Long Short-Term Memory (LSTM) model. This is a specific type of Recurrent Neural Network (RNN) that - unlike feed forward neural networks such as the Perceptron - allows for feedback connections. Feedback connections in sequence allow for memory, something a feed forward network does not have.
Deep learning models that have memory are very useful when the order of the input matters, which is very much the case for sentences (unless you are Yoda).
Let’s do as Master Yoda says and remember our imports.
import tensorflow as tf import numpy as np from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Activation, Embedding, SpatialDropout1D, LSTM from tensorflow.keras.optimizers import SGD from tensorflow.keras.callbacks import EarlyStopping import tensorflow_docs as tfdocs import tensorflow_docs.plots import tensorflow_docs.modeling from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt
Steps before training
Building a deep learning model with LSTM layers is relatively simple with the Keras API for TensorFlow 2.0.
From their own site:
tf.kerasis TensorFlow’s high-level API for building and training deep learning models. It’s used for fast prototyping, state-of-the-art research, and production, …
To train the model, we apply the same preprocessing steps as we did when we did the clustering, with one additional step.
Before training the model, all sentences need to be equal length. Thos is done by choosing a maximum length that a sentence can be, and then either removing tokens or padding them with zeros.
The following code shows the four steps: Cleaning, removing stop words, tokenisation and padding.
def get_train_test_sequences(messages, labels): tokenized_messages = [] tokenized_labels = [] for m, l in zip(messages, labels): tm, ll = remove_stopwords(m, l) tokenized_messages.append(tm) tokenized_labels.append(ll) X_train, X_test, y_train, y_test = train_test_split(tokenized_messages, tokenized_labels, test_size=0.2, random_state=2020) tokenizer = Tokenizer(num_words = VOCAB_SIZE, oov_token=OOV_TOK) tokenizer.fit_on_texts(X_train) word_index = tokenizer.word_index train_sequences = tokenizer.texts_to_sequences(X_train) train_padded = pad_sequences(train_sequences, maxlen=MAX_LENGTH, padding=PADDING_TYPE, truncating=TRUNC_TYPE) print(len(train_sequences)) print(train_padded.shape) test_sequences = tokenizer.texts_to_sequences(X_test) test_padded = pad_sequences(test_sequences, maxlen=MAX_LENGTH, padding=PADDING_TYPE, truncating=TRUNC_TYPE) print(len(test_sequences)) print(test_padded.shape) return train_padded, test_padded, np.array(y_train), np.array(y_test)
This function takes in clean messages, removes stop words, splits the data into test and training sets, tokenises, adds padding and returns them.
Looking at a randomly chosen sentence after these steps, we see that it is arranged as sequence of tokens (just numbers) followed by zeros until we get to the desired padding length of 20. No matter how long our sentences are, they will be represented by a length of 20 tokens.
array([ 5, 1039, 1657, 82, 227, 562, 229, 855, 65, 227, 18, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)
Arrays like the above one is the training input for our LSTM model. Similarly, the training labels are simply lists that correspond to one, or several, of the labels, like this:
array([1, 0, 0, 0])
Where
array([1, 0, 0, 0]) is binary encoding for
bug. Remember, that a message can have several tags, so this could also look like this:
array([1, 1, 0, 0])
Which would mean the sentence was both a bug and a usability issue.
Now we are ready for training!
Building and training the LSTM
As mentioned earlier, Keras is a high-level API for TensorFlow that allows putting a lot of functionality into a few lines of code. The real model that we use in production is slightly more complicated, but I have removed some complexity in order to make it easier to explain.
Thus, behold a minimal viable example of the model training code:
def train_model(): model = Sequential() model.add(Embedding(VOCAB_SIZE, EMBEDDING_DIM, input_length=train_padded.shape[1])) model.add(SpatialDropout1D(0.2)) model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(y_train.shape[1], activation='sigmoid')) sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)()]) return model, history
So we are training a relatively advanced model in something like ten lines of code. Impressive!
However, loads of things are happening behind the scenes here. Lets look at a few of them in detail.
1. Model architecture
Keras allows us to explore our small model by simply calling
model.summary().
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= embedding (Embedding) (None, 20, 64) 3200000 _________________________________________________________________ spatial_dropout1d (SpatialDr (None, 20, 64) 0 _________________________________________________________________ lstm (LSTM) (None, 100) 66000 _________________________________________________________________ dense (Dense) (None, 4) 404 ================================================================= Total params: 3,266,404 Trainable params: 3,266,404 Non-trainable params: 0
First, we define a sequential model by calling
model = Sequential(). Then we add layers.
The first layer is a
Embedding layer, and is interesting in its own right. This layer takes a sentence and learns a representation (basically a function) of that sentence. This has a lot of advantages over other methods, can you can read more about here.
The second layer is a
SpatialDropout1D layer. Coarsely put, this layer remove features from the data, which has been shown to help greatly in preventing overfitting.
Our third layer is the
LSTM layer. I wont go into detail with what’s the layer as its quite complex, but look here for a more detailed blogpost. The
100 parameter is the number of units, which dimensions the output space for our next layer.
The fourth and last layer is the output of the model. This is where we get the classification, e.g.
[1, 1, 0, 0] for a sentence which has the bug and usability tags.
One important thing to notice is the choice of activation function. This function is what sits between the layers and “interprets” the output between the layers.
When doing multi-label classification like we are doing, its important not to have an activation function that decreases the probability of one class as a function of the probability of the other classes. If had two outputs that were mutually exclusive, like predicting if a sentence was either a bug or a usability issue, it would be fine to use something like
softmax that does exactly this.
2. Parameter optimisation
Training the model happens with gradient descent. If you dont know what that is, you should do some googling as this method is at the core of 95% of methods for optimising parameters in machine learning.
We choose stochastic gradient descent with a learning rate of
0.01.
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
By defining the model and our choice of optimiser, we are ready to compile our model.()])
There are two important things to notice here. First, our choice of loss function. Second the
EarlyStopping call.
The loss function is the function that we are trying to minimise by choosing certain parameters for our neural network - this is the essence of learning in machine learning.
The choice of loss function is pivotal, but also a quite mathematical subject. You can read a great visual explanation of binary cross entropy here.
Early stopping is an important trick we use to prevent overfitting. It simply stops model training when the loss has not improved in a number of epochs. This decreases the probability of the model running for too many epochs, which in some cases leads to overfitting the training data.
Looking at test run of the model, we see that early stopping is invoked after epoch 38, because the model loss hasn’t improved for the last three rounds.
3. Training
Training is an iterative process where our algorithm is constantly choosing new parameters with the constant aim of minimising the loss function. By doing this, the model is learning an effective representation of the training data, which is the goal of all this effort. After 38 epochs of 750 iterations, our model grind to a halt because of our early stop criteria explained above.
Epoch 36/750 19752/19752 [==============================] - 27s 1ms/sample - loss: 0.1240 - val_loss: 0.1286 Epoch 37/750 19752/19752 [==============================] - 28s 1ms/sample - loss: 0.1213 - val_loss: 0.1250 Epoch 38/750 19752/19752 [==============================] - 28s 1ms/sample - loss: 0.1191 - val_loss: 0.1296
Lets looks at the loss that our loss function outputs over the 38 epochs we are training the model.
The training and test loss decrease steadily, and slowly stops to improve a lot after the 35th epoch. Because it does not see further improvement, our early stop rule halts the training.
4. Evaluating our model
Now we have a trained model we can use to generate predictions with on our test set, to see how well the model does on data it has never seen before (the test set).
preds = model.predict(test_padded) preds[preds>=0.5] = 1 preds[preds<0.5] = 0 _ = abs(preds - y_test) sum(_) array([364., 530., 114., 93.])
These numbers show that out of 6173 messages in the test set, our model labels 363 bug, 530 usability, 114 update and 93 feedback messages wrong. This translates to an accuracy of 95%, 93%, 98% and 99% respectively for the four classes. Not too bad for a relatively simple setup!
Is accuracy the right or the only metric we should look at when training a model? Definitely not - but evaluating a model like this takes some effort, and that’s for another day.
Summary and next steps
In the above, I sketched the overarching approach we at Nodes take to labelling all the feedback users provide when they interact with our digital solutions. To do this I demonstrated how to use natural language processing to process text so it can be fed to a clustering algorithm that automatically identifies clusters in the text. After resolving some edge cases manually, I showed how to feed this data to a deep learning model that learns the structure of each sentence and is able to classify new messages as either being either a bug, a usability issue, an issue with an update or constructive feedback.
The aim of all of this is to provide a solution that generates actionable insights into what users are saying about the digital solutions we build for our clients. Our next step is to build a dashboard that summarises this information and makes it available for product owners across our digital solutions. | https://engineering.monstar-lab.com/2020/05/25/Using-deep-learning-to-label-feedback-messages | CC-MAIN-2020-45 | refinedweb | 3,160 | 54.32 |
Thanks in advance. =)
import java.util.*; import java.io.*; import javax.swing.JOptionPane; public class WorkingFiles { public static void main(String args[]) throws FileNotFoundException { // Local variable declarations. String file_name, search_text; String data_text = "the quick brown fox jumps over the lazy dog."; // Input instructions to the console. System.out.print("Enter filename: "); // Ask for the file name. Scanner s = new Scanner(System.in); file_name = s.next(); // Create the text file with filename from the user. PrintWriter oFile = new PrintWriter(file_name + ".txt"); // Input for search keyword. System.out.print("Search for: "); search_text = s.next(); if(data_text.indexOf(search_text) != -1) JOptionPane.showMessageDialog(null, "\"" + search_text + "\" found!", "Keyword Found", JOptionPane.INFORMATION_MESSAGE); else JOptionPane.showMessageDialog(null, "\"" + search_text + "\" not found!", "Keyword Not Found", JOptionPane.ERROR_MESSAGE); // Close the file. oFile.close(); System.exit(0); } }
EDIT: Sorry about the initialization of the search_text. I just forgot to erase that. My teacher told me that even though the progam will stop, chances are there will be times that it won't. [That's what he meant.]
EDIT: Ok, so the code works. The dialog was just behind my IDE. LOL!
This post has been edited by Ricendithas: 13 July 2009 - 04:05 PM | http://www.dreamincode.net/forums/topic/114524-the-messagedialogbox-wont-appear/ | CC-MAIN-2017-47 | refinedweb | 193 | 56.11 |
An easy to use library for fulscreen modal image presentation with zooming capability. Opened image could be dismissed by tap anywhere or swipe image too far from center. You could change background opacity and color, duration of showing, use or not bars stubs (see in examples).
Planned:
- Implement image sharing
- Horizontal orientation support
- Carthage support
Example
To run the example project, clone the repo, and run
pod install from the Example directory first.
Installation
ModalImage is available through CocoaPods. To install
it, simply add the following line to your Podfile:
pod 'ModalImage'
Easy To Use
First of all add next import to your UIViewController
import ModalImage
and next call
showFullScreenImage(from: imageView)
And voila
Author
License
ModalImage is available under the MIT license. See the LICENSE file for more info.
Latest podspec
{ "name": "ModalImage", "version": "0.1.1", "summary": "ModalImage is for modal presenting images in an app.", "homepage": "", "license": { "type": "MIT", "file": "LICENSE" }, "authors": { "Sofia Rozhina": "[email protected]" }, "source": { "git": "", "tag": "0.1.1" }, "swift_version": "4.2", "platforms": { "ios": "12.1" }, "source_files": "ModalImage/**/*.{swift}", "resources": "ModalImage/**/*.{png,jpeg,jpg,storyboard,xib,xcassets}", "frameworks": "UIKit" }
Tue, 04 Dec 2018 11:08:12 +0000 | https://tryexcept.com/articles/cocoapod/modalimage | CC-MAIN-2019-47 | refinedweb | 193 | 51.04 |
I have already written about tentakel tool and shell script hack to run a single command on multiple Linux / UNIX / BSD server. This is useful to save time and run UNIX commands on multiple machines. Linux.com has published an article about a new and better tool called pssh:
Recently I come across a nice little nifty tool called pssh to run a single command on multiple Linux / UNIX / BSD servers. You can easily increase your productivy with this SSH tool. more about pssh here.
🐧 8 comments so far... add one ↓
Take a look at Dancer’s shell, too.
Hi guys,
I’ve tried running these versions of pssh on RHEL 5.2:
pssh-1.2.2-1.el4.rf.noarch.rpm
pssh-1.2.2-1.fc3.rf.noarch.rpm
pssh-1.2.2-1.rh9.rf.noarch.rpm
pssh-1.4.0-1.fc10.noarch.rpm
pssh-1.4.3-1.noarch.rpm
The older versions install but say
Traceback (most recent call last):
File "/usr/bin/pssh", line 21, in ?
from basethread import BaseThread
ImportError: No module named basethread
The latest version doesn’t install
error: Failed dependencies:
python(abi) = 2.5 is needed by pssh-1.4.3-1.noarch
Any ideas?
Thanks
I am also getting the same error on RHEL4 any success?
error: Failed dependencies:
python(abi) = 2.5 is needed by pssh-1.4.3-1.noarch
installing version pssh-1.2.2-1 is OK
Hey !!
I want to use pssh –options (-O) to add my key-file path and StrictHostKeyChecking=no
How ca n i do it ??
does anyone faced tty alocation within pssh?
being more specific pseudo-tty allocation …something like
ssh -t commandThatRequeriesTty
instead of this “Parallel Distributed shell” (pdsh) is the good option.
PSSH is a bit typical to configure. | https://www.cyberciti.biz/tips/parallel-ssh-execution.html | CC-MAIN-2021-25 | refinedweb | 302 | 76.93 |
I am trying to use the moment library. It's installed as a node module so that I can access it from my web module. Here is the client code:
export function button2_click(event) {
console.log("About to test moment");
testMoment().then( str => console.log("Result: " + str));
}
Here is the web module code:
import {moment} from 'moment';
export function testMoment() {
console.log("Inside testMoment()");
return new Promise( function(resolve, reject) {
try {
// resolve("This is a test");
resolve(moment().format("dddd MMM D YY"));
} catch(err) {
reject(err.toString());
}
});
}
This is what I get in the console:
About to test moment
Inside testMoment()
That's it. I would expect to see a third line containing the formatted date.
When I move the comment down so that "This is a test" resolves and the moment() is not used, I get this:
About to test moment
Result: This is a test
Inside testMoment()
So, it works as long as I'm not using the moment library. The call to moment().format() is killing the thread on the back end but I have no visibility into what's happening. Is there anything I can do to see what happened on the server? I'm using the sandbox database, not live. Of course an actual resolution to this problem would be great but I'd settle for some visibility into the error that's presumably showing up in the server log so I can figure it out for myself.
I use the following web module code for moment:
export function parseDates (datetime,format) { var moment = require('moment'); return moment(datetime).format(format); } The difference I see is that I use require rather than import... Yes, require gives a syntax error, but it still works.
Works for me - thanks! Problem in the transpiler? Like who would have thunk? | https://www.wix.com/corvid/forum/community-discussion/moment-library-fails-on-back-end-no-diagnostics | CC-MAIN-2019-47 | refinedweb | 302 | 67.35 |
The task is to either.
print countSubstring("the three truths","th") 3 // do not count substrings that overlap with previously-counted substrings: print countSubstring("ababababab","abab") 2
The matching should yield the highest number of non-overlapping matches. In general, this essentially means matching from left-to-right or right-to-left (see proof on talk page).
#include <iostream> #include <string> // returns count of non-overlapping occurrences of 'sub' in 'str' int countSubstring(const std::string& str, const std::string& sub) { if (sub.length() == 0) return 0; int count = 0; for (size_t offset = str.find(sub); offset != std::string::npos; offset = str.find(sub, offset + sub.length())) { ++count; } return count; } int main() { std::cout << countSubstring("the three truths", "th") << '\n'; std::cout << countSubstring("ababababab", "abab") << '\n'; std::cout << countSubstring("abaabba*bbaba*bbab", "a*b") << '\n'; return 0; }
- Output:
3 2 2
Content is available under GNU Free Documentation License 1.2. | https://tfetimes.com/c-count-occurrences-of-a-substring/ | CC-MAIN-2019-30 | refinedweb | 151 | 56.35 |
Modules are one of the four prominent features of C++20. They overcome the restrictions of header files and promise a lot: faster build-times, fewer violations of the One-Definition-Rule, less usage of the preprocessor. Today, I want to create a simple math module.
Modules may be older than you think. My short historic detour should give only an idea, how long it takes to get something such valuable into the C++ standard.
In 2004, Daveed Vandevoorde wrote the proposal N1736.pdf, which described the first time idea of modules. It took until 2012 to get a dedicated Study Group (SG2, Modules) for modules. In 2017, Clang 5.0 and MSVC 19.1 provided the first implementation. One year later, the Modules TS (technical specification) was finalized. Around the same time, Google proposed the so-called ATOM (Another Take On Modules) proposal (P0947) for modules. In 2019, the Modules TS and the ATOM proposal was merged into the C++20 committee draft (N4842), which is the syntax I present in my posts to modules.
The C++ standardization process is democratic. Section Standardization gives you more information about the standard and the standardization process. The image to the right shows the various study groups.
Explaining modules from the user's perspective is quite easy, but this will not hold for the implementer's perspective. My plan for this post is to start with a simple modules math and add more features to it as we go.
First, here is my first module:
// math.ixx
export module math;
export int add(int fir, int sec){
return fir + sec;
}
The expression export module math is the module declaration. By putting export before the function adds, add is exported and can, therefore, be used by a consumer of my module.
// client.cpp
import math;
int main() {
add(2000, 20);
}
import math imports the module math and makes the exported names in the module visible to the client.cpp. Let me say a few words about module declaration files before I build the module.
Did you noticed the strange name of the module: math.ixx.
To compile the module, you have to use a very current Clang, GCC, or cl.exe compiler. I go in this post with cl.exe on Windows. The Microsoft blog provides two excellent introductions to modules: Overview of modules in C++ and C++ Modules conformance improvements with MSVC in Visual Studio 2019 16.5. In contrast, the lack of introductions to the Clang and GCC compiler makes it quite difficult to use modules.
Here are more details of my used Microsoft compiler:
These are the steps to compile and use the module with the Microsoft compiler. I only show the minimal command line. With an older Microsoft compiler, you have to use at least /std:cpplatest.
cl.exe /experimental:module /c math.ixx // 1
cl.exe /experimental:module client.cpp math.obj // 2
For obvious reasons, I will not show you the output of the program execution. Let me change this.
The global module fragment is meant to compose module interfaces. It's a place to use preprocessor directives such as #include so that the module interface can compile. The code in the global module fragment is not exported by the module interface.
The second version of the module math supports the two functions add and getProduct.
// math1.ixx
module; // global module fragment (1)
#include <numeric>
#include <vector>
export module math; // module declaration (2)
export int add(int fir, int sec){
return fir + sec;
}
export int getProduct(const std::vector<int>& vec) {
return std::accumulate(vec.begin(), vec.end(), 1, std::multiplies<int>());
}
I included the necessary headers between the global module fragment (line 1) and the module declaration (line 2).
// client1.cpp
#include <iostream>
#include <vector>;
}
The client imports the module math and uses its functionality:
Maybe, you don't like to use a Standard Library Header anymore. Microsoft supports modules for all STL headers. Here is what I have found in the post "Using C++ Modules in Visual Studio 2017" from the Microsoft C++ team blog.
std.regex
<regex>
std.filesystem
<experimental/filesystem>
std.memory
<memory>
std.threading
<atomic>
<condition_variable>
<future>
<mutex>
<shared_mutex>
<thread>
std.core
To use the Microsoft Standard Library modules, you have to specify the exception handling model (/EHsc) and the multithreading library (/MD). Additionally, you have to use the flag /std:c++latest.
Here are the modified versions of the interface file math2.ixx and the source file client2.cpp.
// math2.ixx
module;
import std.core; // (1)
export module math;
export int add(int fir, int sec){
return fir + sec;
}
export int getProduct(const std::vector<int>& vec) {
return std::accumulate(vec.begin(), vec.end(), 1, std::multiplies<int>());
}
// client2.cpp
import std.core; // (1);
}
Both files use in line (1) the module std.core.
My first modules math.ixx, math1.ixx, and math2.ixx defined its functionality in one file. In the next post, I separate the module definition into a so-called module interface unit and a so-called module implementation unit. 135
Yesterday 5796
Week 34566
Month 211887
All 9685519
Currently are 126 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
RSS feed for comments to this post | https://modernescpp.com/index.php/cpp20-a-first-module | CC-MAIN-2022-27 | refinedweb | 872 | 59.4 |
If you have ever worked with more than one relational data server, you know that SQL is not one language. In fact, SQL standard is an artificial construct not supported by any major player. Every vendor has their own idiosyncrasies, extensions, and shortcomings.
For you, as an application developer, that means you are left with a choice:
- Limit your SQL usage to the absolute minimum required (typically barely more than SELECT * FROM T) and do all meaningful work in the application. That sure defeats the purpose of SQL and bypasses the powerful optimization techniques for which you or your customers have already paid.
- Exploit the data server and its proprietary capabilities to the fullest and deal with the consequences. Among these consequences you will find that you have become hostage to the data server vendor. It is hard to quibble with the dealer on price when he knows you need your fix and you have to come to him. Your threats to leave will sound hollow at best.
- Shades of gray of the above. Typically you choose some sort of abstraction layer that allows you to customize data access at some level of complexity. This choice requires a great deal of restraint.
Unfortunately, falling into this second category happens quite easily and often remains undetected. Did you know, for example, that it is impossible to write a query with something as simple as an inline view in Oracle® and have it work against DB2, or the other way around?
This article introduces a grab bag full of tweaks and features in DB2 Viper 2 that greatly extend the scope of queries written against Oracle that, out of the box, run against DB2. None of these features show up in marketing slides, but they just might be what makes the difference to you when porting to DB2.
Some of the features described in this article are bound to be controversial. This may not only be because they are non-standard, but also because they are either obsolete to begin with and therefore not allowed under coding guidelines, or the features may be in outright conflict with DB2's behavior. In order to accommodate backward compatibility, and allow a level of control for the DBA over what is being used, these particular features have been placed under registry control. The DB2 registry variable is called DB2_COMPATIBILITY_VECTOR and it is set to a hex string. To turn on all features, perform the following set-up from a shell:
Listing 1. Enable all compatibility features
Which specific bit corresponds to which individual feature are highlighted in this article as topics are introduced.
The article "Port CONNECT BY to DB2" (developerWorks, Oct 2006) discussed how to map Oracle style recursion using the CONNECT BY clause to SQL standard recursive common table expressions using WITH and UNION ALL. It even proved that SQL standard recursion is at least as powerful as CONNECT BY. In fact, the standard notation is even more powerful given that it can effectively produce more rows than provided by the input data.
This power is fine, but when porting an application, technology is not the issue. It turns out that some applications rely heavily on recursion. In those cases, the effort of translating from CONNECT BY to SQL standard recursion alone can become prohibitively expensive. This is especially true if the application relies on the implied ordering produced by CONNECT BY.
For that reason, DB2 Viper 2 introduces built-in support for CONNECT BY recursion.
When looking at a given language syntax, both expressive power and simplicity are of importance. Unfortunately, these properties collide and recursion shows this clearly:
- The SQL standard recursion is extremely powerful using only a negligible language extension:
- Allow reference of the table expression within its own definition.
- Use UNION ALL to model the seed of a recursion and the recursive step.
- On the other hand, the CONNECT BY recursion is very user friendly for a common case: walking of hierarchies.
This section discusses not only how CONNECT BY works, but also attempts to describe when best to use which style of recursion.
The easiest example to explain, and perhaps the most popular usage scenario for CONNECT BY, is a reports-to chain in a company. Assume the following:
Listing 2. Employee table for CONNECT BY
The following query returns all the employees working for Goyal as well as some additional information, such as the reports-to chain:
Listing 3. CONNECT BY example
The individual pieces of the query have the following meaning:
- Lines 7 and 8 comprise the core of the recursion. The optional START WITH describes the WHERE clause to be used on the source table for the seed of the recursion. In this case, you select only the row of employee Goyal. If START WITH is omitted, the entire source table is the seed of the recursion.
- CONNECT BY describes how, given the existing rows, the next set of rows is to be found. To distinguish values from the previous step with those from the current step, CONNECT BY recursion uses a unary operator PRIOR. PRIOR describes EMPID to be the employee ID of the previous recursive step, while MGRID originates from the current recursive step. As a unary operator, PRIOR has the highest precedence. However, using parentheses the operator can cover an entire expression.
- LEVEL in line 2 is a pseudo column that describes the current level of recursion. Pseudo columns are a concept foreign to the SQL standard. If the resolution of an identifier in DB2 fails completely, meaning there is neither a column nor a variable with that name in scope at all, then DB2 considers pseudo columns.
- CONNECT_BY_ROOT is another unary operator. It always returns the value of its argument as it was for the first recursive step. That is the values returned by the START WITH clause.
- SYS_CONNECT_BY_PATH() is a binary function. It prepends the second argument to the first, and then appends the result to the value it produced in the previous recursive step. Its arguments must be character types. Note that in DB2 the length of the result type is the greater of 1024 bytes and the length of the second argument.
Unless explicitly overridden, CONNECT BY recursion returns a result set in a partial order. That is, the rows produced by a recursive step always follow the row that produced them. Siblings at the same level of recursion with the same parent have no specific order.
- ORDER SIBLINGS BY in line 9 defines an order for siblings that further refines the partial order potentially into a total order.
Not shown in this example is that CONNECT BY recursion raises an error if a cycle occurs. A cycle happens when a row produces itself directly or indirectly. Using the optional CONNECT BY NOCYCLE clause, the recursion can ignore the duplicated row and therefore avoid the cycle and the error. DB2 for Viper 2 support 64 levels of recursion for CONNECT BY.
There is one caveat that you must be aware of when dealing with CONNECT BY. The semantics of the WHERE clause in conjunction with implicit joins.
- Any search condition that is used to join two tables in the FROM clause (such as S.pk = T.fk) is executed before the recursion is executed. Therefore, it is a filter reducing the rows that are being recursed over.
- Any local search condition (such as c1 > 5) is being executed on the result of the recursion.
Presumably this behavior is rooted in the history of the (+) join syntax.
Because of this interesting quirk, CONNECT BY has been placed under registry control. To enable only CONNECT BY use:
Listing 4. Enabling CONNECT BY
So, when should CONNECT BY be used and when should a recursive common table expressions be used?
Use CONNECT BY when:
- You need to return an ordered result set of an existing hierarchy or an email thread.
- There may be cycles that need to be caught or skipped.
- The recursion is over a self-referencing RI constraint.
Use recursive common table expressions when:
- You need to "generate rows" through recursion such as "all the business days of 2007."
- Ordering of the result set does not matter.
- You expect recursion of more than 64 levels.
DB2 UDB V8.1 introduced TO_CHAR() and TO_DATE() as synonyms for the generic VARCHAR_FORMAT() and TIMESTAMP_FORMAT() functions. However, these functions supported only a very narrow set of formats. In DB2 Viper 2, the coverage for formats has been greatly expanded. In essence, just about all formats commonly used in Oracle applications are supported unless they require National Language support (such as "Monday" vs. "Montag" or "December" vs. "Dezember"). Here is a list of the supported formats:
Table 1. Supported formats
More details on the formats can be found in the DB2 Viper 2 Information Center (see the Resources section). For this article, some examples will suffice:
Listing 5. Usage of TO_CHAR/TO_DATE
Sometimes it is necessary to limit the number of rows returned by a query. Oracle applications typically utilize the ROWNUM pseudo column for that purpose. In DB2, there are two options to achieve the same effect:
- FETCH FIRST n ROWS clause
- ROW_NUMBER() OLAP function
To make porting of SQL using ROWNUM easier, three changes have been made:
- ROWNUM is now supported as a synonym for ROW_NUMBER() OVER(). This captures all common usages of ROWNUM in the select list.
- ROW_NUMBER() OVER() (and thus ROWNUM) can now be specified in the WHERE clause. This extension not only covers all the common usages of ROWNUM in Oracle, but also adds capabilities very close to the LIMIT OFFSET clause employed by some open source vendors because ROWNUM can be used together with a BETWEEN clause to allow easy result set pagination.
- When a ROW_NUMBER() OVER() or ROWNUM is followed by an ORDER BY clause, the order is inherited into the OLAP function, guaranteeing that the values generated match the outer order.
Here are some example usages:
Listing 6. Usage of ROWNUM
Presently, ROWNUM is under registry control. To enable only ROWNUM use:
Listing 7. Enabling ROWNUM
DB2 Viper 2 introduces a number of synonyms or new functions that make porting easier. These functions include:
- NVL
- DECODE
- LEAST and GREATEST
- BITAND, BITANDNOT, BITOR, BITXOR, and BITNOT
These functions are generally available with no need to set a registry variable.
NVL has been a precursor to COALESCE with the limitation that it can only handle two arguments. Its workings are quite simple. If the first argument NOT NULL NVL returns that argument, otherwise it returns the second argument. Therefore, NULL is returned only if both arguments are NULL. For simplicity in DB2 Viper 2, NVL has been made a synonym to COALESCE. Meaning that NVL accepts any number of arguments with a minimum of two.
Listing 8. Usage of NVL()
Before the CASE expression, there was DECODE. DECODE, in a nutshell, is a function notation for what is called a simple case expression. All major vendors have long since added CASE expression support to their products. However, DECODE has retained a rather loyal following mainly because it is syntactically tight. There is one significant difference between CASE and DECODE though. In DECODE, NULLs are considered to be equal. The following is a comparison of CASE and DECODE
Listing 9. Usage of DECODE()
LEAST and GREATEST return the smallest or biggest value of a set of arguments. Alternative names for these functions are the MIN and MAX scalar functions, which must not be confused with the aggregate functions of the same name. While MIN/MAX with one argument aggregate values from a set of rows and return the smallest or biggest value respectively, MIN (aka LEAST) and MAX (aka GREATEST) with two or more arguments operate only on their input arguments for the current row with no grouping effect. Aside from this principle difference, LEAST and GREATEST also return a NULL if any argument is NULL, where the MIN and MAX aggregate functions ignore NULLs. The following is an example comparing the functions.
Listing 10. Usage of LEAST() and GREATEST()
DB2 Viper 2 introduces a set of functions that are used to efficiently encode, decode, and test bit arrays. Simply put, the bit manipulation functions view a whole number in its binary two's complement representation and allow the setting, resetting, toggling, or testing of individual bits. Note that the binary representation is independent of the internal representation. Meaning it is independend of endianess or the encoding of the base type, such as DECFLOAT. The number of bits supported depends on the datatypes used:
Table 2. Number of bits supported for a data type
The meaning of each function and its usage is rather straight forward. Nonetheless, Table 3 explains each function followed by some examples.
Table 3. Bit operation functions
In general, you should be aware of the data types used as arguments for these functions. If the types do not match up, DB2 goes through regular type promotion from one type to the other. Meaning a positive value gets padded with a number of binary zeros to the left while a negative value is padded to the left with binary ones. So using a consistent type is strongly advised. Also, be aware that while DECFLOAT supports 113 bits, two BIGINTS that take the same amount of storage support 128 bits. The price of convenience.
Listing 11 provides code examples:
Listing 11. Usage of BIT manipulation functions
Before supporting the SQL standard OUTER JOIN syntax, Oracle used a proprietary syntax to model outer joins using an extension to the WHERE clause. While this syntax is long since deprecated, there are still many old applications that use this syntax. To make matters worse, plenty of developers have not adopted the SQL standard syntax for various reasons and continue to produce new applications based on the old syntax.
To support porting of those old applications (+), style join syntax has been added to DB2 Viper 2. DB2 development strongly discourages usage of this feature for any other purpose. Therefore, (+) join syntax is under registry control. To enable only (+) join syntax use:
Listing 12. Enabling (+) join syntax
Having given this disclaimer, the syntax will now briefly be introduced, assuming that those who need to know, already know, and merely need confirmation that support is available.
Simply put, the (+) join syntax uses the implicit join syntax where tables are listed in the FROM clause and predicates correlate the tables in the WHERE clause. Do denote that a predicate in the WHERE clause is an outer join. All column references to the inner (that is the null producing side) are trailed by a "(+)." If the predicate references more than one column, then all columns must be marked in the same way. Full outer joins are not supported. Also "bushy" joins, that is joins which would require the setting of braces in the SQL standard, are not supported. Anyway, this feature has already been given more attention than it deserves.
Listing 13 provides a couple of examples comparing (+) notation and SQL standard outer joins.
Listing 13. (+) join examples
Often the differences between SQL dialects are annoyingly trivial. Meaning that it can be incomprehensible for the developer why identical behavior uses different syntax, often without leaving any option for compatible SQL code. The items in this section are geared to decrease these nuisance differences.
Sometimes all that is required of the data server is the computation of a simple scalar expression, or the retrieval of a built-in variable, such as the current time. However, since SQL is table centric such a task is not nearly as trivial and consistently solved as one might expect. One possible solution is to provide a dummy table that has one row and one column, and can serve up the environment to perform the computation. In DB2, this dummy table is called SYSIBM.SYSDUMMY1. In Oracle, it is called DUAL. So in DB2 Viper 2, you can also use DUAL. Note that DUAL does not require a schema. If the functionality is enabled, any unqualified table reference named "DUAL" is presumed to be this one. So it is best not to use DUAL in your regular schema. Listing 14 shows a quick example:
Listing 14. DUAL and its DB2 alternative
Due to the potential conflict with existing tables, DUAL has been placed under registry control. To enable only DUAL, use:
Listing 15. Enabling DUAL
When sequences were introduced in DB2 V7.2, there were three changes that had to be made from the precedent to make the feature palatable to the SQL standard:
- Separation of namespaces between tables, routines, and sequences
- Clear separation of the previous value from the next value
- Usage of an expression instead of a pseudo column
As a result, any sequence generation and sequence look-up is incompatible between Oracle and DB2. DB2 Viper 2 adds the following toleration:
- Allow pseudo column notation to retrieve or generate a sequence value. Note that pseudo columns are only resolved after regular name resolution fails.
- Allow CURRVAL as notation to retrieve the previous value in pseudo column notation.
Note that the second bullet does not imply that a CURRVAL in the same select list as a NEXTVAL returns NEXTVAL, as it does in Oracle. DB2 maintains that separation. Listing 16 shows examples of the new support:
Listing 16. CURRVAL and NEXTVAL usage
DB2 users know inline views as nested subqueries. In other words, they are SELECTs in the FROM clause. In DB2, an inline view traditionally must be named. In Oracle it must not be named. The conundrum to a SQL developer is self evident. Only in recent releases of Oracle, can complex queries be written using common table expressions (also known as the WITH clause), and also be run against DB2. DB2 Viper 2 makes naming of nested subqueries optional, allowing for shared SQL for complex queries. DB2 users will appreciate that omitting a name exposes the function name of a table function. Listing 17 provides a simple example illustrating the behavior:
Listing 17. Inline view usage
When subtracting one query from another, DB2 and Oracle have also chosen different paths. In DB2, the EXCEPT keyword is used. In Oracle, the keyword is MINUS. DB2 Viper 2 now accepts MINUS as a synonym for EXCEPT. So the two following queries are identical:
Listing 18. Usage of MINUS
While Oracle has long since adopted the DISTINCT keyword to filter out duplicate rows, there are still applications that use UNIQUE as a synonym for DISTINCT. Furthermore, UNIQUE is also found in Informix IDS where it can be used in some additional places. DB2 Viper 2 makes UNIQUE a synonym for the DISTINCT keyword anywhere except for CREATE DISTINCT TYPE. Listing 19 shows its usage:
Listing 19. Usage of UNIQUE
Besides the compatibility features highlighted in the preceding sections, DB2 Viper 2 sports a host of other major new functionality. Some of them can make porting easier and deserve an honorable mention here and in dedicated articles in this forum.
DB2 Viper 2's support for optimistic locking strategies includes two core pieces:
- Automatic time stamping or change marking of rows as they are modified in the database.
- Two new functions RID() and RID_BIT() that externalize the physical position of a row.
While some may disapprove of exposing physical properties of a row through SQL, it is undeniable that RID() and RID_BIT() make porting of applications that use ROWID a lot easier.
The SQL standard ARRAY data type is supported in DB2 Viper 2 within SQL procedures as well as for procedure parameters. This feature allows for more efficient movement of data sets from the application to the DB2 server. It enables simplified porting of not only VARRAY types, but also of FORALL and BULK COLLECT constructs through array aggregation using the ARRAY_AGG() function, as well as unnesting of arrays into tables using the UNNEST() operator.
Global session variable support
DB2 Viper 2 extends SQL with a new data object called global session variable. Global session variables have a schema qualified, persistent definitions in the catalog, but their content is private to any given session. Global session variables facilitate porting of package variables when mapping a package to a DB2 schema.
Traditionally, DB2 uses operating system facilities to authenticate users and verify group memberships. There is no user information or passwords stored in the DBMS. This has not changed and is unlikely to in the future. However, there is value in managing group memberships within the data server. In a nutshell, groups managed within the data server are called roles. Roles are supported by DB2 Viper 2. This simplifies porting of applications that rely on roles.
Decimal floating point type
There has long been a need for a numeric data type that provides both decimal accuracy as well as floating point behavior. However, that need is not limited to the database. Instead it covers programming languages, databases, and ultimately hardware. After all, what good is a type that is slow to do arithmetic on? There are few, if any, companies in the world who have the breadth and patience to rise to such a broad challenge these days. Over the past years, IBM has quietly worked to introduce a true standardized decimal floating point into C, Java, its new Power 6 processors, DB2 9 for zOS, and now DB2 Viper 2. In SQL, this new type is called DECFLOAT and presently has an accuracy of 16 or 34 digits at a range beyond 10 to the power of 6000. DB2 Viper 2 exploits the Power 6 hardware acceleration. I am confident other hardware vendors will follow suit.
DECFLOAT support greatly simplifies porting of the proprietary (and software emulated) NUMBER data type.
When enabling DB2 for your application, the cost of changing and maintaining your code is a major cost factor. Any difference between SQL dialects has to be found and worked around increasing the time it takes to do the port as well as complicating testing. DB2 Viper 2 provides significant enhancements that help you drive this cost down, making a port more straight forward.
As DB2 Viper 2 nears release, you may find additional enhancements with the same thrust. The goal is that all these enhancements are exploited by the migration toolkit, increasing its conversion rate as well as overall performance of converted application.
Throughout this article there has been reference to the DB2_COMPATIBILITY_VECTOR registry variable. The purpose of this variable is to enable or disable some of the features discussed. To switch on a feature, you must add its bit to the current setting of the register. To switch a feature off, the respective bit needs to be unset. After setting the variable, you need to restart DB2. The settings are defined as follow:
Table 4. DB2_COMPATIBILITY_VECTOR settings
For example, to enable both ROWNUM (0x01) and DUAL (0x02), use:
Listing 20. Enabling ROWNUM and DUAL
To add hierarchical query support, or 0x03 with 0x08 which is 0x0B, and enter:
Listing 21. Enabling ROWNUM, DUAL, and CONNET BY
Learn
- "Port CONNECT BY to DB2" (developerWorks, Oct 2006): Map CONNECT BY to DB2 prior to DB2 Viper 2.
- DB2 Viper 2 Information Center: Find DB2 Viper 2 information in more detail.
- The Migration Web site serves all your migration needs including a free migration tool kit.
- Browse the technology bookstore for books on these and other technical topics.
- developerWorks Information Management zone: Learn more about DB2. Find technical documentation, how-to articles, education, downloads, product information, and more.
- Stay current with developerWorks technical events and webcasts.
Get products and technologies
- Download DB2 Viper 2 open beta.
- Download IBM product evaluation versions and get your hands on application development tools and middleware products from DB2®, Lotus®, Rational®, Tivoli®, and WebSphere®.
Discuss
- Participate in the discussion forum.
- Ask questions in the DB2 Viper 2 open beta forum.
- Check out developerWorks blogs and get involved in the developerWorks community.
Serge Rielau is part of the DB2 Solutions Development team in IBM Canada, where he works closely with customers, business partners, and development to port or migrate applications from competitive RDBMS to DB2 for Linux, UNIX, and Windows. Prior to this role, he spent seven years as a team lead and technical manager in the DB2 SQL Compiler Development team. As an expert in the SQL language, Serge is an active participant in the comp.databases.ibm-db2 newsgroup. | http://www.ibm.com/developerworks/data/library/techarticle/dm-0707rielau/index.html | crawl-003 | refinedweb | 4,064 | 54.83 |
Lesson 3. Create Word Frequency Counts and Sentiments Using Twitter Data and Tweepy in Python
Learning Objectives
After completing this tutorial, you will be able to:
- Clean or “munge” social media data to prepare it for analysis.
- Explore and analyze word counts associated with tweets.
- Analyze sentiments (i.e. attitudes) in tweets.
What You Need
You will need a computer with internet access to complete this lesson.
In this lesson, you will learn how to take a set of tweets and clean them in order to analyze the frequency of words found in the tweets. You will learn how to do several things including:
- Remove URLs from tweets.
- Clean up tweet text including differences in case (e.g. upper, lower) that will affect unique word counts.
- Summarize and count individual and sets of words found in tweets.
Get and Analyze Tweets Related to Climate
When you work with social media and other text data, the user community creates and curates the content. This means there are NO RULES! This also means that you may have to perform extra steps to clean the data to ensure you are analyzing the right thing.
Next, you will explore the text associated with a set of tweets that you access using tweepy and the Twitter API. You will use some standard natural language processing (also known as text mining) approaches to do this.
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import itertools import collections import tweepy as tw import nltk from nltk.corpus import stopwords import re import networkx import warnings warnings.filterwarnings("ignore") sns.set(font_scale=1.5) sns.set_style("whitegrid")
Remember to)
Now that you’ve authenticated you’re ready to search for tweets that contain
#climatechange. Below you grab 1000 recent tweets.
search_term = "#climate+change -filter:retweets" tweets = tw.Cursor(api.search, q=search_term, lang="en", since='2018-11-01').items(1000) all_tweets = [tweet.text for tweet in tweets] all_tweets[']
Clean Up Twitter Data - Remove URLs (links) From Each Tweet
The tweets above have some elements that you do not want in your word counts. For instance, URLs will not be analyzed in this lesson. You can remove URLs (links) using regular expressions accessed from the
re package. Re stands for
regular expressions. Regular expressions are a special syntax that is used to identify patterns in a string.
While this lesson will not cover regular expressions, it is helpful to understand that this syntax below:
([^0-9A-Za-z \t])|(\w+:\/\/\S+)
Tells the search to find all strings that look like a URL, and replace it with nothing –
"". It also removes other punctionation including hashtags -
#.
re.sub allows you to substitute a selection of characters defined using a regular expression, with something else.
In the function defined below, this line takes the text in each tweet and replaces the URL with
"" (nothing):
re.sub("([^0-9A-Za-z \t])|(\w+:\/\/\S+)", "", tweet
def remove_url(txt): """Replace URLs found in a text string with nothing (i.e. it will remove the URL from the string). Parameters ---------- txt : string A text string that you want to parse and remove urls. Returns ------- The same txt string with url's removed. """ return " ".join(re.sub("([^0-9A-Za-z \t])|(\w+:\/\/\S+)", "", txt).split())
After defining the function, you can call it in a list comprehension to create a list of the clean tweets.
all_tweets_no_urls = [remove_url(tweet) for tweet in all_tweets] all_tweets_no_urls[']
Text Cleanup - Address Case Issues
Capitalization is also a challenge when analyzing text data. If you are trying to create a list of unique words in your tweets, words with capitalization will be different from words that are all lowercase.
# Note how capitalization impacts unique returned values ex_list = ["Dog", "dog", "dog", "cat", "cat", ","] # Get unique elements in the list set(ex_list)
{',', 'Dog', 'cat', 'dog'}
To account for this, you can make each word lowercase using the string method
.lower(). In the code below, this method is applied using a list comprehension.
# Note how capitalization impacts unique returned values words_list = ["Dog", "dog", "dog", "cat", "cat", ","] # Make all elements in the list lowercase lower_case = [word.lower() for word in words_list] # Get all elements in the list lower_case
['dog', 'dog', 'dog', 'cat', 'cat', ',']
Now all of the words in your list are lower case. You can again use
set() function to return only unique words.
# Now you have only unique words set(lower_case)
{',', 'cat', 'dog'}
Create List of Words from Tweets
Right now you have a list of lists that contains each full tweet. However, to do a word frequency analysis, you need a list of all of the words associated with each tweet. You can use
.split() to split out each word into a unique element in a list.
# Split the words from one tweet into unique elements all_tweets_no_urls[0].split()
['Climate', 'change', 'is', 'fueling', 'wildfires', 'warns', 'National', 'Climate', 'Assessment']
Of course, you will notice above that you have a capital word in your list of words. You can combine
.lower() with
.split() to remove capital letters and split up the tweet in one step.
# Split the words from one tweet into unique elements all_tweets_no_urls[0].lower().split()
['climate', 'change', 'is', 'fueling', 'wildfires', 'warns', 'national', 'climate', 'assessment']
To split words in all of the tweets, you can then string both methods together in a list comprehension.
# Create a sublist of words for each tweet, all lower case words_in_tweet = [tweet.lower().split() for tweet in all_tweets_no_urls]
Tweet Length Analysis
A tweet is limited to 280 characters. You can explore how many words (not including links) were used by people that recently tweeted about climate change. To do this, you will use the
len() function to calculate the length of each list of words that are associated with each tweet.
tweet_word_count = [len(word) for word in words_in_tweet] tweet_word_count[:3]
[9, 18, 13]
You can use the list you created above to plot the distribution of tweet length.
# Get the average word count average_word_count = np.mean(tweet_word_count) # Print this value out in a text statement print('The average number of words in each tweet is %0.6f' % average_word_count) fig, ax = plt.subplots(figsize=(8, 6)) # Plot the histogram ax.hist(tweet_word_count, bins=50, color="purple") # Add labels of specified sizes ax.set(xlabel="Word Count", ylabel="Frequency", title="Tweet Word Count Distribution") # Plot a line for the average value ax.axvline(x=average_word_count, lw=2, color='red', linestyle='--') plt.show()
The average number of words in each tweet is 15.650000
Remove Stopwords From Tweet Text With
nltk
The
Python package
nltk is commonly used for text analysis. Included in this package is a list of “stop words”. These include commonly appearing words such as who, what, you, ect. that generally do not add meaningful information to the text you are trying to analysis.
nltk.download('stopwords')
[nltk_data] Downloading package stopwords to [nltk_data] /home/jpalomino/nltk_data... [nltk_data] Package stopwords is already up-to-date!
True
stop_words = set(stopwords.words('english')) # View a few words from the set list(stop_words)[0:10]
['against', 'on', "it's", 'does', 're', 'ma', 'd', 'can', 'was', 'haven']
Notice that the stop words provided by
nltk are all lower-case. This works well given you already have converted all of your tweet words to lower case using the
Python
string method
.lower().
Next, you will remove all stop words from each tweet. First, have a look at the words in the first tweet below.
words_in_tweet[0]
['climate', 'change', 'is', 'fueling', 'wildfires', 'warns', 'national', 'climate', 'assessment']
Below, you remove all of the stop words in each tweet. The list comprehension below might look confusing as it is nested. The list comprehension below is the same as calling:
for all_words in words_in_tweet: for a word in all_words: # remove stop words
Compare the words in the original tweet to the words in the tweet once the stop words are removed:
# Remove stop words from each tweet list of words tweets_nsw = [[word for word in tweet_words if not word in stop_words] for tweet_words in words_in_tweet] tweets_nsw[0]
['climate', 'change', 'fueling', 'wildfires', 'warns', 'national', 'climate', 'assessment']
Remove Collection Words
In additional to removing stopwords, it is common to also remove collection words. Collection words are the words that you used to query your data from Twitter. In this case, you used
climate change as a collection term. Thus, you can expect that these terms will be found in each tweet. This could skew your word frequency analysis.
Remove the words - climate, change, and climatechange - from the tweets.
collection_words = ['climatechange', 'climate', 'change']
tweets_nsw_nc = [[w for w in word if not w in collection_words] for word in tweets_nsw] tweets_nsw_nc[0]
['fueling', 'wildfires', 'warns', 'national', 'assessment']
Calculate Word Frequency
Now that you have cleaned up your data, you are ready to calculate word frequencies.
To begin, flatten your list. Note that you could flatten your list with another list comprehension like this:
all_words = [item for sublist in tweets_nsw for item in sublist]
But it’s actually faster to use itertools to flatten the list as follows.
# All words all_words = list(itertools.chain(*tweets_nsw)) len(all_words)
10513
# All words not including the collection words all_words_nocollect = list(itertools.chain(*tweets_nsw_nc)) len(all_words_nocollect)
8865
Now you have two lists of words from your tweets: one with and one without the collection words. Remember that in this sample, the average tweet length was about 16 words, before the stop words are removed, so the number of words seem reasonable.
To get the count of how many times each words appears in the sample, you can use the built-in
Python library
collections, which helps create a special type of a
Python dictonary.
counts_with_collection_words = collections.Counter(all_words) type(counts_with_collection_words)
collections.Counter
Look at the counts for your data including the collection words. Notice that the words climate, change and climatechange are prevalent in your analysis given they were a collection term.
Thus, it likely does make sense to remove them from this analysis.
The
collection.Counter object has a useful built-in method
most_common that will return the most commonly used words, and the number of times that they are used.
# View word counts for list that includes collection terms counts_with_collection_words.most_common(15)
[('climate', 927), ('change', 599), ('report', 159), ('climatechange', 122), ('trump', 105), ('us', 96), ('new', 67), ('amp', 59), ('believe', 57), ('government', 49), ('globalwarming', 44), ('national', 40), ('via', 40), ('world', 37), ('says', 36)]
# View word counts for list that does NOT INCLUDE collection terms cleaned_tweet_word_list = collections.Counter(all_words_nocollect) cleaned_tweet_word_list.most_common(15)
[('report', 159), ('trump', 105), ('us', 96), ('new', 67), ('amp', 59), ('believe', 57), ('government', 49), ('globalwarming', 44), ('national', 40), ('via', 40), ('world', 37), ('says', 36), ('gpwx', 36), ('could', 36), ('assessment', 35)]
To find out the number of unique words, you can take the
len() of the object counts you just created.
len(cleaned_tweet_word_list)
3536
Finally, you can turn your list of words into a
Pandas Dataframe for analysis and plotting.
df_tweet_words = pd.DataFrame.from_dict(cleaned_tweet_word_list, orient='index').reset_index() df_tweet_words.columns = ['words', 'count'] df_tweet_words.head()
# Sort dataframe by word count sorted_df = df_tweet_words.sort_values(by='count', ascending=False) # Select top 16 words with highest word counts sorted_df_s = sorted_df[:16] sorted_df_s = sorted_df_s.sort_values(by='count', ascending=True)
fig, ax = plt.subplots(figsize=(8, 8)) # Plot horizontal bar graph ax.barh(sorted_df_s['words'], sorted_df_s['count'], color = 'purple'); ax.set(xlabel="Count", ylabel="Words",) ax.set_title("Common Words Found in Tweets") plt.show()
Explore Networks of Words
You might also want to explore words that occur together in tweets. You can do that next using
bigrams from
nltk.
Begin by creating a list of bigrams (i.e. co-occurring) in the tweets.
from nltk import bigrams
# Create list of bigrams in tweets terms_bigram = [list(bigrams(tweet)) for tweet in tweets_nsw_nc] # View bigrams for the first tweet terms_bigram[0]
[('fueling', 'wildfires'), ('wildfires', 'warns'), ('warns', 'national'), ('national', 'assessment')]
Notice that the words are paired due to co-occurrence. You can remind yourself of the original tweet or the cleaned list of words to see how co-occurrence is identified.
all_tweets_no_urls[0]
'Climate change is fueling wildfires warns National Climate Assessment'
tweets_nsw_nc[0]
['fueling', 'wildfires', 'warns', 'national', 'assessment']
You can use a counter combined with a for loop to calculate the count of occurrence for each bigram. The counter is used to store the bigrams as dictionary keys and their counts are as dictionary values.
You can then query attributes of the counter to identify the top 20 common bigrams across the tweets.
from collections import Counter bigram_counts = Counter() for lst in terms_bigram: for bigram in lst: bigram_counts[bigram] += 1
bigram_counts.most_common(20)
[(('gpwx', 'globalwarming'), 34), (('national', 'assessment'), 29), (('government', 'report'), 25), (('dont', 'believe'), 20), (('report', 'warns'), 19), (('us', 'government'), 14), (('new', 'report'), 13), (('doesnt', 'believe'), 13), (('global', 'warming'), 13), (('us', 'economy'), 12), (('climateaction', 'climatechangeisreal'), 11), (('climatechangeisreal', 'poetry'), 11), (('poetry', 'poem'), 11), (('federal', 'report'), 11), (('trump', 'administration'), 10), (('fourth', 'national'), 10), (('soil', 'takes'), 10), (('takes', 'decades'), 10), (('decades', 'catch'), 10), (('catch', 'changes'), 10)]
Visualizing Bigrams
You can visualize the top 20 occurring bigrams using the
Python packages
NetworkX. To do this, it is helpful to understand how to query the bigrams and their counts.
Note that you can select from the top 20 occurring bigrams using indexing (e.g.
[1] to select the second most occurring bigram and count, and then
[0] to select the bigram words without the counts).
bigram_counts.most_common(20)[1][0]
('national', 'assessment')
You can use indexing to create lists of the bigrams and their counts.
bigrams = [bigram_counts.most_common(20)[x][0] for x in np.arange(20)]
bigrams
[('gpwx', 'globalwarming'), ('national', 'assessment'), ('government', 'report'), ('dont', 'believe'), ('report', 'warns'), ('us', 'government'), ('new', 'report'), ('doesnt', 'believe'), ('global', 'warming'), ('us', 'economy'), ('climateaction', 'climatechangeisreal'), ('climatechangeisreal', 'poetry'), ('poetry', 'poem'), ('federal', 'report'), ('trump', 'administration'), ('fourth', 'national'), ('soil', 'takes'), ('takes', 'decades'), ('decades', 'catch'), ('catch', 'changes')]
bigram_count = [bigram_counts.most_common(20)[x][1] for x in np.arange(20)]
bigram_count
[34, 29, 25, 20, 19, 14, 13, 13, 13, 12, 11, 11, 11, 11, 10, 10, 10, 10, 10, 10]
You can then combine these lists into a
Pandas Dataframe.
bigram_df = pd.DataFrame({'bigram': bigrams, 'count': bigram_count})
bigram_df
You can also use the
Pandas Dataframe to plot a network of the bigrams using the
Python package
networkx.
# Create dictionary of bigrams and their counts d = bigram_df.set_index('bigram').T.to_dict('records')
import networkx as nx G = nx.Graph() # Create connections between nodes for k, v in d[0].items(): G.add_edge(k[0], k[1], weight=(v * 10)) G.add_node("china", weight=100)
fig, ax = plt.subplots(figsize=(16, 20)) pos = nx.spring_layout(G, k=1) # Plot networks nx.draw_networkx(G, pos, font_size=15, width=3, edge_color='grey', node_color='purple', with_labels = False, ax=ax) # Create offset labels for key, value in pos.items(): x, y = value[0]+.01, value[1]+.05 ax.text(x, y, s=key, bbox=dict(facecolor='red', alpha=0.5), horizontalalignment='center') plt.show()
Sentiment Analysis
You may also want to analyze the tweets to identify attitudes (i.e. sentiments) toward the subject of interest. To do this, you can use the
Python package
textblob.
Sentiment is scored using polarity values in a range from 1 to -1, in which values closer to 1 indicate more positivity and values closer to -1 indicate more negativity.
Begin with the climate+change tweets that you previously cleaned up to remove URLs, and recall that it is helpful for the tweets to be formatted as lower case.
from textblob import TextBlob
# All tweets all_tweets_no_urls # Format tweets as lower case tweets_clean = [tweet.lower() for tweet in all_tweets_no_urls] tweets_clean[0]
'climate change is fueling wildfires warns national climate assessment'
# Create textblob objects of the tweets sentiment_number = [TextBlob(tweet) for tweet in tweets_clean] sentiment_number[0]
TextBlob("climate change is fueling wildfires warns national climate assessment")
# Calculate the polarity values for the textblob objects s_n = [[tweet.sentiment.polarity, str(tweet)] for tweet in sentiment_number] s_n[0]
[0.0, 'climate change is fueling wildfires warns national climate assessment']
# Create dataframe containing the polarity value and tweet text sent_df = pd.DataFrame(s_n, columns=["polarity", "tweet"]) sent_df.head()
fig, ax = plt.subplots() # Plot histogram of the polarity values sent_df.hist(bins=[-1, -0.75, -0.5, -0.25, 0.25, 0.5, 0.75, 1], ax=ax) plt.show()
What does the histogram of the polarity values tell you about sentiments in the tweets gathered from the search “#climate+change -filter:retweets”? Are they more positive or negative?
Get and Analyze Tweets Related to the Camp Fire
Next, explore a new topic, the recent Camp Fire in California. Begin by reviewing what you have learned about searching for and cleaning tweets.
search_term = "#CampFire -filter:retweets" tweets = tw.Cursor(api.search, q=search_term, lang="en", since='2018-09-23').items(1000) all_tweets = [TextBlob(remove_url(tweet.text.lower())) for tweet in tweets] all_tweets[:5]
[TextBlob("support cafirefound this givingtuesday as they continue to support the families affected by the campfire and"), TextBlob("were supporting the butte county community by brewing resilienceipa we will donate 100 of resilience sales to t"), TextBlob("1 missingpets foundpets1127 campfire campfirepets paradise foundcat cats these kitties are all at san"), TextBlob("drove up from la at 5am this morning to volunteer with north state public radio nsprnews to report on the"), TextBlob("more than 1000 breweries from around the world help sierra nevada brew resilience ipa to help campfire victims")]
Then, you can calculate the polarity values and plot the histogram for the Camp Fire tweets, just like you did for the climate change data.
wild_sent = [[tweet.sentiment.polarity, str(tweet)] for tweet in all_tweets] wild_sent_df = pd.DataFrame(wild_sent, columns=["polarity", "tweet"]) fig, ax = plt.subplots(figsize=(8, 8)) wild_sent_df.hist(bins=[-1, -0.75, -0.5, -0.25, 0.25, 0.5, 0.75, 1], ax=ax, color="purple") plt.show()
It can also be helpful to remove the polarity values equal to zero and create a break in the histogram at zero, so you can get a better visual of the polarity values.
Does this revised histogram highlight whether sentiments from the Camp Fire tweets are more positive or negative?
# Create dataframe without polarity values equal to zero df = wild_sent_df[wild_sent_df.polarity != 0] fig, ax = plt.subplots() df.hist(bins=[-1, -0.75, -0.5, -0.25, 0, 0.25, 0.5, 0.75, 1], ax=ax, color="purple") plt.show()
Share onTwitter Facebook Google+ LinkedIn
| https://www.earthdatascience.org/courses/earth-analytics-python/get-data-using-apis/calculate-tweet-word-frequencies-sentiments-in-python/ | CC-MAIN-2018-51 | refinedweb | 3,047 | 55.64 |
Hi guys,
I have 241 nucleotide sequences (~1500bp) that I would like to calculate all pairwise sequence identities for.
I wrote a tool for this, but it keeps crashing for some reason.
Does anyone know of a (online) tool that will allow me to get this information?
thanks
Hi guys,
In can be easily done in Python.
from itertools import combinations from Bio import SeqIO from Bio import pairwise2 seqs = SeqIO.to_dict(SeqIO.parse(open('file.fasta'),'fasta')) for sr1, sr2 in combinations(seqs, 2): aln = pairwise2.align.globalxx(str(seqs[sr1].seq), str(seqs[sr2].seq))[0] print sr1, sr2, aln[2]/float(aln[4])*100
Say
file.fasta contains 3 fasta records.
>seq1 ATGCTGATGATG >seq2 AGTCGCTGATGATAGAATAGATAGGA >seq3 ATGCTGATGATG
Then, the output is:
seq3 seq2 46.1538461538 seq3 seq1 100.0 seq2 seq1 46.1538461538
thanks, it seems to be working well for the example you provided. however, Running two 1500bp alignments causes my entire machine to freeze. i did not expect pairwise2 do be this memory intensive...
For two sequences of length 1500bp, typical implementation requires 1500x1500=2.25MB memory. Even a careless implementation will not use more than 3x4x2.25MB~30MB. This is the memory if you use the programs I recommended. The core of pairwise2 is implemented in C. I would not expect RAM to be a problem. Nonetheless, I do not use pairwise2. What I said above may not be applicable to it.
thanks. I took a.zielinski's framework and edited it to use muscle alignments. It works pretty well. I will post my code when I get home tonight! I agree, I am surprised at the memory use of pairwise2. However, both on my PC at work and my mac at home it eats up all memory and takes forever...
Why not use ssearch from FASTA3 or swat from phrap? | https://www.biostars.org/p/58802/ | CC-MAIN-2019-09 | refinedweb | 307 | 69.99 |
Created on 2012-10-19 18:34 by rbcollins, last changed 2016-09-13 11:52 by chris.jerdonek.
TextTestRunner calls str(TestCase) directly, which makes it hard for testscenarios to rename the test cases as it parameterises them (because __str__ is a descriptor). While testscenarios could use a decorator instead, thats undesirable as the test case object would still need to be customised so that calls to self.id() and self.shortDescription() inside it still return consistent information.
So the relevant code is this:
def getDescription(self, test):
41 if self.descriptions:
42 return test.shortDescription() or str(test)
43 else:
44 return str(test)
What I'd like is to have this be something like:
41 if self.descriptions:
42 return test.shortDescription() or test.id()
43 else:
44 return test.id()
Which would let testscenarios adjust both shortDescriptions and id, and Just Work.
Or anther way this could be done would be to make TestCase.__str__ call self.id(), and then __str__ never needs to be adjusted - id() or shortDescription are the only things to tweak.
By "descriptor" I think you really mean it is a special method (ie: looked up on the class only, not on an instance). A descriptor is something else.
This is a feature request and could only be implemented in 3.4. Assuming I'm understanding what you are asking for (which is not certain), as a partial workaround you could reset _testMethodName.
They aren't descriptors? They certainly aren't normal methods:
>>> class foo(object):
... def __str__(self): return "foo"
...
>>> f = foo()
>>> f.__str__ = lambda: "bar"
>>> str(f)
'foo'
I *thought* the mechanism by which they can only be replaced by altering the class *was* descriptors, but I gladly seek better info!
_testMethodName is overloaded though: its both 'what is the test method' and 'part of the description or id of the test', so resetting it would be - at best - risky when only changing the description or id.
The special handling of special methods is baked into the attribute lookup machinery. The discussion of this is in the language reference somewhere, as is the explanation of what descriptors are and how they work.
> So the relevant code is this:
> def getDescription(self, test):
> ...
> 43 return str(test)
> ...
> What I'd like is to have this be something like:
> 44 return test.id()
> Or anther way this could be done would be to make TestCase.__str__ call self.id()
Note that str(test) and test.id() don't return the same value. The former is the "friendly" name. It is probably intentional that TextTestResult uses the TestCase instance's friendly name in its getDescription() method as opposed to the id.
> TextTestRunner calls str(TestCase) directly, which makes it hard for testscenarios to rename the test cases as it parameterises them
What about testscenarios subclassing the TestCase instances as it parametrizes them?
class ScenariosTestCase(case.__class__):
def __str__(self):
return "testscenarios test name..."
case.__class__ = ScenariosTestCase
print(str(case))
From the documentation, it looks like testscenarios creates new test objects from the originals anyways. Alternatively (although I don't if testscenarios can control what test runner is being used), TextTestRunner could be subclassed with its own getDescription() method.
testscenarios copies the tests, it doesn't call the constructor for the class; this makes things a lot simpler than trying to reconstruct whatever state the object may have from scratch again.
As for str(test) and test.id() being different - well sure they are today, but I don't know that the str(test) format is /useful/ today, as its not a particularly good str() anyhow. It doesn't identify that its a test instance even.
This suggests two alternatives to me:
- decide that we want three things: id, friendly-id and shortDescription, and have three non-magic methods, which TextTestRunner can call depending on what the user wants to see.
- decide we want two things, id and shortDescription, and TextTestRunner can combine these things to meet the users request. (e.g. id + ' ' + shortDescription)
And separately, as the __str__ isn't particularly good anyhow, perhaps we should take the opportunity to think about what we want from it and adjust it.
So three including str sounds sufficient to me: short description, long description and repr (with str == repr) for debugging.
Robert: I don't know if there's something funky going on with your browser, but every time you post the 'enhancement' setting for type seems to get lost.
@Michael I'll put a patch together next week then.
@R.david.murray no idea - but I've refreshed the page, we'll if it behaves better. My guess would be a buggy in-flight-collision detection in the issue tracker code.
> testscenarios copies the tests, it doesn't call the constructor for the class;
My suggestion on how to override __str__ (by assignment to __class__) doesn't require that the constructor be called.
Much more likely that you just needed to refresh the page, going by my own experience with this kind of problem (especially seeing as that seems to have fixed it :)
@Chris - I don't like the idea of making new classes on the fly like that, it seems more likely to provoke bugs (as type(case) != SomeSpecificClass) anymore after that, vs just not relying on __str__ directly.
Going back to Michael's proposal of short description, long description and repr (with str == repr) for debugging. - that is missing id(), and id() is IMO definitely still needed.
I was proposing id(), friendlyId(), shortDescription(), and __str__ calls friendlyId(), and __repr__ is a regular <...> repr.
An idea occurred to me on this recently. Instead of changing TextTestResult to call test.id() everywhere instead of str(test), what about making TextTestResult DRY by having it call a new method called something like self.getName(test)?
With this approach, customizing the test name would simply be a matter of subclassing TextTestResult and overriding TextTestResult.getName() (which unittest makes easy). This is much easier than trying to modify the test cases themselves (which unittest doesn't make easy). It's also more natural as the responsibility for formatting should lie with the test result classes rather than with the test case classes themselves (e.g. as discussed in #22431). | http://bugs.python.org/issue16288 | CC-MAIN-2017-04 | refinedweb | 1,042 | 65.12 |
What You Can Do To Deal With Java’s Memory Retention Problems
When you work with the Java programming language, a feature that you will want to be familiar with is called finalization.
Finalization will allow you to conduct a cleanup on objects that the garbage collector is not capable of reaching. Finalization will generally be used to recapture resources which are connected to an object. Below is an example of a basic final;
}
Once the Imagel becomes impossible to reach, the JVM will request a finalize technique to make sure the resource that holds the image information will be recaptured. The finalize() technique is a method which holds arbitrary information. It can access any part of the object, and in the above example it has accessed dim and pos. In addition to this, it can also allow the object to become reachable. While this technique is not recommended by some programmers, the Java programming language allows it. I will now list the steps to show the lifespan of an object which is finalized. A finalized object is one that has a class which is non-trivial.
When the object is first registered, the Java Virtual Machine will record that this object is finalized. This may slow down the fast register path that the newer JVMs contain. After this occurs, the garbage collector will decide if the object is able to be reached. It will notice that the object has been finalized, and it will then add the object to the finalization queue. It will also make sure that all the objects which can be reached are retained, and this will allow the finalizer to gain access to them. Once this has occured, the Java Virtual Machine will use what is called a finalizer thread. The finalizer thread will dequeue the object, record the finalizer of the object, and bring forth the finalize method. Once these three things have been done, the object will be finalized.
Once the object has been finalized, it will be studied by the garbage collector. When the garbage collector sees that the object cannot be reached, it will recapture the space for the object in addition to everything which can reached within this space. The garbage collector will need two cycles to recapture an object, and it will need to recapture other objects which are reachable from the object during this procedure. If the programmer is not cautious, they could create a temporary data retention problem which will be unstable. There is no guarantee that the Java Virtual Machine will request finalizers of all the objects which have been finalized. It may cancel before the garbage collector finds that some of the objects can’t be reached.
Even if you use finalization correctly, you may find that the recapture process is extended. It is possible to correct this problem by arranging the code so that it uses a "contains" instead of "extends" pattern. An example of this code can be seen here:
public class RGBImage2 {
private Image1 img;
private byte rgbData[];
public void dispose() {
img.dispose();
}
}
This example contains an appearance of Image1 instead of just extending it. If an appearance of RGBImage2 can’t be reached, the garbage collector will quickly recapture it, and it will also capture the rgbData array. After this, it will queue up the appearance of Image1 to be finalized. It is not always possible to arrange your code in the method used above. You may be required to do a bit more work to make sure the instances do not take up more than they are supposed to when they are finalized. In the example below, I will show you how this can be done:
public class RGBImage3 extends Image1 {
private byte rgbData[];
public void dispose() {
super.dispose();
rgbData = null;
}
}
As you can see, RGBImage3 is the same as RGBImage1, but it uses the dispose( ) technique, which cancels the rgbData field. The techniques discussed in this article can effectively allow you to deal with the memory retention problems that are inherent in Java. Memory plays an important role in virtually any application, and it is important for you to learn how to utilize it properly. | http://www.exforsys.com/tutorials/j2ee/what-you-can-do-to-deal-with-java-memory-retention-problems.html | CC-MAIN-2017-26 | refinedweb | 696 | 60.45 |
API for deleting all rows from a table or a database. All the functions here are located in RDM DB Engine Library. Linker option:
-l
rdmrdm
#include <rdmdbapi.h>
Remove all rows from a database.
This function removes all rows from all tables in the database.
No triggers or other checks will be performed when a database has all of it's rows delete in this manner. Once this is called the database will be completely empty with no rows in any of the database tables.
This function must be performed within a transaction (optional in exclusive mode or when used with the standalone transactional file server). If the transaction is aborted, the database will remain unchanged.
Please note that any row IDs that was used prior to this call will be reused after a successful call to this function.
#include <rdmdbapi.h>
Remove all rows from a table.
This function deletes all of the rows in the specified table.
The ability to delete rows will be based on the triggered action specified in the table definition. Rows that only references other rows will be deleted and the references automatically removed. The behavior of a row that is referenced by other rows will be dependent on the on delete triggered action.
If the triggered action is: | https://docs.raima.com/rdm/14_1/group__db__delete.html | CC-MAIN-2019-18 | refinedweb | 216 | 66.44 |
Giacomo Pati wrote:
>
> Stefano, your recently commited XMLSerializer seems to have problems
> with namespaces. When you enter the following snippet into a pipeline:
>
> <map:match
> <map:generate
> <map:serialize type=xml/>
> </map:match>
>
> and request it with you'll see that
> you get the sitemap.xmap file without namespace attributes. I've cross
> checked that with the former XMLSerializer from Pier and that one works
> correctly. I've also cross checked if the xerces serializers which are
> used by the new XMLSerializer works correct. When I do a transformation
> at the command line with xalan and giving it the -SX flag (explicitly
> using xerces serializers) everything is fine.
>
> Could you take a look at that? Maybe the Bridges are wrong?
Hmmm, could definately be, yeah.
I'll check it out as soon as I'm able to start C2 (I'm currently
fighting with Catalina's classloader since I don't want to touch its
classpath but just using /WEB-INF/lib but it doesn't work..!
------------------------- --------------------- | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200009.mbox/%3C39B97958.AC1AF173@apache.org%3E | CC-MAIN-2016-18 | refinedweb | 167 | 64.71 |
Today’s digital cameras take pictures with much higher resolution than many computer screens. My Canon PowerShot SD800 IS camera takes pictures at 3072 x 2204 resolution.
One of my laptops died recently, and I noticed that local laptop retailers have machines with 1280 X 1024 resolution. I much prefer a higher resolution display, so I ordered a customizable Dell Inspiron 1525 with 1680 x 1050.
(I’ve been ordering computers from Dell for over 20 years, back in the days when it was called PC’s Limited. Why does Dell distinguish between business/government/home laptops?)
I was playing around with some photos using a PictureBox control, and I wanted to add a feature that would allow click/zoom using the mouse wheel to designate a point to magnify.
It was pretty easy to create my own class MyPictureBox which inherits from PictureBox and handles the MouseWheel and zooming.
Below are C# and VB versions. If I get enough requests, maybe I’ll make a Fox version (although I like this Fox version of zooming: Enable crop and zooming in on your digital photograph display form)
Start Visual Studio 2008 (I think these should work in VS2005)
File->New->Project->VB/C#->Windows Application.
View->Code, then paste the VB/C# code below, hit F5
Move the mouse to designate a zoom anchor point, then mouse wheel to zoom in/out
If you switch back to the form designer, the ToolBox has the MyPictureBox control on it, which you can drag/drop onto a form.
If you already have your own form that uses a PictureBox or several, you can avoid manually changing all the instances by carefully editing the Form1.Designer.vb file to replace the type with MyPictureBox (make a backup first).
As an exercise, try extending this feature by adding the ability to change the anchor point while zoomed or pan the zoomed picture.
Some VB/C# coding issues:
· semicolons
· capitalization
· intermediate arithmetic rounding results
· event handling
See also:
Enable crop and zooming in on your digital photograph display form
Create your own media browser: Display your pictures, music, movies in a XAML tooltip
<VB Code>
Public Class Form1
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
Me.Left = 0
Me.Top = 0
Me.Size = My.Computer.Screen.WorkingArea.Size
Dim oPict = New MyPictureBox
oPict.Size = Me.Size
oPict.SizeMode = PictureBoxSizeMode.StretchImage
oPict.Image = New Bitmap("d:\kids.jpg") ' path to some picture
Me.Controls.Add(oPict)
Me.ActiveControl = oPict
End Sub
End Class
Public Class MyPictureBox
Inherits PictureBox
Private zmLevel As Integer = 1
Private zmPt As Point
Overloads Property Image() As Image ' we want to hook when client's set the image to reset the zoom level to unzoomed
Get
Return MyBase.Image
End Get
Set(ByVal value As Image)
zmLevel = 1 ' reinit
MyBase.Image = value
End Set
End Property
Protected Overrides Sub OnPaint(ByVal pe As System.Windows.Forms.PaintEventArgs)
'MyBase.OnPaint(pe) ' don't call baseclass to paint
If Me.Image IsNot Nothing Then
Dim loc As Point
Dim sz As Size
If zmLevel <> 1 Then
sz = New Size(Me.Image.Width / zmLevel, Me.Image.Height / zmLevel)
' center on zmPt
loc = New Point(Me.Image.Width * (zmPt.X / Me.ClientRectangle.Width) - sz.Width / 2, _
Me.Image.Height * (zmPt.Y / Me.ClientRectangle.Height) - sz.Height / 2) '
Else
loc = New Point(0, 0) ' no zoom: we want the entire source picture
sz = Me.Image.Size
End If
Dim rectSrc = New Rectangle(loc, sz)
' now draw the rect of the source picture in the entire client rect of MyPictureBox
pe.Graphics.DrawImage(Me.Image, Me.ClientRectangle, rectSrc, GraphicsUnit.Pixel)
End If
End Sub
Sub PictureBox_MouseWheel(ByVal sender As Object, ByVal e As MouseEventArgs) Handles Me.MouseWheel
If Me.zmLevel = 1 Then ' can only anchor when unzoomed
Me.zmPt = New Point(e.X, e.Y)
End If
If e.Delta > 0 Then
If zmLevel < 20 Then
zmLevel += 1
End If
Else
If e.Delta < 1 Then
If zmLevel > 1 Then
zmLevel -= 1
End If
End If
End If
Me.Invalidate() ' queue msg to repaint
End Sub
End Class
</VB Code>
<C# Code>
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace WindowsFormsApplication1
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
this.Left = 0;
this.Top = 0;
this.Size = Screen.PrimaryScreen.WorkingArea.Size;
var oPict = new MyPictureBox();
oPict.Size = this.Size;
oPict.SizeMode = PictureBoxSizeMode.StretchImage;
oPict.Image = new Bitmap("d:\\kids.jpg");
this.Controls.Add(oPict);
this.ActiveControl = oPict;
}
}
}
public class MyPictureBox : PictureBox
{
private int zmLevel = 1;
private Point zmPt;
public MyPictureBox()
{
this.MouseWheel += new MouseEventHandler(MyPictureBox_MouseWheel);
}
void MyPictureBox_MouseWheel(object sender, MouseEventArgs e)
{
if (this.zmLevel == 1)
{
this.zmPt = new Point(e.X, e.Y);
}
if (e.Delta > 0)
{
if (zmLevel < 20)
{
zmLevel += 1;
}
}
else
{
if (e.Delta < 1)
{
if (zmLevel > 1)
{
zmLevel -= 1;
}
}
}
this.Invalidate();
}
new public Image Image // overrides
{
get
{
return base.Image;
}
set
{
zmLevel = 1;
base.Image = value;
}
}
protected override void OnPaint(PaintEventArgs pe)
{
//base.OnPaint(pe);
if (this.Image != null)
{
Point loc;
Size sz;
if (zmLevel != 1)
{
sz = new Size(this.Image.Width / zmLevel, this.Image.Height / zmLevel);
// center on zmPt. Casts are needed so integer divide doesn't occur (intermediate double result)
loc = new Point((int)(this.Image.Width * (zmPt.X / (double)this.ClientRectangle.Width)) - sz.Width / 2,
(int)(this.Image.Height * (zmPt.Y / (double)this.ClientRectangle.Height)) - sz.Height / 2);
}
else
{
loc = new Point(0, 0);
sz = this.Image.Size;
}
Rectangle rectSrc = new Rectangle(loc, sz);
// now draw the rect of the source picture in the entire client rect of MyPictureBox
pe.Graphics.DrawImage(this.Image, this.ClientRectangle, rectSrc, GraphicsUnit.Pixel);
}
}
}
</C# Code>
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
OK, this is one request for the VFP code to do the same. I look forward to your explanation on how to do the interop. Thanks.
Alex
I second Alex’s request …
Please give us a VFP version.
Thanks in advance.
One more request for the VFP version
PingBack from
I suspect that it is mostly to limit or alter the warrenty and support options that are displayed. If an average home user saw the warrenty/support options offered to businesses they might actually be shocked (by the prices mostly) but would certainly be confused.
And another request for the VFP version.
One more request for a VFP version.
PingBack from
PingBack from
PingBack from
I am really grateful for your kindness, wish you all success:-)
Great work. Thank you for sharing. How can i move using MouseMove event? or how to add a horizontal and vertical scroll bar?
Thank you again
Described in very Simplified Manner. Should i reproduce it on my blog ?
i got this when i run “An unhandled exception of type ‘System.ArgumentException’ occurred in System.Drawing.dll
Additional information: Parameter is not valid.”
how to solve?
Thanks for Sharing the knowledge… will it be possible using cpp code ??
if yes, can you please help me in that
thanks in advance | https://blogs.msdn.microsoft.com/calvin_hsia/2008/08/21/magnify-your-pictures-using-a-picturebox-so-that-you-can-zoom-with-the-mouse-wheel/ | CC-MAIN-2017-39 | refinedweb | 1,182 | 51.85 |
This site uses strictly necessary cookies. More Information
When i start the game the camera is facing the player but then when I press the W key my player walks backwards and same goes for A,S,D they do the opposite. Don't know what I did wrong with the scripts.
Script 1/ MouseOrbit.js:
var xSpeed = 250.0;
var ySpeed = 120.0;
var yMinLimit = -20;
var yMaxLimit = 80;
private var x = 0.0;
private var y = 0.0;
@script AddComponentMenu("Camera-Control/Mouse Orbit")
function Start () {
var angles = transform.eulerAngles;
x = angles.y;
y = angles.x;
// Make the rigid body not change rotation
if (GetComponent.<Rigidbody>())
GetComponent.<Rigidbody>().freezeRotation = true;
}
function Update () {
distance = Raycast3.distance3;
if(distance > 2){
distance = 0.8;
if(( Input.GetKey(KeyCode.A)) || ( Input.GetKey(KeyCode.LeftArrow))) {
x -= 3; //The higher the number the faster the camera rotation
}
if(( Input.GetKey(KeyCode.D)) || ( Input.GetKey(KeyCode.RightArrow))) {
x += 3;
}
}
}
function LateUpdate () {
if (target) {
x += Input.GetAxis("Mouse X") * xSpeed * 0.02;
y -= Input.GetAxis("Mouse Y") * ySpeed * 0.02;
y = ClampAngle(y, yMinLimit, yMaxLimit);
var rotation = Quaternion.Euler(y, x, 0);
var position = rotation * Vector3(0.0, 0.0, -distance) + target.position;
transform.rotation = rotation;
transform.position = position;
}
}
static function ClampAngle (angle : float, min : float, max : float) {
if (angle < -360)
angle += 360;
if (angle > 360)
angle -= 360;
return Mathf.Clamp (angle, min, max);
}
Script 2/ Raycast3.js:
static var distance3 : float = 5;
function Update () {
var hit: RaycastHit;
if( Physics.Raycast(transform.position,transform.TransformDirection(Vector3.forward),hit)){
distance3 = hit.distance;
}
}
Script 3/ Lookatcamera.js:
var target : Transform;
function Update() {
transform.LookAt(target);
}
I watched this tutorial:
Answer by shadowpuppet
·
Aug 15, 2016 at 06:35 PM
try this maybe for an Orbit camera. I know it works so if you still have the same problem then check your input settings - maybe pos and neg are swapped or try changing the pos and negs in the script. I don't know, i didn't really look at your script. but maybe compare to the one below and maybe spot some differences. this came from a scripting pack that I don't use. I just use the FPS and TPS cameras
using UnityEngine;
using System.Collections;
public class OrbitCamera : MonoBehaviour {
/* These variables are what tell the camera how its going to function by
* setting the viewing target, collision layers, and other properties
* such as distance and viewing angles */
public Transform viewTarget;
public LayerMask collisionLayers;
public float distance = 6.0f;
public float distanceSpeed = 150.0f;
public float collisionOffset = 0.3f;
public float minDistance = 4.0f;
public float maxDistance = 12.0f;
public float height = 1.5f;
public float horizontalRotationSpeed = 250.0f;
public float verticalRotationSpeed = 150.0f;
public float rotationDampening = 0.75f;
public float minVerticalAngle = -60.0f;
public float maxVerticalAngle = 60.0f;
public bool useRMBToAim = false;
/* These variables are meant to store values given by the script and
* not the user */
private float h, v, newDistance, smoothDistance;
private Vector3 newPosition;
private Quaternion newRotation, smoothRotation;
private Transform cameraTransform;
/* This is where we initialize our script */
void Start () {
Initialize ();
}
/* This is where we set our private variables, check for null errors,
* and anything else that needs to be called once during startup */
void Initialize () {
h = this.transform.eulerAngles.x;
v = this.transform.eulerAngles.y;
cameraTransform = this.transform;
smoothDistance = distance;
NullErrorCheck ();
}
/* We check for null errors or warnings and notify the user to fix them */
void NullErrorCheck () {
if (!viewTarget) {
Debug.LogError("Please make sure to assign a view target!");
Debug.Break ();
}
if (collisionLayers == 0) {
Debug.LogWarning("Make sure to set the collision layers to the layers the camera should collide with!");
}
}
/* This is where we do all our camera updates. This is where the camera
* gets all of its functionality. From setting the position and rotation,
* to adjusting the camera to avoid geometry clipping */
void LateUpdate () {
if (!viewTarget)
return;
/* We check for right mouse button functionality, set the rotation
* angles, and lock the mouse cursor */
if (!useRMBToAim) {
/* {
if(Input.GetMouseButton(1)) {
/* {
Screen.lockCursor = false;
}
}
/* We set the distance by moving the mouse wheel and use a custom
* growth function as the time value for linear interpolation */
distance = Mathf.Clamp (distance - Input.GetAxis ("Mouse ScrollWheel") * 10, minDistance, maxDistance);
smoothDistance = Mathf.Lerp (smoothDistance, distance, TimeSignature(distanceSpeed));
/*We give the rotation some smoothing for a nicer effect */
smoothRotation = Quaternion.Slerp (smoothRotation, newRotation, TimeSignature((1 / rotationDampening) * 100.0f));
newPosition = viewTarget.position;
newPosition += smoothRotation * new Vector3(0.0f, height, -smoothDistance);
/* Calls the function to adjust the camera position to avoid clipping */
CheckSphere ();
smoothRotation.eulerAngles = new Vector3 (smoothRotation.eulerAngles.x, smoothRotation.eulerAngles.y, 0.0f);
cameraTransform.position = newPosition;
cameraTransform.rotation = smoothRotation;
}
/* This is where the camera checks for a collsion hit within a specified radius,
* and then moves the camera above the location it hit with an offset value */
void CheckSphere () {
/* Add height to our spherecast origin */
Vector3 tmpVect = viewTarget.position;
tmpVect.y += height;
RaycastHit hit;
/* Get the direction from the camera position to the origin */
Vector3 dir = (newPosition - tmpVect).normalized;
/* Check a radius for collision hits and then set the new position for
* the camera */
if(Physics.SphereCast(tmpVect, 0.3f, dir, out hit, distance, collisionLayers)) {
newPosition = hit.point + (hit.normal * collisionOffset);
}
}
/* Keeps the angles values within their specificed minimum and maximum
* inputs while at the same time putting the values back to 0 if they
* go outside of the 360 degree range */
private float ClampAngle (float angle, float min, float max) {
if(angle < -360)
angle += 360;
if(angle > 360)
angle -= 360;
return Mathf.Clamp (angle, min, max);
}
/* This is our custom logistic growth time signature with speed as input */
private float TimeSignature(float speed) {
return 1.0f / (1.0f + 80.0f * Mathf.Exp(-speed * 0.02f));
}
}
Answer by emir3100
·
Aug 20, 2016 at 11:31 PM
@shadowpuppet I changed the settings in the input settings but then when I turn the camera at the other side then I get the same problem so I need somehow make that the character turns when the camera turns, I will look into the script and see if it solves the problem. thanks
sounds more like a TPS camera. I think the purpose of an orbit cam is to circle around the player. a TPS camera moves the player with the camera
You're right
@ emir3100 sounds more like a TPS camera. I think the purpose of an orbit cam is to circle around the player. a TPS camera moves the player.
mouse orbit
0
Answers
Can somebody please help me with my code - what do i need to add to make my camera turn SMOOTHLY towards the enemy?
2
Answers
Scene camera aspect ratio and game camera aspect ratio rotated
0
Answers
Rotate camera when colliding
0
Answers
How to rotate camera diagonally over players shoulder while still facing towards players direction
0
Answers
EnterpriseSocial Q&A | https://answers.unity.com/questions/1229299/how-to-make-third-person-camera-were-the-player-wa.html | CC-MAIN-2021-39 | refinedweb | 1,127 | 50.73 |
Facebook Tweaks with Swift Tutorial
Update 04/16/2015: Updated for Xcode 6.3 / Swift 1.2.
Note from Ray: This is a brand new Swift tutorial released as part of the iOS 8 Feast. Enjoy!
Have you ever been in the final stages of developing an awesome new app and spent 80 percent of your time on the last 20 percent of the project? Yeah, we’ve all been there!
It’s not you, it’s the Pareto principle, and if you work on projects you probably know all about the concept, even if you don’t know it by name. Here’s an example:
Assume you built an app that helps its user become more polite, because polite is the new cool. In the app, the user has a “Nice Jar.” Whenever they are nice to someone else, they open the app and adds a coin to the jar. When it’s full, they can reward themselves with an ice cream.
The app uses UIDynamics to add physics behavior to the coins, and the user interface shows a jar, a label and some buttons.
It’s nearly finished and you’re quite pleased with the result, so you put it onto your iPhone and show it to a friend. They look at it and say, “It’s nice, but why are the coins so small?”
Then you show it your colleagues the next time you’re at work and they say, “I can barely read the labels.” and “The coins just drop like rocks, shouldn’t they bounce?” And so on.
So you go to your computer, adjust some constants and put it on your phone again, but instead of rave reviews you get yet more suggestions and critiques. Obviously, your friends and colleagues are not pleased with the result and it seems like you’ll never break away from Xcode.
During the development of Paper, Facebook was in the same situation. To handle these small, subjective, late-stage adjustments, they came up with a rather clever solution that lets a tester change values or parameters for themselves directly from within the app, without touching code or any recompiling.
It’s called Facebook Tweaks, or FBTweak for short.
The best part is that Facebook published the source code of FBTweak under a BSD license, which means you can use it in your iOS app development. How polite — Facebook should add a coin to its Nice Jar.
In this tutorial, you’ll add FBTweak to the “Nice Jar” App.
Getting Started
To start this tutorial, download the starter project here. Open the project, and then build and run. You should see this:
If you touch the jar, a new coin appears and falls to the bottom of the jar, but you can only touch it if you’ve been nice to somebody today. Oh, you tipped your Barista this morning? That qualifies as polite. :]
Switch to Xcode and take a minute to look around the project, especially the interesting code in ViewController.swift. You should notice the following parts:
- To animate the coins:
UIDynamicAnimator,
UIDynamicItemBehavior,
UIGravityBehaviorand
UICollisionBehavior(in
viewDidLoad)
- To set the gravity direction according to the device orientation:
CMMotionManager.
- For touch handling:
touchesEnded(_:withEvent:)
Getting Tweaks
Open this link and click the Download ZIP button.
OS X will automatically unzip the file into a folder named Tweaks-master. Find the FBTweak folder and drag into the NiceJar project. Make sure you check Copy items if needed.
FBTweak is an Objective-C library, so if you want to use it in a Swift project, you have to add a bridging header.
Back in Xcode, go to File\New\File…, choose the iOS\Source\Header File template and click Next. Name the file NiceJar-Bridging-Header.h, select the folder NiceJar and click Create.
Replace the contents of NiceJar-Bridging-Header.h with the following:
#ifndef NiceJar_NiceJar_Bridging_Header_h #define NiceJar_NiceJar_Bridging_Header_h #import "FBTweak.h" #import "FBTweakStore.h" #import "FBTweakCategory.h" #import "FBTweakCollection.h" #import "FBTweakViewController.h" #import "FBTweakShakeWindow.h" #endif
Select the Project Navigator on the left, and then select the project.
Select Build Settings, choose the filters All and Levels and then enter bridgin as the search term.
Add NiceJar/NiceJar-Bridging-Header.h as an Objective-C Bridging Header. This file path is relative to your project, so if you saved NiceJar-Bridging-Header.h in another folder, make sure to use the correct path.
And just like that, you’ve enabled Swift to access these Objective-C classes. You can find out more about mixing Swift and Objective-C in the same project here.
FBTweak comes with a view controller that shows the available tweaks, and allows the user make changes to them.
The easiest way to present this view controller is to exchange the
UIWindow in AppDelegate.swift with an instance of
FBTweakShakeWindow. Open AppDelegate.swift and change the line:
var window: UIWindow?
to
lazy var window: UIWindow? = { let window = FBTweakShakeWindow(frame: UIScreen.mainScreen().bounds) return window }()
Within the closure, an instance of
FBTweakShakeWindow is instantiated and returned. The return value is then set to the window property of the project. This means instead of a
UIWindow, the application window is now an instance of
FBTweakShakeWindow.
lazymodifier tells the app delegate not to initialize the window until it’s actually accessed. This handy trick speeds up load times by preventing the creation of objects until they’re needed. You can read more about property modifiers here.
FBTweakShakeWindow is a subclass of
UIWindow that adds methods to present the tweaks view controller with a shake gesture. Take a look at the source code to see how this happens.
Build and run. With the app running and the simulator in the foreground, select Hardware\Shake Gesture (^⌘Z). The tweaks view controller will be presented modally, but doesn’t yet list any tweaks.
The First Tweak
On the FBTweak Github page, you can read an explanation of the simplest way to add tweaks to your project, and that is to use one of the provided macros like
FBTweakValue(...),
FBTweakBind(...) and
FBTweakAction(...).
Hold up right there! Swift doesn’t support macros, so you’ll have to add tweaks with a slightly more elaborate syntax.
Open ViewController.swift and find
touchesEnded(_:withEvent:). Exchange the line:
let coinDiameter = CGFloat(50)
With this:
let identifier = "de.dasdev.nicejar.coinsize" //1 let store = FBTweakStore.sharedInstance() //2 var category = store.tweakCategoryWithName("Jar View") if category == nil { category = FBTweakCategory(name: "Jar View") store.addTweakCategory(category) } //3 var collection = category.tweakCollectionWithName("Coins") if collection == nil { collection = FBTweakCollection(name: "Coins") category.addTweakCollection(collection) } //4 var tweak = collection.tweakWithIdentifier(identifier) if tweak == nil { tweak = FBTweak(identifier: identifier) tweak.name = "Size" tweak.defaultValue = CGFloat(50) collection.addTweak(tweak) } //5 let coinDiameter: CGFloat = CGFloat((tweak.currentValue ?? tweak.defaultValue).floatValue)
- You get the shared instance of the tweak store that holds a reference to all tweak categories that have been defined.
- Then you get the category with the name Jar View, and if there isn’t a category with that name, it’s created and added to the store.
- You get a collection with the name Coins, and if there isn’t a collection with that name, it’s created and added to the category.
- You get the tweak with the identifier
de.dasdev.nicejar.coinsize, and if it can’t be found it creates a tweak with that identifier, adds it to the collection and gives it the default value of
CGFloat(50).
- The
coinDiameteris then set to the
currentValueof the tweak if it’s non-nil, otherwise it’s set to its
defaultValue.
Build and run. Reward your recent politeness by tapping the jar. Then go to Hardware\Shake Gesture. You should now see the entry Jar View in the list of tweaks.
Tap it and you’ll push the tweak category view onto the navigation stack.
Change the value of the coin size to 70 and tap Done. Tap the jar again to see bigger, more satisfying coins fall into the jar.
Congratulations! You added your very first tweak to the project. You could now hand the phone back to your friend and let them determine the perfect coin size, without ever touching Xcode.
How Does it Work?
FBTweak has properties to hold the default value and the current value. When the user changes the tweak value in the tweaks view controller, it modifies the current value and writes it to
NSUserDefaults with the identifier as the key.
The next time you run the tweak, the current value is set from
NSUserDefaults — this means you should always use different identifiers for the tweaks.
The rest of the classes present the tweaks view controller, store the tweaks categories and collections, and create tweaks using fancy macros in Objective-C. If you want to see some complex Objective-C macros, feel free to take a look in FBTweakInlineInternal.h.
A Helper in the Dark
If you thought the coin size tweak was cool, then prepare to be amazed – you’re going to add a lot more tweaks. Duplicating the code from your first tweak over and over again isn’t a wise thing to do, and as you already know Swift has no love for macros.
However, you can create a helper class to ease the workload.
Go to File\New\File… and chose iOS\Source\Swift File. Enter the name Tweaks.swift and click Create.
Replace the contents of the new file with the following:
import Foundation class Tweaks { //1 class func collectionWithName(collectionName: String, categoryName: String) -> FBTweakCollection { let store = FBTweakStore.sharedInstance() var category = store.tweakCategoryWithName(categoryName) if category == nil { category = FBTweakCategory(name: categoryName) store.addTweakCategory(category) } var collection = category.tweakCollectionWithName(collectionName) if collection == nil { collection = FBTweakCollection(name: collectionName) category.addTweakCollection(collection) } return collection } //2 class func tweakValueForCategory<T:AnyObject>(categoryName: String, collectionName: String, name: String, defaultValue: T) -> T { let identifier = categoryName.lowercaseString + "." + collectionName.lowercaseString + "." + name let collection = collectionWithName(collectionName, categoryName: categoryName) var tweak = collection.tweakWithIdentifier(identifier) if tweak == nil { tweak = FBTweak(identifier: identifier) tweak.name = name tweak.defaultValue = defaultValue collection.addTweak(tweak) } return (tweak.currentValue ?? tweak.defaultValue) as! T } }
This code essentially does the same thing as the code of your first tweak.
- This method returns a collection with the name
collectionNamein a category with the name
categoryName. If there isn’t a collection and/or a category with the specified names, they’re created and added to the store. This method is a helper method for
tweakValueForCategory(_:collectionName:name:defaultValue:).
- This adds a tweak to a collection with the specified name.
What does the
mean? This is a generic. If you’re unfamiliar with generics, see the explanation here.
In short it means that the parameter with the type
T can be any type as long as it conforms to the
AnyObject protocol. The default value has to conform to
AnyObject because in FBTweak.h it’s defined to be of type
FBTweakValue, and that is an alias for
id.
Now you can clean up the code of your first tweak. Go to
touchesEnded(_:withEvent:) in ViewController.swift and replace these lines:
let identifier = "de.dasdev.nicejar.coinsize" let store = FBTweakStore.sharedInstance() var category = store.tweakCategoryWithName("Jar View") if category == nil { category = FBTweakCategory(name: "Jar View") store.addTweakCategory(category) } var collection = category.tweakCollectionWithName("Coins") if collection == nil { collection = FBTweakCollection(name: "Coins") category.addTweakCollection(collection) } var tweak = collection.tweakWithIdentifier(identifier) if tweak == nil { tweak = FBTweak(identifier: identifier) tweak.name = "Size" tweak.defaultValue = CGFloat(50) collection.addTweak(tweak) } let coinDiameter = CGFloat((tweak.currentValue ?? tweak.defaultValue).floatValue)
With this cute one-liner:
let coinDiameter = CGFloat(Tweaks.tweakValueForCategory("Jar View", collectionName: "Coins", name: "Size", defaultValue: 50).floatValue)
Much better! Now the code to add a tweak won’t detract from the app’s main code.
Build and run. Make sure the tweak still works by following the same steps as before.
Note: You have to add a coin to the jar before you change the coin size, because the tweak is added to the tweak store in
touchesEnded(_:withEvent:).
More Tweaks!
Now that you have a convenient way to add tweaks, go crazy and add a bunch more! Go to
viewWillAppear(_:) in ViewController.swift and add the following lines right below the call to
super:
//1 let elasticity = CGFloat(Tweaks.tweakValueForCategory("Jar View", collectionName: "Coins", name: "Elasticity", defaultValue: 0.35).floatValue) propertyBehavior.elasticity = elasticity //2 let labelConstaintConstant = CGFloat(Tweaks.tweakValueForCategory("Jar View", collectionName: "Label", name: "Y Offset", defaultValue: 102).floatValue) verticalLabelConstraint.constant = labelConstaintConstant //3 let showButton = Tweaks.tweakValueForCategory("Summary", collectionName: "General", name: "Show", defaultValue: false).boolValue summaryButton.hidden = !showButton
- The first tweak lets you change the elasticity of the coins. Allowed values are between 0.0 and 1.0.
- The second tweak lets you change the vertical position of the coin label.
- The last tweak lets you show and hide the summary button. This is especially useful if you have view controllers you’re currently working on and they should only be visible to a select group of testers. For any other testers, you can simply disable this button and hide the unfinished parts of your app.
Build and run. Add some coins to the jar and go to Hardware\Shake Gesture. There are now two tweak categories: Jar View and Summary. Tap Jar View and you should see the following:
Change the Elasticity of the coins to 0.9 and the Y Offset of the label to 200. Tap Done and add coins to the jar.
Wow! That’s some serious bounce.
Forbidden Values
With the app running, open the tweaks window again (Hardware\Shake Gesture), then open Jar View and increase the elasticity to a value that makes no sense, for instance something over 2.0.
The fact you can do this is somewhat unfortunate. How can your testers possibly know what the allowed values are if they can enter anything they want? Don’t worry, Facebook has an answer to this; take a look at the following properties declared in FBTweak.h:
/** @abstract The minimum value of the tweak. @discussion Optional. If nil, there is no minimum. Should not be set on tweaks representing actions. */ @property (nonatomic, strong, readwrite) FBTweakValue minimumValue; /** @abstract The maximum value of the tweak. @discussion Optional. If nil, there is no maximum. Should not be set on tweaks representing actions. */ @property (nonatomic, strong, readwrite) FBTweakValue maximumValue;
Now you just need to add support for those to your helper class.
Open Tweaks.swift and add the two parameters
minimumValue: T? = nil and
maximumValue: T? = nil to
tweakValueForCategory(...). The declaration should now resemble the following:
class func tweakValueForCategory<T:AnyObject>(categoryName: String, collectionName: String, name: String, defaultValue: T, minimumValue: T? = nil, maximumValue: T? = nil) -> T
Take a look at the new parameters — they’re special. First, they’re optional and that means they can also be
nil. In addition, they have default values of nil. A nice side effect of this is that you don’t have to change the call to this method in your tweaks; if the default values are okay for you, you can omit these parameters in the call to this method.
Build and run. Open the tweaks view controller and tap Reset to reset the tweaks to the default values. Now the app should behave as it did before.
Next you have to use those parameters in the creation of the tweak. Open Tweaks.swift again and add the following lines below
tweak.defaultValue = defaultValue in
tweakValueForCategory(...):
if minimumValue != nil && maximumValue != nil { tweak.minimumValue = minimumValue tweak.maximumValue = maximumValue }
Go to
viewWillAppear(_:) in ViewController.swift and change the elasticity tweak to the following:
let elasticity = CGFloat(Tweaks.tweakValueForCategory("Jar View", collectionName: "Coins", name: "Elasticity", defaultValue: 0.35, minimumValue: 0.0, maximumValue: 1.0).floatValue)
Build and run. Open the tweak view controller, go to Jar View category and try to increase the elasticity beyond 1.0. It doesn’t work! Nice!
More Complicated Tweaks
One of your testers said that the label text should be bigger. You could use the method you already have to add a tweak for the font size, but there’s a cooler way to go about it.
You’ll add a tweak that executes a closure when the tester changes a tweak value — a tweak with action, if you will.
Instances of FBTweak can have observers. When the value of a tweak changes, the method
tweakDidChange(tweak: FBTweak) is called on all observers. You’ll use this to implement the tweak with action.
Open Tweaks.swift and change the following line:
class Tweaks {
To this:
class Tweaks: NSObject, FBTweakObserver {
With this change, you tell the compiler that
Tweaks will implement
tweakDidChange.
Now, add the following to the beginning of the
Tweaks class in Tweaks.swift:
typealias ActionWithValue = ((currentValue: AnyObject) -> ()) var actionsWithValue = [String:ActionWithValue]()
The variable
actionsWithValue is a dictionary that stores functions with a parameter of type
AnyObject and a return value of
().
Now, add the following method to the class:
func tweakActionForCategory<T where T: AnyObject>(categoryName: String, collectionName: String, name: String, defaultValue: T, minimumValue: T? = nil, maximumValue: T? = nil, action: (currentValue: AnyObject) -> ()) { let identifier = categoryName.lowercaseString + "." + collectionName.lowercaseString + "." + name let collection = Tweaks.collectionWithName(collectionName, categoryName: categoryName) var tweak = collection.tweakWithIdentifier(identifier) if tweak == nil { tweak = FBTweak(identifier: identifier) tweak.name = name tweak.defaultValue = defaultValue if minimumValue != nil && maximumValue != nil { tweak.minimumValue = minimumValue tweak.maximumValue = maximumValue } tweak.addObserver(self) collection.addTweak(tweak) } actionsWithValue[identifier] = action action(currentValue: tweak.currentValue ?? tweak.defaultValue) }
The method
tweakActionForCategory(...) looks very similar to
tweakValueForCategory(...). However, the difference is that it has an additional parameter,
action: (currentValue: AnyObject) -> (), that is added to the dictionary
actionsWithValue within the method body.
The instance of the Tweaks class (
self) is added as an observer to the tweak. This means when a tweak that was added via this method is changed, the method
tweakDidChange(...) on the instance is called.
Finally, add the following:
func tweakDidChange(tweak: FBTweak!) { let action = actionsWithValue[tweak.identifier] action?(currentValue: tweak.currentValue ?? tweak.defaultValue) }
In
tweakDidChange(...), you use the tweak identifier to retrieve the action for this tweak from the
actionsWithValue dictionary. This action is then executed with the current value, provided it’s not
nil, otherwise it’s set to the default value.
To get this tweak with action to work, you need an instance of the
Tweaks class in ViewController.swift.
Go to the beginning of the class
ViewController and add:
let tweaks = Tweaks()
Now add the following code in
viewDidLoad:
tweaks.tweakActionForCategory("Jar View", collectionName: "Coin Label", name: "Text Size", defaultValue: 30, minimumValue: 20, maximumValue: 60, action: { (currentValue) -> () in self.coinLabel.font = self.coinLabel.font.fontWithSize(CGFloat(currentValue.floatValue)) })
This code adds a tweak with ab action to the tweak store. When you change the value of the tweak, the action executes. In this case, you’re changing the font size of the
coinLabel to the current value of the tweak.
Build and run. Pull up the tweaks view controller. Change the font size to 60 and tap Done. You should see something like this:
Even More Tweaks
Add a few more tweaks, just to practice your new skills. Add the following code to
viewDidLoad() in ViewController.swift:
tweaks.tweakActionForCategory("Jar View", collectionName: "Coin Label", name: "Orange Text", defaultValue: false, action: { (currentValue) -> () in if currentValue.boolValue == true { self.coinLabel.textColor = UIColor(red: 0.98, green: 0.58, blue: 0.13, alpha: 1.0) } else { self.coinLabel.textColor = UIColor.blackColor() } })
This tweak lets you toggle the text color of the
coinLabel.
How would the app look if you were to change the gravity? Well, add a tweak for it and find out. Back within
viewDidLoad(), find the closure of the
startAccelerometerUpdatesToQueue method of
motionManager, and replace these two lines:
let y = CGFloat(data.acceleration.y) let x = CGFloat(data.acceleration.x)
With the following:
let magnitude = CGFloat(Tweaks.tweakValueForCategory("Jar View", collectionName: "Dynamics", name: "Gravity Magnitude", defaultValue: 1.0).floatValue) let y = magnitude * CGFloat(data.acceleration.y) let x = magnitude * CGFloat(data.acceleration.x)
This tweak gives you and your testers control of a fundamental element — gravity! It’s like being a super hero! Unfortunately, this only works when you test on a device as the simulator doesn’t have an accelerometer and the iPhone doesn’t know how to defy gravity — yet.
Play around with the tweaks and find the values that make the app behave and look its best.
Production Code
I recommend you remove the tweaks code as soon as you settle on the right values. But just in case you forget or something weird happens, like you have one beer too many and decide to publish your app after letting your friends tweak it, you should add code to disable tweaks in your release builds.
Select your project in the Project Navigator and then open Build Settings. Search for other swift. Add the value -DDEBUG in Other Swift Flags to the Debug configuration, as shown here:
With this addition, you can use preprocessor directives to disable Tweaks in release builds.
Open AppDelegate.swift and replace the existing definition of
window with the following:
#if DEBUG lazy var window: UIWindow? = { let window = FBTweakShakeWindow(frame: UIScreen.mainScreen().bounds) return window }() #else var window: UIWindow? #endif
Now go to Tweaks.swift and add
#if DEBUG at the beginning of both
tweakActionForCategory(...) and
tweakValueForCategory(...). At the end of the method
tweakActionForCategory(...), add the lines:
#else action(currentValue: defaultValue) #endif
And add this at the end of
tweakValueForCategory(...):
#else return defaultValue #endif
Now, when you make a release build, the tweaks fall back to default values. So even if you do publish your app when you shouldn’t the worst you’ll do is publish it exactly how you developed it to be.
Where to go From Here?
If you’d like to see the app in its perfect form, download the final project for this tutorial here.
At this point, you have a functional app that looks and behaves perfectly, thanks to Facebook Tweaks.
To improve the project, you could add a second view for the summary. Also, the alert view doesn’t quite fit with the overall look and feel of the app, so add tweaks to find the right text size, and position of the summary label.
The coin’s physics aren’t quite perfect either, hence why you occasionally see strange gaps between the coins. SpriteKit is often better suited to simulating physics in this way. Here is a starter project you can use to build NiceJar using SpriteKit instead of UIKit.
Thank you for taking the time to work through this tutorial! I hope you learned some cool new tricks which should make it faster and easier to move through that last 20 percent of your project. Feel free to weigh in, ask questions, or share your brilliant ideas for how to use Facebook Tweaks by leaving a comment below. | https://www.raywenderlich.com/80970/facebook-tweaks-swift-tutorial | CC-MAIN-2017-43 | refinedweb | 3,798 | 51.24 |
So what is AMD?
As web applications continue to grow more advanced and more heavily rely on JavaScript, there has been a growing movement towards using modules to organize code and dependencies. Modules give us a way to make clearly distinguished components and interfaces that can easily be loaded and connected to dependencies. The AMD module system gives us the perfect path for using JavaScript modules to build web applications, with a simple format, asynchronous loading, and broad adoption.
The Asynchronous Module Definition (AMD) format is an API for defining reusable modules that can be used across different frameworks. AMD was developed to provide a way to define modules such that they could be loaded asynchronously using the native browser script element-based mechanism. The AMD API grew out of discussions in 2009 in the Dojo community which then moved to discussions with CommonJS on how to better adapt the CommonJS module format (used by NodeJS) for the browser. It has since grown into its own standard with its own community. AMD has taken off in popularity, with numerous module loaders and widespread usage. At SitePen we have worked extensively with AMD in Dojo, adding support and now actively building applications with this format.
Glossary
- Module – An encapsulated JavaScript file that follows a module format, indicating dependencies and providing exports.
- Module ID – This is the unique string that identifies a module. There are relative module ids that resolve to absolute module ids relative to the current module’s id.
- Module Path – This is the URL that is used to retrieve a module. A module id is mapped to the module path based on the loader’s configuration rules (by default, modules are assumed to be relative to the base path, typically the parent of the module loader package).
- Module Loader – This is the JavaScript code that resolves and loads modules and their associated dependencies, interacts with plugins, and handles configuration.
- Package – A collection of modules grouped together. For example, dojo, dijit, and dgrid are packages.
- Builder – This is a tool that will generate a concatenated JavaScript file composed of a module (or modules) and its dependencies, thus making it possible to take an application composed of numerous modules and create a number of built layers that can be loaded in a minimal number of HTTP requests.
- Layer – A file that contains modules that have been optimized into a single file by a Builder.
- Dependency – This is a module that must be loaded for another module to function properly.
- AMD – Asynchronous Module Definition, a module format optimized for browser usage.
- Factory – The function provided to the module loader via define that is to be executed once all the dependencies are ready
Why AMD Modules?
The basic premise of a module system is to:
- allow the creation of encapsulated segments of code, called modules
- define dependencies on other modules
- define exported functionality that can in turn be used by other modules
- discreetly access the functionality provided by these modules
AMD satisfies this need, and uses a callback function with dependencies as arguments so that dependencies can be asynchronously loaded before the module code is executed. AMD also provides a plugin system for loading non-AMD resources.
While alternate methods can be used to load JavaScript (XHR + eval), using script elements to load JavaScript has an edge in performance, eases debugging (particularly on older browsers), and has cross-domain support. Thus AMD aims to provide an optimal development experience in the browser.
The AMD format provides several key benefits. First, it provides a compact declaration of dependencies. Dependencies are defined in a simple array of strings, making it easy to list numerous dependencies with little overhead.
AMD helps eliminate the need for globals. Each module defines dependencies and exports by referencing them with local variables or return objects. Consequently, modules can define functionality and interact with other modules without having to introduce any global variables. AMD is also “anonymous”, meaning that the module does not have to hard-code any references to its own path, the module name relies solely on its file name and directory path, greatly easing any refactoring efforts.
By coupling dependencies with local variables, AMD encourages high-performance coding practices. Without an AMD module loader, JavaScript code has traditionally relied on the nested objects to “namespace” functionality of a given script or module. With this approach, functions are typically accessed through a set of properties, resulting in a global variable lookup and numerous property lookups, adding extra overhead and slowing down the application. With dependencies matched to local variables, functions are typically accessed from a single local variable, which is extremely fast and can be highly optimized by JavaScript engines.
Using AMD
The foundational AMD API is the
define() method which allows us to define a module and its dependencies. The API for writing modules consists of:
define(dependencyIds, function(dependency1, dependency2,...){ // module code });
The
dependencyIds argument is an array of strings that indicates the dependencies to be loaded. These dependencies will be loaded and executed. Once they have all been executed, their export will be provided as the arguments to the callback function (the second argument to
define()).
To demonstrate basic usage of AMD, here we could define a module that utilizes the
dojo/query (CSS selector query) and
dojo/on (event handling) modules:
define(["dojo/query", "dojo/on" ], function (query, on) { return { flashHeaderOnClick: function(button){ on(button, "click", function () { query(".header").style("color", "red"); }); } }; });
Once dojo/query and dojo/on are loaded (which doesn’t happen until their dependencies are loaded, and so on), the callback function is called, with the
query argument given the export of
dojo/query (a function that does CSS selector querying), and the
on argument given the export of
dojo/on (a function that adds an event listener). The callback function (also known as the module factory function), is guaranteed to be called only once.
Each of the module ids listed in the set of dependencies is an abstract module path. It is abstract because it is translated to a real URL by the module loader. As you can see the module path does not need to include the “.js” suffix, this is automatically appended. When the module id starts with a name, this name is considered to be an absolute module id. In contrast, we can specify relative ids by starting with a “./” or a “../” to indicate a sibling path or parent path, respectively. These are resolved to their absolute module ids by standard path resolution rules. You can then define a module path rule to determine how these module paths are converted to URLs. By default, the module root path is defined relative to the parent of the module loader package. For example, if we loaded Dojo like (note that we set async to true to enable true async loading of AMD):
Then the root path to modules would be assumed to be “/path/to/”. If we specified a dependency of “my/module”, this would be resolve to “/path/to/my/module.js”.
Initial Module Loading
We have described how to create a simple module. However, we need an entry point to trigger the chain of dependencies. We can do this by using the
require() API. The signature of this function is basically the same as
define(), but is used to load dependencies without defining a module (when a module is defined, it is not executed until it is required by something else). We could load our application code like:
Dojo provides a shortcut for loading an initial module. The initial module can be loaded by specifying the module in the
deps configuration option:
This is an excellent way of loading your application because JavaScript code can be completely eliminated in HTML, and only a single script tag is needed to bootstrap the rest of the application. This also makes it very easy to create aggressive builds that combine your application code and dojo.js code in a single file without having to alter the HTML script tags after the build. RequireJS and other module loaders have similar options for loading the top level modules.
The progression of dependency loading from the
require() call to the modules is illustrated in the diagram above. The
require() call kicks off the loading of the first module, and dependencies are loaded as needed. Modules that are not needed ("module-d" in the diagram) are never loaded or executed.
The
require() function can also be used to configure the module path look-ups, and other options, but these are generally specific to the module loader, and more information on configuration details are available in each loader's documentation.
Plugins and Dojo Optimizations
AMD also supports plugins for loading alternate resources. This is extremely valuable for loading non-AMD dependencies like HTML snippets and templates, CSS, and internationalized locale-specific resources. Plugins allow us to reference these non-AMD resources in the dependency list. The syntax for this is:
"plugin!resource-name"
A commonly used plugin is the
dojo/text plugin which allows you to directly load a file as text. With this plugin, we list the target file as the resource name. This is frequently used by widgets to load their HTML template. For example, with Dojo we can create our own widget like this:
define(["dojo/_base/declare", "dijit/_WidgetBase", "dijit/_TemplatedMixin", "dojo/text!./templates/foo.html" ], function(declare, _WidgetBase, _TemplatedMixin, template){ return declare([_WidgetBase, _TemplatedMixin], { templateString: template }); });
This example is instructive on multiple levels for creating Dojo widgets. First, this represents the basic boilerplate for creating a widget using the Dijit infrastructure. You might also note how we created a widget class and returned it. The
declare() (class constructor) was used without any namespace or class name. As AMD eliminates the need for namespaces, we no longer need to create global class names with
declare(). This aligns with a general strategy in AMD modules of writing anonymous modules. Again, an anonymous module is one that does not have any hardcoded references to its own path or name within the module itself and we could easily rename this module or move it to a different path without having to alter any code inside the module. Using this approach is generally recommended, however if you will be using this widget with declarative markup, you will still need to include namespace/class names in order to create a namespaced global for Dojo's parser to reference in Dojo 1.7. Improvements coming in Dojo 1.8 allow you to use module ids.
There are several other plugins that are included with Dojo that are useful. The
dojo/i18n plugin is used to load internationalized locale-specific bundles (often used for translated text or regional formatting information). Another important plugin is
dojo/domReady, which is recommended to be used as a replacement for dojo.ready. This plugin makes it very simple to write a module that also waits for the DOM to be ready, in addition to all other dependent modules, without having to include an extra callback level. We use
dojo/domReady as a plugin, but no resource name is needed:
define(["dojo/query", "dojo/domReady!" ], function(query){ // DOM is ready, so we can query away query(".some-class").forEach(function(node){ // do something with these nodes }); });
Another valuable plugin is
dojo/has. This module is used to assist with feature detection, allowing you to choose different code paths based on the presence of certain browser features. While this module is often used as a standard module, providing a
has() function to the module, it can also be used as a plugin. Using it as a plugin allows us to conditionally load dependencies based on a feature presence. The syntax for the
dojo/has plugin is to use a ternary operator with conditions as feature names and module ids as values. For example, we could load a separate touch UI module if the browser supports touch (events) like:
define(["dojo/has!touch?ui/touch:ui/desktop" ], function(ui){ // ui will be ui/touch if touch is enabled, //and ui/desktop otherwise ui.start(); });
The ternary operators can be nested, and empty strings can be used to indicate no module should be loaded.
The benefit of using
dojo/has is more than just a run-time API for feature detection. By using
dojo/has, both in
has() form in your code, as well as a dependency plugin, the build system can detect these feature branches. This means that we can easily create device or browser specific builds that are highly optimized for specific feature sets, simply by defining the expected features with the
staticHasFeatures option in the build, and the code branches will automatically be handled correctly.
Data Modules
For modules that do not have any dependencies, and are simply defined as an object (like just data), one can use a single argument
define() call, where the argument is the object. This is very simple and straightforward:
define({ foo: "bar" });
This is actually similar to JSONP, enabling script-based transmission of JSON data. But, AMD actually has an advantage over JSONP in that it does not require any URL parameters; the target can be a static file without any need for active code on the server to prefix the data with a parameterized callback function. However, this technique must be used with caution as well. Module loaders always cache the modules, so subsequent
require()'s for the same module id will yield the same cached data. This may or may not be an issue for your data retrieval needs.
Builds
AMD is designed to be easily parsed by build tools to create concatenated or combined sets of modules in a single file. Module systems provide a tremendous advantage in this area because build tools can automatically generate a single file based on the dependencies listed in the modules, without requiring manually written and updated lists of script files to be built. Builds dramatically reduce load time by reducing requests, and this is an easy step with AMD because the dependencies are specifically listed in the code.
Without build
With build
Performance
As noted before, using script element injection is faster than alternate methods because it relies more closely on native browser script loading mechanisms. We setup some tests on dojo.js in different modes, and script element loading was about 60-90% faster than using XHR with eval. In Chrome, with numerous small modules, each module was loaded in about 5-6ms, whereas XHR + eval was closer to 9-10ms per module. In Firefox, synchronous XHR was faster than asynchronous, and in IE asynchronous XHR was faster than synchronous, but script element loading is definitely the fastest. It is also surprising that IE9 was the fastest, but this is probably at least partly due to Firefox and Chrome's debugger/inspector adding more overhead.
Module Loaders
The AMD API is open, and there are multiple AMD module loader and builder implementations that exist. Here are some key AMD loaders that are available:
- Dojo - This is a full AMD loader with plugins and a builder, and this is what we typically use since we utilize the rest of the Dojo toolkit.
- RequireJS - This is the original AMD loader and is the quintessential AMD loader. The author, James Burke, was the main author and advocate of AMD. This is full-featured and includes a builder.
- curl.js - This is a fast AMD loader with excellent plugin support (and its own library of plugins) and its own builder.
- lsjs - An AMD module loader specifically designed to cache modules in local storage. The author has also built an independent optimizer.
- NeedJS - A light AMD module loader.
- brequire - Another light AMD module loader.
- inject - This was created and is used by LinkedIn. This is a fast and light loader without plugin support.
- Almond - This is a lightweight version of RequireJS.
Getting AMD Modules
There is an increasing number of packages and modules that are available in AMD format. The Dojo Foundation packages site provides a central place to see a list of some of the packages that are available. The CPM installer can be used to install any of the packages (along with automatically installing the dependencies) that have been registered through the Dojo Foundation packages site.
Alternately, James Burke, the author of RequireJS has created Volo, a package installer that can be used to easily install packages directly from github. Naturally, you can also simply download modules directly from their project site (on github or otherwise) and organize your directory structure yourself.
With AMD, we can easily build applications with any package, not just Dojo modules. It is also generally fairly simple to convert plain scripts to AMD. You simply add a
define() with an empty array, and enclose the script in the callback function. You can also add dependencies, if the script must be executed after other scripts. For example:
my-script.js: // add this to top of the script defined([], function(){ // existing script ... // add this to the end of script });
And we could build application components that pull modules from various sources:
require(["dgrid/Grid", "dojo/query", "my-script"], function(Grid, query){ new Grid(config, query("#grid")[0]); });
One caveat to be aware of when converting scripts to modules is that if the script has top level functions or variables, these would ordinarily result in globals, but inside of a
define() callback they would be local to the callback function, and no globals will be created. You can either alter the code to explicitly create a global (remove the
var or
function prefix (you would probably want to do this if you need the script to continue working with other scripts that rely on the globals that it produces), or alter the module to return the functions or values as exports and arrange the dependent modules to use those exports (rather than the globals, this allows you to pursue the global-free paradigm of AMD).
Directly Loading Non-AMD Scripts
Most module loaders also support direct loading of non-AMD scripts. We can include a plain script in our dependencies, and denote that they are not AMD by suffixing them with ".js" or providing an absolute URL or a URL that starts with a slash. The loaded script will not be able to provide any direct AMD exports, but must provide its functionality through the standard means of creating global variables or functions. For example, we could load Dojo and jQuery:
require(["dojo", "jquery.js"], function(dojo){ // jquery will be loaded as plain script dojo.query(...); // the dojo exports will be available in the "dojo" local variable $(...); // the other script will need to create globals });
Keep It Small
AMD makes it easy to coordinate and combine multiple libraries. However, while it may be convenient to do this, you should exercise some caution. Combining libraries like Dojo and jQuery may function properly, but it adds a lot of extra superfluous bytes to download since Dojo and jQuery have mostly overlapping functionality. In fact, a key part of Dojo's new module strategy is to avoid any downloading of unnecessary code. Along with converting to AMD, the Dojo base functionality has been split into various modules that can be individually used, making it possible to use the minimal subset of Dojo that is needed for a given application. In fact, modern Dojo application and component development (like dgrid) often can lead to an entire application that is smaller than earlier versions of Dojo base by itself.
AMD Objections
There have been a few objections raised to AMD. One objection is that using the original CommonJS format, from which AMD is somewhat derived, is simpler, more concise, and less error prone. The CommonJS format does indeed have less ceremony. However, there are some challenges with this format. We can choose to leave the source files unaltered and directly delivered to the browser. This requires the module loader to wrap the code with a header that injects the necessary CommonJS variables, and thus relies on XHR and eval. The disadvantages of this approach have already been discussed, and include slower performance, difficult with debugging on older browsers, and cross-domain restrictions. Alternately, one can have a real-time build process, or on-request wrapping mechanism on the server, that essentially wraps the CommonJS module with the necessary wrapper, which actually can conform to AMD. These approaches are not necessarily showstoppers in many situations, and can be very legitimate development approaches. But to satisfy the broadest range of users, where users may be working on a very simple web server, or dealing with cross-browser, or older browsers, AMD decreases the chance of any of these issues becoming an obstacle for the widest range of users, a key goal of Dojo.
The dependency listing mechanism in AMD specifically has been criticized as being error prone because there are two separate lists (the list of dependencies and the callback arguments that define the variables assigned the dependencies) that must be maintained and kept in sync. If these lists become out of sync the module references are completely wrong. In practice, we haven't experienced much difficulty with this issue, but there is an alternate way of using AMD that addresses this issue. AMD supports calling
define() with a single callback argument where the factory function contains
require() calls rather than a dependency list. This actually can not only help mitigate dependency list synchronization issues, but also can make it extremely easy to add CommonJS wrappers, since the factory function's body essentially conforms to the CommonJS module format. Here is an example of how to define a module with this approach:
define(function(require){ var query = require("dojo/query"); var on = require("dojo/on"); ... });
When a single argument is provided, require, exports and module are automatically provided to the factory. The AMD module loader will scan the factory function for require calls, and automatically load them prior to running the factory function. Because the require calls are directly inline with the variable assignment, I could easily delete one of the dependency declarations without any further need to synchronize lists.
A quick note about the
require() API: When
require() is called with a single string it is executed synchronously, but when called with an array it is executed asynchronously. The dependent modules in this example are still loaded asynchronously, prior to executing the callback function, at which time dependencies are in memory, and the single string require calls in the code can be executed synchronously without issue or blocking.
Limitations
AMD gives us an important level of interoperability of module loading. However, AMD is just a module definition, it does not make any prescriptions on the API's that the module create. For example, one can't simply ask the module loader for a query engine and expect to return the functionality from interchangeable query modules with a single universal API. There may be benefit to defining such APIs for better module interchange, but that is beyond the scope of AMD. And most module loaders do support mapping module ids to different paths, so it would be very feasible to map a generic module id to different target paths if you had interchangeable modules.
Progressive Loading
The biggest issue that we have seen with AMD is not so much a problem with the API, but in practice there seems to be an overwhelming tendency to declare all dependencies up front (and that is all we have described so far in this post, so we are just as guilty!). However, many modules can operate correctly while deferring the loading of certain dependencies until they are actually needed. Using a deferred loading strategy can be very valuable for providing a progressively loaded page. With a progressive loading page, components can be displayed as each one is downloaded, rather than forcing the download of every byte of JavaScript before the page is rendered and usable. We can code our modules in a way to defer loading of certain modules by using the asynchronous require([]) API in the code. In this example, we only load the necessary code for this function to create children container nodes for immediate visual interaction, but then defer the loading of the widgets that go inside the containers:
// declare modules that we need up front define(["dojo/dom-create", "require" ], function(domCreate, require){ return function(node){ // create container elements for our widget right away, // these could be styled for the right width and height, // and even contain a spinner to indicate the widgets are loading var slider = domCreate("div", {className:"slider"}, node); var progress = domCreate("div", {className:"progress"}, node); // now load the widgets, we load them independently // so each one can be rendered as it downloads require(["dijit/form/HorizontalSlider"], function(Slider){ new Slider({}, slider); }); require(["dijit/Progress"], function(Progress){ new Progress({}, progress); }); } });
This provides an excellent user experience because they interact with components as they become available, rather than having to wait for the entire application to load. Users are also more likely to feel like an application is fast and responsive if they can see the page progressively rendering.
require, exports
In the example above, we use a special dependency "require", which give us a reference to a module-local
require() function, allowing us to use a module reference relative to the current module (if you use the global "require", relative module ids won't be relative to the current module).
Another special dependency is "exports". With exports, rather than returning the exported functionality, the export object is provided in the arguments, and the module can add properties to the exports object. This is particularly useful with modules that have circular references because the module factory function can start running, and add exports, and another function utilize the factory's export before it is finished. A simple example of using "exports" in a circular reference:
main.js: define(["component", "exports" ], function(component, exports){ // we define our exported values on the exports // which may be used before this factory is called exports.config = { title: "test" }; exports.start = function(){ new component.Widget(); }; }); component.js: define(["main", "exports", "dojo/_base/declare"], function(main, exports, declare){ // again, we define our exported values on the exports // which may be used before this factory is called exports.Widget = declare({ showTitle: function(){ alert(main.config.title); } }); });
This example would not function properly if we simply relied on the return value, because one factory function in the circular loop needs to execute first, and wouldn't be able to access the return value from the other module.
As shown in one of the earlier examples, if the dependency list is omitted, the dependencies are assumed to be "require" and "exports", and the
require() calls will be scanned, so this example could be written:
define(function(require, exports){ var query = require("dojo/query"); exports.myFunction = function(){ .... }; });
Looking Forward
The EcmaScript committee has been working on adding native module support in JavaScript. The proposed addition is based on new syntax in the JavaScript language for defining and referencing modules. The new syntax includes a
module keyword for defining modules in scripts, an
export keyword defining exports, and an
import keyword for defining the module properties to be imported. These operators have fairly straightforward mappings to AMD, making it likely that conversion will be relatively simple. Here is an example of how this might look based upon the current proposed examples in EcmaScript Harmony, if we were to adapt the first example in this post to Harmony's module system.
import {query} from "dojo/query.js"; import {on} from "dojo/on.js"; export function flashHeaderOnClick(button){ on(button, "click", function(){ query(".header").style("color", "red"); }); }
The proposed new module system includes support for custom module loaders that can interact with the new module system, which may also still be used to retain certain existing AMD features like plugins for non-JavaScript resources.
Conclusion
AMD provides a powerful module system for browser-based web applications, leveraging native browser loading for fast asynchronous loading, supporting plugins for flexible usage of heterogeneous resources, and utilizing a simple, straightforward format. With great AMD projects like Dojo, RequireJS, and others, the world of AMD is an exciting and growing opportunity for fast, interoperable JavaScript modules.
Pingback: What Are Modules and Why Use Them? | dBlogIt by Dustin Boston()
Pingback: okeowoaderemi.com | Introduction to AMD()
Pingback: Linkage: June 22nd, 2012 to July 2nd, 2012 | ben lowery()
Pingback: AMD for the Business-Side | Blog | SitePen()
Pingback: How To Migrate a Module to AMD | Blog | SitePen()
Pingback: Learning Dojo 1.8 | Blog | SitePen()
Pingback: Introducing dojo-amd-converter | Blog | SitePen()
Pingback: 062 JSJ Dojo with Dylan Schiemann()
Pingback: Translate modular pattern example to CommonJS or AMD | DL-UAT() | https://www.sitepen.com/blog/2012/06/25/amd-the-definitive-source/ | CC-MAIN-2016-07 | refinedweb | 4,796 | 50.16 |
jGuru Forums
Posted By:
Erik_Runia
Posted On:
Thursday, April 11, 2002 10:32 AM
Hello all,
I have written a baseclass which extends httpservlet. In this class I handle doget, dopost and so on. I have a few custom methods in this class which are basically stubs, empty methods meant to be overridden from another class.
My question is, can I make a servlet, which extends from this baseclass and have it work as a servlet?
I tried to do so but I'm getting an error that says the class "myBaseClass" cannot be found in type declaration or import. But it is clearly available in the same package and all looks well with the code, but this error points to the public class myServlet extends myBaseClass line.
Any help?
Re: can I extend from a baseclass servlet?
Posted By:
Laurent_Mihalkovic
Posted On:
Thursday, April 11, 2002 11:25 AM | http://www.jguru.com/forums/view.jsp?EID=834817 | CC-MAIN-2014-52 | refinedweb | 150 | 69.72 |
SCJP Objective 6.5 Question
karl czukoski
Greenhorn
Joined: Dec 30, 2010
Posts: 22
posted
Apr 28, 2011 12:39:57
0
I found this question to be a bit challenging and was wondering if someone could walk through the question in detail to help me better understand.
import java.util.* public class VLA2 implements Comparator<VLA2> { int dishSize; public static void main(String[] args) { VLA2[] va = {new VLA2(40), new VLA2(200), new VLA2(60) }; Arrays.sort(va. va[0]); int index = Arrays.binarySearch(va, new VLA2(40), va[0]); System.out. print(index + " "); index = Arrays.binarySearch(va, new VLA(80), va[0]); System.out.print(index); } public int compare(VLA2 a, VLA2 b) { return b.dishSize - a.dishSize; } VLA2(int d) { dishSize = d } }
result is 2 -2
dennis deems
Ranch Hand
Joined: Mar 12, 2011
Posts: 808
posted
Apr 28, 2011 13:37:50
1
import java.util.Arrays; import java.util.Comparator; public class VLA2 implements Comparator<VLA2> { // This class implements Comparator for itself. This // means that anywhere we must pass a parameter of // Comparator<VLA2>, we can pass an instance of the class. int dishSize; public static void main(String[] args) { // create three VLA2 objects with unique dish sizes. // store them in an array named va. VLA2[] va = { new VLA2(40), new VLA2(200), new VLA2(60) }; // sort the array using the instance located at array index zero for the comparator // note we could use any of the other instances and get the same result Arrays.sort(va, va[0]); // note the compare method sorts elements in reverse order // so now the array looks like: // {vla2(200), vla2(60), vla2(40)} // now we are performing a binary search in the array va // we ask for a VLA2 object whose dish size is 40 // the new VLA2 object is a dummy we pass to the search // that holds the attributes we wish to find // finally, we pass the comparator we wish the binary search to use int index = Arrays.binarySearch(va, new VLA2(40), va[0]); // after the sort, the desired element is at index 2 System.out.print(index + " "); // Now we search again for an element that is not in the array. // This time, the returned value will indicate the insertion point // -- the index where the element WOULD be if it HAD been found // according to this formula: (-(insertion point) - 1) // If we get a positive value from binarySearch, then we know // the element was found in the array. Negative value tells us // it was NOT found. index = Arrays.binarySearch(va, new VLA2(80), va[0]); // there is no VLA2 object in the array with dish size of 80 // the element we wanted to find would be inserted at index 1 // so the value of "index" is -2 System.out.print(index); } public int compare(VLA2 a, VLA2 b) { // Sorting in reverse order! return b.dishSize - a.dishSize; } VLA2(int d) { dishSize = d; } }
Shaikh Ali
Ranch Hand
Joined: Jan 26, 2011
Posts: 51
posted
Apr 28, 2011 13:42:48
0
Let me know if the comments help.
import java.util.* public class VLA2 implements Comparator<VLA2> { // Comparator declaration (used by Array.sort() in the code) int dishSize; public static void main(String[] args) { VLA2[] va = {new VLA2(40), new VLA2(200), new VLA2(60) }; // array of 3 VLA2 objects Arrays.sort(va. va[0]); // va array sorted in descending order (200, 60, 40) int index = Arrays.binarySearch(va, new VLA2(40), va[0]); // va searched for 40 with comparator, the answer is 2 using 0-based index System.out. print(index + " "); index = Arrays.binarySearch(va, new VLA(80), va[0]); // va searched for 80 with comparator, if there is no matching object, the answer is the negative of insertion position subtract 1, // so the 80 should be at position 1 of the array if it existed, so the answer is -1-1=-2 System.out.print(index); } public int compare(VLA2 a, VLA2 b) { // Comparator implemented return b.dishSize - a.dishSize; // sorts in descending order } VLA2(int d) { dishSize = d }
karl czukoski
Greenhorn
Joined: Dec 30, 2010
Posts: 22
posted
Apr 29, 2011 11:46:19
0
I am still having a difficult time understanding the descending order sort sequencing using the overridden compare, could i ask you to explain this in a little more detail?
dennis deems
Ranch Hand
Joined: Mar 12, 2011
Posts: 808
posted
Apr 29, 2011 12:05:52
0
Remember the contract of the Comparator interface:
compare returns a zero if the first argument is equal to the second.
compare returns a negative integer if the first argument is less than the second
compare returns a positive integer if the first argument is greater than the second.
This language could be more explicit, but "a less than b" means that
a comes before b in the sort order. And "a greater than b" means that
a comes after b in the sort order.
Comparing numerical arguments is simple; we can do arithmetic on the arguments.
For an ascending sort we can return (first argument) - (second argument).
We could implement a comparator for Integers like this:
class CompareInt implements Comparator<Integer>{ public int compare(Integer a, Integer b) { return a-b; } }
Suppose we are sorting Integers in ascending order, and we call compare(3, 7).
Our comparator subtracts 7 from 3 and returns the result -4.
This negative value tells the sort routine calling compare that argument a comes before argument b.
The sort routine doesn't care about the actual value -- just whether it is negative, zero, or positive.
If we call compare(6, 5), the comparator returns the value 1, telling us that 6 comes after 5.
If we want to reverse the sort order, to perform a descending sort, all we need do is swap the arguments:
class ReverseCompareInt implements Comparator<Integer>{ public int compare(Integer a, Integer b) { return b-a; } }
Now the same call compare(3, 7) will subtract 3 from 7 and return 4,
and signify that 3 comes
after
7 in the sort order.
Likewise compare(6, 5) now returns -1, meaning that 6 comes
before
5.
All the values returned from the new implementation
are the inverse of those returned by the original implementation.
Note that in either implementation, a call like compare(5, 5) will return zero.
I agree. Here's the link:
subject: SCJP Objective 6.5 Question
Similar Threads
java.lang.Comparable and classcastexception probs yo - from OCP Exam 2 by K and B
K&B SCJP 5 Study Guide p628 problem 9
Need help with practice question implememting Comparator Interface
Please explain the code.
Question regarding binarySearch....
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/536106/java-programmer-SCJP/certification/SCJP-Objective | CC-MAIN-2015-06 | refinedweb | 1,125 | 59.94 |
A substring is the part of a string. Python string provides various methods to create a substring, check if it contains a substring, index of substring etc. In this tutorial, we will look into various operations related to substrings.
Table of Contents
Python String Substring
Let’s first look at two different ways to create a substring.
Create a Substring
We can create a substring using string slicing. We can use split() function to create a list of substrings based on specified delimiter.
s = 'My Name is Pankaj' # create substring using slice name = s[11:] print(name) # list of substrings using split l1 = s.split() print(l1)
Output:
Pankaj ['My', 'Name', 'is', 'Pankaj']
Checking if substring is found
We can use in operator or find() function to check if substring is present in the string or not.
s = 'My Name is Pankaj' if 'Name' in s: print('Substring found') if s.find('Name') != -1: print('Substring found')
Note that find() function returns the index position of the substring if it’s found, otherwise it returns -1.
Count of Substring Occurrence
We can use count() function to find the number of occurrences of a substring in the string.
s = 'My Name is Pankaj' print('Substring count =', s.count('a')) s = 'This Is The Best Theorem' print('Substring count =', s.count('Th'))
Output:
Substring count = 3 Substring count = 3
Find all indexes of substring
There is no built-in function to get the list of all the indexes for the substring. However, we can easily define one using find() function.
def find_all_indexes(input_str, substring): l2 = [] length = len(input_str) index = 0 while index < length: i = input_str.find(substring, index) if i == -1: return l2 l2.append(i) index = i + 1 return l2 s = 'This Is The Best Theorem' print(find_all_indexes(s, 'Th'))
Output:
[0, 8, 17]
program executed,but the list is not printed
Thanks for the examples and explanations, are great. | https://www.journaldev.com/23774/python-string-substring | CC-MAIN-2021-04 | refinedweb | 319 | 63.39 |
~/Mail
I guess I should expect people that are new to unix and at least IMAP to not know this. But, why wouldn't apple set the default to ~/Mail? Thats what it's suppose to be. Gesh.
Possibly because on .Mac accounts, Mail Folders aren't in ~/Mail. They are in ~. And the same may be true of other mail servcies out there as well.
---
-> Capt Cosmic <-
It's not standard and would break anyone using a different IMAP server and even uw-imapd users with different server configurations. People using modern mail servers which use namespaces to provide the appropriate path don't need it, either.
Not necessarily. With my ISP I use '~/mail/', and this being on a Solaris machine, '~/Mail/' wouldn't work. As with OS X itself, this is a case-sensitive OS. It would be foolish on Apple's part to make assumptions on the path prefix.
As others said, it's not a standard. And I guess it's not even common for mail server to use "~/Mail" directory. None of my IMAP accounts uses "~/Mail" prefix. I once had an account that needed some prefix but it wasn't "~/Mail" either.
I am not completely sure I understand whats going on here. The path you are setting in IMAP is for the server location of files/dirs?
Just curious - I tried the ~/Mail and it failed miserably. I have a Linux box running Qmail/vpopmail for my LAN and with virtual domains the users actual mail dirs are located at /home/vpopmail/domains/xyz.com/usernam (for Linux install). So I would suspect I need use that path with this hint. Am I on teh right road here?
I am having the same issues.. for a while Mail.app found all my mailboxes (I run Qmail/Vpopmail/Courier IMAP with virtual domains as well). Now it cant find the mail boxes... I remember I got it working again.. but it was obvious.. anyone else solve this problem?
ok.. I FOUND how I fixed it.. you don't need to set the prefix... what you do is add a bogus account (can be imap or pop3) which bogus info.. then save and quit.. when you restart Mail , your mailboxes should appear
See the Apple Tech Support Article
This is a side-effect of the way uw-imap works. It only supprts mbox-format mailboxes and thus has no way of determing whether something is a mailbox without opening the file to see. They took the lazy way out of simply showing everything in the home directory.
Whether your mail shows up in the default home directory, some place like ~/Mail or /var/spool/mail depends on your server config. Note that any decent IMAP server can tell Mail.app what to use for the prefix, so this isn't necessary unless you're using uw-imapd.
As a side note, uw-imapd is extremely inefficient. Switching to a better imap server like Cyrus or Courier which uses something other than mbox to store mail will be significantly faster with larger mailboxes and has the side-effect of not needing to worry about this sort of thing as the modern IMAP servers correctly send their namespaces.
Visit other IDG sites: | http://hints.macworld.com/article.php?story=20030211174703628 | CC-MAIN-2016-26 | refinedweb | 546 | 76.32 |
?
This is definitely doable. Sublime Text already does something similar with the Find Results page.
Remember, the console is just a view that you can easily show/hide. It's the default debug output of pretty much everything and a bit overloaded. You might consider using a view for this instead of a console:
output = sublime.active_window().new_file()
output.set_read_only(True)
output.set_scratch(True)
output.set_name('Server Output View')
output.settings().set('server_output', True)
By setting an arbitrary setting like 'server_output' to True on your output view, you can check if the current view has this setting when determining whether to act on an event.
You can easily insert text into this output view using my Edit class:
from .edit import Edit
with Edit(output) as edit:
edit.append('new line of stuff\n')
I implemented a double-click action in SublimeXiki:Plugin Codesublime-mousemap
It manages to not break double-click-drag using a bit of hackery.
Thanks for your answer lunixbochs! I implemented the functionality discussed above and I am using Edit class to show the output in a view.
Now I've run into another problem. When I am printing enough lines to output view the plugin_host.exe crashes. Whenever I append text to output view the memory consumption of the plugin_host.exe increases but I do not have any way to release it. Erasing the whole text or shutting down the view does not release the memory thus after running my plugin some time it eventually crashes sooner or later.
Have you experienced this behavior and if, do you have any solution for this?
How many lines are you talking?
I see it too if I use SublimeXiki and just print hundreds of thousands of lines.
It looks like this is not a memory leak on the Python side. According to the gc module, I'm not getting past 512 objects before it collects down to 12 or so. I'll see if I have any obvious leaks of internal Sublime objects.
I made this dummy TextCommand:
import sublime
import sublime_plugin
class DummyCommand(sublime_plugin.TextCommand):
def run(self, edit):
view = self.view
if view.size() > 1024:
view.erase(edit, sublime.Region(0, view.size()))
view.insert(edit, 0, 'ajklsdf\n' * 5)
and ran it a ton by running this many times in the console:
exec("for i in range(10000):\n\tview.run_command('dummy')")
My plugin_host memory usage is now over 500mb from a fresh Sublime start. gc.get_count() is reliably between 0-512.
Smells like a leak in plugin_host to me. The OS X leaks command shows me a ton of 528-byte structs in the heap with data like this: bochs.info/p/bpczg
leaks
This is on OS X 10.9 (Mavericks) with build 3047.
I'm not getting a leak with your sample command. I am on build 3054, maybe try upgrading?
I still see a leak in 3054, and I noticed it's matched by the Sublime Text process.
I see the same 528-byte sections in plugin_host: bochs.info/p/wkq6zI see this leak pattern in Sublime Text itself: bochs.info/p/j5qtr
Ah, sorry, for some reason I didn't see it increase in size in Activity Monitor, but now I clearly do. I probably just wasn't waiting long enough for it to update the display (didn't realize the default was 5 seconds). I would expect a small increase in the sublime process size to contain the undo information, but the memory is never released. Looks clearly like a bug in sublime. | https://forum.sublimetext.com/t/mouse-clicking-a-console-line-and-jumping-to-file/11806/3 | CC-MAIN-2016-44 | refinedweb | 595 | 67.45 |
ASP.NET MVC: Enhancing The WebGrid - Lazy Loading with SignalR
In this fifth post about how to enhance your WebGrid, we talk about using SignalR to lazy load records dynamically.
After the last post about lazy loading records using WebAPI into a WebGrid and an earlier post about how to build your own real-time Like button, I thought this would be an excellent opportunity to use SignalR yet again to maximize our WebGrid (and it gives me a chance to play around with SignalR again!)
The last post used WebAPI for our communication to the server. Today, we'll tighten up that gap with a direct connection using SignalR.
SignalR is fantastic when it comes to real-time Internet communication. I've said in the past when AJAX was first introduced that AJAX was a game-changer and that it blurred the lines between desktop applications and websites.
Well, SignalR closes that gap even further for websites. SignalR makes it even easier to create two-way communication between the client and server using JavaScript and Web Sockets.
Once you finish this post, you'll have a good understanding of how SignalR is quite an awesome and easy .NET technology to implement.
This may be the smallest and quickest post since we did most of the JavaScript and HTML work in the last post.
Installation
So let's get started by replacing WebAPI with SignalR.
First, we need to install SignalR using the Package Manager Console (View / Other Windows / Package Manager Console)
install-package Microsoft.AspNet.SignalR
When you finish your installation of SignalR, you immediately see a Read.me on the screen reminding you to create a Startup.cs file that kicks off when your webapp initially runs.
Here is what the Startup.cs file looks like:
using Owin; namespace WebGridExample { public class Startup { public void Configuration(IAppBuilder app) { app.MapSignalR(); } } }
Pretty darn simple. I placed Startup.cs in the root folder of my web application.
Script Placement
We also need to include two new scripts in our User/Index.cshtml View.
Place these two scripts directly after the bootstrap.js file at the bottom so you have the following (bold are the changes):
<script src="~/Scripts/jquery-2.1.3.min.js"></script> <script src="~/Scripts/bootstrap.min.js"></script> <script src="~/Scripts/jquery.signalR-2.2.0.min.js"></script> <script src="/signalr/hubs"></script>
The SignalR/Hubs script is something that most people forget about including in their script list. This is a dynamic script that reads your hub and generates a JavaScript file based on your C# hub class.
Pretty mind-blowing! Wait until we're done building our Hub class and I'll show you what I mean.
We'll come back to this Index page and finish up our JavaScript after we complete our Hub class.
Hub-ba, Hub-ba!
Hubs are required for SignalR to communicate with the client and server and they are a core component for SignalR apps.
Our WebGridHub inherits from the Hub class and, as you can see, uses an asynchronous Task type.
Hubs\WebGridHub.cs
using System.Threading.Tasks; using Microsoft.AspNet.SignalR; using Microsoft.AspNet.SignalR.Hubs; using WebGridExample.Repository; namespace WebGridExample.Hubs { [HubName("webGridHub")] public class WebGridHub : Hub { private readonly UserRepository _repository; public WebGridHub() : this(new UserRepository()) { } public WebGridHub(UserRepository repository) { _repository = repository; } public Task GetPagedUsers(int page, int pageSize) { var records = _repository.GetPagedUsers(page, pageSize); return Clients.Caller.populateGrid(records); } } }
Even though it's not necessary, I like to include the HubName attribute on my Hub classes so that I don't have a problem with case-sensitive issues on the JavaScript side.
When this Hub initializes, we setup a UserRepository to pull our records similar to how we pulled them in the Web API example.
The only method we have in this class is our GetPagedUsers that does exactly what we expect based on the Web API code.
However, the last line in the method is rather strange. We are telling SignalR to return this result to just the caller. There are other properties such as:
- Clients.Caller - Send the results only to the client that made that call.
- Clients.All - Send the response to all clients that are connected, so everyone will see the results on their screen.
- Clients.Others - Send a response to everyone except the caller.
- Clients.OthersInGroup - Send the results to others in a group.
- Clients.OthersInGroups - Send the results to others in multiple groups.
The populateGrid that is called is a JavaScript function on the client-side.
Yeah, I said client-side, which we'll look at now.
Setting Up The Hub Client
Let's head back to the User\index.cshtml View that has the JavaScript at the bottom.
The "moreButton" click simply needs replaced with our SignalR JavaScript code.
Since this code was Web API based, we can comment out this JavaScript code:
$("#moreButton").click(function () { var pageSize = 5; var getNextPageNo = ($("#grid tbody tr").size() / pageSize) + 1; var</td><td></td><td></td><td></td><td></td></tr>"; $.getJSON('/api/User/GetPaging?page=' + getNextPageNo + "&pageSize=" + pageSize, function (users) { if (users.length > 0) { $>"); } }); });
and replace it with this code:
var webGridHubClient = $.connection.webGridHub; webGridHubClient.client.populateGrid = function (users) { if (users.length > 0) { var</td><td></td><td></td><td></td><td></td></tr>"; $>"); } }; $("#moreButton").on("click", function () { var pageSize = 5; var getNextPageNo = ($("#grid tbody tr").size() / pageSize) + 1; webGridHubClient.server.getPagedUsers(getNextPageNo, pageSize); }); $.connection.hub.start();
Starting at the top, the first thing we need to do is setup our connection to our C# Hub class (webGridHub) using the connection object. This is the attribute on your C# Hub Class.
Next is the "method" that we are defining in our client that the server will call. Always use hubName.client and then your method signature.
The funny thing about the populateGrid method is that it's the exact same code as the Web API JavaScript code. The only difference is that we are replacing it with SignalR syntax.
Now, at the bottom, you may be asking, "how do we contact the server to get our records?" How do we do that?
By issuing a hubName.server.methodName with your parameters. Now, since this is JavaScript syntax, the first letter of your methods in your C# Hub class will be lower-case when you call them from the client.
This is how the chain of code gets executed:
- The script is loaded and executed and the connection.hub.start() establishes a connection to the server.
- When the user clicks the "Load More" button, a call is made to the server to "GetPagedUsers()"
- On the server-side, the GetPagedUsers loads the records based on the parameters passed in by the client.
- The GetPagedUsers method takes the records returned from the repository and sends them back to the client by calling the client's populateGrid method.
- The populateGrid method takes the records and using JavaScript, populates the WebGrid.
When you run your web application, it's the same effect as the Web API, but it's a tad bit faster because a connection to the server is already made instead of the latency of a web service call.
The Surprise
Remember when I said that the SignalR/Hubs script is dynamic?
Type that into your browser () to see the script that is dynamically generated. If you look through the script, you'll find your GetPagedUsers method in the JavaScript code.
Scary, I know (but ohhh so cool) ;-)
Conclusion
Today, we used a second alternative technology (first, WebAPI, second, SignalR) on your existing WebGrid to give your users an even bigger "Wow" factor: Your records returned even faster to the WebGrid.
With the latest technologies of WebAPI and SignalR, you can now perform a number of tasks in your WebGrid that would make your users believe that this wasn't possible to do in a web application.
For example, it is now extremely easy to create a CRUD (Create, Retrieve, Update, Delete) WebGrid, a hierarchical tree grid, or even a master-detail view within a Web Grid.
I might also add that both technologies work with mobile devices as well.
I hope this WebGrid series has opened your eyes to the possibilities your can provide to your users to empower them to manipulate and view their grid data.
The source code is available at GitHub:
Looking for a new way to do something in a WebGrid? Post your comments below! | http://www.danylkoweb.com/Blog/aspnet-mvc-enhancing-the-webgrid-lazy-loading-with-signalr-A0 | CC-MAIN-2016-44 | refinedweb | 1,409 | 57.47 |
Overview
Atlassian Sourcetree is a free Git and Mercurial client for Windows.
Atlassian Sourcetree is a free Git and Mercurial client for Mac.
Django ForumBR
Please, refer to github for pull requests and latest commits, as the current development is taking course there.
A simple, everyday django forum
Django-forumbr is a project to create a simple, easy-to-use and complete forum for django projects on the run.
Releases
- django-forumbr 0.4 (23/04/2014): Compatible with Django 1.6
- django-forumbr 0.3.1.3 (15/12/2012): Compatible with Django 1.3/1.4
Install instructions
pip install django-forumbr
# then add forum to your settings INSTALLED_APPS # and url(r'^forum/', include('forum.urls', namespace='forum')) to your urls.py
# create the models and you're done python manage.py syncdb
Optional
ForumBR uses a set of libraries to provide text formatting support. The default formatting syntax for replies is plain text. To turn on the support, install the disired library:
pip install bbcode pip install markdown pip install textile pip install docutils pip install html5lib # for plain html support
and then, set the FORUM_MARKUP language of your choice.
App Settings
ForumBR comes with a large set of configurable settings. For a complete list of options, take a look at django-forumbr/app_settings.
Auth
See django-forum/test_project for an example on how to integrate with django-registration. | https://bitbucket.org/italomaia/django-forumbr/overview | CC-MAIN-2018-17 | refinedweb | 232 | 51.04 |
Separating Design and Functionality
When we create a new windows form, a class that inherits from System.Windows.Forms.Form is generated. This class is separated in two files thanks to the feature called partial classes. Partial classes allow you to separate definitions of a class in different files within the same project and namespace. With the introduction of partial classes in .NET 3.5, Visual Studio was able to separate design from functionality. This allows a programmer to concentrate on the functionality of the application. The name of the files will have the following pattern:
FormName.cs FormName.Designer.cs
where FormName is the name of the form, for example, Form1. A third file with .resx extension is used for resources of the form and will be discussed in a separate lesson. When we create the form, add controls, and modify properties, all the code is written in a somewhat hidden file with a .Designer.cs extension. If you can’t see it, find the .cs file for the form in the Solution Explorer and click the arrow button beside it.
Double-clicking that file will allow you to see all the codes that were generated by Visual Studio. The code contains methods for disposing and initializing the controls. You can see the controls being declared and properties, such as Location and Text, are set depending on what you specified in the Properties Window. You will also see event handlers being attached to events of controls. If you can’t see the code, it is hidden by default and is labeled “Windows Forms Designer generated code”. Just click the plus icon to its left to show it. You will see that the code for initializing the controls and their properties and events is located inside a method called InitializeComponent. This method is called in the form’s constructor located at the main code file for the form’s class. The codes in the Designer file is initially hidden because Visual Studio want’s you to use the Designer and the Properties Window instead of writing all these codes manually. | https://compitionpoint.com/separating-design-and-functionality/ | CC-MAIN-2021-21 | refinedweb | 350 | 65.12 |
14
Steel Trap:
How Subsidies and Protectionism Weaken
the U.S. Steel Industry
by Dan Ikenson
Executive Summary
Dan Ikenson is a trade policy analyst with the Center for Trade Policy Studies at the
Cato Institute.
It would be difficult downright foolish, though, to do so when the
to find another U.S. Introduction punished sector is of overwhelmingly greater
economic significance.
industry already On March 6, President Bush is expected to Protectionism has been characterized as an
more coddled and announce specific measures to further insulate exercise in picking winners and losers; in the
the domestic steel industry from import compe- case of protecting steel, however, there are only
protected from the tition. By any relevant economic measure, the losers. Despite decades of intervention, the
realities of the costs of such a decision will far exceed the ben- same problems persist. It is reasonable to con-
marketplace than efits. And any benefits accruing to steel firms clude that even with a new layer of import
from that protection are sure to be fleeting. restrictions, the steel industry will continue to
the steel industry. The current round of import restraints under suffer many of the ills currently cited as evi-
consideration stem from an investigation that the dence of import-caused injury.
Bush administration launched last June under In all industries there are winners and
Section 201 of the Trade Act of 1974. That inves- losers. The key to ensuring the vitality of an
tigation led the U.S. International Trade Com- industry, however, is that the losers contract or
mission (ITC) to conclude that imports have been cease operations altogether. Otherwise, the
a “substantial cause of serious injury”1 to the health of the entire industry is compromised.
domestic industry and to recommend a variety of Since December 31, 1997, there have been 30
“remedies” to the president, including quotas and bankruptcies within the “steel industry.” Only
tariffs on imported steel. The president now must 18 of those 30 are actually steel producers. 4
decide whether to accept, reject, or modify the But many of those companies are still operat-
ITC recommendations. ing or have emerged from bankruptcy status.
It would be difficult to find another U.S. The continued operation of steel companies
industry already more coddled and protected that are chronic money losers poses a threat to
from the realities of the marketplace than the the profitable firms. Allowing these firms to
steel industry. This fact, more than any other, expire will do more for the health of the steel
explains the steel industry’s perennial problems. industry than any reality-deferring protection-
Over the past three decades, U.S. steel produc- ism possibly can.
ers have been shielded from foreign competition
by quotas, voluntary export restraints, minimum
price undertakings, and hundreds of antidump- The “Unfair Trade” Diversion
ing, countervailing duty, and safeguard mea-
sures. Federally subsidized loan guarantees, pen- Winning the hearts and minds of policy-
sion bailouts, and “Buy American” preferences makers and the general public has been a
have likewise fostered uneconomic excess capac- strategic goal of the steel industry for years.
ity within the industry and discouraged unsuc- And, in large measure, the industry has suc-
cessful firms from the otherwise rational deci- ceeded in drumming up sympathy for its
sion to exit the market. plight. The public relations and lobbying
Steel users employ 57 workers for every one efforts to this end have relied on systematic
employed in steel production.2 Steel users misstatements and factual distortions.
account for 13.1 percent of gross domestic prod- Representative of these tactics is a recent
uct (GDP), while steel producers account for opinion piece by Sen. John D. Rockefeller IV
only 0.5 percent. 3 Yet responding to steel pro- (D–WV), which appeared in some national
ducers’ self-inflicted problems with more pro- newspapers. Rockefeller perpetuates the myth
tectionism will only saddle downstream steel- that “our steelmakers simply can’t compete with
using industries with price hikes and supply subsidized foreign competitors operating in pro-
shortages that handicap them vis-à-vis their tected sanctuary markets.”5 What he doesn’t
international competitors. It is bad enough to acknowledge is that the U.S. industry itself is
punish one sector for the failures of another; it is heavily subsidized and protected—and has been
2
so for decades. He also neglects to mention that “subsidized” as a matter of generic qualification,
the 1997–98 Asian crisis—the catalyst for the as if all imports warranted a derisive label. A
steel import surge that followed—and the cur- recent press release from the office of Rep.
rent Section 201 proceeding have nothing to do Sherrod Brown (D-Ohio) incorrectly
with sanctuary markets or urban legends about announced that, “… on October 22, 2001, the
“unfair trade.” The steel industry and its han- U.S. International Trade Commission (ITC)
dlers intentionally ignore these facts because found that illegal imports had caused significant
stigmatizing imports as unfair is a useful smoke- injury to the domestic steel industry.”7
screen for protectionism as usual. (Emphasis added.) There was nothing illegal
The fact is that Section 201 cases are about about the imports in question.
injury caused by fair trade. In bringing the case, Brown went on to implore, “Without imme-
the domestic industry formally acknowledges diate action by the President, the situation is
the crucial distinction that its condition is not dire. Inundated by illegal imports, 29 domestic
necessarily a result of unfair or predatory foreign steel companies have either declared bankruptcy
practices.6 Section 201 does not require any or gone out of business entirely in the last four
finding of unfair trade—only that a rising level years. Nearly 30,000 American steelworkers
of imports has caused injury to the domestic have lost their jobs in just the last 16 months.
industry. The protection-seeking industry is The pace is accelerating, with another steel Import restraints
obliged to explain why it should be entitled to company declaring bankruptcy, on average, will only delay
relief and how it intends to improve its condi- every nine days. In order for the domestic genuine recovery
tion if protection is granted. Hence, the law industry to consolidate and survive, the federal
requires—and relief is conditioned on—submis- government must vigorously enforce U.S. trade by prolonging the
sion of a detailed and credible recovery plan. laws.”8 (Emphasis added.) Again, the mischar- industry’s
Unfortunately, import restraints will only delay acterization of imports as “illegal” is used to sup-
genuine recovery by prolonging the industry’s port the conclusion that the government “must
overcapacity
overcapacity problem. vigorously enforce U.S. trade laws.” In reality, problem.
But the tenor of the debate shows no signs of exercising Section 201 “rights,” also known as
contrition on the part of the steel industry. the “escape clause” because it allows countries to
Despite the fact that Section 201 relief for steel escape temporarily from their commitments to
producers burdens steel users disproportionate- tariff reduction and market access, is an excep-
ly, could invite WTO-legal retaliation against tion to those commitments and not a rule to be
other U.S. export sectors, and undermines “vigorously enforced.”
prospects for trade agreements and related job
growth, the steel industry remains on the offen-
sive, deflecting any blame for its own state of Antidumping and
affairs, as if it is owed an enormous debt. Of
course, contrition would undermine the careful-
Double Standards
ly crafted public relations message. That mes- Integrated steel production entails high
sage: the U.S. government, through its failed fixed costs—that is, heavy investments in plant
trade policies, is responsible for the unfair and equipment that must be amortized regard-
imports that have driven the industry to the less of the volume actually produced and sold.
brink of collapse, and must therefore make A large volume must therefore be produced
amends. In other words, the steel industry bears and sold before an integrated mill can cover
no responsibility for its condition. fixed costs and turn a profit. After a certain
Many in the press, Congress, and the admin- production level is surpassed, the average cost
istration have adopted, knowingly or not, the of producing a ton of steel starts to decline and
slanted terminology so central to crafting the profit margins can increase.
steel industry’s victim’s image. Popular press In times of falling demand, firms respond by
accounts refer to steel imports as “dumped” or some combination of cutting prices and curtailing
3
output. When output is cut, however, the average explosion of antidumping cases as evidence that
unit cost of producing a ton of steel increases. “unfair trade” is the cause of its problems.
Profitability is squeezed at both ends: costs If selling below cost really does constitute
increase at the same time prices fall. This state of unfair trade, U.S. producers have some explain-
affairs thus imposes severe pressures on steel firms ing to do. Under the current definition of dump-
to keep costs as low as possible so that occasional ing under U.S. law, every U.S. steel company
storms can be weathered. that is losing money is guilty of dumping here in
But the steel industry found a way to reduce its home market. Bethlehem Steel lost money in
these pressures with an ingenious political fix. 6 of the 10 years between 1991 and 2000.
Specifically, it recognized that the cyclical price-cost National Steel and Weirton Steel lost money in
squeeze could be manipulated to domestic produc- 4 of those 10 years. LTV and Geneva Steel lost
ers’ advantage if changes to the then seldom-used money in 4 of the 9 years between 1992 and
antidumping law were implemented. Changes in 2000, while Wheeling-Pittsburgh lost money in
1974 included the introduction of what has come 3 of those 9 years.10 If selling below cost is a
to be known as the “cost test.” The cost test requires problem, which under most definitions of eco-
that sales made by a foreign firm in its home mar- nomic rationality it is not, then why are there
ket at prices below the full cost of production be different policy responses to foreign firms than
excluded from calculating average prices in that there are to domestic firms engaging in this
market. It does not, however, exclude sales by that practice? If healthy firms are compromised by
foreign firm made at prices below cost from the cal- unhealthy ones selling below cost, which could
culation of average prices in the U.S. market. As a be the only tenable objection to the practice,
result, comparisons of a foreign firm’s U.S. and then should they be more compromised when
home-market price averages (the dumping margin the offending firm is foreign?
calculation) are almost always skewed in favor of California Steel Industries thinks not. Its
finding dumping because the home-market price chief executive officer, C. Lourenco Goncalves,
average is based on a higher-priced (above full cost) said in an article in American Metal Market, that
subset of all home-market sales. while integrated and mini-mills in the East and
This change in the antidumping law in 1974 Midwest continue to blame imports for their
sets up foreign firms to be punished for respond- woes, those same producers “are currently the
ing rationally to slackening demand. To avoid responsible parties for the extremely low prices
antidumping exposure, foreign producers would on the West Coast.” The article cites reports
be required to always sell above their costs of pro- from steel buyers who claimed that imported
duction, which would require price hikes during hot-rolled sheet was not widely available at low
Under the current economic downturns—a patently irrational busi- prices, but that producers east of the Rockies
ness strategy. As a result of this poorly under- had been “offering low-priced hot rolled that has
definition of stood feature of the antidumping law, findings of undercut the local mills’ attempt to increase
dumping under dumping have nothing to do with market reali- [price] tags even when freight to the West Coast
U.S. law, every U.S. ty—and certainly are no evidence of “unfair is included.” Mr. Goncalves said, “If I had anti-
trade.”9 They are simply an artifact of a deeply dumping laws to protect CSI against this irra-
steel company that flawed methodology. The steel industry has tional behavior, I would use them against these
is losing money is exploited this flaw with a vengeance: Although mills. Unfortunately, I don’t have.”11
steel imports constitute only about 2 percent of
guilty of dumping total U.S. imports, more than 50 percent of all
here in its outstanding U.S. antidumping measures are Protectionism, Subsidies,
home market. against steel imports (while most of the rest are
against imports in other high fixed-cost indus-
and Overcapacity
tries). And because the details of how antidump- The verdict of the marketplace is unam-
ing law actually works are not generally under- biguous: The U.S. steel industry suffers from
stood, the U.S. industry is able to point to this excessive, uneconomic capacity. The proof lies
4
in the number of steel producers now in bank- ed in U.S. steel capacity over and above what The large number
ruptcy. The large number of bankrupt firms would have existed in the absence of that sup- of bankrupt firms
makes clear that there is more steelmaking port. Consequently, the steel industry’s vulnera-
capacity in the United States than is currently bility as of 1997–98 when the industry’s current makes clear that
economically viable. problems began was heightened by past policies. there is more
The steelmakers’ response to this excess is Further, more interventionist policies since
to call for more subsidies and protectionism. 1997–98 have exacerbated the burdens of excess
steelmaking
But the fact is that the current excess exists in capacity by creating “barriers to exit”—that is, capacity in the
no small part precisely because of subsidies and distortions of market signals that discourage United States than
protectionism. Responding to the industry’s failed firms from ceasing operations. The contin-
woes with more of the same will not resolve the ued existence of these inefficient firms weakens is currently
problem; indeed, over the long term it will the entire industry, both by depressing prices and economically viable.
merely add fuel to the fire. by robbing healthier firms of the scale economies
Government interventions to assist the steel they could enjoy with larger market shares.
industry have been more or less continuous for The Organization for Economic Cooperation
the past three decades. Pension guarantees, and Development (OECD) is currently sponsor-
loan guarantees, special tax and environmental ing international negotiations designed to reduce
exemptions, research and development grants, excess, uneconomic steel capacity around the world.
and “Buy American” provisions have been per- In the context of those talks, the U.S. government
vasive. By conservative estimates, these subsi- has taken the position that there are no significant
dies have equaled more than $23 billion since structural barriers to exit in the U.S. market:
1975.12 An Ernst & Young LLP study report-
ed that the U.S. steel industry received more In light of the primacy of market
than $30 billion in government subsidies dur- forces in shaping the U.S. steel indus-
ing the 1980s alone.13 try, there currently are limited govern-
Protection from import competition has ment impediments to the closure of
also been the norm over the past 30 years. excess inefficient steelmaking capacity
From 1969 to 1974, there were “voluntary” in the United States. The U.S. govern-
import restraints—restraints observed under ment does not provide significant sub-
the threat of statutory quota legislation. From sidies or similar assistance to the U.S.
1978 to 1982, there were minimum import steel industry. The most visible federal
price arrangements—a scheme perpetrated program directed toward the steel
while 19 antidumping petitions were pending. industry, the Emergency Steel Loan
From 1982 to 1992, there were new quotas Guarantee Program, adopted in 1999,
affecting a range of steel products from many is limited in scope and, to date, has
different countries. Since then, there have resulted in the disbursement of a sin-
been literally hundreds of antidumping and gle loan by a private lender under the
countervailing duty cases brought against every guarantee program—$110 million to
relevant foreign producer from every region of Geneva Steel last year.14
the world. Many of these measures are still in
effect today. It is on top of this that the indus- This assessment ignores important realities.
try is now pursuing an enormous campaign for First, exit is systematically discouraged by the
global import restraints under Section 201. ongoing spate of antidumping and countervailing
Any student of Economics 101 knows that duty cases. As of December 2001 there were 290
when you subsidize something, you get more of outstanding antidumping and countervailing
it. For three decades interventionist policies duty measures in place, of which 156 (53.8 per-
have been subsidizing U.S. steel production. It cent) cover steel products. 15 But as significant as
therefore follows that the artificial support pro- these steel figures are, they belie the trend that
vided by these interventionist policies has result- demonstrates the industry’s growing addiction to
5
trade remedy laws. Since 1997, 78 of the 103 steelmaking capacity. There may be too many steel
antidumping and countervailing duty measures companies, but there are not too many steelwork-
imposed were on steel products—more than ers, and any restructuring must preserve the jobs of
three-quarters of the caseload.16 And the number the workers who have made sacrifice after sacrifice
of cases still pending (where a final determination in order to keep the industry alive in the face of a
has not been rendered) as of January 2002 was flood of unfairly dumped foreign steel imports.” 20
104, of which 70 cover steel products.17 (Emphasis added.) This is utter nonsense. What
These cases result in temporary shifts in mar- exactly is “rational consolidation that ensures the
ket share and temporary increases in prices rela- preservation and revitalization of existing” capaci-
tive to market outcomes. These jolts of artificial ty? How can there be too many steel companies
stimulus help to maintain weaker firms that oth- but not too many steelworkers? The union’s inter-
erwise would go under. If President Bush accedes est in maintaining membership supersedes its
to steel industry lobbying and imposes new trade interest in its members.
barriers under Section 201, some of the industry’s Attempts to preserve jobs at failing firms
weaker firms will have won yet another reprieve have come at the expense of jobs at relatively
from market accountability. healthy firms. To preserve jobs, the USWA has
Another structural barrier to exit can be led efforts toward employee purchases of failed
Reorganization found in the federal bankruptcy process. and failing mills. Although on their face these
proceedings under Reorganization proceedings under Chapter 11 efforts are pursued to preserve jobs, they have
Chapter 11 can act can act as a subsidy for failing firms, allowing invariably failed to change the fate of the mills
them to continue to operate despite insolvency, and have maintained a cloud of uncertainty
as a subsidy for then relieving them of debts and sending them throughout the industry. Research on employee
failing firms, out into the world to fail again. As one bank- stock ownership plans (ESOP) has revealed that
ruptcy expert concluded, “Chapter 11 reorga- “The proposition that employees with an equity
allowing them to nization often protects inefficient businesses stake will be more productive and improve firm
continue to operate and results in the continued waste of performance is not supported.”21
despite insolvency, resources.”18 A number of steel producers have Al Tech Specialty Steel Corporation of
been repeat wards of the bankruptcy courts. Dunkirk, New York, entered bankruptcy at the
then relieving them LTV Corporation, Wheeling-Pittsburgh Steel end of 1997. In an effort to keep the plant from
of debts and Corporation, Laclede Steel Company, and shutting down, the USWA oversaw the partial
Edgewater Steel have all gone through employee purchase of the plant, which emerged
sending them out Chapter 11 only to return. Many producers from bankruptcy as Empire Specialty Steel in
into the world to treat bankruptcy as a strategic makeover, which 1998. Despite new capital and ownership, the
fail again. painlessly removes the wrinkles of unwanted firm shut down permanently in June 2001.
debt. Geneva Steel’s chairman, Joseph Laclede Steel of Missouri and Erie Forge &
Cannon, said of the bankruptcy process, “With Steel of New York were both temporarily resus-
our balance sheet restructured, Geneva Steel citated from bankruptcy through partial
will be producing steel for many years to employee funding coordinated by the USWA.
come.”19 The current lenient procedures are a Each subsequently failed anyway.
significant contributor to excess capacity. The case of Bar Technologies (BarTech)
Another major barrier to exit is supplied by the demonstrates how USWA resistance to plant clo-
steelworkers’ union, the United Steelworkers of sures has made matters worse for the industry.
America (USWA). The union uses the full extent After reports suggesting that there was already too
of its considerable bargaining power to resist plant much steel bar capacity on the market, Bethlehem
closings. USWA’s role in this process is aptly sum- Steel abandoned plans to refurbish and expand
marized by its president, Leo Gerard, who said, bar-making facilities at its Johnstown, Pennsyl-
“We are committed to working constructively vania, and Lackawanna, New York, locations.
toward a rational consolidation, one that ensures Despite the assessment that overcapacity would
the preservation and revitalization of existing U.S. preclude appropriate returns on capital, the
6
USWA led efforts to purchase the plant and add into account—a state of affairs that disqualifies
more than 1 million tons of capacity, which was these firms as viable acquisition targets.
effectuated in 1994 through subsidies from the Accordingly, integrated steelmakers are insist-
states of New York and Pennsylvania and the ing that the federal government assume those
Veritas Investment Group.22 legacy costs as a precondition to any consolida-
The primary customers for these specialty bar tion. In particular, U.S. Steel has expressed
products were the big automakers and compa- interest in purchasing three failing firms,
nies that produce axles and cranks for automo- National, Wheeling-Pitt, and Bethlehem. But
biles. Because these big customers already com- these would-be targets have legacy liabilities of
manded a high level of market power, the intro- at least $12 billion. As U.S. Steel’s CEO Tom
duction of the extra capacity drove prices down Usher argues, the government should cover
substantially. Despite losing money, BarTech those costs since his company would be help-
pursued expansion plans through acquisition and ing the Bush Administration accomplish its
upgrades, purchasing Republic Engineered goal of worldwide capacity reduction. 25
Steels in 1998 and then most of USS/Kobe in The chutzpah here is striking. Section 201
1999. In 2001, the combined entity, Republic requires that industries seeking import relief jus-
Technologies, entered bankruptcy. What began tify that temporary protectionism with credible
as a union effort to stop two plant closures, plans to restructure and become more competi-
affecting a few hundred workers, ballooned into tive. Yet the integrated steelmakers’ recovery plan
a bankruptcy jeopardizing 4,600 employees. 23 is simply to seek yet another government bailout.
Although the steel industry tries to blame Legacy costs are the legacy of greed. They are
unfair foreign practices for all of its problems, the product of an intransigent union and a man-
the fact is that the U.S. market is itself badly agement confident that the government would
distorted by past and present interventionist bail them out of obligations they could not meet.
policies. These direct and implicit subsidies In the 1980s, several steel companies intentional-
block the market signals that would lead to ly underfunded their pension obligations, forcing
reductions in uneconomic capacity through taxpayers, through the federally funded Pension
attrition of the least efficient producers. As a Benefits Guarantee Corporation (PBGC), to Legacy costs are
result, healthier steelmakers must bear the bur- cover their indiscretions.26 By 1999, nearly 40
den of excess capacity. percent of the value of all claims on the PBGC
the product of an
were attributable to steel firms.27 intransigent union
In labor negotiations during the late 1980s and a management
Legacy Costs and early 1990s, unions agreed to smaller pay
increases for workers in exchange for larger bene- confident that the
The request for new trade barriers under fits packages for retirees. The impetus for this government would
Section 201 has forced the steel industry to approach was that retiree benefits were “off bal- bail them out of
acknowledge the need for some kind of restruc- ance sheet” items, while labor expenses were
turing. The present focus is on consolidation— observed as current expenses on income state- obligations they
that is, mergers between firms or acquisitions of ments. In an “Enronesque” effort to show better could not meet. In
small firms by larger ones. By consolidating, the income results to potential investors, the legacy
industry will supposedly regain its strength and cost issue was born. Changes to tax reporting an “Enronesque”
no longer require Section 201 relief. requirements by the Financial Accounting effort to show
There is, however, a catch. Any move Standards Board in 1993 required the reporting better income
toward consolidation is currently stymied by of the formerly unreported retiree benefits on the
the issue of “legacy costs”—certain health care financial statements.28 Moving these expenses results to potential
and benefits owed to retired, unionized steel- onto the balance sheet, combined with soaring investors, the
workers. These costs total about $13 billion for health care costs, drove current reported expenses
the industry. 24 A number of firms now have through the roof, reducing profits and dissuading
legacy cost issue
negative net worth once legacy costs are taken further investment. was born.
7
It should be no The union, for its part, is opposed to capaci- the mini-mills. The former is where the indus-
mystery why there ty reduction through consolidation. Instead, it try has been; the latter is where it’s going.
argues that ailing firms should be kept afloat “Mini-mills as a group are highly competi-
have been with loan guarantees, while legacy costs should tive in the North American market and will
employment be funded from special surcharges on steel ship- continue to take a larger share of that market.
ments. But if the unions are really concerned U.S. shipments of steel rose about 20 million
declines in the steel about their retirees, why not advocate direct sub- tons from the level obtained in the early 1990s,
industry. Mini-mills, sidies to them? Clearly, it would be less costly and the mini-mills now account for close to
which are than funding retirement plans through the con- one half of total steel shipments. In 2000, the
tinued operation of inefficient mills under the mini-mills’ share of U.S. shipments was nearly
responsible for a stewardship of a self-interested union. 100 percent of the long products and a third of
greater share of Many of today’s healthy steel firms, like flat products. In three or four years they will
Nucor, Steel Dynamics, and other electric-arc account for a preponderant share of U.S. steel
domestic production furnace producers known as “mini-mills,” have production.”31 These are the recent words of
each year, require no legacy costs. They had nothing to do with Thomas Danjczek, president of the Steel
less labor. the backroom deals that led the industry down Manufacturers Association (the trade associa-
this path. Why should they be forced to subsi- tion for mini-mills). Is this irrational exuber-
dize their competition with surcharges? And ance or is there something to these claims?
why should the government favor mills that the Gross steelmaking capacity in the United
capital markets have deemed unworthy? The States grew by 16 million metric tons to 124
mini-mills are opposed to these unwarranted million metric tons between 1995 and 2000.
interventions: “Government assistance to trou- Nearly all of this increase was capacity added to
bled steel companies for continued operation electric-arc furnace operations (i.e. the mini-
or legacy costs is unacceptable. That assistance mills).32 Mini-mills produce a ton of steel with
is unfair to those steel companies who are not far less labor than is required at integrated mills.
troubled. Government funding to aid displaced A recent comparison of actual man-hours per
workers from closed facilities through retrain- ton demonstrates that mini-mills are seven
ing and relocation is encouraged.”29 times more efficient than integrated producers
in terms of labor productivity. 33 It should be no
mystery why there have been employment
Rise of the Mini-mills declines in the steel industry. Mini-mills, which
are responsible for a greater share of domestic
“As you know we really have increasingly two production each year, require less labor.
steel industries in this country. One is based on The mini-mills are smaller, nimbler, relatively
the older technologies…and the other is the new creations that make steel products primarily
mini-mills, which are evolving at a very dramat- from scrap metal in electric-arc furnaces. They do
ic pace…”30 not produce raw steel, and therefore depend less
Despite this important reminder by Federal on dwindling inputs that have become increas-
Reserve Chairman Alan Greenspan, one could ingly costly to ship to the large mills, many of
easily gather from popular accounts that the which are located great distances from iron ore
steel industry has fallen into a bottomless stocks and from their markets. Essentially, their
descent. There are reports of 30 bankruptcies operations by-pass the labor-intensive, high-cost
since 1998 and 10,000 annual job losses in the process of making raw steel. Importantly, the
industry—statistics, of course, that are ascribed labor force is generally not unionized.
to unfair or illegal import competition. The Mini-mills do not rely on iron ore or coal,
reality, however, is that some firms are doing but rather use steel scrap as their primary
quite well. It is important to appreciate the dif- material input. In the United States, scrap is
ference between these two distinct groups of cheap relative to the cost of the equivalent
U.S. steel producers—the integrated mills and process of bringing iron ore and coal to a com-
8
parable property, and relative to scrap prices We are talking about our national defense. We are
abroad. The clincher for mini-mills, though, is threatened on a daily basis by imported steel,
the pro-cyclical nature of its primary input and which is a direct threat to our national security.
its output. When steel demand is high, prices Foreign countries have been working to cripple
are high, and profits are good. When demand this country economically.”37
is low, prices fall, but so do the costs of scrap, The president of the USWA, Leo Gerard,
which account for about 50 percent of the total also tried to capitalize on the tragedy, calling for
cost of producing steel at a mini-mill. 34 This “…immediate and comprehensive relief to pre-
relationship mitigates profit contractions dur- vent America from being seriously compro-
ing periods of slackening demand. mised in its ability to satisfy the steel demand so
The largest mini-mill operation, Nucor Steel of critical for our national security...Wall Street has
Charlotte, North Carolina, has shown operating literally turned its back on the American steel
profits and net profits in each of the 10 years from industry. But in light of the tragic events of the
1991 to 2000. 35 Only 2 of the 18 steel producers in past week, we do not believe that America can
bankruptcy since 1998 are mini-mills, but neither afford to turn its back on the reality that, unless
of the two achieved full operational status before this government gives immediate relief to our
technical problems crippled their operations. industry and its workers, it is likely that steel will
Until recent years, mini-mills were averse to soon become a commodity, like oil, whose price U.S. steel capacity
using trade remedies. Mini-mills had no griev- is controlled by governments abroad.”38 and production so
ance with imports and they were generally prof- Even President Bush lent support to the exceed military
itable. But eventually they succumbed to the national security argument at a USWA picnic
temptation of the inflated prices—and profits— last summer. He said, “If you’re the Commander demand that even
that trade barriers could deliver. Also, the mini- in Chief, it makes sense, common sense, not to massive production
mills began to see their destiny of preeminence be heavily reliant upon materials such as steel. If
challenged by domestic firms who produce fin- you’re worried about the security of the country
cutbacks have no
ished steel products from imported slab. In rare and you become over-reliant upon foreign security
common cause, the mini-mills and the unions sources of steel, it can easily affect the capacity of implications.
fought hard to ensure that slab was included in our military to be well supplied.”39
the recent injury finding. In reality, claims of a threat to national secu-
rity are totally without foundation. In 2000, the
U.S. military accounted for only 0.03 percent
The National Security Canard of steel industry deliveries. Shipments to the
military in 1991—the year of the Persian Gulf
The terrorist attacks in September drew War—were only 0.1 percent of total industry
attention to a frequently repeated claim of the deliveries.40 The fact is that U.S. steel capacity
steel industry—that domestic steel production is and production so exceed military demand that
a matter of national security. Using September 11 even massive production cutbacks have no
as a segue, Jim Robinson, an assistant director of security implications.
the USWA, said, “This should be a reminder to Last year the U.S. Department of
people that steel is a critical industry for the Commerce conducted an investigation under
United States, both strategically and economical- Section 232 of U.S. trade law to determine
ly. Driving steel out of business economically has whether imports of iron ore and semifinished
the same impact as physical bombings.”36 steel pose a threat to national security. The
Paul Gipson, president of a local union repre- Commerce Department’s final report, issued
senting workers at Bethlehem Steel, had the fol- last October, announced the following conclu-
lowing insights: “We have become so lax. We sion: “National defense requirements, as com-
have opened our borders to anyone. We have municated to the Department of Commerce by
opened our industries to anyone in the world. DOD, for finished steel—and thus for iron ore or
What happened Tuesday is a threat to our nation. semifinished steel as inputs—are very low and
9
likely to remain flat over the next five years. trade agreements, which benefit all Americans.
DOD’s current and projected demand for iron Protectionism is always bad policy. But in the
ore and steel can be readily satisfied by domestic case of the steel industry, in which protectionism
production…”41 (Emphasis added.) has been tried so often with such consistently dis-
Those sincerely concerned with U.S. nation- appointing results, it would be delinquent to pass on
al security should be relieved by the Commerce the current opportunity to change course once and
Department’s findings. It is interesting, then, for all. There are simply too many inefficient firms
that the USWA was “bitterly disappointed”42 by suppressing prices and profitability. This situation
the results of the Section 232 investigation. would not pose a long-term problem if these firms
Will we be less secure if American bridges and could shut down, but failed mills face exit barriers
roads (termed “transportation security infrastruc- that undermine this vital attrition process. Trade
ture” by American Iron and Steel Institute presi- restrictions only strengthen these barriers.
dent Andrew Sharkey)43 as well as petroleum Some say that industry consolidation is nec-
refineries and storage tanks (“energy security essary, but that significant legacy costs are pre-
infrastructure”)44 are made with steel from venting mergers and acquisitions. That may be
Canada or France or Brazil? Notwithstanding the true, but consolidation is not the only alternative
steel industry’s special pleading, the fact is that we for eliminating inefficient capacity. Attrition
are more secure when foreign steel is available, works too. Attrition works if the inefficient firms
because competition creates better products and are liquidated in bankruptcy, and their assets are
lower prices for the people and companies that auctioned to the highest bidders.
use this input to manufacture value-added mate- The largest obstacles to attrition are subsidy
rials and equipment. Most revealing of the indus- programs like the Emergency Steel Loan
try’s descent into desperate diatribe is its simulta- Guarantee Program, unrealistic unions seeking
neous insistence on curtailing supply while warn- to prevent shutdowns, and the U.S. trade reme-
ing of impending shortages. dy laws. Inefficient operations need to be retired,
and this can be accomplished only if market sig-
nals are not distorted by these interferences.
Conclusion When an operation is inefficient and losing
money, access to investment naturally dwindles.
The U.S. steel industry—but more impor- Attempts to mitigate this outcome perpetuate
tant, the country—will be best served if the root problem. Although its expansion is
President Bush resists the temptation to impose being considered under different legislation
new trade restrictions. As politically expedient as pending in Congress, the Emergency Steel
Consolidation is this approach may seem, additional trade barri- Loan Guarantee program should be abolished.
ers will hamper adjustment in the industry, Within the context of multilateral negotia-
not the only unfairly and unwisely burdening healthy steel tions initiated at Doha, Qatar, last year, the
alternative for producers. Such restrictions will cause the pro- U.S. government should agree to reforms
eliminating duction costs of the far more numerous and eco- designed to improve the aim of the antidump-
nomically significant steel users to rise, unjusti- ing law. Currently, the antidumping law goes
inefficient capacity. fiably saddling them with cost disadvantages. well beyond its purpose, punishing rational
Attrition works if More steel protection will encourage retaliation behavior and practices that are legal in a
from abroad, punishing U.S. exporters in other domestic context. This scapegoating represents
the inefficient firms sectors and dissuading foreign governments a significant barrier to exit, which maintains
are liquidated in from undertaking necessary reforms to make inefficient domestic capacity.
bankruptcy, and their markets more accessible to imports. The United States is currently, and will be for
Finally, the gross hypocrisy of another layer of the foreseeable future, the largest steel market in the
their assets are steel protection in the midst of U.S.-led efforts world. As such, there is demand for import compe-
auctioned to the to reduce worldwide steel capacity undermines tition. A strong domestic steel industry, one com-
U.S. leadership on trade and prospects for new prising primarily electric-arc furnace producers and
highest bidders.
10
a few integrated mills, can compete with imports. 8. Ibid.
The weak domestic firms are the ones most threat- 9. For a detailed discussion on how the antidump-
ened by import competition. If they cannot com- ing law fails at measuring and redressing dumping,
pete, they should shut down permanently. see Brink Lindsey, “The U.S. Antidumping Law:
Rhetoric versus Reality,” Cato Trade Policy Anal-
If the effects of a one-time surge in imports— ysis no. 7, August 16, 1999.
even a dramatic surge—can reverberate for 4 years
and precipitate 30 bankruptcies, it is evident that 10. See various 10-K financial reports, Securities and
the industry requires some restructuring. An Exchange Commission,.
industry in which margins are that volatile, in 11. Frank Haflich, “CSI says East mills play import
which any exogenous event could spell disaster for role on W. Coast,” American Metal Market, May 4,
so many firms, should not be artificially supported. 2001, p. 1.
Many other industries experienced similar shocks 12. William H. Barringer and Kenneth J. Pierce,
in 1998 caused by the same macroeconomic events Paying the Price for Big Steel (Washington: American
abroad. But those industries have prevailed without Institute for International Steel, 2000), p. 112.
resort to protectionism and subsidies. The future of 13. Ibid.
the U.S. steel industry depends on its ability to dis-
pense with these crutches as well. 14. OECD, “Follow-up to Special Meeting at
High-Level on Steel Issue: U.S. Government
Report” (Paris: OECD), December 17, 2001, p. 6.
11
25. “USS Asks $12 Billion for Merger,” Pittsburgh Commission,.
Post-Gazette , January 8, 2002.
36. Robin Biesen, The Times (Munster, Ind.) Knight
26. Barringer and Pierce, pp. 120-122. Ridder/Tribune Business News via COMTEX, Sep-
tember 13, 2001.
27. Ibid., p. 127.
37. Ibid.
28. American Institute of Certified Public
Accountants, “OPEBs: FASB prescribes strong 38. United Steel Workers of America, “USWA Tells
medicine (accounting for other postretirement bene- ITC ‘Immediate and Comprehensive Relief’
fits according to the Financial Accounting Standards Necessary to Prevent American Steel Industry’s
Board),”. Collapse,” Press Release, September 17, 2001.
29. Comments of Thomas Danjczek, President, Steel 39. United Steel Workers of America, “USWA
Manufacturers Association, at a Cato Institute Policy ‘Bitterly Disappointed’ With Bush Administration
Forum entitled “What’s Wrong with the U.S. Steel Iron Ore Ruling,” Press Release, January 10, 2002.
Industry—Again?” February 20, 2001.
40. David Phelps, President, American Institute for
30. Alan Greenspan, Chairman, Federal Reserve Board, International Steel, “The Steel Industry: Still in
before the Senate Banking Committee, July 28, 1999. Distress After all These Years?” speech before the
Break Bulk Expo 2001, October 1, 2001.
31. Thomas Danjczek, “The Sun Never Sets on Mini-
mill Adjustment,” Steel Manufacturers Association 41. U.S. Department of Commerce, Bureau of
Web site, SMA Export Administration, “The Effect of Imports of
_AMM.htm, June 2001. Iron Ore and Semi-Finished Steel on the National
Security,” An Investigation Conducted Under
32. OECD, “Follow-up to Special Meeting at High- Section 232 of the Trade Expansion Act of 1962, as
Level on Steel Issue: U.S. Government Report” amended, October 2001, p. 1.
(Paris: OECD), December 17, 2001, p. 10.
42. United Steel Workers of America, “USWA
33. Barringer and Pierce, pp. 256-257. ‘Bitterly Disappointed’ With Bush Administration
Iron Ore Ruling,” Press Release, January 10, 2002.
34. Ibid., p. 259. The table demonstrates that scrap
costs equal $155 out of total cost of $315 to produce 43. Andrew Sharkey, The National Post (Toronto,
one ton of hot-rolled steel at a mini-mill. Canada), Editorial Section, October 22, 2001.
35. See Nucor’s 2000 10-K, Securities and Exchange 44. Ibid.
12
Trade Briefing Papers from the Cato Institute
“Steel Trap: How Subsidies and Protectionism Weaken the U.S. Fast Track to Freer Trade” by Daniel T. Griswold (no. 34; October 30, 1997)
“Anti-dumping Laws Trash Supercomputer Competition” by Christopher M. Dumler (no. 32; October 14, 1997)
13
Trade Policy Analysis Papers from the Cato Institute
“Safety Valve or Flash Point? The Worsening Conflict between U.S. Trade Laws and WTO Rules” by Lewis E. Leibowitz (no. 17;
November 6, 2001)
“Safe Harbor or Stormy Waters? Living with the EU Data Protection Directive” by Aaron Lukas (no. 16; Octorber 30, 2001)
)
14)
15
José Piñera people in peaceful cooperation and mutual prosperity.
International Center for
The center is part of the Cato Institute, an independent policy research organization in
Pension Reform
Washington, D.C. The Cato Institute pursues a broad-based research program rooted in the
Razeen Sally traditional American principles of individual liberty and limited government.
London School of
Economics For more information on the Center for Trade Policy Studies,
visit.
George P. Shultz
Hoover Institution Other Trade Studies from the Cato Institute
Walter B. Wriston
Former Chairman and
“America’s Bittersweet Sugar Policy” by Mark A. Groombridge, Trade Briefing Paper no. 13
CEO, Citicorp/Citibank
(December 4, 2001)
Clayton Yeutter “Safety Valve or Flash Point? The Worsening Conflict between U.S. Trade Laws and WTO
Former U.S. Trade Rules” by Lewis E. Leibowitz, Trade Policy Analysis no. 17 (November 6, 2001)
Representative
“Safe Harbor or Stormy Waters? Living with the EU Data Protection Directive” by Aaron Lukas,
Trade Policy Analysis no. 16 (October 30, 2001)
, | https://www.scribd.com/document/21069129/Steel-Trap-How-Subsidies-and-Protectionism-Weaken-the-U-S-Steel-Industry-Cato-Trade-Briefing-Paper-No-14 | CC-MAIN-2019-39 | refinedweb | 7,491 | 53.31 |
Real time Chat App with OnsenUI and Horizon!
>>IMAGE.
The entire source code of the app is available at GitHub. You can try out the app here.
We will first have a look at RethinkDB and then later build our app.
What is RethinkDB and Horizon.js?
RethinkDB is a real time database. The difference to a traditional database is the ability to listen to database changes. This makes real time updates very easy. Horizon is a JavaScript framework that makes it easier to interact with RethinkDB. You can install Horizon by doing:
$ npm install -g horizon
With the following command one can start a simple Horizon server in development mode on port 5000:
$ hz serve --dev --bind all -p 5000
The parameter
--bind all makes it accessible throughout the entire network. In development mode there is no user authentication and no security rule. For this tutorial we are not going to deal with authentication, if you are interested to learn about it, please check the documentation.
To start Horizon in our application we will need to just use the
connect() function
import Horizon from '@horizon/client';
const horizon = Horizon();
horizon.onReady(() => {
console.log('horizon is ready');
});
console.log('connect horizon');
horizon.connect();
For the chat app we will create two tables, one for the rooms which contains just the room names with their IDs, and a message table that will contain the
roomID, the message itself and the username.
To create a room and messages we can write some simple functions:
createRoom: (roomName) => {
horizon('chatRooms').store({ name: roomName })
}
createMessage: (authorName, roomID, message) => {
horizon('messages').store({
author: authorName,
date: new Date(),
message: message,
roomID: roomID
});
}
The function
createRoom creates a simple room with the provided room name and
createMessage creates a message for the room. The interesting part is how we can listen to changes of the database. We want to get all the messages from one room ordered by their date:
horizon('messages').findAll({roomID: roomID}).order('date').watch().subscribe((data) => {
// update messages here
});
The nice part about this is that the function in subscribe will be called every time the data is updated. In React we could just update the state and have updated data with almost no code.
Amazingly, this is almost all the backend code we will need for our App! Now let’s start building using MobX and the React Components for Onsen UI.
Building the components in Onsen UI
The React Components for Onsen UI make it very easy to build simple pages with Navigation without too much code. Our application will contain two screens: The first one will be a simple login screen, where the user can enter the name of the chatroom and the user name. We will render everything in a component called
App that will contain the children:
Before we start looking at the views, let’s look at the application state first. The application state is managed via MobX. MobX is a JavaScript library that uses observables to do certain actions automatically once an observed variable changes. In the case of React, MobX calls
setState() if a variable changes which the state depends on. If you want to learn more about MobX, I highly recommend a look at our recent tutorial that uses MobX to create a simple stopwatch.
The following code defines the application state: Mainly it will contain the current username, chatroom, the database connection to Horizon and some additional page-related information. In MobX we mark variables with the decorator
@observable to indicate that MobX should observe it. If they change, the associated views will call a rerender automatically. Functions marked with
@action are functions that change observable variables and
@computed functions getters that depend on observables that will be only updated once an associated variable is changed.
The Login Screen
In the following explanation we will only look at the structure and the basic interaction of the components, the curious reader can look the details up at GitHub.
For the navigation we will use Onsen UI
Navigator: We provide it an initial route which will contain a component we will render to. In our case this component will be
LoginPage.
The structure of the
Login component will look something like this:
The
LoginPage consists of an image, two inputs and a button. The application state is passed down to the component and updating is very simple, as can be seen on the
UserInput:
Once the value changes, we just update the variable and MobX will automatically rerender the
UserInput component and only this component, since the username is only displayed there.
After the username and room name is entered, we will either create a room if it does not already exist or load the initial one:
Chat Room Page
The second page will contain the main chat messages. On the left side your messages will be presented, while on the right side other people’s messages will be shown. Again, let’s look at a small overview of the messages:
The
ChatRoomPage component consists of a toolbar at the top, a bottom toolbar at the bottom and the messages in the middle. We will use RethinkDB and MobX to update the view: Whenever a new message is received that is written by oneself, the view will scroll down. When a message is written by somebody else, a small popup message will be shown that one can click to go to the bottom. The code of the main component looks like this:
Conclusion
Horizon combined with Onsen UI makes it really simple and fast to build a real time chat application. There are many resources and videos to learn more about RethinkDB. I highly recommend this video and their website. If you have any questions or feedback feel free to ask in our community. We would also appreciate a 🌟 on our GitHub repo! | https://medium.com/the-web-tub/real-time-chat-app-with-onsenui-and-horizon-952647c3fd13 | CC-MAIN-2018-51 | refinedweb | 977 | 60.95 |
CFD-Wiki - User contributions [en] 2018-02-18T00:22:39Z From CFD-Wiki MediaWiki 1.16.5 Siemens FAQ 2011-06-22T08:59:31Z <p>Seumasm: </p> <hr /> <div>This section is empty. This is just a suggestion on how to structure it. Please feel free to add questions and answers here!<br /> <br /> == STAR-CD ==<br /> <br /> === Using multiple POSDAT.F - simple trick ===<br /> <br /> Answer:<br /> <br />:<br /> <br />:<br /> <br /> INCLUDE 'POSDAT1.TXT'<br /> <br /> if you need to run the first job. this command automatically transfers the control to the POSDAT1.TXT. Mind that though they are named .txt they still need to be in proper fortran format. else, this will result in compilation error. <br /> <br /> Also they cannot have the extension '.f' because star tries to compile them as separate files and ends up giving an error.<br /> <br /> === Using scene files to produce 3D images ===<br /> <br />.<br /> <br />.<br /> <br /> === Using PRODEFS, PROINIT and .Prostar.Defaults to customise pro-STAR ===<br /> <br /> Setup a directory and point the STARUSR environmental variable at it<br /> <br /> ==== PRODEFS ====<br /> <br /> This file can be used to create customised commands that perform one or more operations (obviously the commands cannot conflict with currently defined pro-STAR commands). <br /> <br />.<br /> <br /> ==== PROINIT ====<br /> <br /> This file controls what happens during the startup of pro-STAR, the file is read and any commands in it issued. <br /> <br /><br /> <br /> ==== .Prostar.Defaults ====<br /> <br /> This file can be used to control the action of your function keys. Again all that needs to be done is the creation of a file called .Prostar.Defaults in your STARUSR directory.<br /> <br /><br /> <br /> == STAR-CCM+ ==<br /> <br /> === What is STAR-CCM+? ===<br /> <br />. <br /> <br /> STAR-CCM+ is a face-based code, which means that it can mesh and solve on arbitrary polyhedral cells. It uses object-oriented (OO) programming and has a client-server architecture. The server is written in C++ and the client in Java.<br /> <br /> One of its key features is its simulation process. The user is able to go from CAD geometry to post-processing entirely within the STAR-CCM+ environment.<br /> <br /> There is a STAR-CCM+ lite version available with more limited functionality and CD-adapco's own 3D CAD tool ( 3D-CAD ) has been integrated into the STAR-CCM+ client.<br /> <br /> === How can I try STAR-CCM+? ===<br /> <br /> Please contact your nearest CD-adapco office or agent (full list at [[]]) or e-mail CD-adapco at<br /> <br /> [[info@us.cd-adapco.com]] or<br /> <br /> [[info@uk.cd-adapco.com]]<br /> <br /> to request a trial license of STAR-CCM+<br /> <br /> You can now also download 20 example cases, with instructions on how to set-up and run and post-process the calculations.<br /> <br /> === How do I produce an animation in STAR-CCM+ ===<br /> <br /> Animations can be produced for either a transient of steady-state calculation. <br /> <br />.<br /> <br /> Steady: You can generate an animation for a steady result using a Java macro, varying the position of a plane section, length of a streamline etc, and writing an image file. Example macro is below.<br /> <br /> // STAR-CCM+ macro: AnimatePlaneSection.java<br /> package macro;<br /> <br /> import java.util.*;<br /> <br /> import star.common.*;<br /> import star.base.neo.*;<br /> import star.vis.*;<br /> <br /> <br /> public class AnimatePlaneSection extends StarMacro {<br /> <br /> public void execute() {<br /> double xCoord ;<br /> int FileName ;<br /> String path = "/INSERT_PATH/" ;<br /> StringBuffer path2 = new StringBuffer("") ;<br /> xCoord = -1.0 ;<br /> FileName = 0 ;<br /> <br /> Simulation simulation_0 = <br /> getActiveSimulation();<br /> <br /> // select plane section<br /> PlaneSection planeSection_0 = <br /> ((PlaneSection) simulation_0.getPartManager().getObject("plane section 2"));<br /> // select scene<br /> Scene scene_0 = <br /> simulation_0.getSceneManager().getScene("Geometry Scene 1");<br /> <br /> // define number of loops<br /> int counter = 150; <br /> <br /> for (int i=0; i<counter;i++) <br /> {<br /> // set and increment the X coordinate of the plane section<br /> xCoord = xCoord + 0.1;<br /> <br /> // increment the file name so that it doesn't overwrite the last one<br /> FileName = FileName + 1 ;<br /> <br /> // create the file name <br /> path2.append("PlaneSection").append(i).append(".png") ;<br /> <br /> // change the X coordinate of the plane section, leaving the others fixed.<br /> planeSection_0.setOrigin(new DoubleVector(new double[] {xCoord, -11.0, 10.98441335}));<br /> <br /> // write image file giving the name (path + path2.toString()) and the screen resolution (1068, 605)<br /> scene_0.printAndWait(resolvePath(path + path2.toString()), 1, 1068, 605);<br /> <br /> // clear path2 which contains the file name<br /> path2.delete(0,path2.length());<br /> }<br /> }<br /> }<br /> <br /> == STAR-CAD Series ==<br /> <br /> === STAR-Design ===<br /> <br />. <br /> <br /> === STAR-Cat5 ===<br /> <br /> Catia 5 plugin <br /> <br /> === STAR-NX ===<br /> <br /> Unigraphics NX plugin<br /> <br /> === STAR-Works ===<br /> <br /> SolidWorks plugin<br /> <br /> === STAR-Pro/E ===<br /> <br /> Pro-Engineer plugin<br /> <br /> == es-soltiuons ==<br /> * [ CD Printing]<br /> [[Category: FAQ's]]<br /> <br /> {{stub}}</div> Seumasm | https://www.cfd-online.com/W/index.php?title=Special:Contributions/Seumasm&feed=atom&limit=50&target=Seumasm&year=&month= | CC-MAIN-2018-09 | refinedweb | 843 | 57.87 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.