text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
On Tue, 2005-05-10 at 10:51 +0100, Simon Marlow wrote: > Yes, the module overlap restriction has started to bite us. It now > turns out that you can't build GHC from CVS if you have wxHaskell > installed. > > Why? Because wxHaskell depends on some older non-hierarchical packages > (util in particular, which itself depends on lang). These package > populate the top-level module namespace, and steal a bunch of module > names that you might want to use in your program. There are a few libs that do this and it makes them essentially unpackagable when using ghc 6.4. Not because of conflicts with ghc itself but just conflicts with each other and totally unrelated packages. Just to make it clear to people how annoying it is, suppose I create a library with this cabal description: name: dummy version: 1.0 depends: util-1.0 exposed: True and then I register this package (to show the problem, this package doesn't need to expose any modules or contain any object code) $ ghc-pkg register dummy.cabal --user Now if we try to compile any package that contains a module called GetOpt (and lets face it there are dozens of programs that have made local copies of this module): GetOpt.hs: module GetOpt where -- module needn't have any code in it Now if we try to compile it: $ ghc-6.4 -c GetOpt.hs GetOpt.hs:1:0 Module `GetOpt` is a mamber of package util-1.0 To compile this module, please use -ignore-package util-1.0.. We discovered this in Gentoo, when after installing Cabal-0.5 (which is exposed by default and uses the util package), compiling happy failed. Our intrim policy in Gentoo Haskell packaging land is that we cannot package any library that is exposed by default and depends on any of the old hslibs packages because it will cause compile failures in unrelated programs. (We didn't realise wxhaskell did this too) > Thoughts? Ditch hslibs asap! We can probably draw up a list of the packages we know about that do this. Duncan | http://www.haskell.org/pipermail/libraries/2005-May/003766.html | CC-MAIN-2014-23 | refinedweb | 351 | 72.05 |
scale and orient glyph(s) according to eigenvalues and eigenvectors of symmetrical part of tensor More...
#include <vtkTensorGlyph.h>
scale and orient glyph(s) according to eigenvalues and eigenvectors of symmetrical part of tensor
vtkTensorGlyph is a filter that copies a geometric representation (specified as polygonal data) to every input point. The geometric representation, or glyph, can be scaled and/or rotated according to the tensor at the input point. Scaling and rotation is controlled by the eigenvalues/eigenvectors of the symmetrical part of the tensor as follows: For each tensor, the eigenvalues (and associated eigenvectors) are sorted to determine the major, medium, and minor eigenvalues/eigenvectors. The eigenvalue decomposition only makes sense for symmetric tensors, hence the need to only consider the symmetric part of the tensor, which is 1/2 (T + T.transposed()).
If the boolean variable ThreeGlyphs is not set the major eigenvalue scales the glyph in the x-direction, the medium in the y-direction, and the minor in the z-direction. Then, the glyph is rotated so that the glyph's local x-axis lies along the major eigenvector, y-axis along the medium eigenvector, and z-axis along the minor.
If the boolean variable ThreeGlyphs is set three glyphs are produced, each of them oriented along an eigenvector and scaled according to the corresponding eigenvector.
If the boolean variable Symmetric is set each glyph is mirrored (2 or 6 glyphs will be produced)
The x-axis of the source glyph will correspond to the eigenvector on output. Point (0,0,0) in the source will be placed in the data point. Variable Length will normally correspond to the distance from the origin to the tip of the source glyph along the x-axis, but can be changed to produce other results when Symmetric is on, e.g. glyphs that do not touch or that overlap.
Please note that when Symmetric is false it will generally be better to place the source glyph from (-0.5,0,0) to (0.5,0,0), i.e. centred at the origin. When symmetric is true the placement from (0,0,0) to (1,0,0) will generally be more convenient.
A scale factor is provided to control the amount of scaling. Also, you can turn off scaling completely if desired. The boolean variable ClampScaling controls the maximum scaling (in conjunction with MaxScaleFactor.) This is useful in certain applications where singularities or large order of magnitude differences exist in the eigenvalues.
If the boolean variable ColorGlyphs is set to true the glyphs are colored. The glyphs can be colored using the input scalars (SetColorModeToScalars), which is the default, or colored using the eigenvalues (SetColorModeToEigenvalues).
Another instance variable, ExtractEigenvalues, has been provided to control extraction of eigenvalues/eigenvectors. If this boolean is false, then eigenvalues/eigenvectors are not extracted, and the columns of the tensor are taken as the eigenvectors (the norm of column, always positive, is the eigenvalue). This allows additional capability over the vtkGlyph3D object. That is, the glyph can be oriented in three directions instead of one.
Definition at line 90 of file vtkTensorGlyph.h.
Definition at line 93 of file vtkTensorGlyph.h.
Definition at line 192 of file vtkTensorGlyph.
Construct object with scaling on and scale factor 1.0.
Eigenvalues are extracted, glyphs are colored with input scalar data, and logarithmic scaling is turned off.
Specify the geometry to copy to each point.
Note that this method does not connect the pipeline. The algorithm will work on the input data as it is without updating the producer of the data. See SetSourceConnection for connecting the pipeline.
Specify a source object at a specified table location.
New style. Source connection is stored in port 1. This method is equivalent to SetInputConnection(1, id, outputPort).
Definition at line 121 of file vtkTensorGlyph.h.
Turn on/off scaling of glyph with eigenvalues.
Specify scale factor to scale object by.
(Scale factor always affects output even if scaling is off.)
Turn on/off drawing three glyphs.
Turn on/off drawing a mirror of each glyph.
Set/Get the distance, along x, from the origin to the end of the source glyph.
It is used to draw the symmetric glyphs.
Turn on/off extraction of eigenvalues from tensor.
Turn on/off coloring of glyph with input scalar data or eigenvalues.
If false, or input scalar data not present, then the scalars from the source object are passed through the filter.
Set the color mode to be used for the glyphs.
This can be set to use the input scalars (default) or to use the eigenvalues at the point. If ThreeGlyphs is set and the eigenvalues are chosen for coloring then each glyph is colored by the corresponding eigenvalue and if not set the color corresponding to the largest eigenvalue is chosen. The recognized values are: COLOR_BY_SCALARS = 0 (default) COLOR_BY_EIGENVALUES = 1
Definition at line 211 of file vtkTensorGlyph.h.
Definition at line 212 of file vtkTensorGlyph.h.
Turn on/off scalar clamping.
If scalar clamping is on, the ivar MaxScaleFactor is used to control the maximum scale factor. (This is useful to prevent uncontrolled scaling near singularities.)
Set/Get the maximum allowable scale factor.
This value is compared to the combination of the scale factor times the eigenvalue. If less, the scale factor is reset to the MaxScaleFactor. The boolean ClampScaling has to be "on" for this to work. 245 of file vtkTensorGlyph.h.
Definition at line 246 of file vtkTensorGlyph.h.
Definition at line 247 of file vtkTensorGlyph.h.
Definition at line 248 of file vtkTensorGlyph.h.
Definition at line 249 of file vtkTensorGlyph.h.
Definition at line 250 of file vtkTensorGlyph.h.
Definition at line 251 of file vtkTensorGlyph.h.
Definition at line 252 of file vtkTensorGlyph.h.
Definition at line 253 of file vtkTensorGlyph.h.
Definition at line 254 of file vtkTensorGlyph.h. | https://vtk.org/doc/nightly/html/classvtkTensorGlyph.html | CC-MAIN-2019-51 | refinedweb | 975 | 58.38 |
Overview
Atlassian SourceTree is a free Git and Mercurial client for Windows.
Atlassian SourceTree is a free Git and Mercurial client for Mac.
django-treebeard
django-treebeard is a library that implements efficient tree implementations for the Django Web Framework 1.0+. It includes 3 different tree implementations: Adjacency List, Materialized Path and Nested Sets. Each one has it's own strength and weaknesses but share the same API, so it's easy to switch between implementations.
django-treebeard uses Django Model Inheritance with abstract classes to let you define your own models. To use django-treebeard:
- Run easy_install django-treebeard to install the latest treebeard version from PyPi 1.1. If you don't like easy_install, download a release from the treebeard download page or get a development version from the treebeard mercurial repository and run python setup.py install
- Add 'treebeard' to the INSTALLED_APPS section in your django settings file.
- Create a new model that inherits from one of django-treebeard's abstract tree models: mp_tree.MP_Node (materialized path), ns_tree.NS_Node (nested sets) or al_tree.AL_Node (adjacency list).
- Run python manage.py syncdb
- (Optional) If you are going to use the admin.TreeAdmin class for the django admin, you should install treebeard as a directory instead of an egg: easy_install --always-unzip django-treebeard. If you install treebeard as an egg, you'll need to enable django.template.loaders.eggs.load_template_source in the TEMPLATE_LOADERS setting in your django settings file. Either way, you need to add the path (filesystem or python namespace) to treebeard's templates in TEMPLATE_DIRS.
You can find the documentation in | https://bitbucket.org/larsc/django-treebeard | CC-MAIN-2015-48 | refinedweb | 266 | 50.84 |
HOUSTON (ICIS)--US base oil margins are crunched on high vacuum gas oil (VGO) costs and recent reductions in posted prices, market players said this week.
“Base oil economics are crazy right now, with high low sulphur VGO prices and low base oil netbacks,” a base oil market consultant said.
“With VGO prices so high, a price increase might have been more warranted than the recent price reductions,” one market source said.
VGO is a key feedstock for Group I and some Group II base oil production streams.
It is also a primary feedstock for fluid catalytic cracking (FCC) units where other refined products such as gasoline and distillates are produced.
Market sources said that a bout of refinery maintenance work in the US Gulf coast region is one underlying support factor for high VGO prices.
VGO was trading in the mid $2.50/gal range in November 2013, but jumped into the $2.90 range in December and stayed at that level, bouncing between the low $2.90s and the high $2.90s into this month.
The highest VGO prices in the last two years took place in April 2013, when values hit $3.30/gal.
The volatility in the price fluctuations are also a factor, with the climb to the 2013 high happening over a period of less than 30 days.
?xml:namespace>
Motiva’s light viscosity Group II base oil posted prices are at $3.37/gal following the decrease. The company’s heavy grade Group II posted price is at $4.25/gal after the decrease.
Other producers announced posted price reductions on various dates, with decreases ranging from 10 cents/gal to 25 cents/gal, depending upon the producer and the base oil grade.
VGO costs form a main component in the degree of base oil margin opportunity versus the price of the base stock. | http://www.icis.com/resources/news/2014/02/07/9751501/us-base-oil-margins-crunched-on-high-gas-oil-costs/ | CC-MAIN-2015-32 | refinedweb | 310 | 72.76 |
. In December 2015, Apple open sourced Swift at and made binaries available for Linux as well as OS X. The content in this chapter can be run on either Linux or OS X, but the remainder of the book is either Xcode-specific or depends on iOS frameworks that are not open source. Developing iOS applications requires Xcode and OS X.
This chapter will present the following topics:
How to use the Swift REPL to evaluate Swift code
The different types of Swift literals
How to use arrays and dictionaries
Functions and the different types of function arguments
Compiling and running Swift from the command line
Apple released Swift as an open source project in December 2015, hosted at and related repositories. Information about the open source version of Swift is available from the site. The open-source version of Swift is similar from a runtime perspective on both Linux and OS X; however, the set of libraries available differ between the two platforms.
For example, the Objective-C runtime was not present in the initial release of Swift for Linux; as a result, several methods that are delegated to Objective-C implementations are not available.
"hello".hasPrefix("he") compiles and runs successfully on OS X and iOS but is a compile error in the first Swift release for Linux. In addition to missing functions, there that are outside of the base-collections library, is implemented in Objective-C on OS X and iOS, but on Linux, it is a clean-room reimplementation! on the same machine without conflicting.
On Linux, the
swift binary can be executed provided that it and the dependent libraries are in a suitable location.
The Swift prompt displays
> for new statements and
. for a continuation. Statements and expressions that are typed into the interpreter are evaluated and displayed. Anonymous values are given references so that they can be used subsequently:
> "Hello World" $R0: 3 + 4 $R1: Int = 7 > $R0 $R2: $R1 $R3: Int = 7
Numeric types in Swift can represent both signed and unsigned integral values with sizes of. Please floating point types that are available in Swift which use the IEEE754 floating point standard. The
Double type represents 64 bits worth of data, while. For example:
> start with a slash (\) and can be one of the following:
\\: This is a literal slash
\
\0: This is the null character
\':This is a literal single quote
'
\": This is a literal double quote
"
\t: This is a tab
\n: This is a line feed
\r: This is a carriage return
\u{NNN}: This is a Unicode character, such as the Euro symbol
\u{20AC}, or a smiley
\u{1F600}
An interpolated string has an embedded expression, which is evaluated, converted into a
String, and inserted into the result:
> that is assigned to a variable of a different type, provided that three collection types: Array, Dictionary, and Set. They are strongly typed and generic, which ensures that the values of types that are assigned are compatible with the element type. Collections that are defined with
var are mutable; collections defined with
let are immutable.
The literal syntax for arrays uses
[] to store a comma-separated list:
> var shopping = [ "Milk", "Eggs", "Coffee", ] shopping: [String] = 3 values { [0] = "Milk" [1] = "Eggs" [2] = "Coffee" }
Literal dictionaries are defined with a comma-separated
[key:value] format for entries:
> var costs = [ "Milk":1, "Eggs":2, "Coffee":3, ] costs: [String : Int] = { [0] = { key = "Coffee" value = 3 } [1] = { key = "Milk" value = 1 } [2] = { key = "Eggs" value = 2 } }
Tip that are reassigned and added to:
> shopping[0] $R0: costs["Milk"] $R1: Int? = 1 > shopping.count $R2: Int = 3 > shopping += ["Tea"] > shopping.count $R3: Int = 4 > costs.count $R4: Int = 3 > costs["Tea"] = "String" error: cannot assign a value of type 'String' to a value of type 'Int?' > costs["Tea"] = 4 > costs.count $R5: Int = 4
Sets are similar to dictionaries; the keys are unordered and can be looked up efficiently. However, unlike dictionaries, keys don't have an associated value. As a result, they don't have array subscripts, but they do have the
insert,
remove, and
contains methods. They also have efficient set intersection methods, such as
union and
intersect. They can be created from an array literal if the type is defined or using the set initializer directly:
> var shoppingSet: Set = [ "Milk", "Eggs", "Coffee", ] > // same as: shoppingSet = Set( [ "Milk", "Eggs", "Coffee", ] ) > shoppingSet.contains("Milk") $R6: Bool = true > shoppingSet.contains("Tea") $R7: Bool = false > shoppingSet.remove("Coffee") $R8: String? = "Coffee" > shoppingSet.remove("Tea") $R9: String? = nil > shoppingSet.insert("Tea") > shoppingSet.contains("Tea") $R10: Bool = true
In the previous example, the return type of
costs["Milk"] is
Int? and not
Int. This is an optional type; there may be an
Int value or it may be empty. For a dictionary containing elements of type
T, subscripting the dictionary will have an
Optional<T> type, which can be abbreviated as
T? If the value doesn't exist in the dictionary, then the returned value will be
nil. Other object-oriented languages, such as Objective-C, C++, Java, and C#, have optional types by default; any object value (or pointer) can be
null. By representing optionality in the type system, Swift can determine whether a value really has to exist or might be
nil:
> var cannotBeNil: Int = 1 cannotBeNil: Int = 1 > cannotBeNil = nil error: cannot assign a value of type 'nil' to a value of type 'Int' cannotBeNil = nil ^ > var canBeNil: Int? = 1 canBeNil: Int? = 1 > canBeNil = nil $R0: Int? = nil
Optional types can be explicitly created using the
Optional constructor. Given a value
x of type
X, an optional
X? value can be created using
Optional(x). The value can be tested against
nil to find out whether it contains a value and then unwrapped with
opt!, for example:
> var opt = Optional(1) opt: Int? = 1 > opt == nil $R1: Bool = false > opt! $R2: Int = 1
If a
nil value is unwrapped,. Similar to the
|| shortcut, and the
&& operators, the right-hand side is not evaluated unless necessary:
> costs["Tea"] ?? 0 $R2: Int = 4 > costs["Sugar"] ?? 0 $R3: Int = 0
There are three key types of conditional logic in Swift (known as branch statements in the grammar): the
if statement, the
switch statement, and the
guard statement. Unlike other languages, the body of the
if must be surrounded with braces
{}; and if typed in at the interpreter, the
{ opening brace must be on the same line as the
if statement. The
guard statement is a specialized
if statement for use with functions and is covered in the section on functions later in this chapter.
Conditionally unwrapping an optional value is so common that a specific Swift pattern optional binding has been created to avoid evaluating the expression twice:
> var shopping = [ "Milk", "Eggs", "Coffee", "Tea", ] > var costs = [ "Milk":1, "Eggs":2, "Coffee":3, "Tea":4, ] > var cost = 0 > if let cc = costs["Coffee"] { . cost += cc . } > cost $R0: Int = 3
The
if block only executes if the optional value exists. The definition of the
cc constant only exists for the body of the
if block, and it does not exist outside of that scope. Furthermore,
cc is a non-optional type, so it is guaranteed not to be
nil.
Note
Swift 1 only allowed a single
let assignment in an
if block causing a pyramid of nested
if statements. Swift 2 allows multiple comma-separated
let assignments in a single
if statement.
> if let cm = costs["Milk"], let ct = costs["Tea"] { . cost += cm + ct . } > cost $R1: Int = 8
To execute an alternative block if the item cannot be found, an
else block can be used:
> if let cb = costs["Bread"] { . cost += cb . } else { . print("Cannot find any Bread") . } Cannot find any Bread
Other boolean expressions can include the
true and
false literals, and any expression that conforms to the
BooleanType protocol, the
== and
!= equality operators, the
=== and
!== identity operators, as well as the
<,
<=,
>, and
>= comparison operators. The
is type operator provides a test to see whether an element is of a particular type.
Tip
The difference between the equality operator and the identity operator is relevant for classes or other reference types. The equality operator asks Are these two values equivalent to each other?, whereas the identity operator asks Are these two references equal to each other?
There is a boolean operator that is.
As well as the
if statement, there is a ternary if expression that is similar to other languages. After a condition, a question mark (?) is used followed by an expression to be used if the condition is true, then a colon (:) followed by the false expression:
> var i = 17 i: Int = 17 > i % 2 == 0 ? "Even" : "Odd" $R0: String = "Odd"
Swift has a
switch statement that is statement, the evaluation jumps to the end of the
switch block unless the
fallthrough keyword is used. If no
case statements match, the
default statements are executed.
Note
A
default statement is required when the list of cases is not exhaustive. If they are not, the compiler will give an error saying that the list is not exhaustive and that a
default statement is required.
> var position = 21 position: Int = 21 > switch position { . case 1: print("First") . case 2: print("Second") . case 3: print("Third") . case 4...20: print("\(position)th") . case position where (position % 10) == 1: . print("\(position)st") . case let p where (p % 10) == 2: . print("\(p)nd") . case let p where (p % 10) == 3: . print("\(p)rd") . default: print("\; so
1...12 will give a list of integers between one and twelve. The half-open range is specified with two dots and a less than operator; so
1..<10 will provide integers from 1 to 9 but excluding { . print("i is \(i)") . }
If the number is not required, then an underscore (
_) can be used as a hole to act as a throwaway value. An underscore can be assigned to but not read:
> for _ in 1...12 { . print(0: [String] = 4 values { [0] = "Coffee" [1] = "Milk" [2] = "Eggs" [3] = "Tea" } > Array(costs.values) $R1: [Int] = 4 values { [0] = 3 [1] = 1 [2] = 2 [3] = 4 }
Note
The order of keys in a dictionary is not guaranteed; as the dictionary changes, the order may change.
Converting a dictionary's values to an array will result in a copy of the data being made, which can lead to poor performance. As the underlying
keys and
values are of a
LazyMapCollection type, they can be iterated over directly:
> costs.keys $R2: LazyMapCollection<[String : Int], String> = { _base = { _base = 4 key/value pairs { [0] = { key = "Coffee" value = 3 } [1] = { key = "Milk" value = 1 } [2] = { key = "Eggs" value = 2 } [3] = { key = "Tea" value = 4 } } _transform =} }
To print out all the keys in a dictionary, the
keys property can be used with a
for in loop:
> for item in costs.keys { . print more) of values at a time:
> var (a,b) = (1,2) a: Int = 1 b: Int = 2
Tuples can be used to iterate pairwise over both the keys and values of a dictionary:
> for (item,cost) in costs { . print( (in Swift 1 and 2) to use a more traditional form of the
for loop. This has an initialization, a condition that is tested at the start of each loop, and a step operation that is evaluated at the end of each loop. Although the parentheses around the
for loop are optional, the braces for the block of code are mandatory.
Note
It has been proposed that both the traditional
for loop and the increment/decrement operators should be removed from Swift 3. It is recommended that these forms of loops be avoided where possible.
Calculating the sum of integers between 1 and 10 can be performed without using the range operator:
> { . print("\ of statements. The
return statement can be used to leave a function:
> var shopping = [ "Milk", "Eggs", "Coffee", "Tea", ] > var costs = [ "Milk":1, "Eggs":2, "Coffee":3, "Tea":4, ] > func costOf(items:[String], _ costs:[String:Int]) -> Int { . var cost = 0 . for item in items { . if let ci = costs[item] { . cost += ci . } . } . return cost . } > costOf(shopping,costs) $R0: Int = 10
The return type of the function is specified after the arguments with an arrow (
->). If missing, the function cannot return a value; if present, the function must return a value of that type.
Note
The underscore (
_) on the front of the
costs parameter is required to avoid it being a named argument. The second and subsequent arguments in Swift functions are implicitly named. To ensure that it is treated as a positional argument, the
_ before the argument name is required.
Functions with positional arguments can be called with parentheses, such as the
costOf(shopping,costs) call. If a function takes no arguments, then the parentheses are still required.
The
foo() expression calls the
foo function with no argument. The
foo expression represents the function itself, so an expression, such as
let copyOfFoo = foo, results in a copy of the function; as a result, ci = costs[item] { . cost += ci . } . } . of the same type).
Swift functions can have optional arguments by specifying default values in the function definition. When the function is called, if an optional argument is missing, the default value for that argument is used.
Note equals (
=) and then the expression. This expression is re-evaluated each time the function is called without a corresponding argument.
In the
costOf example, instead of passing the value of
costs each time, it could be defined with a default parameter:
> func costOf(items items:[String], costs:[String:Int] = costs) -> Int { . var cost = 0 . for item in items { . if let ci = costs[item] { . cost += ci . } . } . return cost . } > costOf(items:shopping) $R2: Int = 10 > costOf(items:shopping, costs:costs) $R3: Int = 10
Please note that the captured
costs variable is bound when the function is defined.
It is a common code pattern for a function to require arguments that meet certain conditions before the function can run successfully. For example, an optional value must have a value or an integer argument must be in a certain range.
Typically, the pattern to implement this is either to have a number of
if statements that break out of the function at the top, or to have an
if block wrapping the entire method body:
if card < 1 || card > 13 { // report error return } // or alternatively: if card >= 1 && card <= 13 { // do something with card } else { // report error }
Both of these approaches have drawbacks. In the first case, the condition has been negated; instead of looking for valid values, it's checking for invalid values. This can cause subtle bugs to creep in; for example,
card < 1 && card > 13 would never succeed, but it may inadvertently pass a code review. There's also the problem of what happens if the block doesn't
return or
break; it could be perfectly valid Swift code but still include errors.
In the second case, the main body of the function is indented at least one level in the body of the
if statement. When multiple conditions are required, there may be many nested
if statements, each with their own error handling or cleanup requirements. If new conditions are required, then the body of the code may be indented even further, leading to code churn in the repository even when only whitespace has changed.
Swift 2 adds a
guard statement, which is conceptually identical to an
if statement, except that it only has an
else clause body. In addition, the compiler checks that the
else block returns from the function, either by returning or by throwing an exception:
> func cardName(value:Int) -> String { . guard value >= 1 && value <= 13 else { . return "Unknown card" . } . let cardNames = [11:"Jack",12:"Queen",13:"King",1:"Ace",] . return cardNames[value] ?? "\(value)" . }
The Swift compiler checks that the
guard
else block leaves the function, and reports a compile error if it does not. Code that appears after the
guard statement can guarantee that the value is in the
1...13 range without having to perform further tests.
The
guard block can also be used to perform optional binding; if the
guard condition is a
let assignment that performs an optional test, then the code that is subsequent to the
guard statement can use the value without further unwrapping:
> func firstElement(list:[Int]) -> String { . guard let first = list.first else { . return "List is empty" . } . return "Value is \(first)" . }
As the
first element of an array is an optional value, the
guard test here acquires the value and unwraps it. When it is used later in the function, the unwrapped value is available for use without requiring further unwrapping. of Chapter 6, Parsing Networked Data.
Separately, it is also possible to take a variable number of arguments. A function can easily take an array of values with
[], but Swift provides a mechanism to allow calling with multiple arguments, using a variadic parameter, which is denoted as an ellipses (…) after the type. The value can then be used as an array in the function.
Note
Swift 1 only allowed the variadic argument as the last argument; Swift 2 relaxed that restriction to allow a single variadic argument to appear anywhere in the function's parameters.… argument indicates that a variable number of arguments can be passed into the function. Inside the function, it is processed as an ordinary array; in this case, iterating through using a
for in loop.
Note
Int.max is a constant representing the largest
Int value, and
Int.min is a constant representing the smallest
Int value. Similar constants exist for other integral types, such as
UInt8.max, and
Int64.min.
What if no arguments are passed in? If run on a 64 bit system, then the output will be:
> > minmax(1,2,3,4) $R3: (Int, Int)? = (0 = 1, 1 = 4) > var (minimum,maximum) = minmax(1,2,3,4)! minimum: Int = 1 maximum: Int = 4
Returning an optional value allows the caller to determine what should happen in cases where the maximum and minimum are not present.
A tuple is an ordered set of data. The entries in the tuple are ordered, but it can quickly become unclear as to what data is stored, particularly if they are of the same type. In the
minmax tuple, it is not clear which value is the minimum and which value is the maximum, and this can lead to subtle programming errors later on.
A structure (
struct) values of another.
A
struct is defined with the
struct keyword and has variables or values in the body:
> struct MinMax { . var min:Int . var max:Int . }
This defines a
MinMax type, which can be used in place of any of the types that are initializer; if
MinMax() is used, then the default values for each of the structure types are given (based on the structure definition), but these can be overridden explicitly if desired with
MinMax(min:-10,max:11). For example, if the
MinMax struct is defined as
struct MinMax { var min:Int = Int.max; var max:Int = Int.min }, then
MinMax() will return a structure with the appropriate minimum and maximum values filled in.
Note
When a structure is initialized, all the non-optional fields must be assigned. They can be passed in as named arguments in the initializer or specified in the structure definition.
Swift also has classes; these are covered in the Swift classes section in the next chapter.
In the original Swift release, error handling consisted of either returning a
Bool or an optional value from function results. This tended to work inconsistently with Objective-C, which used an optional
NSError pointer on various calls that was set if a condition had occurred.
Swift 2 adds an exception-like error model, which allows code to be written in a more compact way while ensuring that errors are handled accordingly. Although this isn't implemented in quite the same way as C++ exception handling, the semantics of the error handling are quite similar.
Errors can be created using a new
throw keyword, and errors are stored as a subtype of
ErrorType. Although swift
enum values (covered in Chapter 3, Creating an iOS Swift App) are often used as error types,
struct values can be used as well.
Exception types can be created as subtypes of
ErrorType by appending the supertype after the type name:
> struct Oops:ErrorType { . let message:String . }
Exceptions are thrown using the
throw keyword and creating an instance of the exception type:
> throw Oops(message:"Something went wrong") $E0: Oops = { message = "Something went wrong" }
Functions can declare that they return an error using the
throws keyword before the return type, if any. The previous
cardName function, which returned a dummy value if the argument was out of range, can be upgraded to throw an exception instead by adding the
throws keyword before the return type and changing the
return to a
throw:
> func cardName(value:Int) throws -> String { . guard value >= 1 && value <= 13 else { . throw Oops(message:"Unknown card") . } . let cardNames = [11:"Jack",12:"Queen",13:"King",1:"Ace",] . return cardNames[value] ?? "\(value)" . }
When the function is called with a real value, the result is returned; when it is passed an invalid value, an exception is thrown instead:
> cardName(1) $R1: cardName(15) $E2: Oops = { message = "Unknown card" }
When interfacing with Objective-C code, methods that take an
NSError** argument are automatically represented in Swift as methods that throw. In general, any method whose arguments ends in
NSError** is treated as throwing an exception in Swift.
Note
Exception throwing in C++ and Objective-C is not as performant as exception handling in Swift because the latter does not perform stack unwinding. As a result, exception throwing in Swift is equivalent (from a performance perspective) to dealing with return values. Expect the Swift library to evolve in the future towards a throws-based means of error detection and away from Objective-C's use of
**NSError pointers.
The other half of exception handling is the ability to catch errors when they occur. As with other languages, Swift now has a
try/catch block that can be used to handle error conditions. Unlike other languages, the syntax is a little different; instead of a
try/catch block, there is a
do/catch block, and each expression that may throw an error is annotated with its own
try statement:
> do { . let name = try cardName(15) . print("You chose \(name)") . } catch { . print("You chose an invalid card") . }
When the preceding code is executed, it will print out the generic error message. If a different choice is given, then it will run the successful path instead.
It's possible to capture the error object and use it in the catch block:
. } catch let e { . print("There was a problem \(e)") . }
Both of these two preceding examples will catch any errors thrown from the body of the code.
Note
It's possible to catch explicitly based on type if the type is an
enum that is using pattern matching, for example,
catch Oops(let message). However, as this does not work for struct values, it cannot be tested here. Chapter 3, Creating an iOS Swift App introduces
enum types.
Sometimes code will always work, and there is no way it can fail. In these cases, it's cumbersome to have to wrap the code with a
do/try/catch block when it is known that the problem can never occur. Swift provides a short-cut for this using the
try! statement, which catches and filters the exception:
> let ace = try! cardName(1) ace: String = "Ace"
If the expression really does fail, then it translates to a runtime error and halts the program:
> let unknown = try! cardName(15) Fatal error: 'try!' expression unexpectedly raised an error: Oops(message: "Unknown card")
Tip
Using
try! is not generally recommended; if an error occurs then the program will crash. However, it is often used with user interface codes as Objective-C has a number of optional methods and values that are conventionally known not to be
nil, such as the reference to the enclosing window.
A better approach is to use
try?, which translates the expression into an optional value: if evaluation succeeds, then it returns an optional with a value; if evaluation fails, then it returns a
nil value:
> let ace = try? cardName(1) ace: String? = "Ace" > let unknown = try? cardName(15) unknown: String? = nil
This is handy for use in the
if let or
guard let constructs, to avoid having to wrap in a
do/catch block:
> if let card = try? cardName(value) { . print("You chose: \(card)") . }
It is common to have a function that needs to perform some cleanup before the function returns, regardless of whether the function has completed successfully or not. An example would be working with files; at the start of the function the file may be opened, and by the end of the function it should be closed again, whether or not an error occurs.
A traditional way of handling this is to use an optional value to hold the file reference, and at the end of the method if it is not
nil, then the file is closed. However, if there is the possibility of an error occurring during the method's execution, there needs to be a
do/catch block to ensure that the cleanup is correctly called, or a set of nested
if statements that are only executed if the file is successful.
The downside with this approach is that the actual body of the code tends to be indented several times each with different levels of error handling and recovery at the end of the method. The syntactic separation between where the resource is acquired and where the resource is cleaned up can lead to bugs.
Swift has a
defer statement, which can be used to register a block of code to be run at the end of the function call. This block is run regardless of whether the function returns normally (with the
return statement) or if an error occurs (with the
throw statement). Deferred blocks are executed in reverse order of execution, for example:
> func deferExample() { . defer { print("C") } . print("A") . defer { print("B") } . } > deferExample() A B C
Please note that if a
defer statement is not executed, then the block is not executed at the end of the method. This allows a
guard statement to leave the function early, while executing the
defer statements that have been added so far:
> func deferEarly() { . defer { print("C") } . print("A") . return . defer { print("B") } // not executed . } > deferEarly() A C
As Swift can be interpreted, it is possible to use it in shell scripts. By setting the interpreter to
swift with a hashbang, the script can be executed without requiring a separate compilation step. Alternatively, Swift scripts can be compiled to a native executable that can be run without the overhead of the interpreter.
Save the following as
hello.swift:
#!/usr/bin/env xcrun swift print("Hello World")
Tip
In Linux, the first line should point to the location of the
swift executable, such as
#!/usr/bin/swift.
After saving, make the file executable by running
chmod a+x hello.swift. The program can then be run by typing
./hello.swift, and the traditional greeting will be seen:
Hello World
Arguments can be passed from the command line and interrogated in the process using the
Process class through the
arguments constant. As with other Unix commands, the first element (0) is the name of the process executable; the arguments that are passed from the command line start from one (1).
The program can be terminated using the
exit function; however, this is defined in the operating system libraries and so it needs to be imported in order to call this function. Modules in Swift correspond to Frameworks in Objective-C and give access to all functions that are defined as public API in the module. The syntax to import all elements from a module is
import module although it's also possible to import a single function using
import func module.functionName.
Note
Not all foundation libraries are implemented for Linux, which results in some differences of behavior. In addition, the underlying module for the base functionality is
Darwin on iOS and OS X, and is
Glibc on Linux. These can also be accessed with
import Foundation, which will include the appropriate operating system module.
A Swift program to print arguments in uppercase can be implemented as a script:
#!/usr/bin/env xcrun swift import func Darwin.exit # import func Glibc.exit # for Linux let args = Process.arguments[1..<Process.arguments.count] for arg in args { print("\ example, the previous example could be called
upper and it would still work.
While interpreted Swift scripts are useful for experimenting and writing, each time the script is started, it is interpreted using the Swift compiler, the standard data types, such as strings and numbers, that is available on OS X, through the Xcode playground. | https://www.packtpub.com/product/swift-essentials-second-edition/9781785888878 | CC-MAIN-2021-17 | refinedweb | 4,820 | 62.07 |
This action might not be possible to undo. Are you sure you want to continue?
Endian Firewall Reference Manual r. 2.2.1.8
Copyright (c) 2008 Endian srl,“.
Endian Unified Network Security
Content
Content
3 Preface
Accessing the Endian Firewall GUI Features and enhancements in version 2.2 Legal notice Acknowledgments Endian web site 3 3 4 5 5
15 The Status Menu
System status Network status System graphs Traffic graphs Proxy graphs Connections OpenVPN connections 6 7 7 8 8 9 10 11 11 11 11 12 12 13 13 13 13 14 14 14 14 14 SMTP mail statistics Mail queue 15 16 16 16 16 16 16 16 17
6
The System Menu
Home Network configuration Choose type of RED interface Choose network zones Network Preferences Internet access preferences Configure DNS resolver Apply configuration Support Endian Network This screen contains three tabs. Passwords SSH access GUI settings Backup Backup sets Encrypt backup Import Backup files Reset to factory defaults Scheduled backups Shutdown Credits
18 The Network Menu
Edit hosts Routing Static routing Policy routing Interfaces 18 18 18 19 20
21 The Services Menu
DHCP server Fixed leases List of current dynamic leases Dynamic DNS ClamAV antivirus Anti archive bomb configuration ClamAV signature update schedule configuration ClamAV virus signatures Time server Traffic shaping Traffic shaping per uplink Traffic shaping services Spam Training Intrusion detection High availability Master setup Slave setup Traffic Monitoring 21 22 22 23 23 23 24 24 24 24 24 25 25 26 26 26 27 27
2
Endian Unified Network Security
Content
28 The Firewall Menu
Port forwarding / NAT Port forwarding Source NAT Outgoing traffic Inter-Zone traffic VPN traffic System access 28 28 29 30 30 31 31
48 The VPN Menu
OpenVPN server Server configuration Accounts Advanced VPN client download OpenVPN client (Gw2Gw) IPsec 48 48 49 49 50 51 52
32 The Proxy Menu
HTTP Configuration Authentication Default policy Content filter Antivirus Group policies Policy profiles POP3 Global settings Spam filter SIP FTP SMTP Main Antivirus Spam File Extensions Blacklists/Whitelists Domains Mail Routing Advanced DNS DNS proxy Custom nameserver Anti-spyware 32 32 34 36 37 38 38 38 38 38 38 39 39 40 40 41 41 42 43 44 44 45 46 46 47 47
56 The Hotspot Menu
Hotspot Accounts Quick Ticket Ticket rates Statistics Active Connections Connection Log Settings Dialin Password Allowed sites 56 56 58 58 58 59 59 60 61 61 61
62 The Logs Menu
Live Summary System Service Firewall Proxy HTTP Content filter HTTP report SMTP SIP Settings 62 63 63 63 64 64 64 64 64 64 64 65
66 GNU Free Documentation License
3
Endian Unified Network Security
Preface
Preface
Endian Firewall is an Open Source Unified Threat Management (UTM) appliance software. This document is a concise reference to the Endian Firewall web interface.
Accessing the Endian Firewall GUI.
Features and enhancements in version 2.2
• Web Interface Completely redesigned web interface; Many usability enhancements • • Networking VLAN support (IEEE 802.1Q trunking); policy routing: routing based on user, interface, mac, protocol or port • Port Forwarding / NAT Multiple uplink support, allowing different rules per uplink; port forwarding of traffic coming from VPN endpoints; source NAT management; option for rule based logging • • Outgoing Firewall Support for ICMP protocol; handling of multiple sources/ports/protocols per rule • Zone Firewall The DMZ pinholes section has been enhanced and renamed to zone firewall; fine grained filtering of local network traffic; rules based on zones, physical interfaces, MAC addresses; support for ICMP protocol; handling of multiple sources/ports/protocols per rule • Intrusion Detection New version of Snort IDS with reduced RAM usage and enhanced performance; support for inline intrusion detection • High Availability Multi-node appliance cluster; hot standby (active/passive); automatic node data synchronization; process monitoring/ watchdog • HTTP Proxy Time based access control with multiple time intervals; group based web access policies; zone based operation mode: transparent, with or without authentication
Accessing the Endian Firewall GUI
4
Endian Unified Network Security
Preface
• Content Filter Better handling of content filter categories; enhanced performance • SMTP Proxy Enhanced performance; optional setting for smarthost port; additionally secures SMTP traffic coming from VPNs (roadwarrior and gateway to gateway) • DNS Proxy Route specific domains to a custom DNS • •) • • IPsec Rewrite of the base; added debugging possibilities; IPsec on orange; default MTU can be overridden; simplified GUI by removing side (left/right) configuration and swapped completely to local/remote labeling; added ID fields; added dead peer detection options • Live Log Viewer Realtime log viewer with filtering and highlighting; displays all the logfiles you are interested in at the same time • Logs Every service supports remote logging; daily log rotation • Backup Zero-configuration backups to USB stick: plug in an USB stick and it “just works”; restore from any USB stick • Support One click to grant access to Endian support team; integrated ticketing support
Legal notice
The Endian Firewall Reference Manual 2.2 (“this document”) is copyright (c) 2008 Endian srl, Italy (“End.
Legal notice
5
Endian Unified Network Security
Preface
Acknowled!
Endian web site
For more information please visit Endian’s web site at.
Acknowledgments
6
Endian Unified Network Security
The System Menu
The System Menu
Select System from the menu bar at the top of the screen. The following links will appear in a submenu on the left side of the screen. They allow for basic administration and monitoring of your Endian Firewall. • Home: system and internet connection status overview • Network configuration: network and interface configuration • Support: support request form • Endian Network: Endian Network registration information • Passwords: set system passwords • SSH access: enable/configure Secure Shell (SSH) access to your Endian Firewall • GUI settings: such as interface language • Backup: backup/restore Endian Firewall settings as well as reset to factory default • Shutdown: shutdown/reboot your Endian Firewall • Credits: thanks to all contributors Each link will be explained individually in the following sections.
Select System from the menu bar at the top of the screen, then select Home from the submenu on the left side of the screen. This page displays an overview of the uplink connection(s) and general system health. A table is displayed, detailing the connection status of each uplink. Usually you will see just a single uplink called main, since it is the primary uplink. Of particular interest is the status field of the individual uplink:
Stopped Connecting Connected Disconnecting Failure Failure, reconnecting Dead link
The uplink is not connected. The uplink is currently connecting. The uplink is connected and fully operational. The uplink is currently disconnecting. Endian Firewall keeps pinging the gateway and announces when it becomes available. There was a failure while connecting the uplink. There was a failure while connecting to the uplink. Endian Firewall is trying again. The uplink is connected, but the hosts that were defined in Network, Interfaces to check the connection could not be reached. Essentially this means that the uplink is not operational.
Each uplink can be operated in either managed mode (default) or manual mode. In managed mode Endian Firewall monitors and restarts the uplink automatically when needed. If managed mode is disabled, the uplink can be activated or deactivated manually. There will be no automatic reconnection attempt if the connection is lost.
7
Endian Unified Network Security
The System Menu
Finally, after the uplink table, you can find a system health line, which looks similar to the following example: efw-1203950372.localdomain - 13:45:49 up 1 min, 0 users, load average: 4.84, 1.89, 0.68
This is basically the output of the Linux uptime command. It shows the current time, the days/hours/minutes that Endian Firewall has been running without a reboot, the number of console logins and the load averages for the past 1, 5, and 15 minutes.
Network configuration
Select System from the menu bar at the top of the screen, then select Network configuration from the submenu on the left side of the screen. Network and interface configuration is fast and easy with the wizard provided in this section. The wizard is divided into steps: you can navigate back and forth using the <<< and >>> buttons. You can freely navigate all steps and decide to cancel your actions at any moment. Only in the last step you will be asked to confirm the new settings. If you confirm, the new settings will be applied. This might take some time during which the web interface might not respond. Following is a detailed list of each wizard step.
Choose type of RED interface
When Endian Firewall was installed, the trusted network interface (called the GREEN interface) has already been chosen and set up. This screen allows to choose the untrusted network interface (called the RED interface): the one that connects your Endian Firewall to the “outside” (typically the uplink to your internet provider). Endian Firewall does support the following types of RED interfaces: ETHERNET STATIC You want to operate an Ethernet adapter and you need to setup network information (IP address and netmask) manually. This is typically the case when you connect your RED interface to a simple router using an Ethernet crossover cable. You want to operate an Ethernet adapter that gets network information through DHCP. This is typically the case when you connect your RED interface to a cable modem/router or ADSL/ISDN router using an Ethernet crossover cable. You want to operate a Ethernet adapter that is connected via an Ethernet crossover cable to an ADSL modem. Note that this option is only needed if your modem uses bridging mode and requires your firewall to use PPPoE to connect to your provider. Pay attention not to confuse this option with the ETHERNET STATIC or ETHERNET DHCP options used to connect to ADSL routers that handle the PPPoE themselves. You want to operate an ADSL modem (USB or PCI devices). You want to operate an ISDN adapter. You want to operate an analog (dial-up) or UMTS (cell-phone) modem. Your Endian Firewall has no RED interface. This is unusual since a firewall normally needs to have two interfaces at least - for some scenarios this does make sense though. One example would be if you want to use only a specific service of the firewall. Another, more sophisticated example is an Endian Firewall whose BLUE zone is connected through a VPN to the GREEN interface of a second Endian Firewall. The second firewall’s GREEN IP address can then be used as a backup uplink on the first firewall. If you choose this option, you will need to configure a default gateway later on.
ETHERNET DHCP
PPPoE
ADSL (USB, PCI) ISDN ANALOG/UMTS Modem GATEWAY
Choose type of RED interface
8
Endian Unified Network Security
The System Menu
Choose network zones
Endian Firewall borrows IPCop’s idea of different zones. At this point you’ve already encountered the two most important zones: GREEN RED is the trusted network segment. is the untrusted network segment.
This step allows you to add one or two additional zones, provided you have enough interfaces. Available zones are: ORANGE is the demilitarized zone (DMZ). If you host servers, it is wise to connect them to a different network than your GREEN network. If an attacker manages to break into one of your servers, he or she is trapped within the DMZ and cannot gain sensible information from local machines in your GREEN zone. is the wireless zone (WLAN). You can attach a hotspot or WiFi access point to an interface assigned to this zone. Wireless networks are often not secure - so the purpose is to trap all wirelessly connected machines into their own zone without access to any other zone except RED (by default).
BLUE
Note that one network interface is reserved for the GREEN zone. Another one may already be assigned to the RED zone if you have selected a RED interface type that requires a network card. This might limit your choices here to the point that you cannot choose an ORANGE or BLUE zone due to lack of additional network interfaces.
Network Preferences
This step allows you to configure the GREEN zone and any additional zone you might have set up in the previous step (ORANGE or BLUE). Each zone is configured in its own section with the following options: IP address Specify one IP address (such as 192.168.0.1). Pay attention not to use addresses that are already in use in your network. You need to be particularly careful when configuring the interfaces in the GREEN zone to avoid locking yourself out of the web interface! If you change IP addresses of an Endian Firewall in a production environment, you might need to adjust settings elsewhere, for example the HTTP proxy configuration in web browsers. Specify the CIDR / network mask from a selection of possible masks (such as /24 255.255.255.0). It is important to use the same mask for all devices on the same subnet. You can add additional IP addresses from different subnets to the interface here. Map the interfaces to zones. Each interface can be mapped to only one zone and each zone must have at least one interface. However, you might assign more than one interface to a zone. In this case these interfaces are bridged together and act as if they were part of a switch. All shown interfaces are labeled with their PCI identification number, the device description as returned by lspci and their MAC addresses. A symbol shows the current link status: a tickmark shows that the link is active, an X means there is no link and a question mark will tell you that the driver does not provide this information.
Network mask Additional addresses Interfaces
Note that Endian Firewall internally handles all zones as bridges, regardless of the number of assigned interfaces. Therefore the Linux name of the interfaces is brX, not ethX. Additionally, the system’s host and domain name can be set at the bottom of the screen.
Choose network zones
9
Endian Unified Network Security
The System Menu
You need to use IP addresses in different network segments for each interface, for example: IP = 192.168.0.1, network mask = /24 - 255.255.255.0 for GREEN IP = 192.168.10.1, network mask = /24 - 255.255.255.0 for ORANGE IP = 10.0.0.1, network mask = /24 - 255.255.255.0 for BLUE It is suggested to follow the standards described in RFC1918 and use only IP addresses contained in the networks reserved for private use by the Internet Assigning Numbers Authority (IANA): The first and the last IP address of a network segment are the network address and the broadcast address respectively and must not be assigned to any device.
Internet access preferences
This step allows you to configure the RED interface, that connects to the internet or any other untrusted network outside Endian Firewall. You will find different configuration options on this page, depending on the type of the RED interface you have chosen earlier. Some interface types require more configuration steps than others. Below is a description of the configuration for each type. ETHERNET STATIC You need to enter the IP address and network mask of the RED interface, as well as the IP address of your default gateway - that is, the IP address of the gateway that connects your Endian Firewall to the internet or another untrusted network. Optionally, you can also specify the MTU (maximum transmission unit) and the Ethernet hardware address (MAC address) of the interface - usually this is not needed. You just need to specify whether you want DHCP to set the IP address of the DNS (domain name server) automatically or you want to set it manually. You need to enter the username and password assigned to you by your provider, the authentication method (if you do not know whether PAP or CHAP applies, keep the default PAP or CHAP) and whether you want the IP address of the DNS (domain name server) to be assigned automatically or you want to set it manually. Optionally, you can also specify the MTU (maximum transmission unit) and your provider’s service and concentrator name usually this is not needed.
ETHERNET DHCP PPPoE
Internet access preferences
10
Endian Unified Network Security
The System Menu
ADSL (USB, PCI)
There are 3 sub-screens for this choice. 1. 2. 3. First you need to select the appropriate driver for your modem. Then you need to select the ADSL type: PPPoA, PPPoE, RFC 1483 static IP or RFC 1483 DHCP. Next, you need to provide some of the following settings (depending on the ADSL type fields are available or not): the VPI/VCI numbers as well as the encapsulation type; the username and password assigned to you by your provider and the authentication method (if you don’t know whether PAP or CHAP applies, use the default PAP or CHAP); the IP address and network mask of the RED interface, as well as the IP address of your default gateway (RFC 1483 static IP only); whether you want the IP address of the DNS (domain name server) to be assigned automatically or you want to set it manually. Optionally, you can also specify the MTU (maximum transmission unit) - usually this is not needed.
ISDN
You need to select your modem driver, phone numbers (your provider’s number and the number used to dial out), as well as. Optionally, you can also specify the MTU (maximum transmission unit) - usually this is not needed. There are 2 sub-screens for this choice. 1. First you need to specify the serial port your modem is connected to and whether it is a simple analog modem or a UMTS/HSDPA modem. Note that /dev/ttyS0 is normally used as serial console and is therefore not available for modems. Next you need to specify the modem’s bit-rate, the dial-up phone number or access point name,. For UMTS modems it is also necessary to specify the access point name. Optionally, you can also specify the MTU (maximum transmission unit) - usually this is not needed. Please read the note below for problems with modems.
ANALOG/UMTS Modem
2.
3. GATEWAY
You just need to specify the IP address of your default gateway - that is, the IP address of the gateway that connects your Endian Firewall to the internet or another untrusted network.
Some modern UMTS modems are USB mass storage devices as well. These modems usually register two devices (e.g. /dev/ttyUSB0, /dev/ttyUSB1). In this case the second device is the modem. This type of modem can cause problems when restarting the firewall because the firewall tries to boot from the USB mass storage device. SIM cards that require a personal identification number (PIN) are not supported by Endian Firewall.
Configure DNS resolver
This step allows you to define up to two addresses for DNS (domain name server), unless they are assigned automatically. Should only one nameserver be used it is necessary to enter the same IP address twice. The IP addresses that are entered must be accessible from this interface.
Configure DNS resolver
11
Endian Unified Network Security
The System Menu
Apply configuration
This last step asks you to confirm the new settings. Click the OK, apply configuration button to go ahead. Once you did this, the network wizard will write all configuration files to the disk, reconfigure all necessary devices and restart all depending services. This may take up to 20 seconds, during which you may not be able to connect to the administration interface and for a short time no connections through the firewall are possible. The administration interface will then reload automatically. If you have changed the IP address of the GREEN zone’s interface, you will be redirected to the new IP address. In this case and/or if you have changed the hostname a new SSL certificate will be generated.
Support
Select System from the menu bar at the top of the screen, then select Support from the submenu on the left side of the screen. A support request can be created directly from this screen. Fill in all necessary information and submit your request. A member of the Endian support team will contact you as soon as possible. Please provide a detailed problem description in order to help the support team to resolve the issue as quickly as possible. Optionally, you can grant access to your firewall via SSH (secure shell). This is a secure, encrypted connection that allows support staff to log in to your Endian Firewall to verify settings, etc. This option is disabled by default. When enabled, the support team’s public SSH key is copied to your system and access is granted via that key. Your root password is never disclosed in any way.
Endian Network
Select System from the menu bar at the top of the screen, then select Endian Network from the submenu on the left side of the screen. Your Endian Firewall can connect to Endian Network (EN). Endian Network allows for easy and centralized monitoring, managing and upgrading of all your Endian Firewall systems with just a few clicks.
This screen contains three tabs.
The Subscriptions tab shows a summary of your Endian Network support status. The last section lists your activation keys. You need at least one valid activation key (not expired) to receive updates from and participate in Endian Network. There is a key for each support channel (typically just one). If the firewall has not yet been registered the registration form is shown. The Remote Access tab allows to specify whether your Endian Firewall can be reached through Endian Network at all, and if so, through which protocol: HTTPS means the web interface can be reached through Endian Network and SSH means it is possible to login via secure shell through Endian Network. The Updates tab displays and controls the update status of your system. There are three sections. • Firstly, pressing the Check for new updates! button will access your support channels looking for new updates. If any updates are found they will be listed (updates are distributed as RPM packages). Pressing the Start update process NOW! button will install all updated packages. Secondly - to save you some time - the system retrieves the update list automatically. You may choose the interval to be hourly, daily, weekly (the default) or monthly - do not forget to click on Save to save the settings. Thirdly, by pressing Update signatures now you can update the ClamAV antivirus signatures. This works only if ClamAV is in use, for example in combination with the email or HTTP proxy.
• •
Apply configuration
12
Endian Unified Network Security
The System Menu
Passwords
Select System from the menu bar at the top of the screen, then select Passwords from the submenu on the left side of the screen. You can change one password at a time here. Specify each new password twice and press Save. The following users are available: • Admin: the user that can connect to the web interface for administration. • Root: the user that can login to the shell for administration. Logins can be made locally to the console, through the serial console or remotely via SSH (secure shell) if it has been activated. • Dial: the Endian Firewall client user.
SSH access
Select System from the menu bar at the top of the screen, then select SSH access from the submenu on the left side of the screen. This screens allows you to enable remote SSH (secure shell) access to your Endian Firewall. This is disabled by default which is the recommended setting. SSH access is always on when one of the following is true: • • • Endian support team access is allowed in System, Support. SSH access is enabled in System, Endian Network, Remote Access. High availability is enabled in Services, High Availability.
Some SSH options can be set: SSH protocol version 1 TCP forwarding password authentication public key authentication This is only needed for old SSH clients that do not support newer versions of the SSH protocol. This is strongly discouraged since there are known vulnerabilities in SSH protocol version 1. You should rather upgrade your SSH clients to version 2, if possible. Check this if you need to tunnel other protocols through SSH. See the note below for a use case example. Permit logins through password authentication. Permit logins through public keys. The public keys must be added to /root/.ssh/authorized_keys.
Finally there is a section detailing the public SSH keys of this Endian Firewall that have been generated during the first boot process. Assume you have a service such as telnet (or any other service that can be tunneled through SSH) on a computer inside your GREEN zone, say port 23 on host 10.0.0.20. This is how you can setup a SSH tunnel through your Endian Firewall to access the service securely from outside your LAN. 1. 2. Enable SSH and make sure it can be accessed (see Firewall, System access). From an external system connect to your Endian Firewall using ssh -N -f -L 12345:10.0.0.20:23 root@endian_firewall where -N tells SSH not to execute commands, but just to forward traffic, -f runs SSH in the background and -L 12345:10.0.0.20:23 maps the external system’s port 12345 to port 23 on 10.0.0.20 as it can be seen from your Endian Firewall. 3. The SSH tunnel from port 12345 of the external system to port 23 on host 10.0.0.20 is now established. In this example you can now telnet to port 12345 on localhost to reach 10.0.0.20.
This screen contains three tabs.
13
Endian Unified Network Security
The System Menu
GUI settings
Select System from the menu bar at the top of the screen, then select GUI settings from the submenu on the left side of the screen. In the community release it is also possible to click on the Help translating this project link which will open the Endian Firewall translation page. Any help is appreciated. Two options regarding the web interface can be set in this screen: whether to display the hostname in the browser window title and the language of the web interface (English, German and Italian are currently supported).
Backup
Select System from the menu bar at the top of the screen, then select Backup from the submenu on the left side of the screen. In this section you can create backups of your Endian Firewall configuration and restore the system to one of these backups when needed. Backups can be saved locally on the Endian Firewall host, to a USB stick or downloaded to your computer. It is also possible to reset the configuration to factory defaults and to create fully automated backups.
Backup sets
By clicking on the Create new Backup button a dialog opens up where you can configure the new system snapshot: configuration database dumps log files log archives remark includes all configurations and settings you have made, that is the content of the directory / var/efw. includes a database dump, which for example includes hotspot accounting information. includes the current log files includes older log files, backups with this option checked will get very big after some time an additional comment can be added here
Click on the Create new Backup button again to go ahead and create the backup. Following is the list of available backups (initially empty): you can choose to download them, delete them or restore them by clicking on the appropriate icon in this list. Each backup is annotated with zero or more of the following flags: S - Settings. The backup contains your configurations and settings. D - Database. The backup contains a database dump. E - Encrypted. The backup file is encrypted. L - Log files. The backup contains log files. A - Archive. The backup contains older log files. ! - Error! The backup file is corrupt. C - Created automatically. The backup has been created automatically by a scheduled backup job. U - This backup has been saved to a USB stick.
Encrypt backup
You can provide a GPG public key that will be used to encrypt all backups. Select your public key by clicking on the Browse button and then choosing the key file from your local file system. Make sure Encrypt backup archives is checked. Confirm and upload the key file by clicking Save.
Backup sets
14
Endian Unified Network Security
The System Menu
Import Backup files
You can upload a previously downloaded backup. Select your backup by clicking on the Browse button and then choosing the backup file from your local file system. Fill in the Remark field in order to name the backup and upload it by clicking Save. It is not possible to import encrypted backups. You must decrypt such backups before uploading them. The backup appears in the backup list above. You can now choose to restore it by clicking on the restore icon.
Reset to factory defaults
Clicking the Factory defaults button allows you to reset the configuration of your Endian Firewall to factory defaults and reboot the system immediately after. A backup of the old settings is saved automatically.
Scheduled backups
Select the Scheduled backups tab if you wish to enable and configure automated backups. First, enable and configure automatic backups. You can choose what should be part of the backup: the configuration, database dumps, log files and old log files as seen in the Backup Sets section. You can also choose how many backups you want to keep (2-10) and the interval between backups (hourly, daily, weekly or monthly). When you’re done click the Save button. Next, you can tell the system whether or not you want backups emailed to you. If you wish to receive backups by email you can enable this feature and select the email address of the recipient. You can then Save the settings. There is also a Send a backup now button that will save the settings and try to send an email with the backup immediately, so you can test the system. Optionally you can also provide a sender email address (this must be done if your domain or hostname are not resolvable by your DNS) and the address of a smarthost to be used (in case you want all outgoing email go through your companies SMTP server, rather than be sent directly by your Endian Firewall). If the SMTP proxy is disabled it is absolutely necessary to add a smarthost to be able to send emails.
Shutdown
Select System from the menu bar at the top of the screen, then select Shutdown from the submenu on the left side of the screen. In this screen you can shutdown or reboot your Endian Firewall by clicking the Shutdown or the Reboot button respectively.
Credits
Select System from the menu bar at the top of the screen, then select Credits from the submenu on the left side of the screen. This screen displays the list of people that brought Endian Firewall to you.
Import Backup files
15
Endian Unified Network Security
The Status Menu
The Status Menu
Select Status from the menu bar at the top of the screen. The following links will appear in a submenu on the left side of the screen. They give detailed status information about various aspects of your Endian Firewall: • System status: services, resources, uptime, kernel • Network status: configuration of network interfaces, routing table and ARP cache • System graphs: graphs of resource usage • Traffic Graphs: graphs of bandwidth usage • Proxy graphs: graph of HTTP proxy access statistics during the last 24 hours • Connections: list of all open TCP/IP connections • OpenVPN connections: list of all OpenVPN connections • SMTP mail statistics: graph of SMTP proxy filter statistics during the last day/week/month/year • Mail queue: SMTP server’s mail queue Each link will be explained individually in the following sections.
System status
Select Status from the menu bar at the top of the screen, then select System status from the submenu on the left side of the screen. This screen is divided into the following sections (accessible via tabs or scrolling): Services Memory lists the status of all the services installed on Endian Firewall - a service might appear as STOPPED simply because the corresponding feature is not enabled. this is the output of the Linux free command. The first bar shows the total used memory: it is normal for this value to be close to 100% for a long running system, since the Linux kernel uses all available RAM as disk cache. The second bar shows the memory actually used by processes: ideally this should be below 80% to keep some memory available for disk caching - if this value approaches 100%, the system will slow down because active processes are swapped to disk: you should consider upgrading RAM then. The third bar indicates the swap usage. For a long running system it is normal to see moderate swap usage (the value should be below 20%), especially if not all the services are used all the time. this is the output of the Linux df command. It shows the used disk space for each disk partition (/, /boot and /var for a default install). / and /boot should be rather constant, /var grows while using the system. this is the output of the Linux w command. It reports the current time, information about how long your system has been running without a reboot, the number of shell users that are currently logged into the system (normally there should not be any) and the system load average for the past 1, 5 and 15 minutes. Additionally, if any shell user is logged into the system, some information about the user is displayed (such as the remote host from which he or she is logged in). this is the output of the Linux lsmod command. It shows the loaded kernel modules (the information is of interest to advanced users only). this is the output of the Linux uname -r command. It shows the current kernel version.
Disk usage
Uptime and users
Loaded modules Kernel version
16
Endian Unified Network Security
The Status Menu
Network status
Select Status from the menu bar at the top of the screen, then select Network status from the submenu on the left side of the screen. This page shows the output of the Linux command ip addr show (Ethernet interfaces, bridges and virtual devices), the status of the network adapters (if available), the routing table and the ARP cache (MAC / IP addresses in the local LANs).
System graphs
Select Status from the menu bar at the top of the screen, then select System graphs from the submenu on the left side of the screen. This page contains system resource graphs for the last 24 hours: CPU, memory, swap and disk usage. Clicking on one of the graphs will open a new page with the respective usage graphs for the last day, week, month and year.
Traffic graphs
Select Status from the menu bar at the top of the screen, then select Traffic graphs from the submenu on the left side of the screen. This page contains traffic graphs for the last 24 hours. Clicking on one of the graphs will open a new page with traffic graphs for the last day, week, month and year of the chosen interface.
Proxy graphs
Select Status from the menu bar at the top of the screen, then select Proxy graphs from the submenu on the left side of the screen. This page contains graphs with access statistics for the HTTP proxy during the last 24 hours.
Connections
Select Status from the menu bar at the top of the screen, then select Connections from the submenu on the left side of the screen. This page shows the list of current connections from, to or going through Endian Firewall. The source and destination of every connection are highlighted in the color of the zones they belong to. Additionally to the four zones (GREEN, RED,ORANGE, BLUE) that are defined by Endian Firewall, two other colors are shown. BLACK is used for local connections on the firewall whereas PURPLE connections belong to virtual private networks (VPNs).
OpenVPN connections
Select Status from the menu bar at the top of the screen, then select OpenVPN connections from the submenu on the left side of the screen. This page shows a list of OpenVPN connections. It is possible to kill or ban connected users by clicking on the kill or ban button respectively.
SMTP mail statistics
Select Status from the menu bar at the top of the screen, then select SMTP mail statistics from the submenu on the left side of the screen. This page shows statistics of the SMTP traffic (sending email) through Endian Firewall for the last day, week, month and year. This information is only available when the SMTP proxy is used.
17
Endian Unified Network Security
The Status Menu
Mail queue
Select Status from the menu bar at the top of the screen, then select Mail queue from the submenu on the left side of the screen. This page shows the current email queue (only available when the SMTP proxy is used). It is also possible to flush the queue by clicking on the Flush mail queue button.
18
Endian Unified Network Security
The Network Menu
The Network Menu
Select Network from the menu bar at the top of the screen. The following links will appear in a submenu on the left side of the screen. They allow setting up network-related configuration options: • Edit hosts: define hosts for local domain name resolution • Routing: define static routes and set up policy routing • Interfaces: edit your uplinks or create VLANs Each link will be explained individually in the following sections.
Edit hosts
Select Network from the menu bar at the top of the screen, then select Edit hosts from the submenu on the left side of the screen. Endian Firewall contains a caching DNS server (dnsmasq) that checks the system’s host file for name look-ups. In this section you can define a custom host entry that will then be resolved for all clients. Click the Add a host link to add a host entry. This is done by specifying IP address, hostname and domain name and then confirming the host entry by clicking on the Add Host button. An existing entry can be deleted by clicking on the trash bin in its row. To edit an entry it is necessary to click on the pencil symbol. The line is then highlighted and a pre-filled form opens up. After all the changes have been applied the entry is saved by clicking on the Update Host button.
Routing
Select Network from the menu bar at the top of the screen, then select Routing from the submenu on the left side of the screen. It is possible to choose between two types of routing: static routing and policy routing.
Static routing
Allows to associate specific network addresses with given gateways or uplinks. Click the Add a new rule link to specify a static routing rule using the following fields: Source Network Destination Network Route Via Enabled Remark source network in CIDR notation (example: 192.168.10.0/24) destination network in CIDR notation (example: 192.168.20.0/24) enter the static IP address of a gateway or choose between the available uplinks check to enable rule (default) a remark to remember the purpose of this rule later
Click the Save button to confirm your rule. You can then disable/enable, edit or delete each rule from the list of rules by clicking on the appropriate icon on the right side of the table (see the icon legend at the bottom).
Static routing
19
Endian Unified Network Security
The Network Menu
Policy routing
Allows to associate specific network addresses and service ports / protocols with given uplinks. Click the Create a policy routing rule link to specify a policy routing rule. The following fields are available: Source The source can be a list of zones or interfaces, a list of IPs or networks in CIDR notation (example: 192.168.10.0/24), a list of OpenVPN users or a list of MAC addresses. By selecting <ANY> the rule will match every source. The destination can be a list of IPs, networks in CIDR notation or a list of OpenVPN users. By selecting <ANY> the rule will match every source. Optionally you can specify the protocol and, in case of TCP, UDP or TCP + UDP, a port for the rule. Some predefined combinations, e.g. HTTP (protocol TCP, port 80), can be selected from the Service dropdown list. Choose the uplink that should be used for this rule. If you want to use the backup uplink whenever the chosen uplink becomes unavailable, the checkbox has to be checked. The type of service (TOS) can be chosen here. The binary number behind each type of service describes how this type works. The first three bits describe the precedence of the packet: 000 stands for default precedence and 111 describes the highest precedence. The fourth bit describes the delay where 0 means normal delay and 1 means low delay. The fifth bit describes the throughput. 1 increases the throughput while 0 stands for normal throughput. The sixth bit controls the reliability. Again 1 increases reliability and 0 is the setting for normal reliability. The eight IP precedence values are called class selectors (CS0-7). Additionally twelve values have been created for assured forwarding (AFxy, x being a class from 1 to 4 and y being drop precedence from 1 to 3) that provide low packet loss with minimum guarantees about latency. Expedited forwarding (EF PHB) has been defined to ask for low-delay, low-jitter and low-loss service. Remark Position Enabled Log all accepted packets Set a remark to remember the purpose of the rule. Define where to insert the rule (relative position in the list of rules). Check this checkbox to enable the rule (default). Check this to log all packets that are affected by this rule.
Destination Service/Port
Route Via Type Of Service
Click the Create rule button to confirm your rule. You can then disable, edit or delete any rule from the list by clicking on the respective icon on the right side of the table. You can also change the order of the rules (by clicking on the down and up arrow icons). After making changes to a rule, do not forget to click the Apply button on the top of the list!
Policy routing
20
Endian Unified Network Security
The Network Menu
Interfaces
Select Network from the menu bar at the top of the screen, then select Interfaces from the submenu on the left side of the screen, finally choose one of the two following tabs: Uplink editor Additional uplinks can be defined by clicking on the Uplink editor tab: choose the type of uplink, then fill in the type-specific form. The fields are almost the same as in the network configuration wizard (see the “Network configuration” section in “The System Menu” chapter). The following options differ from the network confguration wizard: Type This selection includes one additional protocol: PPTP. PPTP can be configured to work in static or in DHCP mode. This is done by selecting the respective value from the “PPTP method” dropdown. The IP address and netmask must be defined in the appropriate textfields and is only required if the static method has been chosen. Additional IP/netmask or IP/CIDR combinations can be added in the field below if the respective checkbox is enabled. Phone number, username and password are not required but may be needed for some configurations to work. This depends on the provider’s settings. The authentication method can be PAP or CHAP. If you are not sure which one to use, just keep the default value “PAP or CHAP” that will work in either case. Start uplink on boot if this uplink fails Reconnection timeout This checkbox specifies whether an uplink should be enabled at boot time or not. This is useful for backup uplinks which are managed but do not need to be started during the boot procedure. If enabled, this field gives you the possibility to choose an alternative uplink from the dropdown list. This uplink will be activated if the current uplink should fail. With this timeout you can specify the time (in seconds) after which an uplink tries to reconnect if it fails. This value depends on your provider’s settings. If you are unsure just leave this field empty.
VLANs Virtual LANs (VLANs) can be defined by clicking on the VLANs tab. The idea behind offering VLAN support in Endian Firewall is helping to allow arbitrary associations of VLAN ids to firewall zones. To add an association click the Add new VLAN link, then specify the following parameters: Interface Zone VLAN ID the physical interface the VLAN is connected to the Zone the VLAN is associated with VLAN ID (0-4095)
Whenever a virtual LAN is created a new interface is created. This interface is named ethX.y where X is the number of the interface and y is the VLAN ID. This interface is then assigned to the chosen zone. “NONE” can be chosen, if the interface is used as High Availability management port.
Interfaces
21
Endian Unified Network Security
The Services Menu
The Services Menu
Select Services from the menu bar at the top of the screen. Endian Firewall can provide a number of useful services that can be configured in this section. In particular, these include services used by the various proxies, such as the ClamAV antivirus. Intrusion detection, high availability and traffic monitoring can be enabled here as well. Following is a list of links that appear in the submenu on the left side of the screen: • DHCP server: DHCP (Dynamic Host Configuration Protocol) server for automatic IP assignment • Dynamic DNS: Client for dynamic DNS providers such as DynDNS (for home / small office use) • ClamAV antivirus: configure the ClamAV antivirus used by the mail and web proxies • Time server: enable/configure NTP time server, set time zone or update time manually • Traffic shaping: prioritize your IP traffic • Spam Training: configure training for the spam filter used by the mail proxies • Intrusion detection: configure the intrusion detection system (IDS) Snort • High availability: configure your Endian Firewall in a high availability setup • Traffic Monitoring: enable or disable traffic monitoring with ntop Each link will be explained in the following sections.
DHCP server
Select Services from the menu bar at the top of the screen, then select DHCP server from the submenu on the left side of the screen. The DHCP (Dynamic Host Configuration Protocol) service allows you to control the IP address configuration of all your network devices from Endian Firewall in a centralized way. When a client (host or other device such as networked printer, etc.) joins your network it will automatically get a valid IP address from a range of addresses and other settings from the DHCP service. The client must be configured to use DHCP - this is something called “automatic network configuration” and is often the default setting. You may choose to provide this service to clients on your GREEN zone only, or include devices on the ORANGE (DMZ) or BLUE (WLAN) zone. Just tick the check boxes that are labeled Enabled accordingly. Click on the Settings link to define the DHCP parameters as described below: Start address / End address Specify the range of addresses to be handed out. These addresses have to be within the subnet that has been assigned to the corresponding zone. If you want to configure some hosts to use manually assigned IP addresses or fixed IP addresses (see below), be sure to define a range that does not include these addresses or addresses from the OpenVPN address pool (see OpenVPN, OpenVPN server) to avoid conflicts. If you intend to use fixed leases only (see below), leave these fields empty. This defines the default / maximum time in minutes before the IP assignment expires and the client is supposed to request a new lease from the DHCP server. This is the default domain name suffix that is passed to the clients. When the client looks up a hostname, it will first try to resolve the requested name. If that is not possible, the client will append this domain name suffix preceded by a dot and try again. Example: if the fully qualified domain name of your local file server is earth. example.com and this suffix is “example.com”, the clients will be able to resolve the server by the name “earth”.
Default / Max lease time Domain name suffix
22
Endian Unified Network Security
The Services Menu
Primary / Secondary DNS Primary / Secondary NTP server Primary / Secondary WINS server
This specifies the domain name servers (DNS) to be used by your clients. Since Endian Firewall contains a caching DNS server, the default value is the firewall’s own IP address in the respective zone. Here you can specify the Network Time Protocol (NTP) servers to be used by your clients (to keep the clocks synchronized on all clients). This setting specifies the Windows Internet Name Service (WINS) servers to be used by your clients (for Microsoft Windows networks that use WINS).
Advanced users might wish to add custom configuration lines to be added to dhcpd.conf in the text area below the settings forms. Pay attention that Endian Firewall’s interface does not perform any syntax check on these lines: Any mistake here, might inhibit the DHCP server from starting! Example: The following extra lines may be used to handle VoIP telephones that need to retrieve their configuration files from an HTTP server at boot time: option tftp-server-name “”; option bootfile-name “download/snom/{mac}.html”; Note the use of $GREEN_ADDRESS which is a macro that is replaced with the firewall’s own GREEN interface address.
Fixed leases
Sometimes it is necessary for certain devices to always use the same IP address while still using DHCP. Clicking on the Add a fixed lease link allows to assign static IP addresses to devices. The devices are identified with their MAC addresses. Note that this is still very different from setting up the addresses manually on each of these devices, since each device will still contact the DHCP server to get its address. A typical use case for this is the case of thin clients on your network that boot the operating system image from a network server using PXE (Preboot Execution Environment). The following parameters can be set to define fixed leases: MAC address IP address Description Next address Filename Root path Enabled the client’s MAC address the IP address that will always be assigned to this client optional description the address of the TFTP server (only for thin clients / network boot) the boot image file name (only for thin clients / network boot) the path of the boot image file (only for thin clients / network boot) if this checkbox is not ticked the fixed lease will be stored but not written down to dhcpd. conf
Every fixed lease can be enabled, disabled, edited or removed by clicking on the respective icon (icons are described in the legend at the bottom of the fixed leases table).
List of current dynamic leases
The DHCP sections ends with a list of currently assigned dynamic IP addresses.
Fixed leases
23
Endian Unified Network Security
The Services Menu
Dynamic DNS
Select Services from the menu bar at the top of the screen, then select Dynamic DNS from the submenu on the left side of the screen. Dynamic DNS providers like DynDNS offer a service that allows assigning a globally available domain name to IP addresses. This works even with addresses that are changing dynamically such as those offered by residential ADSL connections. For this to work, each time the IP address changes, the update must be actively propagated to the dynamic DNS provider. Endian Firewall contains a dynamic DNS client for 14 different providers - if enabled, it will automatically connect to the dynamic DNS provider and tell it the new IP address after every address change. For each account (you might use more than one) click on the Add a host link, then specify the following parameters: Service Behind a proxy Enable wildcards choose the dynamic DNS provider (only applies if you use the no-ip.com service) check this box if your Endian Firewall is connecting to the internet through a proxy ome dynamic DNS providers allow having all sub domains of your domain point to your IP address, i.e. and example.dyndns.org will both resolve to the same IP address: by checking this box you enable this feature (if supported by your dynamic DNS provider) the hostname and domain as registered with your dynamic DNS provider, for instance “example” and “dyndns.org” as given to you by your dynamic DNS provider check this if your Endian Firewall is not directly connected to the internet, i.e. behind another router / gateway: in this case the service at is used to find out what your external IP address is check to enable (default)
Hostname and Domain Username and Password behind Router (NAT) Enabled
Please note that you still have to export a service to the RED zone if you want to be able to use you domain name to connect to your home/office system from the internet. The dynamic DNS provider just does the domain name resolution part for you. Exporting a service might typically involve setting up port forwarding (see Firewall, Port forwarding / NAT).
ClamAV antivirus
Select Services from the menu bar at the top of the screen, then select ClamAV antivirus from the submenu on the left side of the screen. The mail proxy (POP and SMTP) and web proxy (HTTP) components of Endian Firewall use the well known ClamAV antivirus service. This sections lets you configure how ClamAV should handle archive bombs (see the next paragraph for an explanation) and how often information about new viruses is downloaded (“signature update schedule”). You can also see when the last scheduled update has been performed as well as manually start an update.
Anti archive bomb configuration
Archive bombs are archives that use a number of tricks to load antivirus software to the point that they hog most of the firewall’s resources (denial of service attack). Tricks include sending small archives made of large files with repeated content that compress well (for example, a file of 1 GB containing only zeros compresses down to just 1 MB using zip), or multiple nested archives (e.g. zip files inside zip files) or archives that contain a large number of empty files, etc...). To avoid these types of attack, ClamAV is preconfigured not to scan archives that have certain attributes, as configured here: Max. archive size Max. nested archives Archives larger than this size in MB are not scanned. Archives containing archives are not scanned if the nesting exceeds this number of levels.
Anti archive bomb configuration
24
Endian Unified Network Security
The Services Menu
Max. files in archive Max compression ratio
Archives are not scanned if they contain more than this number of files. Archives whose uncompressed size exceeds the compressed archive size by more than X times, where X is the specified number, are not scanned, the default value is 1000 - note that normal files typically uncompress to no more than 10 times the size of the compressed archive. What should happen to archives that are not scanned because of the above settings: it is possible to choose between “Do not scan but pass” and “Block as virus”. Since it’s technically impossible to scan encrypted (password protected) archives, they might constitute a security risk and you might want to block them by checking this box.
Handle bad archives Block encrypted archives
ClamAV signature update schedule configuration
Another important aspect of running ClamAV are the antivirus signatures updates: information about new viruses must be downloaded periodically from a ClamAV server. The configuration pane (top right) lets you choose how often these updates are performed - the default is once every hour. Tip: move the mouse over the question marks to see when exactly the updates are performed in each case - the default is one minute past the full hour.
ClamAV virus signatures
This section shows when the last update has been performed and what the latest version of ClamAV’s antivirus signatures is. Click on Update signatures now to perform an update right now (regardless of scheduled updates) - note that this might take some time. There is also a link to ClamAV’s online virus database in case you are looking for information about a specific virus.
Time server
Select Services from the menu bar at the top of the screen, then select Time server from the submenu on the left side of the screen. Endian Firewall keeps the system time synchronized to time server hosts on the internet by using the network time protocol (NTP). A number of time server hosts on the internet are preconfigured and used by the system. Click on Override default NTP servers to specify your own time server hosts. This might be necessary if you are running a setup that does not allow Endian Firewall to reach the internet. These hosts have to be added one per line. Your current time zone setting can also be changed in this section. The last form in this section gives you the possibility to manually change the system time. This makes sense if the system clock is way off and you would like to speed up synchronization (since automatic synchronization using time servers is not done instantly).
Traffic shaping
Select Services from the menu bar at the top of the screen, then select Traffic shaping from the submenu on the left side of the screen. The purpose of traffic shaping is to prioritize the IP traffic that is going through your firewall depending on the service. A typical application is to prioritize interactive services such as Secure Shell (SSH) or voice over IP (VoIP) over bulk traffic like downloads.
Traffic shaping per uplink
Click on the icons on the right side of the table to enable or disable traffic shaping for every single uplink. For traffic shaping to work properly it is also very important to specify the actual values for the down and up bandwidth for each uplink: click on the pencil icon (edit), then fill in the down and up bandwidth expressed in kbit per second.
ClamAV signature update schedule configuration
25
Endian Unified Network Security
The Services Menu
Traffic shaping services
Add your traffic shaping rules: click on Create a service to add a new rule, specifying: Enabled Protocol Priority Port check to enable (default) whether the service to be prioritized is a TCP or UDP service (example: SSH is a TCP service) give a priority: “high”, “medium” or “low” the destination port of the service to be prioritized (example: SSH uses port 22)
Click on Create service to save the settings and apply the new rule.
Spam Training
Select Services from the menu bar at the top of the screen, then select Spam Training from the submenu on the left side of the screen. SpamAssassin can be be configured to learn automatically which emails are spam mails and which are not (so called ham mails). To be able to learn, it needs to connect to an IMAP host and check pre-defined folders for spam and ham messages. The default configuration is not used for training. All it does is provide default configuration values that are inherited by the real training sources which can be added below. By clicking on the Edit default configuration link a new pane appears where the default values can be set: Default IMAP host Default username Default password Default ham folder Default spam folder Schedule an automatic spam filter training the IMAP host that contains the training folders the login name for the IMAP host the password of the user the name of the folder that contains only ham messages the name of the folder that contains only spam messages the interval between checks. This can either be disabled or be an hourly, daily, weekly, or monthly interval. For exact information about the scheduled time you can move your mouse cursor over the question mark next to the chosen interval.
Spam training sources can be added in the section below. By clicking on the Add IMAP spam training source link a new pane appears. The options for the additional training hosts are similar to the default configuration options. The only thing that is missing is the scheduling. This will always be inherited from the default configuration. Three additional options are available. Enabled Remark Delete processed mails if this box is ticked the training source will be used whenever spamassassin is trained in this field it is possible to save comment to remember the purpose of this source at a later time if this box is ticked mails will be deleted after they have been processed
The other options can be defined just like in the default configuration. If they are defined they override the default values. To save a source it is necessary to click on the Update Training Source button after all desired values have been set. A source can be tested, enabled, disabled, edited or removed by clicking on the appropriate icon in its row. The icons are explained in the legend at the bottom of the page.
Traffic shaping services
26
Endian Unified Network Security
The Services Menu
It is also possible to check all connections by clicking on the Test all connections button. Note that this can take some time if many training sources have been defined or the connection to the IMAP servers is slow. To start the training immediately the Start training now has to be clicked. It is important to note that training can take a long time depending on the number of sources, the connection speed and most importantly on the number of emails that will be downloaded. You can also train the antispam engine manually if the SMTP Proxy is enabled for incoming as well as for outgoing mails. This is done by sending spam mails to spam@spam.spam. Non-spam mails can be sent to ham@ham.ham. For this to work it is necessary that spam.spam and ham.ham can be resolved. Typically this is achieved by adding these two hostnames to the host configuration in Network, Edit hosts, Add a host on your Endian Firewall.
Intrusion detection
Select Services from the menu bar at the top of the screen, then select Intrusion detection from the submenu on the left side of the screen. Endian Firewall includes the well known intrusion detection (IDS) and prevention (IPS) system Snort. It is directly built into the IPfirewall (Snort inline). At this time no rules can be added through the web interface, hence Snort is usable only for advanced users that can load their own rules through the command line. Functionality to manage rules from the web interface will be added in a future update.
High availability
Endian Firewall can be easily run in high availability (HA) mode. At least 2 Endian Firewall machines are required for HA mode: one assumes the role of the active (master) firewall while the others are standby (slave) firewalls. If the master firewall fails, an election between the slaves will take place and one of them will be promoted to the new master, providing for transparent failover.
Master setup
To set up such a HA configuration, first set up the firewall that is going to be the master: 1. 2. 3. 4. Execute the setup wizard, filling in all needed informations. Log into the administration web interface, select Services from the menu bar at the top of the screen, then select High availability from the submenu on the left side of the screen. Set Enable High Availability to Yes and set High Availability side to Master. At this point an extra panel appears where the master-specific settings can be configured: The Management network is the special subnet to which all Endian Firewalls that are part of a HA setup must be connected (either via the GREEN interface or via a dedicated physical network). The default is 192.168.177.0/24. Unless this subnet is already used for other purposes there is no need to change this. The Master IP Address is the first IP address of the management network. The Management port is the network port that connects this firewall (the master) to the slave or slaves. This can either be the GREEN zone (i.e. the management network is physically the same as the GREEN network) or it can be a dedicated network port (eth0, eth1, ...), provided the firewall has an interface not yet used and you are planning to have a dedicated physical network for the management network. Next, there are some fields that you can fill in if you wish to be notified by email if a failover event takes place. Finally, click on Save, then Apply to activate the settings.
Master setup
27
Endian Unified Network Security
The Services Menu
Slave setup
Setup the the firewall that is going to be the slave: 1. Execute the setup wizard, including the network wizard, filling in all needed information. It is not necessary to configure services etc, since this information will be synchronized from the master. However, it is necessary to register the slave with Endian Network. Log into the administration web interface, select Services from the menu bar at the top of the screen, then select High availability from the submenu on the left side of the screen. Set Enable High Availability to Yes and set High Availability side to Slave. At this point an extra panel appears where the slave-specific settings can be configured: Choose the management network option according to the settings on the master: either GREEN zone or a dedicated network port. Fill in the Master IP address (CIDR) field: 192.168.177.1/24 unless you choose a non-standard management network address for the master. Fill in the Master root password (the slave needs this to synchronize its configuration from the master). Finally, click on Save, then Apply to activate the settings.
2. 3. 4.
At this point the slave cannot be reached anymore via its old IP address (factory default or previous GREEN address) since it is in standby mode. It is connected to the master only through the management network. If you log in to the master again, on the HA page you can see a list of connected slaves. If you click on the Go to Management GUI link you can open the slave’s administration web interface via the management network (routed via the master firewall).
Traffic Monitoring
Select Services from the menu bar at the top of the screen, then select Traffic Monitoring from the submenu on the left side of the screen. Traffic monitoring is done by ntop and can be enabled or disabled by clicking on the main switch on this page. Once traffic monitoring is enabled a link to the monitoring administration interface appears in the lower section of the page. This administration interface is provided by ntop and includes detailed traffic statistics. ntop displays summaries as well as detailed information. The traffic can be analyzed by host, protocol, local network interface and many other types of information. For detailed information about the ntop administration interface please have a look at About, Online Documentation on the ntop administration interface itself or visit the ntop documentation page.
Slave setup
28
Endian Unified Network Security
The Firewall Menu
The Firewall Menu
Select Firewall from the menu bar at the top of the screen. This section allows setting up the rules that specify if and how IP traffic flows through your Endian Firewall. Following is a list of links that appear in the submenu on the left side of the screen: • Port forwarding / NAT: configure port forwarding and NAT (network address translation) • Outgoing traffic: allow or disallow outgoing (towards RED) traffic - settings are per zone, host, port, etc. • Inter-Zone traffic: allow or disallow traffic between zones • VPN traffic: specify whether hosts connecting through a VPN should be firewalled • ystem access: grant access to the Endian Firewall host itself Each of these subsections will be explained individually in the following chapters.
Port forwarding / NAT
Port forwarding
Select Firewall from the menu bar at the top of the screen, then select Port forwarding / NAT from the submenu on the left side of the screen. Port forwarding grants limited network access from the external RED zone (typically the internet) to hosts on an internal zone, such as the DMZ (ORANGE) or even the trusted LAN (GREEN). However, forwarding to the GREEN zone is not recommended from a security point of view. You can define which port on which external interface (incoming port) will be forwarded to a given host/port on the inside (destination). Typical use cases might be to forward port 80 on an external interface to a webserver in the DMZ or to forward port 1022 on an external interface to a SSH server on port 22 of a host in the DMZ. You need to supply the following parameters: Protocol Incoming IP Port on incoming Destination IP Destination Port Remark Enabled SNAT incoming connections Enable log protocol: TCP, UDP, GRE (generic routing encapsulation - used by tunnels) or all the (external) interface which port (1 - 65535) to listen to on the external interface the IP of the destination host to which incoming traffic is forwarded to the port (1-65535) on the destination host to which incoming traffic is forwarded to a remark for you to remember the purpose of the forward rule later check to enable rule (default) specify whether incoming traffic should appear to be originating from the firewall IP instead of the actual IP log all packets that match this rule
Click the Add button to confirm your rule. You can then disable/enable, edit or delete each rule from the list by clicking on the appropriate icon on the right side of the table (see the icon legend at the bottom). After making changes or additions to your rule set, do not forget to click the Apply button on the top of the screen! Once a rule is defined, you can limit access to the forwarding destination from the external RED zone. To do so, you need to click on the plus-icon (“Add external access”) next to the rule: this allows limiting access to a given source (host or network address). You can do this repeatedly to add more sources. A use case for this would be to grant SSH access to the external port 1022 only to one trusted external IP from the internet.
Port forwarding
29
Endian Unified Network Security
The Firewall Menu
Source NAT
In this section you can define to which outgoing connections source network address translation (Source NAT) should be applied. Source NAT can be useful if a server behind your Endian Firewall has its own external IP and outgoing packets should therefore not use the RED IP address of the firewall. Adding Source NAT rules is similar to adding port forwarding rules. The following options are available: Source In this field you can specify whether outgoing connections that are initiated from a network or IP address, or connections initiated by a VPN user should be Source NATed. If you choose the first Type you must then enter IP or network addresses into the textarea below (one address per line). If you choose the second Type you can select the users you want from the multiselection field below. In this field you can specify whether connections to a Zone/VPN/Uplink, to a Network/ IP or to a User should be NATed. If you choose the first Type you must then select a zone, a VPN or an uplink from the multiselection field below. If you choose the second Type you must enter IP or network addresses into the textarea below (one address per line). If you choose the third Type you can select the users you want from the multiselection field below. Here you can specify the service that should be NATed. In the Service selectbox you can select pre-defined values for different protocols. If you want to specifiy a service yourself you must select the protocol in the Protocol selectbox and, should you want to add a port as well, enter the destination ports into the Destination port textarea (one port per line). Here you can choose whether you want to apply Source NAT or not. If you choose to use source network address translation you can select the IP address that should be used. The Auto entries will automatically choose the IP address depending on the outgoing interface. In certain cases you may want to explicitly declare that no Source NAT should be performed, e.g. if a server in your DMZ is configured with an external IP and you do not want its outgoing connections to have your RED IP as source. Enabled Remark Position Tick this checkbox if the rule should be applied. You can enter a short note here so you can later remember the purpose of this rule. Here you can specify after which rule you want to insert this rule.
Destination
Service/Port
NAT
To save the rule just click on the Save button. Configuring an SMTP server running on IP 123.123.123.123 (assuming that 123.123.123.123 is an additional IP address of your uplink) in the DMZ with source NAT: 1. 2. 3. 4. Configure your ORANGE zone as you like. Setup the SMTP server to listen on port 25 on an IP in the ORANGE zone. Add a static ethernet uplink with IP 123.123.123.123 to your Endian Firewall in the Network, Interfaces section. Add a source NAT rule and specify the ORANGE IP of the SMTP server as source address. Be sure to use NAT and set the NATed source IP address to 123.123.123.123.
Source NAT
30
Endian Unified Network Security
The Firewall Menu
Outgoing traffic
Select Firewall from the menu bar at the top of the screen, then select Outgoing traffic from the submenu on the left side of the screen. Endian Firewall comes with a preconfigured set of rules, that allow outgoing traffic (i.e. “internet access”) from the GREEN zone with regard to the most common services (HTTP, HTTPS, FTP, SMTP, POP, IMAP, POP3s, IMAPs, DNS, ping). All other services are blocked by default. Likewise, access to HTTP, HTTPS, DNS and ping is allowed from the BLUE zone (WLAN) while only DNS and ping are allowed from the ORANGE zone (DMZ). Everything else is forbidden by default. In this section you can disable/enable, edit or delete rules by clicking on the appropriate icon on the right side of the table (see the icon legend at the bottom). You can also add your own rules by clicking on the Add a new firewall rule link at the top. Please consider that the order of rules is important: the first matching rule decides whether a packet is allowed or denied, no matter how many matching rules might follow. You can change the order of rules using the arrow down/up icons next to each rule. A rule is defined by the following parameters: Source Destination Service Port Action Remark Position Enabled Log all accepted packets select a zone or interface, specify one or more network/host addresses or MAC addresses select the entire RED zone, one or more uplinks or one or more network/host addresses firewall rule later at what position in the list should the rule be inserted check to enable this rule (default) Log all accepted packets (does not include denied/rejected packets): this is off by default as it will create large volumes of log data
After making changes to a rule, do not forget to click the Apply button on the top of the list! At the bottom of the page you can also find the rules that are set automatically by Endian Firewall depending on your configuration. It is possible to disable or enable the whole outgoing firewall by using the Enable Outgoing firewall toggle. When disabled, all outgoing traffic is allowed (not recommended).
Inter-Zone traffic
Select Firewall from the menu bar at the top of the screen, then select Inter-Zone traffic from the submenu on the left side of the screen. This section allows you to set up rules that determine how traffic can flow between the different network zones, excluding the RED zone. Endian Firewall comes with a simple set of preconfigured rules: traffic is allowed from the GREEN zone to any other zone (ORANGE and BLUE) and traffic is allowed within each zone. Everything else is forbidden by default. Analogous to the outgoing traffic firewall you can disable/enable, edit or delete rules by clicking on the appropriate icon on the right side of the table. You can also add your own rules by clicking on the Add a new inter-zone firewall rule link at the top. Please see the preceding section (Outgoing traffic) for details about handling firewall rules. The inter-zone firewall can be disabled/enabled as a whole using the Enable Inter-Zone firewall toggle. When disabled, all traffic is allowed between all zones other than the RED zone (not recommended).
Source NAT
31
Endian Unified Network Security
The Firewall Menu
VPN traffic
Select Firewall from the menu bar at the top of the screen, then select VPN traffic from the submenu on the left side of the screen. The VPN traffic firewall allows to add firewall rules applied to hosts that are connected via VPN. The VPN traffic firewall is normally not active, which means traffic can flow freely between the VPN hosts and hosts in the GREEN zone and VPN hosts can access all other zones. Please note that VPN hosts are not subject to the outgoing traffic firewall or the Inter-Zone traffic firewall. If you need to limit access from or to VPN hosts you need to use the VPN traffic firewall. The handling of the rules is identical to the outgoing traffic firewall. Please refer to the Outgoing traffic section in this chapter for details about handling firewall rules.
System access
Select Firewall from the menu bar at the top of the screen, then select System access from the submenu on the left side of the screen. In this section you can set up rules that grant or deny access to the Endian Firewall itself. There is a list of preconfigured rules that cannot be changed. This is to guarantee the proper working of the firewall, since these rules are automatically created as they are required by the services the firewall provides. Click on the >> button labeled “Show rules of system services” to show these rules. Click on the Add a new system access rule link to add your own custom rules here. The following parameters describe the rule: Source address Source interface Service/Port Action Remark Position Enabled Log all accepted packets specify one or more network/host addresses or MAC addresses specify a zone or interface system access rule later at what position in the list should the rule be inserted check to enable rule (default) Log all accepted packets (besides denied/rejected packets): this is off by default as it will create large volumes of log data
Click the Add button to confirm your rule. You can then disable/enable, edit or delete each rule from the list of rules by clicking on the appropriate icon on the right side of the table (see the icon legend at the bottom). After making changes or additions to your rule set, do not forget to click the Apply button on the top of the list!
Source NAT
32
Endian Unified Network Security
The Proxy Menu
The Proxy Menu
Select Proxy from the menu bar at the top of the screen. A proxy is a service on your Endian Firewall that can act as a gatekeeper between clients (e.g. a web browser) and network services (e.g. a web server on the internet). Clients connect to the proxy which in turn can retrieve, cache, filter and potentially block the information from the original server. A proxy is called transparent if all traffic goes through it, of the client’s configuration. Non-transparent proxies hence rely on the collaboration of the client (e.g. the proxy settings of your web browser). Following is a list of proxies that are available on Endian Firewall. Each proxy can be configured via the links that are in the submenu on the left side of the screen: • HTTP: configure the web proxy including authentication, content filter and antivirus • POP3: configure the proxy for retrieving mail via the POP protocol, including spam filter and antivirus • SIP: configure the proxy for the session initiation protocol (SIP) used by voice over IP systems • FTP: enable or disable the FTP proxy (check files that are downloaded via FTP for viruses) • SMTP: configure the proxy for sending or retrieving mail via the SMTP protocol, including spam filter and antivirus • DNS: configure the caching domain name server (DNS) including anti-spyware
Each section will be explained individually below.
HTTP
Select Proxy from the menu bar at the top of the screen, then select HTTP from the submenu on the left side of the screen.
Configuration
Click on the Enable HTTP Proxy toggle to enable the HTTP proxy (Endian Firewall uses the Squid caching proxy). Once the proxy is up and running, a number of controls appear. First of all, you can define the way users in each zone (GREEN and, if enabled also ORANGE, BLUE) can access the proxy. Per zone choices are: disabled no authentication authentication required transparent the proxy server is not available in the given zone the proxy server is available to anyone (no need to log in), but you need to configure your browser manually users need to configure their browser manually and need to log in in order to use the proxy server the proxy server is available to anyone and no browser configuration is needed (HTTP traffic is intercepted by the proxy server)
Some browsers, including Internet Explorer and Firefox, are able to automatically detect proxy servers by using the Web Proxy Autodiscovery Protocol (WPAD). Most browsers also support proxy auto-configuration (PAC) through a special URL. When using an Endian Firewall the URL looks like this: http://<IP OF YOUR FIREWALL>/proxy.pac.
Next, comes a section with global configuration options: Proxy port Visible hostname the TCP port used by the proxy server (defaults to 8080) the proxy server will assume this as its hostname (will also show at the bottom of error messages)
Configuration
33
Endian Unified Network Security
The Proxy Menu
Cache administrator email Language of error messages Max upload size
the proxy server will show this email address in error messages the language in which error messages are displayed limit for HTTP file uploads (such as used by HTML forms with file uploads) in KB (0 means unlimited)
Then you will find a number of additional options, each in its own panel that can be expanded by clicking on the + icon: Allowed Ports and SSL Ports Ports SSL Ports Log settings Log enabled Log query terms Log user-agents Log contentfiltering Firewall logs outgoing connections Allowed Subnets per Zone GREEN / ORANGE / BLUE for each zone that the proxy serves you can define which subnets are allowed to access the proxy (defaults to all subnets associated with the respective zone) - give one subnet per line (example: 172.16.1.0/255.255.255.0 or 172.16.1.0/24). Note: there should be at least one entry for each active zone. If you do not want to allow connections from a whole zone, then rather disable the proxy on that zone using the select boxes below the Enable HTTP Proxy toggle. log all URLs being accessed through the proxy (master switch) also log parameters in the URL (such as ?id=123) also log user agents, i.e. which web browsers access the web also log when content is filtered have the firewall log web accesses (transparent proxies only) list the TCP destination ports to which the proxy server will accept connections when using HTTP (one per line, comments start with #) as above, but when using HTTPS instead of HTTP
Bypass / Banned Sources and Destinations Bypass transparent proxy Bypass proxy filter Banned clients Cache management Harddisk / Memory cache size Max / Min object size Enable offline mode give the amount of memory the proxy should allocate for caching web sites, respectively on disk or in RAM (in Megabytes) give upper and lower size limits of objects that should be cached (in Kilobytes) if this option is on, the proxy will never try to update cached objects from the upstream webserver - clients can then browse cached, static websites even after the uplink went down in this textarea you can specify which domains should not be cached (one domain per line) specify sources (upper left panel) or destinations (upper right panel), that are not subject to transparent proxying; give one subnet, IP address or MAC address per line specify source IP addresses (mid left panel) or source MAC addresses (mid right panel) that, while still passing through the proxy, are not subject to filtering specify source IP addresses (lower left panel) or source MAC addresses (lower right panel) that are banned (unconditionally blocked by the proxy)
Do not cache these domains
Configuration
34
Endian Unified Network Security
The Proxy Menu
Upstream proxy Upstream proxy upstream username / password Username / client IP forwarding use this option to make your Endian Firewall’s proxy connect to another (upstream) proxy; specify the upstream proxy as “host:port” specify credentials, if authentication is required for the upstream proxy forward the username / client IP address to the upstream proxy
Click the Save button to confirm and save the configuration changes. Do not forget to click the Apply button to restart the proxy for the changes to become active. The Clear cache button allows to delete all web pages and files cached by the HTTP proxy.
Authentication
Endian Firewall’s proxy supports four different authentication types: Local, LDAP, Windows, Radius. Each of these types needs different configuration parameters and is described below. However, the global configuration parameters are: Number of authentication processes Authentication cache TTL (in minutes) Limit of IP addresses per user User / IP cache TTL (in minutes) Authentication realm prompt Require authentication for unrestricted source addresses Domains without authentication Sources (SUBNET / IP / MAC)without authentication the number of authentication processes that can run simultaneously the time in minutes how long authentication data should be cached the maximum number of IP addresses from which a user can connect to the proxy simultaneously the time in minutes how long an IP address will be associated with the logged in user this text will be shown in the authentication dialog if you disable this unrestricted source addresses will not have to provide their credentials in this textarea you can enter domain names that can be accessed without being authenticated (one per line) in this textarea you can enter source subnets, IP addresses or MAC addresses that do not require authentication (one per line)
The following parameters are available for local authentication. User management Min password length Click on this button if you want to manage local users. Here you can set the minimum password length for local users.
The following parameters are available for LDAP authentication. Base DN LDAP type LDAP server the base distinguished name, this is the start point of your search here you can choose whether you are using an Active Directory server, a Novell eDirectory server, a LDAP version 2 server or a LDAP version 3 server the IP address or fully qualified domain name of your LDAP server
Authentication
35
Endian Unified Network Security
The Proxy Menu
Port Bind DN username Bind DN password user objectClass group objectClass
the port on which the server is listening the fully distinguished name of a bind DN user, the user must have permission to read user attributes the password of the user the bind DN user must be part of this objectClass the bind DN user must be part of this objectClass
The following parameters are available for Windows authentication. Domain PDC hostname BDC hostname Username Password Join Domain Enable user-based access restrictions Use positive/negative access control the domain you want to join the hostname of the primary domain controller the hostname of the backup domain controller the username you want to use to join the domain the user’s password click here to join the domain
The following parameters are available for Radius authentication. RADIUS server Port Identifier Shared secret Enable user-based access restrictions Use positive/negative access control the address of the RADIUS server the port on which the RADIUS server is listening an additional identifier the password to be used
Authentication
36
Endian Unified Network Security
The Proxy Menu
Use native Windows authentication with Active Directory In order to be able to use Windows’ native authentication with active directory you have to make sure that a few conditions are met: • • • • • • The firewall must join the domain. The system clocks on the firewall and on the active directory server have to be in sync. In the Proxy, DNS, Custom nameserver a custom nameserver has to be entered. The firewall must be able to resolve the name of the Active Directory server (e.g. through an entry in Network, Edit hosts). The realm must be a fully qualified domain name. The PDC hostname has to be set to the netbios name of the Active Directory server.
Default policy
The default policy applies to all users of the proxy, whether they are authenticated or not. Policy settings include a simple user agent and MIME type filter as well as advanced time-based virus scanning and content filtering rules. Restrict allowed clients for web access Max download size Block MIME types This checkbox activates the user agent filter, it restricts web access to the selected user agents. This sets the limit for HTTP file downloads in KB (0 means unlimited). Enabling this option will activate a filter which checks incoming headers for their MIME type. If the MIME type of the incoming file is set to be blocked, access will be denied. This way you can block files not corresponding to the company policy (for example multimedia files). Here you can choose allowed clients and browsers from a list after clicking on the + icon. You can specify blocked MIME types by clicking on the + icon and then adding one type per line. The syntax conforms to the standard defined by the IANA. Examples: application/javascript, audio/mpeg, image/gif, text/html, video/mpeg
Allowed clients for web access Blocked MIME types
Click the Save button to save the default policy settings. You can view your own rules in the Rule list. Any rule can specify if web access is blocked or allowed, in this last case you can activate and select a filter type. To add a new rule just click on Create a rule and the following settings can be performed: Web access Specify whether the rule allows web access or blocks it; also state whether it has effect all day long or at a specific time: choose the days of the week on which you want this rule to be applied and, in case the rule is not valid all day long, you can also set the time range. Here you can choose to connections from which sources this rule will be applied. This can be either <ANY> a Zone or a list of Network/IP addresses (one address per line). Here you can choose connections to which destinations will be affected by this rule. This can be either <ANY>, a Zone, a list of Network/IP addresses (one address per line) or a list of domains (one domain per line). Choose antivirus scan only to create a rule which only scans for viruses, choose content filter only to create a rule which analyzes the content of web pages and filters it according to the settings in the Content filter section. If you choose unrestricted no checks will be performed. Specify where to place the new rule. Larger numbers have higher priority.
Source Destination
Filter type
Position
Default policy
37
Endian Unified Network Security
The Proxy Menu
If you tick the check box Activate antivirus scan on the Proxy, HTTP, Content filter page then all rules (new ones and old ones) marked as content filter only are changed to content filter + antivirus. This means that antivirus filter and content filter work concurrently.
You can then change priority, edit or delete each rule from the list of rules by clicking on the appropriate icon on the right side of the table (see the icon legend at the bottom)
Content filter
Firstly, in order to use the content filter, you have to use Content filter as filter type in a rule (either in Default policy or Policy profiles). Endian Firewall’s Content Filter (DansGuardian) takes advantage of three filtering techniques. The first is called PICS (Platform for Internet Content Selection), it is a specification created by W3C that uses metadata to label webpages to help parental control. The second is based on an advanced phrase weighting system, it analyzes the text of web pages and calculates a score for each page. The last method takes advantage of a huge list of categorized URLs and domains, all URLs requested are compared with the blacklist before being served to clients. The screen is divided into a general configuration section and a section where the specific filtering policy can be chosen. Activate antivirus scan Enable logging Platform for Internet Content Selection Max. score for phrases Enable both the content filter (Dansguardian) and the antivirus proxy (HAVP). Log blocked requests. Enable parental control based on PICS metadata. Specify the maximum score level of a trustworthy page (50-300). You can tune this level: if children browse the web through Endian Firewall you should set a value of about 50, for teenagers it should be 100 and for young adults 160. This section allows filter configuration based on phrase analysis. You can block or allow categories of sites by clicking on the icon beside it. Subcategories are shown when clicking on the + icon. This section allows configuration of filtering based on URL comparison. You can block or allow categories of sites by clicking on the icon beside the category name. Subcategories are shown by clicking on + icon. Content filtering may cause false positives and false negatives - here you can list domains that should always be blocked or allowed regardless of the results of the content filter’s analysis.
Content Filter
URL Blacklist
Custom black and white lists
Phrase analysis requires much more computing power than other technologies (PICS and URL blacklist). If you wish to disable this filtering technique you can mark all categories as allowed in the Content Filter section.
When whitelisting a domain always make sure to whitelist all necessary domains for that site to work as well. An example: • • • • google.com is blocked, which means all subdomains of google.com are blocked as well maps.google.com is whitelisted so you can access it maps.google.com does not work like it should because it tries to get data from other google servers you will have to whitelist these domains (e.g. mt0.google.com) as well Click on Save to save the settings of content filter.
Content filter
38
Endian Unified Network Security
The Proxy Menu
Antivirus
In this section you can configure the virus scanner engine (ClamAV) used by the HTTP proxy. Max. content scan size Do not scan the following URLs Last update Specify the maximum size for files that should be scanned for viruses. A list of URLs that will not be scanned for viruses (one per line). Shows the day and time of the last virus signatures update and the total amount of viruses recognized by ClamAV (in parenthesis).
Click on Save to save the settings of the virus scanner engine.
Group policies
On this page you can create groups that can be associated to different policy profiles. These groups can be associated to users when using Local authentication in the Proxy, HTTP, Authentication section. You can add a group by clicking on the Create a group link and entering a group name. After clicking on the Create group button the group is saved. The profile of the groups can be changed by selecting the appropriate policy profile and then clicking on the Save button below the group list. Groups can be deactivated, activated and removed by clicking on the respective icons (as described in the legend below the list).
Policy profiles
It is possible to create additional profiles that can be used in the Proxy, HTTP, Group policies section. Policy profiles are created just like the default policy in the Proxy, HTTP, Default policy section.
POP3
Select Proxy from the menu bar at the top of the screen, then select POP3 from the submenu on the left side of the screen. In this section you can configure the POP3 (incoming mail) proxy.
Global settings
On this page you can configure the global configuration settings of the POP3 proxy. You can enable or disable the POP3 proxy for every zone. It is also possible to enable the Virus scanner and the Spam filter for incoming emails. If you want to log every outgoing POP3 connection you can enable the Firewall logs outgoing connections checkbox.
Spam filter
On this page you can configure how the POP3 proxy should react once it finds a spam email. Spam subject tag Required hits Enable message digest spam detection(pyzor) White list Black list Here you can specify a prefix for the spam email’s subject. This option defines how many hits are required for a message to consider it spam. The default value is 5. If you want to detect spam using message digests you can enable this option. Note that this might slow down your POP3 proxy. Here you can whitelist sender email-addresses (one address per line). It is also possible to whitelist whole domains by using wildcards, e.g. *@example.com. Here you can blacklist sender email-addresses (one address per line). It is also possible to blacklist whole domains by using wildcards, e.g. *@example.com.
Antivirus
39
Endian Unified Network Security
The Proxy Menu
SIP
Select Proxy from the menu bar at the top of the screen, then select SIP from the submenu on the left side of the screen. The SIP Proxy is a proxy/masquerading daemon for the SIP and RTP protocols. SIP (Session Initiation Protocol, RFC3261) and RTP (Real-time Transport Protocol) are used by Voice over IP (VoIP) devices to establish telephone calls and carry voice streams. The proxy handles registrations of SIP clients on the LAN and performs rewriting of the SIP message bodies to make SIP connections possible through Endian Firewall and therefore allow SIP clients (like x-lite, kphone, linphone or VoIP hardware) to work behind NAT. Without this proxy, connections between clients are not possible at all if both are behind NAT, since one client cannot reach the other directly and therefore no RTP connection can be established between them. Once enabled, the following options can be configured (confirm the settings by clicking Save). Status transparent means all outgoing traffic to the SIP port will be automatically redirected to the SIP proxy; enabled means the proxy will listen to the SIP port and clients need to be made aware of the proxy default: 5060 The UDP Port range that the SIP proxy will use for incoming and outgoing RTP traffic. By default the range from 7070 to (and including) 7090 is used. This allows up to 10 simultaneous calls (2 ports per call). If you need more simultaneous calls, increase the range. The SIP Proxy itself can send all traffic to another outbound proxy. This allows the SIP proxy to remember registrations after a restart. Check this if you want to log established calls in the SIP proxy log. This will show outgoing connections in the firewall log.
SIP Port RTP Port Low / High
Outbound proxy host / port Autosave registrations Log calls Firewall logs outgoing connections
FTP
Select Proxy from the menu bar at the top of the screen, then select FTP from the submenu on the left side of the screen. The FTP (File Transfer Protocol) proxy is available only as transparent proxy, this allows scanning for viruses on FTP downloads. Note that only connections to the standard FTP port (21) are redirected to the proxy. This means that if you configure your clients to use the HTTP proxy also for the FTP protocol, this FTP proxy will be bypassed! You can enable the transparent FTP proxy on the GREEN zone and on the other enabled zones (ORANGE, BLUE).The following options can be configured (confirm the settings by clicking Save). Firewall logs outgoing connections Bypass the transparent Proxy Show outgoing connections in the firewall log. Specify sources (left panel) or destinations (right panel), that are not subject to transparent FTP proxying. Always specify one subnet, IP address or MAC address per line.
Endian Firewall supports transparent FTP proxying with frox if and only if it is directly connected to the internet. If you have another NATing firewall or router between Endian Firewall and the internet, frox does not work because it uses an active FTP upstream.
Spam filter
40
Endian Unified Network Security
The Proxy Menu
SMTP
Select Proxy from the menu bar at the top of the screen, then select SMTP from the submenu on the left side of the screen. The SMTP (simple mail transfer protocol) proxy can relay and filter email traffic as it is being sent towards email servers. The scope of the SMTP proxy is to control and optimize SMTP traffic in general and to protect your network from threats when using the SMTP protocol. The SMTP (Simple Mail Transport Protocol) protocol is used whenever an email is sent by your mail client to a remote mail server (outgoing mail). It will also be used if you have your own mail server running on your LAN (GREEN interface) or your DMZ (ORANGE interface) and are allowing mails to be sent from the outside of your network (incoming requests) through your mail server. The SMTP proxy configuration is split into several subsections. Warning In order to download mail from a remote mailserver with your local mail clients, the POP3 or IMAP protocol will be used. If you want to protect that traffic too, you have to enable the POP3 proxy in Proxy, POP3. Scanning of IMAP traffic is currently not supported. With the mail proxy functionality, both sorts of traffic (incoming and outgoing mail) can be scanned for viruses, spam and other threats. Mail will be blocked if necessary and notices will be sent to both the receiving user and the administrator. With the possibility to scan incoming mail, the mail proxy can handle incoming connections and pass the mail to one or more internal mail servers in order to remove the necessity to have SMTP connections from the outside within your local networks.
Main
The is the main configuration section for the SMTP proxy. It contains the following options: Enabled Transparent on GREEN, BLUE, ORANGE Antivirus is enabled Spamcheck is enabled File extensions are blocked Incoming mail enabled Firewall logs outgoing connections This enables the SMTP proxy in order to accept requests on port 25. If the transparent mode is enabled, all requests to destination port 25 will be intercepted and forwarded to the SMTP proxy without the need to change the configuration on your clients. Check this box if you would like to enable antivirus. The antivirus can be configured in the Proxy, SMTP, Antivirus link. Check this box if you would like to filter spam emails. The spam filter can be configured in the Proxy, SMTP, Spam section. Check this box if you would like to block mails that contain attached files with certain extensions. The file extensions can be configured in the Proxy, SMTP, File extensions section. If you have an internal mailserver and would like the SMTP proxy to forward incoming mails to your internal server you must enable this option. Tick this on if you want the firewall to log all established outgoing connections. Note that in some countries this may be illegal.
You need to configure the email domains for which the server should be responsible. You can add the list of domains in the Proxy, SMTP, Domains section.
To save and apply the settings you must click on the Save changes and restart button.
Main
41
Endian Unified Network Security
The Proxy Menu
Antivirus
The Antivirus is one of the main features of the SMTP proxy module. Three different actions can be performed when a mail that contains a virus is sent. It is also possible to configure an email address for notifications. Mode You can choose between three different modes how infected mails should be handled. • DISCARD: if you choose this mode the mail will be deleted • BOUNCE: if you choose this mode the email will not be delivered but bounced back to the sender in form of a non-delivery notification • PASS: if you choose this mode the mail will be delivered normally Email used for virus notifications Virus quarantine Here you can provide an email-address that will receive a notification for each infected email that is processed. Here you can specify what kind of quarantine you are using. Valid values are: • leaving this field empty will disable the quarantine. • virus-quarantine this stores infected mails on the firewall (in /var/amavis/virusmails), this is the default setting. • valid.email@address any valid email address will result in the infected emails being forwarded to that email address.
To save and apply the settings just click on the Save changes and restart button.
Spam
The antispam module knows several different ways to protect you from spam mails. In general spamassassin and amavisd-new are used to filter out spam. SpamAssassin provides several means of detecting spam. It has a score tally system where large numbers of inter-related rules fire off and total up a score to determine whether a message is spam or not. The page is divided into two sections: SMTP Proxy and greylisting. While most simple spam mails such as well known spam messages and mail sent by known spam hosts are blocked, spammers always adapt their messages in order to circumvent spam filters. Therefore it is absolutely necessary to always train the spam filter in order to reach a personalized and stronger filter (bayes).
The SMTP Proxy section contains the main configuration for the spam filter. Spam destination You can choose between three different modes how spam emails Email used for notification on spam alert Here you can provide an email-address that will receive a notification for each spam email that is processed.
Antivirus
42
Endian Unified Network Security
The Proxy Menu
Spam quarantine
Here you can specify what kind of quarantine you are using. Valid values are: • leaving this field empty will disable the spam quarantine. • spam-quarantine this stores spam mails on the firewall (in /var/amavis/ virusmails), this is the default setting. • valid.email@address any valid email address will result in the spam emails being forwarded to that email address.
Spam tag level Spam mark level Spam quarantine level Send notification only below level Spam subject
If SpamAssassin’s spam score is greater than this number X-Spam-Status and XSpam-Level headers are added to the email. If SpamAssassin’s spam score is greater than this number mails are tagged with the Spam subject and an X-Spam-Flag header. Mails that exceed this spam score will be moved to the quarantine. Send notification emails only if the spam score is below this number. Here you can specify a prefix for the subject of marked spam emails.
The second section contains configuration options for Endian Firewall’s greylisting. It contains the following options: greylisting enabled delay(sec) Whitelist recipient Whitelist client Check this box if you want to enable greylisting. The greylisting delay in seconds can be a value between 30 and 3600. You can whitelist email-addresses or whole domains in this textarea, e.g. test@ endian.com or the domain endian.com (one entry per line). You can whitelist a mailserver’s address here. This means that all emails coming from this server’s address will not be checked for spam (one entry per line).
Save the settings and restart the SMTP Proxy by clicking on the Save changes and restart button.
File Extensions
This allows you to block files with certain file extensions which may be attached to mails. Mails which contain such attachments will be recognized and the selected action will be performed for the respective mail. The following options can be configured: Blocked file extensions Banned files destination You can select one or more file extensions to be blocked. In order to select multiple files press the control key and click on the desired entries with your mouse. You can choose between three different modes how emails that contain such attachments
File Extensions
43
Endian Unified Network Security
The Proxy Menu
Banned files quarantine
Here you can specify what kind of quarantine you are using. Valid values are: • leaving this field empty will disable the quarantine for mails with blocked attachments. • spam-quarantine this stores mails with blocked attachments on the firewall (in /var/amavis/virusmails), this is the default setting. • valid.email@address any valid email address will result in the emails with blocked attachments being forwarded to that email address.
Whenever an email with an attachment that is blocked due to its file extension is found, a notification email is sent to this address. If you enable this option, files with double extensions will be blocked since these files are usually created to harm computers (blocked double extensions are composed of any extension followed by .exe, .com, .vbs, .pif, .scr, .bat, .cmd or .dll).
Save the settings and restart the SMTP Proxy by clicking on the Save changes and restart button.
Blacklists/Whitelists
An often used method to block spam e-mails are so called real-time blacklists (RBL). These lists are created, managed and updated by different organisations. If a domain or a sender IP address is listed in one of the blacklists, emails from it will be refused without further notice. This saves more bandwith than the RBL of the antispam module, since here mails will not be accepted and then handled, but dismissed as soon as a listed IP address is found. This dialogue also gives you the possibility to explicitely block (blacklist) or allow (whitelist) certain senders, recipients, IP addresses or networks. Warning Sometimes it may happen that IP addresses have been wrongly listed by the RBL operator. If this should happen, it may negatively impact your communication, to the effect that mail will be refused without the possibility to recover it. You also have no direct influence on the RBLs.
In the RBL section you can enable the following lists: bl.spamcop.net zen.spamhaus.org cbl.abuseat.org This RBL is based on submissions from its users (). This list replaces sbl-xbl.spamhaus.org and contains the Spamhaus block list as well as Spamhaus’ exploits block list and its policy block list. The CBL takes its source data from very large spamtraps.. This contains a list of Dynamic IP Address ranges (). DSBL is the Distributed Sender Blackhole List. It publishes the IP addresses of hosts which have sent special test emails to listme@listme.dsbl.org or another listing address. The main delivery method of spammers is the abuse of non-secure servers. For that reason many people want to know which servers are non-secure so they can refuse email from these servers. DSBL provides exactly that information (www. dsbl.org). This is a list which contains domains or IP networks whose administrators choose not to obey to the RFCs, the standards of the net ().
dul.dnsbl.sorbs.net list.dsbl.org
dsn.rfc-ignorant.org
Blacklists/Whitelists
44
Endian Unified Network Security
The Proxy Menu
ix.dnsbl.manitu.net
A publicly available DNS blacklist which is permanently regenerated from the IP blacklist and the spam hash table of the spam filter NiX Spam.
Save the settings and restart the SMTP Proxy by clicking the Save changes and restart button. Note Advanced users can modify the list by editing the file /var/efw/smtpd/default/RBL.
You can also create custom black- and whitelists by adding entries to the fields in the blacklist/whitelist section. The following textareas can be filled out in this section: sender whitelist sender blacklist recipient whitelist recipient blacklist client whitelist client blacklist Mails from these addresses or domains will always be accepted. Mails from these addresses or domains will never be accepted. Mails to these addresses or domains will always be accepted. Mails to these addresses or domains will never be accepted. Mails that have been sent from these IP addresses or hosts will always be accepted. Mails that have been sent from these IP addresses or hosts will never be accepted.
To save the changes and restart the SMTP proxy click on the Save changes and restart button. Examples for recipient/sender black- and whitelists: • a whole domain - example.com • only subdomains - .example.com • a single address - admin@example.com
Domains
If you have enabled incoming mail and would like to forward that mail to a mail server behind your Endian Firewall - usually set up in the GREEN or ORANGE zone - you need to declare the domains which will be accepted by the SMTP proxy and to which of your mail servers the incoming mail should be forwarded to. It is possible to specify multiple mail servers behind Endian Firewall for different domains. It is also easily possible to use Endian Firewall as a backup MX. Domain Internal mailserver The domain this mailserver is responsible for. The address of the mailserver.
To add a domain click the Add button. To apply the changes the SMTP proxy has to be restarted by clicking on the Save changes and restart button. Existing entries can be edited and deleted by clicking on the respective icon (as described in the legend at the bottom of the page).
Mail Routing
This option allows you to send a blind carbon copy (BCC) to a specified email address. This option will be applied to all emails that are sent to the specified recipient address or are sent from the specified sender address.
Domains
45
Endian Unified Network Security
The Proxy Menu
Direction Mail address BCC address
Specify whether you want to apply this copying process for a certain Sender or Recipient. Here you specify the mail address of the recipient or sender (depending on what you have chosen above). The mail address where you want to send the copy of the emails.
The mail route is saved by clicking on the Add mail route button. Existing entries can be changed or deleted by clicking on the respective icons which are explained in the legend at the bottom of the page.
Warning Neither the sender nor the recipient will be notified of the copy. In most countries of this planet it is highly illegal to read other people’s private messages. Do not abuse this feature.
Advanced
On this page you can configure the advanced settings of the SMTP proxy. In the Smarthost section the following options can be configured: Smarthost enabled for delivery Address of smarthost Authentication required Username Password Authentication method Check this box if you want to use a smarthost to deliver emails. Here you can enter the address of the smarthost. Check this box if the smarthost requires authentication. This username is used for authentication. This password is used for authentication Here you can choose the authentication methods that are supported by your smarthost. PLAIN, LOGIN, CRAM-MD5 and DIGEST-MD5 are supported.
The settings are saved and applied by clicking on the Save changes and restart button. If you have a dynamic IP address because you are using an ISDN or an ADSL dialup internet connection you might get problems sending mails to other mail servers. More and more mail servers check whether your IP address is listed as a dynamic IP address and therefore might refuse your emails. Hence it could be necessary to use a smarthost for sending emails. A smarthost is a mail server which your SMTP proxy will use as outgoing SMTP server. The smarthost needs to accept your emails and relays them for you. Normally you may use your provider’s SMTP server as smarthost, since it will accept to relay your emails while other mail servers may not.
In the IMAP Server for SMTP Authentication section you can configure which IMAP server should be used for authentication when sending emails. Most of all this is important for SMTP connections that are opened from the RED zone. The following settings can be configured: Authentication enabled IMAP server Number authentication daemons Check this box if you want to enable IMAP authentication. Here you can enter the address of the IMAP server. This settings defines how many concurrent logins should be possible through your Endian Firewall.
Advanced
46
Endian Unified Network Security
The Proxy Menu
The settings are saved and applied by clicking on the Save changes and restart button. In the Advanced settings additional parameters can be defined. The options are: smtpd HELO required reject invalid hostname reject non-FQDN sender reject non-FQDN recipient reject unknown sender domain reject unknown recipient domain SMTP HELO name Always BCC address smtpd hard error limit If this is enabled the connecting client must send a HELO (or EHLO) command at the beginning of an SMTP session. Reject the connecting client when the client HELO or EHLO parameter supplies an invalid hostname. Reject the connecting client if the hostname supplied with the HELO or EHLO command is not a fully-qualified domain name as required by the RFC. Reject the request when the RCPT TO address is not in fully-qualified domain name form, as required by the RFC. Reject the connection if the domain of the sender email address has no DNS A or MX record. Reject the connection if the domain of the recipient email address has no DNS A or MX record. The hostname to send with the SMTP EHLO or HELO command. The default value is the IP of RED. Specify a hostname or IP address. Optionally you can enter an email address here that will receive a blind carbon copy of each message that goes through the SMTP proxy. The maximum number of errors a remote SMTP client is allowed to produce without delivering mail. The SMTP Proxy server disconnects once this limit is exceeded (default 20). The language in which error messages should be sent. The maximum size a single message is allowed to have.
Language email templates maximal email size
The settings are saved and applied by clicking on the Save changes and restart button.
DNS
Select Proxy from the menu bar at the top of the screen, then select DNS from the submenu on the left side of the screen. In this section you can change the settings for the DNS proxy. It is divided into three subpages.
DNS proxy
On this page you can enable the transparent DNS proxy for the GREEN, ORANGE and BLUE zones (if they are active). You can also define for which source addresses the proxy will be bypassed in the lower left textarea. These sources can be IP addresses, addresses of subnets and MAC addresses (one per line). In the lower right textarea you can enter destinations for which the proxy is bypassed. In this textarea IP addresses and addresses of subnets can be entered. To save the settings you must click on the Save button.
DNS proxy
47
Endian Unified Network Security
The Proxy Menu
Custom nameserver
On this page you can add custom nameservers for specific domains. You can add a new custom nameserver by clicking on the Add new custom name server for a domain link. To change an existing entry you have to click on the pencil icon in its row. Clicking on a trash can icon will delete the custom nameserver in that row. The following details can be saved for custom nameservers: Domain DNS Server Remark The domain for which you want to use the custom nameserver. The IP address of the namserver. An additional comment you might want to save.
Anti-spyware
On this page you can configure how your Endian Firewall should react if a domain name has to be resolved that is known to be used by spyware. The options that can be set are: Enabled Redirect requests to spyware listening post Whitelist domains Blacklist domains Spyware domain list update schedule If enabled these requests will be redirected to localhost. If this is enabled the requests will be redirected to the spyware listening post instead of localhost. Domain names that are entered here are not treated as spyware targets regardless of the list’s content. Domain names that are entered here are always treated as spyware targets regardless of the list’s content Here you can specify how often the spyware domain list should be updated. Possible values are Hourly, Daily, Weekly and Monthly. By moving the mouse cursor over the respective question mark you can see when exactly the updates will be performed.
The settings are saved and applied by clicking on the Save button.
Custom nameserver
48
Endian Unified Network Security
The VPN Menu
The VPN Menu: • OpenVPN server: set up the OpenVPN server so that clients (be it Road Warriors or other Endian Firewalls in a Gateway-to-Gateway setup) can connect to your GREEN zone through a VPN tunnel • OpenVPN client (Gw2Gw): set up the client-side of a Gateway-to-Gateway setup between two or more Endian Firewalls • IPsec: set up IPsec-based VPN tunnels
Each link will be explained individually in the following sections.
OpenVPN server
Select VPN from the menu bar at the top of the screen, then select OpenVPN server from the submenu on the left side of the screen.
Server configuration.
Server configuration
49
Endian Unified Network Security
The VPN Menu
Accounts
This panel contains the list of OpenVPN accounts. Cick on Add account to add an account. The following parameters can be specified for each account: Account information Username Password / Verify password user login name specify password (twice) Client routing Direct all client traffic through the VPN server if you check this, all the traffic from the connecting client (regardless of the destination) is routed through the uplink of the Endian Firewall that hosts the OpenVPN server. The default is to route traffic with a destination that is not part of any of the internal Endian zones (such as internet hosts) through the client’s uplink (advanced users only) normally, when a client connects, tunneled routes to networks that are accessible via VPN are added to the client’s routing table - check this box if you do not want this to happen and are prepared to manipulate your clients’ routing tables manually only needed if you want to use this account as client in a Gateway-to-Gateway setup: enter the networks behind this client you would like to push to the other clients add your own network routes to be pushed to the client here (overrides all automatically pushed routes) Custom push configuration Static ip addresses Push these nameservers Push domain normally, dynamic IP addresses are assigned to clients, you can override this here and assign a static address assign nameservers on a per-client basis here assign search domains on a per-client basis here
Don’t push any routes to client
Networks behind client Push only these networks.
Advanced
Use this panel to change advanced settings. Among other things, certificate-based authentication (as opposed to password-based) can be set up in this section. The first section has some generic settings regarding the server: Port / Protocol port 1194 / protocol UDP are the default OpenVPN settings. It is a good idea to keep these values as they are - if you need to make OpenVPN accessible via other ports (possibly more than one), you can use port forwarding (see Firewall, Port Forwarding). A use case for setting TCP as the protocol is when you want to access the OpenVPN server through a third-party HTTP proxy.
Accounts
50
Endian Unified Network Security
The VPN Menu
Block DHCP responses coming from tunnel Don’t block traffic between clients
check this if you’re getting DHCP responses from the LAN at the other side of the VPN tunnel that conflict with your local DHCP server the default is to isolate clients from each other, check this if you want to allow traffic between different VPN clients
In the second section you can change the global push options. Push these networks Push these nameservers Push domain if enabled, the routes to the specified networks are pushed to the connected clients if enabled, the specified nameservers are pushed to the connected clients if enabled, the specified search domains are pushed to the connected clients.
VPN client download
Click on the link to download the Endian VPN client for Microsoft Windows, MacOS X and Linux from Endian Network.
VPN client download
51
Endian Unified Network Security
The VPN Menu
OpenVPN client (Gw2Gw)): Connection name Connect to Upload certificate just a label for this connection the remote OpenVPN server’s fully qualified domain name and port (such as efw. example.com:port) - the port is optional and defaults to 1194 if the server is configured to use PSK authentication (password/username), you must upload the server’s host certificate (the one you get from the Download CA certificate link at the server). Otherwise, if you use certificate-based authentication, you must upload the server’s PKCS#12 file (you can get it from the Export CA as PKCS#12 file link on the server (advanced section of the OpenVPN submenu). specify the “Challenge password” if you supplied one to the certificate authority before or during the creation of the certificate if the server is configured to use PSK authentication (password/username) or certificate plus password authentication, give the username and password of the OpenVPN server account here your comment
PKCS#12 challenge password Username / Password
Remark
Click on Advanced tunnel configuration to see more options: Fallback VPN servers specify one or more (one per line) fallback OpenVPN servers in the form efw. example.com:port (the port is optional and defaults to 1194). If the connection to the main server fails, a fallback server will take over. “routed” (the client firewall acts as a gateway to the remote LAN) or “bridged” (as if the client firewall was part of the remote LAN). Default is “routed”. check this if you are getting DHCP responses from the LAN at the other side of the VPN tunnel that conflict with your local DHCP server check this if you want to hide the clients connected through this Endian Firewall behind the firewall’s VPN IP address. Doing so will prevent incoming connection requests to your clients. UDP (default) or TCP. Set to TCP if you want to use a HTTP proxy (next option). if your Endian Firewall can access the internet only through an upstream HTTP proxy it is still possible to use it as an OpenVPN client in a Gateway-to-Gateway setup. However, you must use the TCP protocol for OpenVPN on both sides. Fill in the HTTP proxy account information in these text fields: proxy host (such as proxy.example.com:port, where port defaults to 8080), username and password. You can even use a forged user agent string if you want to camouflage your Endian Firewall as a regular web browser.
Connection type Block DHCP responses coming from tunnel NAT
Protocol HTTP proxy
Click the Save button to save the tunnel settings. You can at any moment disable/enable, edit or delete tunnels from the list by clicking on the appropriate icon on the right side of the table (see the icon legend at the bottom).
VPN client download
52
Endian Unified Network Security
The VPN Menu
IPsec: Local VPN hostname/IP Enabled VPN on ORANGE VPN on BLUE Override default MTU Debug options Here you can enter the external IP (or a fully qualified domain name) of your IPsec host. By ticking this checkbox you enable IPsec. If this is enabled it is possible for a user to connect to the VPN from the ORANGE zone. If this is enabled it is possible for a user to connect to the VPN from the BLUE zone. If you want to override the default maximum transmission unit you can specifiy the new value here. Usually this is not needed. Ticking checkboxes in this section will increase the amount of data that is logged to /var/log/messages.: Name Enabled Interface Local subnet Local ID Remote host/IP Remote subnet Remote ID Dead peer detection action Remark Edit advanced settings the name of this connection if checked, this connection is enabled this is only available for host-to-net connections and specifies to which interface the host is connecting the local subnet in CIDR notation, e.g. 192.168.15.0/24 an ID for the local host of the connection the IP or fully qualified domain name of the remote host this is only available for net-to-net connections and specifies the remote subnet in CIDR notation, e.g. 192.168.16.0/24 an ID for the remote host of this connection what action should be performed if a peer disconnects a remark you can set to remember the purpose of this connection later tick this checkbox if you want to edit more advanced settings
VPN client download
53
Endian Unified Network Security
The VPN Menu
In the Authentication section you can configure how authentication is handled. Use a pre-shared key Host-to-Net connections. Some roadwarrior IPSec implementations do not have their own CA. If they wish to use IPSec’s built in CA, they can generate what a so called certificate request. This partial X.509 certificate must be signed by a CA. During the certificate request upload, the request is signed and the new certificate will become available on the VPN’s main web page. In this case, the peer IPSec has a CA available for use. Both the peer’s CA certificate and host certificate must be included in the uploaded file. Choose this option to upload a PKCS12 file. If the file is secured by a password you must also enter the password in the text field below the file selection field. You can also create a new X.509 certificate. In this case, complete the required fields. Optional fields are indicated by red dots. If this certificate is for a Net-to-Net connection, the User’s Full Name or System Hostname field must contain fully qualified domain name of the peer. The PKCS12 File Password fields ensure that the host certificates generated cannot be intercepted and compromised while being transmitted to the IPSec peer.
Upload a certificate request
Upload a certificate Upload PKCS12 file PKCS12 file password Generate a certificate
If you have chosen to edit the advanced settings of this connection, a new page will open after you hit the Save button. In this page you can set Advanced connection settings. Unexperienced users should not change the settings here: IKE encryption IKE integrity IKE group type IKE lifetime ESP encryption ESP integrity ESP group type ESP key lifetime IKE aggressive mode allowed Perfect Forward Secrecy Negotiate payload compression Here you can specify which encryption methods should be supported by IKE (Internet Key Exchange). Here you can specifiy which algorithms should be supported to check the integrity of packets. Here you can specify the IKE group type. Here you can specify how long IKE packets are valid. Here you can specify which encryption methods should be supported by ESP (Encapsulating Security Payload). Here you can specify which algorithms should be supported to check the integrity of packets. Here you can specify the ESP group type. Here you can specify how long an ESP key should be valid. Check this box if you want to enable IKE aggressive mode. You are encouraged NOT to do so. If this box is checked perfect forward secrecy is enabled. Check this box, if you want to use payload compression.
Finally save the settings by clicking on the Save button.
VPN client download
54
Endian Unified Network Security
The VPN Menu: Organization name The organization name you want to use in the certificate. For example, if your VPN is tying together schools in a school district, you may want to use something like “Some School District.” This is used to identify the certificate. Use a fully qualified domain name or the firewall’s RED IP address. Here you can enter your email address. Here you can enter a department name. Here you can enter the name of your town or your city. Here you can enter the name of the state or province you are living in. Choose your country here. Here you can specify an alternative hostname for identification.
Endian Firewall hostname Your email address Your department City State or province Country Subject alt name
The certificates are created after clicking on the Generate root/host certificates button. If you already created certificate somewhere else earlier you can upload a PKCS12 file in the lower section of the page instead of generating new certificates. Upload PKCS12 file PKCS12 file password Open the file selection dialog and select your PKCS12 file here. If the file is password protected you must enter the password here.
You can upload the file by clicking on the Upload PKCS12 file button.
VPN client download
55
Endian Unified Network Security
The VPN Menu: • In the VPN, IPsec menu enable IPsec and specify 123.123.123.123 as Local VPN hostname/IP. • After saving click on the Generate host/root CA certificate button (unless you already generated these certificates before) and compile the form. • Download the host certificate and save it as fw_a_cert.pem. • In the Connection status and control section click on the Add button. • Select Net-to-Net. • Enter 124.124.124.124 in the Remote host/IP field, 192.168.15.0/24 as Local subnet and 192.168.16.0/24 as Remote subnet. • In the Authentication section select Generate a certificate and compile the form, make sure to set a password. • After saving, download the PKCS12 file and save it as fw_a.p12.
The following steps have to be performed on firewall B: • In the VPN, IPsec menu enable IPsec and specify 124.124.124.124 as Local VPN hostname/IP. • After saving click on the Generate host/root CA certificate button (if you already generated them earlier you must Reset the previous certificates). • Do not compile anything in the first section! Instead upload the fw_a.p12 file and enter the password you set on firewall A. • Click on Add in the Connection status and control section. • Select Net-To-Net. • Enter 123.123.123.123 in the Remote host/IP field, 192.168.16.0/24 as Local subnet and 192.168.15.0/24 as Remote subnet. • Select Upload a certificate and upload the fw_a_cert.pem you have created on firewall A.
VPN client download
56
Endian Unified Network Security
The Hotspot Menu
The Hotspot Menu
Select Hotspot from the menu bar at the top of the screen. Endian Hotspot is a powerful hotspot that can be used for wireless connections as well as for wired LAN connections. The hotspot’s captive portal will capture all connections passing through the BLUE zone, no matter what device they come from. Therefore the hotspot does not work if the BLUE zone is disabled. The hotspot can be enabled or disabled by clicking on the main switch on this page. If the hotspot is enabled a link to its administration interface is shown. Clicking on the link opens a new browser window with the hotspot administration interface. Although this interface shares its design with the firewall, it contains a whole new menu structure. • Hotspot: account and ticket management, statistics and settings • Dialin: current connection state of the uplinks • Password: change the password of the hotspot user • Allowed Sites: sites that can be accessed without login
Hotspot
This section includes subpages to manage accounts, tickets and ticket rates. Statistics can be viewed as well as current and previous connections. Finally it is possible to change the hotspot’s settings here.
Accounts
On this page it is possible to administer user accounts. By default a list of available accounts is shown. This list can be sorted by Username/MAC, Name, Creation date or by the date until which the user account is valid. It is also possible to reverse the sort order by checking Reverse Order and to Hide disabled accounts as well as to search for accounts. Pagination is also available if the number of results exceeds the number of results per page that has been defined in Hotspot, Settings. Every user can be edited by clicking on the Edit link in his row (for details see Hotspot, Accounts, Add new account). Tickets can be added to accounts by clicking on the Add ticket link. It is also possible to view the balance and the connection log of an account by clicking on the Balance and Connections links respectively. Add a new account On this page you can create a new account or an existing account can be modified. The information is split into two parts: Login information and Account information. To create an account you can fill the following fields: Login information Username Password In this field you have to enter the username. In this field you can enter the password for the new account. If you do not have the time to think of an adequate password just leave this field empty and the password will be autogenerated. The date until the account will be valid. If you want to change it you can either enter the new date manually or click on the ... button and select the new date from the calendar popup. This checkbox specifies if the account is enabled or not. If this is ticked on the account is active. If you want to disable a user tick this checkbox off. Here you can select the user’s native language if available. Otherwise English should be a good choice.
Valid until
Active? Language
Accounts
57
Endian Unified Network Security
The Hotspot Menu
Bandwidth limiting Static IP address
If you do not want to use default values here you can tick the checkbox and specify an upload and download limit for the account in kb/s. If you want this account to always use the same IP address you can tick this checkbox and enter the IP address you want.
Account information Title Firstname Lastname Country City ZIP Street City of birth Birthdate Document ID Document Type Document issued by Description The person’s title (e.g. Mrs., Dr.) The user’s first name. The user’s last name. The country the user comes from. The city or town the user comes from. The ZIP of the user’s hometown. The street in which the user lives. The city or town in which the user was born. The user’s birthdate. The ID of the document that has been used to identify the user. The type of document that has been used to identify the user. The issuer of the document (e.g. City of New York) Additional description for the account.
The account information is stored by clicking on the Save button below the form. When editing an existing user it is also possible to print the user information by clicking on the Print button. On the right side of the screen you will notice the Tickets section. If you want to add a new ticket to the user just select the appropriate ticket-type and hit the Add button. Below you will notice a list of all tickets for this user with the following information: Ticket Type Creation date Action The type of the ticket The date on which the ticket has been created If the ticket has not yet been used you will be able to Delete it here by clicking on the appropriate link.
Add MAC-based account This page is used just like the Hotspot, Accounts, Add new account page. The only difference is that for this type of accounts username and password are not needed. Instead the MAC-Address of a computer’s network interface is entered and will be used to identify the account. Import Accounts It is possible to import accounts from a CSV (comma separated values) file. By clicking on the Browse.. button a file selection dialog is opened. After you have selected the file you can specify whether The first line of the CSV file contains the column titles by ticking or not ticking the checkbox. You should also add a Delimiter in the appropriate field. Usually a delimiter is either a semicolon (;) or a comma (,). If you do not specify a delimiter the system will automatically try to figure out which character has been used as the delimiter. To finally
Accounts
58
Endian Unified Network Security
The Hotspot Menu
import the CSV file you must click on the Import accounts button. Export Accounts as CSV When you click on this link a download dialog will be opened. The download is a CSV file that contains all the account data and can later be re-imported from the Hotspot, Accounts, Import Accounts page.
Quick Ticket
On this page you can create a new user account with a ticket of your choice already assigned. The username and password are automatically generated. All you have to do is click on the ticket rate you wish to use and the user will be created. The Username, Password and Rate are then displayed on the screen. It is also possible to print this information by clicking on the Print information button.
Ticket rates
Endian Firewall gives you the possibility to specify more than one ticket rate. You can even specify if you want a rate to be post-paid or pre-paid. It is also possible to create different rates for both types. This is useful if you want to sell different pre-paid types e.g. 4 pre-paid 15 minutes tickets should be more expensive than 1 pre-paid 1 hour ticket. When opening the page a list with all defined ticket rates is shown. In this list you can see the different ticket rates, the following are the columns: Name Code Hourly Price Actions The name you gave to the ticket rate. This is the ASA code for your ticket rate. Although this can be used only for the ASA hotel management system the field is mandatory. This is the hourly price you have specified. Here you can choose to Edit or Delete a ticket rate by clicking on the respective link.
When editing or adding a ticket rate the Rate Name, Rate Code (ASA), Unit minutes (duration of one unit of this rate in minutes) and the Hourly price of this unit have to be specified. To save the ticket rate click on the Save button. The price per unit is calculated from unit minutes and the hourly price.
Statistics
On this page you can see statistics about the hotspot usage and accounting information. Filter Period This is the standard view. It shows a list of accounts and the following data for each account: Username Amount used Payed Duration Traffic The username or MAC address of the account. The amount of money that has been used by this account. The money that this user has already paid. The duration that this user has been connected to the hotspot. The traffic that has been created by this account.
At the bottom of the page a summary over all accounts is shown. At the top of the page it is possible to enter a start and an end date. By entering these dates into the From and Filter button the page will be reloaded with statistics between these two dates only.
Quick Ticket
59
Endian Unified Network Security
The Hotspot Menu
Clicking on a username opens a page with details about the unpaid connections of this user. If a user pays, it is enough to enter the amount of money he paid into the Amount field and click on the Bill button. It is also possible to print these statistics by clicking on the >>> Print button. Open Accounting Items This page displays a list of statistics like Hotspot, Statistics, Filter Periods but with one additional column. The Amount to pay column shows the amount of money for each account that has not been paid yet.
Active Connections
On this page you can see all currently active connections on the hotspot. The list contains the following columns: Username Description Authenticated Duration IDLE Time IP Address MAC Address Action The username of the connected account. The description of the connected account. Shows whether the connection is authenticated or not. The amount of time since this connection has been established. The amount of time that the account has been connected without packets from this account passing through the hotspot. The IP address that is connected to the hotspot. The MAC address of the connected interface. Every active connection can be closed by clicking on the Close link in this column.
Connection Log
On this page it is possible to see and filter previous connections. Like in the Hotspot, Active Connections page the list contains various columns. The columns are: Username IP Address MAC Address Connection Start Connection Stop Download Upload Duration The username of the connection. The IP address that was used for the connection. The MAC address of the connected interface. The start time of the connection. The end time of the connection. The amount of data that has been downloaded during this connection. The amount of data that has been uploaded during this connection. The duration of the connection.
The list can be sorted by any of these columns by selecting the respective entry from the Sort by select box. The sort order can be reversed by ticking the Reverse Order checkbox. It is also possible to filter connections by entering a Start Date or an End Date in the respective fields an then clicking on the Filter button. If more results than specified in Hotspot, Settings are found, pagination is enabled and you can browse through the pages by clicking on the First, Previous, Next and Last links above the list. Export as CSV The connection logs can be downloaded by clicking on the Export as CSV link. The download is in CSV format and contains all relevant information.
Active Connections
60
Endian Unified Network Security
The Hotspot Menu
Settings
On this page it is possible to change the hotspot’s settings. The page contains two subpages for System settings and settings regarding the different Languages. System This page consists of two subsections. The first subsection is called Global Settings. This subsection lets you define default values for connections as well as for the administration interface. Homepage after successful login Currency Logout user on Idle-Timeout Default account lifetime Items per page Bandwidth limiting This lets you specify which page to open after a user has logged in successfully. Here you can specify the symbol of your currency. In this dropdown you can select after how many minutes a user will be logged out when inactive. Here you can enter the number of days an account will be valid by default. This value defines how many items will be displayed on each page in the hotspot administration interface. This option lets you specify the default upload and download limits per user in kb/s. If these fields are left empty no limit is applied.
The second subsection is called Endian Hotspot API. If you want to integrate the hotspot of Endian Firewall into an already existing system of yours, you can set the parameters here. Mode Here you can choose whether your system uses Endian’s Generic API/JSON interface or the ASA jHotel interface. The ASA jHotel interface is only needed by hotels that use the ASA jHotel hotel management software whereas the generic API can be implemented in other software systems. The other options depend on the selection you made here. API enabled Accounting URL This option is only visible if you chose Generic API/JSON in the selectbox above. The API is enabled if this checkbox is ticked. This option is only visible if you chose Generic API/JSON in the selectbox above. The hotspot will send accounting information to this URL. If you do not want the hotspot to handle accounting you can leave this field empty. This option is only visible if you chose Generic API/JSON in the selectbox above. If the URL you provided above requires HTTP authentication you must tick this checkbox. Two new textfields will appear where you can enter the Username and Password respectively. This option is only visible if you chose ASA jHotel in the selectbox above. By ticking this checkbox you can enable the ASA jHotel interface. This option is only visible if you chose ASA jHotel in the selectbox above. Here you can enter the URL of your ASA jHotel interface. This option is only visible if you chose ASA jHotel in the selectbox above. If the hotel guests should be able to register themselves this checkbox has to be ticked.
Accounting URL requires HTTP Authentication ASA jHotel Interface enabled ASA jHotel URL Allow guest registration (SelfService)
Settings
61
Endian Unified Network Security
The Hotspot Menu
Guest registration default rate
This option is only visible if you chose ASA jHotel in the selectbox above. In this selectbox you can select the default rate that will be applied to new accounts.
Finally the options can be saved by clicking on the Save button. Languages On this page all language-dependent options can be set. In the first section (Supported Languages) of this page it is possible to choose the Supported Languages for your hotspot. The languages must be selected in the multi-select box and then saved by clicking on the Store button. In the second section (Templates) it is possible to modify the two templates (Welcome Page, Account Print) for every language. The language can be chosen in the Edit language selectbox whereas the template type can be selected from the Template selectbox. The Welcome Page template is presented to the user before logging in while the Account Print template is printed and handed out to the users after their registration. The content of the templates can be changed with the help of a fully featured WYSIWYG (what you see is what you get) editor. In the Account Print template it is also possible to use placeholders which will then be replaced with real data when a user is registered. The templates can be saved by clicking on the Store button below the editor. The third section is called Strings and contains translations for strings that are used in the webinterface of the hotspot. The translations can be changed and new translations can be added. This is done by selecting the language from the Edit language selectbox and then filling out the textfields. The translations are saved by clicking on the Store button. Account Print template placeholders: $title - the title of the account holder $firstname - the first name of the account holder $lastname - the last name of the account holder $username - the username of the account $password - the password of the account
Dialin
On this page it is possible to see and manage the status of the uplinks like in the System, Home section.
On this page you can change the password for the hotspot account. Just enter the password in the Password field and confirm it in the Again field. The password is stored after hitting the Save button.
Allowed sites
On this page you can define which sites should be accessible without being authenticated. You can also specify whether all IPs should be able to connect to the hotspot or just IPs that belong to the BLUE zone. To allow connections from any IP it is necessary to tick the Enable AnyIP checkbox. Sites that can be accessed without authentication have to be entered in the textarea below. One site per line is allowed. A site can be a normal domain name or a string of the format protocol:IP[/mask]:port, e.g. or tcp:192.168.20.0/24:443. The settings are stored after clicking on the Save button.
Dialin
62
Endian Unified Network Security
The Logs Menu
The Logs Menu
Select Logs from the menu bar at the top of the screen. Endian Firewall keeps logs of all firewall activities. The logs can be viewed and exported from this section. Following is a list of links that appear in the submenu on the left side of the screen: • Live: get quick, live view of the latest log entries as they are being generated • Summary: get daily summaries of all logs (generated by logwatch) • System: system logs (/var/log/messages) filtered by source and date • Service: logs from the intrusion detection system (IDS), OpenVPN and antivirus (ClamAV) • Firewall: logs from the IP firewall rules • Proxy: logs from the HTTP proxy, the SMTP proxy and the SIP proxy • Settings: specify log options such as how long log files should be kept
Each link will be explained individually in the following sections.
Live
Select Logs from the menu bar at the top of the screen, then select Live from the submenu on the left side of the screen. The live log viewer shows you a list of all log files that are available for real time viewing. You can select the logs you want to see by ticking the checkboxes. After clicking on the Show selected logs button a new window with the selected logs will open. If you want to open a single log file you can click on the Show this log only link in the respective row. This new window contains the main live log viewer. The viewer is configured at the top of the page in the Settings. On the right side the list of the logs that are currently displayed is shown. On the left side some additional control elements are shown. These control elements are: Filter Additional filter Pause output Highlight Highlight color Autoscroll Only log entries that contain the expression in this field are shown. Like the filter above. Only that this filter is applied after the first filter. Clicking on this button will prevent new log entries from appearing on the live log. However, after clicking the button once more all new entries will appear at once. All log entries that contain this expression will be highlighted in the chosen color. By clicking on the colored square you can choose the color that will be used for highlighting. This option is only available if in the Logs, Settings section Sort in reverse chronological order is turned off. In this case new entries will always be shown at the bottom of the page. If the checkbox is ticked the scrollbar will always be at the bottom of the Live logs section. If this is disabled the Live logs section will show the same entry no matter how many new entries are appended at the bottom.
If you want to show other log files you can click on the Show more link right below the list of log files that are shown. The controls will be replaced by a table in which you can select the log files you want to see by checking or unchecking the respective checkboxes. If you want to change the color of a log file you can click on the color palette of that log type and then choose a new color. To show the controls again you can click on one of the Close links below the table and below the list of shown log files. Finally you can also increase or decrease the window size by clicking on the Increase height or Decrease height buttons respectively.
63
Endian Unified Network Security
The Logs Menu
Summary
Select Logs from the menu bar at the top of the screen, then select Summary from the submenu on the left side of the screen. On this page you can see your Endian Firewall’s log summary. The following control elements are available: Month Day << / >> Update Export Here you can select the month of the date that should be displayed. Here you can select the day of the date that should be displayed. By using these controls you can go one day back or forth in the history. By clicking this button the page content will be refreshed. Clicking this button will open a plain text file with logwatches output.
Depending on the settings in the Log summaries section of the Logs, Settings page you will see more or less output on this page.
System
Select Logs from the menu bar at the top of the screen, then select System from the submenu on the left side of the screen. In this section you can browse through the various system log files. You can search for log entries in the Settings section by using the following controls: Section Filter Jump to Date Jump to Page Update Export Here you can choose the type of logs you want to display..
Service
Select Logs from the menu bar at the top of the screen, then select Service from the submenu on the left side of the screen. The service logs that can be seen here are those of the IDS (Intrusion Detection System), OpenVPN and ClamAV. All these log sites share the same functionality: Filter Jump to Date Jump to Page Update Export.
64
Endian Unified Network Security
The Logs Menu
Firewall
Select Logs from the menu bar at the top of the screen, then select Firewall from the submenu on the left side of the screen. The firewall log search can be controlled like the search for service logs in Logs, Service. Please refer to that section for details.
Proxy
Select Logs from the menu bar at the top of the screen, then select Proxy from the submenu on the left side of the screen.
HTTP
Filter Source IP Ignore filter Enable ignore filter Jump to Date Jump to Page Restore defaults Update Export Only lines that contain this expression are shown. Show only log entries from the selected source IP. Lines that contain this expression are not shown. Tick this checkbox if you want to use the ignore filter. Directly show log entries from this date. Directly show log entries from this page in your result set (how many entries per page are shown can be configured on the Logs, Settings page). Clicking on this button will restore the default search parameters. By clicking on this button will perform the search. Clicking on this button will export the log entries to a text file.
It is possible to see older and newer entries of the search results by clicking on the Older and Newer buttons right above the search results.
Content filter
The content filter proxy log search can be controlled like the search for http proxy logs in Logs, Proxy, HTTP. Please refer to that section for details.
HTTP report
On this page you can enable the proxy analysis report generator by ticking the Enable checkbox and clicking on Save afterwards. Once the report generator is activated you can click on the Daily report, Weekly report and Monthly report links for detailed HTTP reports.
SMTP
The SMTP proxy log search can be controlled like the search for service logs in Logs, Service. Please refer to that section for details.
SIP
The SIP proxy log search can be controlled like the search for service logs in Logs, Service. Please refer to that section for details.
HTTP
65
Endian Unified Network Security
The Logs Menu
Settings
Select Logs from the menu bar at the top of the screen, then select Settings from the submenu on the left side of the screen. On this page you can configure global settings for the logging of your Endian Firewall. The following options can be configured: Number of lines to display Sort in reverse chronological order Keep summaries for __ days Detail level Enabled (Remote Logging) Syslog server Log packets with BAD constellation of TCP flags Log NEW connections without SYN flag Log accepted outgoing connections Log refused packets This defines how many lines are displayed per log-page. If this is enabled the newest results will be displayed first. This defines for how many days log summaries should be stored. This defines the detail level for the log summary. Check this box if you want to enable remote logging. This specifies to which remote server the logs will be sent. The server must support the latest IETF syslog protocol standards. If this is enabled the firewall will log packets with a bad constellation TCP flag (e.g. all flags are set). If this is enabled new TCP connections without SYN flag will be logged. If you want to log all accepted outgoing connections this checkbox must be ticked. If you enable this all refused packets will be logged by the firewall.
To save the settings click on the Save button.
SIP
66
Endian Unified Network Security
GNU Free Documentation Licenseoffor modi-
GNU Free Documentation License / Version 1.2, November 2002
67
Endian Unified Network Security
GNU Free Documentation License
fication.
GNU Free Documentation License / Version 1.2, November 2002
68
Endian Unified Network Security
GNU Free Documentation License.
GNU Free Documentation License / Version 1.2, November 2002
69
Endian Unified Network Security
GNU Free Documentation License / Version 1.2, November 2002
70
This action might not be possible to undo. Are you sure you want to continue?
We've moved you to where you read on your other device.
Get the full title to continue reading from where you left off, or restart the preview. | https://www.scribd.com/doc/170293259/Endian-Document | CC-MAIN-2016-40 | refinedweb | 27,289 | 59.74 |
>>.
In last project we have simply Interfaced Stepper Motor with Arduino, where you can rotate the stepper motor by entering the rotation angle in Serial Monitor of Arduino. Here in this project, we will Rotate the Stepper Motor using Potentiometer and Arduino, like if you turn the potentiometer clockwise then stepper will rotate clockwise and if you turn potentiometer anticlockwise then it will rotate anticlockwise..
Circuit Diagram for Rotating Stepper Motor using Potentiometer:
The circuit Diagram for the Controlling Stepper Motor using Potentiometer and Arduino. A potentiometer is connected to A0 based in whose values we will rotate the Stepper motor. Driver complete demonstration video can be found at the end of this tutorial.
In this tutorial we are going to program the Arduino in such a way that we can turn the potentiometer connected to pin A0 and control the direction of the Stepper motor. clockwise we can use the following line.
stepper.step(1);
To make the motor move one step anti-clockwise we can use the following line.
stepper.step(-1);
In our program we will read the value of the Analog pin A0 and compare it with previous value (Pval). If it has increased we move 5 steps in clockwise and if it is decreased then we move 5 steps in anti-clockwise.
potVal = map(analogRead(A0),0,1024,0,500); if (potVal>Pval) stepper.step(5); if (potVal<Pval) stepper.step(-5); Pval = potVal;
Working:
Once the connection is made the hardware should look something like this in the picture below.
Now, upload the below program in your Arduino UNO and open the serial monitor. As discussed earlier you have to rotate the potentiometer to control the rotation of the Stepper motor. Rotating it in clockwise will turn the stepper motor in clockwise direction and vice versa.
Hope you understood the project and enjoyed building it. The complete working of the project is shown in the video below. If you have any doubts post them on the comment section below or on our forums.
#include <Stepper.h> // Include the header file
// change this to the number of steps on your motor
#define STEPS 32
// create an instance of the stepper class using the steps and pins
Stepper stepper(STEPS, 8, 10, 9, 11);
int Pval = 0;
int potVal = 0;
void setup() {
Serial.begin(9600);
stepper.setSpeed(200);
}
void loop() {
potVal = map(analogRead(A0),0,1024,0,500);
if (potVal>Pval)
stepper.step(5);
if (potVal<Pval)
stepper.step(-5);
Pval = potVal;
Serial.println(Pval); //for debugging
} | https://circuitdigest.com/microcontroller-projects/stepper-motor-control-with-potentiometer-arduino | CC-MAIN-2018-26 | refinedweb | 421 | 55.24 |
Keytar
Hi
I’m trying to figure out how to integrate keytar. In the code I tried both
const keytar = require("keytar");
and
import {keytar} from "keytar".
Both resulted with the following error:
Module parse failed: Unexpected character '�' (1:2) You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See (Source code omitted for this binary file)
I searched for it and got here: but I didn’t understand how/where should I integrate it into quasar.conf.js also I’m not sure which of the 2 suggested solutions is correct. Maybe both of them?
Moreover, originally I develop for the web and only when building, I (also) build in electron mode so do I need to distinguish the “import” to happen only in electron mode? I’m a bit confused.
Any advice will be appreciated, thanks.
In other words, where should I put this code
// vue.config.js
configureWebpack: {
devtool: ‘source-map’,
module: {
rules: [
{ test: /.node$/, loader: ‘node-loader’ }
]
}
}
I made some progress but now encountered a new issue. From the docs I realized that I should put the above code, under build -> extendWebpack.
extendWebpack(cfg) { cfg.module.rules.push({ enforce: "pre", test: /.node$/, loader: "node-loader", //exclude: /(node_modules|quasar)/, }); },
So now it is compiled and the web opens up but empty. Looking in the console I see the following
Cannot open C:\Users\user\Downloads\portable\vscode\Projects\project\node_modules\keytar\build\Release\keytar.node: TypeError: Cannot read property 'dlopen' of undefined at Object.eval (keytar.node?3002:1)
and the line itself is
try {global.process.dlopen(module, "C:\\Users\\user\\Downloads\\portable\\vscode\\Projects\\project\\node_modules\\keytar\\build\\Release\\keytar.node"); } catch(e) {throw new Error('Cannot open ' + "C:\\Users\\user\\Downloads\\portable\\vscode\\Projects\\project\\node_modules\\keytar\\build\\Release\\keytar.node" + ': ' + e);}
The above was taken from Chrome. Firefox says the same but from a different angle:
global.process is undefined
I wonder if it’s because keytar should only be loaded in electron mode?
If so, how can I set it? If not, what am I doing wrong?
Thanks
The error is because global.process is undefined. What is global.process and who owns it?
Is it related to quasar? electron? keytar?
Thanks
I know global.process is coming from node-loader but there’s one thing I still don’t understand:
I’m developing in quasar/vue and when building, I “tell” it to build it in electron mode.
How can I import something only in electron mode?
At the moment I have this, after adding the relevant code to extendWebpack (see previous message in this thread)
import { keytar } from “keytar”;
and it compiles and executed but fails to run because global.process is undefined.
My guess is because I’m not in electron mode, but this is just a guess and I want to test it.
I tried the following but obviously it does not compile
import { Platform } from “quasar”;
if ($q.platform.is.electron) import { keytar } from “keytar”;
‘import’ and ‘export’ may only appear at the top level
Any kind of help will be appreciated.
Thanks
you could maybe replace import with require?
let foobar if ($q.platform.is.electron) { foobar= require('something'); }
Or you can do something dirty in the index.template.html:
conditionaly including the cdn url of keytar in index.template.html
<% if ($q.platform.is.electron) ...
Thank you very much for the tip. I tried to do the following:
try { let keyt = require("keytar"); } catch (e) { this.errorMessage = e; this.showError = true; }
When running as “web”, I got the same error as before with global.process undefined.
When running as “electron”, I got the following:
Error: node-loader: Error: C:\Users\user\AppData\Local\Temp\e6425528-a110-43aa-a474-41723fc46383.tmp.node is not a valid Win32 application. C:\Users\user\AppData\Local\Temp\e6425528-a110-43aa-a474-41723fc46383.tmp.node
Building as electron generates a portable executable that I run. My guess is that when running, it extracts the inside to the temp folder.
Here is someone with the same error:
looks like node/windows/version stuff.
Possible solutions:
Switch node versions. ( Version & 32/64 bit) . Use nvm to do this
Does it run on linux? (ubuntu or something)
@amoss Btw if you have a github repo( or something) with the code. I could see if I can get it running.
I changed node-loader’s index.js code from global.process.dlopen to process.dlopen and it passed this line of code but now fails of something else.
Even though it recognized the function, it fails as if it’s not a function.
I posted a question on node-loader’s github.
Also made a MVP demo but before the latest changes if you want to play with it:
type quasar dev to run it
Any idea how come keyt is recognized correctly with the relevant functions (getPassword in this case) but still will fail when calling that function?
I mean generally, what can cause such error, any tip or hint might help me to go forward.
Thanks
@amoss I don’t get the demo. Keytar is not used anywhere and there’s no electron project in quasar defined.
Each release of keytar includes prebuilt binaries for the versions of Node and Electron that are actively supported by these projects. Please refer to the release documentation for Node and Electron to see what is supported currently.
It will only work in electron and in nodejs.
Well, when running as web, it fails with: Keytar,getPassword is not a function.
When running as electron (executable), it fails with:
Error: node-loader: Error: C:\Users\user\AppData\Local\Temp\07e92995-d4c3-47dc-8b61-d882691c2e86.tmp.node is not a valid Win32 application. C:\Users\user\AppData\Local\Temp\07e92995-d4c3-47dc-8b61-d882691c2e86.tmp.node
I’m kinda lost
The application itself is running ok, only that line specifically fails.
I don’t understand why it fails with “is not a valid Win32 application” - in github, the person told me that I must be doing something wrong and closed the ticket.
I guess I’m the only one ever to try to use Keytar or to add a “remember me” feature to his login window.
If not, I wonder how other people managed to do it.
The demo is from the very first/initial try to use Keytar before I tested deeper, not that it made me any progress
@amoss It will not run on the web ever.
As for the error:
- try linux
- try other 32 bit/64bit nodejs versions. Use nvm
Or give me a real demo where you actually get the error for electron.
There you go:
Thanks a lot!
the " is not a valid Win32 application" error means you have to rebuild the keytar module for your system ( it’s a native module)
I followed this guide:
Solution 2
suc6
Thanks, a newbie question in order for me not to damage anything, in solution 2 point b I saw
from an unprivileged shell
mkdir electron-keytar
cd electron-keytar
npm init
npm install electron
npm install keytar
npm install electron-rebuild --save-dev
.\node_modules.bin\electron-rebuild -w keytar -p -f
Because I already have a project folder with the relevant installations (electron, keytar and electron rebuild), is it enough for me just to run
.\node_modules.bin\electron-rebuild -w keytar -p -f
?
@amoss
Yes that’s enough , if you have the right visual c++ libaries installed ect ( probably not). BUt just try it. You probably get missing visual c++ libs or something.
What I had to do was:
npm install --global --production windows-build-tools
npm install --global node-gyp
( I did not have to do this because I already had python)
setx PYTHON $env:USERPROFILE.windows-build-tools\python27\python.exe
Than it still gave me errors, so I had to manually update the visual c++ 2020 libs or something
if you get this too I let me know…
It looks a bit cumbersome (to my non-professional eyes) to achieve what I need.
I appreciate a lot your effort and wanting to help but I will first finish some unfinished business I have in the queue and then I’ll get to this.
Again, thanks a lot! | https://forum.quasar-framework.org/topic/6416/keytar | CC-MAIN-2021-04 | refinedweb | 1,389 | 57.98 |
Memory Management¶
This page describes how memory management works in Ray and how you can set memory quotas to ensure memory-intensive applications run predictably and reliably.
ObjectID Reference Counting¶
Ray implements distributed reference counting so that any
ObjectID in scope in the cluster is pinned in the object store. This includes local python references, arguments to pending tasks, and IDs serialized inside of other objects.
Frequently Asked Questions (FAQ)¶
My application failed with ObjectStoreFullError. What happened?
Ensure that you’re removing
ObjectID references when they’re no longer needed. See Debugging using ‘ray memory’ for information on how to identify what objects are in scope in your application.
This exception is raised when the object store on a node was full of pinned objects when the application tried to create a new object (either by calling
ray.put() or returning an object from a task). If you’re sure that the configured object store size was large enough for your application to run, ensure that you’re removing
ObjectID references when they’re no longer in use so their objects can be evicted from the object store.
I’m running Ray inside IPython or a Jupyter Notebook and there are ObjectID references causing problems even though I’m not storing them anywhere.
Try Enabling LRU Fallback, which will cause unused objects referenced by IPython to be LRU evicted when the object store is full instead of erroring.
IPython stores the output of every cell in a local Python variable indefinitely. This causes Ray to pin the objects even though your application may not actually be using them.
My application used to run on previous versions of Ray but now I’m getting ObjectStoreFullError.
Either modify your application to remove
ObjectID references when they’re no longer needed or try Enabling LRU Fallback to revert to the old behavior.
In previous versions of Ray, there was no reference counting and instead objects in the object store were LRU evicted once the object store ran out of space. Some applications (e.g., applications that keep references to all objects ever created) may have worked with LRU eviction but do not with reference counting.
Debugging using ‘ray memory’¶
The
ray memory command can be used to help track down what
ObjectID references are in scope and may be causing an
ObjectStoreFullError.
Running
ray memory from the command line while a Ray application is running will give you a dump of all of the
ObjectID references that are currently held by the driver, actors, and tasks in the cluster.
----------------------------------------------------------------------------------------------------- Object ID Reference Type Object Size Reference Creation Site ===================================================================================================== ; worker pid=18301 45b95b1c8bd3a9c4ffffffff010000c801000000 LOCAL_REFERENCE ? (deserialize task arg) __main__..f ; driver pid=18281 f66d17bae2b0e765ffffffff010000c801000000 LOCAL_REFERENCE ? (task call) test.py:<module>:12 45b95b1c8bd3a9c4ffffffff010000c801000000 USED_BY_PENDING_TASK ? (task call) test.py:<module>:10 ef0a6c221819881cffffffff010000c801000000 LOCAL_REFERENCE ? (task call) test.py:<module>:11 ffffffffffffffffffffffff0100008801000000 LOCAL_REFERENCE 77 (put object) test.py:<module>:9 -----------------------------------------------------------------------------------------------------
Each entry in this output corresponds to an
ObjectID that’s currently pinning an object in the object store along with where the reference is (in the driver, in a worker, etc.), what type of reference it is (see below for details on the types of references), the size of the object in bytes, and where in the application the reference was created.
There are five types of references that can keep an object pinned:
1. Local ObjectID references
@ray.remote def f(arg): return arg a = ray.put(None) b = f.remote(None)
In this example, we create references to two objects: one that is
ray.put() in the object store and another that’s the return value from
f.remote().
----------------------------------------------------------------------------------------------------- Object ID Reference Type Object Size Reference Creation Site ===================================================================================================== ; driver pid=18867 ffffffffffffffffffffffff0100008801000000 LOCAL_REFERENCE 77 (put object) ../test.py:<module>:9 45b95b1c8bd3a9c4ffffffff010000c801000000 LOCAL_REFERENCE ? (task call) ../test.py:<module>:10 -----------------------------------------------------------------------------------------------------
In the output from
ray memory, we can see that each of these is marked as a
LOCAL_REFERENCE in the driver process, but the annotation in the “Reference Creation Site” indicates that the first was created as a “put object” and the second from a “task call.”
2. Objects pinned in memory
import numpy as np a = ray.put(np.zeros(1)) b = ray.get(a) del a
In this example, we create a
numpy array and then store it in the object store. Then, we fetch the same numpy array from the object store and delete its
ObjectID. In this case, the object is still pinned in the object store because the deserialized copy (stored in
b) points directly to the memory in the object store.
----------------------------------------------------------------------------------------------------- Object ID Reference Type Object Size Reference Creation Site ===================================================================================================== ; driver pid=25090 ffffffffffffffffffffffff0100008801000000 PINNED_IN_MEMORY 229 test.py:<module>:7 -----------------------------------------------------------------------------------------------------
The output from
ray memory displays this as the object being
PINNED_IN_MEMORY. If we
del b, the reference can be freed.
3. Pending task references
@ray.remote def f(arg): while True: pass a = ray.put(None) b = f.remote(a)
In this example, we first create an object via
ray.put() and then submit a task that depends on the object.
----------------------------------------------------------------------------------------------------- Object ID Reference Type Object Size Reference Creation Site ===================================================================================================== ; worker pid=18971 ffffffffffffffffffffffff0100008801000000 PINNED_IN_MEMORY 77 (deserialize task arg) __main__..f ; driver pid=18958 -----------------------------------------------------------------------------------------------------
While the task is running, we see that
ray memory shows both a
LOCAL_REFERENCE and a
USED_BY_PENDING_TASK reference for the object in the driver process. The worker process also holds a reference to the object because it is
PINNED_IN_MEMORY, because the Python
arg is directly referencing the memory in the plasma, so it can’t be evicted.
4. Serialized ObjectID references
@ray.remote def f(arg): while True: pass a = ray.put(None) b = f.remote([a])
In this example, we again create an object via
ray.put(), but then pass it to a task wrapped in another object (in this case, a list).
----------------------------------------------------------------------------------------------------- Object ID Reference Type Object Size Reference Creation Site ===================================================================================================== ; worker pid=19002 ffffffffffffffffffffffff0100008801000000 LOCAL_REFERENCE 77 (deserialize task arg) __main__..f ; driver pid=18989 -----------------------------------------------------------------------------------------------------
Now, both the driver and the worker process running the task hold a
LOCAL_REFERENCE to the object in addition to it being
USED_BY_PENDING_TASK on the driver. If this was an actor task, the actor could even hold a
LOCAL_REFERENCE after the task completes by storing the
ObjectID in a member variable.
5. Captured ObjectID references
a = ray.put(None) b = ray.put([a]) del a
In this example, we first create an object via
ray.put(), then capture its
ObjectID inside of another
ray.put() object, and delete the first
ObjectID. In this case, both objects are still pinned.
----------------------------------------------------------------------------------------------------- Object ID Reference Type Object Size Reference Creation Site ===================================================================================================== ; driver pid=19047 ffffffffffffffffffffffff0100008802000000 LOCAL_REFERENCE 1551 (put object) ../test.py:<module>:10 ffffffffffffffffffffffff0100008801000000 CAPTURED_IN_OBJECT 77 (put object) ../test.py:<module>:9 -----------------------------------------------------------------------------------------------------
In the output of
ray memory, we see that the second object displays as a normal
LOCAL_REFERENCE, but the first object is listed as
CAPTURED_IN_OBJECT.
Enabling LRU Fallback¶
By default, Ray will raise an exception if the object store is full of pinned objects when an application tries to create a new object. However, in some cases applications might keep references to objects much longer than they actually use them, so simply LRU evicting objects from the object store when it’s full can prevent the application from failing.
Please note that relying on this is not recommended - instead, if possible you should try to remove references as they’re no longer needed in your application to free space in the object store.
To enable LRU eviction when the object store is full, initialize ray with the
lru_evict option set:
ray.init(lru_evict=True)
ray start --lru-evict
Memory Quotas¶
You can set memory quotas to ensure your application runs predictably on any Ray cluster configuration. If you’re not sure, you can start with a conservative default configuration like the following and see if any limits are hit.
For Ray initialization on a single node, consider setting the following fields:
ray.init( memory=2000 * 1024 * 1024, object_store_memory=200 * 1024 * 1024, driver_object_store_memory=100 * 1024 * 1024)
For Ray usage on a cluster, consider setting the following fields on both the command line and in your Python script:
Tip
200 * 1024 * 1024 bytes is 200 MiB. Use double parentheses to evaluate math in Bash:
$((200 * 1024 * 1024)).
# On the head node ray start --head --redis-port=6379 \ --object-store-memory=$((200 * 1024 * 1024)) \ --memory=$((200 * 1024 * 1024)) \ --num-cpus=1 # On the worker node ray start --object-store-memory=$((200 * 1024 * 1024)) \ --memory=$((200 * 1024 * 1024)) \ --num-cpus=1 \ --address=$RAY_HEAD_ADDRESS:6379
# In your Python script connecting to Ray: ray.init( address="auto", # or "<hostname>:<port>" if not using the default port driver_object_store_memory=100 * 1024 * 1024 )
For any custom remote method or actor, you can set requirements as follows:
@ray.remote( memory=2000 * 1024 * 1024, )
Concept Overview¶
There are several ways that Ray applications use memory:
- Ray system memory: this is memory used internally by Ray
Redis: memory used for storing task lineage and object metadata. When Redis becomes full, lineage will start to be be LRU evicted, which makes the corresponding objects ineligible for reconstruction on failure.
Raylet: memory used by the C++ raylet process running on each node. This cannot be controlled, but is usually quite small.
- Application memory: this is memory used by your application
Worker heap: memory used by your application (e.g., in Python code or TensorFlow), best measured as the resident set size (RSS) of your application minus its shared memory usage (SHR) in commands such as
top. The reason you need to subtract SHR is that object store shared memory is reported by the OS as shared with each worker. Not subtracting SHR will result in double counting memory usage.
Object store memory: memory used when your application creates objects in the objects store via
ray.putand when returning values from remote functions. Objects are LRU evicted when the store is full, prioritizing objects that are no longer in scope on the driver or any worker. There is an object store server running on each node.
Object store shared memory: memory used when your application reads objects via
ray.get. Note that if an object is already present on the node, this does not cause additional allocations. This allows large objects to be efficiently shared among many actors and tasks.
By default, Ray will cap the memory used by Redis at
min(30% of node memory, 10GiB), and object store at
min(10% of node memory, 20GiB), leaving half of the remaining memory on the node available for use by worker heap. You can also manually configure this by setting
redis_max_memory=<bytes> and
object_store_memory=<bytes> on Ray init.
It is important to note that these default Redis and object store limits do not address the following issues:
Actor or task heap usage exceeding the remaining available memory on a node.
Heavy use of the object store by certain actors or tasks causing objects required by other tasks to be prematurely evicted.
To avoid these potential sources of instability, you can set memory quotas to reserve memory for individual actors and tasks.
Heap memory quota¶
When Ray starts, it queries the available memory on a node / container not reserved for Redis and the object store or being used by other applications. This is considered “available memory” that actors and tasks can request memory out of. You can also set
memory=<bytes> on Ray init to tell Ray explicitly how much memory is available.
Important
Setting available memory for the node does NOT impose any limits on memory usage unless you specify memory resource requirements in decorators. By default, tasks and actors request no memory (and hence have no limit).
To tell the Ray scheduler a task or actor requires a certain amount of available memory to run, set the
memory argument. The Ray scheduler will then reserve the specified amount of available memory during scheduling, similar to how it handles CPU and GPU resources:
# reserve 500MiB of available memory to place this task @ray.remote(memory=500 * 1024 * 1024) def some_function(x): pass # reserve 2.5GiB of available memory to place this actor @ray.remote(memory=2500 * 1024 * 1024) class SomeActor(object): def __init__(self, a, b): pass
In the above example, the memory quota is specified statically by the decorator, but you can also set them dynamically at runtime using
.options() as follows:
# override the memory quota to 100MiB when submitting the task some_function.options(memory=100 * 1024 * 1024).remote(x=1) # override the memory quota to 1GiB when creating the actor SomeActor.options(memory=1000 * 1024 * 1024).remote(a=1, b=2)
Enforcement: If an actor exceeds its memory quota, calls to it will throw
RayOutOfMemoryError and it may be killed. Memory quota is currently enforced on a best-effort basis for actors only (but quota is taken into account during scheduling in all cases).
Object store memory quota¶
Use
@ray.remote(object_store_memory=<bytes>) to cap the amount of memory an actor can use for
ray.put and method call returns. This gives the actor its own LRU queue within the object store of the given size, both protecting its objects from eviction by other actors and preventing it from using more than the specified quota. This quota protects objects from unfair eviction when certain actors are producing objects at a much higher rate than others.
Ray takes this resource into account during scheduling, with the caveat that a node will always reserve ~30% of its object store for global shared use.
For the driver, you can set its object store memory quota with
driver_object_store_memory. Setting object store quota is not supported for tasks.
Questions or Issues?¶
If you have a question or issue that wasn’t covered by this page, please get in touch via on of the following channels:
ray-dev@googlegroups.com: For discussions about development or any general questions and feedback.
StackOverflow: For questions about how to use Ray.
GitHub Issues: For bug reports and feature requests. | https://docs.ray.io/en/latest/memory-management.html | CC-MAIN-2020-29 | refinedweb | 2,326 | 54.93 |
CI Best Practices Guide – SAPUI5/SAP Fiori on SAP Cloud Platform
CI Best Practices Guide – SAPUI5/SAP Fiori on SAP Cloud Platform
Part 4.5 – Implementing the CI pipeline to build an SAPUI5/SAP Fiori application on SAP Cloud Platform.
This document is part of the guide Continuous Integration (CI) Best Practices with SAP. To ensure that all the examples work properly, make sure that you have followed the setup instructions for all components listed in the prerequisites box.
1. Introduction
A ready-to-use Jenkins 2 pipeline for SAPUI5 and SAP Fiori development is now available with Project “Piper”. It offers a fast adoption approach as an alternative to what is described here.
There is a lot of infrastructure available to support single developers who are creating and maintaining SAPUI5 or Fiori projects. SAP Web IDE provides a rich tool set that supports single developers or small teams; for example, wizards that generate a skeleton, and the metadata files that are required for new projects. For larger teams, however, there is an urgent need for an automated CI process based on a central build that includes automated testing and code quality checks.
This chapter’s scenario describes a CI process for development of SAP Fiori or SAPUI5 applications running on SAP Cloud Platform. The developer’s workspace is SAP Web IDE on SAP Cloud Platform (using SAP Web IDE personal edition as local development environment is also possible) from where deployments into a developer’s account for instant testing are possible. When a developer finishes implementing a feature, he or she creates a Git commit in the SAP Web IDE workspace and pushes it into a centralized corporate Git repository, which is connected to SAP Web IDE through the SAP Cloud Connector. The example uses Gerrit for the central Git repository, but other solutions are possible. We make use of the Gerrit review features by implementing a voter build and a review process on the pushed commit before it is merged into the
master branch of the central Git repository. The voter build usually executes static code checks and unit tests. Depending on the available resources (for example, an additional account on SAP Cloud Platform, or, if an additional account is not available, the application to be tested can be deployed with a generated, unique name), runtime tests are also possible.
Immediately after the commit has been merged to the
master branch, the CI build starts on the new snapshot. The build executes static code analysis, unit tests and automatic runtime tests. For execution of the latter we use in our example a dedicated SAP Cloud Platform account. Application files are minified (white spaces and comments are removed), and a preload file is created. When the deployed application is accessed from SAP Cloud Platform via a browser, the preload file is requested first by default since it contains the content of all the application’s JavaScript files. This reduces the number of round trips between the browser and the back end, significantly increasing the performance of the application loading process. Finally, the SAP Fiori/SAPUI5 application files are packaged into an MTA (multi-target application) archive. This is the package format that can be deployed automatically by the SAP Cloud Platform console client from a CI server where the build runs.
Multi-Target Applications
SAP Cloud Platform Console Client
After it has been successfully built and tested, the MTA artifact is archived for further processing. Acceptance tests are performed on a dedicated test system to which a stable version of the MTA file has been deployed. The deployment of the MTA version, which was successfully created during a CI build, can be triggered either manually by a responsible person (like a quality manager) or automatically via a defined schedule (for example, once a day in the morning). Testers can then execute manual acceptance tests.
After successful testing, it is the decision of the delivery manager to release the tested version to the productive system and to store the archive as release version to an artifact repository (the example uses Nexus).
Figure 1: Process for SAP Fiori/SAPUI5 development.
The landscape setup for this process is described in Landscape Configuration. The pipeline implementation by means of Jenkins jobs places real code into the skeleton described in Sample Pipeline Configuration.
Figure 2: Landscape for SAP Fiori/SAPUI5 development.
2. Prerequisites
Fiori/SAP UI5 applications are deployed in different stages to SAP Cloud Platform. To accommodate these stages on the runtime requires four different SAP Cloud Platform accounts for the following purposes:
One development account for automatic deployment and the tests performed during the voter build. The voter builds must be executed sequentially; otherwise, the deployment of a new application version could be started before the running test of the currently deployed version has been finished. If more parallelization is needed, additional accounts could be used in a round-robin access strategy.
One CI account for automatic deployment and tests during the CI build.
One test account for manually triggered deployment of the application to be tested manually for acceptance tests.
One productive account. The release and deployment of the application is triggered manually.
In addition, the following CI infrastructure components are needed. Their setup is described in the CI/CD Landscape - Component Setup pages accessible from the Navigator page.
A corporate Git/Gerrit instance connected with SAP Web IDE (or the personal edition) via the SAP Cloud Connector.
A Jenkins instance (eventually with separate slaves).
A Nexus instance.
3. Creating Sources for a New Project
The standard procedure for creating a new SAPUI5 or Fiori project is to use the wizard in SAP Web IDE, which lets you choose from available templates and create a SAP Fiori/SAPUI5 skeleton in your workspace. The example is a master-detail application using an external sample OData service.
You can either use SAP Web IDE on SAP Cloud Platform, or SAP Web IDE personal edition, which offers the same features but runs on your local machine.
SAP Web IDE
SAP Web IDE Personal Edition
Procedure
In Gerrit, create a project with a
masterbranch as described in Generic Project. The example uses
Fiori_Northwindas the project name.
Before creating the application in SAP Web IDE, define an OData destination has in HANA Platform such that your application can consume it. The example uses the publicly available Northwind sample OData service:
Creating a Northwind Destination
In the SAP Cloud Platform cockpit, select Connectivity > Destinations > New Destination. Enter the following:
Add the following Additional Properties:
Open and log in to the appropriate IDE:
Opening SAP Web IDE
SAP Web IDE Personal Edition
In the IDE, select Tools > Preferences > Git settings. Enter your Git user name and email address, and save your settings.
Select the
Workspacefolder, then select New > Project from Template. Follow the instructions in the wizard to create the example master-detail application:
Select SAP Fiori Master-Detail Application as template. Press Next.
Enter
Fiori_Northwindas Project Name. Press Next.
Enter
Service URLas the data source, choose the
Northwind OData serviceand enter
/V2/Northwind/Northwind.svcas the URL path. Press Test to verify that the service metadata can be loaded, then press Next.
To keep it simple, the example uses only some of the fields in the template customization. Enter the following:
Leave all other fields empty. Press Finish. The project is created and you see a folder structure like below.
Select the new project and select Git > Initialize Local Repository.
Select the new project again and select Git > Set Remote. Enter the following data:
Name:
origin
URL:
<The HTTPS-based URL of your Gerrit project>
Select Add configuration for Gerrit.
Press OK.
In the right sidebar, open the Git pane. Scroll down, mark Amend Changes and press Commit. This injects a change ID into the initial commit, which is required to be able push to Gerrit for review.
In Git pane again, mark Stage All and enter a commit description. Press Commit.
Select Pull to merge the version graphs of the local Git repository in SAP Web IDE and the remote repository. Check the Git history pane to make sure it looks as expected. to be merged into the
masterbranch.
4. Installing and Configuring Node.js on the Jenkins Slave Machine
For processing the Fiori project’s sources on the build node, Grunt as a task processor is used. Grunt requires Node.js and the included package manager npm. The example uses the Jenkins Node.js plugin to make node and npm available inside a job execution.
Procedure
Log in as user
jenkinsto the Jenkins slave machine and install Node.js (version 6 or later) to a path of your choice. It must be writeable for user
jenkins.
Node.js Home Page
Node.js Downloads
You can install the
tar.gzpackage on Linux in any directory. We recommend that you define and use a common installation directory on all your Jenkins slave machines.
To enable the
gruntcommand in a shell script,
grunt-climust be installed globally. As user
jenkins, open a shell on the Jenkins slave machine, temporarily place the
bindirectory of your Node.js installation into the
PATHvariable, and execute the following command in a shell:
npm install -g grunt-cli
Open the Jenkins front end, and go to Manage Jenkins > Manage Plugins > Available. Select Node.js Plugin and choose Download now and install after restart. Although the primary feature offered by the Node.js plugin (using JavaScript directly in job implementations) is not used in our example, it does handle multiple Node.js versions in parallel, allowing you to choose the appropriate one at the job level.
The latest version of the Node.js plugin does not correctly provide the node path to the jobs, due to a known issue in the Environment Injector plugin. You may want to install an older version (for example, 0.2.1) instead:
Environment inject plugin suppress variables contributed by extension points.
6. Creating the Grunt Build File
The Grunt build of the Fiori/SAP UI5 application is controlled by a
Gruntfile.js file. This file controls the task flow for processing sources and uses Grunt plugins, which are expected to be present during the Grunt run. The Grunt file in the example contains tasks for static code analysis, minification of the JavaScript files, and creation of a
Component-preload.js file. The latter plays the role of the
Component.js file, but contains the content of all JavaScript files on which it depends. When a browser loads a Fiori application, it first looks for this preload file, to reduce the number of round trips that would be required to load all the JavaScript files individually. (The files are loaded individually if
Component-preload.js is not available.)
Before the Grunt build, the npm package manager installs the plugins needed for Grunt. A file named
package.json declares the dependencies to required plugins. In the Jenkins build job definitions that are described in subsequent chapters, npm is not called explicitly; the MTA archive builder encapsulates all npm actions.
This scenario has been tested with the Grunt plugin versions that are described in the code listing for
package.json in the appendix.
Procedure
Open your project in SAP Web IDE.
Select your project folder, choose New > File and enter
package.jsonas the name.
Copy the content of
package.jsonfrom the appendix and paste it into the new file.
Adapt the
package.jsonfile to your context by entering the following values:
Package name - must be identical to the namespace you used to create the application in SAP Web IDE. It the example:
com.mycompany.northwind..
7. Creating the MTA Descriptor
The packaging of the module to a deployable archive is done with help of the MTA archive builder. It calls custom builders for the contained modules, in our case the Grunt build for the SAP Fiori module.
Procedure
Open your project in SAP Web IDE.
Select your project folder, choose New > File and enter
mta.yamlas the name.
Copy the content of
mta.yamlfrom the appendix and paste it into the new file.
Adapt the
mta.yamlfile to your context by entering the following values:
ID of your MTA. In our scenario, in which the MTA is a wrapper for the Fiori application, it makes sense to use the same name as for the Fiori application itself
Version of your application, which is needed in two places: at the MTA level and at the module level. The module version number is used when the Fiori application is deployed to SAP Cloud Platform and exposed there. The version number on SAP Cloud Platform must be unique. We append a time stamp, which is evaluated during build time, to the version number. It is always good practice, to record a reference from the build artifact in SAP Cloud Platform to the time it was built.
The name of the module. This is exposed during deployment as name of the application on SAP Cloud Platform.
MTA builds produce additional temporary files and folders inside the project directory. To avoid accidentally placing these temporary files under Git control when working on a local PC, add the following entries into the
.gitignorefile:
.mta *.mtar dist node_modules
In the Git pane, stage the new file, enter a commit description, and select Commit and Push.
8. Creating a Jenkins CI Build Job
In the example, the job for the CI build is created_Fiori_Northwind_master_build. Select Freestyle Project and press OK.
Select This build is parametrized and add the following string parameters:
In the job configuration, enter the following values:
Select Additional Behaviours > Add > Check out to a sub-directory and enter
src.
Continue with the rest of the configuration:
In the Build section, select Add build step > Execute shell. In the Command field, enter the following code:
# install the MTA archive builder mkdir -p ${WORKSPACE}/tmp/mta cd ${WORKSPACE}/tmp/mta wget --output-document=mta.jar '<URL from where to download the MTA archive builder>' # ${WORKSPACE}/tmp/neo-java-web-sdk/tools/neo.sh deploy-mta --user ${CI_DEPLOY_USER} --host ${DEPLOY_HOST} --source ${mtaName}.mtar --account ${CI_DEPLOY_ACCOUNT} --password ${CI_DEPLOY_PASSWORD} --synchronous
The MTA archive builder and the SAP Cloud Platform console client are downloaded and installed using their URLs. To configure npm to connect to the SAP npm registry to download SAP scoped npm packages, we created a local
.npmrcfile. Although using the Jenkins Config File Provider plugin would fit very well here, doing so is currently not possible, due to the aforementioned issue with the Environment Injector plugin. The application name is taken from the
mta.yamlfile, and the timestamp placeholder is replaced by the current time to reflect the build time in the application meta data. The MTA build starts and produces an
mtarfile, which is deployed to SAP Cloud Platform.
In the Post-build Actions section, select Add post-build action > Archive the artifacts and enter
src/*.mtar, src/mta.yamlas Files to archive.
Select Add post-build action > Build other projects (manual step) and enter
CI_Fiori_Northwind_master_testDeployas the projects to build. Ignore the warning that the job entered does not yet exist. We will create it in the next procedure.
Select Add Parameters > Current build Parameters.
Save the configuration.
Select Jenkins > Manage Jenkins > Configure System and scroll down to the Global Passwords Section.
Add the following names and store their values:
CI_DEPLOY_USER,
CI_DEPLOY_PASSWORD,
TEST_DEPLOY_USER,
TEST_DEPLOY_PASSWORD,
PROD_DEPLOY_USER,
PROD_DEPLOY_PASSWORDaccording to the credentials of the deploy users in the CI, TEST and PROD accounts.
Trigger the build manually. Monitor the build and the deployment of the application into your SAP Cloud Platform account.
9. Creating a Jenkins Job for Deployment to the Test Account
The next job of the CI build jobs is triggered manually: the quality manager or test coordinator must provide a test account with one candidate that has successfully passed the CI build job. The account is used by manual testers for acceptance tests.
From a technical point of view, this job deploys the
mtar file that was archived in the CI build job to the
TEST account on SAP Cloud Platform.
Procedure
Open Jenkins and select New Item > Freestyle Job. Enter
CI_Fiori_Northwind_master_testDeploy.
Select This build is parametrized, enter the following string parameters and leave their values empty:
For the other configuration options, enter the following:
In the Build section, select Add build step > Copy artifacts from another project and enter the following:
This step restores the artifact that was created in the build job into the workspace directory of this job.
Select Add build step > Execute shell and enter the following script implementation:
# cd ${WORKSPACE}/src ${WORKSPACE}/tmp/neo-java-web-sdk/tools/neo.sh deploy-mta --user ${TEST_DEPLOY_USER} --host ${DEPLOY_HOST} --source *.mtar --account ${TEST_DEPLOY_ACCOUNT} --password ${TEST_DEPLOY_PASSWORD} --synchronous
In the Post-build Actions section, select Add post-build action > Build other projects (manual step) and enter
CI_Fiori_Northwind_master_releasein Downstream Project Names. You can safely ignore the warning that the job entered does not yet exist, as we will be creating it in the next procedure.
Select Add Parameters > Current build Parameters.
Save.
10. Creating a Jenkins Release Job
The last job in the pipeline implements the release of a version that has successfully passed the acceptance test. Technically, two things happen: the artifact is uploaded to Nexus into a release repository, and it is deployed to the production account.
The example uses a copy of the test deploy job, adapting the target that points to the productive account, and adding a step for the upload to Nexus.
Procedure
Open Jenkins, select New Item and enter
CI_Fiori_Northwind_master_releaseas the Item name. Select Copy existing item and enter
CI_Fiori_Northwind_master_testDeployas the template to be copied.
In the Build section, enter the following code into the Command field:
# install neo command line client mkdir -p ${WORKSPACE}/tmp/neo-java-web-sdk cd ${WORKSPACE}/tmp/neo-java-web-sdk wget '' unzip -o neo-java-web-sdk-1.127.11.zip rm neo-java-web-sdk-1.127.11.zip # upload to Nexus cd ${WORKSPACE}/src awk -F: '\ BEGIN { print "<project xsi:schemaLocation=\"\">" print "<modelVersion>4.0.0</modelVersion>"} $1 ~ /^version/ { gsub(/\s/,"", $2) gsub(/\"/,"", $2) printf "<version>%s</version>\n", $2} $1 ~ /^ID/ { gsub(/\s/,"", $2) gsub(/\"/,"", ) gsub(/\"/,"", $2) print $2 }' mta.yaml` mvn deploy:deploy-file -Durl=${NEXUS_REPOSITORY} \ -Dfile=${mtaName}.mtar -DrepositoryId=nexusCIProcess -Dpackaging=mtar -DpomFile=pom.xml # deploy to SAP Cloud Platform ${WORKSPACE}/tmp/neo-java-web-sdk/tools/neo.sh deploy-mta --user ${PROD_DEPLOY_USER} --host ${DEPLOY_HOST} --source *.mtar --account ${PROD_DEPLOY_ACCOUNT} --password ${PROD_DEPLOY_PASSWORD} --synchronous
Remove any post-build action.
Save.
11. Adding a Pipeline View
Once you create the CI Jenkins jobs, add a convenient overview of the pipeline in Jenkins.
Procedure
Open Jenkins and click the view tab with the + sign.
Enter
Fiori_Northwind_pipelineand select
Build Pipeline View.
CI_Fiori_Northwind_master_buildfor Select Initial Job and specify the No of Displayed Builds, for example,
5.
Press OK.
12. Creating a Jenkins Voter Build Job
The voter build job is executed immediately after you push a commit to Gerrit for review.
Procedure
Open Jenkins and select New Item. Enter
VO_Fiori_Northwind_master_buildas the Item name, select Copy existing item, and enter
CI_Fiori_Northwind.
Select Trigger on > Add > Patch set Created.
In the Gerrit Project configuration, enter
Fiori_Northwindas pattern and
masteras branch.
In the Build section, enter the following code into the Command field:
# install MTA build tool mkdir -p ${WORKSPACE}/tmp/mta cd ${WORKSPACE}/tmp/mta wget --output-document=mta.jar '<URL from where to download the MTA archive builder>' #
Remove any post-build action.
Save.
You may want to test the voter and the CI build jobs: apply a local change on your project, create a Git commit and push it to Gerrit. The voter build is triggered immediately. If your change does not contain any build errors, verify and submit it in Gerrit. After two minutes, the CI build starts running.
13. Roundtrip Through the Process
All Jenkins jobs are now ready to do a full roundtrip through the CI process, including a voter build that is done before a commit reaches the master branch, through the CI process.
In SAP Web IDE, enter the
Fiori_Northwindproject.. The build starts within two minutes of submitting the change.
When the CI build job has finished successfully, start the test deploy job. Verify in the log that the
mtarfile has been deployed to the
TESTaccount.
Assume that the manual tester has finished the testing efforts. Release the
mtarfile and deploy it to the
PRODaccount. has been correctly uploaded. here contains the minimum amount of code necessary to make the process skeleton run. You might want to add more data according to your requirements.
You will need to enter some items manually, such as the name and the version of your package.
Npm documentation for
package.json
{ "name": "<name of the package>", "version": "<version of the package>", "description": "<description of the package>", "private": true, "devDependencies": { "grunt": "1.0.1", "@sap/grunt-sapui5-bestpractice-build": "^1.3.17" } }
Gruntfile.js
The Grunt build file uses the
@sap/grunt-sapui5-bestpractice-build npm module, which is available on the SAP npm registry
npm.sap.com. It contains tasks to cover the following:
Static code analysis
Minifying the JavaScript files
Creating a preload file that contains the contents of all JavaScript files for reducing the number of HTTP round trips
module.exports = function (grunt) { 'use strict'; grunt.loadNpmTasks('@sap/grunt-sapui5-bestpractice-build'); grunt.registerTask('default', [ 'lint', 'clean', 'build' ]); };
Since
@sap/grunt-sapui5-bestpractice-build already initializes the Grunt configuration, additionally required custom task configurations must be added in the code above; the usage of
grunt.initConfig does not work here.
mta.yaml
The
mta.yaml file describes the metadata of a multi-target application (MTA) and its contained modules with their dependencies. In the example, which has only one Fiori/SAP UI5 module, the MTA serves as deployment vehicle for the application to SAP Cloud Platform.
Multi-Target Applications: Reference of Supported Deployment Options
_schema-version: "2.0.0" ID: "<Id of your MTA>" version: <version number of your application> parameters: hcp-deployer-version: "1.0.0" modules: - name: "<Name of your Fiori application>" type: html5 path: . parameters: version: <version number of your application>-${timestamp} build-parameters: builder: grunt build-result: dist
The
build-result parameter informs the MTA archive builder about the location of the files that should be zipped into the archive.
The content of this document is for guidance purposes only. No warranty or guarantees are provided.
Next Steps
Updated 12/19/2017
Contributors Provide Feedback | https://www.sap.com/japan/developer/tutorials/ci-best-practices-fiori-sapcp.html | CC-MAIN-2018-13 | refinedweb | 3,735 | 55.95 |
Mapping SQL Server Query Results to a DataGridView in .NET
By: Artemakis Artemiou | Updated: 2018-12-27 | Comments | Related: More > Application Development
Problem
In previous tips that I have written about SQL Server and .NET, we've learned how to get started with .NET and SQL Server data access. To this end, we've learned how to connect to SQL Server from a C# program and run a simple query, as well as how to query SQL Server tables from .NET and process the results. Moreover, we've learned how to work with SQL Server stored procedures and functions from within .NET applications. In this tip, we are going to see something different. We are going to create a .NET Windows Forms Application, retrieve query results from a SQL Server database, and finally display these results in a DataGridView control.
Solution
Raw data is just data. In order to transform data to knowledge, you need to properly process them and present them. One way towards this, is to use Graphical User Interface controls and data visualization. For example, in .NET, one such control is DataGridView.
The DataGridView control in .NET, displays data in a customizable grid. For example, you can retrieve data from SQL Server, i.e. via a query, and display the data on the DataGridView control in a Windows Forms .NET application. Then, using the DataGridView control's features and properties, you can further process the data and customize their display (i.e. sort them, etc.).
In this tip, we are going to perform the below:
- Create a simple Windows Forms .NET C# application along with including a DataGridView control
- Run a query against a sample database and retrieve the results
- Display the results on a DataGridView control
- Customize the DataGridView control
Sample Database and Data
In this tip's examples, I will be using the database "SampleDB", which is on a test SQL Server 2017 named instance on my local machine, called "SQL2K17". This database, is the same I used in my previous .NET Application Development tips.
Here's a screenshot of the SQL Server instance, as it can be seen in SSMS:
The sample database has two tables named "employees" and "location" and you can see their content on the above screenshot.
Query for Retrieving Sample Data
The query that will be used for retrieving the sample data is:
SELECT e.id, e.code, e.firstName, e.lastName, l.code AS locationCode, l.descr AS locationDescr FROM dbo.employees e INNER JOIN dbo.location l ON l.id = e.locationID; GO
If we execute the above SQL query in SSMS against our sample database and tables, this is what we get:
As you can see, the SQL query returns 4 rows with the data presented in the above screenshot.
Create the Windows Forms .NET C# application and add a DataGridView control
Let's start a new "Windows Forms App" project in Visual Studio 2017, name it "TestApp5", and save it in the folder "c:\temp\demos":
Right after the above action, out Windows Forms project opens and the workspace is ready for us to add some GUI controls and write some code!
So, after I increase the size of my main form a bit, from the toolbox on the left I drag a drop a DataGridView control, and a button. Next, I increase the size of the DataGridView control and set the "Text" property for the button (see the "Properties" dialog on the right after you select the button control) to "Refresh". Also, to make code look prettier and more manageable, I change the button's "Name" property to "btnRefresh" and the DataGridView's name to "grdData".
After the above, this how my Windows form looks:
The DataGridView Control
There two ways to set the columns of a DataGridView control in .NET. These are:
- Set static columns by populating the "Columns" collection in the control's properties.
- Write code that dynamically creates the columns.
Since I find the second option, that is dynamically creating the columns, more robust, I will let my code create them.
Run Query against the Sample Database, Retrieve the Results and Populate the DataGridView Control
Now let's write the query and relevant C# code that retrieves the data from the "SampleDB" database. The code will be executed when the "Refresh" button is clicked. So, on our workspace, we double-click on the "Refresh" button in order to navigate to the code editor, and more explicitly, in the "Refresh" button's click event code handling block:
The code to be added here, is similar to the code I wrote in my previous .NET development tips. To this end, on the very top of my code, I add the library System.Data.SqlClient:
using System.Data.SqlClient;
Then, in the "btnRefresh_Click" method I add the below code:
private void btnRefresh_Click(object sender, EventArgs e) { //set the connection string string connString = @"Server =.\SQL2K17; Database = SampleDB; Trusted_Connection = True;"; try { //sql connection object using (SqlConnection conn = new SqlConnection(connString)) { //retrieve the SQL Server instance version string query = @"SELECT e.id, e.code, e.firstName, e.lastName, l.code AS locationCode, l.descr AS locationDescr FROM dbo.employees e INNER JOIN dbo.location l ON l.id = e.locationID;"; //define the SqlCommand object SqlCommand cmd = new SqlCommand(query, conn); //Set the SqlDataAdapter object SqlDataAdapter dAdapter = new SqlDataAdapter(cmd); //define dataset DataSet ds = new DataSet(); //fill dataset with query results dAdapter.Fill(ds); //set DataGridView control to read-only grdData.ReadOnly = true; //set the DataGridView control's data source/data table grdData.DataSource = ds.Tables[0]; //close connection conn.Close(); } } catch (Exception ex) { //display error message MessageBox.Show("Exception: " + ex.Message); } }
Let's discuss the above code in order to better understand it.
As you can see, just like the rest of my .NET tips, everything takes place inside the "using (SqlConnection conn = new SqlConnection(connString))" code block.
So, within that code block, I define the query to be executed against my SQL Server connection and again, I'm using an SqlCommand object. However, in this tip, I introduce the use of some "new" data access classes. To this end, I'm using the "SqlDataAdapter" and "DataSet" classes. So, I basically run the query using my SqlDataAdapter object and fill the DataSet object with the retrieved data which is nothing else than a DataTable. The last step included in the above code, was to set the DataSet's table as the DataGridView control's data source. This last step, automatically creates the DataGridView control's columns and populates the control with the data retrieved by the query.
Now we are ready to compile and run our application. To do this, within our project in Visual Studio, by pressing F6 or by clicking on the "Build" menu and then click on "Build Solution"", our program will be compiled and if everything is OK, that is if we get no errors and see the "Build succeeded" notification on the bottom left corner of the window, it means that the program is now ready for execution. We press F5 (or under the "Debug" menu, we select "Start Debugging") and our program starts:
Now, let's click on the "Refresh" button:
As you can see, it worked just fine! My application connected to the database, executed the query, retrieved the results and displayed it on the DataGridView control. Now, whenever we click on the "Refresh" button, our application will execute the query and update the DataGridView control with the most recent data.
Customizing the DataGridView Control
While our project runs, by clicking on any column header on the DataGridView control, the data will be sorted based on that column. For example, let's sort the data by location code (ascending order):
For customizing the DataGridView control, you first need to stop the execution of the program (Shift + F5), and then back in the workspace, after selecting the DataGridView control, you can play with the available properties of the control. For example, you can change the BackgroundColor and GridColor properties, etc.:
Conclusion
In this tip, we discussed how you can create a basic Windows Forms .NET application, add a DataGridView control and a button, and add event handling code to the button so that every time it is clicked, it will connect to a SQL Server database, run a query and display the query results dynamically on the DataGridView control. Moreover, we saw how easy it is to further customize the DataGridView control via its properties.
Stay tuned, as we continue our journey into the Data Access world via this exciting tip series.
Next Steps
- Check out my tip: How to Get Started with SQL Server and .NET
- Check out my tip: Querying SQL Server Tables from .NET
- Check out my tip: Working with SQL Server Stored Procedures and .NET
- Check out my tip: Working with SQL Server Functions and .NET
- Check out my tip: Understanding SQL Server Connection Pooling in ADO.NET
- Check the MS Docs article: DataGridView Class
- Check the MS Docs article: SqlDataAdapter Class
- Check the MS Docs article: DataSet Class
- Check the MS Docs article: .NET Framework Data Providers
- Check the MS Docs article: SqlConnection Class
- Check the MS Docs article: SqlCommand Class
Last Updated: 2018-12-27
About the author
View all my tips | https://www.mssqltips.com/sqlservertip/5834/mapping-sql-server-query-results-to-a-datagridview-in-net/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MSSQLTips-LatestSqlServerTips+%28MSSQLTips+-+Latest+SQL+Server+Tips%29 | CC-MAIN-2020-34 | refinedweb | 1,546 | 62.58 |
A). More...
#include <IFSelect_SessionFile.hxx>
A).
The produced File is under an Ascii Form, then it may be easily consulted. It is possible to cumulate reading of several Files. But in case of Names conflict, the newer Names are forgottens.
The Dump supports the description of XSTEP functionnalities (Sharing an Interface File, with Selections, Dispatches, Modifiers ...) but does not refer to the Interface File which is currently loaded.
SessionFile works with a library of SessionDumper type objects
The File is Produced as follows : SessionFile produces all general Informations (such as Int and Text Parameters, Types and Inputs of Selections, Dispatches, Modifiers ...) and calls the SessionDumpers to produce all the particular Data : creation arguments, parameters to be set It is Read in the same terms : SessionFile reads and interprets all general Informations, and calls the SessionDumpers to recognize Types and for a recognized Type create the corresponding Object with its particular parameters as they were written. The best way to work is to have one SessionDumper for each consistent set of classes (e.g. a package).
Creates a SessionFile, ready to read Files in order to load them into a given WorkSession. The following Read Operations must then be called. It is also possible to perform a Write, which produces a complete File of all the content of the WorkSession.
Creates a SessionFile which Writes the content of a WorkSession to a File (directly calls Write) Then, IsDone aknowledges on the result of the Operation. But such a SessionFile may not Read a File to a WorkSession.
Adds an Item to the WorkSession, taken as Name the first item of the read Line. If this Name is not a Name but a Number or if this Name is already recorded in the WorkSession, it adds the Item but with no Name. Then the Name is recorded in order to be used by the method ItemValue <active> commands to make active or not in the session.
Adds a line to the list of recorded lines.
Clears the lines recorded whatever for writing or for reading.
Specific Destructor (closes the File if not yet done)
Returns True if the last Read or Write operation has been corectly performed. ELse returns False.
Returns True if a Parameter, in the Own List (see NbOwnParams) is a Text (between "..."). Else it is an Item (Parameter, Selection, Dispatch ...), which can be Void.
Returns True if a Parameter, given its rank in the Own List (see NbOwnParams), is Void. Returns also True if <num> is out of range (undefined parameters)
Returns a Parameter as an Item. Returns a Null Handle if the Parameter is a Text, or if it is defined as Void.
Returns a line given its rank in the list of recorded lines.
Returns the count of recorded lines.
During a Read operation, SessionFile processes sequencially the Items to read. For each one, it gives access to the list of its Parameters : they were defined by calls to SendVoid/SendParam/SendText during Writing the File. NbParams returns the count of Parameters for the line currently read.
At beginning of writing an Item, writes its basics :
Returns a Parameter (alphanumeric item of a line) as it has been read.
Performs a Read Operation from a file to a WorkSession i.e. calls ReadFile, then ReadSession and ReadEnd Returned Value is : 0 for OK, -1 File could not be opened, >0 Error during Read (see WriteSession) IsDone can be called too (will return True for OK)
Reads the end of a file (its last line). Returns 0 if OK, status >0 in case of error (not a suitable end line).
Reads the recorded lines from a file named <name>, after having cleared the list (stops if RecognizeFile fails) Returns False (with no clearing) if the file could not be read.
Reads a Line and splits it into a set of alphanumeric items, which can then be queried by NbParams/ParamValue ...
Tries to Read an Item, by calling the Library of Dumpers Sets the list of parameters of the line to be read from the first own one.
Performs a Read Operation from a File to a WorkSession, i.e. reads the list of line (which must have already been loaded, by ReadFile or by calls to AddLine) Important Remark : this excludes the reading of the last line, which is performed by ReadEnd Returns 0 for OK, >0 status for Read Error (not a suitable File, or WorkSession given as Immutable at Creation Time) IsDone can be called too (will return True for OK)
Recognizes the header line. returns True if OK, False else.
Removes the last line. Can be called recursively. Does nothing if the list is empty.
During a Write action, commands to send the identification of a Parameter : if it is Null (undefined) it is send as Void ($) if it is Named in the WorkSession, its Name is sent preceeded by ':', else a relative Ident Number is sent preceeded by '#' (relative to the present Write, i.e. starting at one, without skip, and counted part from Named Items)
During a Write action, commands to send a Text without interpretation. It will be sent as well.
During a Write action, commands to send a Void Parameter i.e. a Parameter which is present but undefined Its form will be the dollar sign : $.
Sets the rank of Last General Parameter to a new value. It is followed by the Fist Own Parameter of the item. Used by SessionFile after reading general parameters.
Sets Parameters to be sent as Own if <mode> is True (their Name or Number or Void Mark or Text Value is preceeded by a Column sign ':') else they are sent normally Hence, the Own Parameter are clearly identified in the File.
Internal routine which processes a line into words and prepares its exploration.
Returns the content of a Text Parameter (without the quotes). Returns an empty string if the Parameter is not a Text.
Returns the WorkSession on which a SessionFile works. Remark that it is returned as Immutable.
Performs a Write Operation from a WorkSession to a File i.e. calls WriteSession then WriteEnd, and WriteFile Returned Value is : 0 for OK, -1 File could not be created, >0 Error during Write (see WriteSession) IsDone can be called too (will return True for OK)
Writes the trailing line. It is separate from WriteSession, in order to allow to redefine WriteSession without touching WriteEnd (WriteSession defines the body of the file) WriteEnd fills the list of lines. Returns a status of error, 0 if OK, >0 else.
Writes the recorded lines to a file named <name> then clears the list of lines. Returns False (with no clearing) if the file could not be created.
Writes a line to the File. If <follow> is given, it is added at the following of the line. '
' must be added for the end.
Writes the Parameters own to each type of Item. Uses the Library of SessionDumpers Returns True if Done, False if could not be treated (hence it remains written with no Own Parameter)
Prepares the Write operation from a WorkSession (IFSelect) to a File, i.e. fills the list of lines (the file itself remains to be written; or NbLines/Line may be called) Important Remark : this excludes the reading of the last line, which is performed by WriteEnd Returns 0 if OK, status > 0 in case of error. | https://www.opencascade.com/doc/occt-6.9.1/refman/html/class_i_f_select___session_file.html | CC-MAIN-2019-47 | refinedweb | 1,232 | 70.73 |
MiniMusicPlayer2 class in Head First Java book (Chapter 12)
Juan MenendezB
Greenhorn
Joined: Feb 25, 2012
Posts: 2
posted
Feb 27, 2012 03:51:55
0
In the following code, which is copied as it is in page 390 of the book Head First Java, I came across a little issue I hope someone will help me clarify.
The expected output from running the program should be hearing the piano playing and at the same time in the console, everytime a note was played, the word "la" should we written. The purpose of this exercise was to explain how to handle ControlChange MIDI events (with code number 176) so we could do something at the same time any note was played (since we could not use the NoteOn event for the same purpose). The problem is that adding the event with command 176 on the same tick but right after the NoteOn event (command 144)
(lines 18 and 19)
seem to silence the note somehow. I tried to add it on tick i+1 and as expected I could hear more of the the note but silenced half way by the ControChange event.
I found the solution was to add the ControlChange midi event on the same tick but BEFORE the NoteOn event and not AFTER. Seems like the events are stored in a FIFO queue and this prevents the note from being silenced by the ControlChange event as it happened before.
The question is: am I doing the right thing changing the order of these two lines of code or should the program have worked in the order they were originally in the book?
import javax.sound.midi.*; public class MiniMusicPlayer2 implements ControllerEventListener { public static void main(String[] args) { MiniMusicPlayer2 mini = new MiniMusicPlayer2(); mini.go(); } public void go() { try { Sequencer sequencer = MidiSystem.getSequencer(); sequencer.open(); int[] eventsIWant = { 127 }; sequencer.addControllerEventListener(this, eventsIWant); Sequence seq = new Sequence(Sequence.PPQ, 4); Track track = seq.createTrack(); for (int i = 5; i < 60; i += 4) { track.add(makeEvent(144, 1, i, 100, i)); track.add(makeEvent(176, 1, 127, 0, i)); track.add(makeEvent(128, 1, i, 100, i + 2)); } // end loop sequencer.setSequence(seq); sequencer.setTempoInBPM(220); sequencer.start(); } catch (Exception ex) { ex.printStackTrace(); } } // close public void controlChange(ShortMessage event) { System.out.println("la"); } public MidiEvent makeEvent(int comd, int chan, int one, int two, int tick) { MidiEvent event = null; try { ShortMessage a = new ShortMessage(); a.setMessage(comd, chan, one, two); event = new MidiEvent(a, tick); } catch (Exception e) { } return event; } } // close class
dennis deems
Ranch Hand
Joined: Mar 12, 2011
Posts: 808
posted
Feb 27, 2012 09:12:42
0
A control change event should not silence a note event. Are you sure you copied this line correctly?
track.add(makeEvent(176, 1, 127, 0, i));
Control change 127 is a message to set poly mode On. It's been a while since I worked with MIDI, but I wouldn't expect the repeated sending of this message to have any audible effect. Especially here, where there is nothing that would make 2 tones sound simultaneously.
dennis deems
Ranch Hand
Joined: Mar 12, 2011
Posts: 808
posted
Feb 27, 2012 09:15:04
0
Oh, I see now. It is just using 127 to fire the listener and print out "la".
dennis deems
Ranch Hand
Joined: Mar 12, 2011
Posts: 808
posted
Feb 27, 2012 09:26:51
0
I did a little reading on Midi Mode messages since my knowledge is beyond rusty. Apparently, according to the MIDI spec, mode messages are also supposed to behave as All Notes Off commands. So you are absolutely right to move it before the Note On event.
Juan MenendezB
Greenhorn
Joined: Feb 25, 2012
Posts: 2
posted
Feb 27, 2012 10:44:29
0
OK, thanks a lot
It's strange no one has mentioned this error before (have they?).
Ryan Sykes
Ranch Hand
Joined: Jan 18, 2012
Posts: 58
posted
Feb 27, 2012 11:32:56
0
Hi Juan,
I did notice this issue when I read the book. I found the solution on the forums for the book (although surprisingly, there was no response from the authors on the book forums). Dennis, thanks for looking it up and posting why this solution works. I didn't really dig too deep as I figured that I probably never would be using MIDI in anything I write. I was a little put-off that the book chose a MIDI based application as the main programming project, as it seems that they ended up devoting far too many pages on explaining how to get a working MIDI sequence...pages that I personally feel could have been devoted to some of the things they left out in the book. Still, it is an excellent introductory book to OOP concepts and to Java.
I agree. Here's the link:
subject: MiniMusicPlayer2 class in Head First Java book (Chapter 12)
Similar Threads
HFJ Music Box Page 392
MiniMusicPlayer HFJ
Head First Code
Java Class hangs after run
Program Works, But Why Does It Hang At The End?
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/568665/java/java/MiniMusicPlayer-class-Head-Java-book | CC-MAIN-2013-48 | refinedweb | 867 | 67.89 |
Hi all,
i tried to run Linux command from Java Application.
Here i past my code:-
package com;
public class linux_java { public static void main(String[] args) { try { String command = "cut -f 2,5 ABC/test.tab>ABC/test1.tab"; final Process process = Runtime.getRuntime().exec(command); int returnCode = process.waitFor(); System.out.println("Return code = " + returnCode); } catch (Exception e) { e.printStackTrace(); } }}
Return code =1
But no new file was created.
But when i run "cut -f 2,5 ALLIANCEHOURLYFILES/test.tab>ALLIANCEHOURLYFILES/test1.tab" command from Linux terminal it's works.
please help me.
Thanks!Vaskar
Try using the full path to cut (probably /bin/cut or /usr/bin/cut), since if I'm not mistaken Java doesn't know it's supposed to look for your program in your PATH.
Also make sure your program is running on the right directory.
I tried but it didn't work...
AddSystem.out.println( System.getProperty("user.dir") );to get the current working directory, ABC should be a subdirectory of it.
Alternately, use absolute paths for everything.
I changed my code
import java.io.*;
public class linux_java { public static void main(String[] args) { try { String command = "/bin/cut -f 2,5 /var/www/html/final_java/ABC/qavhourly.tab"; BufferedWriter out = new BufferedWriter(new FileWriter( new File( "/var/www/html/final_java/ABC/testqavhourly.tab"), true)); final Process process = Runtime.getRuntime().exec(command); BufferedReader buf = new BufferedReader(new InputStreamReader( process.getInputStream())); String line; while ((line = buf.readLine()) != null) { out.write(line); out.newLine(); } buf.close(); out.close(); int returnCode = process.waitFor(); System.out.println("Return code = " + returnCode); } catch (Exception e) { e.printStackTrace(); } }}
the code is works properly.But it takes long time.Because the file size is too large i.e nearly 1 GB.Is there any way to reduce the time? pls help me.....
I need to execute Linux commands in terminal with the help of Java GUI . I also need to store the result back to a file.Please help!! | https://www.sitepoint.com/community/t/call-linux-command-from-java-application/3751 | CC-MAIN-2015-48 | refinedweb | 329 | 54.39 |
Distilled • LeetCode • Sliding/Rolling/Moving Window
- Pattern: Sliding/Rolling/Moving Window
- Introduction
- Approach: Brute Force
- Complexity
- Approach: Sliding Window
- [1/Easy] Two Sum
- [Easy] Maximum Sum Subarray of Size K
- [3/Medium] Longest Substring Without Repeating Characters
- [15/Medium] 3Sum
- [18/Medium] 4Sum
- [30/Hard] Substring with Concatenation of All Words
- [76/Hard] Minimum Window Substring
- [167/Medium] Two Sum II - Input Array Is Sorted
- [198/Medium] House Robber
- [904/Medium] Fruit Into Baskets
- [1291/Medium] Sequential Digits
Pattern: Sliding/Rolling/Moving Window
Introduction
In many problems dealing with an array (or a LinkedList), we are asked to find or calculate something among all the subarrays (or sublists) of a given size. For example, take a look at this problem:
Given an array, find the average of all subarrays of ‘K’ contiguous elements in it.
Let’s understand this problem with a real input:
Array: [1, 3, 2, 6, -1, 4, 1, 8, 2], K=5
- Here, we are asked to find the average of all subarrays of ‘5’ contiguous elements in the given array. Let’s solve this:
1. For the first 5 numbers (subarray from index 0-4), the average is: (1+3+2+6-1)/5 => 2.2(1+3+2+6−1)/5=>2.2 1. The average of next 5 numbers (subarray from index 1-5) is: (3+2+6-1+4)/5 => 2.8(3+2+6−1+4)/5=>2.8 1. For the next 5 numbers (subarray from index 2-6), the average is: (2+6-1+4+1)/5 => 2.4(2+6−1+4+1)/5=>2.4 ...
- Here is the final output containing the averages of all subarrays of size 5:
Output: [2.2, 2.8, 2.4, 3.6, 2.8]
Approach: Brute Force
- A brute-force algorithm will calculate the sum of every 5-element subarray of the given array and divide the sum by ‘5’ to find the average. This is what the algorithm will look like:
def find_averages_of_subarrays(K, arr): result = [] for i in range(len(arr)-K+1): # find sum of next 'K' elements _sum = 0.0 for j in range(i, i+K): _sum += arr[j] result.append(_sum/K) # calculate average return result def main(): result = find_averages_of_subarrays(5, [1, 3, 2, 6, -1, 4, 1, 8, 2]) print("Averages of subarrays of size K: " + str(result)) main()
Complexity
- Time:
Since for every element of the input array, we are calculating the sum of its next \(K\) elements, the time complexity of the above algorithm will be \(O(N*K)\) where \(N\) is the number of elements in the input array.
Can we find a better solution? Do you see any inefficiency in the above approach?
The inefficiency is that for any two consecutive subarrays of size ‘5’, the overlapping part (which will contain four elements) will be evaluated twice. For example, take the above-mentioned input:
As you can see, there are four overlapping elements between the subarray (indexed from 0-4) and the subarray (indexed from 1-5). Can we somehow reuse the
sumwe have calculated for the overlapping elements?
The efficient way to solve this problem would be to visualize each subarray as a rolling/sliding window of ‘5’ elements. This means that we will slide the window by one element when we move on to the next subarray. To reuse the
sumfrom the previous subarray, we will subtract the element going out of the window and add the element now being included in the sliding window. This will save us from going through the whole subarray to find the
sumand, as a result, the algorithm complexity will reduce to \(O(N)\).
Approach: Sliding Window
def find_averages_of_subarrays(K, arr): result = [] windowSum, windowStart = 0.0, 0 for windowEnd in range(len(arr)): windowSum += arr[windowEnd] # add the next element # slide the window, we don't need to slide if we've not hit the required window size of 'k' if windowEnd >= K - 1: print(windowEnd, K) result.append(windowSum / K) # calculate the average windowSum -= arr[windowStart] # subtract the element going out windowStart += 1 # slide the window ahead return result def main(): result = find_averages_of_subarrays(5, [1, 3, 2, 6, -1, 4, 1, 8, 2]) print("Averages of subarrays of size K: " + str(result)) main()
- Note that in some problems, the size of the sliding window is not fixed. We have to expand or shrink the window based on the problem constraints.
[1/Easy] Two Sum
- In general, sum problems can be categorized into two categories:
- There is an array and you add some numbers to get to (or close to) a target, or
- You need to return indices of numbers that sum up to a (or close to) a target value.
- Note that when the problem is looking for a indices, sorting the array is probably NOT a good idea.
Problem
-.
- Example 1:
Input: nums = [2,7,11,15], target = 9 Output: [0,1] Explanation: Because nums[0] + nums[1] == 9, we return [0, 1].
- Example 2:
Input: nums = [3,2,4], target = 6 Output: [1,2]
- Example 3:
Input: nums = [3,3], target = 6 Output: [0,1]
- Constraints:
2 <= nums.length <= 104
-109 <= nums[i] <= 109
-109 <= target <= 109
- Only one valid answer exists.
- See problem on LeetCode.
Solution: Sliding Window
- This is the second type of the problems where we’re looking for indices, so sorting is not necessary. What you’d want to do is to go over the array, and try to find two integers that sum up to a target value. Most of the times, in such a problem, using dictionary (hastable) helps. You try to keep track of you’ve observations in a dictionary and use it once you get to the results.
- In this problem, you initialize a dictionary (
seen). This dictionary will keep track of numbers (as
key) and indices (as
value).
- So, you go over your array (line #1) using
enumeratethat gives you both index and value of elements in array.
- As an example, let’s do
nums = [2,3,1]and
target = 3. Let’s say you’re at index
i = 0and
value = 2, ok? you need to find
value = 1to finish the problem, meaning,
target - 2 = 1. 1 here is the remaining. Since remaining + value = target, you’re done once you found it.
- So when going through the array, you calculate the remaining and check to see whether remaining is in the seen dictionary (line #3). If it is, you’re done!
- The current number and the remaining from
seenwould give you the output (line #4).
- Otherwise, you add your current number to the dictionary (line #5) since it’s going to be a remaining for (probably) a number you’ll see in the future assuming that there is at least one instance of answer.
class Solution: def twoSum(self, nums: List[int], target: int) -> List[int]: seen = {} for i, value in enumerate(nums): #1 remaining = target - nums[i] #2 if remaining in seen: #3 return [i, seen[remaining]] #4 else: seen[value] = i #5
Complexity
- Time: \(O(n)\)
- Space: \(O(1)\)
[Easy] Maximum Sum Subarray of Size K
Problem
Given an array of positive numbers and a positive number ‘k,’ find the maximum sum of any contiguous subarray of size ‘k’.
Example 1:
Input: [2, 1, 5, 1, 3, 2], k=3 Output: 9 Explanation: Subarray with maximum sum is [5, 1, 3].
- Example 2:
Input: [2, 3, 4, 1, 5], k=2 Output: 7 Explanation: Subarray with maximum sum is [3, 4].
Solution: Brute force
- A basic brute force solution will be to calculate the sum of all ‘k’ sized subarrays of the given array to find the subarray with the highest sum. We can start from every index of the given array and add the next ‘k’ elements to find the subarray’s sum. Following is the visual representation of this algorithm for example 1:
def max_sub_array_of_size_k(k, arr): max_sum = 0 window_sum = 0 for i in range(len(arr) - k + 1): window_sum = 0 for j in range(i, i+k): window_sum += arr[j] max_sum = max(max_sum, window_sum)*K)\), where ‘N’ is the total number of elements in the given array.
Solution: Sliding window
- If you observe closely, you will realize that to calculate the sum of a contiguous subarray, we can utilize the sum of the previous subarray. For this, consider each subarray as a Sliding Window of size ‘k.’ To calculate the sum of the next subarray, we need to slide the window ahead by one element. So to slide the window forward and calculate the sum of the new position of the sliding window, we need to do two things:
- Subtract the element going out of the sliding window, i.e., subtract the first element of the window.
- Add the new element getting included in the sliding window, i.e., the element coming right after the end of the window.
- This approach will save us from re-calculating the sum of the overlapping part of the sliding window. Here is what our algorithm will look like:
def max_sub_array_of_size_k(k, arr): max_sum , window_sum = 0, 0 window_start = 0 for window_end in range(len(arr)): window_sum += arr[window_end] # add the next element # slide the window, we don't need to slide if we've not hit the required window size of 'k' if window_end >= k-1: max_sum = max(max_sum, window_sum) window_sum -= arr[window_start] # subtract the element going out window_start += 1 # slide the window ahead)\)
- Space: \(O(1)\)
[3/Medium] Longest Substring Without Repeating Characters
Problem
Given a string
s, find the length of the longest substring without repeating characters.
Example 1:
Input: s = "abcabcbb" Output: 3 Explanation: The answer is "abc", with the length of 3.
- Example 2:
Input: s = "bbbbb" Output: 1 Explanation: The answer is "b", with the length of 1.
- Example 3:
Input: s = "pwwkew" Output: 3 Explanation: The answer is "wke", with the length of 3. Notice that the answer must be a substring, "pwke" is a subsequence and not a substring.
- Constraints:
0 <= s.length <= 5 * 104
s consists of English letters, digits, symbols and spaces.
Solution: Brute Force
- Intuition:
- Check all the substring one by one to see if it has no duplicate character.
- Algorithm:
- Suppose we have a function
boolean allUnique(String substring)which will return true if the characters in the substring are all unique, otherwise false. We can iterate through all the possible substrings of the given string s and call the function
allUnique. If it turns out to be true, then we update our answer of the maximum length of substring without duplicate characters.
- Now let’s fill the missing parts:
- To enumerate all substrings of a given string, we enumerate the start and end indices of them. Suppose the start and end indices are \(i\) and \(j\), respectively. Then we have \(0 \leq i \lt j \leq n\) (here end index \(j\) is exclusive by convention). Thus, using two nested loops with \(i\) from \(0\) to \(n - 1\) and \(j\) from \(i+1\) to \(n\), we can enumerate all the substrings of \(s\).
- To check if one string has duplicate characters, we can use a set. We iterate through all the characters in the string and put them into the set one by one. Before putting one character, we check if the set already contains it. If so, we return
false. After the loop, we return
true.
class Solution: def lengthOfLongestSubstring(self, s: str) -> int: def check(start, end): chars = [0] * 128 for i in range(start, end + 1): c = s[i] chars[ord(c)] += 1 if chars[ord(c)] > 1: return False return True n = len(s) res = 0 for i in range(n): for j in range(i, n): if check(i, j): res = max(res, j - i + 1) return res
Complexity
- Time: \(O(n^3)\)
- Space: \(O(min(n,m))\). We need \(O(k)\) space for checking a substring has no duplicate characters, where \(k\) is the size of the
Set. The size of the Set is upper bounded by the size of the string \(n\) and the size of the charset/alphabet \(m\).
Solution: Sliding Window
- Algorithm:
- The naive approach is very straightforward. But it is too slow. So how can we optimize it?
- In the naive approaches, we repeatedly check a substring to see if it has duplicate character. But it is unnecessary. If a substring \(s_{ij}\) from index \(i\) to \(j - 1\) is already checked to have no duplicate characters. We only need to check if \(s[j]\) is already in the substring \(s_{ij}\).
- To check if a character is already in the substring, we can scan the substring, which leads to an \(O(n^2)\) algorithm. But we can do better.
- By using HashSet as a sliding window, checking if a character in the current can be done in \(O(1)\).
- A sliding window is an abstract concept commonly used in array/string problems. A window is a range of elements in the array/string which usually defined by the start and end indices, i.e., \([i, j)\) (left-closed, right-open). A sliding window is a window “slides” its two boundaries to the certain direction. For example, if we slide \([i, j)\) to the right by 1 element, then it becomes \([i+1, j+1)\) (left-closed, right-open).
- Back to our problem. We use HashSet to store the characters in current window \([i, j)\) (\(j = i\) initially). Then we slide the index \(j\) to the right. If it is not in the HashSet, we slide \(j\) further. Doing so until \(s[j]\) is already in the HashSet. At this point, we found the maximum size of substrings without duplicate characters start with index \(i\). If we do this for all \(i\), we get our answer.
class Solution: def lengthOfLongestSubstring(self, s: str) -> int: chars = [0] * 128 left = right = 0 res = 0 while right < len(s): r = s[right] chars[ord(r)] += 1 # loop until while chars[ord(r)] > 1: l = s[left] chars[ord(l)] -= 1 left += 1 res = max(res, right - left + 1) right += 1 return res
Complexity
- Time: \(O(2n) = O(n)\). In the worst case each character will be visited twice by ii and jj.
- Space: \(O(min(n,m))\). We need \(O(k)\) space for the sliding window, where \(k\) is the size of the
Set. The size of the Set is upper bounded by the size of the string nn and the size of the charset/alphabet \(m\).
Solution: Sliding Window with Hashmap
- The above solution requires at most \(2n\) steps. In fact, it could be optimized to require only n steps. Instead of using a set to tell if a character exists or not, we could define a mapping of the characters to its index. Then we can skip the characters immediately when we found a repeated character.
- The reason is that if \(s[j]\) have a duplicate in the range \([i, j)\) with index \(j'\), we don’t need to increase \(i\) little by little. We can skip all the elements in the range \([i, j']\) and let \(i\) to be \(j' + 1\) directly.
class Solution: def lengthOfLongestSubstring(self, s: str) -> int: n = len(s) ans = 0 # charMapping stores the mapping of the "current" index of a character charMapping = {} i = 0 # try to extend the range [i, j] for j in range(n): if s[j] in charMapping: i = max(charMapping[s[j]], i) ans = max(ans, j - i + 1) charMapping[s[j]] = j + 1 return ans
Complexity
- Time: \(O(n)\). Index \(j\) will iterate \(n\) times.
- Space: \(O(min(m,n))\). Same as the previous approach.
Sliding Window with an Integer Array
- The previous implementations all have no assumption on the charset of the string
s.
- If we know that the charset is rather small, we can replace the Map with an integer array as direct access table.
- Commonly used tables are:
int[26]for Letters ‘a’ - ‘z’ or ‘A’ - ‘Z’
int[128]for ASCII
int[256]for Extended ASCII
class Solution: def lengthOfLongestSubstring(self, s: str) -> int: n = len(s) ans = 0 # charMapping stores the mapping of the "current" index of a character charMapping = [None] * 128 i = 0 # try to extend the range [i, j] for j in range(n): if charMapping[ord(s[j])] is not None: i = max(charMapping[ord(s[j])], i) ans = max(ans, j - i + 1) charMapping[ord(s[j])] = j + 1 return ans
- Same approach, rephrased:
class Solution: def lengthOfLongestSubstring(self, s: str) -> int: chars = [None] * 128 left = right = 0 res = 0 while right < len(s): r = s[right] index = chars[ord(r)] if index != None and index >= left and index < right: left = index + 1 res = max(res, right - left + 1) chars[ord(r)] = right right += 1 return res
Complexity
- Time: \(O(n)\). Index \(j\) will iterate \(n\) times.
- Space: \(O(m)\), where \(m\) is the size of the charset.
[15/Medium] 3Sum: []
Solution: Adapt to Two-Sum
- Another way to solve this problem is to change it into a two sum problem. Instead of finding
a+b+c = 0, you can find
a+b = -cwhere we want to find two numbers
aand
bthat are equal to
-c. This is similar to the first problem. Remember if you wanted to use the exact same as the first code, it’d return indices and not numbers. Also, we need to re-arrange this problem in a way that we have
numsand
target.
class Solution: def threeSum(self, nums: List[int]) -> List[List[int]]: res = [] nums.sort() for i in range(len(nums)-2): if i > 0 and nums[i] == nums[i-1]: continue output_2sum = self.twoSum(nums[i+1:], -nums[i]) if output_2sum ==[]: continue else: for idx in output_2sum: instance = idx + [nums[i]] res.append(instance) output = [] for idx in res: if idx not in output: output.append(idx) return output def twoSum(self, numbers: List[int], target: int) -> List[int]: seen = {} res = [] for i, value in enumerate(nums): #1 remaining = target - nums[i] #2 if remaining in seen: #3 res.append([value, remaining]) #4 else: seen[value] = i #5 return res
Complexity
- Time: \(O(n^2)\)
- Space: \(O(n)\)
Solution: Two pointers
This is similar to the Two Sum II - Input Array is Sorted problem except that it’s looking for three numbers. There are some minor differences in the problem statement. It’s looking for all combinations (not just one) of solutions returned as a list. And second, it’s looking for unique combinations, so repetition is not allowed.
Here, instead of looping (line #1) to
len(nums) - 1, we loop to
len(nums) - 2since we’re looking for three numbers. Since we’re returning values, sort would be a good idea. Otherwise, if the
numsis not sorted, you cannot reducing
rightpointer or increasing
leftpointer easily.
So, first you
sortthe array and define
res = []to collect your outputs. In line #2, we check whether two consecutive elements are equal or not because if they are, we don’t want them (solutions need to be unique) and will skip to the next set of numbers. Also, there is an additional constrain in this line that
i > 0. This is added to take care of cases like
nums = [1, 1, 1]and
target = 3. If we didn’t have
i > 0, then we’d skip the only correct solution and would return
[]as our answer which is wrong (correct answer is
[[1, 1, 1]].
We define two additional pointers this time,
left = i + 1and
right = len(nums) - 1. For example, if nums =
[-2, -1, 0, 1, 2], all the points in the case of i=1 are looking at:
iat -1, left at 0 and right at 2. We then check temp variable similar to the previous example. There is only one change with respect to the previous example here between lines #5 and #10. If we have the
temp = target, we obviously add this set to
resin line #5. However, we’re not done yet. For a fixed
i, we still need to check and see whether there are other combinations by just changing left and right pointers. That’s what we are doing in lines #6, 7, 8. If we still have the condition of
left < rightand
nums[left]and the number to the right of it are not the same, we move left one index to right (line #6). Similarly, if
nums[right]and the value to left of it is not the same, we move right one index to left. This way for a fixed i, we get rid of repetitive cases. For example, if
nums = [-3, 1, 1, 3,5]and
target = 3, one we get the first
[-3, 1, 5],
left = 1, but,
nums[2]is also 1 which we don’t want the left variable to look at it simply because it’d again return
[-3, 1, 5]. So, we move left one index. Finally, if the repeating elements don’t exists, lines #6 to #8 won’t get activated. In this case we still need to move forward by adding 1 to left and extracting 1 from right (lines #9, 10).
class Solution: def threeSum(self, nums: List[int]) -> List[List[int]]: nums.sort() # We need to sort the list first! res = [] # Result list holding the triplets # len(nums)-2 is because we need at least 3 numbers to continue. for i in range(len(nums)-2): #1 # Since the list is sorted, if nums[i] > 0, then all # nums[j] with j > i are positive as well, and we cannot # have three positive numbers sum up to 0. Return immediately. if nums[i] > 0: break # if i > 0 is because when i = 0, it doesn't need to check if it's a duplicate # element since it doesn't even have a previous element to compare with. # This condition also helps avoid a negative index, i.e., when i = 0, nums[i-1] = nums[-1] # and you don't want to skip this iteration when nums[0] == nums[-1] # The nums[i] == nums[i-1] condition helps us avoid duplicates. # E.g., given [-1, -1, 0, 0, 1], when i = 0, we see [-1, 0, 1] # works. Now at i = 1, since nums[1] == -1 == nums[0], we avoid # this iteration and thus avoid duplicates. if i > 0 and nums[i] == nums[i-1]: #2 continue # Classic two pointer solution left = i + 1 #3 right = len(nums) - 1 #4 while left < right: curr_sum = nums[i] + nums[left] + nums[right] if curr_sum > 0: # sum too large, move right ptr right -= 1 elif curr_sum < 0: # sum too small, move left ptr left += 1 else: res.append([nums[i], nums[left], nums[right]]) #5 # the below 2 loops are to avoid duplicate triplets # we need to skip elements that are identical to our # current solution, otherwise we would have duplicated triples while left < right and nums[left] == nums[left + 1]: #6 left += 1 while left < right and nums[right] == nums[right - 1]:#7 right -= 1 #8 right -= 1 #9 left += 1 #10 return res
Complexity
- Time: \(O(n^2)\)
- Space: \(O(n)\)
[18/Medium] 4Sum
Problem
- Given an array
numsof
nintegers, return an array of all the unique quadruplets
[nums[a], nums[b], nums[c], nums[d]]such that:
0 <= a, b, c, d < n
a, b, c, and d are distinct.
nums[a] + nums[b] + nums[c] + nums[d] == target
You may return the answer in any order.
- Example 1:
Input: nums = [1,0,-1,0,-2,2], target = 0 Output: [[-2,-1,1,2],[-2,0,0,2],[-1,0,0,1]]
- Example 2:
Input: nums = [2,2,2,2,2], target = 8 Output: [[2,2,2,2]]
- Constraints:
1 <= nums.length <= 200
-109 <= nums[i] <= 109
-109 <= target <= 109
- See problem on LeetCode.
Solution: Two pointers
class Solution: def fourSum(self, nums: List[int], target: int) -> List[List[int]]: # Sort the initial list nums.sort() # HashMap for the solution, to avoid duplicates solution = {} # i = 0 ..... n-1 for i in range(len(nums)): #j = i+1 ..... n-1 for j in range(i+1, len(nums)): # Two pointer approach to find the remaining two elements start = j+1 end = len(nums) - 1 while start < end: temp = nums[i] + nums[j] + nums[start] + nums[end] if (temp == target): solution[(nums[i], nums[j], nums[start], nums[end])] = True start +=1 end -=1 elif temp < target: start += 1 else: end -=1 return solution.keys()
Complexity
- Time: \(O(n^3)\)
- Space: \(O(n)\)
Solution: Adapt to Two-Sum-II
You should have gotten the idea, and what you’ve seen so far can be generalized to nSum. Here, I write the generic code using the same ideas as before. What I’ll do is to break down each case to a 2Sum II problem, and solve them recursively using the approach in 2Sum II example above.
First sort nums, then I’m using two extra functions, helper and twoSum. The twoSum is similar to the 2sum II example with some modifications. It doesn’t return the first instance of results, it check every possible combinations and return all of them now. Basically, now it’s more similar to the 3Sum solution. Understanding this function shouldn’t be difficult as it’s very similar to 3Sum. As for helper function, it first tries to check for cases that don’t work (line #1). And later, if the N we need to sum to get to a target is 2 (line #2), then runs the twoSum function. For the more than two numbers, it recursively breaks them down to two sum (line #3). There are some cases like line #4 that we don’t need to proceed with the algorithm anymore and we can break. These cases include if multiplying the lowest number in the list by N is more than target. Since its sorted array, if this happens, we can’t find any result. Also, if the largest array (nums[-1]) multiplied by N would be less than target, we can’t find any solution. So, break.
For other cases, we run the helper function again with new inputs, and we keep doing it until we get to N=2 in which we use twoSum function, and add the results to get the final output.
class Solution: def fourSum(self, nums: List[int], target: int) -> List[List[int]]: nums.sort() results = [] self.helper(nums, target, 4, [], results) return results def helper(self, nums, target, N, res, results): if len(nums) < N or N < 2: #1 return if N == 2: #2 output_2sum = self.twoSum(nums, target) if output_2sum != []: for idx in output_2sum: results.append(res + idx) else: for i in range(len(nums) -N +1): #3 if nums[i]*N > target or nums[-1]*N < target: #4 break if i == 0 or i > 0 and nums[i-1] != nums[i]: #5 self.helper(nums[i+1:], target-nums[i], N-1, res + [nums[i]], results) def twoSum(self, nums: List[int], target: int) -> List[int]: res = [] left = 0 right = len(nums) - 1 while left < right: temp_sum = nums[left] + nums[right] if temp_sum == target: res.append([nums[left], nums[right]]) right -= 1 left += 1 while left < right and nums[left] == nums[left - 1]: left += 1 while right > left and nums[right] == nums[right + 1]: right -= 1 elif temp_sum < target: left +=1 else: right -= 1 return res
Complexity
- Time: TODO
- Space: TODO
[30/Hard] Substring with Concatenation of All Words
Problem
You are given a string
sand an array of strings
wordsof the same length. Return all starting indices of
substring(s)in
sthat is a concatenation of each word in
wordsexactly once, in any order, and without any intervening characters.
You can return the answer in any order.
Example 1:
Input: s = "barfoothefoobarman", words = ["foo","bar"] Output: [0,9] Explanation: Substrings starting at index 0 and 9 are "barfoo" and "foobar" respectively. The output order does not matter, returning [9,0] is fine too.
- Example 2:
Input: s = "wordgoodgoodgoodbestword", words = ["word","good","best","word"] Output: []
- Example 3:
Input: s = "barfoofoobarthefoobarman", words = ["bar","foo","the"] Output: [6,9,12]
- Constraints:
1 <= s.length <= 104
s consists of lower-case English letters.
1 <= words.length <= 5000
1 <= words[i].length <= 30
words[i] consists of lower-case English letters.
- See problem on LeetCode.
Solution: Approach 1: Check All Indices Using a Hash Table
- Intuition:
Definition: a valid substring is a string that is a concatenation of all of the words in our word bank. So if we are given the words “foo” and “bar”, then “foobar” and “barfoo” would be valid substrings.
- An important detail in the problem description to notice is that all elements in words have the same length. This gives us valuable information about all valid substrings - we know what length they will be. Each valid substring is the concatenation of words.length words which all have the same length, so each valid substring has a length of
words.length * words[0].length.
This makes it easy for us to take a given index and check if a valid substring starting at this index exists. Let’s say that the elements of words have a length of 3. Then, for a given starting index, we can just look at the string in groups of 3 characters and check if those characters form a word in words. Because words can have duplicate words, we should use a hash table to maintain a count for each word. As a bonus, a hash table also lets us search for word matches very quickly.
We can write a helper function that takes an index and returns if a valid substring starting at this index exists. Then, we can build our answer by running this function for all candidate indices. The logic for this function can be something along the lines of:
- Iterate from the starting index to the starting index plus the size of a valid substring.
- Iterate words[0].length characters at a time. At each iteration, we will look at a substring with the same length as the elements in words.
- If the substring doesn’t exist in words, or it does exist but we already found the necessary amount of it, then return false.
- We can use a hash table to keep an updated count of the words between the starting index and the current index.
-.
- Create a function
checkthat takes a starting index
iand returns if a valid substring starts at index
i:
- Create a copy of
wordCountto make use of for this particular index. Let’s call it
remaining. Also, initialize an integer wordsUsed which tracks how many matches we have found so far.
- Iterate starting from
i. Iterate until
i + substringSize- we know that each valid substring will have this size, so we don’t need to go further. At each iteration, we will be checking for a word - and we know each word has a length of wordLength, so increment by
wordLengtheach time.
- If the variable we are iterating with is
j, then at each iteration, check for a word
sub = s.substring(j, j + wordLength).
- If sub is in
remainingand has a value greater than 0, then decrease its count by 1 and increase
wordsUsedby 1. Otherwise, break out of the loop.
- At the end of it all, if
wordsUsed == k, that means we used up all the words in words and have found a valid substring. Return true if so, false otherwise.
- Now that we have this function check, we can just check all possible starting indices. Because a valid substring has a length of substringSize, we only need to iterate up to n -
substringSize. Build an array with all indices that pass check and return it.
- Implementation:
class Solution: def findSubstring(self, s: str, words: List[str]) -> List[int]: n = len(s) k = len(words) word_length = len(words[0]) substring_size = word_length * k word_count = collections.Counter(words) def check(i): # Copy the original dictionary to use for this index remaining = word_count.copy() words_used = 0 # Each iteration will check for a match in words for j in range(i, i + substring_size, word_length): sub = s[j : j + word_length] if remaining[sub] > 0: remaining[sub] -= 1 words_used += 1 else: break # Valid if we used all the words return words_used == k answer = [] for i in range(n - substring_size + 1): if check(i): answer.append(i) return answer
Complexity
- Time: \(O(n \cdot a \cdot b - (a \cdot b) ^ 2)\), given \(n\) as the length of \(s\), \(a\) as the length of words, and \(b\) as the length of each word.
- First, let’s analyze the time complexity of check. We start by creating a copy of our hash table, which in the worst case will take O(a)O(a) time, when words only has unique elements. Then, we iterate aa times (from i to i + substringSize, wordLength at a time): \(\text{substringSize / wordLength = words.length = a}\). At each iteration, we create a substring, which takes wordLength = bb time. Then we do a hash table check.
- That means each call to check uses O(a + a \cdot (b + 1))O(a+a⋅(b+1)) time, simplified to \(O(a \cdot b)\). How many times do we call check? Only \(n - substringSize\) times. Recall that substringSize is equal to the length of words times the length of words[0], which we have defined as aa and bb respectively here. That means we call check \(n - a \cdot b\) times.
- This gives us a time complexity of O((n - a \cdot b) \cdot a \cdot b)O((n−a⋅b)⋅a⋅b), which can be expanded to \(O(n \cdot a \cdot b - (a \cdot b) ^ 2)\).
- Space: \(O(a + b)\).
- Most of the time, the majority of extra memory we use is the hash table to store word counts. In the worst-case scenario where words only has unique elements, we will store up to aa keys.
- We also store substrings in sub which requires \(O(b)\) space. So the total space complexity of this approach is \(O(a + b)\). However, because for this particular problem the upper bound for \(b\) is very small (30), we can consider the space complexity to be \(O(a)\).
Solution: Sliding Window
- Intuition:
- In the previous approach, we made use of the fact that all elements of words share the same length, which allows us to efficiently check for valid substrings. Unfortunately, we repeated a lot of computation - each character of s is iterated over many times. Imagine if we had an input like this:
s = "barfoobarfoo" and words = ["bar", "foo"]
- Valid substrings start at index 0, 3, and 6. Notice that the substrings starting at indices 0 and 3 share the same “foo”. That means we are iterating over and handling this “foo” twice, which shouldn’t be necessary. We do it again with the substrings starting at indices 3 and 6 - they use the same “bar”. In this specific example it may not seem too bad, but imagine if we had an input like:
s = "aaaa...aaa", s.length = 10,000 and words = ["a", "a", ..., "a", "a"], words.length = 5000
We would be iterating over the same characters millions of times. How can we avoid repeated computation? Let’s make use of a sliding window. We can re-use most of the logic from the previous approach, but this time instead of only checking for one valid substring at a time with each call to check, we will try to find all valid substrings in one pass by sliding our window across s.
So how will the left and right bounds of the window move, and how can we tell if we our window is a valid substring? Let’s say we start at index 0 and do the same process as the previous approach - iterate wordLength at a time, so that at each iteration we are focusing on one potential word. Our iteration variable, say right, can be our right bound. We can initialize our left bound at
0, say
left = 0.
Now, right will move at each iteration, by wordLength each time. At each iteration, we have a word sub = s.substring(right, right + wordLength). If sub is not in words, we know that we cannot possibly form a valid substring, so we should reset the entire window and try again, starting with the next iteration. If sub is in words, then we need to keep track of it. Like in the previous approach, we can use a hash table to keep count of all the words in our current window.
When our window has reached the maximum size (
substringSize), we can check if it is a valid substring. Like in the previous approach, we can use an integer wordsUsed to check if
wordsUsed == words.lengthto see if we made use of all the elements in words, and thus have a valid substring. If we do, then we can add
leftto our answer.
Whether we have a valid substring or not, if our window has reached maximum size, we need to move the left bound. This means we need to find the word we are removing from the window, and perform the necessary logic to keep our hash table up to date.
Another thing to note: we may encounter excess words. For example, with
s = "foofoobar", and
words = ["foo", "bar"], the two “foo” should not be matched together to have
wordsUsed = 2. Whenever we find that sub is in words, we should check how many times we have seen sub so far in the current window (using our hash table), and if it is greater than the number of times it appears in
words(which we can find with a second hash table, wordCount in the first approach), then we know we have an excess word and should not increment
wordsUsed.
In fact, so long as we have an excess word, we can never have a valid substring. Therefore, another criterion for moving our left bound should be to remove words from the left until we find the excess word and remove it (which we can accomplish by comparing the hash table values).
- Now that we’ve described the logic needed for the sliding window, how will we apply the window? In the first approach, we tried every candidate index (all indices up until n - substringSize). In this problem, you may notice that starting the process from two indices that are wordLength apart is pointless. For example, if we have
words = ["foo", "bar"], then starting from index 3 is pointless since by starting at index 0, we will move over index 3. However, we will still need to try starting from indices 1 and 2, in case the input looks something like
s = "xfoobar"or
s = "xyfoobar". As such, we will only need to perform the sliding window
wordLengthamount of times.
-.
answeras an array that will hold the starting index of every valid substring.
- Create a function
slidingWindowthat takes an index left and starts a sliding window from left:
- Initialize a hash table
wordsFoundthat will keep track of how many times a word appears in our window. Also, an integer
wordsUsed = 0to keep track of how many words are in our window, and a boolean
excessWord = falsethat indicates if our window is currently holding an excess word, such as a third “foo” if
words = ["foo", "foo"].
- Iterate using the right bound of our window, right. Start iteration at
left, until
n, wordLength at a time. At each iteration:
- We are dealing with a word
sub = s.substring(right, right + wordLength). If sub is not in
wordCount, then we must reset the window. Clear our hash table wordsFound, and reset our variables
wordsUsed = 0and
excessWord = false. Move left to the next index we will handle, which will be
right + wordLength.
- Otherwise, if sub is in
wordCount, we can continue with our window. First, check if our window is beyond max size or has an excess word. So long as either of these conditions are true, move left over while appropriately updating our hash table, integer and boolean variables.
- Now, we can handle
sub. Increment its value in
wordsFound, and then compare its value in wordsFound to its value in
wordCount. If the value is less than or equal, then we can make use of this word in a valid
substring - incrementwordsUsed. Otherwise, it is an excess word, and we should set
excessWord = true.
- At the end of it all, if we have
wordsUsed == kwithout any excess words, then we have a valid substring. Add left to answer.
- Call
slidingWindowwith each index from 0 to
wordLength. Return answer once finished.
class Solution: def findSubstring(self, s: str, words: List[str]) -> List[int]: n = len(s) k = len(words) word_length = len(words[0]) substring_size = word_length * k word_count = collections.Counter(words) def sliding_window(left): words_found = collections.defaultdict(int) words_used = 0 excess_word = False # Do the same iteration pattern as the previous approach - iterate # word_length at a time, and at each iteration we focus on one word for right in range(left, n, word_length): if right + word_length > n: break sub = s[right : right + word_length] if sub not in word_count: # Mismatched word - reset the window words_found = collections.defaultdict(int) words_used = 0 excess_word = False left = right + word_length # Retry at the next index else: # If we reached max window size or have an excess word while right - left == substring_size or excess_word: # Move the left bound over continously leftmost_word = s[left : left + word_length] left += word_length words_found[leftmost_word] -= 1 if words_found[leftmost_word] == word_count[leftmost_word]: # This word was the excess word excess_word = False else: # Otherwise we actually needed it words_used -= 1 # Keep track of how many times this word occurs in the window words_found[sub] += 1 if words_found[sub] <= word_count[sub]: words_used += 1 else: # Found too many instances already excess_word = True if words_used == k and not excess_word: # Found a valid substring answer.append(left) answer = [] for i in range(word_length): sliding_window(i) return answer
Complexity
- Given \(n\) as the length of \(s\), \(a\) as the length of words, and \(b\) as the length of each word:
- Time: \(O(a + n \cdot b)\)
- First, let’s analyze the time complexity of slidingWindow(). The for loop in this function iterates from the starting index left up to n, at increments of wordLength. This results in n / b total iterations. At each iteration, we create a substring of length wordLength, which costs \(O(b)\).
- Although there is a nested while loop, the left pointer can only move over each word once, so this inner loop will only ever perform a total of \(\frac{n}{wordLength}\) iterations summed across all iterations of the outer for loop. Inside that while loop, we also take a substring which costs \(O(b)\), which means each iteration will cost at most \(O(2 \cdot b)\) on average.
- This means that each call to
slidingWindowcosts \(O(\dfrac{n}{b} \cdot 2 \cdot b)\), or \(O(n)\). How many times do we call slidingWindow? wordLength, or \(b\) times. This means that all calls to slidingWindow costs \(O(n \cdot b)\).
- On top of the calls to
slidingWindow, at the start of the algorithm we create a dictionary wordCount by iterating through words, which costs \(O(a)\). This gives us our final time complexity of \(O(a + n \cdot b)\).
- Notice that the length of words \(a\) is not multiplied by anything, which makes this approach much more efficient than the first approach due to the bounds of the problem, as \(n > a \gg b\).
- Space: \(O(a + b)\)
- Most of the times, the majority of extra memory we use is due to the hash tables used to store word counts. In the worst-case scenario where words only has unique elements, we will store up to \(a\) keys in the tables.
- We also store substrings in sub which requires \(O(b)\) space. So the total space complexity of this approach is \(O(a + b)\). However, because for this particular problem the upper bound for \(b\) is very small (30), we can consider the space complexity to be \(O(a)\).
[76/Hard] Minimum Window Substring
Problem
Given two strings
sand
tof lengths
mand
nrespectively, return the minimum window substring of
ssuch that every character in
t(including duplicates) is included in the window. If there is no such substring, return the empty string
"".
s and t consist of uppercase and lowercase English letters.
- See problem on LeetCode.
Solution: Maintain number of times a char is needed and overall missing chars; run a sliding window
- Idea:
- The current window is
s[i:j]and the result window is
s[windowStart:windowEnd]. In
need[char], store how many times character
charis needed (can be negative) and
missingtells how many characters are still missing. In the loop, first add the new character to the window. Then, if nothing is missing, remove as much as possible from the window start and then update the result.
- Algorithm:
- Create a dict for the target list
need = collections.Counter(t).
- Meanwhile create a empty dict for slidewindow.
- Set two point left and right from the beginning
- Valid is to calculate the item in slidewindow,the valid variable represents the number of characters in the window that meet the need condition. If the size of valid and dict_t.size() is the same, it means that the window has met the condition - and has completely covered the string T.
- Pattern for sliderwindow:
class Solution: def minWindow(self, s: str, t: str) -> str: # hash table to store the required char frequency need = collections.Counter(t) # total character count we need to care about missing = len(t) # windowStart and windowEnd to be windowStart, windowEnd = 0, 0 i = 0 # iterate over s starting over index 1 for j, char in enumerate(s, 1): # if char is needed, then decrease missing (since we now have it) if need[char] > 0: missing -= 1 # decrease the freq of char from need (can be negative - which basically denotes # that we have few extra characters which are not required but present in between current window) need[char] -= 1 # we found a valid window if missing == 0: # trim chars from start to find the real windowStart while i < j and need[s[i]] < 0: # we're moving ahead (with i += 1 down below) so add to need[s[i]] need[s[i]] += 1 # march ahead i += 1 # if it's only one char case or curr window is smaller, then update window if windowEnd == 0 or j-i < windowEnd-windowStart: windowStart, windowEnd = i, j # now reset the window to explore smaller windows # make sure the first appearing char satisfies need[char]>0 (since we're ) need[s[i]] += 1 # missed this first char, so add missing by 1 missing += 1 #update i to windowStart+1 for next window i += 1 return s[windowStart:windowEnd]
Complexity
- Time: \(O(n^2)\)
- Space: \(O(n)\) for
need
[167/Medium] Two Sum II - Input Array Is Sorted
Problemand
index2, added by one as an integer array
[index1, index2]of length 2.
The tests are generated such that there is exactly one solution. You may not use the same element twice.
Your solution must use only constant extra space.].
- Constraints:
2 <= numbers.length <= 3 * 104
-1000 <= numbers[i] <= 1000
numbers is sorted in non-decreasing order.
-1000 <= target <= 1000
The tests are generated such that there is exactly one solution.
- See problem on LeetCode.
Solution: Sliding Window
- Follow the solution to Two Sum. The only change made below was to change the order of line #4. In the previous example, the order didn’t matter. But, here the problem asks for ascending order and since the values/indices in seen has always lower indices than your current number, it should come first. Also, note that the problem says it’s not zero based, meaning that indices don’t start from zero, which is why 1 was added to both of them.
class Solution: def twoSum(self, numbers: List[int], target: int) -> List[int]: seen = {} for i, value in enumerate(numbers): remaining = target - numbers[i] if remaining in seen: return [seen[remaining]+1, i+1] #4 else: seen[value] = i
Complexity
- Time: \(O(n)\)
- Space: \(O(1)\)
Solution: Two Pointers
- A better approach to solve this problem is to treat it as a two pointer problem. Since the array is already sorted, this works.
- You see the following approach in a lot of problems. What you want to do is to have two pointer (if it was 3sum, you’d need three pointers). One pointer move from left and one from right.
- Let’s say you have
numbers = [1,3,6,9]and your
target = 10. Now, left points to 1 at first, and right points to 9. There are three possibilities. If you sum numbers that left and right are pointing at, you get
temp_sum(line #1). If temp_sum is your target, you’re done! You’ll return it (line #9).
- If it’s more than your target, it means that right is pointing to a very large value (line #5) and you need to bring it a little bit to the left to a smaller (or maybe an equal) value (line #6) by adding one to the index. If the
temp_sumis less than target (line #7), then you need to move your left to a little bit larger value by adding one to the index (line #9). This way, you try to narrow down the range in which you’re looking at and will eventually find a couple of number that sum to target, then, you’ll return this in line #9.
- In this problem, since it says there is only one solution, nothing extra is necessary. However, when a problem asks to return all combinations that sum to target, you can’t simply return the first instance and you need to collect all the possibilities and return the list altogether (you’ll see something like this in 3sum).
class Solution: def twoSum(self, numbers: List[int], target: int) -> List[int]: for left in range(len(numbers) -1): #1 right = len(numbers) - 1 #2 while left < right: #3 temp_sum = numbers[left] + numbers[right] #4 if temp_sum > target: #5 right -= 1 #6 elif temp_sum < target: #7 left +=1 #8 else: return [left+1, right+1] #9
Complexity
- Time: \(O(n)\)
- Space: \(O(1)\)
[198/Medium] House Robber
Problem
You are a professional robber planning to rob houses along a street. Each house has a certain amount of money stashed, the only constraint stopping you from robbing each of them is that adjacent houses have security systems connected and it will automatically contact the police if two adjacent houses were broken into on the same night.
Given an integer array
numsrepresenting the amount of money of each house, return the maximum amount of money you can rob tonight without alerting the police.: Recursion
- Recurrence relation:
rob(i) = max(rob(i - 2) + currentHouseValue, rob(i - 1))
class Solution: def rob(self, nums: List[int]) -> int: if len(nums) <= 2: return max(nums) return self.robMax(nums, len(nums) - 1) # start from the last/top def robMax(self, nums, i): if i < 0: return 0 # when i < 0 we just have to return 0 return max(nums[i] + self.robMax(nums, i - 2), self.robMax(nums, i - 1))
Complexity
- Time: \(O(2^n)\)
- Space: \(O(n)\)
Solution: Recursive/Top-Down DP
class Solution: def rob(self, nums: List[int]) -> int: def robMax(nums, dp, i): if i < 0: return 0 # Look Before You Leap (LBYL) if dp[i] != -1: # if the value of dp[i] is not default i.e. -1 that means we have already calculated it so we dont have to do it again we just have to return it return dp[i] dp[i] = max(nums[i] + robMax(nums, dp, i - 2), robMax(nums, dp, i - 1)) return dp[i] if len(nums) <= 2: return max(nums) dp = [-1 for _ in range(len(nums))] # cache return robMax(nums, dp, len(nums) - 1)
Solution: Iterative/Bottom-Up DP
- Algorithm:
- For each house, the robber has two choices - to rob or not to rob. Also, you can only rob when you didn’t rob the previous house. Here, we create a DP table with size
len(nums)and
dp[i]represents the maximum amount of money you can rob until the \(i^{th}\) house. The goal of this problem is to calculate
dp[-1], which is the maximum amount of money you can rob until the last day.
- Now, let’s write a formula for the two options we have at every stage:
- If you robbed the \((i - 2)^th\) house (and thus didn’t rob the previous \((i - 1)^th\) house, since you cannot rob adjacent houses), you can rob the current \(i^{th}\) house. This can be written as
dp[i - 2] + nums[i], because
dp[i - 2]denotes the maximum amount of money you can rob for the \((i - 2)^th\) house, and since you haven’t robbed the \(i_th\) house, you can add \(nums[i]\).
- If you robbed on the previous \((i - 1)^th\) house, you can’t rob the current house. This can be written as \(dp[i - 1]\).
- You can choose the maximum value between these two choices above, and the larger one is the maximum amount you can rob up until the \(i^{th}\) house. This gives us the the following recurrence formula:
dp[i] = max(nums[i - 2] + nums[i], dp[i - 1])
class Solution: def rob(self, nums: List[int]) -> int: if not nums: return 0 if len(nums) < 3: return max(nums) dp = [0] * len(nums) # the base cases where dp[0] is the amount you can take from 1 house and # dp[1] is the amount you can take from 2 houses (that will be the maximum of nums[0] and nums[1]) dp[0] = nums[0] dp[1] = max(nums[0], nums[1]) for num in range(2, len(nums)): # take max of (i) the current house and two houses (i - 2) before the current house, and # (ii) take all the gains we made up to the previous house. # (since we cannot rob adjacent houses) dp[num] = max(nums[num] + dp[num-2], dp[num-1]) # the recurrence relation return dp[-1] # the last value will be your maximum amount of robbery
Complexity
- Time: \(O(n)\)
- Space: \(O(n)\)
Solution: Sliding window; just book-keep the last two values
In the above solution, if we notice carefully we just need the last two values and not a whole array so we can further optimize it.
Note that in calculating
dp[i], you are just referencing
dp[i - 2]and
dp[i - 1]. In other words, all you need is the two values (max until the previous house and the house before), and you can discard the old values in the DP table.
- Let
a(
rob1or
previn the below solutions) be the maximum robbed amount corresponding to the \((i - 2)^th\) house, and let
b(
rob2or
currin the below solutions) be the maximum robbed amount corresponding to the \((i - 1)^th\) house, today’s maximum would be
max(a + nums[i], b). And for the next house \(i\), this the maximum for the house before (i.e., the \((i - 1)^th\) house), and the maximum of the \((i - 2)^th\) house is now
b. So you can write this formula:
a, b = b, max(a + nums[i], b)
- This indicates new
ais replaced with
b, and new
bis replaced with
max(a + nums[i], b). By repeating this, you can calculate until the last day without storing all the values along the way. Finally,
bat the end is what you want. So you can write the improved version.
class Solution: def rob(self, nums: List[int]) -> int: if len(nums) <= 2: return max(nums) rob1, rob2 = nums[0], max(nums[0], nums[1]) for n in nums[2:]: temp = max(rob1 + n, rob2) # the max amount we can rob from the given house and from the prev's previous and from the previous house rob1, rob2 = rob2, temp # update both the variables return temp # return the max amount
- Get rid of the
tempvariable:
class Solution: def rob(self, nums: List[int]) -> int: if len(nums) <= 2: return max(nums) rob1, rob2 = nums[0], max(nums[0], nums[1]) for n in nums[2:]: # the max amount we can rob from the given house and from the prev's previous and from the previous house rob1, rob2 = rob2, max(rob1 + n, rob2) # update both the variables return rob2 # return the max amount
- Simplified:
class Solution: def rob(self, nums: List[int]) -> int: prev, curr = 0 # every loop, calculate the maximum cumulative amount of money until current house for i in nums: # as the loop begins,curr represents dp[k-1],prev represents dp[k-2] # dp[k] = max{ dp[k-1], dp[k-2] + i } prev, curr = curr, max(curr, prev + i) # as the loop ends,curr represents dp[k],prev represents dp[k-1] return curr
Complexity
- Time: \(O(n)\)
- Space: \(O(1)\)
[904/Medium] Fruit Into Baskets
Problem
You are visiting a farm that has a single row of fruit trees arranged from left to right. The trees are represented by an integer array fruits where
fruits[i]is the type of fruit the
ithtree produces.
You want to collect as much fruit as possible. However, the owner has some strict rules that you must follow:
You only have two baskets, and each basket can only hold a single type of fruit. There is no limit on the amount of fruit each basket can hold. Starting from any tree of your choice, you must pick exactly one fruit from every tree (including the start tree) while moving to the right. The picked fruits must fit in one of your baskets. Once you reach a tree with fruit that cannot fit in your baskets, you must stop. Given the integer array
fruits, return the maximum number of fruits you can pick.
Example 1:
Input: fruits = [1,2,1] Output: 3 Explanation: We can pick from all 3 trees.
- Example 2:
Input: fruits = [0,1,2,2] Output: 3 Explanation: We can pick from trees [1,2,2]. If we had started at the first tree, we would only pick from trees [0,1].
- Example 3:
Input: fruits = [1,2,3,2,2] Output: 4 Explanation: We can pick from trees [2,3,2,2]. If we had started at the first tree, we would only pick from trees [1,2].
Solution: Sliding Window
class Solution: def totalFruit(self, fruits: List[int]) -> int: fruit_types = Counter() distinct = 0 max_fruits = 0 left = right = 0 while right < len(fruits): # check if it is a new fruit, and update the counter if fruit_types[fruits[right]] == 0: distinct += 1 fruit_types[fruits[right]] += 1 # too many different fruits, so start shrinking window while distinct > 2: fruit_types[fruits[left]] -= 1 if fruit_types[fruits[left]] == 0: distinct -= 1 left += 1 # set max_fruits to the max window size max_fruits = max(max_fruits, right-left+1) right += 1 return max_fruits
Complexity
- Time: \(O(n)\)
- Space: \(O(1)\)
[1291/Medium] Sequential Digits
Problem
An integer has sequential digits if and only if each digit in the number is one more than the previous digit.
Return a sorted list of all the integers in the range
[low, high]inclusive that have sequential digits.
Example 1:
Input: low = 100, high = 300 Output: [123,234]
- Example 2:
Input: low = 1000, high = 13000 Output: [1234,2345,3456,4567,5678,6789,12345]
Solution: Start with a defined set of digits and run a sliding window
- One might notice that all integers that have sequential digits are substrings of string “123456789”. Hence to generate all such integers of a given length, just move the window of that length along “123456789” string.
- The advantage of this method is that it will generate the integers that are already in the sorted order.
- Algorithm:
- Initialize sample string “123456789”. This string contains all integers that have sequential digits as substrings. Let’s implement sliding window algorithm to generate them.
- Iterate over all possible string lengths: from the length of
lowto the length of
high.
- For each length iterate over all possible start indexes: from
0to
10 - length.
- Construct the number from digits inside the sliding window of current length.
- Add this number in the output list
nums, if it’s greater than
lowand less than
high.
- Return
nums.
class Solution: def sequentialDigits(self, low: int, high: int) -> List[int]: digits = "123456789" nums = [] # Iterate over all possible string lengths: from the length of low to the length of high. for length in range(len(str(low)), len(str(high)) + 1): # For each length iterate over all possible start indexes: from 0 to len(digits) - length. for start in range(len(digits) - length): num = int(digits[start: start + length]) # Construct the number from digits inside the sliding window of current length. if low <= num <= high: # or if num >= low and num <= high: # Add this number in the output list nums, if it's greater than low and less than high. nums.append(num) return nums
Complexity
- Time: \(O(9(9*9)=O(36)=O(1)\). To keep not more than 36 integers with sequential digits.
- Similar approach, worse runtime:
class Solution: def sequentialDigits(self, low: int, high: int) -> List[int]: digits = "123456789" nums = [] for i in range(len(digits)): for j in range(i+1, len(digits)): num = int(digits[i:j+1]) if low <= num <= high: # or if num >= low and num <= high: nums.append(num) return nums
Complexity
- Time: \(O(8(8*8)=O(36)=O(1)\). To keep not more than 36 integers with sequential digits. | https://aman.ai/code/sliding-window/ | CC-MAIN-2022-40 | refinedweb | 10,289 | 67.69 |
getfsent, getfsspec, getfsfile, setfsent, endfsent - handle fstab entries
#include <fstab.h> void endfsent(void); struct fstab *getfsent(void); struct fstab *getfsfile(const char *mount_point); struct fstab *getfsspec(const char *special_file); int setfsent(void);.
Upon success, the functions getfsent(), getfsfile(), and getfsspec() return a pointer to a struct fstab, while setfsent() returns 1. Upon failure or end-of-file, these functions return NULL and 0, respectively.
These functions are not in POSIX.1-2001.).
These functions are not thread-safe..
getmntent(3), fstab(5)
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://huge-man-linux.net/man3/getfsspec.html | CC-MAIN-2017-13 | refinedweb | 111 | 60.61 |
Solving the Expression Problem↩ ↪
October 01, 2010
I started working on Magpie out of frustration with a lot of the languages I used. One of the key itches I wanted to scratch is something called the expression problem. The original formulation of it isn’t very helpful to someone not writing a compiler, so I’ll recast it to something that’s a little more tangible and relevant to the kind of code you find yourself writing.
The core problem is one of extension: How do you make it easy to add both new datatypes and new behaviors to an existing system?
Let’s say we’re writing a document editor. We’ve got a few kinds of documents that it can work with: Text, Drawings, and Spreadsheets. And we’ve got a few operations we need to be able to do with a document: draw it to the screen, load it, and save it to disc. They form a grid, like so:
Text Drawing Spreadsheet +-----------+-----------+-----------+ draw() | | | | +-----------+-----------+-----------+ load() | | | | +-----------+-----------+-----------+ save() | | | | +-----------+-----------+-----------+
Each cell in that grid is a chunk of code we’ve got to write. We need to draw text, load a drawing, save a spreadsheet, etc. All nine combinations will be functions that need to be implemented or we’ll have problems if we’re trying to deal with documents generically.
There are a couple of questions to answer:
How do we organize the code for this?
How do we add new columns (new types of documents)?
How do we add new rows (new operations you can perform on any document)?
How do we ensure all of the squares are covered?
The way you’ll answer those is strongly influenced by your choice of language. In many ways language paradigms differ exactly in how they answer just those questions. For our purposes, we’ll only care about three flavors:
Static OOP Languages
These are the most popular languages on the block today, and include C++, Java and C#. They organize code into classes, and put operations as methods on those classes. A Java implementation of the above would look something like:
public interface Document { void draw(); void load(); void save(); } public class TextDocument implements Document { public void draw() { /* draw text doc... */ } public void load() { /* load text doc... */ } public void save() { /* save text doc... */ } } public class DrawingDocument implements Document { public void draw() { /* draw drawing... */ } public void load() { /* load drawing... */ } public void save() { /* save drawing... */ } } public class SpreadsheetDocument implements Document { public void draw() { /* draw spreadsheet... */ } public void load() { /* load spreadsheet... */ } public void save() { /* save spreadsheet... */ } }
An OOP language answers question 1 by saying that all operations for a single type should be lumped together. Everything you can do with a spreadsheet— drawing, loading, and saving— will be all together in the same class and typically the same file. The downside is that the operations are smeared across the codebase. If you want to see how drawing is handled overall, you’ll need to look at three files.
Question 2 is easy: you just define a new class that implements the interface (or inherits from a base class). OOP languages are good at this. You can even do this if the base class or interface is in some other library.
Question 3 is a bit tougher. Let’s say we decide we want to add support for
printing. We’ll have to add a
print() method to our base
Document
interface and then touch every file that implements it. Gross. If
Document
happens to be defined in code we don’t control, we’re out of luck.
Even worse, it means we tend to put things in classes that don’t really belong there. Do we really want to mix the logic for drawing, printing, and dealing with the file system all into one class? There are solutions and patterns to mitigate this, but they’re complex and awkward (I’m looking at you, visitor pattern).
But at least question 4 is easy. The compiler will tell us if we don’t fully
implement an interface, so if we declare a class as implementing
Document we
can be sure that all of the squares in the grid are covered.
Static Functional Languages
Let’s see how the other half lives. Languages in the ML family like Haskell and F# tend to divide things up differently. Where an OOP language breaks that grid along column boundaries, a functional language breaks it into rows.
This even explains the names of the paradigms: Object-oriented languages place emphasis on objects (the columns). Functional languages place emphasis on the functions (the rows).
A Caml implementation of our example would look like:
type document = Text | Drawing | Spreadsheet fun draw (Text) = (* draw text doc... *) | draw (Drawing) = (* draw drawing doc... *) | draw (Spreadsheet) = (* draw spreadsheet... *) fun load (Text) = (* load text doc... *) | load (Drawing) = (* load drawing doc... *) | load (Spreadsheet) = (* load spreadsheet... *) fun save (Text) = (* save text doc... *) | save (Drawing) = (* save drawing doc... *) | save (Spreadsheet) = (* save spreadsheet... *)
(At least, I hope that’s right. Please let me know what I get wrong.)
The
document interface has become an algebraic datatype with cases for
the different concrete document types. Each operation is a single function
that uses pattern matching to select behavior appropriate for that type.
In other words, it switches up its answer to the first question. Functions are
lumped together, with a single
draw function having the logic to draw all
different types of documents together. This keeps different kinds of behavior
nicely isolated from each other: these functions could be put into different
files without any problem.
Question 3 is answered easily: just define a new function somewhere that
handles all of the different document types. Question 2 is where the pain is.
If we add a new document type, we’ll have to touch every function in the
codebase to add a new case for that document. If the core
document datatype
happens to be defined in code we don’t control, we’re hosed again.
Again, though, static typing helps us with question 4: the compiler will tell us if one of these functions doesn’t handle one of the document types. So there’s no net win between the two, we’ve just changed how we slice the same cake. Let’s look at a third option:
Dynamic Languages
Way on the other side of town are dynamic languages like Python, Ruby and Javascript (and their non-OOP progenitors like Scheme, but I’ll focus on OOP ones here because that’s what I’m most familiar with). They’re super flexible and tons of fun to code in. How do they stack up?
The big win is that you can generally organize your code how you want and add both new operations and new types with impunity. The normal case is to organize things like a static OOP language where all of the operations for a type are lumped together into one file.
However, the dynamism gives you more flexibility. If you want to add a new operation to existing types, you have the freedom to do so outside of the file where that type is defined. You can add new methods into existing classes. This lets you, for example, pull the save/load logic from our document classes out into separate files but then mix that back into the original classes so they’re still as easy to use. No visitor pattern in sight.
So that leaves one last question: How do we ensure all of the squares are covered? And that’s when the sad trombone comes in. There is no compile- time checking for this. The best you can do is write lots of tests and hope you’re covered.
Call me crazy, but I’m not happy with any of these solutions. I want the simplicity of defining a class and putting its core operations all in one place. At the same time, I want to be able to mix in new methods into classes outside of the file where its defined. I want to group some code by row (operation) and other code by column (type), each where it makes the most sense.
And once all that’s done, I want the language to be smart enough to tell if I forgot something or messed something up.
Magpie = Open Classes + Static Checking
Here’s how you’d accomplish this in Magpie. First, we’ll define an interface that all documents will implement:
interface Document draw() load() save() end
Then we’ll create some classes that implement them:
class TextDocument draw() // draw text doc... load() // load text doc... save() // save text doc... end class DrawingDocument draw() // draw drawing... load() // load drawing... save() // save drawing... end class SpreadsheetDocument draw() // draw spreadsheet... load() // load spreadsheet... save() // save spreadsheet... end
So far, this looks pretty much like the static OOP solution with a bit less
boilerplate. The biggest difference is that there’s no explicit
implements
Document on the classes. In Magpie, if a class has all of the methods that an
interface requires, then it is automatically considered to implement the
interface.
When you try to use the concrete class in a place where the interface is
expected, it will check then to make sure that the class implements it. Note
that it does this statically, before
main() has ever been called, like a
typical static language.
Extending a Class
Here is where it gets interesting. Now we decide we want to add printing support. In Magpie, classes and interfaces are open for extension. So we can just do:
extend interface Document print() end
If we try to run the program now, we’ll get type-check errors every place we
pass a concrete document class to something that expects the interface: the
classes no longer implement it since they lack the required
print() method.
To patch that up, we’ll implement those:
def TextDocument print() // print text doc... def DrawingDocument print() // print drawing... def SpreadsheetDocument print() // print spreadsheet...
(
def is one of two syntaxes for adding members to a class. It’s nice for
adding a single member to a class. If you’re adding a bunch of members to one
class, you can also do
extend class which works like a regular class
definition but adds to an existing class.)
We can do this wherever we like, in any file. This lets us keep all of the code for printing lumped together and isolated from the rest of the code just like a dynamic language.
The magical part is that this will be statically type-checked too. The program won’t run until we’ve made sure that every document type now has all four methods.
Magpie’s answers for the original four questions are:
How do we organize the code for this? However you like. Put stuff together where it makes sense.
How do we add new columns (new types of documents)? Like a typical OOP language: define a new class. If it has the necessary methods, it’s a
Document.
How do we add new rows (new operations you can perform on any document)? Add new methods to the classes that need them. This can be done outside of the file where the class is defined.
How do we ensure all of the squares are covered? Add the new operation to the interface too. The static checker will then make sure only classes that have the operation are used in places that expect a
Document.
When you’re defining things, you get the flexibility of a dynamic language. Before it runs, though, you get the safety of a static language. | http://journal.stuffwithstuff.com/2010/10/01/solving-the-expression-problem/?repost=yup | CC-MAIN-2015-27 | refinedweb | 1,939 | 73.88 |
WSTransferExpression (C Function)
Details
- dst and src need not be distinct.
- dst and src can be either loopback or ordinary links.
- WSTransferExpression() returns 0 in the event of an error, and a nonzero value if the function succeeds.
- WSTransferExpression() is often used to read an expression from a link in order to store it in a loopback link.
- WSTransferExpression() is passed (WSLINK)0 for dst, the expression on src will be discarded.
- WSTransferExpression() is declared in the WSTP header file wstp.h.
Examples
Basic Examples (1)
#include "wstp.h"
/* transfer an expression to link 2 from loopback link 1 */
void f(WSLINK lp2, WSLINK lp1)
{
/* send EvaluatePacket[ToExpression[str]] to lp1 */
if(! WSPutFunction(lp1, "EvaluatePacket", 1))
{ /* unable to put function to lp1 */ }
if(! WSPutFunction(lp1, "ToExpression", 1))
{ /* unable to put function to lp1 */ }
if(! WSPutString(lp1, "a = Table[RandomInteger[2,12]];"))
{ /* unable to put the string to lp1 */ }
if(! WSEndPacket(lp1))
{ /* unable to put the end-of-packet indicator to lp1 */ }
if(! WSFlush(lp1))
{ /* unable to put flush any outgoing data buffered in lp1 */ }
/* now transfer to lp2 from lp1 */
if(! WSTransferExpression(lp2, lp1))
{ /* unable to transfer an expression from lp1 to lp2 */ }
} | https://reference.wolfram.com/language/ref/c/WSTransferExpression.html | CC-MAIN-2019-22 | refinedweb | 193 | 58.69 |
In Django official documentation, they provide 2 examples on how to use predefined options as choices of a particular field.
The first example defines choices as a Tuple of Tuple, each option being the value of the outer tuple.
The first element in each inner tuple is the value to be set in the model, and the second element being its string representation.
# Example from Django official documentation YEAR_IN_SCHOOL_CHOICES = ( ('FR', 'Freshman'), ('SO', 'Sophomore'), ('JR', 'Junior'), ('SR', 'Senior'), )
In their second example, they suggest us to define choices as constant in our model class.
# Example from Django official documentation from django.db import models class Student(models.Model): # Constants in Model class FRESHMAN = 'FR' SOPHOMORE = 'SO' JUNIOR = 'JR' SENIOR = 'SR' YEAR_IN_SCHOOL_CHOICES = ( (FRESHMAN, 'Freshman'), (SOPHOMORE, 'Sophomore'), (JUNIOR, 'Junior'), (SENIOR, 'Senior'), ) year_in_school = models.CharField( max_length=2, choices=YEAR_IN_SCHOOL_CHOICES, default=FRESHMAN, )
There is a third option which is my preferred way of defining choices for a Field in Model - Enumeration (or Enum)
Enumeration is a set of symbolic names bound to unique constant values. We will be defining our Enums as a subclass of enum in standard Python library. enum in Python official documentation
In this post, I will walk you through using Enum as choices of field with a simple example. Assume there is a field to identify the language of a Book, given choices of language are English, German, Spanish, and Chinese.
First, we define a class for all choices:
from enum import Enum class LanguageChoice(Enum): # A subclass of Enum DE = "German" EN = "English" CN = "Chinese" ES = "Spanish"
Next, we define our model:
from django.db import models class Book(models.Model): title = models.CharField(max_length=255) language = models.CharField( max_length=5, choices=[(tag, tag.value) for tag in LanguageChoice] # Choices is a list of Tuple )
In our Book model, choices of the language field is a list of Tuple. For simplicity, we use list comprehension to generate our list of choices.
For each tuple in choices, the first element is the value being stored into model, while the second is its human-readable representation.
Now we have defined our Enums and model, how would we use it?
Let's say we want to add a Book entry with language 'DE':
# import your Models and Enums b = Book(title='Deutsch Für Ausländer', language=LanguageChoice.DE) b.save()
Note that how we supply the value for parameter language depends on the first element we defined in the Tuples of the list of choices. We use (tag, tag.value) which tag being an Enum type.
You may ask, "Why using Enum instead of the examples given in Django official documentation?"
According to PEP 435,.
First published on 2017-10-18
Republished on Hackernoon
Love Self-driving technology and machine learning. Community leader in DIYRobocar Hong Kong
How do you compare the enum when the record is read from the DB?
assert book.language == LanguageChoice.DE failed if book is read from DB. It looks like it is comparing a string with a Enum object, which does not work I believe. | https://melvinkoh.me/using-enum-as-model-field-choice-in-django-cjye489s1000auns144ysnifw?guid=none | CC-MAIN-2020-29 | refinedweb | 507 | 55.34 |
Java is one of the most popular and widely used programming language. The reason behind the popularity of Java is an open source environment. You can use any of programming language source code in this platform. Also, Java is fast, reliable and secure. From web application to app development, mobile phones to games; Java can be used everywhere.
Things you need to know before learning Java
Before learning Java, you need to have clear understandings of OOP’s concept as well as a basic experience of programming can also help you to boost up your learning. But, it is not necessary. If you have interest and dedication in learning, you can directly start learning Java.
- Understand the Basics of Java
Before start learning anything, you should have a clear understanding of its basic as well as familiar with its environment. It will help you to learn faster without any hustle.
- Step by Step Learning
Java contains a bulk of topics to learn all are used in each others concept. So, to start learning java, you need to start from its basic and then step by step move forward with the best practice of learned concepts.
- Sharpen your Coding
To make your coding best and sharpen, you need to do regular practice because until you don’t practice what you have learned, its just a waste of time. You will forget all of without regular practice.
- Explore More about Java
A regular study of Java and exploring more about its various topics, general use and facts help you to maintain your interest in learning.
- Study with your Friends
Group study is one of the best way for learning and making concepts more creative. With this you will know others way of explaining, their ideas and methods they use. Also, it helps you to solve your coding problems on the spot.
Setting up Java
When you will become comfortable with the Java environment, try running this simple program
/*codeatglance.com Java Program to print “Hello World” */ public class AVG { public static void main(String args[]) { System.out.println("Hello I am learning Java"); } }
Output:
Hello I am learning Java | https://www.codeatglance.com/java/how-to-start-learning-java/ | CC-MAIN-2020-10 | refinedweb | 357 | 62.98 |
For that matter (is it vain to reply to yourself?), I'm pretty sure you could just implement __getattr__() for the criteria class type in the example below, then pass the critera class instance as the formatting argument. That solves your issue with "dictionary entries" being pre-calculated. -----Original Message----- From: Keating, Tim [mailto:TKeating@origin.ea.com] Sent: Monday, August 19, 2002 4:24 PM To: 'Chris Cogdon'; db-sig@python.org Subject: RE: [DB-SIG] format vs pyformat Am I missing something? Aren't you constructing a string and passing it in either way? Why would the database adapter care how it was formatted? In any case, dictionary-based formatting isn't nearly as ugly if you already have the data in a dictionary and don't have to construct an anonymous one to pass to the format operator. This might seem like a pain, but I suggest you look at the vars() built-in function and consider the possibilities . . . -----Original Message----- From: Chris Cogdon [mailto:chris@cogdon.org] Sent: Monday, August 19, 2002 4:15 PM To: db-sig@python.org Subject: [DB-SIG] format vs pyformat In all my database coding to date, I've been using the 'format' parameter passing protocol to get my stuff into the database query. However, I keep reading and hearing that 'pyformat' is by far superior. However, whenever I've 'given it a go', it's always seemed far clunkier to me. For example, for pyformat to work, all your variables have to be in a dictionary already. Yes, this can be reasonably easy to set up. Viz: cur.execute ( "select * from people where name ilike %(name)s and age>%(age)s", { 'name':criteria.name, 'age':criteria.age } ) But I still find it much easier to use the positional parameters. Viz: cur.execute ( "select * from people where name ilike %s and age > %s", criteria.name, criteria.age ) The above is a simple example... the disparity between the two protocols becomes greater with greater complexity. I understand it's possible to pass a dictionary from a pre-existing namespace. Viz: cur.execute ( "select * from people where name ilike %(name)s and age>%(age)s", criteria.__dict__ ) But this assumes that the values to be passed are already pre-calculated, or you'll need to find 'temporary' space for them so you can throw them into the dictionary. pyformat also allows reuse of parameters, but I've seen this used too seldom to make it a big plus. To me, 'format' parameter passing still seems a lot easier to use, but there's quite a few DB connectors that support 'pyformat' over 'format': For example, PoPy only supports pyformat, while pyPgSQL supports both format and pyformat. Am I completely missing some 'neat trick' here that would make pyformat magically easy? What are other people's views on the ease of using the two formats ? Thanks in advance. -- ("`-/")_.-'"``-._ Chris Cogdon <chris@cogdon.org> . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL _______________________________________________ DB-SIG maillist - DB-SIG@python.org _______________________________________________ DB-SIG maillist - DB-SIG@python.org | https://mail.python.org/pipermail/db-sig/2002-August/002653.html | CC-MAIN-2016-30 | refinedweb | 516 | 57.87 |
I am sharing one of the possible ways to Trigger Foxtrot RPA from Nintex Workflow Cloud. Before we get into the scripts on how to do that, maybe it’s a good idea to explain a bit further in the following paragrah, on how from the architecture perspective this is done.
Architecture
Assuming you have a troop of robot soldiers (i.e. FoxBot) lead by a commander (i.e. FoxHub), ignoring the number of soldiers you need to have to form a troop, in our scenario it could be as little as 1 or 2. Since the army is deployed to the battlefield, the location of the army is changing (i.e. without a fixed IP), we are not able to reach out to the Commanders to send orders.
Since we are not suppose to enter the military zone, central general office can only use special communication where messages are being broadcasted over the encrypted radio frequency, and the army should have worker on duty to pick up and decrypt the message(s). As such we deployed a messenger/worker to each Commander (i.e. which is our Foxhub), the worker’s duty is to listen to Broadcast Messages from the central control room and pass the message to the Commander. The commander is then based on the received message to assign duty/job to its soldiers on what to do.
This architecture is depicted in the diagram below. In our scenario, Nintex Workflow Cloud is the engine for “Publishing” message over the RabbitMQ Message Queue system. We are not reaching to Foxhub to pass the messages, instead the Worker that is attached to FoxHub is Subscribed/listening to the Message Queue and pick up any message(s) that is for them to action on. This is safe and we do not need to worry on how to expose our FoxHub to the internet. Message Queue is super fast without us to worry if the FoxHub will be able to take the load of requests as they are queued. In our scenario you will notice the FoxHub will be triggered immediately whenever there is a message published.
This is exactly how we going to do:
- Setting Up Message Queue (i.e. RabbitMQ in our exercise)
- Create the Worker Application
- Create NWC workflow to publish message to the Message Queue
- Testing: Worker picks up the message(s) and talks to FoxHub to assign new job(s)
Setting Up Message Queue
In our scenario, we going to use RabbitMQ for the purpose, as the focus of this exercise is not about RabbitMQ, we are goin to leverage one of the cloud RabbitMQ provider solution to avoid having the need to install RabbitMQ ourselves. In my example, I am using the CloudAMQP.com (i.e. one of the RabbitMQ as a Service provider, the link will direct you to the available plans). For testing or development purpose, you may pick the free “Little Lemur – For Development” to start.
Once you have signed up, a instance will be provisioned. I provide my plan (i.e. I am using Tough Tiger plan here) details in the below capture as am example on what you will get, (please take note on the Red Arrowed highlighted details you will need in the connection later).
Create the Worker application
Worker can be a Windows console app or windows services. For this exercise we going to create it as a Windows Console Application so we can easily monitor the console logs and interact with the application over the console screen. In the event if this is created as a Windows service, we can also setup dependencies for it to auto start every time we start the Foxhub application.
Worker Application is a worker process (i.e. consumer/receiver/subscriber in Message Queue term). It subscribes to Message Queue, being notified whenever there is a new message published to the Queue by publisher. Upon notified and receiving a new message, the Worker is going to use Foxhub API to talk to Foxhub setting up jobs and assigning jobs to Foxbots/Foxtrots. FoxhubAPI.dll is provided in every FoxHub installation that comes with FoxTrot Suite installation.
We going to create a Windows Console Application using Visual Studio (i.e. I am using VS2017 for the purpose, but using .Net Framework 4.7.2), I realized when i compile my application, since FoxHubAPI.DLL is a 32-bit assembly compiled with the latest .Net Framework 4.7.2, I am forced to set the Target CPU to 32-bit and using .NET Framework 4.7.2 is required).
In the Visual Studio, create a new project and select C# Console App as shown in the capture below, give the project a name (i.e. Worker in my below example).
In order for our Worker Application to subscribe and listen to RabbitMQ, we going to install the RabbitMQ.Client API for .NET into our project. We can do this with Tools – NuGet Package Manager – Manage NuGet Package for Solution… from the Visual Studio menu. Search for RabbitMQ from the “Browse” tab as shown below, to find the RabbitMQ Client to install.
Besides communication to the RabbitMQ, the Worker application will also interact with FoxHub using the FoxHubAPI.dll assembly. Add the FoxHubAPI.dll by right click on the Worker solution to browse and add FoxHubAPI.DLL in the Solution Explorer. You should get something similar for the Solution Explorer to the screen capture below once done.
For the exercise purpose, the codes I shared below for the Worker.cs is hard-coded with RabbitMQ connection and FoxHub job queue details. My advice is you can consider to make these settings configurable at the later stage. The following code provide a basic testing I have done so far to prove a working listening and getting message from RabbitMQ and triggering FoxHub to add and get FoxBot to work on the newly added Job. You will need to change the connection values in the following code accordingly to your RabbitMQ setup, same to the RPA file i hardcoded for FoxHub to take.
using RabbitMQ.Client; using RabbitMQ.Client.Events; using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Worker { class Worker { private static void Main() { string strHubPCName = Environment.MachineName; string strAppPCName = Environment.MachineName; //Create the CFoxHub object: FoxHubAPI.CFoxHub objFoxHub = new FoxHubAPI.CFoxHub(strHubPCName, strAppPCName); //Initialize communication with FoxHub: if (objFoxHub.Init() == false) { //Communication with FoxHub failed! return; //Abort and do nothing. }; Console.WriteLine("Connected to Hub"); //Log into FoxHub: //objFoxHub.Login("", "worker", "password"); //Create a Dictionary object to hold the list of bots: Dictionary<int, string> objBotDict; //Get the list of bots: objBotDict = objFoxHub.GetBots(); //Used to capture the Queue Item ID returned by calling QueueJob(): int intQueueItemID; ConnectionFactory factory = new ConnectionFactory { UserName = "coqwpbee", Password = "mxhSRj04O4be85cOsXaCrOrSomethingElse", VirtualHost = "coqwpbee", HostName = "mustang.rmq.cloudamqp.com" };); //Add the job to the queue. Assign all bots to the job: //You may get the RPA file variable from your message instead //to replace with what I have hard coded here.. intQueueItemID = objFoxHub.QueueSoloJob(DateTime.Now.ToString("F"), "C:\\Users\\gank\\CallVBS.rpa", objBotDict.Keys.ToList()); //Run the job: objFoxHub.RunJob(intQueueItemID); int intStatus; //Retrieve the job's status: intStatus = objFoxHub.GetJobStatus(intQueueItemID); }; channel.BasicConsume(queue: "hello", autoAck: true, consumer: consumer); Console.WriteLine(" Press [enter] to exit."); Console.ReadLine(); } //Clean up objects: objBotDict = null; objFoxHub = null; } } }
Once compiled, we may execute the worker.exe, the console will be running waiting and listening to new message(s) from the RabbitMQ.
What is missing here as of now, is a publisher to publish message to the queue. For this, in our scenario, we are going to use Nintex Workflow Cloud to as a publisher to publish a message triggering the FoxHub to assign and get job done by FoxTrot/Bot. This is simple, as CloudAMQP provides Rest API Endpoint for the purpose. We are just going to add a “Call Http Web Service” action to send/publish a message to the RabbitMQ.
Nintex Workflow Cloud to publish message to RabbitMQ
CloudAMQP.com provides http end point for publishing message, what we need to do for Nintex Workflow Cloud is simply add the “Call a web service” action to send message via the CloudAMQP API. You may follow my example below for configuring the “Call a web service” action.
URL: https://<user>:<password>@<host>/api/exchanges/<virtual-host>/amq.default/publish
Request type: HTTP Post
Request content:
{“vhost”:”<vhost>”,”name”:”amq.default”,”properties”:{“delivery_mode”:1,”headers”:{}},”routing_key”:”<queue-name>”,”delivery_mode”:”1″,”payload”:”<message>”,”headers”:{},”props”:{},”payload_encoding”:”string”}
Additional Note:
- Since our Worker example I hard coded for the Worker to subscribe and listen to “hello” queue, the above <queue-name> value will have to set to “hello” in our example, but you may change it to a better queue name.
- I have my message in the format of “RPA;C:\path\to\rpa\file.rpa”, which i can have the Worker to pick up the message and locate the RPA project file to be assigned to the job queue in FoxHub.
Testing the Setup
To test the setup, simply do the following steps:
- Run the FoxHub (note: make sure you have at least one bot registered to the FoxHub)
- Run the Worker.exe (note: we never have any error handler in our code, as we need to connect to the FoxHub, we need to make sure the FoxHub is running before we run the Worker.exe). This should bring us the console with message of “Connected to Hub” and “Press [Enter] to exit. as shown below
- The above console shows the Worker is now active and listening to the RabbitMQ for new messages
- We can now trigger our Nintex Workflow Cloud workflow to run, which it will publish new message to the Message Queue.
- The Worker will immediately picks up the message and trigger FoxHob to add and assign job to FoxTrot/Bot to run.
Important Note:
1. I am using Visual Studio 2017 with .NET Framework 4.7.2
2. The FoxHubAPI.DLL is a 32-bit assembly, you will need to set your project target to run on x86
3. You can get the help content of FoxHubAPI from the Help menu of the FoxHub Application
4. There is no verification code to handle checking if FoxHub is running, as such you will need to start the FoxHub application before you run Worker.exe | https://blog.opus272.com/tag/foxbot/ | CC-MAIN-2021-04 | refinedweb | 1,737 | 55.64 |
User:Flutter/Help:HTML in wikitext
From Uncyclopedia, the content-free encyclopedia.
The following HTML elements are currently permitted[1]:
The following excerpt from Sanitizer", "span", */ );[3]:
[edit].
Using Template:Timc, "a height of {{h:title|6.1 km|20000 ft}} above sea level" gives "a height of 20000 ft above sea level" (note the hover box over "20000 ft").
[edit] Font
Note: This element is deprecated (should not be used) in favor of <span>.
For some attributes, like color, one can also use
a <font color="red">red</font> word.
giving
a red word
It's pointless to combine the legacy tag <font> with inline CSS; legacy browsers would ignore the CSS, while modern browsers support <span> (see above).
[edit].
[edit] MediaWiki namespace
In some pages in the MediaWiki namespace HTML does not work, and e.g. <span id=abc> produces the HTML <span id=abc> rendered by the browser as <span id=abc>.
[edit] Style pages
CSS and JS pages (see Help:User style) are not interpreted as wikitext, and therefore can have arbitrary HTML.
[edit] External links
- ↑ on Wackipedia.
- ↑ Currently, Wackipedia has the good HTML!!
- ↑ This is refered to | http://uncyclopedia.wikia.com/wiki/User:Flutter/Help:HTML_in_wikitext | crawl-002 | refinedweb | 192 | 66.44 |
itunes connect submission failurea-r-d Feb 3, 2014 5:34 PM
Hi All,
I was trying to submit to itunes connect today via application loader on OSX 10.8.5 application loader version 2.9 (439) and got the following message:
ERROR ITMS-9000: "This bundle is invalid. New apps and app updates submitted to the App Store must be built with Xcode 5 and iOS 7 SDK." at SoftwareAssets/SoftwareAsset (MZItmspSoftwareAssetPackage)
Does anyone know the solution? I am getting desperate here.
I compiled the app with AIR 4.0, Flex 11 (newest SDK on apache flex website as of last weekend). I have successfully submitted this application before, this is just an update. I also tried with the older AIR 3.9 compiler and got the same message
1. Re: itunes connect submission failurea-r-d Feb 3, 2014 7:12 PM (in response to a-r-d)
The fix was to go here and get the beta version of flex SDK and overlay on top of the old one.
2. Re: itunes connect submission failureKieranCMG Feb 3, 2014 8:23 PM (in response to a-r-d)
Hmm that didn't work - i am also getting that error even with the beta version overlay? Perhaps i am implimenting it wrong.
@ a-r-d
Are you able to provide more details or a link to the step by step work around in case i am missing something?
3. Re: itunes connect submission failurea-r-d Feb 4, 2014 6:13 AM (in response to KieranCMG)
You can look at this SO post, it has a link to a blog that gives you more details:
If you have never done the Flex / Air SDK overlay you should read that post. The only thing I have to add is to use the beta version that is marked "For Flex Developers" and not the other beta SDK.
I will note that I was able to submit to itunes connect successfully last night after compiling with the Beta AIR 4.0 SDK release.
4. Re: itunes connect submission failurespinlight Feb 4, 2014 10:03 AM (in response to a-r-d)
I'm running into this same error today. I posted a file last Friday without issue. I submitted several updates last week actually. Something changed very recently I think.
EDIT: Just grabbed the 4.0 beta from the lab. That allowed me to submit to Apple without error. (AIR 4.0.0.1619) for iOS.
EDIT 2: I cannot export from Flash to a 3GS (probably not iPad 1 either, but don't have one handy for testing). I'm fine with dropping support for these older devices, but can anyone confirm this is the case?
5. Re: itunes connect submission failureDan Zen Feb 6, 2014 8:45 PM (in response to spinlight)
Would like to know how we should be dealing with older devices - I heard about there being two versions. etc. supported by apple. What are people doing? - so that relates to spinlight's Edit 2.
Also... what happened to the uploading process? Why can't we upload to iTunes with AIR 3.9? Are we cutting it that close? Any comments from Adobe? - that relates to spinlight's first Edit.
Thanks,
Dan
6. Re: itunes connect submission failureDan Zen Feb 7, 2014 9:46 AM (in response to Dan Zen)
Well... tried uploading with AIR 4.0 SDK from - got it yesterday. I overlayed the new AIR files in Flash Builder 4.7 on PC (as I have a dozen times before), changed the namespace and recompiled adjusting my version numbers to match in the app.xml and iTunes. I published with a distribution certificate and matching distribution provisioning profile. The app uses the in-app purchasing ANE from the Adobe Gaming SDK. It was all working when I tested with a test account, etc.
So I copy the file over to the Mac. in the past I have had to unzip the ipa and get the app file out of the payload folder. I then zip this app file up and save it as the myappname.ipa. Then using the Application Loader that I got from the bottom of the Manage Apps page on iTunes I choose my app version to upload - I see that fine, then I send the myappname.ipa that I made. And I STILL GET THE ERROR THAT IT IS AN INVALID PACKAGE.
Anybody have an idea about what I might be missing. This is an ActionScript Mobile project in Flash Builder 4.7 on PC running AIR 4.0 then trying to upload with a Mac.
Any thoughts from the Adobe side???
Thanks!
7. Re: itunes connect submission failureColin Holgate Feb 7, 2014 10:08 AM (in response to Dan Zen)
You should go to the page that a.r.d. said to go to:
In the bottom part of that page is the Flex compatible version of the SDK. Don't go to the release page where you went yesterday, that version is several weeks old.
8. Re: itunes connect submission failureDan Zen Feb 7, 2014 12:36 PM (in response to Colin Holgate)
Oh - thanks - I see the difference now.
9. Re: itunes connect submission failureDan Zen Feb 7, 2014 9:34 PM (in response to Dan Zen)
Submitted successfully . Also - when rezipping on the Mac I forgot that you zip the payload folder not just the app file inside the payload folder as I had said in my earlier description.
10. Re: itunes connect submission failureSunilRana_Gateway Feb 26, 2014 2:27 AM (in response to Colin Holgate)
Submitted and application is approved by Apple with new Adobe air sdk 13.
with Regards,
Sunil | https://forums.adobe.com/thread/1397323 | CC-MAIN-2017-51 | refinedweb | 954 | 74.79 |
.
Numeric
PreviousNext display mode will give you.
PreviousNext
The last choice is the PreviousNextNumeric. Here, you will be able to navigate to previous page, next page and also there will be.
4
Here is the full XAML code for your reference:
<UserControl
xmlns=" height="20"
xmlns:sdk=""
xmlns:x=""
xmlns:viewmodels="clr-namespace:DataGridDemo1.ViewModels"
x: queries.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
AaronBastian wrote:This is great! As I am new to Silverlight, this answered my question on how to get my datagrid to only show 10 at a time!
AaronBastian wrote:I am loading my datagrid from a database query. The itemssource is loaded in the vb codebehind. How do I get the DataPager to bind to that code?
thatraja wrote:You may add my name for Chennai city in data which you are loading in grid.
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/141883/Paginating-Records-in-Silverlight-DataGrid-using-P?fid=1603365&df=10000&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=Relaxed | CC-MAIN-2016-40 | refinedweb | 182 | 66.03 |
In the last article I wrote about how you can create your own small image using docker scratch image. The scratch image has the ability to execute basic binary files. I assume that you will have some code base that is then compiled to inserted into the scratch image. In order to do this, you can have a build machine that you maintain to create Linux executables, or you can use another docker image to create the binary and copy it to the scratch image. This is known as a multi-stage build that produces the smallest possible end state container. This whole process can be done in a single Dockerfile. Let’s start with a basic c program that prints out Hello from Docker when executed:
#include <stdio.h> int main() { printf("Hello from Docker\n"); return 0; }
This should be saved in the current director as hello.c. We then need to build a machine with gcc to compile the c program into a binary. We will call this machine builder. The Dockerfile for builder looks like this:
FROM ubuntu:latest AS builder # Install gcc RUN apt-get update -qy RUN apt-get upgrade -qy RUN apt-get install build-essential -qy COPY hello.c . # Build binary saved as a.out RUN gcc -o hello -static hello.c
This does the following:
- Use ubuntu:latest as the image
- RUN the commands to update and upgrade base operating system (-qy is to run quiet (-q) and answer yes (-y) to all questions)
- RUN the command to install build-essential which includes the gcc binary and libraries
- COPY the file hello.c from the local file system into current directory
- RUN gcc to compile hello.c into hello – This step is critical because we are using the compiler to include all required libraries with the static line without this the executable will fail while looking for a dynamically liked library
Let’s manually build this container to test the static linking using a small docker file:
FROM ubuntu:latest
Now let’s turn this into a container and test our commands to ensure we have the correct commands and order to create our builder container:
docker build -t builder .
This will build the container image called builder from ubuntu:latest from docker hub. Now lets run an instance of this container and give it a try.
docker run -it builder /bin/bash
You are now connected to the container and you can test all your commands to ensure they work
apt-get update -qy apt-get upgrade -qy apt-get install build-essential -qy #We cannot run the next command we need to copy the code using vi so we will install vi only in our test case apt-get install vim -qy #Copy the contents of hello.c into file named hello.c #COPY hello.c . # Build binary saved as a.out gcc -o hello hello.c
Let’s check if hello has dependancies on dynamic linked libraries:
root@917d6b3c9ea9:/# ldd hello linux-vdso.so.1 (0x00007ffc35dbe000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa76c376000) /lib64/ld-linux-x86-64.so.2 (0x00007fa76c969000)
As you can see it has dynamically linked libraries those will not work in scratch because they will not exist. Lets static link them using this command:
gcc -o hello -static hello.c root@917d6b3c9ea9:/# ldd hello not a dynamic executable
As you can see making sure we are not dynamically linking executables is critical. Now we know we have a working builder we can just take the executable and copy it to the scratch container for a very small container job. As you can see this process could be used to make very fast acting functions as a service on demand.
FROM scratch # Copy our static executable. COPY --from=builder hello / # Run the hello binary. ENTRYPOINT ["/hello"]
This takes the hello binary from the builder and puts it into our final image. Put them together in a single Dockerfile like this:
FROM ubuntu:latest AS builder # Install gcc RUN apt-get update -qy RUN apt-get upgrade -qy RUN apt-get install build-essential -qy #COPY the hello.c file from OS COPY hello.c . # Build the binary. RUN gcc -o hello -static hello.c FROM scratch # Copy our static executable. COPY --from=builder hello / # Run the hello binary. ENTRYPOINT ["/hello"]
Build the container which we will call csample using this command:
docker build -t csample . Sending build context to Docker daemon 3.584kB Step 1/9 : FROM ubuntu:latest AS builder ---> 7698f282e524 Step 2/9 : RUN apt-get update -qy ---> Using cache ---> 04915027a821 Step 3/9 : RUN apt-get upgrade -qy ---> Using cache ---> 998ea043503f Step 4/9 : RUN apt-get install build-essential -qy ---> Using cache ---> e8e3631eaba6 Step 5/9 : COPY hello.c . ---> Using cache ---> 406ad6aafe8f Step 6/9 : RUN gcc -o hello -static hello.c ---> Using cache ---> 3ebd38451f71 Step 7/9 : FROM scratch ---> Step 8/9 : COPY --from=builder hello / ---> Using cache ---> 8e1bcbc0d012 Step 9/9 : ENTRYPOINT ["/hello"] ---> Using cache ---> 5beac5519b31 Successfully built 5beac5519b31 Successfully tagged csample:latest
Try starting csample with docker:
docker run csample Hello from Docker
As you can see we have now used a container to build the executable for our container. | http://blog.jgriffiths.org/learning-docker-create-your-own-micro-image/ | CC-MAIN-2020-40 | refinedweb | 876 | 62.07 |
SqlMake 0.2.1
Command line tool to build a sql schema.
Installing SqlMake
Installing the sqlmake CLI tool currently requires you have some familiarities with the way python packages are distributed. For now sqlmake has been tested only with python 2.7 interpreter.
To install SqlMake and its dependencies using pip, run
pip install SqlMake
Running the sqlmake CLI
Getting help
sqlmake -h
Compiling a schema from a set of resources
sqlmake --out=myschema.sql path/to/project/folder
Preparing your files for SqlMake
A SqlMake project consists of files called resources stored in a folder. Every file with .sql extension, in project folder or subfolders are project resources.
SqlMake allows to add special instructions to a resource file, in a non obtrusive way :
- SQL comment line starts with –.
- SqlMake instructions starts with –#
Defining dependencies (DEPS)
To add dependencies to a resource file you add DEPS instructions at the top of the file. Each DEPS instruction provides a comma separated list of relative paths to resource files or folder in your project. If you are using folder dependency, SqlMake will automatically assumes that all the resources it contains are dependencies of the file that defines it.
Dependencies example
Assume the following project structure:
project/ ├── appschema │ ├── init.sql │ └── mytable.sql ├── public │ ├── add_extensions.sql │ └── functions.sql ├── README.txt └── roles.sql
So as get the appschema/mytable.sql resource to depends of the appschema/init.sql resource and of all the resources in the public folder just add the followings DEPS instruction at the top of the mytable.sql file
--# DEPS: init, ../public CREATE TABLE t_mytable( ...
Renaming schema elements (VARS)
SqlMake resources maybe used during development as normal SQL file without the help of the sqlmake CLI. The VARS instruction allows to define which name maybe redefined when compiling the schema. The sqlmake CLI allows to redefine some of the schema name by means of the –def option.
Renaming example
Let’s assume that in file mytable.sql, we want to allows renaming at compilation time the table t_mytable into something else and also to change table owner amvtek into another role defined by variable schema_owner. A VARS instruction will be added at the top of the file to make this possible
--# DEPS: init, ../public --# VARS: t_mytable, amvtek=owner_role create table t_mytable( id integer primary key, name varchar(80) not null, ... ); -- set table owner to role amvtek alter table t_mytable owner to amvtek;
To rename t_mytable into t_othertable and amvtek role into titus, one may use the sqlmake command like so
sqlmake --def t_mytable=t_othertable --def owner_role=titus path/to/mytable.sql
Unleashing the power of Jinja templates
SqlMake is built on top of the well known Jinja template engine . You may use any of the statements exported by Jinja such as if/endif, for/endfor embedding those in SQL comment line that starts with –#.
Jinja instruction example
Assumes that when in development we want our example table to be created in schema tests, and that tests shall be recreated each time we are loading the mytable.sql file in the development database. When compiling the full schema using sqlmake the commands necessary for this to happen shall not be executed. A simple Jinja conditional block, will make this a snapp
--# DEPS: init, ../public --# VARS: t_mytable, amvtek=owner_role --# if __development__ : -- sqlmake will not render this block -- as long as __development__ stays undefined... drop schema if exists tests; create schema tests; set search_path to tests, public; --# endif create table t_mytable( id integer primary key, name varchar(80) not null, ... ); -- set table owner to role amvtek alter table t_mytable owner to amvtek;
- Author: AmvTek developers
- License: MIT
- Package Index Owner: amvtek
- DOAP record: SqlMake-0.2.1.xml | https://pypi.python.org/pypi/SqlMake/0.2.1 | CC-MAIN-2016-40 | refinedweb | 614 | 54.73 |
Details
- Type:
Bug
- Status: Resolved
- Priority:
Major
- Resolution: Won't Fix
- Affects Version/s: 3.3
- Fix Version/s: None
- Component/s: core/index
- Labels:None
- Lucene Fields:New
Description
after upgrading to lucene 3.1+, I see this in my log:
java.lang.AssertionError: TokenStream implementation classes or at least their incrementToken() implementation must be final
at org.apache.lucene.analysis.TokenStream.assertFinal(TokenStream.java:117)
at org.apache.lucene.analysis.TokenStream.<init>(TokenStream.java:92)
Turns out I derived TokenStream and my class was not declared final.
This silently breaks backward compatibility via reflection, scary...
I think doing this sort of check is fine, but throwing an java.lang.AssertionError in this case is too stringent.
This is a style check against lucene clients, a error log would be fine, but throwing an Error is too much.
See constructor implementation for:
Activity
This silently breaks backward compatibility via reflection, scary...
This is not true, there is nothing silent about it. It was listed in the backwards compatibility breaks section of 3.1:
Analyzer and TokenStream base classes now have an assertion in their ctor, that check subclasses to be final or at least have final implementations of incrementToken(), tokenStream(), and reusableTokenStream().
The problem behind the checks is that they are done by the class TokenStream, so even if you disable assertions for your own class this will still fail as soon as you enable assertions for the lucene package.
If you want to enable assertions for Lucene but disable assertions in your own code, the ctor should check the actual assertion status of the subclass using Class.desiredAssertionStatus() and not throw AssertionFailedError, as this would affect a class for which assertions are not enabled. Patch is easy.
The same applies for Analyzer.
In another issue (sorry, I don't find it, the JIRA search is dumb - oh f*ck it's Lucene! - maybe it was on mailing list), Shai Erea suggested to do the assertion check only for org.apache.lucene and org.apache.solr packages. There is already a patch somewhere - if I could find the issue.
Maybe it is just my lack of understanding why the assert is needed. My ignorance tells me it is for code styling. I am sure there is something deeper. Can someone enlighten me?
Most of this is explained in the related issues to
LUCENE-2389.
For analyzers enforcing finalness prevents errors in handling the two different tokenStream() methods (for reuse and not reuse). Also the problems in Lucene 2.9 to make the new TokenStream API backwards compatible (the famous "sophisticated backwards layer") lead to enforcing finalness for this API using the decorator pattern. With this assertion we found lots of bugs even in Lucene code that were caused by overriding methods to change behaviour that should never be done in that way for decorator APIs.
And this is NOT a HARD error. No production system is broken by that, as assertions are generally only enabled for testing. So the assertion failure simply tells your test system that you have to fix your code. In production without assertions, your code will still work. So where is the problem?
If you enable assertions on a production system you will see significant performance problems!!!
Here a patch that the assertion status of the actual class is used. If you disable assertions for your own code, but leave assertions in Lucene enabled, the failure will not trigger. This is the more correct approach. The reflection check should use the subclass' assertion status to enable/disable the check.
Hi, any comments? I would like to commit the "change" (it's not a fix as nothing is borken!) to trunk and 3.x,
Committed change to trunk revision: 1172227; 3.x revision: 1172228
This is not a problem, as this change was announced in the backwards compatibility section of Lucene version 3.1
Hmmm, yeah. Perhaps this assert should have only been for token streams in our namespace? | https://issues.apache.org/jira/browse/LUCENE-3420?focusedCommentId=13099227&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2014-15 | refinedweb | 663 | 58.69 |
Django-PJAX: The Django helper for jQuery-PJAX.
Project description
This project keeps the original structure, but add new features to it, and aims to keep django-pjax updated. Some goals are to keep this project working with Python 2.7+ and 3.3+ and also Django 1.5+.
Feel free to submit a PR and contribute to this project.
Compatibility
-
The pjax decorator:
pjax(pjax_template=None, additional_templates=None, follow_redirects=False)
pjax_template (str): default template.
additional_templates (dict): additional templates for multiple containers.
follow_redirects (bool): if True, all django redirects will force a page reload, instead of placing the content in the pjax context.
Decorate these views with the pjax decorator:
from djpjax import pjax @pjax() def my_view(request): return TemplateResponse(request, "template.html", {'my': 'context'})
After doing this, if the request is made via jQuery-PJAX, the @pjax() decorator will automatically swap out template.html for template-pjax.html.
More formally: if the request is a PJAX request, the template used in your TemplateResponse will be replaced with one with -pjax before the file extension. So template.html becomes template-pjax.html, my.template.xml becomes my.template-pjax.xml, etc. If there’s no file extension, the template name will just be suffixed with -pjax.
You can also manually pick a PJAX template by passing it as an argument to the decorator:
from djpjax import pjax @pjax("pjax.html") def my_view(request): return TemplateResponse(request, "template.html", {'my': 'context'})
You can also pick a PJAX template for a PJAX container and use multiple decorators to define the template for multiple containers:
from djpjax import pjax @pjax(pjax_template="pjax.html", additional_templates={"#pjax-inner-content": "pjax_inner.html") def my_view(request): return TemplateResponse(request, "template.html", {'my': 'context'})
Class
Install dependencies:
pip install -r requirements.txt
Run the tests:
python tests.py
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/django-pjax/ | CC-MAIN-2022-21 | refinedweb | 323 | 51.65 |
Fill Area Attributes class.
This class is used (in general by secondary inheritance) by many other classes (graphics, histograms). It holds all the fill area attributes.
Fill Area attributes are:
The fill area color is a color index (integer) pointing in the ROOT color table. The fill area color of any class inheriting from
TAttFill can be changed using the method
SetFillColor and retrieved using the method
GetFillColor. The following table shows the first 50 default colors.
SetFillColorAlpha(), allows to set a transparent color. In the following example the fill wheel contains the recommended 216 colors to be used in web applications. The colors in the Color Wheel are created by TColor::CreateColorWheel.
Using this color set for your text, background or graphics will give your application a consistent appearance across different platforms and browsers.
Colors are grouped by hue, the aspect most important in human perception Touching color chips have the same hue, but with different brightness and vividness.
Colors of slightly different hues clash. If you intend to display colors of the same hue together, you should pick them from the same group.
Each color chip is identified by a mnemonic (eg kYellow) and a number. The keywords, kRed, kBlue, kYellow, kPink, etc are defined in the header file Rtypes.h that is included in all ROOT other header files. We strongly recommend to use these keywords in your code instead of hardcoded color numbers, eg:
If the current style fill area color is set to 0, then ROOT will force a black&white output for all objects with a fill area defined and independently of the object fill style.
The fill area style defines the pattern used to fill a polygon. The fill area style of any class inheriting from
TAttFill can be changed using the method
SetFillStyle and retrieved using the method
GetFillStyle.
4000 to 4100 the window is 100% transparent to 100% opaque.
The pad transparency is visible in binary outputs files like gif, jpg, png etc .. but not in vector graphics output files like PS, PDF and SVG. This convention (fill style > 4000) is kept for backward compatibility. It is better to use the color transparency instead.
pattern_number can have any value from 1 to 25 (see table), or any value from 100 to 999. For the latest the numbering convention is the following:
The following table shows the list of pattern styles. The first table displays the 25 fixed patterns. They cannot be customized unlike the hatches displayed in the second table which be customized using:
gStyle->SetHatchesSpacing()to define the spacing between hatches.
gStyle->SetHatchesLineWidth()to define the hatches line width.
Definition at line 19 of file TAttFill.h.
#include <TAttFill.h>
AttFill default constructor.
Default fill attributes are taking from the current style
Definition at line 174 of file TAttFill.cxx.
AttFill normal constructor.
Definition at line 186 of file TAttFill.cxx.
AttFill destructor.
Definition at line 195 of file TAttFill.cxx.
Copy this fill attributes to a new TAttFill.
Definition at line 202 of file TAttFill.cxx.
Return the fill area color.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 30 of file TAttFill.h.
Return the fill area style.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 31 of file TAttFill.h.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 44 of file TAttFill.h.
Change current fill area attributes if necessary.
Definition at line 211 of file TAttFill.cxx.
Reset this fill attributes to default values.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 225 of file TAttFill.cxx.
Save fill attributes as C++ statement(s) on output stream out.
Definition at line 234 of file TAttFill.cxx.
Invoke the DialogCanvas Fill attributes.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 251 of file TAttFill.cxx.
Set the fill area color.
Reimplemented in TSpider, TTeXDump, TSVG, TPostScript, TPDF, TGX11, TGWin32VirtualXProxy, TGWin32, TGQuartz, and TVirtualX.
Definition at line 37 of file TAttFill.h.
Set a transparent fill color.
falpha defines the percentage of the color opacity from 0. (fully transparent) to 1. (fully opaque).
Definition at line 260 of file TAttFill.cxx.
Set the fill area style.
Reimplemented in TGX11, TGWin32VirtualXProxy, TGWin32, TGQuartz, TVirtualX, TSpider, and TPad.
Definition at line 39 of file TAttFill.h.
Fill area color.
Definition at line 22 of file TAttFill.h.
Fill area style.
Definition at line 23 of file TAttFill.h. | https://root.cern.ch/doc/master/classTAttFill.html | CC-MAIN-2021-04 | refinedweb | 718 | 61.33 |
mod_perl is an Apache
server extension that
embeds Perl within Apache, providing a
Perl interface to the Apache API. This allows us to develop
full-blown Apache modules in Perl to handle particular stages of a
client request. It was written by Doug MacEachern, and since it was
introduced, its popularity has grown quickly.
The most popular
Apache/Perl
module is Apache::Registry, which emulates the CGI environment,
allowing us to write CGI applications that run under
mod_perl. Since Perl is embedded within the
server, we avoid the overhead of starting up an external interpreter.
In addition, we can load and compile all the external Perl modules we
want to use at server startup, and not during the execution of our
application. Apache::Registry also caches compiled versions of our
CGI applications, thereby providing a further boost. Users have
reported performance gains of up to 2000 percent in their CGI
applications using a combination of mod_perl and
Apache::Registry.
Apache::Registry is a
response handler, which means that it
is responsible for generating the response that will be sent back to
the client. It forms a layer over our CGI applications; it executes
our applications and sends the resulting output back to the client.
If you don't want to use Apache::Registry, you can implement
your own response handler to take care of the request. However, these
handlers are quite different from standard CGI scripts, so we
won't discuss how to create handlers with
mod_perl. To learn about handlers along with
anything else you might want to know about
mod_perl, refer to Writing Apache
Modules with Perl and C by Lincoln Stein and Doug
MacEachern (O'Reilly & Associates, Inc.).
Before we go any further, let's
install
mod_perl. You can obtain it from CPAN at.
The Apache namespace is used by modules that are specific to
mod_perl. The installation is relatively simple
and should proceed well:
$ cd mod_perl-1.22
$ perl Makefile.PL \
> APACHE_PREFIX=/usr/local/apache \
> APACHE_SRC=../apache-1.3.12/src \
> DO_HTTPD=1 \
> USE_APACI=1 \
> EVERYTHING=1
$ make
$ make test
$ su
# make install
Refer to the installation directions that came with Apache and
mod_perl if you want to perform a custom
installation. If you're not interested in possibly developing
and implementing the various Apache/Perl
handlers, then you do not need
the EVERYTHING=1 directive, in which case, you
can implement only a PerlHandler.
Once that's complete, we need to
configure Apache. Here's a
simple setup:
PerlRequire /usr/local/apache/conf/startup.pl
PerlTaintCheck On
PerlWarn On
Alias /perl/ /usr/local/apache/perl/
<Location /perl>
SetHandler perl-script
PerlSendHeader On
PerlHandler Apache::Registry
Options ExecCGI
</Location>
As you can see, this is very similar to the manner in which we
configured FastCGI. We use the PerlRequire
directive to execute a startup script. Generally, this is where you
would pre-load all the modules that you intend to use (see Example 17-3).
However, if you are interested in loading only a small set of modules
(a limit of ten), you can use the PerlModule
directive instead:
PerlModule CGI DB_File MLDBM Storable
For
Apache::Registry
to honor taint mode and warnings, we must add directive the
PerlTaintMode and PerlWarn
directives. Otherwise, they won't be enabled. We do this
globally. Then we configure the directory we are setting up to run
our scripts.
All requests for resources in the /perl
directory go through the perl-script
(mod_perl) handler, which then passes the
request off to the Apache::Registry module. We also need to enable
the ExecCGI option. Otherwise,
Apache::Registry
will not execute our CGI applications.
Now, here's a sample configuration file in Example 17-3.
#!/usr/bin/perl -wT
use Apache::Registry;
use CGI;
## any other modules that you may need for your
## other mod_perl applications running ...
print "Finished loading modules. Apache is ready to go!\n";
1;
It is really a very simple program, which does nothing but load the
modules. We also want Apache::Registry to be pre-loaded since
it'll be handling all of our requests. A thing to note here is
that each of Apache's child processes will have access to these
modules.
If we do not load a module at startup, but use it in our
applications, then that module will have to be loaded once for each
child process. The same applies for our CGI applications running
under Apache::Registry. Each child process compiles and caches the
CGI application once, so the first request that is handled by that
child will be relatively slow, but all subsequent requests will be much faster.
In general, Apache::Registry, does provide a good emulation of a standard CGI
environment. However, there are some differences you need to keep in
mind:
The same precautions that apply to FastCGI apply to
mod_perl, namely, always use
strict mode and it helps to enable
warnings. You should also always initialize your
variables and not assume they are empty
when your script starts; the warning flag will tell you when you are
using undefined values. Your environment is not cleaned up with you
when your script ends, so variables that do not go out of scope and
global variables remain defined the next time your script is called.
Due to the fact that your code is only compiled once and then
cached, lexical variables in the body of your
scripts that you access within your subroutines create closures. For
example, it is possible to do this in a standard CGI script:
my $q = new CGI;
check_input( );
.
.
sub check_input {
unless ( $q->param( "email" ) ) {
error( $q, "You didn't supply an email address." );
}
.
.
Note that we do not pass our CGI object to
check_input
. However, the
variable is still visible to us from within that subroutine. This
works fine in CGI. It will create very subtle, confusing errors in
mod_perl. The problem is that the first time the
script is run on a particular Apache child process, the value of the
CGI object becomes trapped in the cached copy of
check_input. All future calls to that same
Apache child process will reuse the original value of the CGI object
within check_input. The solution is to pass
$q to check_input as a
parameter or else change $q from a lexical to a
global local variable.
If you are
not familiar with closures (they are not commonly used in Perl),
refer to the perlsub manpage or
Programming Perl.
The
constant
module creates
constants by defining them internally as subroutines. Since
Apache::Registry creates a persistent environment, using constants in
this manner can produce the following warnings in the error log when
these scripts are recompiled:
Constant subroutine FILENAME redefined at ...
It will not affect the output of your scripts, so you can just ignore
these warnings. Another alternative is to simply make them global
variables instead; the closure issue is not an problem for variables
whose values never change. This warning should no longer appear for
unmodified code in Perl 5.004_05 and higher.
Regular expressions that are
compiled with the o flag will remain compiled
across all requests for that script, not just for one
request.
File age
functions, such as -M, calculate their values
relative to the time the application began, but with
mod_perl, that is typically the time the server
begins. You can get this value from $^T . Thus
adding (time - $^T) to the age of a file will
yield the true age.
BEGIN
blocks are executed once when your script
is compiled, not at the beginning of each request. However,
END blocks are executed at the end of each
request, so you can use these as you normally would.
__END__
and __DATA__ cannot be used within CGI scripts with
Apache::Registry. They will cause your scripts to
fail.
Typically, your scripts should not call exit in
mod_perl, or it will cause Apache to exit
instead (remember, the Perl interpreter is embedded within the web
server). However, Apache::Registry overrides the
standard exit
command so it is safe for
these scripts.
If it's too much of a hassle to convert your application to run
effectively under Apache::Registry, then you should investigate the
Apache::PerlRun
module. This module uses the Perl interpreter embedded within Apache,
but doesn't
cache compiled versions of your code. As a
result, it can run sloppy CGI scripts, but without the full
performance improvement of Apache::Registry. It will, nonetheless, be
faster than a typical CGI application.
Increasing the speed of CGI scripts is only part of what
mod_perl can do. It also allows you do write
code in Perl that interacts with the Apache response cycle, so you
can do things like handle authentication and authorization yourself.
A full discussion of mod_perl is certainly
beyond the scope of this book. If you want to learn more about
mod_perl, then you should definitely start with
Stas Bekman's
mod_perl
guide, available at.
Then look at Writing Apache Modules with Perl and
C, which provides a very thorough, although technical,
overview
of
mod_perl. | https://docstore.mik.ua/orelly/linux/cgi/ch17_03.htm | CC-MAIN-2019-26 | refinedweb | 1,505 | 62.38 |
, this little program will print out the days since Sunday that the 1st of June was:
#include <time.h>
#include <stdio.h>
int main()
{
struct tm t;
/* What day is June 1st, 2002? */
memset(&t, 0, sizeof(struct tm));
t.tm_year = 2002 - 1900;
t.tm_mon = 6 - 1;
t.tm_mday = 1;
if ( mktime(&t) == -1 )
return -1;
printf("Days since Sunday: %i\n", t.tm_wday);
return 0;
}
The output is '6', meaning 6 days since Sunday, which is Saturday. The structure tm is part of C. The mktime attempts to fill in the struct and return the correct time, as long as enough information is given. The tm_year member is the number of years since 1900. The tm_mon member is the number of months since January (this is why 1 is subtracted). The tm_mday is the day of the month. The tm_wday member if the day of the week.
Hope this helps,
Marc
By the way, you can clean up your switch statement a little by allowing certain entries to 'fall through'. This means that you leave out the 'break'. For example:
switch(month) {
case 1:
case 3:
case 5:
case 7:
case 8:
case 10;
case 12:
printf("31 days\n");
days_of_month=31;
break;
case 2:
if(leap==1) {
printf("29 days\n");
days_of_month=29;
} else {
printf("28 days\n");
days_of_month=28;
}
break;
case 4:
case 6:
case 9:
case 11:
printf("30 days\n");
days_of_month=30;
break;
default: printf("Program error");
break;
}
One other thing - try to get rid to the goto statements in your program. They often make programs more difficult to read and maintain. One thing you could do is set days_of_month to -1 just before the switch statement, and put the switch statement in a while statement that continues while days_of_month is -1.
days_of_month = -1;
while ( days_of_month == -1 ) {
switch(month) {
/* the value gets set for valid data */
/* but is kept at -1 for invalid data */
} /* end of switch*/
} /* end of while - only break out is days_of_month != -1 *
day_relative_to_sunday
= ( day + 1 + (month*2) + ((int)(month+1)*3/5)
+ year + ((int)(year/4)) - ((int)(year/1000))
+ (year / 400) ) modulo 7
what's the value for day in the formula?
can u pls explain further? Thanx
In this FREE six-day email course, you'll learn from Janis Griffin, Database Performance Evangelist. She'll teach 12 steps that you can use to optimize your queries as much as possible and see measurable results in your work. Get started today! | https://www.experts-exchange.com/questions/20316334/find-the-day-realtive-to-sunday.html | CC-MAIN-2018-22 | refinedweb | 410 | 81.83 |
Introduction: Intro to Java Programming
So if you have wandered onto this instructable, you are probably looking to learn how to program in java, or you just want to learn how to better understand how your computer/smartphone/tablet works. Java is a programming language that is compatible with almost every computing device in the world. Java was first produced at the very beginning of the widespread use of the internet, as it was designed to be used on the internet. This means that most dynamic websites that your use will usually contain both HTML and Java. So that means that Java code can be run as an applet, and integrated with a website. By learning Java you will be able to make web apps and draw graphics while learning how computers and the internet work.
Step 1: Downloads
Before you even think about programming with Java you will need to download a few pieces of software:
Java JDK - This is what makes your computer able to create compile and run your Java code
Eclipse IDE - What I've found to be the best development environment for Java
After you have installed these pieces of software, you will be able to start coding in Java
Step 2: Methods
In Java code is written in methods, so if you need to repeat code you can just call the method again. An example of this follows:
public void runner(){
integer();
}
public void integer(){
int x = 3
int b = 3 * x
int y = x/b
}
Methods are useful because they allow you to make your code shorter and more efficient, methods are always started and finished with curly braces { }. When you add a new method you should always make sure that you have indented the ending curly brace at the same indent as the beginning of the method
Step 3: Hello, World!
So now we get to start writing some Java code. The first thing that we are going to is learn how to print something in the console, I will explain everything in my comments, while in Java you can comment by putting // for a one line comment, or if you want to do a multi-line comment you start it with /* and finish it with */ . When I type System.out.println(" "); it will print what ever is in the parentheses and skip to the next line.
public class runner {//most beginners start by calling the first class runner
public static void main(String[] args) {// this is put in automatically by eclipse
System.out.println("Hello, World!");//this line will print out whatever is in between the quotes
}// you must always put code in between curly braces
}// note the different indents for each curly brace and each line of code
The output will look like this:
Hello, World!
So the first method is by default public static void main(String[] args){ this is where you will list the methods you create later in your code. After the first closing brace is where you can put new methods.
Step 4: Ints, Doubles, and Strings
So before we go any further, I must teach you about ints, doubles, and strings. A string is a group of alphanumeric characters that can be printed on the console or placed in a graphics window. An int is a variable that can have a value from 0 to 255 but cannot have a decimal, this can be used to change the colour of something in an animation or graphics, or can be used to compare two things. A double is a number that has a decimal point, and has a maximum value of 1.7976931348623157 * 10^308, doubles can for comparing things like dollar amounts that more than likely will have at least one decimal value.
Step 5: I/O
It's easy to write stuff to the screen in Java. As you learned in the previous step, all you have to do is:
System.out.println("This is the stuff I want to print to the screen!");
A Scanner is a useful piece of code that allows a computer to accept input from a user, a scanner can be used for strings, ints, and doubles. A scanner that is designed to look for a string, an int and a double will look like this:
To use the above scanner in a useful fashion it makes sense to print a question that a the user can answer with the input that the scanner is expecting, so it would look something like this:
System.out.print("What is your name? ");
So after you write the question you must tell the scanner that it must be ready to accept input from the user:
name = reader.nextLine();
After that you can just repeat the code for the other variables:
System.out.print("How old are you? ");
age = reader.nextInt();
System.out.print("How much do you make per hour? ");
hourlyWage = reader.nextDouble();
Next you can print the answers to the questions in paragraph form. To do this you must use println, like in the previous steps, except
when you print a variable you must use the closing quotes and place a + before each variable, and if you want to add text after the variable you must place another +. This is what it should look like:!");
So this is what the full code will look like when you are finished:
import java.util.*;
public class runner {
public static void main(String[] args) {
System.out.print("What is your name? ");//note that there is no need to type println because the scanner does it for you
name = reader.nextLine();
System.out.print("How old are you? ");
age = reader.nextInt();
System.out.print("How much do you make per hour? ");
hourlyWage = reader.nextDouble();!");
}
}
Step 6: Loops
So now we are going to learn some loops, the main ones are while, if, for, an do loops. A while loop will run an infinite loop while a certain condition is true. An if statement will run code once if a certain condition is true, but will only run once. A for loop runs exactly like a while loop, but is shorter to write. A do loop is a loop that runs once, then it checks if it meets a necessary condition, and if it does it will run again, but if it doesn't it will run the next line of code that you have. Here are some examples:
While loop:
int i = 20;
while (i < 25) { // we want to print 5 numbers
System.out.print(i + " ");
i++;
}
System.out.println();
If statement:
if( a <= b){
System.out.println("a is smaller or equal to b");
}else{
System.out.println("a is bigger than b");
}
For loop:
for (int i = 20; i < 25; i++) {
System.out.print(i + " ");
}
System.out.println();
Do loop:
do
{
System.out.println("i is : " + i);
i++;
}while(i < 5);
Step 7: Graphics
So now that you have a basic understanding of loops we can start having fun with graphics. To start with you need to use a library(those funny looking things at the top of your code) to create a window, then you need to start to draw different shapes with , different commands, note that when you create a graphics window the origin of the coordinate plane will start at the top left corner of the screen.
When you draw in Java, you must use the command g.draw or g.fill followed by either Line, Rect, Oval or Poly to draw a line rectangle oval or a custom polygon respectively, which would look something like this:
g.drawLine(20, 30, 40, 50);//draws a line, note that where you draw the is based on the coordinate that you input
g.fillRect(20, 30, 40, 50);// draw a rectangle that is filled
g.drawRect(20, 30, 40, 50);// when you type draw instead of fill it will just draw an outline of the shape
g.fillOval(20, 30, 40, 50);//draws an oval
If you want to make your own polygon then you must write this code:
Polygon poly = new Polygon();
poly.addPoint(50, 50);//each of these lines are a new point on your polygon
poly.addPoint(75, 75);
poly.addPoint(75, 100);
poly.addPoint(25, 100);
poly.addPoint(25, 75);
g.fillPolygon(poly);
Now if you want to add text you can simply draw it like everything else:
g.drawString("Hi there!", 40, 50);
Now if you type g.fill it will default to colour it black so to change the colour you must type the following:
g.setColor(Color.red);
or if you want to make your own colour you can change the int value of red green and blue like this:
int red=100, green=0, blue=255;
Color color = new Color(red, green, blue);
you can find a colour chart here
This code will draw a line, some shapes, a string, and colour the background, try to guess what it will look like before you run it on your computer:
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.text.*;
import java.util.*;
public class MyFrame extends JFrame {
/*
* Constructor
*
* sets up the window when it is created
*/
public MyFrame() {
super("Graphics Window");
Container container = getContentPane();
// if you want a bigger window, change the numbers
setSize(300, 200);
setVisible(true);
}
/*
* paint
*
* performs the drawing of the window
*/
public void paint(Graphics g) {
super.paint(g);
g.setColor(Color.red);
g.fillRect(50, 50, 200, 100);
g.setColor(Color.black);
g.drawLine(50, 50, 250, 150);
g.setColor(Color.blue);
g.fillOval(60, 90, 30, 30);
g.setColor(Color.yellow);
Polygon poly = new Polygon();
poly.addPoint(220, 70);
poly.addPoint(240, 90);
poly.addPoint(200, 90);
g.fillPolygon(poly);
g.setColor(Color.darkGray);
g.drawString("Smile!", 130, 170);
}
/**
* main
*
* creates the window
*/
public static void main(String[] args) {
MyFrame frame = new MyFrame();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}
}
Now we will w
Step 8: Closing Statements
So that's some of the basics of writing code in Java, you can apply these lessons towards using Java in more advanced programs and even different programming languages by just changing the syntax a little. If you have any questions comment below or send me a private message, if I get enough positive feed back I will make a more advanced instructable, make sure to vote for me in the teacher contest and the mad science fair contest, we could really use those prizes at my school. Thanks!
Participated in the
The Mad Science Fair
Be the First to Share
Recommendations
Discussions
8 years ago on Introduction
So if you have wandered onto this instructable, you are probably looking to learn how to program in java, or you just want to learn how to better understand how your computer/smartphone/tablet works.
No, I wondered if there was anything much to learn in here, or if it would just talk a bit about a few things before ending with make sure to vote for me in the () contest(s)...
L | https://www.instructables.com/id/Intro-to-Java-Programming/ | CC-MAIN-2020-29 | refinedweb | 1,841 | 67.79 |
Archive for April, 2015
Getting started with MVC ASP .NET – Part 2
Hello Everyone,
In my previous blog , I tried to explain basic details related to MVC Framework.
Now, I would like to share detailed view of RAZOR engine , ASPX engine and various file and folder in shown in Solution Explorer.
What is RAZOR and ASPX ?
Razor as well as ASPX are View Engine. View Engine are responsible for rendering view into HTML form to browser. MVC supports both Web Form(ASPX) as well as Razor. Now, Asp.net MVC is open source and can work with other third party view engines like Spark, Nhaml.
What are difference between in RAZOR and Web Form(ASPX)?
What are different file and folder in MVC application.
MVC has default folder structure as show in below figure.MVC Framework follows naming conventions such as model will be in model folder, controller will be in controller folder, view will be in folder and so on. This naming method reduces code and makes easy for developer to understand the architecture.
The App_Data is used for storing application data.
The Content folder is used for static files like style sheets, themes, icons and images.
The Controllers folder contains the controller classes responsible for handling user input and responses.
The Models folder contains the classes that represent the application models. Models hold and manipulate application data.
The Views folder stores the HTML files related to the view of the application.The Views folder contains one folder for each controller and shared folder. The Shared folder is used to store views which is shared among the application like _Layout.cshtml.
The Scripts folder stores the JavaScript files of the application.
For more details you canTS). Common Type System (CTS) describes a set of types that can be used in different .Net languages in common . That is , the Common Type System (CTS) ensure that objects written in different .Net languages can interact with each other.For Communicating between programs written in any .NET complaint language, the types have to be compatible on the basic level .These types can be Value Types or Reference Types . The Value Types are passed by values and stored in the stack. The Reference Types are passed by references and stored in the heap. Common Type System (CTS) provides base set of Data Types which is responsible for cross language integration. The Common Language Runtime (CLR) can load and execute the source code written in any .Net language, only if the type is described in the Common Type System (CTS) .
Code Access Security.
Garbage.
.Net Framework Class Library (FCL)
The .NET Framework class library is a collection of reusable types that tightly integrate with the common language run-time.It consists of namespaces, classes, interfaces, and data types included in the .NET Framework..
.NET framework is a huge ocean.Hopefully, this article has explained some of the terms in the .NET platform and how it works.
For more details visit this link :.
Launch of Visual Studio 2015 in Summer
After a recent release of Visual Studio Community 2013 available for free which has nearly all features of Professional edition for non-enterprise application development, Now Microsoft has announced Visual Studio 2015 edition that will be available when release the final product this summer.
If you have a MSDN subscription then after the release of Visual Studio 2015, Visual Studio Premium 2013 and Visual Studio Ultimate 2013 would be clubbed into one single offering called Visual Studio Enterprise with MSDN.
Visual Studio Professional 2013 with MSDN would be upgraded to Visual Studio Professional 2015 with MSDN and Visual Studio Community edition 2013 will be upgraded to Visual Studio Community edition 2015.
Visual Studio Community edition and Visual Studio Professional with MSDN, our new Visual Studio Enterprise with MSDN would be 3 primary Visual Studio 2015 offerings.
Looking forward for new and exiting feature in Visual Studio 2015, specially Visual Studio 2015 community edition(free)
For more details check | http://itfreesupport.com/2015/04/page/2/ | CC-MAIN-2018-39 | refinedweb | 662 | 57.77 |
I assume that in the second file you are at least doing:
import main
and then referencing 'a' as 'main.a' in the code.
I was assuming that you were getting this most fundamental concept of
Python correct.
You should really post a complete (but small) example of real code
that shows your problem.
Graham
2008/7/21 Mephisto <badmephisto at gmail.com>:
> Hi, thank you fro your reply. I should have been a little clearer. I am
> running Windows XP and the entire code is processed within one single
> request, so I don't see how the problems with different interpreters should
> affect me. Is there something else that I could try?
>
> On Sun, Jul 20, 2008 at 9:54 PM, Graham Dumpleton
> <graham.dumpleton at gmail.com> wrote:
>>
>> 2008/7/21 Mephisto <badmephisto at gmail.com>:
>> > Hi, I have been trying to fix this for a while, but I cant seem to do
>> > it:
>> >
>> > essentially this is my code:
>> >
>> > in my main .py file that handles requests:
>> > from otherfile import othermethod
>> > def handler():
>> > global a
>> > a=5
>> > othermethod()
>> >
>> > in otherfile:
>> > def othermethod()
>> > #in this method, the value of a is not defined, even though i made it
>> > global!
>> >
>> > even though it says in the documentation that global data is normally
>> > retained and handled. And all of the previous code runs within the same
>> > request... am i doing something wrong? or not doing something?
>> > thank you
>>
>> Read:
>>
>>
>>
>>
>> Apache is a multiprocess web server (except on Windows), thus
>> subsequent requests will not necessarily be handled by the same
>> process.
>>
>> It is not clear from your example whether you are talking about data
>> persisting across requests, or whether you are talking about data seen
>> by other modules imported as part of same request.
>>
>> BTW, relying on globals is also dangerous for configurations where
>> Apache is running multithreaded.
>>
>> That document summaries issues about global data and persistence at the
>> end.
>>
>> Graham
>
> | http://modpython.org/pipermail/mod_python/2008-July/025460.html | CC-MAIN-2018-09 | refinedweb | 319 | 65.73 |
#include "ltwrappr.h"
L_INT LImageViewerCell::SetActionProperties (nAction, nSubCellIndex, pActionProperties, uFlags);
Sets the properties of a specific action.
Value that represents the actions for which to set the properties. If nAction is equal to or greater than 100 it is a user-defined action. Otherwise, it should be one of these predefined actions.. Pass -1 to set the properties of a specific action on all sub-cells. Pass -2 to set the properties of a specific action on the selected sub-cell.
Pointer to a structure that contains the properties to be set. The type of structure pointed to depends on the action specified in the nAction parameter.
Flag that determines which properties to set. set the scale action properties, a variable of type DISPSCALEACTIONPROPS should be declared and the members of that structure set to the desired values. Pass a pointer to this structure for the pActionProperties parameter. To set only the general properties, set uFlags to CONTAINER_ACTION_CONTAINERLEVEL. To set the specific properties, set uFlags to CONTAINER_ACTION_CELLLEVEL.
If this function is not called before applying the action to the container/cell, the default values are used. For more information on these default values, refer to the structures associated with each action.
Required DLLs and Libraries
For an example, refer to LImageViewer::Create.
Direct Show .NET | C API | Filters
Media Foundation .NET | C API | Transforms
Media Streaming .NET | C API | https://www.leadtools.com/help/sdk/v21/imageviewer/clib/limageviewercell-setactionproperties.html | CC-MAIN-2021-10 | refinedweb | 229 | 51.14 |
Not long ago, I was involved in submitting a really complex application built on top of Dynamics 365 to Microsoft AppSource. The application contains a lot of plugins and code activities that perform some complex tasks and automation. The team faced some issues that I think are worth sharing with others to save your time if you you are working on such a submission.
Microsoft, provides us with tools such as the Solution Checker that validates your solution including your plugin and web resource code. The problem is, that’s not all. When you submit an application to the AppSource team, it goes through a rigorous manual and automatic checks using tools that are not publicly available to us, developers. If there are issues in your code, your submission will be rejected with explanation on what to fix and with the list of issues ordered based on their priorities. To pass the submission, all critical and high priority issues need to be fixed (if you can convince the AppSource team that somethings needs to be done a certain way and can’t be done another way, they will mostly make an exception).
After the first submission, the app got rejected with tons of things to modify/fix (even after running the solution checker on all the solutions). To be honest, the documents they sent were scary (1000+) pages with explanations on the issues. After looking at the issue list, it turned out that 90% of the critical/high priority issues are related to writing thread safe plugins. Luckily, the fix was very easy for those issues but it cost us around 2 weeks of time to do another submission and get it verified again. The following are the most common critical issues.
Variables May Cause Threading Issues
A plugin in Dynamics, is a simple class that implements the IPlugin interface, and thus, has a single Execute method as a minimum. Almost always, you need to create the organization service, the tracing service, the context and maybe other object. A bare bone plugin that builds, will look something like this:
public class SomePlugin : IPlugin { public void Execute(IServiceProvider serviceProvider) { throw new NotImplementedException(); } }
A useful plugin, will have extra objects created so that we can communicate with the Dynamics organization,
public class SomePlugin: IPlugin { // Obtain the tracing service ITracingService tracingService = null; IPluginExecutionContext context = null; // Obtain the execution context from the service provider. public void Execute(IServiceProvider serviceProvider) { tracingService = (ITracingService) serviceProvider.GetService(typeof(ITracingService)); context = (IPluginExecutionContext) serviceProvider.GetService(typeof(IPluginExecutionContext)); } }
Now what’s wrong with the above plugin code? In a normal .NET application, this is a normal thing to do, but in a Dynamics plugin, it is not. To understand why, we need to understand how plugins get executed on our behalf behind he scenes. When a plugin runs for the first time (because of some trigger), most of the plugin global variables get cached, this happens when the constructor of the plugin is first executed. This means, in the next run, the same tracing service and context “may” be shared with the next run. This applies on any variable you define outside your function as a global variable in your plugin class. Ultimately, this causes threading issues (multiple runs of the same plugin instance compete for the same cached variable) and you may end up with extremely difficult-to-debug errors and unexplained deadlocks. The fix for the above, is very simple, just create your variables locally in the execute function, so each run of the plugin executes its own set of local variables.)); } }
This by default means, that any helper function in your plugin should get what it needs from its parameters and not from global variables. Assume you have a function that needs the tracing service, and this function get’s called from the Execute method, pass the tracing service that was created in the execute method to that function and don’t make it a global object.)); // do work here HelperFunction(tracingService,1,2,"string"); } private void HelperFunction(ITracingService tracingService, int param1, int param2, string param3) { //use tracing service here } }
On the other hand, anything that is read only (config string, some constant number) is safe to stay as a global class member.
Plugins That Trigger on Any Change
This problem is more common. The filtering attributes of a plugin, are a way to limit when that plugin executes. Try to have as few as possible of those filtering attributes, don’t specify all of them. At that time I was involved in that submission, the solution checker wasn’t able to detect such problem but it may have improved now.
Plugins That Update the Record Or Retrieve Attributes Again
This is also a common issue, when a plugin is triggered on an update of an entity record, it is really a bad idea to issue another update request to the same record again. An example of this can be the need to update fieldX based on the value of fieldY. When the plugin triggers on fieldY change, you issue an service.Update(entity) with the new value of fieldX. This implicates the performance of the whole organization and even worse, it can cause an infinite loop if the filtering attributes are not set properly. Another, bad use case is to issue a retrieve attributes query for the same record when pre-images and post images can be used to remedy that.
To be clear, sometimes, there is no way around issuing another retrieve inside the plugin or sending a self-update request, we had some of those cases and we were able to convince the AppSource team that our way was the only way.
Slow Plugins
As a general rule of thumb, your plugin should be slim and does a very small thing and does it fast. Plugins have some upper limit on the time they can run within and your plugin should never exceed that time (or not even half of it). When your plugin does exceed the time allocated for it, it is time for redesigning it.
Conclusion
While those issues have simple fixes in general, they can cause slowness and unexplained errors and a rejection from AppSource. Even if you are not submitting anything to AppSource, make sure that you set some ground rules for the developers working on the same code base on how to write good plugins. More on plugins best practices can be found here. | https://transform365.blog/2019/08/24/tips-on-dynamics-365-code-validation-for-appsource-submissions/ | CC-MAIN-2021-10 | refinedweb | 1,074 | 57.3 |
Prev
C++ VC ATL STL PPP Experts Index
Headers
Your browser does not support iframes.
Re: Pi number In C++ Please HELPPPP!!!!!!!!!
From:
red floyd <redfloyd@gmail.com>
Newsgroups:
comp.lang.c++
Date:
Wed, 18 Feb 2009 15:34:56 -0800 (PST)
Message-ID:
<aaa97611-beb0-40dd-924e-999fc680606f@h5g2000yqh.googlegroups.com>
On Feb 18, 12:13 pm, havocjoseph <havocjos...@gmail.com> wrote:
On Feb 17, 8:26 pm, r...@zedat.fu-berlin.de (Stefan Ram) wrote:
FSMehmet <tgulmemme...@gmail.com> writes:
Hey who can help me to find the 111 digit of pi number in C++
#include <iostream>
#include <ostream>
int main(){ unsigned long i, j, k, n, p,
q, x, m = 111, len = 370, pi[ 371 ];
for( x = len; x > 0; --x )pi[ x ]= 2;
::std::cout << "3."; n = 0; p = 0;
for( j = 0; j <= m; ++j )
{ q = 0; for( i = len; i > 0; --i )
{ x = 10 * pi[ i ]+ q * i;
pi[ i ]= x %( 2 * i - 1 );
q = x /( 2 * i - 1 ); }
pi[ 1 ]= q % 10; q = q / 10; if( q == 9 )++n;
else if( q == 10 ) { putchar( '1' + p );
for( ; n; --n )::std::cout << '0'; p = 0; }
else { if( j > 1 )::std::cout << p;
for( ; n; --n )::std::cout << '9'; p = q; }}
::std::cout << p << '\n'; }
Readability fail.
That was the whole point. OP wanted us to do his homework for him,
so Stefan gave him something highly obfusc). | https://preciseinfo.org/Convert/Articles_CPP/PPP_Experts/C++-VC-ATL-STL-PPP-Experts-090219013456.html | CC-MAIN-2022-05 | refinedweb | 233 | 79.6 |
Jan 31, 2008 10:44 PM|taowang|LINK
Hi,
I use VS2008 to create a Asp.net web application. Then I create a App_Code new folder. After that I add a new item, Class1.cs, under App_Code.
When I come back to default.aspx.cs and type Class1 within Page_Load, I found it is not recognized. If I compile, compiler error says "Class1" could not be found.
Any idea?
Thx
Tao
All-Star
16800 Points
Jan 31, 2008 11:05 PM|Jeev|LINK
If you are using the "web application project" instead of "web site", then the code does not get dynamically compiled. You would be better off adding these classes a class library project and adding the reference to your web application project
However if you are using website , the class in the App_code will get compiled dynamically
Feb 01, 2008 03:23 AM|taowang|LINK
Jeev,
I did use "web application project". You answer explains.
I tried this. Add new Class.cs. Build project. Then reference Class1 from default.aspx.cs. It still deosn't work. This leads to another question then. Does this mean, in "web application project", I'm not able to add a new Class file to define new classes?
Class1 is public with no property.
Thx
Tao
All-Star
67417 Points
Feb 04, 2008 04:10 AM|Vince Xu - MSFT|LINK
Hi,
Did you used in VS2005? If so, there are several suggestions.
1. You can put the new class file into any folder EXCEPT App_Code folder. It means that you can't directly access the class in the App_Code folder and you can put the class file in other folders.
2. For example, the new class file "Class1" in the folder "ppp".
namespace WebApplication1.ppp
{
public class Class1
{
}
}
You can call the "Class1" with WebApplication1.ppp.Class1 in the default.aspx.cs
But in VS2008, this issue has been resolved and every thing is ok.
Hope it helps.
Apr 25, 2008 11:13 AM|gyan_flip|LINK
Below:
ASP.NET 3.5 .net 3.5
Apr 25, 2008 11:15 AM|gyan_flip|LINK. Do anyone help?
usingSystem;
usingSystem.Linq; using System.Xml.Linq;namespace WebApplication1.App_Code
{public class Employee
{public string Name { get; set; }
}
}
Regards,
Gyan
ASP.NET 3.5 .net 3.5
Apr 29, 2008 08:38 AM|gyan_flip|LINK
Reply From Scott:
[" Hi Gyan,
If you use a "web application" project you'll want to add the class outside of the \app_code directory. When you do this it will be compiled with the rest of your application and you'll have intellisense against it.
If you use a "web site" project you can pull the class in \app_code and get intellisense.
Hope this helps,
Scott "]
This means that we cannot add \app_code and class files in \app_code folder when we are using "web application", only we can add \app_code folder in "web site".
ASP.NET 3.5
Member
49 Points
May 16, 2010 03:58 PM|dabhi_mayur|LINK
In Visual Studio Professional 2008, you can still create the
App_Code folder by right-clicking on the Web Project and selecting
Add, then Add Folder. Rename the new folder to App_Code.
Contrary to some recommendations on the Web that you should not use the App_Code folder because you cannot place common Web UI codes or classes in this folder, you can do so by setting Build Action of each class to Compile. Suppose this step is not done, the classes defined in this folder will not be visible to your other codes. This explains why people recommended against the use of App_Code folder in VS 2008.
Suppose you have a class called WebCommon.cs in this folder. All you have to do is to right-click on this file and select Properties. A properties window will appear. Set Build Action to Compile. Viola! The class can be accessed exactly like what you have seen in VS 2005.
9 replies
Last post Jun 24, 2010 01:27 PM by skamin | http://forums.asp.net/t/1213840.aspx | CC-MAIN-2015-18 | refinedweb | 665 | 75.61 |
When the first early access versions of Java 8 were made available, what seemed the most important (r)evolution were lambdas. This is now changing and many developers seem to think now that streams are the most valuable Java 8 feature. And this is because they believe that by changing a single word in their programs (replacing
stream with
parallelStream) they will make these programs work in parallel. Many Java 8 evangelists have demonstrated amazing examples of this. Is there something wrong with this? No. Not something. Many things:
- Running in parallel may or may not be a benefit. It depends what you are using this feature for.
- Java 8 parallel streams may make your programs run faster. Or not. Or even slower.
- Thinking about streams as a way to achieve parallel processing at low cost will prevent developers to understand what is really happening. Streams are not directly linked to parallel processing.
- Most of the above problems are based upon a misunderstanding: parallel processing is not the same thing as concurrent processing. And most examples shown about “automatic parallelization” with Java 8 are in fact examples of concurrent processing.
- Thinking about map, filter and other operations as “internal iteration” is a complete nonsense (although this is not a problem with Java 8, but with the way we use it).
So, what are streams
According to Wikipedia:
“a stream is a potentially infinite analog of a list, given by the inductive definition:
data Stream a = Cons a (Stream a)
Generating and computing with streams requires lazy evaluation, either implicitly in a lazily evaluated language or by creating and forcing thunks in an eager language.”
One most important think to notice is that Java is what Wikipedia calls an “eager” language, which means Java is mostly strict (as opposed to lazy) in evaluating things. For example, if you create a
List in Java, all elements are evaluated when the list is created. This may surprise you, since you may create an empty list and add elements after. This is only because either the list is mutable (and you are replacing a null reference with a reference to something) or you are creating a new list from the old one appended with the new element.
Lists are created from something producing its elements. For example:
List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
Here the producer is an array, and all elements of the array are strictly evaluated.
It is also possible to create a list in a recursive way, for example the list starting with 1 and where all elements are equals to 1 plus the previous element and smaller than 6. In Java < 8, this translates into:
List<Integer> list = new ArrayList<Integer>(); for(int i = 0; i < 6; i++) { list.add(i); }
One may argue that the
for loop is one of the rare example of lazy evaluation in Java, but the result is a list in which all elements are evaluated.
What happens if we want to apply a function to all elements of this list? We may do this in a loop. For example, if with want to increase all elements by 2, we may do this:
for(int i = 0; i < list.size(); i++) { list.set(i, list.get(i) * 2); }
However, this does not allow using an operation that changes the type of the elements, for example increasing all elements by 10%. The following solution solves this problem:
List<Double> list2 = new ArrayList<Double>(); for(int i = 0; i < list.size(); i++) { list2.add(list.get(i) * 1.2); }
This form allows the use of a the Java 5
for each syntax:
List<Double> list2 = new ArrayList<>(); for(Integer i : list) { list2.add(i * 1.2); }
or the Java 8 syntax:
List<Double> list2 = new ArrayList<>(); list.forEach(x -> list2.add(x * 1.2));
So far, so good. But what if we want to increase the value by 10% and then divide it by 3? The trivial answer would be to do:
List<Double> list2 = new ArrayList<>(); list.forEach(x -> list2.add(x * 1.2)); List<Double> list3 = new ArrayList<>(); list2.forEach(x -> list3.add(x / 3));
This is far from optimal because we are iterating twice on the list. A much better solution is:
List<Double> list2 = new ArrayList<>(); for(Integer i : list) { list2.add(i * 1.2 / 3); }
Let aside the auto boxing/unboxing problem for now. In Java 8, this can be written as:
List<Double> list2 = new ArrayList<>(); list.forEach(x -> list2.add(x * 1.2 / 3));
But wait... This is only possible because we see the internals of the
Consumer bound to the list, so we are able to manually compose the operations. If we had:
List<Double> list2 = new ArrayList<>(); list.forEach(consumer1); List<Double> list3 = new ArrayList<>(); list2.forEach(consumer2);
How could we know how to compose them? No way. In Java 8, the
Consumer interface has a default method
andThen. We could be tempted to compose the consumers this way:
list.forEach(consumer1.andThen(consumer2));
but this will result in an error, because
andThen is defined as:
default Consumer<T> andThen(Consumer<? super T> after) { Objects.requireNonNull(after); return (T t) -> { accept(t); after.accept(t); }; }
This means that we can't use
andThen to compose consumers of different types.
In fact, we have it all wrong since the beginning. What we need is to bind the list to a function in order to get a new list, such as:
Function<Integer, Double> function1 = x -> x * 1.2; Function<Double, Double> function2 = x -> x / 3; list.bind(function1).bind(function2);
where the
bind method would be defined in a special
FList class like:
public class FList<T> { final List<T> list; public FList(List<T> list) { this.list = list; } public <U> FList<U> bind(Function<T, U> f) { List<U> newList = new ArrayList<U>(); for (T t : list) { newList.add(f.apply(t)); } return new FList<U>(newList); } }
and we would use it as in the following example:
new Flist<>(list).bind(function1).bind(function2);
The only trouble we have then is that binding twice would require iterating twice on the list. This is because
bind is evaluated strictly. What we would need is a lazy evaluation, so that we could iterate only once.
The problem here is that the
bind method is not a real binding. It is in reality a composition of a real binding and a reduce. "Reducing" is applying an operation to each element of the list, resulting in the combination of this element and the result of the same operation applied to the previous element. As there is no previous element when we start from the first element, we start with an initial value. For example, applying (x) -> r + x, where r is the result of the operation on the previous element, or 0 for the first element, gives the sum of all elements of the list. Applying () -> r + 1 to each element, starting with r = 0 gives the length of the list. (This may not be the more efficient way to get the length of the list, but it is totally functional!)
Here, the operation is
add(element) and the initial value is an empty list. And this occurs only because the function application is strictly evaluated.
What Java 8 streams give us is the same, but lazily evaluated, which means that when binding a function to a stream, no iteration is involved!
Binding a
Function<T, U> to a
Stream<T> gives us a
Stream<U> with no iteration occurring. The resulting
Stream is not evaluated, and this does not depend upon the fact that the initial stream was built with evaluated or non evaluated data.
In functional languages, binding a
Function<T, U> to a
Stream<T> is itself a function. In Java 8, it is a method, which means it's arguments are strictly evaluated, but this has nothing to do with the evaluation of the resulting stream. To understand what is happening, we can imagine that the functions to bind are stored somewhere and they become part of the data producer for the new (non evaluated) resulting stream.
In Java 8, the method binding a function
T -> U to a
Stream<T>, resulting in a
Stream<U> is called
map. The function binding a function
T -> Stream<U> to a
Stream<T>, resulting in a
Stream<U> is called
flatMap.
Where is flatten?
Most functional languages also offer a
flatten function converting a
Stream<Stream<U>> into a
Stream<U>, but this is missing in Java 8 streams. It may not look like a big trouble since it is so easy to define a method for doing this. For example, given the following function:
Function<Integer, Stream<Integer>> f = x -> Stream.iterate(1, y -> y + 1).limit(x);
Stream<Integer> stream = Stream.iterate(1, x -> x + 1); Stream<Integer> stream2 = stream.limit(5).flatMap(f); System.out.println(stream2.collect(toList()))
to produce:
[1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5]
Using
map instead of
flatMap:
Stream<Integer> stream = Stream.iterate(1, x -> x + 1); Stream<Integer> stream2 = stream.limit(5).map(f); System.out.println(stream2.collect(toList()))
will produce a stream of streams:
[java.util.stream.SliceOps$1@12133b1, java.util.stream.SliceOps$1@ea2f77, java.util.stream.SliceOps$1@1c7353a, java.util.stream.SliceOps$1@1a9515, java.util.stream.SliceOps$1@f49f1c]
Converting this stream of streams of integers to a stream of integers is very straightforward using the functional paradigm: one just need to flatMap the identity function to it:
System.out.println(stream2.flatMap(x -> x).collect(toList()));
It is however strange that a
flatten method has not been added to the stream, knowing the strong relation that ties
map,
flatMap,
unit and
flatten, where
unit is the function from
T to
Stream<T>, represented by the method:
Stream<T> Stream.of(T... t)
When are stream evaluated?
Streams are evaluated when we apply to them some specific operations called terminal operation. This may be done only once. Once a terminal operation is applied to a stream, is is no longer usable. Terminal operations are:
forEach
forEachOrdered
toArray
reduce
collect
min
max
count
anyMatch
allMatch
noneMatch
findFirst
findAny
iterator
spliterator
Some of these methods are short circuiting. For example,
findFirst will return as soon as the first element will be found.
Non terminal operations are called intermediate and can be stateful (if evaluation of an element depends upon the evaluation of the previous) or stateless. Intermediate operations are:
filter
map
mapTo...(
Int,
Longor
Double)
flatMap
flatMapTo...(
Int,
Longor
Double)
distinct
sorted
peek
limit
skip
sequential
parallel
unordered
onClose
Several intermediate operations may be applied to a stream, but only one terminal operation may be use.
So what about parallel processing?
One most advertised functionality of streams is that they allow automatic parallelization of processing. And one can find the amazing demonstrations on the web, mainly based of the same example of a program contacting a server to get the values corresponding to a list of stocks and finding the highest one not exceeding a given limit value. Such an example may show an increase of speed of 400 % and more.
But this example as little to do with parallel processing. It is an example of concurrent processing, which means that the increase of speed will be observed also on a single processor computer. This is because the main part of each “parallel” task is waiting. Parallel processing is about running at the same time tasks that do no wait, such as intensive calculations.
Automatic parallelization will generally not give the expected result for at least two reasons:
- The increase of speed is highly dependent upon the kind of task and the parallelization strategy. And over all things, the best strategy is dependent upon the type of task.
- The increase of speed in highly dependent upon the environment. In some environments, it is easy to obtain a decrease of speed by parallelizing.
Whatever the kind of tasks to parallelize, the strategy applied by parallel streams will be the same, unless you devise this strategy yourself, which will remove much of the interest of parallel streams. Parallelization requires:
- A pool of threads to execute the subtasks,
- Dividing the initial task into subtasks,
- Distributing subtasks to threads,
- Collating the results.
Without entering the details, all this implies some overhead. It will show amazing results when:
- Some tasks imply blocking for a long time, such as accessing a remote service, or
- There are not many threads running at the same time, and in particular no other parallel stream.
If all subtasks imply intense calculation, the potential gain is limited by the number of available processors. Java 8 will by default use as many threads as they are processors on the computer, so, for intensive tasks, the result is highly dependent upon what other threads may be doing at the same time. Of course, if each subtask is essentially waiting, the gain may appear to be huge.
The worst case is if the application runs in a server or a container alongside other applications, and subtasks do not imply waiting. In such a case, (for example running in a J2EE server), parallel streams will often be slower that serial ones. Imagine a server serving hundreds of requests each second. There are great chances that several streams might be evaluated at the same time, so the work is already parallelized. A new layer of parallelization at the business level will most probably make things slower.
Worst: there are great chances that the business applications will see a speed increase in the development environment and a decrease in production. And that is the worst possible situation.
Edit: for a better understanding of why parallel streams in Java 8 (and the Fork/Join pool in Java 7) are broken, refer to these excellent articles by Edward Harned:
What streams are good for
Stream are a useful tool because they allow lazy evaluation. This is very important in several aspect:
- They allow functional programming style using bindings.
- They allow for better performance by removing iteration. Iteration occurs with evaluation. With streams, we can bind dozens of functions without iterating.
- They allow easy parallelization for task including long waits.
- Streams may be infinite (since they are lazy). Functions may be bound to infinite streams without problem. Upon evaluation, there must be some way to make them finite. This is often done through a short circuiting operation.
What streams are not good for
Streams should be used with high caution when processing intensive computation tasks. In particular, by default, all streams will use the same
ForkJoinPool, configured to use as many threads as there are cores in the computer on which the program is running.
If evaluation of one parallel stream results in a very long running task, this may be split into as many long running sub-tasks that will be distributed to each thread in the pool. From there, no other parallel stream can be processed because all threads will be occupied. So, for computation intensive stream evaluation, one should always use a specific
ForkJoinPool in order not to block other streams.
To do this, one may create a
Callable from the stream and submit it to the pool:
List<SomeClass> list = // A list of objects Stream<SomeClass> stream = list.parallelStream().map(this::veryLongProcessing); Callable<List<Integer>> task = () -> stream.collect(toList()); ForkJoinPool forkJoinPool = new ForkJoinPool(4); List<SomeClass> newList = forkJoinPool.submit(task).get()
This way, other parallel streams (using their own
ForkJoinPool) will not be blocked by this one. In other words, we would need a pool of
ForkJoinPool in order to avoid this problem.
If a program is to be run inside a container, one must be very careful when using parallel streams. Never use the default pool in such a situation unless you know for sure that the container can handle it. In a Java EE container, do not use parallel streams.
Previous articles
What's Wrong with Java 8, Part I: Currying vs Closures
What's Wrong in Java 8, Part II: Functions & Primitives
Hendy Irawan replied on Tue, 2014/05/20 - 12:28pm
Pierre-yves, I second many of your points. My immediate question is: Why now?
JDK 8 milestones have been available since 2 years ago. Had you published your series (voiced your observations) throughout the development period, many of these issues could have been improved (or at least considered).
Edward Harned replied on Tue, 2014/05/20 - 2:50pm
Great article, you’re batting 1000.
The real problem with parallel streams is that the underlying structure (fork/join) is defective. I’ve been writing a critique about this faulty framework for four years now. Parallel streams expose the awful decision by Oracle not to build a parallel engine themselves but to rely on an academic experiment underpinning a research paper as the basis of the parallel option.
Pierre-yves Saumont replied on Wed, 2014/05/21 - 1:54am
in response to:
Hendy Irawan
Hendy, There are several answers to your question. First, my experience with submitting remarks about the evolution of Java in the past has given absolutely no result. But this was not a surprise. One more important reason is that I was only assigned Java 8 evaluation at work in September 2013. I could have learn about Java 8 before in my spare time, but I had many other things to do, among which developing in other (more functional) languages.
But the main reason is perhaps that I do not think that Java 8 is wrong in itself. What decided me to write this series are the many examples I read about Java 8. I was using Java with the functional paradigm before Java 8. It has a cost. We have our own functional framework written in Java 6, with functions, immutable collections (missing in Java 8), monads (the most useful missing in Java 8), streams, actors and more. Most of the examples I have read about try to show how simple it is to write programs using the functional paradigm with Java 8 by just throwing in some magic words such as Optional or parallelStream. This is not true. There is still a big cost. It is worth it, but we must know the limitations and how to work around them. In the last article in the series, I will summarize what at think are good practices when using the functional paradigm with Java 8.
Pierre-yves Saumont replied on Wed, 2014/05/21 - 2:40am
in response to:
Edward Harned
Edward, thanks for you comment. By the way, your article is really a must read for all programmers considering using parallel streams and/or the Fork/Join framework. Having developed our own parallel engine, I feel quite frustrated each time I heard people considering Java 8 parallel stream as the right drop in solution for their problem.
By the way, I edited my article to include links to your two articles.
Peter Huber replied on Fri, 2014/06/06 - 3:57am
Blogs have to be filled, bytes have to fill the network cables, electrons have to be pushed around - so many usefull blog entries these days...
I don't get why we cannot do more than one Type in Transforming... Use "map" maybe?And about Fork/Join
Pierre-yves Saumont replied on Fri, 2014/06/06 - 8:12am
in response to:
Peter Huber
>I don't get why we cannot do more than one Type in Transforming... Use "map" maybe?
Using map twice as I did in the following example (I used bind as a generic name for map or flatMap, depending on the type of the function):
list.bind(function1).bind(function2);
is not composing functions. This is applying successively two functions. What is much more interesting is to be able to compose functions without applying them, in order to build new functions that will be reused later. This is what programming is all about. Otherwise, it is only scripting.
What I was saying is that a map (and flatMap) method should have been added to List. Instead, we have to use Stream, which can't be reused.
But the main interest of Stream is that it is lazily evaluated. This allows deferred evaluation, and more specifically parallel evaluation.
>And about Fork/Join
You do not give any valid reason. You trust Doug Lea. This kind of religious reason is irrelevant. But you are right that every programmer should look at the code before using it. Did you do so? If you did, you can then have good technical reasons and you should expose them. But instead, you are just saying things like:
>Please go on and accuse hibernate, spring, etc. then as well! And please don't stop, accuse Oracle of their DB and SQL and...
Did I wrote that Hibernate or Spring or Oracle DB was wrong? Why would you invent such lies instead of explaining us good use cases for parallel streams?
Use parallel streams on a single core machine and see what happen. Use parallel streams on a server and see what happens. Use parallel streams more than once at the same time in any application and see what happens.
>Try to match the optimization level the Java guys reached in for instance LongAdder then let's talk again
Are you able to match this optimization level yourself? Because if you are not, and if you are right that no one should argue before being able to do so, guess what you should have done!
Soylent Green replied on Fri, 2014/06/06 - 1:59pm
I understand that you're angry, Pierre-yves, but actually Peter has a point in claiming it's not the APIs Designers fault if APIs are used the "wrong" way. And I bet he just wanted to give an example with Spring and Oracle which are both tremendously capable instruments, if you know how to use them. if not, then it's the same as with parallel streams - you'll just get a bleeding nose. But still it's not Springs/Oracles fault if Devs don't rtfm...
Peter Huber replied on Fri, 2014/06/06 - 2:34pm
in response to:
Soylent Green
Thanks Soylent...people different than me might come to the conclusion that there's a reason people at Oracle do not listen ;-)
Anyway talking about function composition...I go for the straight forward approach, seem like easy...but I bet - yes I see it's not Lisp I know I did Lisp myself - it's not completly correct in terms of functional theoretical constructs...
Pierre-yves Saumont replied on Sun, 2014/06/08 - 4:27am
in response to:
Soylent Green
You're mostly right except that you're slightly wrong when you say that parallel streams may be useful IF you know how to use them. Oracle tells us how to use them. Oracle demonstrated uses of parallel streams. So everybody knows how parallel streams are intended to be used. I would say that parallel streams are useful if you know WHEN to use them. And most importantly when NOT to use them.
My answer to this question is quite simple: I cannot find any business use case for parallel streams. I do not say that it is not an interesting piece of software. It surely is. However, I could not find any useful use case for it.
I found two cases when parallel streams are efficient: parallelizing waiting tasks, and parallelizing intensive computing tasks on an otherwise idling machine with more than one processor.
The first use case was demonstrated several times with an example of tasks accessing a remote service. This kind of use case is fine, except that it forces the user to use parallel processing although concurrent processing would be much more efficient. If you have hundred tasks accessing a remote service (resulting mostly in tasks waiting rather than computing), it's inefficient to limit the number of threads to the number of available hardware threads. Using parallel streams will give increased performance compared to mono threading, but using a cached thread pool with as many threads as needed will perform much better.
The second use case (parallelizing intensive computing tasks on an otherwise idling machine with more than one processor) is purely theoretical. No one will ever have to implement this use case. Probably at least ninety percent of Java developers are business programmers creating task running on a J2EE server. Using parallel streams on a J2EE server, although “legal”, is nonsense because to get profit from parallel streams, you've got to master the context (in term of thread use). And you can't do this with J2EE. A J2EE server is already highly parallelizing tasks, so there is absolutely no benefit to have any new layer of parallelization for computing intensive tasks.
So my question is: can one show us some business use cases where parallel streams are efficient. Of course, the fact that I could not find any does not mean that parallel streams are bad. I would really love to see parallel streams shinning. But all examples I could see where biased. Most often in these examples, parallel streams are used to solve software problems, i.e. mostly academic problems which have nothing in common with real business problems developers have to solve.
In all use cases I have (and we I am not working under J2EE), I have seen that parallel streams where far less efficient than the solutions that had been developed before. In some situations, they even were a disaster.
So you can argue with good counter examples and valid argument, or you cant rant about blasphemy. You choice. But believe it or not, it will not make me angry! | http://java.dzone.com/articles/whats-wrong-java-8-part-iii | CC-MAIN-2014-42 | refinedweb | 4,313 | 63.39 |
list of subsets from N elements
luc comeau
Ranch Hand
Joined: Jan 20, 2005
Posts: 97
posted
Feb 21, 2005 10:45:00
0
Hi everyone, first time posting in this particular forum, hope i chose the right place
Ok so i have this problem; I am given a vector containing Strings in them say for example,"name","Telephone","address"..etc (personal information bascially).
To simplify the problem i think maybe ill just make each
string
in the array correspond to a letter in the alphabet ie.name=a,telephone=b,address=c.
Ok so say i am passed a vector with "a","b","c", i need to be able to create all the sets of these, without repeating any letters in a set.So basically my answer to this should just be;
a,b,c,ab,ac,bc,abc
But my method to do this must also be able to handle any amount of inputs, not just three, so i could have a,b,c,d,e...then must be able to find all the sets of those as well.
I started this using for loops but i think i began to realize that it didnt work for N inputs, because each time i wanted more inputs i needed more for loops, so its bascially useless what ive done so far.Maybe recursion or something might be useful, anyone have any ideas?
Thanks for patiently reading, hope to hear some suggestions!
National Research Council<br />Internet Logic Department
Stephen Huey
Ranch Hand
Joined: Jul 15, 2003
Posts: 618
posted
Feb 21, 2005 12:19:00
0
Sounds like a school assignment...yeah, I'd say go for recursion. If you need to double check your work (make sure you haven't added the same thing twice to a set), you might try using the java.util.Set class (or rather, maybe
java.util.HashSet
or something like that).
luc comeau
Ranch Hand
Joined: Jan 20, 2005
Posts: 97
posted
Feb 21, 2005 13:13:00
0
hey,
lol, its not a school assignment at all, haha, i wish it was, im actually writing a program at work, this is a minor part to it but its essential for forward progress in the project.I asked a few people here at work and they say they may have some notes from their Algorithms and Data structures classes that might be able to help me, but we'll see i supose.
I infact am not a huge fan of recursion, cause it actually requires more effort to think up the logic for it,(poor me eh) haha but i guess i can give it a try.As for the java.util.set class and the hashSet, ill give them a looking at, still any other ideas would be much appreciated!
thanks again
[ February 21, 2005: Message edited by: luc comeau ]
Igor Stojanovic
Ranch Hand
Joined: Feb 18, 2005
Posts: 58
posted
Feb 21, 2005 13:59:00
0
Hi luc,
I think you should use Varargs (
Methods That Allow Variable-Length Parameter Lists
)
The following class defines a method that accepts a vararg, and invokes the same method passing it first several arguments and then fewer arguments. The arguments come in as an array.
public class Varargs { public static void main(String...args) { Varargs test = new Varargs(); //call the method System.out.println(test.min(9,6,7,4,6,5,2)); System.out.println(test.min(5,1,8,6)); } /*define the method that can accept any number of arguments. */ public int min(Integer...args) { boolean first = true; Integer min = args[0]; for (Integer i : args) { if (first) { min = i; first = false; } else if (min.compareTo(i) > 0) { min = i; } } return min; } } The output is 2 1
When combined with the for each loop, varargs are very easy to use.
Note that varargs are new to SDK 5.0, so you need to compile for that version of
Java
to use them.
kind regards
Igor
[ February 21, 2005: Message edited by: Igor Stojanovic ]
Michael Dunn
Ranch Hand
Joined: Jun 09, 2003
Posts: 4632
posted
Feb 21, 2005 14:13:00
0
class CombinationsAll { String startCombo = "abcd"; public CombinationsAll() { for(int x = 1; x <= startCombo.length(); x++) { int[] arr = new int[x]; for(int y = 0; y < x; y++) arr[y] = y; getStartCombos(arr); } System.exit(0); } private void getStartCombos(int arr[]) { String thisCombo = ""; for(int x = 0; x < arr.length ; x++) thisCombo += startCombo.charAt(arr[x]); getCombos("",thisCombo); if(arr[0] == (startCombo.length()-1)-(arr.length-1-0)) return; if(arr[arr.length-1] == startCombo.length()-1) { for(int i = 0; i < arr.length;i++) { if(arr[i] == (startCombo.length()-1)-(arr.length-1-i)) { arr[i-1]++; for(int ii = i; ii < arr.length; ii++) { arr[ii] = arr[ii-1] + 1; } break; } } } else { arr[arr.length-1]++; } getStartCombos(arr); } private void getCombos(String str1, String str2) { int i = str2.length()-1; if(i < 1) System.out.println(str1+str2); else for(int ii = 0; ii <= i; ii++) getCombos(str1 + str2.substring(ii,ii+1), str2.substring(0, ii) + str2.substring(str2.length() - (i-ii))); } public static void main(String[] args) {new CombinationsAll();} }
luc comeau
Ranch Hand
Joined: Jan 20, 2005
Posts: 97
posted
Feb 22, 2005 06:03:00
0
hi everyone
Igor, thanks for the effort with the Varargs, i should have mentioned before that i am not using 1.5, i have started this project before it was released so im using 1.4.2, but it i cant figure something out i will consider that, thanks
Michael,I gave your code a run and the only thing that appears wrong withit is that it is writing out all "permutations" of the String "abcd"(well its not really wrong but in my situation it is),i infact need the combinations of "abcd" where ordering doesnt matter.So where your program counts ab,ba as two different elements, i infact would only count them as one, like i wouldnt consider them at all as being different.
Im going to try and trace through your code and see if i can figure out how to change this, but if its easy to fix and repost please feel free, but thanks for the time u took to post that in the first place, much appreciated !
P.s- has anyone ever heard of the "Banker's Sequence"
iv done a little research and aparently this sequence is the fastest way to approach this problem, but iv yet to find any implementation of it in java.
It does the subsets int he way you would suspect
if you had abcd, its would orde them like this
{a,b,c,d},{ab,ac,ad},{bc,bd},{cd},{abc,abd},{acd},{bcd},{abcd}
just a thought, maybe this might strike some more ideas.
[ February 22, 2005: Message edited by: luc comeau ]
Michael Dunn
Ranch Hand
Joined: Jun 09, 2003
Posts: 4632
posted
Feb 22, 2005 14:27:00
0
try this one then
class CombinationSets { String set = "abcd"; public CombinationSets() { for(int x = 1; x <= set.length(); x++) { int[] array = new int[x]; for(int y = 0; y < x; y++) array[y] = y; printSets(array); } } public void printSets(int[] arr) { String thisCombo = ""; while(true) { thisCombo = ""; for(int x = 0; x < arr.length ; x++) thisCombo += set.charAt(arr[x]); System.out.println(thisCombo); if(arr[0] == (set.length()-1)-(arr.length-1)) break; if(arr[arr.length-1] == set.length()-1) { for(int i = 0; i < arr.length;i++) { if(arr[i] == (set.length()-1)-(arr.length-1-i)) { arr[i-1]++; for(int ii = i; ii < arr.length; ii++) arr[ii] = arr[ii-1] + 1; break; } } } else arr[arr.length-1]++; } } public static void main(String[] args) {new CombinationSets();} }
David Harkness
Ranch Hand
Joined: Aug 07, 2003
Posts: 1646
posted
Feb 22, 2005 15:52:00
0
If it helps, realize that what you're effectively doing is counting in binary. Each element in the original list is a bit representing whether it is in or out of any given subset, so you're writing out all possible bit combinations. If there were three elements, the subsets would be
000 -- you probably want to omit the empty subset 001 010 011 100 101 110 111
You could use a recursive algorithm if the number of elements is small enough, but you can certainly rewrite it non-recursively.
Sorry, I only have time for that much right now, but hopefully it can give you enough to solve it.
luc comeau
Ranch Hand
Joined: Jan 20, 2005
Posts: 97
posted
Feb 23, 2005 06:52:00
0
hi everyone again,
David, i first would like to say thats exactly what i was working on yesterday, seems that i always come up with the same ideas as you as u post them, I wrote up some code that does the binary representation of them, for the sake of helping others wanting to do this ill post the code, or if you were intrested;
public static void output(int[] string, int position) { int[] temp_string = new int[length]; int index = 0; int i; for (i = 0; i < length; i++) { if ((index < position) && (string[index] == i)) { temp_string[i] = 1; index++; } else temp_string[i] = 0; } for (i = 0; i < length; i++) System.out.println( temp_string[i]); if (i%length==0) System.out.println("\r"); } public static void generate(int[] string, int position, int positions) { if (position < positions) { if (position == 0) { for (int i = 0; i < length; i++) { string[position] = i; generate(string, position + 1, positions); } } else { for (int i = string[position - 1] + 1; i < length; i++) { string[position] = i; generate(string, position + 1, positions); } } } else output(string, positions); } public static void main (String[] args) { length = 3; /*this is hard coded for now but can be any value */ for (int i = 0; i <= length; i++) { int[] string = new int[length]; generate(string, 0, i); } System.exit(0); }
I havn't yet decided what i want to do with these binary representations, but for the most part this thread has accomplished what i asked !!
Thanks
As for Michael, ill give your code a try and see if maybe i can find some way to incorporate your idea it may be useful for a later stage of my coding, non the less, thanks so much !
[ February 23, 2005: Message edited by: luc comeau ]
Luke Nezda
Greenhorn
Joined: Jun 23, 2008
Posts: 1
posted
Jun 23, 2008 21:08:00
0
Last solution offered is basically a copy/paste from the C++ code from the December 2000 academic paper "Efficiently Enumerating the Subsets of a Set" by J. Loughry, J.I. van Hemert, and L. Schoofs if anyone is interested in more details on the Banker's Sequence and this implementation by its original authors.
I agree. Here's the link:
subject: creating a list of subsets from N elements
Similar Threads
Beta Update
Vector within vector
Typical/Built-in way to do validate different sets of parameters?
ClassCastException ?
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/375951/java/java/creating-list-subsets-elements | CC-MAIN-2014-52 | refinedweb | 1,851 | 66.37 |
NAME
epoll_create, epoll_create1 - open an epoll file descriptor
SYNOPSIS
#include <sys/epoll.h> int epoll_create(int size); int epoll_create1(int flags);
DESCRIPTION
epoll_create() opens an epoll file descriptor by requesting the kernel to allocate an event backing store dimensioned for size descriptors. The size is not the maximum size of the backing store but just a hint to the kernel about how to dimension internal structures. (Nowadays, size is ignored; see NOTES below.) The returned file descriptor is used for all the subsequent calls to the epoll interface. The file descriptor returned by epoll_create() must be closed by using close(2). If flags is 0, then, other than the fact that the obsolete flags.15 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/jaunty/en/man2/epoll_create.2.html | CC-MAIN-2016-07 | refinedweb | 135 | 65.52 |
XQuery is intended for labeling information extracted from multiple structured
sources like XML documents, object repositories and relational databases. <oXygen/>
offers help editing XQuery expression by the following means:
The content completion is triggered by CTRL-Space shortcut, at any point in the
expression. It shows the list of all the available XQuery functions and keywords. Each
function has been commented.
XQueries are very similar to the XSL
analyse documents that are stored on the local file system.
You can use the Drag and Drop triggered popup menu to easily
create XQuery FLWOR constructs or XPath expressions.
If you have a transformation scenario for the XQuery file and you specify an input
XML file, this one will also be added to the displayed input trees.
The overall structure of an XQuery module is presented: the
module name, the import declarations, the variables, the XML namespaces and the user
defined functions.
If you have Saxon8 SA (Schema Aware) installed you can use it as XQuery
processor.
<oXygen/> uses for XQuery the Saxon 8.1B processor. This is conformant to the XQuery
Working Draft. The processor is used in two cases: validation
and execution of the XQuery document. Although the execution implies a validation, it
is faster to syntactically check the expression and make sure it is valid before
executing it.
<oXygen/> integrates the xqDoc tool for generating HTML documentation for XQuery
files with just a couple of clicks. It accepts one or more XQuery files as input and
the function namespaces are configurable. | http://www.oxygenxml.com/xquery_editor.html | crawl-001 | refinedweb | 252 | 54.52 |
SYNOPSIS
#include <linux/bpf.h>
int bpf(int cmd, union bpf_attr *attr, unsigned int size);
DESCRIPTIONThe bpf() system call performs a range of operations related to extended Berkeley Packet Filters. Extended BPF (or eBPF) is similar to the original ("classic") BPF (cBPF) used to filter network packets. For both cBPF and eBPF programs, the kernel statically analyzes the programs before loading them, in order to ensure that they cannot harm the running system.
eBPF extends cBPF in multiple ways, including the ability to call a fixed set of in-kernel helper functions (via the BPF_CALL opcode extension provided by eBPF) and access shared data structures such as eBPF maps.
Extended BPF Design/ArchitectureeBPF maps are a generic data structure for storage of different data types. Data types are generally treated as binary blobs, so a user just specifies the size of the key and the size of the value at map-creation time. In other words, a key/value for a given map can have an arbitrary structure.
A user process can create multiple maps (with key/value-pairs being opaque bytes of data) and access them via file descriptors. Different eBPF programs can access the same maps in parallel. It's up to the user process and eBPF program to decide what they store inside maps.
There's one special map type, called a program array. This type of map stores file descriptors referring to other eBPF programs. When a lookup in the map is performed, the program flow is redirected in-place to the beginning of another eBPF program and does not return back to the calling program. The level of nesting has a fixed limit of 32, so that infinite loops cannot be crafted. At runtime, the program file descriptors stored in the map can be modified, so program functionality can be altered based on specific requirements. All programs referred to in a program-array map must have been previously loaded into the kernel via bpf(). If a map lookup fails, the current program continues its execution. See BPF_MAP_TYPE_PROG_ARRAY below for further details.
Generally, eBPF programs are loaded by the user process and automatically unloaded when the process exits. In some cases, for example, tc-bpf(8), the program will continue to stay alive inside the kernel even after the process that loaded the program exits. In that case, the tc subsystem holds a reference to the eBPF program after the file descriptor has been closed by the user-space program. Thus, whether a specific program continues to live inside the kernel depends on how it is further attached to a given kernel subsystem after it was loaded via bpf().
Each eBPF program is a set of instructions that is safe to run until its completion. An in-kernel verifier statically determines that the eBPF program terminates and is safe to execute. During verification, the kernel increments reference counts for each of the maps that the eBPF program uses, so that the attached maps can't be removed until the program is unloaded.
eBPF programs can be attached to different events. These events can be the arrival of network packets, tracing events, classification events by network queueing disciplines (for eBPF programs attached to a tc(8) classifier), and other types that may be added in the future. A new event triggers execution of the eBPF program, which may store information about the event in eBPF maps. Beyond storing data, eBPF programs may call a fixed set of in-kernel helper functions.
The same eBPF program can be attached to multiple events and different eBPF programs can access the same map:
tracing tracing tracing packet packet packet event A event B event C on eth0 on eth1 on eth2 | | | | | ^ | | | | v | --> tracing <-- tracing socket tc ingress tc egress prog_1 prog_2 prog_3 classifier action | | | | prog_4 prog_5 |--- -----| |------| map_3 | | map_1 map_2 --| map_4 |--
ArgumentsThe operation to be performed by the bpf() system call is determined by the cmd argument. Each operation takes an accompanying argument, provided via attr, which is a pointer to a union of type bpf_attr (see below). The size argument is the size of the union pointed to by attr.
The value provided in cmd is one of the following:
- BPF_MAP_CREATE
- Create a map and return a file descriptor that refers to the map. The close-on-exec file descriptor flag (see fcntl(2)) is automatically enabled for the new file descriptor.
- BPF_MAP_LOOKUP_ELEM
- Look up an element by key in a specified map and return its value.
- BPF_MAP_UPDATE_ELEM
- Create or update an element (key/value pair) in a specified map.
- BPF_MAP_DELETE_ELEM
- Look up and delete an element by key in a specified map.
- BPF_MAP_GET_NEXT_KEY
- Look up an element by key in a specified map and return the key of the next element.
- BPF_PROG_LOAD
- Verify and load an eBPF program, returning a new file descriptor associated with the program. The close-on-exec file descriptor flag (see fcntl(2)) is automatically enabled for the new file descriptor.
The bpf_attr union consists of various anonymous structures that are used by different bpf() commands:
union bpf_attr { struct { /* Used by BPF_MAP_CREATE */ __u32 map_type; __u32 key_size; /* size of key in bytes */ __u32 value_size; /* size of value in bytes */ __u32 max_entries; /* maximum number of entries in a map */ }; struct { /* Used by BPF_MAP_*_ELEM and BPF_MAP_GET_NEXT_KEY commands */ __u32 map_fd; __aligned_u64 key; union { __aligned_u64 value; __aligned_u64 next_key; }; __u64 flags; }; struct { /* Used by BPF_PROG_LOAD */ __u32 prog_type; __u32 insn_cnt; __aligned_u64 insns; /* 'const struct bpf_insn *' */ __aligned_u64 license; /* 'const char *' */ __u32 log_level; /* verbosity level of verifier */ __u32 log_size; /* size of user buffer */ __aligned_u64 log_buf; /* user supplied 'char *' buffer */ __u32 kern_version; /* checked when prog_type=kprobe (since Linux 4.1) */ }; } __attribute__((aligned(8)));
eBPF mapsMaps are a generic data structure for storage of different types of data. They allow sharing of data between eBPF kernel programs, and also between kernel and user-space applications.
Each map type has the following attributes:
- *
- type
- *
- maximum number of elements
- *
- key size in bytes
- *
- value size in bytes
The following wrapper functions demonstrate how various bpf() commands can be used to access the maps. The functions use the cmd argument to invoke different operations.
- BPF_MAP_CREATE
- The BPF_MAP_CREATE command creates a new map, returning a new file descriptor that refers to the map.
int bpf_create_map(enum bpf_map_type map_type, unsigned int key_size, unsigned int value_size, unsigned int max_entries) { union bpf_attr attr = { .map_type = map_type, .key_size = key_size, .value_size = value_size, .max_entries = max_entries }; return bpf(BPF_MAP_CREATE, &attr, sizeof(attr)); }
The new map has the type specified by map_type, and attributes as specified in key_size, value_size, and max_entries. On success, this operation returns a file descriptor. On error, -1 is returned and errno is set to EINVAL, EPERM, or ENOMEM.
The key_size and value_size attributes will be used by the verifier during program loading to check that the program is calling bpf_map_*_elem() helper functions with a correctly initialized key and to check that the program doesn't access the map element value beyond the specified value_size. For example, when a map is created with a key_size of 8 and the eBPF program calls
bpf_map_lookup_elem(map_fd, fp - 4)
the program will be rejected, since the in-kernel helper function
bpf_map_lookup_elem(map_fd, void *key)
expects to read 8 bytes from the location pointed to by key, but the fp - 4 (where fp is the top of the stack) starting address will cause out-of-bounds stack access.
Similarly, when a map is created with a value_size of 1 and the eBPF program contains
value = bpf_map_lookup_elem(...); *(u32 *) value = 1;
the program will be rejected, since it accesses the value pointer beyond the specified 1 byte value_size limit.
Currently, the following values are supported for map_type:
enum bpf_map_type { BPF_MAP_TYPE_UNSPEC, /* Reserve 0 as invalid map type */ BPF_MAP_TYPE_HASH, BPF_MAP_TYPE_ARRAY, BPF_MAP_TYPE_PROG_ARRAY, };
map_type selects one of the available map implementations in the kernel. For all map types, eBPF programs access maps with the same bpf_map_lookup_elem() and bpf_map_update_elem() helper functions. Further details of the various map types are given below.
- BPF_MAP_LOOKUP_ELEM
- The BPF_MAP_LOOKUP_ELEM command looks up an element with a given key in the map referred to by the file descriptor fd.
int bpf_lookup_elem(int fd, const void *key, void *value) { union bpf_attr attr = { .map_fd = fd, .key = ptr_to_u64(key), .value = ptr_to_u64(value), }; return bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr)); }
If an element is found, the operation returns zero and stores the element's value into value, which must point to a buffer of value_size bytes.
If no element is found, the operation returns -1 and sets errno to ENOENT.
- BPF_MAP_UPDATE_ELEM
- The BPF_MAP_UPDATE_ELEM command creates or updates an element with a given key/value in the map referred to by the file descriptor fd.
int bpf_update_elem(int fd, const void *key, const void *value, uint64_t flags) { union bpf_attr attr = { .map_fd = fd, .key = ptr_to_u64(key), .value = ptr_to_u64(value), .flags = flags, }; return bpf(BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr)); }
The flags argument should be specified as one of the following:
- BPF_ANY
- Create a new element or update an existing element.
- BPF_NOEXIST
- Create a new element only if it did not exist.
- BPF_EXIST
- Update an existing element.
- On success, the operation returns zero. On error, -1 is returned and errno is set to EINVAL, EPERM, ENOMEM, or E2BIG. E2BIG indicates that the number of elements in the map reached the max_entries limit specified at map creation time. EEXIST will be returned if flags specifies BPF_NOEXIST and the element with key already exists in the map. ENOENT will be returned if flags specifies BPF_EXIST and the element with key doesn't exist in the map.
- BPF_MAP_DELETE_ELEM
- The BPF_MAP_DELETE_ELEM command deleted the element whose key is key from the map referred to by the file descriptor fd.
int bpf_delete_elem(int fd, const void *key) { union bpf_attr attr = { .map_fd = fd, .key = ptr_to_u64(key), }; return bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr)); }
On success, zero is returned. If the element is not found, -1 is returned and errno is set to ENOENT.
- BPF_MAP_GET_NEXT_KEY
- The BPF_MAP_GET_NEXT_KEY command looks up an element by key in the map referred to by the file descriptor fd and sets the next_key pointer to the key of the next element.
int bpf_get_next_key(int fd, const void *key, void *next_key) { union bpf_attr attr = { .map_fd = fd, .key = ptr_to_u64(key), .next_key = ptr_to_u64(next_key), }; return bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr)); }
If key is found, the operation returns zero and sets the next_key pointer to the key of the next element. If key is not found, the operation returns zero and sets the next_key pointer to the key of the first element. If key is the last element, -1 is returned and errno is set to ENOENT. Other possible errno values are ENOMEM, EFAULT, EPERM, and EINVAL. This method can be used to iterate over all elements in the map.
- close(map_fd)
- Delete the map referred to by the file descriptor map_fd. When the user-space program that created a map exits, all maps will be deleted automatically (but see NOTES).
eBPF map typesThe following map types are supported:
- BPF_MAP_TYPE_HASH
- Hash-table maps have the following characteristics:
- *
- Maps are created and destroyed by user-space programs. Both user-space and eBPF programs can perform lookup, update, and delete operations.
- *
- The kernel takes care of allocating and freeing key/value pairs.
- *
- The map_update_elem() helper with fail to insert new element when the max_entries limit is reached. (This ensures that eBPF programs cannot exhaust memory.)
- *
- map_update_elem() replaces existing elements atomically.
- Hash-table maps are optimized for speed of lookup.
- BPF_MAP_TYPE_ARRAY
- Array maps have the following characteristics:
- *
- Optimized for fastest possible lookup. In the future the verifier/JIT compiler may recognize lookup() operations that employ a constant key and optimize it into constant pointer. It is possible to optimize a non-constant key into direct pointer arithmetic as well, since pointers and value_size are constant for the life of the eBPF program. In other words, array_map_lookup_elem() may be 'inlined' by the verifier/JIT compiler while preserving concurrent access to this map from user space.
- *
- All array elements pre-allocated and zero initialized at init time
- *
- The key is an array index, and must be exactly four bytes.
- *
- map_delete_elem() fails with the error EINVAL, since elements cannot be deleted.
- *
- map_update_elem() replaces elements in a nonatomic fashion; for atomic updates, a hash-table map should be used instead. There is however one special case that can also be used with arrays: the atomic built-in __sync_fetch_and_add() can be used on 32 and 64 bit atomic counters. For example, it can be applied on the whole value itself if it represents a single counter, or in case of a structure containing multiple counters, it could be used on individual counters. This is quite often useful for aggregation and accounting of events.
- Among the uses for array maps are the following:
- *
- As "global" eBPF variables: an array of 1 element whose key is (index) 0 and where the value is a collection of 'global' variables which eBPF programs can use to keep state between events.
- *
- Aggregation of tracing events into a fixed set of buckets.
- *
- Accounting of networking events, for example, number of packets and packet sizes.
- BPF_MAP_TYPE_PROG_ARRAY (since Linux 4.2)
- A program array map is a special kind of array map whose map values contain only file descriptors referring to other eBPF programs. Thus, both the key_size and value_size must be exactly four bytes. This map is used in conjunction with the bpf_tail_call() helper.
This means that an eBPF program with a program array map attached to it can call from kernel side into
void bpf_tail_call(void *context, void *prog_map, unsigned int index);
and therefore replace its own program flow with the one from the program at the given program array slot, if present. This can be regarded as kind of a jump table to a different eBPF program. The invoked program will then reuse the same stack. When a jump into the new program has been performed, it won't return to the old program anymore.
If no eBPF program is found at the given index of the program array (because the map slot doesn't contain a valid program file descriptor, the specified lookup index/key is out of bounds, or the limit of 32 nested calls has been exceed), execution continues with the current eBPF program. This can be used as a fall-through for default cases.
A program array map is useful, for example, in tracing or networking, to handle individual system calls or protocols in their own subprograms and use their identifiers as an individual map index. This approach may result in performance benefits, and also makes it possible to overcome the maximum instruction limit of a single eBPF program. In dynamic environments, a user-space daemon might atomically replace individual subprograms at run-time with newer versions to alter overall program behavior, for instance, if global policies change.
eBPF programsThe BPF_PROG_LOAD command is used to load an eBPF program into the kernel. The return value for this command is a new file descriptor associated with this eBPF program.
char bpf_log_buf[LOG_BUF_SIZE]; int bpf_prog_load(enum bpf_prog_type type, const struct bpf_insn *insns, int insn_cnt, const char *license) { union bpf_attr attr = { .prog_type = type, .insns = ptr_to_u64(insns), .insn_cnt = insn_cnt, .license = ptr_to_u64(license), .log_buf = ptr_to_u64(bpf_log_buf), .log_size = LOG_BUF_SIZE, .log_level = 1, }; return bpf(BPF_PROG_LOAD, &attr, sizeof(attr)); }
prog_type is one of the available program types:
enum bpf_prog_type { BPF_PROG_TYPE_UNSPEC, /* Reserve 0 as invalid program type */ BPF_PROG_TYPE_SOCKET_FILTER, BPF_PROG_TYPE_KPROBE, BPF_PROG_TYPE_SCHED_CLS, BPF_PROG_TYPE_SCHED_ACT, };
For further details of eBPF program types, see below.
The remaining fields of bpf_attr are set as follows:
- *
- insns is an array of struct bpf_insn instructions.
- *
- insn_cnt is the number of instructions in the program referred to by insns.
- *
- license is a license string, which must be GPL compatible to call helper functions marked gpl_only. (The licensing rules are the same as for kernel modules, so that also dual licenses, such as "Dual BSD/GPL", may be used.)
- *
- log_buf is a pointer to a caller-allocated buffer in which the in-kernel verifier can store the verification log. This log is a multi-line string that can be checked by the program author in order to understand how the verifier came to the conclusion that the eBPF program is unsafe. The format of the output can change at any time as the verifier evolves.
- *
- log_size size of the buffer pointed to by log_bug. If the size of the buffer is not large enough to store all verifier messages, -1 is returned and errno is set to ENOSPC.
- *
- log_level verbosity level of the verifier. A value of zero means that the verifier will not provide a log; in this case, log_buf must be a NULL pointer, and log_size must be zero.
Applying close(2) to the file descriptor returned by BPF_PROG_LOAD will unload the eBPF program (but see NOTES).
Maps are accessible from eBPF programs and are used to exchange data between eBPF programs and between eBPF programs and user-space programs. For example, eBPF programs can process various events (like kprobe, packets) and store their data into a map, and user-space programs can then fetch data from the map. Conversely, user-space programs can use a map as a configuration mechanism, populating the map with values checked by the eBPF program, which then modifies its behavior on the fly according to those values.
eBPF program typesThe eBPF program type (prog_type) determines the subset of kernel helper functions that the program may call. The program type also determines the program input (context)---the format of struct bpf_context (which is the data blob passed into the eBPF program as the first argument).
For example, a tracing program does not have the exact same subset of helper functions as a socket filter program (though they may have some helpers in common). Similarly, the input (context) for a tracing program is a set of register values, while for a socket filter it is a network packet.
The set of functions available to eBPF programs of a given type may increase in the future.
The following program types are supported:
- BPF_PROG_TYPE_SOCKET_FILTER (since Linux 3.19)
- Currently, the set of functions for BPF_PROG_TYPE_SOCKET_FILTER is:
bpf_map_lookup_elem(map_fd, void *key) /* look up key in a map_fd */ bpf_map_update_elem(map_fd, void *key, void *value) /* update key/value */ bpf_map_delete_elem(map_fd, void *key) /* delete key in a map_fd */
The bpf_context argument is a pointer to a struct __sk_buff.
- BPF_PROG_TYPE_KPROBE (since Linux 4.1)
- [To be documented]
- BPF_PROG_TYPE_SCHED_CLS (since Linux 4.1)
- [To be documented]
- BPF_PROG_TYPE_SCHED_ACT (since Linux 4.1)
- [To be documented]
EventsOnce a program is loaded, it can be attached to an event. Various kernel subsystems have different ways to do so.
Since Linux 3.19, the following call will attach the program prog_fd to the socket sockfd, which was created by an earlier call to socket(2):
setsockopt(sockfd, SOL_SOCKET, SO_ATTACH_BPF, &prog_fd, sizeof(prog_fd));
Since Linux 4.1, the following call may be used to attach the eBPF program referred to by the file descriptor prog_fd to a perf event file descriptor, event_fd, that was created by a previous call to perf_event_open(2):
ioctl(event_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);
EXAMPLES
/* bpf+sockets example: * 1. create array map of 256 elements * 2. load program that counts number of packets received * r0 = skb->data[ETH_HLEN + offsetof(struct iphdr, protocol)] * map[r0]++ * 3. attach prog_fd to raw socket via setsockopt() * 4. print number of received TCP/UDP packets every second */ int main(int argc, char **argv) { int sock, map_fd, prog_fd, key; long long value = 0, tcp_cnt, udp_cnt; map_fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(key), sizeof(value), 256); if (map_fd < 0) { printf("failed to create map '%s'\n", strerror(errno)); /* likely not run as root */ return 1; } struct bpf_insn prog[] = { BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), /* r6 = r1 */ BPF_LD_ABS(BPF_B, ETH_HLEN + offsetof(struct iphdr, protocol)), /* r0 = ip->proto */ BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4), /* *(u32 *)(fp - 4) = r0 */ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), /* r2 = fp */ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = r2 - 4 */ BPF_LD_MAP_FD(BPF_REG_1, map_fd), /* r1 = map_fd */ BPF_CALL_FUNC(BPF_FUNC_map_lookup_elem), /* r0 = map_lookup(r1, r2) */ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* if (r0 == 0) goto pc+2 */ BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */ BPF_XADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0, 0), /* lock *(u64 *) r0 += r1 */ BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */ BPF_EXIT_INSN(), /* return r0 */ }; prog_fd = bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, prog, sizeof(prog), "GPL"); sock = open_raw_sock("lo"); assert(setsockopt(sock, SOL_SOCKET, SO_ATTACH_BPF, &prog_fd, sizeof(prog_fd)) == 0); for (;;) { key = IPPROTO_TCP; assert(bpf_lookup_elem(map_fd, &key, &tcp_cnt) == 0); key = IPPROTO_UDP assert(bpf_lookup_elem(map_fd, &key, &udp_cnt) == 0); printf("TCP %lld UDP %lld packets, tcp_cnt, udp_cnt); sleep(1); } return 0; }
Some complete working code can be found in the samples/bpf directory in the kernel source tree.
RETURN VALUEFor a successful call, the return value depends on the operation:
- BPF_MAP_CREATE
- The new file descriptor associated with the eBPF map.
- BPF_PROG_LOAD
- The new file descriptor associated with the eBPF program.
- All other commands
- Zero.
On error, -1 is returned, and errno is set appropriately.
ERRORS
- EPERM
- The call was made without sufficient privilege (without the CAP_SYS_ADMIN capability).
- ENOMEM
- Cannot allocate sufficient memory.
- EBADF
- fd is not an open file descriptor.
- EFAULT
- One of the pointers (key or value or log_buf or insns) is outside the accessible address space.
- EINVAL
- The value specified in cmd is not recognized by this kernel.
- EINVAL
- For BPF_MAP_CREATE, either map_type or attributes are invalid.
- EINVAL
- For BPF_MAP_*_ELEM commands, some of the fields of union bpf_attr that are not used by this command are not set to zero.
- EINVAL
- For BPF_PROG_LOAD, indicates an attempt to load an invalid program. eBPF programs can be deemed invalid due to unrecognized instructions, the use of reserved fields, jumps out of range, infinite loops or calls of unknown functions.
- EACCES
- For BPF_PROG_LOAD, even though all program instructions are valid, the program has been rejected because it was deemed unsafe. This may be because it may have accessed a disallowed memory region or an uninitialized stack/register or because the function constraints don't match the actual types or because there was a misaligned memory access. In this case, it is recommended to call bpf() again with log_level = 1 and examine log_buf for the specific reason provided by the verifier.
- ENOENT
- For BPF_MAP_LOOKUP_ELEM or BPF_MAP_DELETE_ELEM, indicates that the element with the given key was not found.
- E2BIG
- The eBPF program is too large or a map reached the max_entries limit (maximum number of elements).
VERSIONSThe bpf() system call first appeared in Linux 3.18.
CONFORMING TOThe bpf() system call is Linux-specific.
NOTESIn the current implementation, all bpf() commands require the caller to have the CAP_SYS_ADMIN capability.
eBPF objects (maps and programs) can be shared between processes. For example, after fork(2), the child inherits file descriptors referring to the same eBPF objects. In addition, file descriptors referring to eBPF objects can be transferred over UNIX domain sockets. File descriptors referring to eBPF objects can be duplicated in the usual way, using dup(2) and similar calls. An eBPF object is deallocated only after all file descriptors referring to the object have been closed.
eBPF programs can be written in a restricted C that is compiled (using the clang compiler) into eBPF bytecode. Various features are omitted from this restricted C, such as loops, global variables, variadic functions, floating-point numbers, and passing structures as function arguments. Some examples can be found in the samples/bpf/*_kern.c files in the kernel source tree.
The kernel contains a just-in-time (JIT) compiler that translates eBPF bytecode into native machine code for better performance. The JIT compiler is disabled by default, but its operation can be controlled by writing one of the following integer strings to the file /proc/sys/net/core/bpf_jit_enable:
- 0
- Disable JIT compilation (default).
- 1
- Normal compilation.
- 2
- Debugging mode. The generated opcodes are dumped in hexadecimal into the kernel log. These opcodes can then be disassembled using the program tools/net/bpf_jit_disasm.c provided in the kernel source tree.
JIT compiler for eBPF is currently available for the x86-64, arm64, and s390 architectures.
COLOPHONThis page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | http://manpages.org/bpf/2 | CC-MAIN-2017-04 | refinedweb | 3,990 | 51.89 |
A new Flutter plugin.
For help getting started with Flutter, view our online documentation.
For help on editing plugin code, view the documentation.
example/README.md
Demonstrates how to use the quantize_plugin plugin.
For help getting started with Flutter, view our online documentation.
Add this to your package's pubspec.yaml file:
dependencies: quantize_plugin: ^0.1.1
You can install packages from the command line:
with Flutter:
$ flutter pub get
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:quantize_plugin/quantize_plugin.dart';
We analyzed this package on Jul 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries.
Fix
lib/quant_config.dart. (-1.99 points)
Analysis of
lib/quant_config.dart reported 4 hints:
line 54 col 14: Avoid using braces in interpolation when not needed.
line 55 col 15: Avoid using braces in interpolation when not needed.
line 56 col 13: Avoid using braces in interpolation when not needed.
line 60 col 17: Override
hashCode if overriding
==.
Format
lib/quant_result.dart.
Run
flutter format to format
lib/quant_result.dart.
Format
lib/quantize_plugin.dart.
Run
flutter format to format
lib/quantize_plugin.dart.
The package description is too short. (-20 points)
Add more detail to the
description field of
pubspec.yaml. Use 60 to 180 characters to describe the package, what it does, and its target use case. | https://pub.dev/packages/quantize_plugin | CC-MAIN-2019-30 | refinedweb | 251 | 52.46 |
Title Description
A businessman goes through an n × N square grid, to participate in a very important business activity.
He should enter from the upper left corner of the grid and exit from the lower right corner.
It takes 1 unit of time for each small square in the middle.
The merchant must cross out within (2n − 1) unit time.
When passing through each small square in the middle, you need to pay a certain fee.
The businessman expected to cross out at the least cost within the specified time.
How much is the minimum charge?
Note: it is not allowed to cross each small square diagonally (that is, it can only move up, down, left, right and cannot leave the grid).
Topic model
- Sub topic:Picking flowers
- Because it takes (2n-1) units of time to cross (2n-1), and it takes (2n-1) units of time to walk from (1,1) to (n, n), you can’t go back.
Title code
#include #include #include using namespace std; const int N = 110; int n; int w[N][N]; int f[N][N]; int main() { cin >> n; for(int i = 1; i <= n; i ++ ) for(int j = 1; j <= n; j ++ ) cin >> w[i][j]; memset(f, 0x3f, sizeof f); // Because min is calculated, it is initialized to the maximum, and the boundary problem needs to be considered for(int i = 1; i <= n; i ++ ) for(int j = 1; j <= n; j ++ ) { if(i == 1 && j == 1) f[i][j] = w[i][j]; // (1, 1) special judgment is required in the cycle, otherwise an error will occur else f[i][j] = min(f[i - 1][j], f[i][j - 1]) + w[i][j]; } cout << f[n][n] << endl; return 0; } | https://developpaper.com/acwing-1018-minimum-toll-linear-dp/ | CC-MAIN-2022-33 | refinedweb | 290 | 70.77 |
This is probably not limited to just double quotes but what seemed to be a simple thing gave me quite a few head-scratching minutes yesterday.
The task was straightforward. Customer has a double quote as part of a data string inside of a XML element. For example:
<attributeData>data"withdoublequote</attributeData>
The mapping logic required us to output the location of the double quote within the data string. OK, no problem. We would use the String Find functoid for this.
Unfortunately, when we used the double quote character as 2nd parameter to the functoid, we got an error when testing map:
‘userCSharp:StringFind(string(attributeData/text()) , """)’ is an invalid XPath expression. This is an unclosed string.
OK, so maybe I needed to use an escape sequence with the double quote character. For the next 15 minutes, I tried different combinations, including " and \”. None of them worked.
Of course, we could go with the old stand-by, a Script Functoid, with the following inline C#:
public int findQuote(string str)
{
return (str.IndexOf("\"") + 1);
}
That worked fine but not as easy to maintain and re-use. In the end, the combination that worked involved using an ASCII to Character Conversion functoid and the String Lookup functoid. For the ASCII to Character functoid, I used the ASCII value of 34 for the double quote character. I then connected the output to the String Lookup functoid as 2nd parameter and got the correct position. Just a simple trick and I think we can find use with other “special” characters as well.
Yup, When ever the issue comes to enter the special character perticularly in the HIPAA projects the AS|| to Character can solve it.
Hi,
Can you let me know if double quotes in a message can pass from one application to another connected over Biztalk server as it is or do they need some escape character or conversion to ascii character as mentioned above.Please answer | https://blogs.msdn.microsoft.com/biztalkcpr/2009/05/14/simple-task-of-passing-a-double-quote-to-a-string-functoid/ | CC-MAIN-2017-17 | refinedweb | 325 | 63.29 |
I understand the basics of sending a message via the Mailgun API using Python and requests from my site and all works fine. I would like to attach data from an HTML form using request.forms.get(''), but can't figure out the syntax to make it work. The link below is exactly what I need to do, except in Python instead of PHP.
How can I send the following form data for example through via Mailgun?
HTML FORM (Parts of it to get the point across)
<form action="/send" method="post">
<input name="name" placeholder="Name">
...
<button> ...
@route('/send', method='POST')
def send_simple_message():
variable_I_need_to_send = request.forms.get('firstname')
...
data={...",
"to": "MyName <myname@gmail.com>",
"subject": "Website Info Request",
"text": "Testing some Mailgun awesomness!",
"html": **variable_I_need_to_send**})
return '''
You can use the requests library:
import requests @route('/send/<firstname>', method='POST') def send_simple_message(first_name): ... data={...", "from": "{} <address>".format(first_name), "to": "MyName <myname@gmail.com>", "subject": "Website Info Request", "text": "Testing some Mailgun awesomness!", "html": '**I need to include form data here**'}) requests.post('', data=data) return | https://codedump.io/share/uVK9gMHn3W/1/sending-an-email-with-form-data-via-mailgun-and-python | CC-MAIN-2018-09 | refinedweb | 175 | 52.05 |
This blog is part 2 of our two-part series
For more background on dynamic time warping, refer to the previous post Understanding Dynamic Time Warping.
Background
Imagine that you own a company that creates 3D printed products. Last year, you knew that drone propellers were showing very consistent demand, so you produced and sold those, and the year before you sold phone cases. The new year is arriving very soon, and you’re sitting down with your manufacturing team to figure out what your company should produce for next year. Buying the 3D printers for your warehouse put you deep into debt, so you have to make sure that your printers are running at or near 100% capacity at all times in order to make the payments on them.
Since you’re a wise CEO, you know that your production capacity over the next year will ebb and flow – there will be some weeks when your production capacity is higher than others. For example, your capacity might be higher during the summer (when you hire seasonal workers), and lower during the 3rd week of every month (because of issues with the 3D printer filament supply chain). Take a look at the chart below to see your company’s production capacity estimate:
Your job is to choose a product for which weekly demand meets your production capacity as closely as possible. You’re looking over a catalog of products which includes last year’s sales numbers for each product, and you think this year’s sales will be similar.
If you choose a product with weekly demand that exceeds your production capacity, then you’ll have to cancel customer orders, which isn’t good for business. On the other hand, if you choose a product without enough weekly demand, you won’t be able to keep your printers running at full capacity and may fail to make the debt payments.
Dynamic time warping comes into play here because sometimes supply and demand for the product you choose will be slightly out of sync. There will be some weeks when you simply don’t have enough capacity to meet all of your demand, but as long as you’re very close and you can make up for it by producing more products in the week or two before or after, your customers won’t mind. If we limited ourselves to comparing the sales data with our production capacity using Euclidean Matching, we might choose a product that didn’t account for this, and leave money on the table. Instead, we’ll use dynamic time warping to choose the product that’s right for your company this year.
Load the product sales data set
We will use the weekly sales transaction data set found in the UCI Dataset Repository to perform our sales-based time series analysis. (Source Attribution: James Tan, jamestansc ‘@’ suss.edu.sg, Singapore University of Social Sciences)
import pandas as pd # Use Pandas to read this data sales_pdf = pd.read_csv(sales_dbfspath, header='infer') # Review data display(spark.createDataFrame(sales_pdf))
Each product is represented by a row, and each week in the year is represented by a column. Values represent the number of units of each product sold per week. There are 811 products in the data set.
Calculate distance to optimal time series by product code
# Calculate distance via dynamic time warping between product code and optimal time series import numpy as np import _ucrdtw def get_keyed_values(s): return(s[0], s[1:]) def compute_distance(row): return(row[0], _ucrdtw.ucrdtw(list(row[1][0:52]), list(optimal_pattern), 0.05, True)[1]) ts_values = pd.DataFrame(np.apply_along_axis(get_keyed_values, 1, sales_pdf.values)) distances = pd.DataFrame(np.apply_along_axis(compute_distance, 1, ts_values.values)) distances.columns = ['pcode', 'dtw_dist']
Using the calculated dynamic time warping ‘distances’ column, we can view the distribution of DTW distances in a histogram.
From there, we can identify the product codes closest to the optimal sales trend (i.e., those that have the smallest calculated DTW distance). Since we’re using Databricks, we can easily make this selection using a SQL query. Let’s display those that are closest.
%sql -- Top 10 product codes closest to the optimal sales trend select pcode, cast(dtw_dist as float) as dtw_dist from distances order by cast(dtw_dist as float) limit 10
After running this query, along with the corresponding query for the product codes that are furthest from the optimal sales trend, we were able to identify the 2 products that are closest and furthest from the trend. Let’s plot both of those products and see how they differ.
As you can see, Product #675 (shown in the orange triangles) represents the best match to the optimal sales trend, although the absolute weekly sales are lower than we’d like (we’ll remedy that later). This result makes sense since we’d expect the product with the closest DTW distance to have peaks and valleys that somewhat mirror the metric we’re comparing it to. (Of course, the exact time index for the product would vary on a week-by-week basis due to dynamic time warping). Conversely, Product #716 (shown in the green stars) is the product with the worst match, showing almost no variability.
Finding the optimal product: Small DTW distance and similar absolute sales numbers
Now that we’ve developed a list of products that are closest to our factory’s projected output (our “optimal sales trend”), we can filter them down to those that have small DTW distances as well as similar absolute sales numbers. One good candidate would be Product #202, which has a DTW distance of 6.86 versus the population median distance of 7.89 and tracks our optimal trend very closely.
# Review P202 weekly sales y_p202 = sales_pdf[sales_pdf['Product_Code'] == 'P202'].values[0][1:53]
Using MLflow to track best and worst products, along with artifacts
MLflow is an open source platform for managing the machine learning lifecycle, including experimentation, reproducibility, and deployment. Databricks notebooks offer a fully integrated MLflow environment, allowing you to create experiments, log parameters and metrics, and save results. For more information about getting started with MLflow, take a look at the excellent documentation.
MLflow’s design is centered around the ability to log all of the inputs and outputs of each experiment we do in a systematic, reproducible way. On every pass through the data, known as a “Run,” we’re able to log our experiment’s:
- Parameters – the inputs to our model.
- Metrics – the output of our model, or measures of our model’s success.
- Artifacts – any files created by our model – for example, PNG plots or CSV data output.
- Models – the model itself, which we can later reload and use to serve predictions.
In our case, we can use it to run the dynamic time warping algorithm several times over our data while changing the “stretch factor,” the maximum amount of warp that can be applied to our time series data. To initiate an MLflow experiment, and allow for easy logging using
mlflow.log_param(),
mlflow.log_metric(),
mlflow.log_artifact(), and
mlflow.log_model(), we wrap our main function using:
with mlflow.start_run() as run: ...
as shown in the abbreviated code below.
import mlflow def run_DTW(ts_stretch_factor): # calculate DTW distance and Z-score for each product with mlflow.start_run() as run: # Log Model using Custom Flavor dtw_model = {'stretch_factor' : float(ts_stretch_factor), 'pattern' : optimal_pattern} mlflow_custom_flavor.log_model(dtw_model, artifact_path="model") # Log our stretch factor parameter to MLflow mlflow.log_param("stretch_factor", ts_stretch_factor) # Log the median DTW distance for this run mlflow.log_metric("Median Distance", distance_median) # Log artifacts - CSV file and PNG plot - to MLflow mlflow.log_artifact('zscore_outliers_' + str(ts_stretch_factor) + '.csv') mlflow.log_artifact('DTW_dist_histogram.png') return run.info stretch_factors_to_test = [0.0, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5] for n in stretch_factors_to_test: run_DTW(n)
With each run through the data, we’ve created a log of the “stretch factor” parameter being used, and a log of products we classified as being outliers based upon the Z-score of the DTW distance metric. We were even able to save an artifact (file) of a histogram of the DTW distances. These experimental runs are saved locally on Databricks and remain accessible in the future if you decide to view the results of your experiment at a later date.
Now that MLflow has saved the logs of each experiment, we can go back through and examine the results. From your Databricks notebook, select the
icon in the upper right-hand corner to view and compare the results of each of our runs.
Not surprisingly, as we increase our “stretch factor,” our distance metric decreases. Intuitively, this makes sense: as we give the algorithm more flexibility to warp the time indices forward or backward, it will find a closer fit for the data. In essence, we’ve traded some bias for variance.
Logging Models in MLflow
MLflow has the ability to not only log experiment parameters, metrics, and artifacts (like plots or CSV files), but also to log machine learning models. An MLflow Model is simply a folder that is structured to conform to a consistent API, ensuring compatibility with other MLflow tools and features. This interoperability is very powerful, allowing any Python model to be rapidly deployed to many different types of production environments.
MLflow comes pre-loaded with a number of common model “flavors” for many of the most popular machine learning libraries, including scikit-learn, Spark MLlib, PyTorch, TensorFlow, and others. These model flavors make it trivial to log and reload models after they are initially constructed, as demonstrated in this blog post. For example, when using MLflow with scikit-learn, logging a model is as easy as running the following code from within an experiment:
mlflow.sklearn.log_model(model=sk_model, artifact_path="sk_model_path")
MLflow also offers a “Python function” flavor, which allows you to save any model from a third-party library (such as XGBoost, or spaCy), or even a simple Python function itself, as an MLflow model. Models created using the Python function flavor live within the same ecosystem and are able to interact with other MLflow tools through the Inference API. Although it’s impossible to plan for every use case, the Python function model flavor was designed to be as universal and flexible as possible. It allows for custom processing and logic evaluation, which can come in handy for ETL applications. Even as more “official” Model flavors come online, the generic Python function flavor will still serve as an important “catch all,” providing a bridge between Python code of any kind and MLflow’s robust tracking toolkit.
Logging a Model using the Python function flavor is a straightforward process. Any model or function can be saved as a Model, with one requirement: it must take in a pandas Dataframe as input, and return a DataFrame or NumPy array. Once that requirement is met, saving your function as an MLflow Model involves defining a Python class that inherits from PythonModel, and overriding the
.predict() method with your custom function, as described here.
Now that we’ve run through our data with several different stretch factors, the natural next step is to examine our results and look for a model that did particularly well according to the metrics that we’ve logged. MLflow makes it easy to then reload a logged model, and use it to make predictions on new data, using the following instructions:
- Click on the link for the run you’d like to load our model from.
- Copy the ‘Run ID’.
- Make note of the name of the folder the model is stored in. In our case, it’s simply named “model.”
- Enter the model folder name and Run ID as shown below:
import custom_flavor as mlflow_custom_flavor loaded_model = mlflow_custom_flavor.load_model(artifact_path='model', run_id='e26961b25c4d4402a9a5a7a679fc8052')
To show that our model is working as intended, we can now load the model and use it to measure DTW distances on two new products that we’ve created within the variable
new_sales_units:
# use the model to evaluate new products found in ‘new_sales_units’ output = loaded_model.predict(new_sales_units) print(output)
Next steps
As you can see, our MLflow Model is predicting new and unseen values with ease. And since it conforms to the Inference API, we can deploy our model on any serving platform (such as Microsoft Azure ML, or Amazon Sagemaker), deploy it as a local REST API endpoint, or create a user-defined function (UDF) that can easily be used with Spark-SQL. In closing, we demonstrated how we can use dynamic time warping to predict sales trends using the Databricks Unified Analytics Platform. Try out the Using Dynamic Time Warping and MLflow to Predict Sales Trends notebook with Databricks Runtime for Machine Learning today.
| https://databricks.com/blog/2019/04/30/using-dynamic-time-warping-and-mlflow-to-detect-sales-trends.html | CC-MAIN-2019-30 | refinedweb | 2,124 | 51.07 |
Hi, I ve been working on my assignment. Having problems. Please solve them for me. I will be thankful to you all.
What I am doing, or want to do is, read a file in a PROCESS, read it character by character, send each character, one by one through pipe, to the child of that PROCESS and convert it to Capital letter if it small lettered and wirte to another file. The code which I ve written so far is,
#include <unistd.h> #include <sys/stat.h> #include <fcntl.h> #include <sys/wait.h> #include <stdlib.h> #include <stdio.h> int main () { int ascii_value = 0; /*used to get the ASCII value of each character*/ char character = '0'; /*used to read the input file, then used to write to output file, character by character*/ int in; /*used for getting reading file info*/ int out; /*used for getting writing file info*/ int file_pipes [2]; pid_t child1; char junk_character = '0'; int i = 0; int stat_val; int exit_code; pid_t child_exit; in = open ("file.in", O_RDONLY); out = open ("file.out", O_WRONLY|O_CREAT, S_IRUSR|S_IWUSR); if (pipe (file_pipes) == 0) { child1 = fork (); for (i = 0; i < 8; i++){ if (child1 == -1) { printf ("Child not created.\n"); exit (1); } else if (child1 == 0) { read (file_pipes [0], &junk_character, 1); printf ("Child process %c \n", junk_character); } else { read (in, &junk_character, 1); write (file_pipes [1], &junk_character, 1); printf ("Parent process %c \n", junk_character); // child_exit = wait (&stat_val); // if (WIFSTOPPED (stat_val)); } } } exit (1); }
The problem is I want the parent Process to "wait" till child process reads the character from the pipe, convert it, and write it to file. and I dont know how to do it.
"WAIT", not taught in the class, studied by myself in a book, but unable to use it correctly, or as I want it to be.
This code is not properly on my algorithm, as I am a newbie in LINUX programming, I ve taken some arbitrary values to see what is really happening, like in for loop I ve set i < 8. My arbitrary program is supposed to read 8 characters from the file, character by character, send it to child through pipe, where child prints on screen showing the child process got the character. The wait calls are commented, because they were not working right. | https://www.daniweb.com/programming/software-development/threads/185378/wait-call-in-linux-programming | CC-MAIN-2018-43 | refinedweb | 380 | 77.27 |
Writing CGI Scripts in Python
The Python home page is located at. On the site there is a list of mirror sites, and the current distribution of Python.
A tutorial and other documents including the Language Reference, Library Reference, a guide on how to extend and embed the interpreter and a FAQ can be found in the doc directory of the Python Home Page ().
Two books will soon be available about Python:
Programming Python, by Mark Lutz, O'Reilly and Associates Publishers.
Internet Programming with Python, by Aaron Watters, Guido Van Rossum (the author of the language) and James Ahlstrom, from MIS Press/Henry Holt Publishers. See.
And finally, there's a newsgroup devoted to Python: comp.lang.python.
In the following text, I will assume that you run your own HTTP daemon locally. My preference is Apache, but any server will do the work, if properly configured.
And of course, you should have installed Python on your system. You'll need to configure it to use the gdbm module, since it's used in count.py.
For the examples of scripts which interface with a relational database, I've used PostGres95 (and its contributed Python module, PyGres95). PostGres95 is available from. PyGres95 is available from.
To understand the following text, you should know how to write an HTML page, have a general idea of how CGI works, and have a little background with C programming.
Listing 3, helloworld.py, is our first script. It's very simple. Run from the command line, it will print an HTML document. But you should copy it to your cgi-bin directory, then call it from your browser with the URL.
This script displays a little message and the local time. Here, you need to note only one thing: the script must send a header describing the contents of the document. This is done by the means of the Content-type header. Common values include text/html, text/plain, image/gif or image/jpeg. The header is terminated by a blank line. It is used by the client browser, and won't appear in the generated page. And, as you'll see, the script is executed, and not just displayed in the browser. Everything printed to sys.stdout by the script will be sent to the client, while error messages will go to an error log (/usr/local/etc/httpd/logs/error_log, if you are using Apache).
Listing 4 is the well-known Count script written in Python. This is used to display a graphical counter of the number of times that a particular page has been accessed.
This script imports a module called cgi, which I'll describe later. It's used to retrieve the URL parameter passed to the script. This script interfaces with gdbm (which must be included in the modules list when Python is configured) to store { URL ; access count } couples.
This is our first introduction to Python dictionaries. A dictionary is generally referred to as an “associative array” in the literature. It means that you can access arrays by keys instead of indices. For example, if you want to handle an e-mail address book, with couples like these:
"Michel", "Michel.Vanaken@ping.be" "Veronique", "Vero@home.sweet.home"
Here is how you should retrieve the address of Michel in C and in Python:
struct { char *key ; char *addr ; } email[ MAX ] ; int i ; for( i=0 ; i<MAX ; i++ ) { if( strcmp( email[ i ].key, "Michel" ) = 0 ) { printf( "%s\n", email[ i ].addr ) ; break ; } } if( i = MAX ) { printf( "Not found\n" ) ; } if email.has_key( "Michel" ) : print email[ "Michel" ] else : print "Not found"
Adding an entry with Python is also very easy :
adds an entry if Homer is not a valid key, and overwrites the old value if it is already present.
We see that Content-type here is image/x-bitmap (since the browser is waiting for an <img src=...>).
Of course, the bitmaps aren't very pretty (I drew them with a paint package, saved them as xbm files, then used a lot of keyboard macros and M-Kill/Yank rectangles in Emacs). The goal of this script is not to reinvent the wheel, but to allow readers to compare it with other versions widely available on the Net in different languages.
In order to use this script, the gdbm database must be created. Change the current directory to your cgi-bin directory, run Python, and type:
import gdbm gdbm.open( "counters.gdbm", "n", 0666 )
and exit Python with Ctrl-D.
It should also be noted that the xbm file created by this script is bad. It contains an extraneous byte (added in the print_footer() function), in order to simplify the print_digit_values() function (in this version, there are no tests for comm | http://www.linuxjournal.com/article/1368?page=0,1 | CC-MAIN-2016-18 | refinedweb | 794 | 73.27 |
Stream.Read Method
When overridden in a derived class, reads a sequence of bytes from the current stream and advances the position within the stream by the number of bytes read.
[Visual Basic] Public MustOverride Function Read( _ <InteropServices.In(), _ Out()> ByVal buffer() As Byte, _ ByVal offset As Integer, _ ByVal count As Integer _ ) As Integer [C#] public abstract int Read( [ In, Out ] byte[] buffer, int offset, int count ); [C++] public: virtual int Read( [ In, Out ] unsigned char buffer __gc[], int offset, int count ) = 0; [JScript] public abstract function Read( buffer : Byte[], offset : int, count : int ) : Value
The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been.
Example
[Visual Basic, C#, C++] The following example shows how to use Read to read a block of data.
[Visual Basic](1000) [C#] using System; using System.IO; public class Block { public static void); } } [C++] #using <mscorlib.dll> using namespace System; using namespace System::IO; int(S"number of bytes read: {0}", __box(numBytes | http://msdn.microsoft.com/en-us/library/29tb55d8(v=vs.71).aspx | CC-MAIN-2014-52 | refinedweb | 192 | 56.69 |
As Grumpy_Mike says, you can not have 2 program using the same port on a PC. That may be a cause of the problem.
Moving target is spec that keep changing. 1st I write this and then you say it must do that. Mumbo-jumbo meaning -you- don't understand, therefore it must be nonsense?
EOL is standard for End Of Line. I even spelled it out in the first sentence of that post. Did you bother to read?
You even QUOTE ME on that: they take 2 bytes per character
apparently you had a "brain freeze" and didn't understand what you previously posted.
you just kept asking pointless questions instead of posting code.
QuoteYou even QUOTE ME on that: they take 2 bytes per characterInteresting, so when C++ Strings are used in an IDE when writing code, the final machine code compiled for the processor will cause the processor to use two bytes per character as opposed to one byte per character when your method is used?
And you've got your code example now so when you're ready, let's hear why you don't like that.
I dunno why anyone would use C++ Strings on an Arduino, they take 2 bytes per character to do things that you might use on a PC. What a waste.
So what is your technical basis for your statement concerning the string functions using two bytes in the arduino environment vs. the standard C language?
I need to read a string sent from processing to the arduino. Seems simple enough. I have a processing program that justs write "abc123" to the arduino via serial and a arduino program that trys to read it and send it back. The problem I am getting is that the arduino serial monitor is showing stuff like this:...ab23ab23bcac123a123a1a11Weird. Here is my arduino code:
Rewrite, eliminating all the delay () calls. Basically delay (x) says "do nothing else until x mS elapse". So don't complain if stuff doesn't get done.
String readString(char terminator) { byte index=0; char data[18] = ""; while(index<18) { if(Serial.available() > 0) { char c = Serial.read(); if(c == terminator) break; data[index] = c; index++; } } return data;}
import processing.serial.*;Serial myPort;char TerminateChar = '*';void setup() { size(100,100); myPort = new Serial(this,Serial.list()[0],9600);}void draw() { background(50); //arduinoPort.write(composePitch(0.1)); /* char data[] = {'a','b','c','d','\n'}; for(char c:data) { arduinoPort.write(c); } */ myPort.write('a'); myPort.write('b'); myPort.write('c'); myPort.write('d'); myPort.write(','); if(myPort.available() > 0) { delay(5); String input = myPort.readStringUntil(','); print(input); } delay(200);}
... the people that always say remove the delay generally never post improved *working* code that removes the delay. Goes back to those that can vs. those that can't.
void setup() { Serial.begin(9600);} // end of setup// read into buffer until terminator received// timeout if not received within timeout_period mS// returns true if received within timeout period// returns false if not received, or no terminator received but buffer fullbool readStringUntil (char * data, const unsigned int length, const unsigned long timeout_period, const char terminator) { unsigned int index = 0; unsigned long start_time = millis (); while (index < length) { // check if time is up if (millis () - start_time >= timeout_period) return false; // no data in timeout period // if data, add to buffer if (Serial.available () > 0) { char r = Serial.read(); if (r == terminator) { data [index] = 0; // terminating null byte return true; } // end if terminator reached data [index++] = r; } // end if available } // end while loop return false; // filled up without terminator } // end of readStringUntilvoid loop() { // see if there's incoming serial data: if (Serial.available() > 0) { char foo [20]; if (readStringUntil (foo, sizeof foo, 500, '\n')) { Serial.print ("Got: "); Serial.println (foo); } else Serial.println ("timeout"); } } // end of loop
Quote from: zoomkat on Nov 12, 2011, 04:05 pm... the people that always say remove the delay generally never post improved *working* code that removes the delay. Goes back to those that can vs. those that can't.Apart from this?
/*Example of processing incoming serial data without blocking.Author: Nick GammonDate: 13 November 2011Released for public use.*/void setup(){ Serial.begin(9600);}const unsigned int MAX_INPUT = 20;char input_line [MAX_INPUT];unsigned int input_pos = 0;void loop(){ if (Serial.available () > 0) { char inByte = Serial.read (); switch (inByte) { case '\n': // end of text input_line [input_pos] = 0; // terminating null byte // terminator reached! process input_line here ... Serial.println (input_line); // reset buffer for next time input_pos = 0; break; case '\r': // discard carriage return break; default: // keep adding if not full ... allow for terminating null byte if (input_pos < (MAX_INPUT - 1)) input_line [input_pos++] = inByte; break; } // end of switch } // end of incoming data // do other stuff here like testing digital input (button presses) ...} // end of loop
case 2: // start of text input_pos = 0; break;
I don't quite understand why everyone says char arrays are better than Strings.
I need the string methods for later on. | http://forum.arduino.cc/index.php?topic=78392.msg594380 | CC-MAIN-2014-52 | refinedweb | 820 | 58.38 |
Hungarian Notation
After reading some of the topics below about code commenting and documentation, I started to wonder if anybody uses hungarian notation anymore.
I, personally, have always hated HN. I think it makes code extremely difficult to read. I much prefer to just camel-case my object names (i.e., thisIsTheNameOfAnObject).
But I know there are some people who prefer to tack little type identifiers onto their object names. I've especially seen alot of things that look like "ptrMyPointer", which I'm also not crazy about. This is akin, in my book, to calling something "strTableName" or "intCounter".
Object types, as far as I'm concerned, don't belong in the identifiers of objects.
Benji Smith
Friday, March 21, 2003
Hungarian is dying. Thankfully. The Microsoft .NET naming conventions explictly say NOT to use hungarian.
It might have made sense originally, but it was never used correctly, and required too much of a maintenance burden. Good riddance.
Chris Tavares
Friday, March 21, 2003
That being said, I do have one situation where I use hungarian.
In C++, particularly in COM code, I'll often times have the same logical data stored in different data types. Strings are the big offenders here. So I'll often have variables bstrName, lpszName, and csName, for "Name as a BSTR", "Name in zero terminated string" or "Name in a CString".
I think people who think Hungarian makes code "hard to read" have never learned to read Hungarian notation. Heck, I think Greek is hard to read. But if you actually take the time to learn it, it makes code that much MORE readable. You know so much more about what a variable is by looking at it.
I still use hungarian -- in C code it's essential, in C++ code it's helpful, we even have a standardized hungarian notation for SQL here at Fog Creek and it's extremely helpful.
Joel Spolsky
Friday, March 21, 2003
still stick with Hungarian, since it works.
Prakash S
Friday, March 21, 2003
The problem with HN is that the type of a variable can change as the system evolves.
wParam is a case in point. It isn't a WORD any more, is it? False information is worse than no information...
Keith Weiner
Friday, March 21, 2003
Joel and others,
What is it about C code that makes HN especially helpful in making the code more readable?
Would code in Java or VB or C# (or any other language, for that matter) benefit from an HN-style coding convention? Or is there something peculiar about C/C++ that makes it more essential for you to have type information in your identifiers?
Also, why is it that HN is falling our of fashion so quickly if it helps so much?
(I really am curious.)
Benji Smith
Friday, March 21, 2003
I've grown to like hungarian notation, if only because associating variable type and scope information in a variable name makes code easier to read and understand.
Furthermore, I've never found changing variable names along with variable types time consuming at all (just search and replace).
I suppose it could be irritating if you have code where a global var whose name collides with that of class member or local vars, or some similar situation. But that never really happens if you're using hungarian properly in the first place.
I've never really understood why everyone hates hungarian so much or why they think it actually makes code harder to read. Sounds like more of a cosmetic preference to me.
I do concede it requires additional discipline on the part of programmers and there's nothing that enforces it besides programmer discipline ... perhaps that is good enough reason not to use it in some environments. However, my experience has been that programmers not familiar with first ask questions about it, and then adopt it. I suppose I'M SPREADING THE DISEASE. BWAHAHAHA!!!
Nick B.
Friday, March 21, 2003
I'd actually go one step farther than Nick B. did -- if you're changing your variable types, especially between similar types (different string implementations, for instance, or the like), I find it incredibly useful to also change the name of the variable. Thus, I have to go look at every instance where it is used and make sure that things are really doing what I want.
Of course, using search and replace obviates this, so YMMV.
Steven C.
Friday, March 21, 2003
Everything gets called oWhatever or objWhatever in the end.
optimistic coder
Friday, March 21, 2003
Yeah, but at least you know its an object of type class CWhatever. Or do you ? ;)
99% of the Linux kernel is written in straight C (with a small amount of assembler for supporting the dozen or so CPU platforms it runs on). It is some of the most readable code I've ever come across, and there are no Hungarian notations floating around.
I loathe HN, and am glad to see it end of life. Not that I even encounter it in my daily work, but just to know that it will be out of its misery soon - that's a good thing.
Deconstructing HN: "lpsz"
LONG: yes, even as late as 1998, MSFT was still playing with LONG. Very quaint.
POINT to STRING: Dammit, its pointer to char.
ZERO terminated string. Aren't C "strings" by default zero terminated?
Why don't we add 'r' in there while we're at it - so we know it can be "randomly accessed". And 'x', EVERYTHING needs an X these days. Let's add x to lpszr to formally bring it into the new Millenium. Hmmm, what has 'X'? How about xenophobic? There ya go, the X for xenophobic (sort of gives you that 'closed source' warm fuzzy feeling).
xlpszrMyString[] = "Hello World";
Oh the gems... Forget HN, how 'bout these jewels:
typedef void* LPVOID; // FIXME: WTF?
What is so difficult about void*? Would it be so bad to not have to invoke the tagger just to know that you're dealing with a built in type? Kill it. Unplug the life support and say a prayer.
Nat Ersoz
Friday, March 21, 2003
Geeze, its as bad as "comment maturbation".
(Likely the best term I've come across lately, thanks!)
Quite Nick. Heaven forbid we need two instances of the same class, that'd just get confusing.
Well, there are some reasonable things like denoting member variables (like adding my or m_ or something like that to the front) and other "special case" situations where namespace collisions are inconvenient and obnoxious.
The LPVOID thing is somebody trying to turn C into their own language. Sheesh.
flamebait sr.
Friday, March 21, 2003
One of the great design points of perl is that all variables start with a symbol, a kind of hungarian notation I suppose.
Scalars start with $mystring, arrays @myarray hashes: %myhash, which makes reading code really easy once you're used to it.
Matthew Lock
Friday, March 21, 2003
I've grown to like 'simplified' hungarian.
So in a manged envrironment, sFoo means string. With plain C (or C++) szFoo means a zero-terminated string; wzFoo means a Unicode string.
But long or far, no. (Though I've seen win16 code where wz didn't mean unicode because it was #defined re-used code from people who really meant OLESTR).
Once you've simplified something to its essence (i or c for an int or counter), you certainly will have to do more than just search/replace if you change the type of a variable. And if it's just something dumb like a size (short/long), the simplified name doesn't change.
mb
Friday, March 21, 2003
Basically, if you make good design choices, like scoping variables appropriately, there is no need for HN.
Allofasudden, I'm doing some math...
y = alpha * sin( omega * t );
becomes:
dY = dAlpha * sin( dOmega *dT );
Well, of course, d means double - unless you're into math, then d connotes delta - and it reads a whole new way.
And then there is the sin() function, it reutrns double - so make it dSin(). Well, what the hell, it takes double too. Make it dSind().
And back to strings, familiar old strcat() now comes out lpszStrcat() - yet why stop there, it takes 2 strings, the second one const:
lpszStrcatlpszlpcsz();
Geeze, looks like the Bronx cheer. Whipe off the laptop screen when you're done with that, wouldja?
And don't even try for those Windows GUI functions that take about 17 parameters... Too much fun.
Does anyone use both?
By this, I mean do you differentiate between private variables etc(1) and public, e.g. parameter names?
Some places I've worked use HN for *everything*; their declarations look something like:
Dim lobjMyObject as company_objComponent.clsClassName
The also like branding things, which makes things even more confusing, especially if its done to every single publicly accessible object/name.
(This is not *my* idea, btw, and I also hate prefixing local scope variables - the first 'L' in lobjMy...).
Is there a "sensible usage guide" anywhere - preferably a heavy one that I can use for other purposes?
(1) Etc means function names to indicate the return type, constants ( I have seen the hateful mstrCONSTANT_NAME used by people)
Justin
Saturday, March 22, 2003
As a humble beginner VBA coder may I say I find it very useful to distinguish between control names: cboFirstName as opposed to lblFirstName
Same goes for remembering whats a string, what's an integer and what's a boolean.
Stephen Jones
Saturday, March 22, 2003
Speaking as someone who's still stuck, for the most part, back in VB6, I like HN. I've started learning C# and there's a lot to like about it, but it seems utterly ridiculous to call a control okButton, cancelButton or calculateTheSubtotalForMyUnderpantsPurchaseButton (hey, if you guys can come up with ridiculous examples of HN abuse...). If you're going to tack "Button" on the back of every button and "Combo" on the back of every combo box, why not use a Hungarian "cmd" wart on the beginning and take advantage of Intellisense to speed up your typing ever so slightly?
That being said, I have started moving away from using HN in parameter declarations. The API calls don't include them, so putting them in my own functions gets a bit distracting.
Sam Gray
Saturday, March 22, 2003
Good short article:
Key bad points:
1) Often wrong or misleading (e.g. wParam in Win32).
2) Ambiguous (is b a boolean or a byte, is f a flag or a float?).
3) No standard (lpsz vs. lpstr vs. lpcstr vs. lpc vs. psz etc., dw vs. ul, vs n).
4) Effort to maintain.
5) Does not take account of C's automatic promotion of types.
6) Prevents your code from being portable between 16/32/64 bit architectures.
7) Offers no automated type checking beyond what the compiler already does.
8) Lots of finger typing.
9) Enourages unnescessarily verbose variable names.
10) Conflicts with OOP principles.
Longer discussion on kuro5hin.org:
Tom Payne
Saturday, March 22, 2003
Nat & Tom:
I agree whol heartedly. All this goes to show that tracking object types is something that we should leave to the compiler. For my pennyworth, just name things clearly, concisely and consistently.
David Roper
Saturday, March 22, 2003
I would agree, Sam, however most VB etc programmers would *prefix* the control name with a mnemonic denoting the type, e.g. btnOK and btnCancel.
Of course, even with HN, code can be hard to read.
The following sample (apologies for text wrapping) illustrates, in my opinion, the need for coherent, readable variable names. Yes, I can read it...but the coder really isn't making it easy for me.
The argument for mnemonics is that they require less effort to type and are less prone to spelling errors. The arguments were made before 'option explicit' and 'auto complete' were features that were commonly available. In fact, I'm not sure even if they are available. I primarily use VB, Interdev and lately .Net; I have no idea if these functions are available in other development environments }
Sample extracted from MSDN copy of Charles Simonyi's "explication of the Hungarian notation identifier naming convention".
Sorry, that should have read "the argument for short variable names" - mnemonics are supposed to be short. Doh.
I started my programming career in VB3. The best practice back then was 3-letter-prefixed HN. Later in the VB5 or beginning VB6 era, bought a book using a single-letter-prefixed HN. I liked that better, but remember a rather heated discussion with my coworker on which HN was best. The discussion ended up with him using 3-HN, me using 1-HN.
Then I became familiar with Agile Programming and read all the good opinions on why HN is bad, and why energy should be used on selecting better name in the first place. It's years since I used HN, and I have learned it's all about habit.
I have also learned to apprechiate code which express intention. So I think HN is about implementation. Ditching HN freed my mind to concentrate more on solution, which I think is better.
Many of my classes starts out as primitive datatypes, mostly strings. Sometimes they deserve to be strings, but other times they evolve with behaviour or more complex data representations.
Thomas Eyde
Saturday, March 22, 2003
"One of the great design points of perl is that all variables start with a symbol, a kind of hungarian notation I suppose."
"Scalars start with $mystring, arrays @myarray hashes: %myhash, which makes reading code really easy once you're used to it."
No, all variables start with a symbol that indicate HOW YOU WANT TO USE THE VARIABLE. They're typecasts. Every Perl statement is littered with typecasts. You have to typecast both sides of a simple assignment like "$foo = $bar".
And this casting is useless in distinguishing simple scalars like "3" from references to "real" data structures (of arbitrary complexity). The "$" symbol LIES about what the variable holds.
rwh
Saturday, March 22, 2003
I used to use hard-core HN, but now I mostly just use the following prefixes:
p = pointer
n = count/number of
sz = null-terminated char string
wsz = null-terminated WCHAR string
is = boolean
runtime
Saturday, March 22, 2003
For Hungarian notation to work, there must an explicit company standard. Then it only takes an hour to read the standards document, and it's always clear what everything means.
Overall, Hungarian notiation added value when I started programming C in 1995 without an IDE, but it isn't worthwhile writing Java today with Eclipse. Here are some reasons for the change:
1) An IDE instantly tells you the type of any variable, so you don't need to search for the variable declaration to get that information.
2) Better programming practices, such as short functions and clear variable names, make Hungarian less necessary. Those practicies are more widespread now than they were before.
3) In C, not knowing a variable type can easily shoot you in the foot, such as when you call scanf. In Java, the compiler detects almost all type errors.
4) Other Java coders bitch endlessly when they encounter Hungarian notation.
A few months ago, my employer implemented a no-Hungarian policy. It only took me a few days to adjust, once I wrote the logic to convert the existing code.
Julian
Saturday, March 22, 2003
Any explicit convention (such as Hungarian) is very helpful when coding in a team, and for new developers to get acquianted with a large code base.
In OO, a notation which identifies the scope of a variable helps clear up what a piece of code is doing, instead of having to navigate back and forth through the source to look at where it was declared.
In a dynamically typed language such as Python, variables begining with two underscores are automatically made private variables.
For individual developers, the benefits are still there. I'm not the type who remember everything I have coded months after, and the improved readability helps.
For competent programmers, learning a notation is not much compared to picking a language. Hungarian notation throws off people a bit if they are learning C at the same time.
Chui Tey
Sunday, March 23, 2003
From Nat: "ZERO terminated string. Aren't C 'strings' by default zero terminated?"
I'm with you I that one. I never could figure it out. Is there such a thing as a non-zero terminated string in C?
Nick
Sunday, March 23, 2003
Well, it IS technically possible to implement a pascal style string using char arrays in C.
Why you'd want to is a completely different question.
Steve C.
Monday, March 24, 2003
What do HN users do in languages that allow you the create new types (i.e. classes)? Do you simply use some generic "obj" prefix for anything that is a reference? Doesn't this destroy one of the the purposes of the notation: to indicate type?
I've also noticed that in languages where HN is not used (Java) there are still some conventions to indicate scope. In Java code, I've seen use of a leading underscore to indicate that a variable is an instance variable on the object in question e.g. this._collection. Of course this is a bit redundant, because the use of "this" tells you that you're accessing an instance variable. I guess that if you use the underscore then it wil prevent aliasing problems if you forget to use "this".
The only other scopes are global and local/parameter, and if you design well then you shouldn't have globals (just Singletons, hahahahaha), so there's no need for any other scope indicator conventions.
Regarding access labels (private, protected, public), I've never felt the need for notation to indicate access, mainly because I've always used getters and setters (even for access to ancestor data).
I'm firmly in the "HNs are comments and comments lie" camp. I think HN is no use in modern strongly-typed languages (i.e. not C).
Alistair Bayley
Monday, March 24, 2003
I am really mixed up.
GUI Controls get control-type prefixes e.g. "btnWhatever" and "txtWhatever" and "chkWhatever". Combo-boxes are a bit of a mismatch, "cmb" or "cbo"? I picked this up from VB IIRC, but I apply it to C++ and Delphi and Java etc.
Variables get a scope prefix often - iWhatever for member variables, aWhatever for params etc. Think I got this from Symbian. But actually I sometimes use _whatever for member variables.
Beyond that, not much type info in the variable names.
Nice
Monday, March 24, 2003
Steve C,
Apparantly Microsoft used Pascal style strings in Excel.
Ged Byrne
Monday, March 24, 2003
I can deal with most hungarian conventions, and the lack of consistency never bothered me as long as it is somewhat intuitive. But whever I seem to go, the one convention that annoys me most is using hungarian for the name of a VB standard module. Why? To my mind, a module of string functions should be "StringFunctions" not "modStringFunctions" or "basStringFunctions".
Has there ever been a situation where someone was heard to say, "Thank God we use Hungarian for the name of the modules."
Ran Whittle
Monday, March 24, 2003
Ran, I have a tendency to do exactly that, and you're right; it's extremely silly. It's probably related to my having come to VB programming by way of Access 97, where it was advised to distinguish between tables and queries (although this is of dubious utility) and got taken waaay too far. (= 'mod' could also, theoretically, be used to distinguish a standard module from a class module, but now I'm probably being overly generous.
Sam Gray
Monday, March 24, 2003
A better place to put String functions, Ran, would be under String itself, but of course VB6/VBA don't have static methods. In Java you could group static and nonstatic methods all under class Whatever, but in VB6/VBA you have to put static methods in their own module. So one potential naming convention is to have the class Whatever and the module modWhatever; makes it easy to find your static methods.
(Or you could just move to VB.NET where you can use Shared to denote static methods.)
Kyralessa
Tuesday, March 25, 2003
Have you ever had to do this in your VB work? Can you give me a brief example? (I am not a Java guy)
Mixing static and non-static methods would seem to create an (possibly) unclear dependency between the class with the static method and the calling class, but in JAVA the advantage would seem to be that the static method prevents you from needing a third class. Since VB is forcing you to create a third structure for the method anyway, how does the naming similarity between the class and the module help? Wouldn't a neutral name for the module be better?
An ocean class and a mirror class would both have a reflect light method, but the dependency is not obvious. If VB is forcing me to create a thrid module to contain the reflection method wouldn't that module be better named Optics or something?
Interesting point, but I am having trouble envisioning this. What am I not getting?
Ran Whittle
Wednesday, March 26, 2003
Well, actually, any public procedure in a standard module is available throughout the project without having to prefix it with the module name; so the way you name your standard modules is more of an organizational issue than anything else. I guess as much as anything, using "mod" helps make sure you don't have a module named the same as a reserved word or a procedure, and it reminds you that you're dealing with a standard module so you don't go trying to instantiate it.
It also makes it easy when I'm looking for a procedure in my library; I type RLLib.mod and Autocomplete goes to all the modules, I pick the right one (RLLib.modDocumentation, say) and then when I hit the dot again I get just documentation-oriented procedures instead of having to hunt through every procedure in the library. Without a prefix like this, your standard modules are scattered through the Autocomplete list and you have to search for them. So basically the "mod" thing is just a time-saver.
Kyralessa
Wednesday, March 26, 2003
This is what I use in my C++ code:
SomeType SomeClass::SomePublicFnMember( int _firstParam, char const* _secondParam, SomeType& _thirdParam ... )
{
SomeType someLocalVar1 = m_memberVar1;
SomeType* someLocalVar2 = m_memberVar2;
SomeType& someLocalVar3 = _thirdParam;
SomeType& someLocalVar4 = somePrivateFnCall( _firstParam, _secondParam );
// just use HN where type DOES really matter
PoolOfNumbers numberPool;
int iResult = numberPool.GetResultAsInteger();
float fResult = numberPool.GetResultAsFloat();
char const* szResult = numberPool.GetResultAsString();
// though you could get rid of HN in this case as well
int intResult = numberPool.GetResultAsInteger();
float floatResult = numberPool.GetResultAsFloat();
char const* stringResult = numberPool.GetResultAsString();
// You get my point?
// About 'p' prefixes for pointers:
// What good is prepending a 'p' to a pointer's var name?
// What kind of 'useful' information would it give to you
// at a glance? Just to know if a '.' or a '->' operator
// should be placed when referencing any of its members?
// If you declared at the beginning of a function:
Client* pMyClient = getClient();
// and 30 or 40 lines later you had to use that client
// again, you could end up like this: "hmm, was it pMyClient
// or MyClient? was it a pointer or a reference or what???
// (yes you guessed, prepending a 'p' did not help, and
// made the code a little bit more cryptic.
}
Also, I use s_ for static vars and would use g_ if I ever happen to deal with globals (I doubt it)
As you can see, I don't use hungarian notation at all, just some scope context prefixes.
All I can say is, after being used to HN for well over 9 years at my company, I've ended up realizing that it makes no sense at all, except in a very few cases, where a var and its type are so tightly linked that you couldn't figure out its meaning without using some kind of HN (see example above). On top of that, my code is much more readable now, or at least, that's what it looks like to me.
I know for a matter of fact (it happened to me) that quitting HN can be a really painful change for a programmer who has been used to it for many years, but it's worth the effort.
My 2 cents. Thanks.
Trapazza
Wednesday, March 31, 2004
Recent Topics
Fog Creek Home | http://discuss.fogcreek.com/joelonsoftware/default.asp?cmd=show&ixPost=35396 | CC-MAIN-2015-40 | refinedweb | 4,155 | 63.19 |
What is the proper way to include actionscript code when the code you want to include has both variables and onEnterFrame actions?
for example, if I had two scripts;
//script A, to be included
var car_spd = 25;
onEnterFrame(){
if(Key.isDown(Key.RIGHT)){car._x += spd;}
}
//script B, the file include is added to
var truck_spd = 22;
onEnterFrame(){
if(Key.isDown(Key.RIGHT)){truck._x += spd;}
}
-----------
would I use #include "car.as" on the line after var truck_spd = 22;
or in the onEnterFrame?
or would I make two car files, like
car_variables.as
car_actions.as
and then include actions inside the onEnterFrame, and the variables outside the onEnterFrame?
if you want to add both scripts to one use:
#include "scriptA.as"
#include "scriptB.as"
or, if you wanted to add scriptB to scriptA, you could use:
#include "scriptB.as"
in scriptA.as
but no matter what you do, you'll have a problem. you can't have two onEnterFrame methods applied to the same movieclip. well, you can but only the first one will be overwritten by the 2nd. you need to apply those loops to different movieclips or incorporate the code from both into one onEnterFrame method.
so to play it safe do you think should stick the onEnterFrame stuff in functions mostly?
I know this works some of the time but there's a weird "instaneous" problem with things like acceleration. For example:
function gravity(mc){
acc = 0;
acc++;
mc._y += acc;}
onEnterFrame = function(){
gravity(ball_mc);
}
is treated as velocity, while
acc = 0;
function gravity(mc){
mc._y += acc;}
onEnterFrame = function(){
acc++; gravity(ball_mc);
}
is treated as acceleration; I'm still confused as to why, probably explaining my nervousness of using include.
using #include is exactly the same as if the code in the included file is pasted in place of the #include line of code. if you don't 100% understand that, re-read that sentence.
in fact, you should not use #include unless you completely understand what that's doing. the benefits of using it are minimal for you. and the drawback is that you don't completely understand it.
it's better that all the code in that file (or those files) be attached to that timeline so there's a better chance you understand what you're doing. | https://forums.adobe.com/thread/533063 | CC-MAIN-2018-17 | refinedweb | 384 | 66.33 |
Android
The Android distribution of openFrameworks is setup to work with either the Eclipse IDE or experimentally with the newer Android Studio IDE. The projects are currently using a custom toolchain based on Makefiles to compile and install applications.
Note: see the FAQ at the bottom of this page if you're having trouble.
The current version of the Android plugin for Eclipse has several problems with projects that mix C++ and Java code, so the projects are currently using a custom toolchain based on makefiles + Ant tasks to compile and install applications. If you are used to Android development in Eclipse, things are a little different. Check the following instructions to know how to install the development environment and compile/install applications.
Right now this is only tested on Linux and OS X. To use it on Windows, check the instructions on this link:
To use it you will need Eclipse, the Android SDK, the Android NDK, the Android Eclipse plugin and the openFrameworks for Android package.
Because of the custom build system openFrameworks uses for Android, you may need to use the exact version of the SDK and NDK specified here. For this release you should use SDK 21 and NDK r8d. Later versions will probably work but it's not guaranteed.
Summary
These instructions go into a lot of important detail, but the main steps are:
- Install Eclipse, Ant and the Android SDK and NDK.
- If you're using OS X, install the Developer Tools.
- Setup the Android Eclipse plugin.
- Download openFrameworks either from the download page, or clone from git.
- Set path variables so openFrameworks knows where SDK and NDK are.
- Import the openFrameworks projects into Eclipse.
- Compile and install one of the Android openFrameworks examples to confirm that everything works.
Installation
a) Eclipse: download the C/C++ edition of Eclipse 4.5 (Mars) or later for your platform from here:
You will need Java to use Eclipse, you can download it from java.com.
For Linux, it will probably be in the official repositories. For example, in Ubuntu:
sudo apt-get install openjdk-7-jdk
or
sudo apt-get install oracle-java8-installer
b) Android SDK: This is the software that allows you to write Android apps. openFrameworks apps are written in C/C++, but you will still need this to interact with the NDK. You can download it from:
Uncompress it in any folder on your hard disk. Later you'll need to tell eclipse where to find it.:.
d) openFrameworks for Android package: Download it from the downloads page:
You may also check out the openFrameworks source from GitHub (under master branch):
f) Set the paths for the NDK:
Edit this file:
openFrameworks/libs/openFrameworksCompiled/project/android/paths.make
This will tell openFrameworks where to find the android NDK. If you don't have this file, create it from the paths.make.default template in the same directory.
- Set the values of NDK_ROOT to their install paths
The final file has to look something like:
# Default paths.make file. # Enter the correct paths for your system and save this file as paths.make NDK_ROOT=/home/arturo/Code/android-ndk-r10e
g) Start Eclipse: You will see a pop up asking you what workspace to use. Just point it to: openFrameworks/examples/android.
h) Android Eclipse plugin:
There are detailed instructions here:
To install it, inside Eclipse go to Help > Install New Software...
Click 'Add...' and enter the following info:
Name: Android SDK
Location:
Press 'OK' and select the new repository in the "Work with:" drop down box in case it's not already selected.
You will see the SDK plugin in the list called "Developer Tools".
Select it and press 'Next' until you get to the "Review Licenses" screen. Check the "I accept the terms of the license" checkbox and press 'Finish'. Eclipse will download and install the Android plugin. Once it finishes press 'Yes' in the popup to restart Eclipse.
j) Configuring the Android plugin:
Once we have installed the Android plugin we need to tell it where to find the SDK. In Eclipse go to Window > Preferences > Android (or Eclipse > Preferences for OS X) and set the SDK location by browsing to the folder where you uncompressed the SDK before.
Now Eclipse knows where the SDK is.
Do the same for the path to the NDK.
Next you'll need to install the API files and optionally create an emulator to be able to test programs without uploading to the phone. Press the Android button in the Eclipse toolbar, or go to Window > Android SDK Manager.
First you need to install the SDK platform-tools and API package. Just click on the "Tools" tab and select the box for Android SDK Platform-tools. Then click on the "Android 4.2 (API 17)" tab and select the box for SDK Platform. It's important to use SDK version 4.2 (API 17) since the makefiles are configured for that version. It doesn't matter what version of the Android OS you want to develop for, apps are compiled using SDK 4.2, but they should work on any phone that is at least 2.2.
Once that is done you can create a new virtual device (AVD). Just select a name, the target Android version and a size for the virtual SD card.
k) Import openFrameworks into Eclipse:
Now that Eclipse has been completely configured to work with openFrameworks for Android, the last step is to import all the projects in the workspace. Go to File > Import and select General > Existing projects in the workspace...
Import in this order:
- openFrameworks/libs/openFrameworks
- openFrameworks/addons/ofxAndroid/ofAndroidLib
- openFrameworks/examples/android/androidEmptyExample
You can later import more examples if you want but it's recommended to not have many examples opened at the same time so eclipse works faster
l) Compile openFrameworks:
In the "Project Explorer" on the left side of the window, select the openFrameworks project. Choose the Android target in Project > Build Configurations > Set Active, and then click Project > Build Project. You can also do this from the toolbar by switching to the C/C++ perspective and clicking the toolbar button with a hammer.
m) Enable development in your device:
Enable USB debugging: Settings > Applications > Development > USB Debug (On Ice Cream Sandwich, this is in Settings > Developer options > USB Debugging). The device needs to be disconnected from the computer while you do this.
n) Connect the device now:
If you attempt to run your project and you don't have a device attached, Eclipse will start the Android emulator for you.
Linux users: adb needs permissions to access the USB device, follow the instructions here to fix your device permissions:
o) Now install and run an example project on the device:
- Connect the device.
- Check that it is being detected and restart adb server if necessary.
Select the AndroidRelease target. You can pick a target at Project > Build Configurations > Set Active.
Press the play button in the toolbar or Run > Run As > Android Application.
Note: If you get an error about an obsolete build.xml (or connected with that file), you can safely delete the build.xml file and recreate it using 'android update project -p
If everything went OK, the example should start on the device.
Notes
Data files should go in bin/data. During the build process everything in bin/data will get compressed to a resource in res/raw and then uncompressed and automatically copied to:
sdcard/cc.openframeworks.appname before running the app.
If you have resources that change like XML config files, it's better to generate them from the code since uploading them to the phone will overwrite the configuration
If there's no SD card in the device, examples that have resources won't work right now.
Naming of resources is really restrictive in Android, for example you cannot have several resources with the same name even if they have different extensions.
The AndroidDebug target does a different compilation process of the native code that allows it to detect linker errors that won't be detected when compiling in AndroidRelease mode. It is recommended to compile your application in AndroidDebug mode at least once or if your application crashes before starting. When installing applications on the device or emulator it is recommended to use the AndroidRelease mode since it's faster and the applications will be much smaller. There's also no support for debugging NDK applications in Eclipse, but you could theoretically use the NDK tools to debug an application compiled with AndroidDebug.
Test your application very often. Even if the last NDK allows for debugging, there's no support for native debugging in Eclipse and setting it up manually with the NDK is pretty hard. When an application crashes the debugger dies too, so it's hard to debug bad memory accesses and similar bugs.
Use the LogCat view in Eclipse. When programming for Android you cannot see the output of cout or printf, but if you use ofLog you can see its output in the LogCat. To open the view, go to Window > Show View > Others > Android > LogCat
You can see the output of the compiler in the Console tab and the output of your app in the LogCat one. Everything that is output by openFrameworks through ofLog will have an openFrameworks tag so you can use filters to see only your application's output.
There's a bug in the Android plugin that makes Eclipse build every C/C++ project in your workspace before running any app. You can avoid this by closing projects that you're not currently working on (right-click > Close Project).
Alternatively, you can create a separate workspace for your apps:
Create a folder inside openFrameworks/apps.
Open Eclipse and tell it to use this new folder as a workspace. Do the import steps again for the new folder, including openFrameworks, libs, addons but instead of importing all the examples, import only androidEmptyExample to have a template for your new projects.
Creating new applications
You can copy any of the examples and start a new application from there. It's currently far more difficult to create a project from scratch, since the makefiles and project settings contain a lot of details you would need to duplicate.
In Eclipse this is easily done by right-clicking on an existing project, selecting Copy, then right-clicking on the workspace and selecting Paste. A small Copy Project window will pop up for you to pick a new project name and location. For now, project name and directory must have the same name. Let's say your application is called myApp, this must also be the name of your folder.
After you're done copying the project, you'll need to change the name of the application in different places:
- In res/values/strings.xml change app_name value to the name of your application.
- In AndroidManifest.xml change the name of the package from cc.openframeworks.exampleName to cc.openframeworks.myApp
- in srcJava, select the package cc.openframeworks.exampleName, press F2 to rename it and call it cc.openframeworks.myApp
It's important to keep the package prefix as cc.openframeworks or some things can stop working. This will be fixed in future versions when Eclipse support for native code is better.
If the build fails:
- If it tells you that you're using an obsolete build.xml, delete it and regenerate it using 'android update project -p
'. The build.xml files in the examples directory should not contain anything especially unique.
- Are you including addons? They need to be specified in addons.make, and the case of the letters must match exactly (ie, ofxOpenCv works but ofxOpenCV won't work). This error will probably show up as missing header files or symbols.
- If you're getting a bunch of undeclared reference errors, check which version of the NDK you're using. For this version you should be using NDK r8d.
- If you get 'com.android.sdklib.build.ApkCreationException: Debug Certificate expired on
', you have to 'rm ~/.android/debug.keystore'. A new certificate will be generated automatically.
If you get the error: Make: ***This package doesn’t support your platform, probably you downloaded the wrong package?
- Open the C++ perspective: Window -> Open Perspective -> Other... -> C/C++
- Select openFrameworks in the Project Explorer
- Drop down the hammer button
- Select Android
If you get the error: Project 'androidEmptyExample' is missing required source folder: 'gen'
- Select androidEmptyExample in the Project Explorer
- Go to Menu -> Project -> Properties -> Android
- Select Android-19 in the right side
- Click OK
- Clean & Build the project, the error will be disappeared
- Revert back to Android-21 and Clean&Build again. You will have no more error.
If the build succeeds but you can't install it on the phone:
- Make sure you have your project selected in the Project Explorer before you tell it to run as an Android Application.
- If you get a message saying "Activity class ... does not exist.", make sure that its namespace is called cc.openframeworks.your_folder_name_here.OFActivity. This is what the Makefile currently expects. If it does not work even with a correct entry, and you are using an emulator, try using a real device instead.
If the build succeeds but your app crashes:
Check the libs folder. It should be populated with a library during the build. On Linux it is a file that ends with .so. If there is no library, the C++ build process is probably failing somewhere, or it is not being triggered at all. You can test the C++ build process separately using 'make AndroidDebug'. You may also see something like this in your LogCat:
E/AndroidRuntime(20743): Caused by: java.lang.UnsatisfiedLinkError: Couldn't load OFAndroidApp: findLibrary returned null E/AndroidRuntime(20743): at java.lang.Runtime.loadLibrary(Runtime.java:425) E/AndroidRuntime(20743): at java.lang.System.loadLibrary(System.java:554) E/AndroidRuntime(20743): at cc.openframeworks.OFAndroid.<clinit>(OFAndroid.java:535) E/AndroidRuntime(20743): ... 14 more
The device must have an SD card if you use resources in your openFrameworks app. Note that some devices have an internal SD card, like the Galaxy Tab 10.1.
- Make sure you've declared the appropriate permissions in AndroidManifest.xml (for instance, android.permission.CAMERA for cameras and android.permission.WRITE_EXTERNAL to interact with the SD card, which is necessary if you have resources.)
- Was bin/data accidentally erased by something or other? Does res/raw/your_project_name_resources.zip exist, and does it contain your resources? | http://openframeworks.cc/ja/setup/android-eclipse/ | CC-MAIN-2017-04 | refinedweb | 2,424 | 64.3 |
In most cases, the best way to extract information from an XML document is to parse the document with a parser compliant with SAX, the Simple API for XML. SAX defines a standard API that can be implemented on top of many different underlying parsers. The SAX approach to parsing has similarities to the HTML parsers covered in Chapter 22. networking frameworks. package supplies a factory function to build p, as well as convenience functions for simpler operation in typical cases. xml.sax also supplies exception classes, used to diagnose invalid input and other errors.
Optionally, you can also register with parser p other kinds of handlers besides the content handler. You can supply a custom error handler to use an error diagnosis strategy different from normal exception raising, and try additional possibilities are advanced and rarely used, so I do not cover them in this book.
The xml.sax package supplies exception class SAXException, and subclasses of it to support fine-grained exception handling. xml.sax also supplies three functions.
parsers_list is a list of strings, names of modules from which you would like to build your parser. make_parser tries each module in sequence until it finds one that defines a suitable function create_parser. After the modules in parsers_list, if any, make_parser continues by trying a list of default modules. make_parser terminates as soon as it can generate a parser p, and returns p.
file is a filename or a file-like object open for reading, containing an XML document. handler is generally an instance of your own subclass of class ContentHandler, covered later in this chapter. error_handler, if given, is generally an instance of your own subclass of class ErrorHandler. You don't necessarily have to subclass ContentHandler and/or ErrorHandler: you just need to provide the same interfaces as the classes do. Subclassing is often a convenient means to this end.
Function parse is equivalent to the code:
p = make_parser( ) p.setContentHandler(handler) if error_handler is not None: p.setErrorHandler(error_handler) p.parse(file)
This idiom is quite frequent in SAX parsing, so having it in a single function is convenient. When error_handler is None, the parser diagnoses errors by propagating an exception that is an instance of some subclass of SAXException.
Like parse, except that string is the XML document in string form.
xml.sax also supplies a class, which you subclass to define your content handler.
An instance h of a subclass of ContentHandler may override several methods, of which the most frequently useful are the following:
Called when textual content data is parsed. The parser may split each range of text in the document into any number of separate callbacks to h.characters. Therefore, your implementation of method characters usually buffers data, generally by appending it to a list attribute. When your class knows from some other event that all relevant data has arrived, your class calls ''.join on the list and processes the resulting string.
Called once when the document finishes.
Called when the element named tag finishes.
Called when an element finishes and the parser is handling namespaces. name and qname are like for startElementNS, covered later in this chapter.
Called once when the document begins.
Called when the element named tag begins. attrs is a mapping of attribute names to values, as covered in the next section.
Called when an element begins and the parser is handling namespaces. name is a pair (uri,localname), where uri is the namespace's URI or None, and localname is the name of the tag. qname (which stands for qualified name) is either None, if the parser does not supply the namespace prefixes feature, or the string prefix:name used in the document's text for this tag. attrs is a mapping of attribute names to values, as covered in the next section.
The last argument of methods startElement and startElementNS is an attributes object attr, a read-only mapping of attribute names to attribute values. For method startElement, names are identifier strings. For method startElementNS, names are pairs (uri,localname), where uri is the namespace's URI or None, and localname is the name of the tag. The object attr also supports methods that let you work with the qname (qualified name) of each attribute.
Returns the attribute value for a qualified name name.
Returns the (namespace, localname) pair for a qualified name name.
Returns the qualified name for name, which is a (namespace, localname) pair.
Returns the list of qualified names of all attributes.
For startElement, each qname is the same string as the corresponding name. For startElementNS, a qname is the corresponding local name for attributes not associated with a namespace (i.e., attributes whose uri is None); otherwise, the qname is the string prefix:name used in the document's text for this attribute.
The parser may reuse in later processing the attr object that it passes to methods startElement and startElementNS. If you need to keep a copy of the attributes of an element, call attr.copy( ) to get the copy.
All parsers support a method parse, which you call with the XML document as either a string or a file-like object open for reading. parse does not return until the end of the XML document. Most SAX parsers, though not all, also support incremental parsing, letting you feed the XML document to the parser a little at a time, as the document arrives from a network connection or other source. A parser p that is capable of incremental parsing supplies three more methods.
Call when the XML document is finished.
Passes to the parser a part of the document. The parser processes some prefix of the text and holds the rest in a buffer until the next call to p.feed or p.close.
Call after an XML document is finished or abandoned, before you start feeding another XML document to the parser.
The saxutils module of package xml.sax supplies two functions and a class that are quite handy to generate XML output based on an input XML document.
Returns a copy of string data with characters <, >, and & changed into entity references <, >, and &. entities is a dictionary with strings as keys and values; each substring s of data that is a key in entities is changed in escape's result string into string entities[s]. For example, to escape single and double quote characters, in addition to angle brackets and ampersands, you can call:
xml.sax.saxutils.escape(data,{'"':'"', "'":"'"})
Same as escape, but also quotes the result string to make it immediately usable as an attribute value, and escapes any quote characters that have to be escaped.
Subclasses xml.sax.ContentHandler and implements all that is needed to reproduce the input XML document on the given file-like object out with the specified encoding. When you must generate an XML document that is a small modification of the input one, you can subclass XMLGenerator, overriding methods and delegating most of the work to XMLGenerator's implementations of the methods. For example, if all you need to do is rename some tags according to a dictionary, XMLGenerator makes it quite simple, as shown in the following example:
import xml.sax, xml.sax.saxutils def tagrenamer(infile, outfile, renaming_dict): base = xml.sax.saxutils.XMLGenerator class Renamer(base): def rename(self, name): return renaming_dict.get(name, name) def startElement(self, name, attrs): base.startElement(self, self.rename(name), attrs) def endElement(self, name): base.endElement(self, self.rename(name)) xml.sax.parse(infile, Renamer(outfile))
The following example uses xml.sax to perform a typical XHTML-related task, very similar to the tasks performed in the examples of Chapter 22. The example fetches an XHTML page from the Web with urllib, parses it, and outputs all unique links from the page to other sites. The example uses urlparse to examine the links for the given site, and outputs only the links whose URLs have an explicit scheme of 'http':
import xml.sax, urllib, urlparse class LinksHandler(xml.sax.ContentHandler): def startDocument(self): self.seen = {} def startElement(self, tag, attributes): if tag != 'a': return value = attributes.get('href') if value is not None and value not in self.seen: self.seen[value] = True pieces = urlparse.urlparse(value) if pieces[0] != 'http': return print urlparse.urlunparse(pieces) p = xml.sax.make_parser( ) p.setContentHandler(LinksHandler( )) f = urllib.urlopen('') BUFSIZE = 8192 while True: data = f.read(BUFSIZE) if not data: break p.feed(data) p.close( )
This example is quite similar to the HTMLParser example in Chapter 22. With the xml.sax module, the parser and the handler are separate objects (while in the examples of Chapter 22 they coincided). Method names differ (startElement in this example versus handle_starttag in the HTMLParser example). The attributes argument is a mapping here, so its method get immediately gives us the attribute value we're interested in, while in the examples of Chapter 22 it was a sequence of (name,value) pairs, so we had to loop on the sequence until we found the right name. Despite these differences in detail, the overall structure is very close, and typical of simple event-driven parsing tasks. | http://etutorials.org/Programming/Python+tutorial/Part+IV+Network+and+Web+Programming/Chapter+23.+Structured+Text+XML/23.2+Parsing+XML+with+SAX/ | CC-MAIN-2018-34 | refinedweb | 1,528 | 56.66 |
- Implementing graphics into C++
- VC# compile problem
- plz help error remove urgent
- file handling in c
- processes created with fork
- c program of graphics
- Colored Output in C++
- Save a file, need help with filename... plz help?
- Signals to interrupt threads on Windows
- consulting firms / temp agencies
- How's this for an approach to a career in programming?
- Access 3027 Error, but I can still add records
- honest opinion, please
- Microsoft certification
- delphi 7, is checkbox ticked?
- Resolved C++ keep keeping 4 digits
- Explain Function Factorial()
- distributed version control
- managed applications
- C++ Network integration
- asp.net, Visual Web Developer & client-side web development
- C# for web apps?
- confused by modern terminology
- runtime errors.
- website server op sys's
- .NET language platforms
- client- & server-side web apps
- calculating the average of given numbers.
- Best C++ IDE?
- best skill set to re-enter programming?
- C Programming Project
- need help with c++ CODE PLEASE
- I need help with adding an item called "Keep on top" to explorer context menu, HELP?
- Best way to select a bounded random number.
- Make hashsum on MBR (Master Boot Record)
- neural network
- Ok new to C++ quick questions
- Total newb C++ problem..
- Pascal Tree pathway search help
- Microsoft Visual C++ Runtime installer disappears?
- VB.NET - Splitting up a text file & Pattern Matching
- Software Force 2.0.8 considered FORTRAN 2008 or...
- Merging two .Exe's in Windows
- Duel Combo box with Iframe
- Some pointers for a newbie coder
- Help with encrypted code
- C++ Destructors Not Calling
- How do I learn?
- Can someone tell me...
- Fortran question?
- C++ compiler
- [help needed]VB.Net distribution
- VB APP allow User to specify constraints on a chart
- problem with Reading and plotting data in VBA - excel application
- delphi 6 NMFTP and TRY Exceptions
- C++ - Hexadecimal - OR
- Loops
- zlib and visual studio 2005
- moving between folders (boost::filesystem)
- Brightness
- C \ Python interface
- WebBrowser control (Win32 API)
- c# How to pass data from two forms to the last form (confirm form)?
- C++ Code Help
- How do you get data from one form to a label in another form?
- How do you set a DateTimePicker back to todays date in c#?
- Envirommental variables everex
- Roll My Own
- C++ Key Press Check
- XWindows Command Set
- I am looking for a program that creates custom msi installation files
- How to trigger a UAC popup if needed
- Closures retaining values.
- Assembly Language stack issue
- looking for Multithreaded pattern
- Development Language Help
- stack implementation
- Help With Languages.
- Ubuntu Portrait Mode
- put values from Batch to an another program
- Graphical User Interfaces
- Shell script problem
- VB.net Tabbed browser + database
- Quick ODBC Question
- Class prototype not recognized in class implementation
- Looking for text matching algorithm /patterns
- WINWORD CONTROL in windows forms
- C---having problems with #define passed in by -D flag on gcc.
- C#
- [C++]2D Game programming?
- Question regarding sumation of floating point numbers
- Help needed with hex dump
- MySQL in Netbeans
- [C++] Need help with command line graphing
- VB script- NEED HELP!
- C# Help
- what is the best method to sort strings in real time or off line ?
- AES exchanging IVs
- Regex for VC++ 2008 to detect contents of href
- Run a Command upon an OK Click
- need some help with clicking a button to make the text align change
- How to build an operating system
- Binary Search
- Execute code from a text file? (C/C++/Assembly)
- project
- c++ void returning vs other kinds
- Learning Basic Programming, need some help
- Provide for you about Develop Adobe flex
- is there any online. web based c compiler /
- Parsing INF files
- Can you compile with this code?
- virtual sound card?
- Windows.h API C
- Hello all i don't know if such thing exist but im looking for some kind of IE toolba
- Windows Styles
- Need Help: C++ incriments in a for loop
- MASM Code
- VS2005 Problem fails with "No symbols loaded"
- Clipboard Edit
- Fiind a window name from its handle
- video storage
- OpenSSL sign
- How do I make a "Save as" in MS Visual Basic?
- Game Controller Programming...
- Resolved Filling a string with memcpy...
- VC# - Windows Media Player and Paint
- Excel Consolidation with an App.
- Where to start?
- Resolved Official C++ website
- C++ coding help -- Having issues with an if/else statement
- Resolved Custom Message Box Icons?
- Vb help
- Connecting C#.net app to sql
- Making a browser without the internet browsing
- Online fantasy sport
- c++ -- returning an array from function
- Sites like that provide EURO SOFTWARES are legitimate ?
- Need help: Results of a 4 Pass Radix sort only partially sorted.
- Excel and Vbasic and Replace
- Programming Problems Site
- How to extern #DEFINE?
- VBA: Newbie question
- what language to use for application?
- Good Beginners Program
- How does p2p network works (how client find other client in the network)
- Hello!
- Possibly need help with vbscript
- get a coder websites
- Gui Programming in C++?
- Question
- Hashtables
- Populating an array of dates with a start and an end date. VB.NET 2008
- How can I make a hash table for my sequences?
- domain check ?
- How to enter a line of data in C?
- Batch File (.bat) Coding
- looking for embedded DB other then sqlite
- Programming Help
- Can I improve my code?
- Understanding C++ References (&)
- C arrays and pointers etc. Help!
- Automatically send an email from Outlook
- Is there something like this in c++?
- challenging to program?
- Decent book to teach video game programming...
- How to UNDO last in VB.Net
- Where do I begin
- teleporters
- eVB for Pocket PC
- C Question
- c++ stringstream
- C Compiler
- Restrict the download of a file to once per IP address.
- Using malloc to dynamically increase the size of an array in C
- interface help for touchscreen till to online db.
- Video game programming
- [looking for] Dll Web host
- C# Pulling Process ID
- C Parsing
- MS types to standard types
- Where to start application coding
- What are manifest types? [solved]
- C - GNU regex library---trouble matching subgroups
- REG ADD with a .bat
- c# simple combobox
- .vbs script help- compare parent/child folders, move up & delete if identical names
- Learning C
- how to print only N numbers of string
- Making a .jar file
- Editor to remotely manage C++ code
- Simple parsing script help
- Help on template function in C++
- Printing from array function
- Conflicting types in array function declaration
- some vb help needed (6.0)
- Binary Tree Help (c Lang)
- C - trying to dynamically allocate an array of structs...
- VB Code To Update An Entry Based Upon # of Copies
- new to the forum, need homework help :).
- I need some help making a program...
- M$ Outlook Web Access (OWA)
- wysiwyg program maker?
- C loop, giving me trouble !! Can anyone help..?
- Mencoder for linux
- Find out if something is inside a given area.
- C++ So odd problem with connection from Form1 "is not namespace name... "
- C# MySQL and Datagrids...
- Data extract
- C Structure Arrays
- Need someone to double check a script
- How does scanf works?
- Finding the computer model
- C# and WPF
- Cron Expression
- manipulating text in visual foxpro
- is it possible to convert the access database e,g abc.mdb into the mysql tables
- Looking for free c++ simple profiler
- MySQL++
- sorting data saved as text in an access database
- Looking for free win32 gui (resource) editor
- Creating an undeletable file on Flash Disk
- writing and appending to file lots of times takes to long how to improve ?
- socket help?
- Log off Script
- geting full path of files and folders (boost::filesystem)
- Math
- web page script
- c# put a string in an integer
- c# reading and writing files
- VB Express & Databases
- C++ static variables loading in wrong order
- BATCH - Strings
- Makefile Optimization
- Whole Lines of Input
- GTK+ Application help (saving a file, copy and paste, and scrolling behavior)
- Advanced Sorting - Text file spaced out in blocks
- List All files recursively cross platform
- a bug in code for PIC microcontroller-help to debug
- How do I use a hyperlink in 1 VB WebBrowser to open a web page in another?
- "The memory could not be "read". "
- can i force connecting to local web server via internet network ?
- Binary Conversion from a SPI Temperature Device
- C++ Regular Expressions Library
- Total beginner
- Assembler
- what is the best url to check internet connection
- Password Program not Working
- C program for terminal keeps on shutting down.
- Help settle This (PHP, not script question)
- Info needed | http://www.codingforums.com/sitemap/f-21-p-11.html | CC-MAIN-2014-10 | refinedweb | 1,380 | 62.58 |
Release Notes for the Service Bus July 2014 Release
Updated: February 3, 2015
These release notes will be updated periodically.
For information about Microsoft Azure Service Bus pricing, see the Service Bus Pricing FAQ topic.
For a list of new features introduced in the Service Bus July 2014 release, see What's New in the Azure SDK 2.5 Release (July 2014).
Before running Service Bus applications, you must create one or more service namespaces. To create and manage your service namespaces, log on to the Azure Management portal, and click Service Bus. For more information, see How To: Create or Modify a Service Bus Service Namespace..
For information about quotas for Service Bus, see Service Bus Quotas. | https://msdn.microsoft.com/sv-se/library/hh667331.aspx | CC-MAIN-2015-11 | refinedweb | 118 | 56.76 |
Accessing CloudWatch metrics for Amazon SQS.
Amazon SQS console
.
In the list of queues, choose (check) the boxes for the queues that you want to access metrics for. You can show metrics for up to 10 queues.
Choose the Monitoring tab.
Various graphs are displayed in the SQS metrics section.
To understand what a particular graph represents, hover over
next to the desired graph, or see Available CloudWatch metrics for Amazon SQS.
To change the time range for all of the graphs at the same time, for Time Range, choose the desired time range (for example, Last Hour).
To view additional statistics for an individual graph, choose the graph.
In the CloudWatch Monitoring Details dialog box, select a Statistic, (for example, Sum). For a list of supported statistics, see Available CloudWatch metrics for Amazon SQS.
To change the time range and time interval that an individual graph displays (for example, to show a time range of the last 24 hours instead of the last 5 minutes, or to show a time period of every hour instead of every 5 minutes), with the graph's dialog box still displayed, for Time Range, choose the desired time range (for example, Last 24 Hours). For Period, choose the desired time period within the specified time range (for example, 1 Hour). When you're finished looking at the graph, choose Close.
(Optional) To work with additional CloudWatch features, on the Monitoring tab, choose View all CloudWatch metrics, and then follow the instructions in the Amazon CloudWatch console procedure.
Amazon CloudWatch console
.
On the navigation panel, choose Metrics.
Select the SQS metric namespace.
Select the Queue Metrics metric dimension.
You can now examine your Amazon SQS metrics:
To sort the metrics, use the column heading.
To graph a metric, select the check box next to the metric.
To filter by metric, choose the metric name and then choose Add to search.
For more information and additional options, see Graph Metrics and Using Amazon CloudWatch Dashboards in the Amazon CloudWatch User Guide.
AWS Command Line Interface
To access Amazon SQS metrics using the AWS CLI, run the
get-metric-statistics command.
For more information, see Get Statistics for a Metric in the Amazon CloudWatch User Guide.
CloudWatch API
To access Amazon SQS metrics using the CloudWatch API, use the
GetMetricStatistics action.
For more information, see Get Statistics for a Metric in the Amazon CloudWatch User Guide. | https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-access-metrics.html | CC-MAIN-2020-50 | refinedweb | 400 | 65.01 |
There was a learned discussion around this recently 715269 -- from which one concluded... well I wasn't sure...
The humble programmer needs a mental model for what goes on, and some way to describe it. So I wondered, what's wrong with the notion of a List in Scalar Context ?
The business of Context is peculiar to Perl, it's easy to miss first, second and possibly third time through. Bearing that in mind, this is what perldata says:. Note that the value of an actual array in scalar context is the length of the array; the following assigns the value 3 to $foo: $foo = @foo; # $foo gets 3
[download]
@foo = ('cc', '-E', $bar);
[download]
$foo = ('cc', '-E', $bar);
[download]
$foo = @foo; # $foo gets 3
[download]
But the notion of a list goes well beyond list literals, and we need a way to think about how lists are handled more generally.
You might think that lists are something to do with '()' as list constructor and ',' as list item separator. (Anonymous lists have the '[]' constructor...) Of course you would be wrong. The '()' are incidental, required because ',' is a sort of operator with lower precedence than '=', in particular. The '()' are not required at all in some contexts, notably following a list operator (eg print !).
Anyway, in the following, the stuff to the right of the '=' appear to be lists:
@r = () ;
@r = undef ; # sets $r[0] = undef
@r = 1234 ;
@r = (12, 23, 34, 45) ;
@r = 'a'..'z' ;
@s = ('a'..'z')[4..15] ;
@t = @r[7..16];
@q = 9..13 ;
@u = @r[@q] ;
@r{'a'..'z'} = (1..26) ;
@w = @r{'k'..'p'} ;
@p = 'k'..'p' ;
@w = @r{@p} ;
[download]
$r = () ; # Case 1 -- $r -> undef
$r = 1234 ; # Case 2 -- $r -> 1234
$r = (12, 23, 34, 45) ; # Case 3 -- $r -> 45
$r = 'a'..'z' ; # Case 4 -- error
$s = ('a'..'z')[4..15] ; # Case 5 -- $r -> 'p'
$t = @r[7..16] ; # Case 6 -- $r -> 'q'
$u = $r[@q] ; # Case 7 -- $r -> 'n'
$w = @r{'k'..'p'} ; # Case 8 -- $w -> 16
$w = @r{@p} ; # Case 9 -- $w -> 16
[download]
Of course, we entirely understand that an Array or a Hash in Scalar Context is quite different:
$r = @r ; # Case 1 -- $r -> 26
$v = %r ; # Case 2 -- $r -> 19/32
$r = (@r) ; # Case 3 -- $r -> 26
$v = (%r) ; # Case 4 -- $r -> 19/32
[download]
$r = (@r, @r) ;
$v = (%r, %r) ;
[download]
OK. The mental model so far, in expressions:
Now, subroutines: we have to understand that Context has a long arm -- the Context in which a subroutine is called, will, at a distance, apply to any expression yielding a return value. And different calls may provide different Contexts. This is slippery stuff, but at least the results are entirely consistent. If you take the examples above and replace each rhs by a call to a subroutine which returns that rhs, then the results are identical. But, it is worth noting that, for example, replacing:
return ($a, $b, $c, $d) ;
[download]
my @return = () ;
...
push @return, $a, $b ;
...
push @return, $c, $d ;
...
return @return ;
[download]
This is telling us something. A subroutine that is serious about being called in either Scalar or List Context, must ensure that it returns an explicit, well defined result in both cases -- which for Scalar may be the same as Array in a Scalar Context, or List in a Scalar Context, or anything else it cares to return. Further, it is foolish to call a subroutine in a Context which it does not explicitly support -- which implies that it is foolish to call a subroutine defined only to return a list, in a Scalar Context, and expect List in a Scalar Context semantics.
This is also true of Perl functions. There is no point assuming that a Perl function that returns a list in List Context will do anything useful at all in Scalar Context, except where it is defined to do so.
So, after all this (and thank you for reading this far!) am I clapping my hands ?
Well, yes I am. It seems to me that List in a Scalar Context is a perfectly good way of understanding the behaviour of lists in expressions, as discussed above.
However, the application is limited. In particular, it cannot be used to predict what a list returning operation will do if you decide to use it in Scalar Context. To that extent, the problem with the notion of List in a Scalar Context is that it is deceptively general, and dangerously so.
My conclusion: List in Scalar Context is a more general notion than Comma in Scalar Context, but you have to understand its limitations. And, it is much more important to understand that knowing that a Perl function or a subroutine returns a List in List Context, tells you nothing useful about its Scalar Context behavious.
I shall continue to clap, but in a quiet and restrained manner, as befits someone tip-toing through the minefield that is Perl Semantics !
BTW: I'm gagging to know how to describe why this:
$r = () = 1..27 ;
[download]
Finally, and for extra points (and points mean prizes), how does one describe the difference between a list and an array ?
In reply to | http://www.perlmonks.org/?parent=719099;node_id=3333 | CC-MAIN-2014-42 | refinedweb | 863 | 71.24 |
Theoffs in Missing Data Conventions
There are a number of schemes that have been developed to indicate the presence of missing data in an array of data. Generally, they revolve around one of two strategies: using a mask which globally indicates missing values, or choosing a sentinel value whichoffs: use of a separate mask array requires allocation of an additional boolean array which adds overhead in both storage and computation. A sentinel value reduces the range of valid values which can be represented, and may require extra (often non-optimized) logic in CPU &
Pandas’ choice for how to handle missing values is constrained by its reliance on the NumPy package, which does not have a built-in notion of NA values for non-floating-point datatypes.
Pandas could have followed R’s lead in specifying bit patterns for each individual data type to indicate nullness, but this approach turns out to be rather unwieldy in Pandas’ case.ly amount of overhead in special-casing various operations for various types, and the implementation would probably require a new fork of the NumPy package.
NumPy does have support for masked arrays – i.e. arrays which have a separate boolean mask array attached which marks
The first sentinel value used by Pandas is
None.
None is a Python singleton object which is often used for missing data in Python code.
Because it is a Python object, it cannot be used in any arbitrary NumPy/Pandas array, but only in arrays with data type
'object' (i.e. arrays of Python objects):
import numpy as np import pandas as pd
vals1 = np.array([1, None, 3, 4]) vals1()
The use of Python objects in an array also means that if you perform aggregations like
sum() or
min() across an array with a
None value, you will generally get an error.
vals1.sum()
This is because addition in Python between an integer and
None is undefined.
NaN: Missing Numerical Data
The other missing data representation,
NaN (acronym for Not a Number) is different: it is a special floating-point value that is recognized by all systems which use the standard IEEE floating-point representation.
vals2 = np.array([1, np.nan, 3, 4]) vals2.dtype
Notice that NumPy chose a native floating-point type for this array: this means that unlike the object array above, this array supports fast operations pushed into compiled code.
You should be aware that
NaN is a bit like a data virus which infects any other object it touches.
Regardless of the operation, the result of arithmetic with
NaN will be another
NaN:
1 + np.nan
0 * np.nan
Note that this means that the sum or maximum of the values is well-defined (it doesn’t result in an error), but not very useful:
vals2.sum(), vals2.min(), vals2.max()
Keep in mind that
NaN is specifically a floating-point value; there is no equivalent NaN value for integers, strings, or other types.
Examples
Each of the above sentinel representations has its place, and Pandas is built to handle the two of them nearly interchangeably, and will convert between the two sentinel values where appropriate:
data = pd.Series([1, np.nan, 2, None]) data
Keep in mind, though, that because
None is a Python object type and
NaN is a floating-point type, there is no in-type NA representation in Pandas for string, boolean, or integer values.
Pandas gets around this by type-casting in cases where NA values are present.
For example, if we set a value in an integer array to
np.nan, it will automatically be up-cast to a floating point type to accommodate the NA:
x = pd.Series(range(2), dtype=int) x[0] = None x
Notice that in addition to casting the integer array to floating point, Pandas automatically converts the
None to a
NaN value.
Though this type of magic may feel a bit hackish compared to the more unified approach to NA values in domain-specific languages like R, the Pandas sentinel/casting approach works well in practice and in my experience only rarely causes issues.
Here is a short table of the upcasting conventions in Pandas when NA values are introduced:
Keep in mind that in Pandas, string data is always stored with an
object dtype.
Operating on Null Values finish this section with a brief discussion and demonstration of these routines:
Detecting Null Values
Pandas data structures have two useful methods for detecting null data:
isnull() and
notnull().
Either one will return a boolean mask over the data, for example:
data = pd.Series([1, np.nan, 'hello', None])
data.isnull()
As mentioned in section X.X, boolean masks can be used directly as a Series or DataFrame index:
data[data.notnull()]
The
isnull() and
notnull() methods produce similar boolean results for DataFrames.
Dropping Null Values
In addition to the masking used above, there are the convenience methods,
dropna() and
fillna(), which respectively remove NA values and fill-in NA values.
For a Series, the result is straightforward:
data.dropna()=1) which are all null values:
df[3] = np.nan df
df.dropna(axis=1, how='all')
Keep in mind that to be a bit more clear, you can use
axis='rows' rather than
axis=0 and
axis='columns' rather than
axis=1.
For finer-grained control, the
thresh parameter lets you specify a minimum number of non-null values for the row/column to be kept:
df.dropna(thresh=3)
Here the first and last row have been dropped, because they contain only two non-null values.
Filling Null Values
We can fill NA entries with a single value, such as zero:
data.fillna(0)
We can specify a forward-fill to propagate the previous value forward:
# forward-fill data.fillna(method='ffill')
Or we can specify a back-fill to propagate the next values backward:
# back-fill data.fillna(method='bfill')
For DataFrames, the options are similar, but we can also specify an
axis along-which the fills take place:
df
df.fillna(method='ffill', axis=1)
Notice that if a previous value is not available during a forward fill, the NA value remains.
Summary
Here we have seen how Pandas handles null/NA values, and seen a few DataFrame and Series methods specifically designed to handle these missing values in a uniform way. Missing data is a fact of life in real-world datasets, and we’ll see these tools often in the following chapters. | https://www.oreilly.com/learning/handling-missing-data | CC-MAIN-2017-04 | refinedweb | 1,077 | 50.57 |
homebrew on Leopard fails to install
There is a report of homebrew failing to install on OS X Leopard.
Following the directions at this url there is an error:
There is a report of homebrew failing to install on OS X Leopard.
Following the directions at this url there is an error:
It would be great to get pygame packaged in homebrew. So people could just do brew install pygame.
I got this working on OSX Lion:
I tried to create a basic foruma to do it, but it failed to build so far.
I tried this `/usr/local/share/python/pip install pygame` But there is an error with the the camera module on x64 with SeqGrabComponent and friends stuff on osx Lion.
Homebrew doesn't duplicate other packaging systems, so they won't accept a pygame brew (since it would duplicate pip install pygame).
As of today, I can successfully(*) install pygame for the system python using:
brew install sdl sdl_image sdl_mixer sdl_ttf smpeg portmidi sudo pip install hg+
I think this is about as easy as it can get.
(*)When I do pygame.init(), it crashes with some issue about bumpy arrays (float96) but I guess this is a separate issue.
Cool, thanks for the info.
I fixed that float96 issue with numpy now.
So some updated instructions:
How to install via the python in your path (likely the apple python):
How to install to the brew supplied python.
Marking as resolved, since these instructions seem to work for people.
Issue
#58was marked as a duplicate of this issue.
i just tried the first brew instruction to install pygame and i received the error message "hg command not found"
Hello,
ah, ok. I've updated the instructions to list installing mercurial first.
I hope it works this time.
cheers,
Install mercurial from the .dmg from Or install mercurial via brew:
How to install via the python in your path (likely the apple python):
How to install to the brew supplied python.
@beniamino38, your float96 problem might be related to numpy package.
Try
pip install numpy, it might fix your problem.
I am receiving the following error when trying to
Any idea? :)
Never mind my last error, I fixed it. Was caused by a bad Mercurial plugin I installed (hg attic).
This error is unrelated to PyGame or Homebrew. Carry on now, move along with your daily lives, nothing to see here. :)
I get:
Error: This is a head-only formula; install with `brew install --HEAD smpeg`
So having to do that step separately.
Installed all deps via homebrew. Running latest xcode 4.3.
When I run /usr/local/share/python/pip install hg+ It fails with an clang error.
In file included from src/scale_mmx.c:33:
src/scale_mmx64.c:424;
^~~~~
In file included from src/scale_mmx.c:33:
src/scale_mmx64.c:499;
^~~~~
2 errors generated.
error: command '/usr/bin/clang' failed with exit status 1
I think the patch at fixes the problem. I don't know how to apply the patch.
I retried my build with Xcode 4.2.1: and It builds pygame just fine for pyhone homebrew Should I open another ticket for the Xcode 4.3 issue?
This problem is still very much alive for users who have Lion and Xcode 4.3 installed. Modifying the file src/scale_mmx64.c to use the 'movslq' opcode instead of 'movsxl' allows successful installation. It seems like an easy problem to fix in the sourse base.
I'm running into the same problem as acid junk on OSX Lion/XCode 4.3.
I am running OSX Snow Leopard 10.6.8 with Python 2.7 and am running into an issue.
At one point I had both Fink and Homebrew installed. I don't know if this is causing the below issue but I thought I'd mention it. Right now I only have Homebrew installed. While recently trying to install Python Imaging Library, I decided to clean up my installations by having Homebrew handle what it could. I uninstalled Python and ran the code suggested in this thread by @illume:
then I ran:
This all seemed to work really well. Except trying
came up with libraries that Homebrew didn't create, and it suggested deleting them. Knowing that there might be some consequences, I went ahead a deleted them.
Now when I run some very simple code to load a .jpg (without using PIL), it fails to load and I receive the following message:
Here is what 'brew list' looks like:
Here is the relevant python code:
I'm worried this has something to do with the libraries Homebrew asked me to delete, though sdl sdl_image sdl_mixer sdl_ttf smpeg portmidi are all installed. Any help would be much appreciated!
I have a little homebrew tap (=repository of additional formulae) especially for python projects.
I have made an initial attempt. It builds fine but I have not really tested this yet.
So basically:
brew tap samueljohn/python
brew install pygame
should work. Perhaps it needs a bit fixing, bur for me this builds.
@samueljohn Your recipe compiled, but I had to
pip install noseand
brew install --HEAD smpegbefore
brew install pygamewould work (with a long list of auto-installed dependencies). Unfortunately though, I wasn't able to get Python to import the module.
I ran into a couple of problems trying to install pygame for the first time on my mac with HomeBrew, so here's what I did.
I couldn't install smpeg, I was getting
so I just tapped homebrew/headonly:
brew install --HEAD smpegworked fine but during pygame installation, I also had to tap another repo.
Then, I installed nose with pip and gfortran from brew
After everything was fine
And voilà,
import pygameworks! :)
i'm triying to follow you guys .. but my brew fail to install smpeg .. i got this :
debug log :
thanks a lot for your help!
I'm having the same problem as @epifanio. My log file: log.txt
use revision 398 of smpeg instead of the latest (which uses sdl2). Add ":revision => '398'" to the head line in brew edit smpeg
For those unfamiliar with ruby, keep in mind you also need to add a comma, so the head line ends up looking like:
Hi! I installed pygame from the samueljohn/python tap on os x 10.8.3, and I cannot play ogg files for some reason, it says "Unrecognized music format". I cannot decide if it's a brew related problem, or something else. Anybody has a hint?
I was able to install pygame using both samueljohn/homebrew-python tag and the solution posted here on my OS X 10.7.5 Lion. However, I am also facing the issue of not having music (.wav and .ogg file) playing. I know this is a Mac Homebrew specific issue, because the script runs fine on my Windows machine and I can hear the sounds. This is extremely frustrating. I'm about to try installing python via homebrew once again and try to install the .dmg file provided in the pygame downloads page. I am running out of options and might need to resort to installing python from python.org instead of using homebrew. BTW, I've tried using MacPorts to install Python and Pygame successfully. Unfortunately, I had problems with wxpython (which is why I tried Homebrew). Really running out of options for a decent Pygame support on Mac without going through days of "making it work"
@zsombor brew install libvorbis and then reinstall sdl_mixer, then it works with ogg-files.
thanks, I'll try!
@acidjunk It builds if you force compilation with GCC:
CC=gcc pip install pygame
Compilation works with clang now. I removed smpeg from the homebrew install instructions because smpeg is not in homebrew any more.
Issue
#139was marked as a duplicate of this issue.
The pygame downloads page links to @illume comment for "homebrew install instructions".
I also had an issue with sound, would recommend using:
Would it be possible to create a wiki page here on bitbucket and link to it?
blurback's comment worked for me tonight on Mavericks . the pip install hg+http needed sudo. Thanks for the tip!
We are on github now. | https://bitbucket.org/pygame/pygame/issues/82/homebrew-on-leopard-fails-to-install | CC-MAIN-2020-29 | refinedweb | 1,375 | 75 |
Ticket #10377 (new defect)
x86 debug registers don't work reliably in a guest OS
Description
Some of x86 debug register events appears lost and not delivered to a guest OS.
To reproduce: Use gdb on a Linux guest to run a simple program like this:
#include <stdio.h> int main(int argc, char ** argv) { char ch = 0; for (;;) { ch = ch + 1; printf("%d\n", ch); } return 0; }
set a hardware breakpoint:
(gdb) watch ch Hardware watchpoint 2: ch (gdb)
do cont until the breakpoint fails to trigger, e.g.:
(gdb) cont Continuing. 111 112 Hardware watchpoint 2: ch Old value = 111 'o' New value = 113 'q' main (argc=1, argv=0x7fffffffe638) at t.c:8 8 printf("%d\n", ch); (gdb)
in this case writing 112 to ch did not trigger the breakpoint.
The problem appears reproducible with any host, guest or debugger.
Note: See TracTickets for help on using tickets.
Duplicate of #477? Is VT-x / AMD-V enabled for your VM? Please attach a VBox.log file. | https://www.virtualbox.org/ticket/10377 | CC-MAIN-2015-35 | refinedweb | 170 | 82.54 |
It's important to be able to calculate the rate of return on your investment portfolio. This information is necessary to understand your past investment earnings, get a picture of your current financial status and help you make decisions in the future. Unfortunately, many investors do not understand how to accurately calculate their return on investment for their portfolios, especially when it comes to mixed investments and accounting for deposits and withdrawals. When done correctly, calculating your return on investment is very useful for any investor.
Things You'll Need
- Financial calculator
- Portfolio statements
Determine your portfolio balance for a set period of time. The best way to calculate your rate of return is annually, since that is how interest rates are calculated and it's information you should know for your taxes. For each of your investment accounts, look up your balance on the last day of the previous year and then on the first day of the year before that.
Write down the dates and amounts of any deposits and withdrawals for each of these accounts. You will need to account for these separately when calculating your rate of return. Write all the amounts from the year for which you're calculating the rate of return.
List your balances, deposits and withdrawals in terms of cash flow. Your initial balance and each deposit should be considered negative numbers since this is money coming out of your pocket. Similarly, your withdrawals and final balance are positive numbers since this is money going into your pocket.
Use a tool that evaluates the internal rate of return (IRR) equation to calculate your return on investment. A financial calculator will already have the equation programmed, as will Microsoft Excel and other financial computer programs. The IRR equation is complex and difficult to solve by hand with just pencil and paper, especially if you have several deposits and withdrawals.
Enter your initial investment, deposits and withdrawals and final balance in your tool in terms of cash flow as you listed them previously. If you use a financial calculator, enter these values using the cash flow entering instructions specific to your calculator. In Excel, simply enter each cash flow value into a unique cell, all in one column. Then, use the IRR function on your calculator or computer program to calculate the rate of return for your portfolio.
Tips & Warnings
- Rate of return will be displayed as a percentage.
- If you made money, your rate will be positive, and if you lost money, it will be negative.
Related Searches
References
- Photo Credit investment image by Kit Wai Chan from Fotolia.com a blue empty binder image by timur1970 from Fotolia.com accounts image by Alexey Klementiev from Fotolia.com Accounting and finance image by MAXFX from Fotolia.com
Resources
You May Also Like
- What Is Rate of Return?
Time is money, or so the saying goes. Money isn't spent in a capitalist society without someone looking for something in return,...
- How to Calculate Value of Portfolio Returns
Portfolio return is the net change in a portfolio's value over a period. In the simplest case where the portfolio experiences neither...
- How to Use Excel To Calculate Investment Portfolio Returns
Excel contains an internal rate of return formula that calculates your annual portfolio return rate.
- How to Calculate Investment Rates of Return
Looking at a year-end statement that shows the "rate of return" on investments only tells part of the story on return. Whether...
- How to Calculate Portfolio Returns & Deviations
At first glance, it might be hard to understand what portfolio returns have to do with deviations. In the world of investments,...
- How to Calculate an IRA Return
An IRA is a retirement savings account where investors can invest in one of many types of investments. Each type of investment...
- How to Calculate Portfolio Return
When you invest in a mutual fund, you receive quarterly and annual reports that outline the performance of the fund; but it...
- What Is the Rate of Return on an Index Fund?
Index mutual funds take some of the guesswork away from investment decisions. These funds own a set basket of securities to mirror... | http://www.ehow.com/how_6503666_calculate-portfolio_s-rate-return.html | CC-MAIN-2017-22 | refinedweb | 696 | 54.52 |
US20090173801A1 - Water alteration structure and system having below surface valves or wave reflectors - Google PatentsWater alteration structure and system having below surface valves or wave reflectors Download PDF
Info
- Publication number
- US20090173801A1US20090173801A1 US12/012,225 US1222508A US2009173801A1 US 20090173801 A1 US20090173801 A1 US 20090173801A1 US 1222508 A US1222508 A US 1222508A US 2009173801 A1 US2009173801 A1 US 2009173801A1
- Authority
- US
- United States
- Prior art keywords
- water
- conduit
- holding vessel
- vessel
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted 158
- 230000004075 alteration Effects 0.000 title description 9
- 239000002352 surface water Substances 0.000 claims abstract description 23
- 239000000203 mixture Substances 0.000 claims description 16
- 230000000694 effects Effects 0.000 claims description 15
- 239000000463 material Substances 0.000 claims description 13
- 239000000126 substance Substances 0.000 claims description 4
- 238000010790 dilution Methods 0.000 claims 2
- 238000000034 method Methods 0.000 description
- 238000010586 diagram Methods 0.000 description 9
- 230000001965 increased Effects 0.000 description 9
- 239000002344 surface layer Substances 0.000 description 9
- 235000015097 nutrients Nutrition 0.000 description 8
- 238000004590 computer program Methods 0.000 description 6
- 230000001141 propulsive Effects 0.000 description 6
- 239000003643 water by type Substances 0.000 description 6
- 230000015556 catabolic process Effects 0.000 description 4
- 238000009833 condensation Methods 0.000 description 4
- 230000005494 condensation Effects 0.000 description 4
- 230000004059 degradation Effects 0.000 description 4
- 238000006731 degradation reaction Methods 0.000 description 4
- 230000002708 enhancing Effects 0.000 description 4
- 230000004048 modification Effects 0.000 description 4
- 238000006011 modification reaction Methods 0.000 description 4
- 240000002925 Caryocar villosum 3
- 229910052799 carbon Inorganic materials 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- description 2
- 238000010276 construction Methods 0.000 description 2
- 238000001816 cooling Methods 0.000 description 2
- 230000001419 dependent Effects 0.000 description 2
- 238000001704 evaporation Methods 0.000 description 2
- 239000004744 fabric Substances 0.000 description 2
- 239000010410 layer Substances 0.000 description 2
- 230000000051 modifying Effects 0.000 description 2
- 230000003287 optical Effects 0.000 description 2
- 239000004033 plastic Substances 0.000 description 2
- 229920003023 plastic Polymers 0.000 description 2
- 229920000642 polymer Polymers 0.000 description 2
- RAHZWNYVWXNFOC-UHFFFAOYSA-N sulfur40OTc0LDE3MS42MjIgTCA5NS4xMDEyLDE1OC41UuMTAxMiwxNTguNTcyIEwgMTE3LjcwNSwxNDUuNTIyMDMuNDYnIHk9JzIwMi4143uNTU3LDM2Ljk5MTkgTCAzNC41MjMxLDMwLjQygNTcuMDksNDMuODczOSw0My44NzMYxLjAwODQsMzcuMDgwMDg0LDM3LjA4NjYgTCA3Mi4zNzU4LDQzLjY4LjA2NTQnyOTc5JyB5PSczOC40NzM0zEuMjMyMicgeT0nNTguMDY1NCc RAHZWNYVWXNFOC-UHFFFAOYSA-N 0.000 description 2
- IOMSIZWMDMOESQ-UHFFFAOYSA-N [C]1([C](C)[C]2[C][C][C]([C]([C]([C][C]1)C)N=O)CCC2)(C)C Chemical compound [C]1([C](C)[C]2[C][C][C]([C]([C]([C][C]1)C)N=O)CCC2)(C)C IOMSIZWMDMOESQ-UHFFFAOYSA-N 0.000 description 1
- 239000011358 absorbing material Substances 0.000 description 1
- 230000003416 augmentation Effects 0.000 description 1
- 230000031018 biological processes and functions Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 239000001569 carbon dioxide Substances 0.000 description 1
- 239000000969 carrier Substances 0.000 description 1
- 239000002131 composite material Substances 0.000 description 1
- 238000005094 computer simulation Methods 0.000 description 1
- 239000012141 concentrate Substances 0.000 description 1
- 239000004035 construction material Substances 0.000 description 1
- 230000001808 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 230000005574 cross-species transmission Effects 0.000 description 1
- 238000010192 crystallographic characterization Methods 0.000 description 1
- 230000003292 diminished Effects 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 239000003344 environmental pollutant Substances 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 238000007667 floating Methods 0.000 description 1
- 239000012530 fluid Substances 0.000 description 1
- 239000000446 fuel Substances 0.000 description 1
- 239000007789 gas Substances 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 230000003137 locomotive Effects 0.000 description 1
- 229910052751 metal Inorganic materials 0.000 description 1
- 239000002184 metal Substances 0.000 description 1
- 229910001092 metal group alloy Inorganic materials 0.000 description 1
- 150000002739 metals Chemical class9000003607 modifier Substances 0.000 description 1
- 150000002823 nitrates Chemical class 0.000 description 1
- 229910052813 nitrogen oxide Inorganic materials 0.000 description 1
- 235000021317 phosphate Nutrition 0.000 description 1
- 150000003013 phosphoric acid derivatives Chemical class 0.000 description 1
- 231100000719 pollutant Toxicity 0.000 description 1
- 239000004417 polycarbonate Substances 0.000 description 1
- 229920000515 polycarbonate Polymers 0.000 description 1
- 239000000843 powder Substances 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 238000005067 remediation Methods 0.000 description 1
- 150000004760 silicates Chemical class 0.000 description 1
- 230000003068 static Effects 0.000 description 1
- 230000001629 suppression Effects 0.000 description 1
- 230000036962 time dependent Effects 0.000 description 1
- 238000005406 washing Methods 0.000 description 1
- 239000002023 wood
Abstract the lowermost portion is configured to be submerged. At least one conduit extends from the lower side of the holding vessel. The at least one conduit has a length extending to a depth at which a property of water at the depth is substantially different from that of the water at the surface. The at least one conduit or the holding vessel.
Description
-)).
- For purposes of the USPTO extra-statutory requirements, the present application claims priority to U.S. patent application Ser. No. 12/006,823, entitled WATER ALTERATION STRUCTURE, Lowell L. Wood, Jr. and Victoria Y. H. Wood15, entitled WATER ALTERATION STRUCTURE MOVEMENT METHOD05, entitled WATER ALTERATION STRUCTURE APPLICATIONS04, entitled WATER ALTERATION STRUCTURE RISK MANAGEMENT OR ECOLOGICAL ALTERATION MANAGEMENT
- SYSTEMS.
- The description herein generally relates to the field of alteration of water temperatures and dissolved and particulate matter in bodies of water such as oceans, lakes, rivers, structures capable aiding in the alteration and control of such surface and subsurface water temperatures and compositions as well as of many applications and methods of making and using the same. The description also generally relates to the field of structures for altering the weather conditions for the genesis of and/or the maintenance of a hurricane and/or near-hurricane type weather.
- Conventionally, there is a need for structures for applications related to altering water properties such that there is a diminished contrast between near surface waters and waters found at greater depth, such as but not limited to atmospheric management, weather management, hurricane suppression, hurricane prevention, hurricane intensity modulation, hurricane deflection, biological augmentation, biological remediation, etc.
-. Also various structural elements may be employed depending on design choices of the system designer.
- In one aspect, a system for altering water properties includes a holding vessel configured to hold water, the holding vessel has at least one wall. The at least one wall extends at least above a mean surface water level. At least one conduit extends downward. The at least one conduit has a length extending to a depth at which at least one property of water at the depth differs substantially from that of water at the surface. The system further.
- In another aspect, a system for altering water properties includes a tub portion configured to hold water. The tub portion may be formed as a container having at least one side extending at least above a mean surface water level and the tub portion may be at least partially submerged. At least one conduit extends downward from the tub portion. The at least one conduit may have a length extending to a depth at which at least one property of water at the depth differs substantially from that of water at the surface The system further includes at least one aperture formed in at least one of the tub portion or the at least one conduit and located at a distance below the mean surface water level and at least one one way valve is coupled to the at least one aperture and allowing flow of water in only one direction.
- In yet another aspect, a system for altering water properties includes a holding vessel configured to hold water. The holding vessel has at least one wall. The at least one wall extends at least above a mean local surface water level. The system also includes at least one conduit extending downward from the holding vessel. The at least one conduit has a length extending to a depth below the local water surface. Further, the system includes at least one wave reflector. The wave reflector is coupled to at least one of the holding vessel and the at least one conduit, the wave reflector configured to reflect wavefronts toward the holding vessel.
- In addition to the foregoing, other system aspects are described in the claims, drawings, and text forming a part of the present disclosure.
- In addition to the foregoing, various other method and/or system and/or program product aspects are set forth and described in the teachings such as text (e.g., claims and/or detailed description) and/or drawings of the present disclosure.
-.
- The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description, of which:
FIG. 1is an exemplary diagram of a generalized vessel for holding and moving water. FIG. 2is an exemplary diagram of a pattern of deployment of a plurality of vessels similar to that of FIG. 1. FIG. 3is another exemplary diagram of a pattern of deployment of a plurality of vessels similar to that of FIG. 1. FIG. 4is an exemplary diagram of a generalized vessel for holding and moving water and depicting on-board propulsive devices. FIG. 5is a simplified depiction of a deployment of a plurality of vessels such as those depicted in FIG. 1in a geographic region, the simplified depiction not intended to imply any specific scale and the depiction of the vessels and watercraft not drawn to scale. FIG. 6is an exemplary block diagram of a generalized vessel for holding and moving water having an auxiliary conduit. FIG. 7is an alternative exemplary diagram of a generalized vessel for holding and moving water. FIG. 8is a top view of an exemplary generalized vessel for holding and moving water.
-.). Further, those skilled in the art will recognize that the mechanical structures disclosed are exemplary structures and many other forms and materials may be employed in constructing such structures.
- The need for mechanisms, devices, methods, systems, and structures which may be used to alter hurricanes either in their strength, their origin, or their direction of travel has been realized. Billions of dollars of destruction and damage is regularly attributable to hurricanes and hurricane-like tropical storms. Thus, great interest has arisen in controlling these powerful storms. Conventionally, it has been proposed to deploy barges equipped with upward-pointing jet engines into the paths of hurricanes. The jet engines would theoretically be configured to create mini-cyclones which would consume oceanic energy and thus prevent or suppress such high powered weather systems.
- Another potential solution involves the use of Dyn-O-Gel, a polymer that may absorb as much as 1,500 times its own weight in water to deprive a hurricane of atmospheric moisture. The concept involves the use of airplanes to drop Dyno-O-Gel into hurricanes to deprive them of moisture and thus of latent heat. The powder is suggested to convert into a gel when the atmospheric moisture is captured and would then reliquify when it encounters higher-osmolality ocean water.
- The jet engine solution has been met with great skepticism and the cost and feasibility are very uncertain. The use of a moisture absorbing gel requires the deployment of a huge volume of the absorbing gel material. Also, the use of a moisture absorbing material is still in the testing phase. The gel material after absorbing moisture falls to the ocean and may dissolve. Depending on the chemical composition of the gel, the gel be regarded as a pollutant. These various shortcomings considered, it may be desirable to provide a different approach for altering hurricane and/or tropical storm activity by providing a structure and method that solves at least one or more deficiencies of other systems known in the art. Because hurricanes and other tropical storms derive their energy from warm ocean water, it is logical to harness the great energies of the Earth's fluid envelopes to suppress or alter hurricanes or other tropical storms, and/or to employ the powers of motion within these envelopes over long time-intervals to modulate at least one property of an envelope that is exploited over much shorter time-scales and/or much more limited spatial scales for energizing a hurricane.
- A potential solution for cooling warm surface water has been explored by researchers with Atmocean, Inc. of Santa Fe, N.Mex. In the Atmocean approach, an elongated tube with a buoy is used to create an upwelling effect. The upwelling effect drives cold water from a depth to the surface.
-.
- Referring now to
FIG. 1, a cross-section of a water-borne structure or vessel 100 is depicted..
- −2 to 3°. Referring to
FIG. 2, an array 200 of vessels 100 is depicted. Such vessels may be arranged in a plurality of ways, including but not limited to positioning them in a water region in an array, such as array 200, in a random placement 300, as depicted in FIG. 3, within a region, and/or in any other arrangement. It may be desirable to determine the most suitable and/or optimal arrangements through computer modeling or other techniques. Referring now to FIG. 5, it may be seen that many vessels 100 may be dispersed throughout hurricane prone regions such as but not limited to the Gulf of Mexico 500 or the Caribbean Sea. Vessels 100, depicted for illustrative purposes only and not to scale are shown being dispersed in a relatively random pattern. Boats 510 may be used to tow vessels to desired locations. Also, other means such as self propulsion, airlifting, towing, or other methods to move vessels may also be used. In another embodiment, vessels 100 may be anchored in a variety of ways, including but not limited to anchored to the bottom, anchored using subsurface weights, anchored using sea anchors, or anchored to each other.
- Referring again to
FIG. 1vessel 100 may be one vessel in a system for altering water surface temperature. As such the tub 130 is one type of a holding vessel configured to hold water. Tub 130 includes at least one wall 110 (but may include multiple walls) which are coupled to a bottom portion 115. The at least one wall 110 extends above the water level and the bottom portion 115 is configured to be submerged. At least one conduit 125 extends from the bottom of the tub 130. In some, but not necessarily all, applications, it may be desirable for conduit 125 to have a length that extends to a depth at which the temperature of water at the depth (e.g., below line 160) is substantially less than water at the surface
- Vessel 100 may be held buoyant by both the materials used to construct vessel 100 as well as at least one ballast tank 120. Tanks 120 may be coupled to at least one pump 170 and at least one valve 180. In accordance with an exemplary embodiment, the height of wall 110 above the average water surface level may be varied and controlled depending on the time-varying height of the local waves and depending on the desired flow rate through conduit 125. One way in which to vary the height of wall 110 above the average water level 145 is to pump atmospheric air into tank 120 or out of tank 120. In conjunction with pump 170, valve 180 may be used to draw water into or out of tanks 120. In accordance with another exemplary embodiment, it may be desirable to have the ability to mechanically raise or lower at least a portion of wall 110 relative to the rest of the structure. It may also be desirable to control the raising and lowering of all or part of wall 110 in response to conditions adjacent to vessel 100 (e.g., water temperature, wave height).
- In another embodiment, water flow into vessel 100 may be via openings 175 in wall 110, rather than over the top of wall 110. Such openings may be configured to preferentially allow flow into vessel 100, instead of out of the vessel. In some embodiments, openings 175 are passive, using flaps, checkvalves, rotating drums, or similar mechanisms to support unidirectional flow. In other embodiments, openings 175 are actively controlled, utilizing motorized or variable setpoint flow control devices such as valves, flaps, rotating drums, or similar mechanisms.
- Walls 110 and bottom portion 115 as well as other parts of vessel 100 may be constructed of any of a variety of materials and preferably of a material substantially resistant to degradation in water. For example, vessel 110 may be substantially constructed from concrete, polymers, at least one of metals or metal alloys, fabrics, reinforced fabrics, and/or composite materials. In some applications, it may be advantageous for the construction materials to resist degradation only for a limited period of time, as degradation of the structure may diminish or eliminate expenditures associated with post-application retrieval of the structure. Furthermore, it may be advantageous to allow the structure to sink below the water surface or to the water bottom after application, where degradation may be preferred to occur. In an exemplary embodiment, conduit 125 may be formed of any of a variety of materials including both rigid materials and flexible materials. It may also be desirable to use stiffening structures in the conduit depending on the type of materials used. Such stiffening structures aid in maintaining the shape of conduit 125 under pressure and under stress. The stiffening structures may be placed at one or more locations along the length of the conduit. Further, such stiffening structures may be deployable and may aid in deployment along with a conduit which may also be deployable from tub 130. In yet another exemplary embodiment, it may be desirable to form vessel 100 from a material which would be known to degrade over time. This may be useful if it is known that a vessel has a desired lifespan or term of usefulness. Once the vessel's use is done, the vessel could sink or be sunk where it could subsequently degrade at a subsurface location.
- In an exemplary embodiment the holding vessel or tub 110 has a horizontal cross sectional dimension that is substantially greater than a horizontal cross sectional dimension of the conduit 125. In another exemplary embodiment holding vessel or tub 100 has a horizontal cross sectional dimension and/or shape that is substantially the same as the cross sectional dimension and/or shape as conduit 125. The pressure head created by the weight of the column of water above the conduit which is above the line 145 is used to pressurize the descending water in conduit 125. In an exemplary embodiment it may be convenient to have a power source 190 on board vessel 100. Power source 190 may be any of a variety of power sources, including but not limited to a solar cell, a wind generator, a wave power generator, a turbine turned by water descending in the conduit, a battery power source, a fuel powered power source, a thermoelectric power source, etc.
- In accordance with an embodiment a vessel 600 is depicted in
FIG. 6having a conduit 625. Disposed within conduit 625 is a turbine 630. Turbine 630 may be driven by the flow of water through conduit 625. Turbine 630 may be utilized for a variety of purposes including but not limited to generating power for a variety of purposes, maintaining buoyancy, controlling buoyancy, driving other turbines, increasing the water flow through conduit 625, etc.
- In accordance with other exemplary embodiments it may be desirable to equip vessel 100 with one or more propulsion systems. Referring now to
FIG. 4, a propulsion system may be in the form of a sail or a propeller 450 or other motorized propulsion producing device. Such a propulsive device may be powered by power source 460 or any other source of power. The propulsion system may be used to control the positioning of vessel 100 such that it remains at a specific area, moves in a specific pattern, and/or moves to a completely new location. A rudder 470, fin, sail, or other steering device may be coupled to vessel 100 to help guide vessel 100. Alternatively, a sail or a propeller 450 may be configured to change orientation to provide steering for vessel 100. Because different depths in bodies of water often have currents flowing in different directions or with different speeds, a propulsion system may involve the use of one or more sea anchors with mechanisms and control systems to effect proper placement of the sea anchors. In one exemplary embodiment, it may be desirable to construct vessel 100 with a shape such that its coefficient of drag is less in one direction than another. This may be accomplished by making the dimensions of vessel 100 longer in one direction than another, for example. Other methods and shapes may also be used to produce such an effect.
- In accordance with another exemplary embodiment, vessel 100 may include a movable conduit in which at least a portion 480 of conduit 425 may be movable in various directions in order to provide a propulsive force in a desired direction. In another exemplary embodiment, the movable portion may be one or more openings 455 which may be controlled, along the length of conduit 425. The propulsive force generated by water flow through conduit 425 may also be varied by opening and closing opening 485 using a controlled access device such as door 490 (or other aperture control devices such as but not limited to valves, etc.) that may control the flow rate through conduit 425.
- In an exemplary embodiment walls 410 of vessel 100 may be formed of multiple wall segments or multiple wall portions. The multiple wall segments of walls 410 form a closed shape to contain water within vessel 100. The wall segments may be curved or straight, may be movable in such a way as to help let in water or alternatively to release water. In one exemplary embodiment, vessel 100 may be permanently anchored to the water floor, temporarily anchored to the water floor, tied to a subsurface weight, tied to one or more sea-anchors, or may be freely movable. In one exemplary embodiment, vessel 100 is movable by coupling the vessel to a propulsive vessel, such as a tugboat or the like. In another exemplary embodiment, vessel 100 may include a wind capture structure, such as a sail 495 that may be used to harness wind power for moving the holding vessel. The wind capture structure may be used for controlling the amount that the at least one wall of the holding vessel extends above the water, that is it may also be used to provide lift to the holding vessel 100 structure, to help control how far above the water level that walls 410 extend. Sea anchors are functionally similar to sails, except instead of extending up into the atmosphere they are deployed into the water. Thus, sea anchors or current capture structures may be used for similar purposes as sails and wind capture structures. These include moving or holding the vehicle, generating power, providing lift, etc. Also in an exemplary embodiment, vessel 100 may have a ramp area 475 or other wave altering area that helps to control how the waves move water over the sides of vessel 100. This wave-altering structure may be a static or passive structure, or it may be an active device or structure having one or more components that are actuated or powered in order to have a time-dependent character or activity; the power for such purposes may be derived from any of the power-providing means discussed above, or may be derived from the wave-action itself. Further, in an exemplary embodiment, vessel 100 may have any of a variety of shapes including but not limited to circular, elongated, non-circular, shaped in a manner which aids in passively controlling orientation relative to wave motion, etc.
- Referring now to
FIG. 6, a vessel 600 is depicted. Vessel 600 includes a conduit 625 in which a turbine 630 is driven by the downward flow of water through conduit 625. In an exemplary embodiment, the turning turbine may be used for a variety of purposes including providing electric or mechanical power, providing control, providing propulsive power, etc. In one exemplary embodiment a secondary conduit 640 (which represents one or more conduits) may be used to bring cold water (such as below an ocean thermocline 650) to upper areas of warmer surface water to aid in cooling the warm surface water regions, enhance mixing of subsurface water with surface water, enhance mixing of surface water with subsurface water, raising subsurface nutrients to the surface, bringing surface nutrients to subsurface regions, etc. In one exemplary embodiment, turbine 630 may be used to drive a second turbine 635 in conduit 640 that pumps water up through conduit 640. Further, other mechanisms may be used to bring subsurface water upwards. In most places, deeper waters contain a greater concentration of nutrients than surface water, so conduit 640 may also be used to transport dissolved nutrients from deeper waters to waters near the surface of the body of water.
- It may be desirable to construct a vessel such as vessel 100 of
FIG. 1, in a variety of shapes and configurations depending on the use and on the desired performance characteristics. Referring to FIG. 7, an alternative exemplary embodiment of a vessel 700 is depicted in the form of an elongate tube that is designed to capture water at its top 710 and thereby develop a pressure head (as described earlier) to push surface water to subsurface levels. Walls 720 may form the structure without a bottom portion or a tub portion as shown in other exemplary embodiments. Walls 720 may be formed of any of a variety of materials as also described earlier. Water may be carried to any of a variety of subsurface levels depending on the design and desired performance, including but not limited to below thermocline 730. Alternatively, vessel 700 may capture water entering through one or more apertures at subsurface levels such as one way apertures 740, 750, and 760. One way apertures 740, 750, and 760 may include passive or active valves depending upon the desired performance or operating characteristics desired. In one embodiment, apertures 740, 750, and 760 may be designed to let in water based on specific conditions such as but not limited to changes in pressure, changes in temperature, etc. In accordance with another exemplary embodiment, apertures 740, 750, and 760 may vary in size. Such variance may be actively controlled or alternatively the variance in size may be built in such that the variance is based on the distance from the top of vessel. Further the distribution of apertures may be uniform over the outer surface of the vessel or may vary depending on design considerations. Apertures 740, 750, and 760 may include any of a variety of types of valves including various types of flap valves, stop valves, check valves, gate valves, etc. In an exemplary embodiment apertures 740, 750, and 760 may include one way valves that let water flow into the interior of vessel 700 and thereby contributing to the flow through vessel 700. Further, apertures 740, 750, and 760 may be controlled such that they are able to selectively let water in or let water flow out of vessel 700.
- In accordance with a particular exemplary embodiment, the configuration of vessel 700 may include a hollow cylindrical floating enclosure approximately 90 meters in diameter and 20 meters deep, these dimensions being for example and not limiting. Buoyancy may be provided by an inflated ring with a low freeboard. The cylindrical surface may include a continuous wall of non-return (one way) valves (e.g. valves 740, 750, and 760). Below this is a tube made of a plastic with slightly negative buoyancy long enough to reach down to the thermocline. Water can flow into the cylinder with very little resistance but cannot flow back through the valve wall. This will initially raise a head inside the cylinder which would be similar in magnitude to the amplitude of each incoming wave. But as soon as the head exceeds that needed to overcome the difference in density between the warm surface and the cold water below the thermocline, water will start to flow downwards. The head needed for a surface temperature of 25 C., constant down to a depth of 200 meters followed by a drop to 10 C. at the thermocline may be approximately 0.14 meters. The inertia of the water column inside the down-tube may be large such that the velocity will be almost steady and water will be sucked into the cylinder during any lull of the incoming waves.
- Generally, horizontal displacements of long period waves will go deeper than short period waves but short period waves do their displacing more often. The transfer rates for all periods between 6 and 10 second may be nearly the same for valve wall depths of 15 to 20 meters. For a 20 meter wall depth the flow volume would be about 2.8 cubic meters per second for each meter width of installation and each meter amplitude of the incoming wave. For example, in a one-meter amplitude regular wave this would be about 250 cubic meters per second for a 90 meter diameter unit. The thermal energy transfer would be this flow rate times the specific heat of water (4.28 MJ per cubic meter Kelvin) times the temperature difference of 15 C. This comes out to approximately 16 GW. Although these rates, sizes, and energy calculations are provided, the claims should not be viewed as so limiting. In fact these rates, times, and energies are provided merely for example.
- It has been shown that in addition to the drag from any currents there may also be a large force due to the momentum of waves which depends on the square of incident plus reflected minus transmitted amplitudes. Vessels 700 and like vessels may not have a firm attachment points for a mooring. Because nearly all ocean systems consist of gyres it may be possible to let vessels 700 or like vessels to drift freely but to release the water in a direction to produce a controlled amount of thrust towards the center of the local gyre as discussed above. In one exemplary embodiment, the walls having valves 740, 750, and 760 may be slightly elastic or resilient in nature. By providing such elastic walls, a negative head may be developed locally with the period of each incoming wave. Such elastic walls may be formed of but are not limited to being formed of polycarbonate or certain plastics which may have a desired elastic response.
- In another exemplary embodiment, a cap or top of vessel 700 may be used such that vessel 700 does not rely on wave overtopping to develop a pressure head. Rather, vessel 700 utilizes one way valves 740, 750, and 760 to develop a pressure head by receiving water flow through the valves from the under surface wave motion. In operation water flows through the valves and is forced downward to flow out the bottom of vessel 700. In one exemplary embodiment there may be substantially no pocket of air between the top or cover and the ocean surface. In another exemplary embodiment there may be a pocket of air between the top or cover and the ocean surface which when water is forced into vessel 700, may cause the air to become pressurized and thereby aid in producing downward flow. In one exemplary embodiment, the valves may be nonuniformly distributed on vessel 700. Further, the size, type and other specifications may be adapted to the depth. Some or more of the valves may be dynamic in nature with variable and possibly controllable characteristics. The controllability of the valves may be based on a variety of control algorithms including set points and the like.
- Referring now to
FIG. 8, a top view of a water alteration vessel 800 is depicted. Vessel 800 includes a water receiving portion defined in an exemplary manner by a cylinder 810. Cylinder 810 having a receiving area 820 which receives overtopping wave water. In one exemplary embodiment, a ramp region 830 helps to aid in bringing water into receiving area 820. Further, in an exemplar embodiment walls 840 which may be partially submerged and partially elevated above a mean surface level may act as wave reflectors or wave concentrators. In operation walls 840 help to concentrate incoming waves, having wavefronts moving for example in the direction 850, so that the waves are more apt to spill over the top of cylinder 810 into receiving area 820. Further, in an exemplary embodiment the orientation of vessel 800 may be controlled such that waves are incident in substantially the direction 850 in order to increase the efficiency with which wave water enters receiving area 820. In one exemplary embodiment, reflectors 840 may be movable. Such movability may be done in a passive or active manner and may be used to increase or decrease the efficiency in which wave water is directed toward holding vessel 810.
- The capability of the systems and methods described to enhance mixing between surface and subsurface water can be useful for other applications in addition to thermally based weather modification. One such application is to aid in ocean uptake of atmospheric CO2. Oceans are natural CO2 sinks, and represent the largest active carbon sink on Earth. This role as a sink for CO2 is driven by two processes, the solubility pump and the biological pump. The former is primarily a function of differential CO2 solubility in seawater and the thermohaline circulation, while the latter is the sum of a series of biological processes that transport carbon (in organic and inorganic forms) from the near-surface euphotic zone to the ocean's interior.
- The solubility pump is a nonbiological effect wherein CO2 first dissolves in the surface layer of the ocean. This surface layer can become saturated and its ability to absorb more carbon dioxide declines. Use of this system to promote mixing between surface and subsurface water enhances the efficacy of the solubility pump in at least two manners; by net transport of CO2-enriched water downwards, as well as by reducing the temperature of the surface water, thereby increasing its ability to dissolve CO2. The solubility pump enhancement induced by this system can also be useful for increasing ocean uptake of other atmospheric gases, such as methane, nitrogen oxides, sulfur dioxide, etc.
- While the biological pump currently has a limited effect on uptake of CO2 introduced into the atmosphere by human activities, there have been suggestions to increase the carbon sequestration efficiency of the oceans by increasing the surface-layer phytoplankton concentration, which is in many instances limited by insufficient surface-layer nutrients. Nitrates, silicates, and phosphates are, for instance, largely absent from surface waters, yet are considerably more abundant in subsurface oceans. These exemplary systems and methods can be used to mix surface and subsurface waters, thereby transporting nutrients towards the surface. This increase in surface nutrients can be useful in increasing the CO2 biological pump by increasing surface-layer phytoplankton concentrations. Increases in surface-layer nutrients can also be useful for increasing populations of water-based fauna or flora, both in oceans and in other water bodies, such as lakes, reservoirs, rivers, etc.
- The benefits of these systems and methods in increasing mixing between surface and subsurface water is not restricted to use in oceans, but can also be beneficial in other bodies of water, such as lakes, reservoirs, rivers, etc.
- electrically, magnetically or electromagnetically actuated devices,, mechanical, fluidic or other analogs. Those skilled in the art will also appreciate that examples of electromechanical systems include but are not limited to a variety of consumer electronics systems, as well as other systems such as motorized transport systems, factory automation systems, security systems, and communication/computing systems. Those skilled in the art will recognize that electromechanical of (a) an air conveyance (e.g., an airplane, rocket, hovercraft, helicopter, etc.), (b) a ground conveyance (e.g., a car, truck, locomotive, tank, armored personnel carrier, etc.), (c) a building (e.g., a home, warehouse, office, etc.), (d) an appliance (e.g., a refrigerator, a washing machine, a dryer, etc.), (e) a communications system (e.g., a networked system, a telephone system, a Voice over IP system, etc.), (f) a business entity (e.g., an Internet Service Provider (ISP) entity such as Comcast Cable, Quest, Southwestern Bell, etc), or (g) a wired/wireless services entity such as Sprint, Cingular, Nextel, etc.), etc.
-.
- are not expressly set forth herein for sake of clarity.
- (64)
1. A system for altering water properties in an outdoor body of water,_comprising:
a holding vessel configured to hold water, the holding vessel having at least one wall, the at least one wall extending at least above a mean surface water level;
at least one conduit extending downward from the holding vessel, the at least one conduit having a length extending to a depth at which at least one property of water at the depth differs substantially from that of water at the surface;.
2. The system of
claim 1, wherein the wall also forms the conduit.
3. The system of
claim 1, wherein the conduit has substantially the same cross sectional area as the cross sectional area of the holding vessel that extends above the water.
4. The system of
claim 1, wherein the conduit has a larger cross sectional area than the cross sectional area of the holding vessel that extends above the water.
5. The system of
claim 1, wherein the conduit has a smaller cross sectional area than the cross sectional area of the holding vessel that extends above the water.
6-15. (canceled)
16. The system of
claim 1, wherein the conduit extends to at least the depth of the thermocline.
17-99. (canceled)
100. The system of
claim 1, wherein the properties being altered comprise at least one of, water temperature, dissolved-gas concentration, water chemical composition or water biological composition.
101. The system of
claim 1, wherein the system is used to influence biological activity.
102. The system of
claim 1, wherein the at least one apertures are distributed around at least one of the at least one conduit or the holding vessel.
103. The system of
claim 1, wherein the at least one apertures are distributed at various depths on at least one of the at least one conduit or the holding vessel.
104. The system of
claim 1, wherein the at least one apertures are distributed at various depths on at least one of the at least one conduit or the holding vessel and the size of the apertures are based on the depth.
105. The system of
claim 1, wherein the at least one apertures are distributed at various depths on at least one of the at least one conduit or the holding vessel and the distribution is based on the depth.
106. The system of
claim 1, wherein the at least one one way valves allow the flow of water into the at least one conduit or the holding vessel.
107. The system of
claim 1, wherein the at least one one way valves are selectively changeable such that the one way valves can be configured to allow flow in a chosen direction.
108. The system of
claim 1, wherein the at least one valves include passive valves.
109. The system of
claim 1, wherein the at least on valves include flap valves.
110. The system of
claim 1, further comprising:
at least one wave reflector, the wave reflector coupled to at least one of the holding vessel and the at least one conduit, the wave reflector configured to reflect wavefronts toward the holding vessel.
111. The system of
claim 1, further comprising
a top coupled to the holding vessel, the top preventing the in flow of water by overtopping.
112. A system for altering water properties of an outdoor body of water, comprising:
a tub portion configured to hold water, the tub portion formed as a container having at least one side extending at least above a mean surface water level and the tub portion being at least partially submerged;
at least one conduit extending downward from the tub portion, the at least one conduit having a length extending to a depth at which at least one property of water at the depth differs substantially from that of water at the surface;
at least one aperture formed in at least one of the tub portion or the at least one conduit and located at a distance below the mean surface water level; and
at least one one way valve coupled to the at least one aperture and allowing flow of water in only one direction.
113. The system of
claim 112, wherein the side also forms the conduit.
114-205. (canceled)
206. The system of
claim 112, wherein the conduit includes one or more openings located at preferred depths.
207. The system of
claim 112, wherein the conduit comprises multiple openings distributed over an area to effect, at depth, rapid dilution of outflow water with ambient water.
208. The system of
claim 112, wherein the properties being altered comprise at least one of, water temperature, dissolved-gas concentration, water chemical composition or water biological composition.
209. The system of
claim 112, wherein at least one of the tub portion or the conduit contains at least one internal structure that is configured to maintain a desired cross sectional shape or condition in at least a portion of the tub portion or conduit.
210. The system of
claim 112, wherein at least one of the tub portion or the conduit has a cross sectional shape that is configured to reduce drag when moving relative to the water.
211. The system of
claim 112, wherein at least part of the vessel wall has openings.
212. The system of
claim 112, wherein at least part of the vessel wall has openings and the openings are configured to allow water to flow substantially in only one direction.
213. The system of
claim 112, wherein the system is used to influence biological activity.
214. The system of
claim 112, wherein at least part of the tub portion has openings and the openings are controllable.
215. The system of
claim 112, wherein the at least one apertures are distributed around at least one of the at least one conduit or the holding vessel.
216. The system of
claim 112, wherein the at least one apertures are distributed at various depths on at least one of the at least one conduit or the holding vessel.
217. The system of
claim 112, wherein the at least one apertures are distributed at various depths on at least one of the at least one conduit or the holding vessel and the size of the apertures are based on the depth.
218. The system of
claim 112, wherein the at least one apertures are distributed at various depths on at least one of the at least one conduit or the holding vessel and the distribution is based on the depth.
219. The system of
claim 112, wherein the at least one one way valves allow the flow of water into the at least one conduit or the holding vessel.
220. The system of
claim 112, wherein the at least one one way valves are selectively changeable such that the one way valves can be configured to allow flow in a chosen direction.
221. The system of
claim 112, wherein the at least one valves include passive valves.
222. The system of
claim 112, wherein the at least on valves include flap valves.
223. The system of
claim 112, further comprising:
at least one wave reflector, the wave reflector coupled to at least one of the tub portion and the at least one conduit, the wave reflector configured to reflect wavefronts toward the tub portion.
224. The system of
claim 112, further comprising
a top coupled to the holding vessel, the top preventing the in flow of water by overtopping.
225. A system for altering water properties of an outdoor body of water, comprising:
a holding vessel configured to hold water, the holding vessel having at least one wall, the at least one wall extending at least above a mean local surface water level;
at least one conduit extending downward from the holding vessel, the at least one conduit having a length extending to a depth below the local water surface; and
at least one wave reflector, the wave reflector coupled to at least one of the holding vessel and the at least one conduit, the wave reflector configured to reflect wavefronts toward the holding vessel.
226. The system of
claim 225, wherein the wall also forms the conduit.
227-323. (canceled)
324. The system of
claim 225, wherein the conduit includes one or more openings located at preferred depths.
325. The system of
claim 225, wherein the conduit comprises multiple openings distributed over an area to effect, at depth, rapid dilution of outflow water with ambient water.
326. The system of
claim 225, wherein the conditions being altered comprise at least one of, water temperature, dissolved-gas concentration, water chemical composition or water biological composition.
327. The system of
claim 225, wherein at least one of the conduit and the holding vessel comprises stiffening structures.
328. The system of
claim 225, wherein at least one of the conduit and the holding vessel comprises stiffening structures and the stiffening structures are deployable.
329. The system of
claim 225, wherein the conduit extends to at least the depth of the thermocline.
330. The system of
claim 225, wherein at least one of the holding vessel or the conduit contains at least one internal structure that is configured to maintain a desired cross sectional shape or condition in at least a portion of the holding vessel or conduit.
331. The system of
claim 225, wherein at least one of the holding vessel or the conduit has a cross sectional shape that is configured to reduce drag when moving relative to the water.
332. The system of
claim 225, wherein at least part of the vessel wall has openings and the openings are configured to allow water flow substantially in only one direction.
333. The system of
claim 225, wherein at least part of the vessel wall has openings and the openings are controllable.
334. The system of
claim 225, wherein the at least one wall comprises multiple wall segments.
335. The system of
claim 225, further comprising:.
336. The system of
claim 225, wherein the wave reflectors are at least partially buoyant.
337. The system of
claim 225, wherein the wave reflectors are substantially curved structures.
338. The system of
claim 225, wherein the wave reflectors are substantially straight structures.
339. The system of
claim 225, wherein the wave reflectors are substantially formed of the same material as at least one of the holding vessel or the at least one conduit.
340. The system of
claim 225, wherein the wave reflectors are substantially movable.
341. The system of
claim 225, wherein the wave reflectors are substantially movable in a passive manner.
342. The system of
claim 225, wherein the wave reflectors are substantially movable in an active manner.
Priority Applications (5)
Applications Claiming Priority (1)
Related Parent Applications (1)
Publications (2)
Family
ID=40843778
Family Applications (1)
Country Status (1)
Cited By (10)
Families Citing this family (3)
Citations (48)
Family Cites Families (6)
- 2008
- 2008-01-30 US US12/012,225 patent/US8715496B2/en active Active | https://patents.google.com/patent/US20090173801A1/en | CC-MAIN-2021-39 | refinedweb | 8,055 | 51.18 |
Why Visualization?
A human mind can easily read and understand a chart or image as compared to looking thru the large chunk of data in a table or a spreadsheet, Data visualization is a powerful technique to visualize and get meaningful insight from the datasets. For example, Bar graphs can easily tell you the monthly or yearly trends for your sale, expenses etc. or Pie chart can help you to find out what is the percentage of items from the total value.
So Data visualization is a more readable format to see thru the data.
Visualization for Python developers
Coming to the choice of Visualization library for Python developers, there are not much tools/packages available to give the entire flexibility like D3.js, which is a low level visualization library in Javascript and gives more control to the user to render the graphs as per their choice. Bokeh is comparable to D3.js to some extent but still D3.js has an upper hand in terms of flexibility and to do anything you like to do in the graphs.. Primarily it uses python and is an open source & cross platform and developed on the lines of MATLAB. So matlab users will feel some similarity between both of these packages.
Installing Matplotlib
Install anaconda package from here, it’s a data science platform in python, it includes scientific and analytical packages, Some of these packages are NumPy, pandas, SciPy, Matplotlib, and Jupyter. After installing Anaconda, Open python terminal and check for the matplotlib version installed using below command on the terminal
import matplotlib
matplotlib.__version__
Ipython Notebook
Go to the terminal and navigate to the folder where you want to store all your tutorials and type “Ipython Notebook” and see a Browser with your current folder structure should be opened, something as shown in the image below
Plotting a simple graph
import matplotlib pyplot which is collection of command style functions similar to MATLAB. There are several function in pyplot which is used to create the figure and manages other parameters like plot area, labels, axes etc.
Now using matplotlib only thing which you need to worry is about your data, once it is fed to matplotlib and specify the kind of graph that you are looking for then it does everything for you.
%matplotlib inline # plot the graph in notebook
import matplotlib.pyplot as plt #Import the pyplot library
We would be using numpy arange to create the array of values from 0 to 4. So here is the output for np.arange(4):
array([0, 1, 2, 3, 4])
This output will be fed to the plot function of pyplot:
plt.plot(np.arange(4))
Matplotlib considers the value of Y-axis and generates automatically the X-axis values accordingly. However plot function can take both X & Y range of values.
X=[1,4,6,7]
Y=[1,3,4,5]
plt.plot(Y,X)
Line Style:
In the plot function there is a third optional argument, which is to control the line style, by default the value is blue dash(-) i.e. b-, however you can change it as per your choice, so if you wanted to plot red circles in the above graph instead of blue solid line, enter the third arg as shown:
plt.plot(np.arange(7),'ro')
Similarly for blue dotted lines ‘b–‘ and green traingles ‘g^’ can be used.
Line Property:
Line has many properties that user can set as per their requirements, Complete list of the line attributes are shown here
I will show how to use two of the line attributes linestyle(ls) & linewidth(lw), linestyle sets the type of line to be used like solid’, ‘dashed’, ‘dashdot’, ‘dotted’ and linewidth sets the width of the line, see this example below
X=[1,4,6,7]
Y=[1,3,4,5]
plt.plot(Y,X,lw=5.0,ls='--')
Axis:
One of the important factor in any of the graph is axis and it’s label, you can use plot axis function to define the X-axis and Y-axis labels for your graph, You need to provide the following parameters for axis function [xmin, xmax, ymin, ymax]
plt.plot(np.arange(7))
plt.axis([0,6,0,8])
In the above plot you can see how the X-axis is divided between 0 to 6 and Y-axis divided between 0 to 8 as defined in the axis function.
Also, if you want to add any text anywhere in the graph then you can use plot text() function, which takes mandatorily 3 arguments i.e. X,Y as data coordinates and S as String or text other parameters fontfict and withdash are optional. so you can use this function if you want to add text in the above graph stating that this is a straight line or want to provide text for the coordinates
We will plot a simple barchart here and will add text(6th item) in red color to the plot
y = np.arange(10)
x = np.arange(10)
plt.bar( x,y, width, color="blue") #Bar Plot
plt.text(3, 6, r'6th item',fontsize=12,color='red') #Adding text for the 6th element
In plt.text() function the first two arguments are the coordinates(3,6) on the plot where you want to enter the text. Additionally you can provide the parameter fontsize & color if you want to highlight something in color or bigger fontsize.
Now we want to label our X & Y axes and provide a title for the graph, Will plot a simple histogram and add these lables & titles to it:
randnumbers = np.random.randn(1000)
plt.hist(randnumbers)
plt.title("My Histogram") # Title of the Graph
plt.xlabel("Value") # X-axis label
plt.ylabel("Frequency") # Y-axis label
This was a small demo on how to get started with matplotlib for python developers, However matplotlib is much more powerful and helps you to draw the graph fast and effectively. Please check the matplotlib official website to take a deep dive into this. | https://kanoki.org/2017/07/09/get-started-with-matplotlib-data-visualization-for-python/ | CC-MAIN-2017-43 | refinedweb | 1,010 | 57.91 |
Navigating the tree programmatically
In section 15.1 we mentioned that there was no direct way to get a Python list of the children of a given item in the tree, let alone the index of a specific child. To do that, you need to walk the tree nodes yourself using the methods in this section.
To start walking the tree, get the root using GetRootItem(). This method returns the wx.TreeItemId of the root item of the tree. You can then use methods such as GetItemText() or GetItemPyData() to retrieve more information about the item.
Once you have an item, getting its children involves a kind of an iterator which lets you walk through the list of children one by one. You get the first child in the subtree with the method GetFirstChild(item) which returns a two-element tuple (child, cookie). The item is the wx.TreeItemId of the first child, and the second is a special token value. In addition to telling you what the first child is, this method initializes an iterator object that allows you to walk through the tree. The cookie value is just a token that allows the tree control to keep track of multiple iterators on the same tree at the same time without them interfering with each other.
Once you have the cookie from GetFirstChild(), you can get the rest of the children by repeatedly calling GetNextChild(item, cookie). The item is the ID of the parent tree item, and the cookie is the cookie as returned by GetFirstChild() or the previous call to GetNextChild(). The GetNextChild() method returns a two-element tuple (child, cookie) . If there is no next child, you've reached the end of the child list, and the system returns an invalid child ID. You can test this by using the method wx.TreeItemId.IsOk(), or using the Python shortcut ofjust testing the item, since it has a magic __nonzero__ method. The following helper function returns a list of the text for each child of a given tree item.
def getChildren(tree, parent): result = []
item, cookie = tree.GetFirstChild(parent) while item:
result.append(tree.GetItemText(item))
item, cookie = tree.getNextChild(parent, cookie) return result
This method gets the first child of the given parent item, adds its text to the list, then loops through the child items until it gets an invalid item, at which point it returns the result. The order in which items are displayed is based on the current display state of the tree—you will get the items in the exact order of the current display from top to bottom.
To cut right to the end and get the last child for a given parent, you can use the method GetLastChild(item), which returns the wx.TreeItemId of the last item in the list. Since this method is not used to drive an iterator through the entire child list, it does not need the cookie mechanism. If you have the child and you want the parent, the method GetItemParent(item) will return the tree ID of the parent of the given item.
You can walk back and forth between items at the same level using the methods GetNextSibling(item), and GetPrevSibling(item). These methods return the tree ID of the appropriate item. Since these methods are also not used to drive iterators, they do not need a cookie. If there is no next or previous item because you have reached the end of the list, the method returns an invalid item (i.e., item.IsOk() == False.
To determine if an item has any children, use the method ItemHasChil-dren(item) which returns a Boolean True or False. You can set whether an item has children using the method SetItemHasChildren(item, hasChildren=True) . If an item has its children property set to True, it will display onscreen as though it had children, even if there are no actual children. This means that the item will have the appropriate button next to it, allowing it to be collapsed or expanded even if there is nothing to actually show by expanding the item. This is used to implement a virtual tree control where not all items logically in the tree have to physically be there, saving runtime resources. This technique is demonstrated in section 15.7. | https://www.pythonstudio.us/wxpython/navigating-the-tree-programmatically.html | CC-MAIN-2019-22 | refinedweb | 720 | 70.53 |
HYPOT(3) BSD Programmer's Manual HYPOT(3)
hypot, hypotf, cabs, cabsf - Euclidean distance and complex absolute value functions
libm
#include <math.h> double hypot(double x, double y); float hypotf(float x, float y); double cabs(struct complex { double x; double y; } z); float cabsf(struct complex { float x; float y; } z);
The hypot() and cabs() functions compute the sqrt(x*x+y*y) in such a way that underflow will not happen, and overflow occurs only if the final result deserves it. hypot(Infinity, v) = hypot(v, Infinity) = +Infinity for all v, including NaN.
Below 0.97 ulps. Consequently hypot(5.0, 12.0) = 13.0 exactly; in gen- eral, hypot and cabs return an integer whenever an integer might be ex- pected. The same cannot be said for the shorter and faster version of hypot and cabs that is provided in the comments in cabs.c; its error can exceed 1.2 ulps.) = +In- finity. This is intentional; it happens because hypot(Infinity, v) = +In- finity for all v, finite or infinite. Hence hypot(Infinity, v) is in- dependent of v. Unlike the reserved operand fault on a VAX, the IEEE NaN is designed to disappear when it turns out to be irrelevant, as it does in hypot(Infinity, NaN).
math(3), sqrt(3)
Both a hypot() function and a cabs() function appeared in Version 7 AT&T UNIX.
The cabs() and cabsf() functions use structures that are not defined in any header and need to be defined by the user. As such they cannot be prototyped properly. MirOS BSD #10-current May 6, 1991. | http://mirbsd.mirsolutions.de/htman/sparc/man3/cabsf.htm | crawl-003 | refinedweb | 268 | 52.7 |
Feb 20, 2012 09:51 PM|dtsob75|LINK
I am trying to get the sample page for the Custom Buttons to work (AjaxToolkitSampleSite/HTMLEditor/OtherSamples/EditorWithCustomButton). I dropped the sample code into a Web app project, then added the App_Images, App_Code, App_Scripts in to the app, in addition to the page. When I try to run it, I get a compile error:
The type or namespace name 'Samples' does not exist in the namespace 'AjaxControlToolkit.HTMLEditor' (are you missing an assembly reference?)
It runs in a web site project, just not in a web application project.
I have tried several things, but nothing seems to work. Any ideas?
ajaxToolkit htmleditor
All-Star
16797 Points
MVP
Feb 20, 2012 11:58 PM|Ken Tucker|LINK
If you double click on the error in the error window does it take you to a using (c#) or imports (vb) statement. It might be a namespace used in the sample application you did create in your project. If that is the case you could comment out the line of code
All-Star
69949 Points
Feb 21, 2012 03:13 AM|chetan.sarode|LINK
Do the online examples of Editor work for you?
Are you sure that you use the latest version of Toolkit' DLL in your BIN folder?
ajaxToolkit htmleditor
Feb 21, 2012 01:52 PM|dtsob75|LINK
Ken,
Yes, it takes me to the Using, but when I remove that line, I get an error in the code behind that the control does not exist.
Feb 21, 2012 01:55 PM|dtsob75|LINK
The sample works fine, as long as it is used in the web site project. It is only when I add the source to a web application project that I have the problem. I think it may be that I need a Register directive, but I can't seem to hit on the proper syntax. Since the Custom editor example does not use a user control, it does not like the src option. And, any other combination that I have used seems problematic.
Frustrating. This should be (and probably is) so simple.
Feb 21, 2012 08:39 PM|dtsob75|LINK
An update on this. I have it working, if I use a placeholder on the webpage and then create and add the my customized editor to the placeholder controls collection. I have been playing with different combinations of attributes for the Register directive, but nothing works.
I have the extended editor class and the customized button classes within my web app project. So I use the namespace of the project, plus the folder name (I am using a sub folder named "Classes" to hold those classes.). When I create a directive like:
<%@ Register Namespace="HTMLEditorPrototype2.Classes" TagPrefix="SimpEd" %>
and the create the control in the aspx file:
<SimpEd:SimpleEditor</SimpEd:SimpleEditor>
I get an error that says Unknown server tag "SimpEd:SimpleEditor"
Does that point to anything I am missing?
5 replies
Last post Feb 21, 2012 08:39 PM by dtsob75 | http://forums.asp.net/t/1771780.aspx | CC-MAIN-2013-48 | refinedweb | 502 | 61.67 |
Please Help!! Pygame not responding...
ok, so i got pygame installed with macports but now it seems like there is yet another problem. When I try to run a simple program like this:
or any other, ive tried a few and i save and run the pygame icon bounces once and then stops and no window or anything shows up. I then click on it and it says application not responding...ANy help would be greatly appreciated. Thank you,
Zack P.S it does work in terminal....but not with IDLE {{{
! /usr/bin/env python
import pygame w=640 h=480 screen = pygame.display.set_mode((w, h)) pygame.draw.line(screen, (255, 0, 0), (0, 0), (w, h)) pygame.display.flip() }}}
Hi,
you need an event loop, because otherwise the window doesn't know how to respond to things (is not responding).
Take a look at the examples to see how events work.
cheers, | https://bitbucket.org/pygame/pygame/issues/97/please-help-pygame-not-responding | CC-MAIN-2017-51 | refinedweb | 154 | 86.1 |
Summary: Microsoft Scripting Guy, Ed Wilson, talks about translating VBScript script into Windows PowerShell, and he says it is not a very good idea.
Hey, Scripting Guy! I love VBScript. I know, I know. It is not fashionable these days to say that, but it is a fact. I love using VBScript. It is powerful, and over the last ten years or so, I have written literally hundreds of scripts that we use at my company on a daily basis. I simply do not have time to translate these scripts in Windows PowerShell. I am sorry, but that is all there is to it…just sayin’.
—ML
Hello ML,
Microsoft Scripting Guy, Ed Wilson, is here. The Scripting Wife and I are going through what could be called TechEd withdrawl. After nearly a week of non-stop activity, literally from 6:00 AM to midnight each day, we find ourselves missing the fun, engagement, and excitement—like a rabbit locked out of a vegetable garden. It was too much, and now it is too little. Luckily, in a few weeks there is the TechStravaganza 2013 in Atlanta, and then there is TechEd 2013 Europe in Madrid. So we get a chance to do it all over again.
Anyway, ML, I thought I would take some time to review some of my email sent to scripter@microsoft.com, and I came across your letter. Here are some thoughts…
Never. Never. Never. Never. Never!
Of the more than 3,000 VBScript scripts I wrote years ago, I have translated fewer than two dozen into Windows PowerShell. Each of those was for academic purposes, not for real-life purposes. The reason is that Windows PowerShell and VBScript are completely different. You may as well try to turn applesauce into mashed potatoes. It might seem like a good idea at the time—but dude! (or dudette!), I am not certain of a really acceptable outcome.
Need some proof? OK, how about this one:
This script is a basic WMI script from the Scripting Guys Script Center Repository. It is called Retrieving BIOS Information, and it does a great job. In fact, I have used this particular script on more than one occasion over the last decade and a half—in production and in teaching situations. Here is the script:
strComputer = "." )
So we have a great VBScript script that retrieves bios information from local and remote computers, and it is something we use all the time. So we decide to translate it to Windows PowerShell. Here is the result:
$strComputer = "."
$colItems = get-wmiobject -class "Win32_BIOS" -namespace "root\CIMV2" `
-computername
}
This is an actual script that comes from the Script Center Repository: List BIOS Information. I am including the link because it is a perfect example of a horrible Windows PowerShell script, and anyone teaching a Windows PowerShell class should have access to a script such as this. (And no, I did not write this script—it was bulk uploaded four years ago using the Scripting Guys Live ID…Oh well.)
Comparing the two scripts, I see very little difference. Here are the two scripts (somewhat truncated) side-by-side:
Instead of translating VBScript into Windows PowerShell, take advantage of the inherent capabilities of Windows PowerShell instead of forcing Windows PowerShell to do VBScript. Indeed, it is not my intention to make fun of VBScript, because I have also seen Windows PowerShell code that was written in C#, or C++, or Perl.
The cool thing is that Windows PowerShell is very flexible, and it is possible to get things up and running easily. Here is the previous BIOS script, written so that it takes advantage of Windows PowerShell:
Get-WmiObject Win32_Bios
That is it.
This connects to WMI and retrieves BIOS information. It can also be written by using the CIM cmdlets from Windows PowerShell 3.0 as shown here:
Get-CimInstance Win32_Bios
The thing that is incredible in both examples is that there are fewer LETTERS in the command than there are LINES in the previous script.
So, ML, please, please, please, please do not translate your old VBScript scripts into Windows PowerShell. If you do, you will do the following:
ML, your job as an IT pro is to get stuff done, not to write code (Windows PowerShell, VBScript, or otherwise). Therefore, if you have something that works, stay with it.
If or when you need NEW code, I would advise you to write it in Windows PowerShell. To get started, see the Scripting with Windows PowerShell in the Script Center. You might also want to pick up a copy of my book, Windows PowerShell 3.0 Step by Step, and follow along there.
ML, there are my thoughts about translating VBScript Ed.
I'm new to Powershell, and finding it's deep integration in Windows, and the power of pipe-lining very efficient compared to previous languages. Currently working on migrating our corporate scripts and tasks to Powershell. Learning alot from this blog (and the Scripting Games). Cheers :-)
It was a hard transition from vbscript to powershell for me, after 6 or 7 years of powershell, I still don't think I'm as proficient with PS as I was with VB. Of course, in my VB (and batch) days, I was much more involved in day to day administration, so, it made my job easier. Now, I do a lot more paperwork :) need the following: Get-PaperWork | Complete-PaperWork !!
Fun to see this topic after it came up at TechEd last week! What a perfect example, too!
I still have a few thousand VBScripts as well, and they still work great. PowerShell is very cool, but I'm glad MS keeps WSH around.
I found that "Get-WmiObject Win32_Bios" only gave me 5 lines of output. I had to use: "get-wmiobject -class "Win32_BIOS" | Format-List -Property *" to get all properties. Having said that, in the end, the shorter version produced the most *useful* information. | http://blogs.technet.com/b/heyscriptingguy/archive/2013/06/12/translate-vbscript-script-into-powershell-not.aspx | CC-MAIN-2015-18 | refinedweb | 993 | 73.17 |
#include <MeshRefine.H>
This class manages grid generation from sets of tagged cells. It is designed to be a pure virtual base class from which another class may be derived with a specific grid-generation algorithm (for example, the BRMeshRefine class). in BRMeshRefine.
define function -- size of RefRatios will define maximum number of levels
Reimplemented in BRMeshRefine.
create hierarchy of grids from a single level of tags
This function creates a hierarchy of grids from a single level of tags on BaseLevel. If tags exist, then all levels will have grids. Returns the new finest level of grids.
create hierarchy of grids from tags at all levels
This function creates a hierarchy of grids from tags at all refinement levels. It is possible that not all levels will return with grids, since there may not be tags at all levels. Returns the new finest level of grids.
returns vector of refinement ratios
returns fillRatio
returns blocking factor
returns proper nesting buffer size
sets vector of refinement ratios
sets fillRatio
sets blocking factor
sets proper nesting buffer size
has this object been defined properly?
sets proper nesting region granularity.
constructs a set of boxes which covers a set of tagged cells
constructs a set of boxes which covers a set of tagged cells by using the algorithm of choice. Everything should be on the same level, and blocking factor is not applied. Boxes will be on the same refinement level as the tags. This would normally be a protected function, but it can be useful to call it on its own, so it has been left public.
Implemented in BRMeshRefine.
computes local blockFactors used internally to enforce the BlockFactor
This function computes values for m_local_blockfactors array, which is the amount that tags on a level are coarsened in order to guarantee that the grids on the next finer level are coarsenable by the BlockFactor.
Computes proper nesting domains.
This should only be called by refine. it assumes that everything has already been coarsened by the local blocking factor | http://davis.lbl.gov/Manuals/CHOMBO-RELEASE-3.0/classMeshRefine.html | CC-MAIN-2018-05 | refinedweb | 337 | 62.88 |
Closed Bug 235782 Opened 17 years ago Closed 16 years ago
A new, more flexible tabbed browsing control for Camino
Categories
(Camino Graveyard :: Tabbed Browsing, enhancement)
Tracking
(Not tracked)
Camino0.9
People
(Reporter: me, Assigned: mikepinkerton)
Details
Attachments
(9 files, 17 obsolete files)
User-Agent: Build Identifier: Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.7b) Gecko/20040224 Camino/0.7+ Ever since the release of Panther, I don't like Camino's tabs as much as I used to. Attached is a patch that uses a custom control for manipulating Camino's tabbed browsing view and, in addition to making them a little easier to hit with the mouse (IMO), has the following features: 1. Close button on each tab. (recently added by Mike to the standard tabs) 2. The favicon is replaced by the progress indicator when the tab is still loading, rather than the close button. In addition to those, it should now be possible to add drag-to-rearrange to the tabs. I am currently working on that. I didn't have it ready for this patch, and wanted to start getting feedback from developers on: 1. The possibility of someone liking this idea enough to commit an implementation of it to Camino. 2. Things I'll need to fix/remove/change/do better/etc. in order to make that happen, assuming that it's OK in principle. 3. Feedback on the look and feel. Thanks are due to Mike for answering dumb questions on IRC and the list, and to Jasper for providing some artwork. Any aesthetic appeal this patch holds is thanks to him. Any ugliness is my fault. Thanks also to the Adium folks for some ideas/inspiration. This patch is currently packaged into 4 parts. I separated the source into added files + a patch file, because I could not get "cvs add" to add the files to my working directory without write access to the repository, and didn't find a way around that. The .nib is attached as a tarball, and the graphics are in a separate tarball. Any feedback on how to package this better would be most welcome. Reproducible: Always Steps to Reproduce: 1. 2. 3.
Confirming as an RFE.
Status: UNCONFIRMED → NEW
Component: General → Tabbed Browsing
Ever confirmed: true
Can't get the archive of the NIB file to decompress correctly. I end up with a .nib holding just the CVS folder. Can you repost it please ?
Got the wrong BrowserWindow.nib earlier
minor resource tweaks
Attachment #142409 - Attachment is obsolete: true
Thanks for the new files. The NIB package is now ok. The patch works perfectly. The close button images are the same as the ones I provided for the patch for bug 211570 I think. I originally took the images from Safari, and converted them in PNG because Bugzilla did not let me attach the TIFF files. However, the PNG images don't look as good as the original TIFF ones. Maybe Japser could design some new cool images...
jerome, bugzilla has nothing against tiff files. can you put them in a zip and attach that if that's easier for you?
(In reply to comment #10) > Maybe Japser could design some new cool images... Jasper has something in progress... I in fact owe him an email on this very subject :-)
The patch was applied and a few was used. I think that it is good :-) By the way, when tab of the maximum number is opened like a screen shot, each tab shows which page or distinction is difficult. When a mouse pointer is placed on tab, is it possible to add the function to help distinction of expressing a page title name as a tool tip like firefox?
I think the active tab should be clearly highlighted against the inactive ones and inactive ones should be dimmed.
I tried to replace active_tab_bg with active_tab_bg. The active tab is clearly highlighted.
Sorry, I've mistaken. I tried to replace active_tab_bg with active_tab_bg. The active tab is clearly highlighted.
Visually, this is completely backwards now. Historically things that are not active are greyed out, and here this the /active/ item that's darkened. It really should be reversed.
(In reply to comment #16) > Created an attachment (id=143103) > the screenshot At Camino which I built, active tab is displayed by highlite. Are all patches applied?
> At Camino which I built, active tab is displayed by highlite. > Are all patches applied? No. I replaced active_tab_bg with active_tab_bg
I think we need not ape Safari's interface. Safari uses Metal interface and an active tab is connected to Location field. In Aqua interface the acive tab is highlighted in blue face and the inactive ones remain white, so users can easily confirm where they are.
In the same concept I tried Graphite interface.
Dude you have been switching images around like a mad man. You completly misunderstood the chrome. The active tab is the one that has a clear white appearance, the inactive tabs are the ones with the dimmed grey look. Why you might ask? Well because in real life objects that go into the background are not lighter then objects in the foreground but are darker. It's aproven metaphor by both Firefox and Safari and real life of course. You should use that, I'm under the impression that you are looking to much to apple's tabs which had no real appearant color logic to me, appart from being the other way around which always seemed really odd to me. As you may see in this iamge: I tweaked the ui it a bit more so that inactive tabs have a subtle hollow look, while the active tab has a direct opposite gradient. Making it look as if the active tab is reaaly more in the foreground. I would defenitly not give the inactive tabs a light color since it would make it look to much like the bookmark bar, which might result in confusion amongst users. Note that the dark color of the inactive tabs make the tab bar stand out much better then what you did. As for color I doubt that we need it. I think that the addition of color like the blue you used makes the interface look cluttered together with all the site icons. The dark grey and light plastics make it look much cleaner and proffesional. Would we want to add al kinds of extra chrome for the blue theme? That would be a waste of space, especially if we can make one appearance that works in both graphite and blue environments.
I said which interface is more friendly for users. When we use tabs in Aqua interface, we usually confirm that darkend tab shows the window it contains. Even if it's old as you say, in Aqua interface we should do so like Mail.
In Finder view as Icons, as List and as Columns is the selected item lighter than others? No, it's marked as darkened. What we need is we can distinguish explicitly which item is selected. The important thing is that we confirm which item is selected at a glance. So I think the metaphor sholud have uniformity. In Safari Apple put away uniformity designed by themselves, because they use Metal interface.
Yes, but those do not indicate the same information. In list view, it indicates the column that is currently sorting the list view. Tabs signify completely separate window environments, not the sorting of information. It's a completely different task. At any rate, this is a wasted argument: we need to do what users expect, and users expect that tabs in the background will be darker or "greyed" out.
as a user and sysadmin who is often confronted with "less qualified" users, I agree with Neil and Jasper that the active tab is supposed to be lighter. I think Jasper's screenshot in #22 just looks awesome (the tabs). I have seen a lot of different tabs already and that is *definitely* the best I have seen so far. I loved it at first sight ;-)
(In reply to comment #25) Contrary to your opinion, I think most users expect that the active tab is highlighted and inactive tabs are dimmed, because tabs are buttons to indicate which window is active.
Just to throw my two cents in: - 2 or 3 or 5 or 10 people's opinions of what they think most users will expect tabs to be like really don't amount to much. Anyone can think whatever they want about "most users", but unless someone has actual data from usability studies/testing, it's not an especially useful argument either way. - Moreover, I submit that it doesn't matter either way from a usability perspective. The selective item shouldn't be "lighter" or "darker"--it should be *distinctive*. Given a bunch of things which are the same, and exactly one that isn't, people can easily pick out the one that is distinctive, be it because it's lighter, darker, another color, bigger, whatever. This is a skill that is fundamental to the way our brains work, and one we master very early (think Sesame Street). So long as the active tab is, to some degree, visually distinctive, the usability task is over. Now, if the issue were something like a dialog box with only OK and cancel, where there are only two choices and no other clues to which is active, and their choice is final in at least some sense, then it would be important what people's gut feeling is ("what happens when I press return?"). But this is a fully interactive, user-created set of tabs. They have titles, which either do or don't match the current page, and they become active when you click them. Even if you sat someone down with two tabs open with the same title, it would *still* be trivial to figure out what the colors mean. So, 5-10 seconds of acclimation, tops, for an application that will likely be used very frequently. Who cares? --- Now, from a purely asthetic standpoint, I like the lighter active tab, because it creates a visual blending with a lot of page content which appeals to me.
O.K. This is my last statement about this. This shouldn't be treated as anyone's taste, but as uniformity of user's interface. 1) Safari doesn't use Aqua interface, so it adopts lighter active tab. But its active tab is inverted and connected to Location bar, and tab has no URL icon. Consequently this had the consequence that Apple uses Metal interface. 2) If Camino adopts tabs like Safari, the least it should make it clear that the active tab is connected to the active window.
It may crash, if Tab is opened further and a page is displayed, when load is applied to Camino by displaying many pages simultaneously by Tab etc. It does not crash, even if it carries out the same operation by official Nightly build.
Actually I can get Camino to crash with any build if I have it open too many pages at once (dragging a large bookmark folder). I don't think it's the tabs.
(In reply to comment #30) > It may crash, if Tab is opened further and a page is displayed, when load is > applied to Camino by displaying many pages simultaneously by Tab etc. > > It does not crash, even if it carries out the same operation by official > Nightly build. Thanks. I think I've got that figured out now and will soon update the patch to fix that.
I'm not sure this will help TabView development, but just for reference... Shiira, a new cocoa tab browser using Apple's Web Kit rendering engine like OmniWeb. Shiira Project : Nightly Build 040420 : ShiiraSrc 040420 : TabBarViewSrc :
Target Milestone: --- → Camino0.9
FYI, there’s a private build available with these new tab controls via MZ: <>. A couple of nits: I’d like to see a pixel or two of space between the top of the tabs and the toolbar above. The background tabs should have rounded corners like the foreground tab. The mouseover effect on background tabs leaves a space on the right if and only if there is another tab to the right.
(In reply to comment #34) > FYI, there’s a private build available with these new tab controls via MZ: > <>. > > A couple of nits: > > I’d like to see a pixel or two of space between the top of the tabs and the > toolbar above. Yeah, this might be nice... > The background tabs should have rounded corners like the foreground tab. IMHO this would make for too much visual clutter in the tab bar.
Hi, Geoff. Thank you for your efforts. I'm using Isaac's private build. It works fine including drag'n'drop operation. But it sometimes seems to log "Got mouseUp!". Except for that it works perfectly for now.
Since we're about to start on the 0.9 buglist, can we get and update from Geoff on how things are going? This will help us decide how much effort to put into fixing thing in our existing tab implementation over the next few months.
yeah, i'd like to see where we're at with this. is it time to start looking at code so we can get the bulk of it landed into the trunk?
Geoff, Any chance we can get this checked into the trunk. The more eyes we can get to bang on this feature the better...
(In reply to comment #39) > Geoff, > > Any chance we can get this checked into the trunk. The more eyes we can get to > bang on this feature the better... Hi Guys, Sorry I'm slow... I've been unexpectedly swamped. I've got a patch that I think is almost good enough to go in, and I just need to clean up a conflict against HEAD, smoke test, and package/post it here. I'll try to do that in the next 48 hours so others can review and this can move forward! Geoff
I'm trying to produce a patch and currently running into a pretty basic (I think) CVS problem that's blocking my progress. Does anyone know how to get 'cvs diff' to work properly when there's a conflict? My patch creates a conflict in BrowserTabViewItem.mm, partly because it moves NSTruncatingTextandImageCell out to a new file (there are other conflict areas too...), and when I try to do a cvs diff to produce a patch, I get conflict markers in the diff. As you might expect, the patch will not work that way. If anyone could post some advice, either here or via email to me, I'd very much appreciate it. I'm stumped as to why my changes are getting conflict markers in the cvs diff rather than getting a patch file which updates the current revision to my local copy. If I can get past this problem, I should be able to get a patch up here in very short order. Due to some unexpected travel, I'll have limited access to my mac for a few days, so worst case, I expect to get it up here Thursday. Thanks, Geoff
you need to update your local tree to the latest version and resolve any conflicts that creates so that your local tree builds again. Then your diff will be clean. make sure you do a cvs diff -N to include the new files that you've cvs add'ed.
Geoff - status update?
(In reply to comment #43) > Geoff - status update? It's a long, convoluted story not worth posting here :-)... the short version: I'm now back online and just got a new build box (bonus: it's a laptop, so I can hack when I travel!!). It's now got developer tools and a fresh camino tree and is building, as well as a restore from the archives of my old box. As soon as it finishes, I'm making a new patch and posting it. If that's in the next hour or so, it may be tonight. If not, I won't go to bed tomorrow night until I have a good patch.
sounds good :) thanks
Here's a patch folks can start to look at... CVS gives me the following when I attempt to add the files: cvs [server aborted]: "add" requires write access to the repository so cvs diff -N doesn't do what we want. I'm posting a patch as well as a tarball of the new classes. I'm having a funny problem with the nib file, so I'm not posting that just yet. I'll either sort it out tomorrow or show up on irc asking for advice :-) (Unless someone knows of a way to get IB or some utility to diff/ merge nibs) Please let me know what you see that needs to be addressed to get this into shape to be added. Geoff
Any known issues?
(In reply to comment #49) > Any known issues? The main thing is that I haven't been able to get a new nib built based on the one in HEAD. I'm sure I'm forgetting something simple, and just need some time to look at that later today. I've been testing with one of my old nibs, but there appears to have been some change to the one in CVS since I made that, so I'm reluctant to post that here. I'll either post a good one today or come asking questions on IRC this evening. The only other issue I'm aware of is that I've occasionally seen a little bit of quirkiness in the drawing (e.g. separator line between tabs can be drawn 1px off where it should be) and would like to see if I can tweak the drawing to make that go away. It's pretty subtle, though, and I have not yet been able to reproduce it with this patch against HEAD on my current machine. All the functionality of the current tabs should be there and working. If anything is not that's an error, so please let me know.
Here (finally) is a working .nib to go along with it. Also posting the steps it took to get it right, since this was a little error-prone :-) Steps to building a working .nib: 1. Parse the following classes: IconTabViewItem BrowserTabView BrowserTabViewItem BrowserContainerView BrowserTabBarView RolloverTrackingCell TabButtonCell TruncatingTextAndImageCell 2. In the Browser Window set the BrowserTabView to tabless, borderless. Change the custom class on the NSTabViewItem to BrowserTabViewItem. 3. Change the height of the BrowserContainerView to 456 3. Add a custom view to the BrowserContainerView. Set its size: -x 0 -y 434 -w 761 -h 22 Set its class to "BrowserTabBarView". Connect its mTabView outlet to the BrowserTabView. 4. Connect the mTabBar outlet on the BrowserTabView to the BrowserTabBarView. 5. Connect the mTabBar outlet on the BrowserContainerView to the BrowserTabBarView. 6. Connect the mTabView outlet on the BrowserContainerView to the BrowserTabView.
(In reply to comment #47) > Created an attachment (id=155663) > Changes to files already in the tree > Could you post these using diff -u it's a lot easier to read.
This revision fixes some visual display glitches Jasper found and prevents tabs from giving mouseover feedback for background windows. It's also a unified diff, as requested.
Attachment #155663 - Attachment is obsolete: true
The new classes, reflecting the same tweaks as the patch, uploaded separately because I couldn't get the cvs diff to include them.
Attachment #155665 - Attachment is obsolete: true
This is working great for me. I've been able to patch/add files/add nib and build with no problem. Love the new tabs. Great work Geoff. P.S. I have to move the spinner/favicon to the left of the page title in the tab. I'm sorry but no browser I've ever used puts them to the left. And to those who have a problem with it being next to the close button: how did you ever navigate the colored window buttons in OSX in the first place?
Ugh. I meant no browser with tabs puts the favicon on the right. sheesh
Mike - have you looked at this patch yet? I'm going to try to build it and get it ready to land this weekend. Wondering if you know of any reasons it can't hit the trunk before I go through it. I'll also make a nice diff package so its easy to apply. think it would be OK to land on the branch after 1,2, and 4 are taken care of. 3 can wait a bit, but it needs to be done, sooner rather than later hopefully. I will post a code-level review after we figure out what to do with the UI. Thanks for the great work - I hope you're not put off by pickiness, but a tight review process helps a lot in the end. And it makes you feel better when it does land :)
If you don't see the dividers between the tabs and the selected tab being lighter, then something probably went wrong with your build, or some images are missing or something.. > actually prefer to have both the close box and the favicon on the left--I think people are making the usability issues there to be worse than they are; I've never accidentally closed a tab when i meant to drag it or vice-versa. One other thing I've noticed: dragging links from a page to the tab bar to create a new tab only seems to work sporadically, I haven't been able to track down the exact circumstances it works under, but it fails often enough to warrant looking at.
I had a friend who doesn't work on web browsers :) or would be considered an "advanced" computer user look at the new tabs and he almost immediately, about the icons, said "why don't you just put the icon next to the close button?" As we haven't considered this before (that I know of), any thoughts? I think having the close button on the far left and the icon immediately to the right of it, followed by the title might not be a bad idea. Something to think about.
Josh, you probably forgot to also download and add the chrome files. Anyway Geoff is preparing a new patch with updated files fixing some visual issues. I can asuree you chrome is available. About the site icon and close icon. We released a versions with all options to see what users liked. 1) left side site icon, right side close box. Most people liked this the most 2) left side close box, right side site icon. People expected the close box to be on the right side, because of the way site icons and site names are displayed everywhere else in Camino. 3) site icon and close box on the left side. People hated it, including me. Most people thought it look cluttered and very odd. And general feeling was that it was the perfect situation for people to eccidentaly click and close the tab. And I agree. This is not the option to go with. With 16px icons you should never but functionalities to close to each other. Spread them appart. It look better and works better period. The reason why Geoff hasn't worked on the 16+ feature is because he wanted the patch to be checked and tested on feature parity first to make further feature additions easier instead of makin one huge patch that would be hard to review and even harder to track bugs with. And we haven't decided et on how to implement the 16+ issue as we think the Safari way lacks certain things. I hope to descuss some ideas soon with you guys. For now we are concentrating on getting this first patch perfect so we can land it to make further changes much easier for Geoff.
This is the tab chrome. First, remove any files beginning with tab_ from your xcode project, then untar this archive in camino/resources/images/chrome. If it looks like there are no dividers, etc., you're probably missing these files.
This revision reorganizes the drawing routines a bit to correct some glitches uncovered by Jasper's new chrome, and generally simplify things. It also moves the favicon to the left of the label, per Josh's most recent suggestion to give folks a chance to try it this way.
Attachment #156010 - Attachment is obsolete: true
Goes with the latest revision of the patch...
Attachment #156011 - Attachment is obsolete: true
As Ender said, these two sound like missing chrome. >. As Jasper mentioned, we are thinking about this, and don't plan to just ignore it. After discovering firsthand how much fun it is to maintain a large chunk of code that lives outside the tree, I was kind of hoping I'd be able to do this after the patch hits the trunk :-). >'m not attached to the current placement. Here's my logic: 1. Since 1984, the close box goes on the left all the time. I didn't want to violate that lightly. 2. I thought the close button and the favicon together looked kind of cluttered, so I moved it to the end. If the consensus is that I'm full of it on either point, just let me know what is preferred... I don't feel strongly about it at all, but had to pick a layout for the first cut. In the most recent version of the patch, just so you can easily see and play with the difference, I've moved the favicon. Please continue to be picky... I want the best code possible and am not the least bit put off by pickiness. (Ask Jasper :-)).
(In reply to comment #66) >. At this point, I think I'm ready for a code level review. The only further revisions I'm currently planning are any that you guys feel are necessary before this hits the trunk. (Like the icon placement) Once this hits the trunk I'd like to add: 1. Nice handling of >16 tabs. 2. Drag-to-reorder. I'm ready to work on these now, but would like to wait till this hits the trunk, as it'll be much easier to maintain a patch that way :-). I also suspect the code is easier to review in 3 chunks than in one big shot.
We've debated close buttons on tabs in the past (e.g., bug 155292). I'll renew the suggestion to eliminate close buttons on tabs in favor of a unified close widget in the tab bar, a la Firefox, or just adding the "Close Tab" toolbar button on the far right of the toolbar default set for all users. This would clean up the proposed tab UI considerably.
In the future, perhaps gzip one folder containing all the latest stuff for patching the trunk. That will help keep the attachments list under control. Thanks!
I just compiled with the new tab code, including chrome :) It looks great, and works great! I actually like the close button and the favicon being next to each other on the left. Its sort of like the browser's new icon set - it is a bit awkward at first, but you get used to it quickly. This solution makes sense in terms of usability rules - the close button has always been on the left in Mac OS, and the icon has always been on the left in Mac OS. I know lots of people object at first, but I think it just takes some getting used to. Its nice not having anything on the right side of the tabs. If you don't have any other patches coming up in the next day or two Geoff, I'll start the code review and we'll get this to hit the trunk. Good work!
(In reply to comment #70) > If you don't have any other patches coming up in the next day or two Geoff, I'll > start the code review and we'll get this to hit the trunk. Good work! Thanks! I'm not planning any new patches for now unless you guys see a problem that requires one, so please go ahead with the review.
(In reply to comment #65) > 1. Since 1984, the close box goes on the left all the time. > I didn't want to violate that lightly. > 2. I thought the close button and the favicon together > looked kind of cluttered, so I moved it to the end. I think these are good points. If I may suggest, though, the question isn't so much about the appearance of clutter. Rather, it's about the proximity of a destructive option next to a safe one. It's about one's chances of accidentally closing a tab when his intention is the opposite: to keep it permanently by dragging it somewhere safe. (Yes, I know the title text is a drag source but the point of the favicon-on-the-left is to mimic established Finder behaviors which connect with a corresponding variety of user habits.) Leaving aesthetics and themes completely out of it (this bug is not the place for that), one -mechanical- piece I think bug 159510 truly got right was to make the close box smaller and harder to hit by accident. IMHO what makes the current scheme appear cluttered is that the close box and favicon are presented as peers. You could fix that by scaling down the close button, making it bubble-shaped like the window's (but not necessarily red if you don't want to), and moving it closer to the upper left. Then I think each element would be more clearly defined, and they'd no longer seem to be at cross purposes.
This is a unified patch incorporating Jasper's latest chrome and fixes for the last couple visual glitches we identified. I don't see any flag to check here to indicate it, but I believe this is ready for review. Josh, could you begin the code review?
Attachment #155819 - Attachment is obsolete: true
Attachment #156192 - Attachment is obsolete: true
Attachment #156193 - Attachment is obsolete: true
Attachment #156194 - Attachment is obsolete: true
Mr.crot uploaded flexible tabbed Camino this web site : "Camino 1.7branch 2004/8/15" at When I tested it on 10.2.8, Console show this message whenever I created new tab : WindowServer[178]: CGXRemoveTrackingArea : Invalid tracking area on 10.3.5, Console don't show this message.
Nits: - lots of formatting problems all over. I'll list a few examples here - reverse logic like "nil != tab" to "tab != nil" (constant on right) - so with formatting changes, "while( nil != tab ) {" becomes "while (tab != nil) {" - if( (nil == backgroundImage)||(nil == tabButtonDividerImage) ){ <- wrong if ((nil == backgroundImage) || (nil == tabButtonDividerImage)) { <- right - be careful about how you divide lines - don't divide on operators (line 117 of BrowserTabBarView.mm for exammple) [tabButtonDividerImage compositeToPoint:NSMakePoint( tabButtonFrame.origin.x - [tabButtonDividerImage size].width, tabButtonFrame.origin.y ) operation:NSCompositeSourceOver]; Instead divide at argument points (:) - fix indentation between lines 106 and 122 of BrowserTabBarView.mm - use && to combine "if" statements in lines 163 and 164 of BrowserTabBarView.mm - any object assigned to "backgroundImage" will leak in BrowerTabBarView if "loadImages" gets called more than once. The safe thing to do is to pad it and any other variable like it with a release if the variable is not null before assigning - maybe replace ------------ if( !button ) { return nil; } return [button tabViewItem]; ------------ with ------------ return (button) ? [button tabViewItem] : nil; ------------ lines 301-304 BrowserTabBarView.mm (same at line 136) - in BrowserTabBarView, can "tabBarDefaultHeight" be constant? Review of more serious issues later :) the formatting really needs to get cleaned up before we can continue?
Sorry for so many posts in a row, but here is some more: In RolloverTrackingCell.mm: -(void)dealloc { [super dealloc]; if (mUserData) { [mUserData release]; } } Always dealloc super at the end of the method. So: -(void)dealloc { if (mUserData) { [mUserData release]; } [super dealloc]; } Same thing in TabButtonCell.mm.
Josh, Thanks for your patient review... as we discussed, I am currently preparing the patch for a second pass. (In reply to comment #76) >? That's not clear at all :-). Somehow I got those lines in the wrong order. I meant to have * Original Contributor: * Simon Fraser <sfraser@netscape.com> * Adapted from BrowserTabViewItem.mm by Geoff Beier Is that clearer? (In reply to comment #77) > >. > Do you mean that I should check for mActiveTabButton == nil there, or should I conclude that the nil check is unnecessary? Or do you mean something else altogether that I've missed? > --------------------------------------------------- > >. > Sorry. The comments are left over from the way I initially did it. I moved the favicon as a trial so you and others could see how it would feel... since that seems to be the preferred way, I'll leave it and update the comment accordingly. Let me know on the questions above, and I'll fix those along with the formatting. Thanks again for your help and patience!
(In reply to comment #78) > Do you mean that I should check for mActiveTabButton == nil there, or should I > conclude that the nil check is unnecessary? Or do you mean something else > altogether that I've missed? The obj-c runtime checks for nil on method calls and fails silently, returning nil for the expression. you do not need to check for nil here. Also, i agree with Josh. |if (constant == expression)| is bad form. While technically it visually separates assignment from comparison, it's much harder to read because it doesn't flow they way you would read it if you were writing psuedocode. It's backwards from the way you think about comparisons in natural language.
> * Original Contributor: > * Simon Fraser <sfraser@netscape.com> > * Adapted from BrowserTabViewItem.mm by Geoff Beier > Is that clearer? No - its hard to word correctly. How about: Contributors: Geoff Beier <me@mollyandgeoff.com> Based on BrowserTabViewItem.mm by Simon Fraser <sfraser@netscape.com>
This is the second cut; it should incorporate all comments from the review thus far plus it removes some code that was no longer necessary (and commented out in the originally submitted version) and makes the favicon display slightly better in the absence of the progress indicator.
Attachment #156316 - Attachment is obsolete: true ------------------------ Waiting for a new patch before reviewing.
(In reply to comment #82) > > ------------------------ Generally that happens when the correct .nib is not copied into the final executable. Assuming you have extracted and copied the included BrowserWindow.nib to mozilla/camino/resources/localized/English.lproj/BrowserWindow.nib you need to make sure to delete Camino.app from the build directory prior to performing your build within XCode. For some reason XCode doesn't always copy over the .nib to the final product when it's been modified outside InterfaceBuilder like this. Geoff
Ugh - I must have forgotten to run the touch command on the nib...
Is a separate BrowserContainerView file necessary for one method? Perhaps you plan to expand that file in the future and it is easiest to land it that way? -------------------------------------------------- Nit: There is no need for braces on conditional statements that contain only one line, e.g.: if (NSIntersectsRect(tabButtonFrame,rect)) { [tabButton drawWithFrame:tabButtonFrame inView:self]; } Should be: if (NSIntersectsRect(tabButtonFrame,rect)) [tabButton drawWithFrame:tabButtonFrame inView:self]; (example is from BrowserTabBarView.mm) -------------------------------------------------- BrowserTabBarView.mm, line 169 NSLog(@"Here's where we'd handle the drag among friends rather than the drag manager"); This log message should not be checked in. Please comment it and its conditional code out. -------------------------------------------------- BrowserTabBarView.mm, line 389 BOOL rv = [mDragDest prepareForDragOperation: sender]; if (rv == NO) { should be: BOOL rv = [mDragDest prepareForDragOperation: sender]; if (!rv) { -------------------------------------------------- In some places, like lines 94 and 105 of RolloverTrackingCell.mm, you are using tabs instead of 2 spaces. You should be able to set that behavior to be correct for you in the Xcode preferences. -------------------------------------------------- Starting at line 43 of TabButtonCell.mm, you make a series of #define macros. These should not be macros. Standard practice is to make those static (and perhaps const) variables defined at the top of the file or a logically equivalent place (in this case the same place you placed the #define macros). -------------------------------------------------- I'm only going to review the new files tonight, and that is all I have to say about them. With some of the changes I suggested above they are good enough to go in. I will review the patch tomorrow. The tabs look great. The close-box on the left with the favicon to its immediate right looks great. Creating and closing tabs is really fast. I didn't see any glitches. I love this patch :)
One problem that I noticed is that the contextual menu text for moving a tab to a new window is misleading. It doesn't move the tab's actual contents - it just closes the tab and opens a new window with the same url. This can be demonstrated with. The could be a problem on pages with forms where users accidentally terminate sessions or lose previously entered information, expecting that the tab simply got moved to a new location. If you can come up with better text, great. Perhaps "Open Tab URL in New Window." Otherwise we may want to drop this feature until we can make it less misleading.
yes, #define bad. const good. eg: const short kFoo = 8;
(In reply to comment #86) > If you can come up with better text, great. Perhaps "Open Tab URL in New > Window." Otherwise we may want to drop this feature until we can make it less > misleading. This patch does not change the menu display in any way; it simply uses the existing contextual menu. As such, I'd prefer to make that change as a separate patch but if you think that needs to be part of this one for it to land I'll add that. (In reply to comment #87) > yes, #define bad. const good. eg: > > const short kFoo = 8; I agree. I was blindly copying BookmarksToolbar.mm for the style here :-). I'll make that and the other changes Josh cited, bar the context menu, later today and repost. (Unless you guys think the contextual menu change needs to be included with this patch.)
(In reply to comment #85) > Is a separate BrowserContainerView file necessary for one method? Perhaps you > plan to expand that file in the future and it is easiest to land it that way? Sorry for the spam... I missed this on the first pass. The reason I separated BrowserContainerView out in the first place was because I was adding a couple of other methods and needed to access them outside of BrowserContentView. If you guys prefer, I will return this class to its original location, as neither of these is true at this time. I don't have any specific plans to expand it, as I have moved the logic that I originally had there elsewhere. Let me know what the preference is here.
Review of patch part of the package... ----------------------------------- In the contex for your patch, there is the line in BrowserTabView.mm: // Only to be used with the 2 types of tab view which we use in Chimera. If you respin the patch please replace the reference to Chimera with Camino. ----------------------------------- Please visually separate comments from the end of a line of code: NSRect iconRect = [self convertRect: [mLabelCell imageFrame] fromView: nil];//NSMakeRect(0, 0, 16, 16); Either put spaces around the "//" or put it above the line. ----------------------------------- + if( remove ) { formatting. should be: if (remove) { ----------------------------------- None of this stuff is serious, so just do it if you respin the patch.
This iteration should address all of Josh's comments to date. While doing that I found and fixed a potential bug in the drag receiving code. That is included as well.
Comment on attachment 157618 [details] All files required to patch HEAD to contain this feature r+ with comment given on IRC being taken care of : 1) nil == 2) CVS in the tar file 3) Bad Licences headers Nice work.
Attachment #157618 - Flags: review?(qa-mozilla) → review+
This is identical to the last one except that, per qa-mozilla@hirlimann.net's review: 1. License headers are corrected. 2. Missed constant is corrected. 3. CVS directory is removed from nib archive 4. browser.patch is now produced with diff -u2
Attachment #157618 - Attachment is obsolete: true
Attachment #157643 - Flags: review?(joshmoz)
Comment on attachment 157643 [details] All files required to patch HEAD to contain this feature this review+ assuming Geoff only changed what he said he changed
Attachment #157643 - Flags: review?(joshmoz) → review+
is there a time estimate on when this new tabs code will be adopted by the nightly builds?
patch has landed
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED | https://bugzilla.mozilla.org/show_bug.cgi?id=235782 | CC-MAIN-2020-50 | refinedweb | 6,877 | 73.27 |
Applying Attributes
Use. In J#, the attribute is attached using special comment syntax.
Specify positional parameters and named parameters for the attribute.
Positional parameters are required and must come before any named parameters; they correspond to the parameters of one of the attribute's constructors. Named parameters are optional and correspond to read/write properties of the attribute. In C++, C#, and J#, specify name=value for each optional parameter, where name.
using System; public class Example { // Specify attributes between square brackets in C#. // This attribute is applied only to the Add method. [Obsolete("Will be removed in next version.")] public static int Add(int a, int b) { return (a + b); } } class Test { static void Main() { // This generates a compile-time warning. int i = Example.Add(2, 2); } }
import System.*; public class Example { // Specify attributes with comment syntax in J#. // This attribute is applied only to the Add method. /** @attribute Obsolete("Will be removed in next version") */ public static int Add(int a, int b) { return (a + b); } } class Test { public static void main() { // This generates a compile-time warning. int MyInt = Example.Add(2,2); } }
Applying Attributes at the Assembly Level
If you want to apply an attribute at the assembly level, use the Assembly keyword. The following code shows the AssemblyNameAttribute applied at the assembly level.
When this attribute is applied, the string "MyAssembly" is placed in the assembly manifest in the metadata portion of the file. You can view the attribute either by using the MSIL Disassembler (Ildasm.exe) or by creating a custom program to retrieve the attribute. | https://msdn.microsoft.com/en-us/library/bfz783fz(v=vs.80).aspx?cs-save-lang=1&cs-lang=csharp | CC-MAIN-2015-32 | refinedweb | 262 | 51.04 |
"
Mel Gorman's fragmentation avoidance patches have been discussed here a few
times in the past. The core idea behind Mel's work is to identify pages
which can be easily moved or reclaimed and group them together. Movable
pages include those allocated to user space; moving them is just a matter
of changing the relevant page table entries. Reclaimable pages include
kernel caches which can be released should the need arise. Grouping
these pages together makes it easy for the kernel to free large blocks of
memory, which is useful for enabling high-order allocations or for vacating
regions of memory entirely.
In the past, reviewers of Mel's patches have disagreed over how they should
work. Some argue in favor of maintaining separate free lists for the
different types of allocations, while others feel that this sort of memory
partitioning is just what the kernel's zone system was created to do. So,
this time around, Mel has posted two sets of patches: a list-based grouping mechanism
and a new ZONE_MOVABLE
zone which is restricted to movable allocations.
The difference this time around is that the two patches are designed to
work together. By default, there is no movable zone, so the list-based
mechanism handles the full job of keeping alike allocations together. The
administrator can configure in ZONE_MOVABLE at boot time with the
kernelcore= option, which specifies the amount of memory which is
not to be put into that zone. In addition, Mel has posted some comprehensive information on how
performance is affected by these patches. In an unusual move, Mel has
included a set of videos showing just how memory allocations respond to
system stress with different allocation mechanisms in place; the image at
the right shows one frame from one of those videos. The demonstration is
convincing, but one is left with the uneasy hope that the creation of
multimedia demonstrations will not become necessary to get patches into the
kernel in the future.
These patches have found their way into the -mm tree, though Andrew Morton
is still unclear on whether he thinks they are worthwhile or not. Among
other things, he is concerned about how they fit with other, related work,
especially memory hot-unplugging and per-container memory limits. While
patches addressing both areas have been posted, nothing is really at a
point where it is ready to be merged. This
discussion between Mel and Andrew is worth reading for those who are
interested in this topic.
The hot removal of memory can clearly be helped by Mel's work - memory
which is subject to removal can be restricted to movable and reclaimable
allocations, allowing it to be vacated if need be. Not everybody is
convinced that hot-unplugging is a useful feature, though. In particular,
Linus is opposed to the idea. The biggest
potential use for hot-unplugging is for virtualization; it allows a
hypervisor to move memory resources between guests as their needs change.
Linus points out that most virtualization mechanisms already have
mechanisms which allow the addition and removal of individual pages from
guests; there is, he says, no need for any other support for memory
changes.
Another use for this technique is allowing systems to conserve power by
turning off banks of memory when they are not needed. Clearly, one must be
able to move all useful data out of a memory bank before powering it down.
Linus is even more dismissive of this idea:
More information on his objections is available here for those who are interested. In short,
Linus thinks it would make much more sense to look at turning off entire
NUMA nodes rather than individual memory banks. That notwithstanding, Mark
Gross has posted a patch enabling
memory power-down which includes some basic anti-fragmentation
techniques. Says Mark:
It has also been suggested that resident set size limits (generally
associated with containers) can solve many of the same problems that the
anti-fragmentation work is aimed at. Rik van Riel was heard to complain in response that RSS limits
could aggravate the scalability problems currently being experienced by
the Linux memory management system. That drew questions from people like
Andrew, who were not really aware of those problems. Rik responded with a few relatively vague
examples; his ability to be specific is evidently restricted by agreements
with the customers experiencing the problems.
That led to a whole discussion on whether it makes any sense to try to
address memory management problems without test cases which demonstrate
those problems. Rik argues that fixing
test cases tends to break things in the real world. Andrew responds:
Rik has put together a page
describing some problem workloads in an attempt to push the discussion
forward.
One of Andrew's points is that trying to fix memory management problems
caused by specific workloads in the kernel will always be hard; the kernel
simply does not always have the information to know which pages will be
needed soon and which can be discarded. Perhaps, he says, the right answer
is to make it easier for user space to communicate its expected future
needs. To that end, he put together a pagecache management tool for
testing. It works as an LD_PRELOAD library which intercepts
file-related system calls, tracks application usage, and tells the kernel
to drop pages out of the cache after they have been used. The result is
that common operations (copying a kernel tree, for example) can be carried
out without forcing other useful data out of the page cache.
There were some skeptical responses to this posting. There was also
some interest and some discussion of how smarter, application-specific
policies could be incorporated into the tool. A possible backup tool policy, for example,
would force the output file out of memory immediately, track pages read
from other files and force them back out - but only if they were not
already in the page cache, and so on. It remains to be seen whether
anybody will run with this tool and try to use it to solve real workload
problems, but there is some potential there. The kernel does not always
know best.
Short topics in memory management
Posted Mar 6, 2007 23:18 UTC (Tue) by jcm (subscriber, #18262)
.
Posted Mar 7, 2007 2:02 UTC (Wed) by k8to (subscriber, #15413)
[Link]
Posted Mar 7, 2007 4:28 UTC (Wed) by njs (subscriber, 6 8:57 UTC (Wed) by im14u2c (subscriber, #5246)
[Link]
Posted Mar 7, 2007 9:38 UTC (Wed) by ibukanov (subscriber, #3942)
[Link]
Posted Mar 7, 2007 9:44 UTC (Wed) by im14u2c (subscriber, #5246)
[Link]
That's the sound of a tongue-in-cheek jab at Windows going over your head. :-)
Posted Mar 7, 2007 9 10 16:57 UTC (Thu) by im14u2c (subscriber, #5246)
[Link]
"A pointer is nothing but an offset from memory location 0, after all."
Well, these days it is on most architectures...
Posted Mar 7, 2007 10:49 UTC (Wed) by zlynx (subscriber, #2285)
[Link]
All the problems could be solved of course, but who is going to rewrite all that code? Again.
Posted Mar 7, 2007 11:14 UTC (Wed) by dion (subscriber, #2764)
[Link]
If I were to implement a browser cache in a memory mapped file then the mapping would be the first thing to get allocated and it would never change.
Garbage collection and MM
Posted Mar 7, 2007 13:11 UTC (Wed) by aanno (subscriber, #6082)
[Link]
Posted Mar 8, 2007 1:26 UTC (Thu) by ncm (subscriber, #165)
[Link]
Every time the above is pointed out, somebody pops up and says that some new or old wrinkle has potential to mitigate the problems. Invariably there's a paper with lots of artificial benchmarks, running on a machine dedicated to nothing but running those benchmarks. Invariably such programs interact badly with real programs on real machines.
Besides its fundamental problems, GC never advances much because it can't be encapsulated. For every place that needs it, it must be re-done from scratch. The cunning tricks of the last implementation don't work in the next.
Academia can't see limitations of GC because the ideal academic program only ever manages memory. Real-world programs must manage other much more limited resources -- network sockets, database connections, disks -- and any method sufficient to manage them suffices for memory as well. No current language threatens C++ for serious programming, however useful such a language would be. In large part this is because any such rival would first need to impress academics who, for the most part, have no clue about what makes a language actually useful.
Posted Mar 8, 2007 3:55 UTC (Thu) by aanno (subscriber, #6082)
[Link]
Posted Mar 8, 2007 5:22 UTC (Thu) by nix (subscriber, #2304)
[Link]
Years ago now, someone (Mike Stump?) ran some benchmarks that showed GCC incurring cache stalls every *twenty instructions* or thereabouts. Small wonder that it slowed down!
Careful moves are now underway (and have been for a while) to migrate objects with simple lifetime rules back into obstacks. The obstacks are still garbage-collected, but the obstack is a *single* GCed object with good cache locality, where the myriads of objects it replaces were not.
I doubt GCC will ever leave GCC, either: for objects with complex lifetime rules, there's really no maintainable alternative. But for the simple ones, using suballocators with better cache locality (like obstacks) is a good idea.
Posted Mar 8, 2007 8:25 UTC (Thu) by pflugstad (subscriber, #224)
[Link]
This has not been the case for several years now. Most GCs (certainly in Java) systems use a
generational/compacting collector. As such, dead objects aren't touched at all. And this was 3+ years ago - it's gotten even better since then. When you do Java, you really do need to re-think how you program.
As someone else said, GC is everywhere these days, even embedded. The ease and clarity of developing in a GC language (Python, Perl, Java, etc), far outweigh the performance penalty you may see with GC. This is especially true for the vast majority of programs where performance is not seriously a concern, such as those with human interactions. I've done a lot C. I've done a lot of C++. I've done Python. I've done Java. I'll take Python/Java 6 days a week (but not twice on Sundays - sometimes you do need performance :-).
Posted Mar 8, 2007 9:32 UTC (Thu) by aanno (subscriber, #6082)
[Link]
Posted Mar 8, 2007 13:35 UTC (Thu) by ncm (subscriber, #165)
[Link]
However, people are working on GC implementations. Expect complaints about cache abuse by scripts, too, soon.
Posted Mar 8, 2007 17:04 UTC (Thu) by njs (subscriber, #40338)
[Link]
I'm told that some of the hottest Java GC techniques actually involve reference counting these days, because you can massively optimize your actual walking -- the only time a cycle can be created is when a reference count is decremented, and thus achieves a number greater than 0. This is pretty rare, and it also tells you that any cycle that was just created must involve that object in particular, so you don't have to tromp through all memory either.
None of this affects your original point, though, because reference counting already trashes caches by itself -- especially in the multiprocessor case, where supposedly read-only access to variables is suddenly triggering cache flushes...
Depending on your cache hierarchy and the characteristics of your GC, you can minimize its impact, though. E.g., in gcc, I thought I remember some trick where you only run the collector between passes, since you know already that that's when everything becomes garbage, and also where it doesn't matter if you trash the cache? Similarly, Graydon was saying something about in initial implementations of firefox's GC, they would just run it after page load, because no-one notices if the browser pauses for 400 milliseconds then, they're just looking at the page.
Long run: build garbage collection into the RAM hardware! That'll work around those pesky cache issues ;-)
GC languages and domination
Posted Mar 8, 2007 7:46 UTC (Thu) by kevinbsmith (subscriber, #4778)
[Link]
On the desktop, I can't think of a single GUI app that I would rather write (or see written) in C or C++ instead of one of the languages mentioned above. Heck, even command-line utilities are often (usually?) written in perl or some other scripting language (not to mention bash). I guess it depends on your definition of "serious" programming.
Like it or not, GC is pervasive, and still increasing in popularity. It just makes sense to have the inexpensive computers do the extra work instead of the expensive programmers.
Posted Mar 8, 2007 12:41 UTC (Thu) by nevyn (subscriber, #33129)
[Link]
For web.
Posted Mar 9, 2007 4:45 UTC (Fri) by aanno (subscriber, #6082)
[Link]
Posted Mar 12, 2007 5:04 UTC (Mon) by ekj (subscriber, .
Posted Mar 9, 2007 14:39 UTC (Fri) by cpeterso (guest, #305)
[Link]
Posted Mar 12, 2007 4:50 UTC (Mon) by ekj (subscriber, #1524)
[Link]
Sure, sure, "just do it correctly" would work, in principle. Except that in *practice* we've been using C for like forever in computer-terms, and *still* the classical memory-managment problems keep coming up, even in well-audited clueful code. So, obviosuly, "just do it correctly" isn't going to solve the problem.
So, who is most likely to improve their ability of handling memory-allocation? Computers (who grow in various ways by leaps and bounds) or human beings (who's been struggling with manual memory-managment in C for decades, and this far seems to be making very very little, if any, progress.)
Also, the overwhelming part of code written does not care about performance. They don't care *enough* to be willing to take the extra hit on development-time needed to do manual memory-managment anyway.
It's not about laziness. I don't particularily care if my employer wishes to hire me for a week to do something in Python, or if he prefers paying me for 2 weeks to solve the same problem in C. (or for that matter a month and solve it in assembler)
My employer cares though. He wants a problem solved. He'll probably opt for the python-version, even if it runs 3 times slower. Especially since he knows that it's a simple thing to re-write any routines that *do* need performance in C if that should turn out to be nessecary.
Posted Mar 13, 2007 13:24 UTC (Tue) by pimlott (guest, #1535)
[Link]
Real-world programs must manage other much more limited resources -- network sockets, database connections, disks -- and any method sufficient to manage them suffices for memory as well.
Memory management is fundamentally much harder.
Posted Mar 15, 2007 5:21 UTC (Thu) by renox (guest, #23785)
[Link]
They got significant improvement on their benchmark by making the VM and the GC communicate.
Of course whether this show real like improvement is anyone guess..
One annoying thing with these papers is that they use copying GCs which doesn't interact well with C-based libraries: they're useful only for Java not the other scripting language which tend to reuse C-based libraries..
Posted Mar 8, 2007 5:56 UTC (Thu) by ranmachan (subscriber, #21283)
[Link]
diff -Naru pagecache-management/fadv.c pagecache-management/fadv.c
--- pagecache-management/fadv.c 2007-03-03 20:02:20.000000000 +0100
+++ pagecache-management/fadv.c 2007-03-04 11:35:56.000000000 +0100
@@ -9,6 +9,7 @@
#include <fcntl.h>
#include <limits.h>
#include <errno.h>
+#include <linux/fadvise.h>
int main(int argc, char *argv[])
{
diff -Naru pagecache-management/pagecache-management.c pagecache-management/pagecache-management.c
--- pagecache-management/pagecache-management.c 2007-03-03 21:14:00.000000000 +0100
+++ pagecache-management/pagecache-management.c 2007-03-04 14:55:27.000000000 +0100
@@ -15,6 +15,7 @@
#include <unistd.h>
#include <dlfcn.h>
#include <limits.h>
+#include <linux/fadvise.h>
#include "sync_file_range.h"
@@ -152,9 +157,12 @@
static ssize_t (*_write)(int fd, const void *buf, size_t count);
static ssize_t (*_pwrite)(int fd, const void *buf, size_t count, off_t offset);
+static size_t (*_fwrite)(const void *ptr, size_t size, size_t nmemb, FILE *stream);
static ssize_t (*_read)(int fd, void *buf, size_t count);
static ssize_t (*_pread)(int fd, void *buf, size_t count, off_t offset);
+static size_t (*_fread)(void *ptr, size_t size, size_t nmemb, FILE *stream);
static int (*_close)(int fd);
+static int (*_fclose)(FILE *fp);
static int (*_dup2)(int oldfd, int newfd);
static int symbols_loaded;
@@ -176,6 +184,10 @@
if (dlerror())
abort();
+ _fwrite = dlsym(handle, "fwrite");
+ if (dlerror())
+ abort();
+
dlerror();
_read = dlsym(handle, "read");
if (dlerror())
@@ -185,10 +197,18 @@
if (dlerror())
abort();
+ _fread = dlsym(handle, "fread");
+ if (dlerror())
+ abort();
+
_close = dlsym(handle, "close");
if (dlerror())
abort();
+ _fclose = dlsym(handle, "fclose");
+ if (dlerror())
+ abort();
+
_dup2 = dlsym(handle, "dup2");
if (dlerror())
abort();
@@ -222,6 +242,22 @@
return (*_pwrite)(fd, buf, count, offset);
}
+size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream)
+{
+ load_symbols();
+ write_was_called(fileno(stream), size*nmemb);
+ return (*_fwrite)(ptr, size, nmemb, stream);
+}
+
+#undef fwrite_unlocked
+
+size_t fwrite_unlocked(const void *ptr, size_t size, size_t nmemb, FILE *stream)
+{
+ load_symbols();
+ write_was_called(fileno(stream), size*nmemb);
+ return (*_fwrite)(ptr, size, nmemb, stream);
+}
+
ssize_t read(int fd, void *buf, size_t count)
{
load_symbols();
@@ -236,6 +272,29 @@
return (*_pread)(fd, buf, count, offset);
}
+size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream)
+{
+ load_symbols();
+ read_was_called(fileno(stream), size*nmemb);
+ return (*_fread)(ptr, size, nmemb, stream);
+}
+
+#undef fread_unlocked
+
+size_t fread_unlocked(void *ptr, size_t size, size_t nmemb, FILE *stream)
+{
+ load_symbols();
+ read_was_called(fileno(stream), size*nmemb);
+ return (*_fread)(ptr, size, nmemb, stream);
+}
+
+int fclose(FILE *fp)
+{
+ load_symbols();
+ close_was_called(fileno(fp));
+ return (*_fclose)(fp);
+}
+
int close(int fd)
{
load_symbols();
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/224829/ | crawl-002 | refinedweb | 2,997 | 58.82 |
Hey,
I am writing a C++ program that uses the "ofstream" function. The output is a C++ file and in the output file I need to use char function.
But only one problem is that i can't add the qutation marks. Here is the code:
#include <fstream.h>
#include <iostream.h>
#include <stdio.h>
#include <stdlib.h>
int main( )
{
char *nme = new char[50];
char exten = ".cpp";
cout<<"Name of the file:\n";
cin>>nme;
int fname = strcat(nme, exten);
ofstream a_file(fname);
a_file<<"#include <iostream.h>\n";
a_file<<"#include <stdlib.h>\n";
a_file<<"\n";
a_file<<"int main( )\n";
a_file<<"{\n";
a_file<<" char a = "
and that is how far i can get because it will not let me add quotation marks to the outputed file. If you can help me please do.Thanks
Forum Rules | http://www.antionline.com/showthread.php?259589-Ofstream-help!!!&mode=hybrid | CC-MAIN-2018-05 | refinedweb | 136 | 79.46 |
this sounds really stupid but looked through my notes several times and cannot find what command line arguements are. could someone give me an example.PLEASE.
Printable View
this sounds really stupid but looked through my notes several times and cannot find what command line arguements are. could someone give me an example.PLEASE.
Command-line arguments are arguments(parameters) passed into the program when it is run from the command prompt (or passed to the program when a file is dragged and dropped onto an executable in Windows for example).
Let's say you have a program called test.exe and you want to run it with some command-line arguments. At the command prompt you could do something like:
test foo.txt bar.txt
... where foo.txt and bar.txt are the two command-line arguments passed into the program.
Provided you're in the correct directory where your exe is located. So adding onto simplification where hk_mp5kpdw left, (for Windows)
say your exe named test.exe is stored in C-drive and screamer folder. Then you first go to the command-prompt(some still call it DOS prompt) and then go to the c:\screamer folder and then do what mp5kpdw suggested. ;)
Incase you knew that and you actually needed a programming example, here it is:
Code:
#include <stdio.h>
int main(int argc, char* argv[]){
/* If arguments have been passed */
if (argc > 0){
/* Prints 1st argument in argv[1] (argv[0] is the name of the .exe file) */
printf("Hello, %s\n",argv[1]);
}
else{
/* If no arguments were passed... */
printf("Anybody out there?\n");
}
return 0;
} | http://cboard.cprogramming.com/c-programming/63371-command-line-arguements-printable-thread.html | CC-MAIN-2015-27 | refinedweb | 271 | 75.4 |
If.
“Part of what we do is clinical proteomics,” Mallick explains. “The goal is to be able to take a drop of blood from a patient, measure it in extreme detail, and based on what we find, diagnose if someone has cancer or not.”
It takes one hour to examine a single drop of a patient’s blood in the mass spectrometer at Cedars-Sinai, and the result creates about 50GB of time-series data. All told, this process generates about 1TB of data per day.
“We have a high-performance computing cluster — I think it’s rated 367th among the top 500 supercomputers — which we use to do sophisticated computational analysis on all the data we collect,” Mallick says. “We want to discover patterns that differentiate a patient group from another. If you can identify who is likely to respond [to a specific therapy], you’ve saved a lot of lives.”
Managing that data had been a problem, however. Many of the solutions Cedars-Sinai tried used tapes as their last-tier medium, which made computational analysis and pattern searches unacceptably slow. Other solutions were cumbersome and made moving data across tiers exceedingly complicated. Adding capacity required significant technical prowess and occasionally forced Cedars-Sinai to break its blood archive in smaller sections. It was time for a change.
“We wanted something that was easy to use, highly available, could scale very easily, and could also handle the performance requirements,” Mallick explains.
For Cedars-Sinai, Isilon offered the perfect solution. In April of last year the Center installed two tiers of Isilon clustered storage systems: an IQ1920 to collect data series from the mass spectrometer and an IQ6000 for long-term storage to support the supercomputing cluster. The 1920 can collect data independently for two to three weeks. When appropriate, the data is pushed to the 6000.
The Cedar-Sinai experience exemplifies the main advantages of a clustered NAS system such as the Isilon IQ. The Isilon solution’s single namespace can seamlessly grow to hundreds of terabytes, and it makes updates easy. Cedars-Sinai’s initial installation was about 20TB, but the total system has grown to almost 300TB. The hospital has been adding 30TB every two months without downtime. Moreover, the cost of the Isilon system was comparable or lower than previous solutions the hospital tried. | http://www.infoworld.com/article/2657054/infrastructure-storage/cedars-sinai-cures-storage-ills-with-clustered-nas.html | CC-MAIN-2017-09 | refinedweb | 389 | 54.22 |
Do you ever do WSDL-first web service development? Regardless of the reason that you do this (e.g. you’re an architectural-purist, your mother didn’t hold you enough), this style of service design typically works fine with BizTalk Server solutions. However, if you decide to build a one-way input service, you’ll encounter an annoying, but understandable error.
Let’s play this scenario out. I’ve hand-built a WSDL that takes in an “employee update” message through a one-way service. That is, no response is needed by the party that invokes the service.
The topmost WSDL node defines some default namespace values and then has a type declaration which describes our schema.
<wsdl:definitions <!-- declare types--> <wsdl:types> <xs:schema <xs:element <xs:complexType> <xs:sequence> <xs:element <xs:element <xs:element </xs:sequence> </xs:complexType> </xs:element> </xs:schema> </wsdl:types>
Next, I defined my input message, port type with an operation that accepts that message, and then a binding that uses that port type.
<!-- declare messages--> <wsdl:message <wsdl:part </wsdl:message> <!-- decare port types--> <wsdl:portType <wsdl:operation <wsdl:input <:operation> </wsdl:binding>
Finally, I created a service declaration that has an endpoint URL selected.
<!-- declare service--> <wsdl:service <wsdl:port <soap:address </wsdl:port> </wsdl:service> </wsdl:definitions>
I copied this WSDL to the root of my web server that so that it has a URL that can be referenced later.
Let’s jump into a BizTalk project now. Note that if you design a service this way (WSDL-first), you CAN use the BizTalk WCF Service Consuming Wizard to generate the schemas and orchestration messaging ports for a RECEIVE scenario. We typically use this wizard to build artifacts to consume a service, but this actually works pretty well for building services as well. Anyway, I’m going to take the schema definition from my WSDL and manually create a new XSD file.
This is the only artifact I need to develop. I deployed the BizTalk project and switched to the BizTalk Administration Console where I will build a receive port/location that hosts a WCF endpoint. First though, I created a one-way Send Port which subscribes to my message’s type property and emits the file to disk.
Next I added a new one-way receive port that will host the service. It uses the WCF-Custom adapter so that I can host the service in-process instead of forcing me to physically build a service to reside in IIS.
On the General tab I set the address to the value from the WSDL (). On the Binding tab I chose the basicHttpBinding. Finally, on the Behavior tab, I added a Service Behavior and selected the serviceMetadata behavior from the list. I set the externalMetadataLocation to the URL of my custom WSDL and flipped the httpGetEnabled value to True.
If everything is configured correctly, the receive location is started, and the BizTalk host is started (and thus, the WCF service host is opened), I can hit the URL of my BizTalk endpoint and see the metadata page.
All that’s left to do is consume this service. Instead of building a custom application that calls this service, I can leverage the WCF Test Client that ships with the .NET Framework. After adding a reference to my BizTalk-hosted service, and invoking the service, two things happened. First, the message is successfully processed by BizTalk and a file is dropped to disk (via my Send Port). But secondly, and most important, my service call resulted in an error:
The one-way operation returned a non-null message with Action=”.
Yowza. While I could technically catch that error in code and just ignore it (since BizTalk processed everything just fine), that’d be pretty lazy. We want to know why this happened! I got this error because a“one way” BizTalk receive location still sends a message back to the caller and my service client wasn’t expecting it..
How do I fix this? Actually, it’s fairly simple. I returned to my hand-built WSDL and added a new, empty message declaration.
<!-- declare messages--> <wsdl:message <wsdl:part </wsdl:message> <wsdl:message
I then made that message the output value of my operation in both my port type and binding.
<!-- decare port types--> <wsdl:portType <wsdl:operation <wsdl:input <wsdl:output <:output> <soap:body </wsdl:output> </wsdl:operation> </wsdl:binding>
After copying the WSDL back to IIS (so that my service’s metadata was up to date), I refreshed the service in the WCF Test Client. I called the service again, and this time, got no error while the file was once again successfully written to disk by the send port.
BizTalk Server, and the .NET Framework in general, have decent, but not great support for WSDL-first development. Therefore, it’s wise to be aware of any gotchas or quirks when going this route.
Categories: BizTalk, General Architecture, SOA, WCF/WF
Hi Richard,
Nice post as usual, I have come across the same thing a few times, and I think that unless you are using the MSMQ binding you have to be a little careful with the one way attribute which is something Ive seen a lot of people not really pay much attention to.
If you arent using MSMQ and send a one way web service call your client only really knows that the data hit IIS but if you get an error which would result in a soap fault the client isnt going to get it and there is no persistance/durability so the message is probably going to be lost.
When you implement the workaround above the trade of is that there is very slightly more latency in the call because the client will wait until the message has passed through the wcf host (probably IIS) and also through the recieve port and is persisted to the message box. So in exchange for this very small change you get the ability to be sure that your message is in biztalk. | https://seroter.wordpress.com/2010/12/05/error-with-one-way-wsdl-operations-and-biztalk-receive-locations/ | CC-MAIN-2017-39 | refinedweb | 1,014 | 59.64 |
Usage¶
To use the fmt library, add
format.h and
format.cc from
a release archive
or the Git repository to your project.
Alternatively, you can build the library with CMake.
If you are using Visual C++ with precompiled headers, you might need to add the line
#include "stdafx.h"
before other includes in
format.cc.
Building the library¶
The included CMake build script can be used to build the fmt library on a wide range of platforms. CMake is freely available for download from.
CMake works by generating native makefiles or project files that can be used in the compiler environment of your choice. The typical workflow starts with:
mkdir build # Create a directory to hold the build output. cd build cmake <path/to/fmt> # Generate native build scripts.
where
<path/to/fmt> is a path to the
fmt repository.
If you are on a *nix system, you should now see a Makefile in the current directory. Now you can build the library by running make.
Once the library has been built you can invoke make test to run the tests.
You can control generation of the make
test target with the
FMT_TEST
CMake option. This can be useful if you include fmt as a subdirectory in
your project but don’t want to add fmt’s tests to your
test target.
If you use Windows and have Visual Studio installed, a
FORMAT.sln
file and several
.vcproj files will be created. You can then build them
using Visual Studio or msbuild.
On Mac OS X with Xcode installed, an
.xcodeproj file will be generated.
To build a shared library set the
BUILD_SHARED_LIBS CMake variable to
TRUE:
cmake -DBUILD_SHARED_LIBS=TRUE ...
Header-only usage with CMake¶
You can add the
fmt library directory into your project and include it in
your
CMakeLists.txt file:
add_subdirectory(fmt)
or
add_subdirectory(fmt EXCLUDE_FROM_ALL)
to exclude it from
make,
make all, or
cmake --build ..
Settting up your target to use a header-only version of
fmt is equaly easy:
target_link_libraries(<your-target> PRIVATE fmt-header-only)
Building the documentation¶
To build the documentation you need the following software installed on your system:
Python with pip and virtualenv
-
Less with
less-plugin-clean-css. Ubuntu doesn’t package the
clean-cssplugin so you should use
npminstead of
aptto install both
lessand the plugin:
sudo npm install -g less less-plugin-clean-css.
First generate makefiles or project files using CMake as described in
the previous section. Then compile the
doc target/project, for example:
make doc
This will generate the HTML documentation in
doc/html.
Android NDK¶
fmt provides Android.mk file that can be used to build the library with Android NDK. For an example of using fmt with Android NDK, see the android-ndk-example repository. | https://fmt.dev/5.1.0/usage.html | CC-MAIN-2019-22 | refinedweb | 463 | 66.74 |
(Pinks) MOD EDIT:Cholistan Experience video [blip]AYHUmQYA[/blip] The video is 30 mins....
//youtu.be/
//youtu.be/
more videos all in good time
My apologies for inconvenience
First of all, I must say a million thnx to Almighty that we got this oppurtunity to be able to go and particpitate in this wonderful cholistan rally experience. We had some real testing time with various vehicular issues but we managed through all, completed the rally and arrived back all in one piece! ALHAMDOLILLAH.
To start with, here are some trivia from our Cholistan Rally 2010 participation
<?xml:namespace prefix = o ns = "urn:schemas-microsoft-com<o:p></o:p>Total distance travelled in LC 2160km (home to home, ISB)<o:p></o:p><o:p></o:p.<o:p></o:p><o:p></o:p>Qualifying total distance 3.7km approx <o:p></o:p><o:p></o:p>Total sleep I had in the total trip of 4 days was approx 14 hours.<o:p></o:p><o:p></o:p>Total registered participating vehicles were 88<o:p></o:p><o:p></o:p>In random draw for qualifying, Desert Devil’s turn was 42nd and mine 79th for starting the qualification run. <o:p></o:p><o:p></o:p>In qualification I and Desert Devil were 35th and 71st out of 88 cars according to fastest times.<o:p></o:p><o:p></o:p>LC qualifying time was 2:44:37, and Pajero 3: 00:50<o:p></o:p><o:p></o:p <o:p></o:p><o:p></o:p>Maximum speed achieved in Qualifying by LC 115km, and 147-148kmph in the rally itself (during stage 1).<o:p></o:p><o:p></o:p>Started LC qualifying run on cold tyre pressure of 20psi. For race day had increased to 21psi due to some long straights at high speed. The tyre pressure had increased to 28 at midway 20minute break.<o:p></o:p><o:p></o:p>The Pajero was at 19psi cold pressure for both qualifying and race day. <o:p></o:p><o:p></o:p>On the way back on M2, finally managed to check the top speed of LC, its engine cuts-out at 190kmph due to a speed limiter. <o:p></o:p><o:p></o:p>Both Desert Devil n I had topped our fuel tanks before race and both finished with way less than quarter tank after 226km. Pajero’s tank is 60 or 70litres, LC 95litres (main fuel tank).<o:p></o:p><o:p></o:p>Despite our best plans, we both did not manage to do any recce of race track before the rally run. So essentially we drove according to what we saw during the race! Some guidance was given by Qasim Saidhi about few dangerous objects/ runs.<o:p></o:p><o:p></o:p>LC had an external roll-cage specially built (by Ehsan Kiani, thnx very much) it behaved perfectly, The pajero had a inside roll cage. Roll cages are now requirement for all participants in the rally. <o:p></o:p><o:p></o:p>Desert Devil managed to bring his Pajero Home safely despite the fact that he had non-functional tachometer before the race start and also the temperature needle packed up within the first few check points of rally. <o:p></o:p><o:p></o:p>We had 11 different check points during the rally, where we had to get stamped our individual rally race cards. <o:p></o:p><o:p></o:p>Saw several desert lizards (6-8", with raised heads looking at us) crossing our racing line.<o:p></o:p><o:p></o:p>Saw several car body parts scattered on track as well on the way, including side mirror, a coil spring and bumper parts<o:p></o:p><o:p></o:p>Also came across a dead donkey, with a part of bumper next to it on track side (obviously some *** driving the vehicle ran into another ***! pardon my language)<o:p></o:p><o:p></o:p>This is all what I can think off from the top of my head. Pics n some vdo’s hopefully later.<o:p></o:p>
Also came across a dead donkey, with a part of bumper next to it on track side (obviously some *** driving the vehicle ran into another ***! pardon my language)
LOL.
great discription
@ desertdevil@ nn
thx to add valuable info abt ur trip and congr8 to come back in one piece.
can any one give detail of nadir magsi vehicle?
Congrats both on an amazing feat. I'm sure it must have been an experience of a lifetime. don't know if you caught the rallying bug yet
pics pics pics
Some pics from my side. These were taken by a basic digital system.
alllaaaaa
That's all for tonight, more later.
good going you guys.... the lc performed quiet good, loving the first picture of it airborne
lovely pictures and good to hear that you guys came back safe and sound....
few from my side......
Pajero with roll cage
at the registration desk......
Kashif (Fishak) helping in oil change before the Recee...
Recee time......
Burhan (Laparwah) posing with Drawar Fort | https://www.pakwheels.com/forums/t/cholistan-rally-2010-experience/105309 | CC-MAIN-2017-04 | refinedweb | 887 | 72.46 |
High current consumption in deep sleep: 80 mA
Hi, I'm testing GPy on deep sleep mode, and the current consumption is around 80 mA. I was expecting something in the uA range, do you know what's wrong?
I'm invoking the deep sleep with:
import machine machine.deepsleep(10000)
I have a boot.py file with WiFi configuration. I'm using GPy v1.0.
@averri said in High current consumption in deep sleep: 80 mA:
OSError: the requested operation failed
We just mitigated this through [1] by invoking
lte.deinit()with the "reset" operation like
lte.deinit(detach=False, reset=True)
[1]
- serafimsaudade last edited by serafimsaudade
I'm having the same problem. When I arrive at home a will insert a SIM card and check if the current consumption decrease.
Update:
I have insert a SIM card. And make lte.deinit() and de current consumption in deepsleep drop to 0.022 mA.
@robert-hh The LTE deinit is not working, please see the message above.
It does not work. I'm using the following code:
import pycom import time import json import machine from mqtt import MQTTClient from network import WLAN, LTE def sleep(millis): print('Going to sleep...') lte = LTE() lte.deinit() machine.deepsleep(millis) def handler(topic, data): print(topic, data) payload = json.loads(data.decode('utf-8')) msg = payload['msg'] if 'msg' in payload else None if msg == 'deepSleep': # Here we enter deep sleep. sleep(10000) def connect_mqtt(): """Connects to the MQTT queue.""" print('Connecting to the MQTT server...') c = MQTTClient('alex-device', 'xxx', port=1883, user='xxx', password='xxx') c.set_callback(handler) c.connect() c.publish('inTest', 'Device connected!') c.subscribe('outTest') return c def wait_for_wlan(): while True: if WLAN().mode() == WLAN.STA: break else: time.sleep(0.5) def main(): wait_for_wlan() c = connect_mqtt() while True: time.sleep(0.5) # Check MQTT message. c.check_msg() if __name__ == '__main__': main()
The result:
Going to sleep... Traceback (most recent call last): File "main.py", line 112, in <module> File "main.py", line 95, in main File "mqtt.py", line 206, in check_msg File "mqtt.py", line 193, in wait_msg File "main.py", line 34, in handler File "main.py", line 15, in sleep OSError: the requested operation failed Pycom MicroPython 1.18.0.r1 [v1.8.6-849-9569a73] on 2018-07-20; GPy with ESP32 Type "help()" for more information. | https://forum.pycom.io/topic/3832/high-current-consumption-in-deep-sleep-80-ma/8 | CC-MAIN-2020-50 | refinedweb | 396 | 70.7 |
Trying to significantly improve your company’s ability to build and run good software? Forget Docker, public cloud, Kubernetes, service meshes, Cloud Foundry, serverless, and the rest of it. Over the years, I’ve learned the most important place you should start: continuous integration and delivery pipelines. Arguably, “apps on pipeline” is the most important “transformation” metric to track. Not “deploys per day” or “number of microservices.” It’s about how many apps you’ve lit up for repeatable, automated deployment. That’s a legit measure of how serious you are about being responsive and secure.
All this means I needed to get smarter with Concourse, one of my favorite tools for CI (and a little CD). I decided to build an ASP.NET Core app, and continuously integrate and deliver it to a Cloud Foundry environment running in AWS. Let’s go!
First off, I needed an app. I spun up a new ASP.NET Core Web API project with a couple REST endpoints. You can grab the source code here. Most of my code demos don’t include tests because I’m in marketing now, so YOLO, but a trustworthy pipeline needs testable code. If you’re a .NET dev, xUnit is your friend. It’s maintained by my friend Brad, so I basically chose it because of peer pressure. My .csproj file included a few references to bring xUnit into my project:
- “Microsoft.NET.Test.Sdk” Version=”15.7.0″
- “xunit” Version=”2.3.1″
- “xunit.runner.visualstudio” Version=”2.3.1″
Then, I created a class to hold the tests for my web controller. I included one test with a basic assertion, and another “theory” with an input data set. These are comically simple, but prove the point!
public class TestClass { private ValuesController _vc; public TestClass() { _vc = new ValuesController(); } [Fact] public void Test1(){ Assert.Equal("pivotal", _vc.Get(1)); } [Theory] [InlineData(1)] [InlineData(3)] [InlineData(20)] public void Test2(int value) { Assert.Equal("public", _vc.GetPublicStatus(value)); } }
When I ran dotnet test against the above app, I got an expected error because the third inline data source led to a test failure, since my controller only returns “public” companies when the input value is between 1 and 10. Commenting the offending inline data source led to a successful test run.
Ok, the app was done. Now, to put it on a pipeline. If you’ve ever used shameful swear words when wrangling your CI server, maybe it’s worth joining all the folks who switched to Concourse. It’s a pretty straightforward OSS tool that uses a declarative model and containers for defining and running pipelines, respectively. Getting started is super simple. If you’re running Docker on your desktop, that’s your easiest route. Just grab this Docker Compose file from the Concourse GitHub repo. I renamed mine to docker-compose.yml, jumped into a Terminal session, switched to the folder holding this YAML file, and ran docker-compose up -d. After a second or two, I had a PostgreSQL server (for state) and a Concourse server. PROVE IT, you say. Hit localhost:8080, and you’ll see the Concourse dashboard.
Besides this UX, we interface with Concourse via a CLI tool called fly. I downloaded it from here. I then used fly to add my local environment as a “target” to manage. Instead of plugging in the whole URL every time I interacted with Concourse, I created an alias (“rs”) using fly -t rs login -c. If you get a warning to sync your version of fly with your version of Concourse, just enter fly -t rs sync and it gets updated. Neato.
Next up? The pipeline. Pipelines are defined in YAML and are made up of resources and jobs. One of the great things about a declarative model, is that I can run my CI tests against any Concourse by just passing in this (source-controlled) pipeline definition. No point-and-ciick configurations, no prerequisite components to install. Love it. First up, I defined a couple resources. One was my GitHub repo, the second was my target Cloud Foundry environment. In the real world, you’d externalize the Cloud Foundry credentials, and call out to files to build the app, etc. For your benefit, I compressed to a single YAML file.
resources: - name: seroter-source type: git source: uri: branch: master - name: pcf-on-aws type: cf source: api: skip_cert_check: false username: XXXXX password: XXXXX organization: seroter-dev space: development
Those resources tell Concourse where to get the stuff it needs to run the jobs. The first job used the GitHub resource to grab the source code. Then it used the Microsoft-provided Docker image to run the dotnet test command.
jobs: - name: aspnetcore-unit-tests plan: - get: seroter-source trigger: true - task: run-tests privileged: true config: platform: linux inputs: - name: seroter-source image_resource: type: docker-image source: repository: microsoft/aspnetcore-build run: path: sh args: - -exc - | cd ./seroter-source dotnet restore dotnet test
Concourse isn’t really a CD tool, but it does a nice basic job of getting code to a defined destination. The second job deploys the code to Cloud Foundry. It also uses the source code resource and only fires if the test job succeeds. This ensures that only fully-tested code makes its way to the hosting environment. If I were being more responsible, I’d take the results of the test job, drop it into an artifact repo, and then use that artifact for deployment. But hey, you get the idea!
jobs: - name: aspnetcore-unit-tests [...] - name: deploy-to-prod plan: - get: seroter-source trigger: true passed: [aspnetcore-unit-tests] - put: pcf-on-aws params: manifest: seroter-source/manifest.yml
That was it! I was ready to deploy the pipeline (pipeline.yml) to Concourse. From the Terminal, I executed fly -t rs set-pipeline -p test-pipeline -c pipeline.yml. Immediately, I saw my pipeline show up in the Concourse Dashboard.
After I unpaused my pipeline, it fired up automatically.
Remember, my job specified a Microsoft-provided container for building the app. Concourse started this job by downloading the Docker image.
After downloading the image, the job kicked off the dotnet test command and confirmed that all my tests passed.
Terrific. Since my next job was set to trigger when the first one succeeded, I immediately saw the “deploy” job spin up.
This job knew how to publish content to Cloud Foundry, and used the provided parameters to deploy the app in a few seconds. Note that there are other resource types if you’re not a Cloud Foundry user. Nobody’s perfect!
The pipeline run was finished, and I confirmed that the app was actually deployed.
Finished? Yes, but I wanted to see a failure in my pipeline! So, I changed my xUnit tests and defined inline data that wouldn’t pass. After committing code to GitHub, my pipeline kicked off automatically. Once again it was tested in the pipeline, and this time, failed. Because it failed, the next step (deployment) didn’t happen. Perfect.
If you’re looking for a CI tool that people actually like using, check out Concourse. Regardless of what you use, focus your energy on getting (all?) apps on pipelines. You don’t do it because you have to ship software every hour, as most apps don’t need it. It’s about shipping whenever you need to, with no drama. Whether you’re adding features or patching vulnerabilities, having pipelines for your apps means you’re actually becoming a customer-centric, software-driven company.
Categories: .NET, ASP.NET Web API, Cloud, Cloud Foundry, DevOps, Docker, General Architecture, Microservices, OSS, Pivotal | https://seroter.wordpress.com/2018/06/06/creating-a-continuous-integration-pipeline-in-concourse-for-a-test-infused-asp-net-core-app/?shared=email&msg=fail | CC-MAIN-2019-39 | refinedweb | 1,281 | 66.33 |
10 April 2013 05:50 [Source: ICIS news]
By Pearl Bantillo
SINGAPORE (ICIS)--Saudi Arabia’s National Chemical Carriers Ltd (NCC) will buy out its 50:50 joint venture partner – Norwegian logistics firm Odfjell – in the NCC Odfjell Chemical Tankers JLT (NOCT) by June, the Saudi firm’s parent Bahri said late on Tuesday.
This would entail dissolution of 18 ships pool of 40,000 deadweight tonne (dwt) or higher, according to Bahri, also known as the National Shipping Company of Saudi Arabia, in a filing to the Saudi Stock Exchange.
“It is also agreed that NCC will assume the commercial management of the two 75,000dwt large chemical tankers owned separately by NCC and Odfjell, which are presently under construction,” Bahri said.
NOCT, which was formed in 2009, will be a 100% subsidiary of NCC from 1 June 2013. NCC will acquire Odfjell’s 50% in NOCT at net book value, equivalent to
Saudi Riyal (SR) 1.7m ($453,333) at the end of last year, it said. ?xml:namespace> NCC currently owns 23 vessels, 11 of which are jointly operated with Odfjell under NOCT, while three are operating under bareboat agreements and nine are under time charter agreements, Bahri said.
The acquisition is in line with NCC’s aim of becoming a full-fledged operator in the chemical tanker market “in order to serve the expansions in petrochemical production and export from the Arabian Gulf region in general and Saudi Arabia in particular,” it said.
NCC is pursuing major fleet expansions and expects to receive its 75,000dwt large chemical tanker by the end of the year. The tanker is currently being built by South Korean Daewoo Shipbuilding & Marine Engineering Co, Bahri said.
Bahri owns 80% of NCC, while the remaining 20% is held by Saudi | http://www.icis.com/Articles/2013/04/10/9656882/bahri-subsidiary-buys-out-odfjell-in-saudi-shipping-joint-venture.html | CC-MAIN-2014-52 | refinedweb | 298 | 53.75 |
String and Character equality in Swift 4 :
In swift, we can compare two strings or two characters using ‘equal to ’ and ‘not equal to’ operators ( == and != ).Let’s explore how strings and characters are compared in swift :
Check if two strings and characters are equal or not :
We can use ’==’ or ‘!=’ to check if two strings are equal or not :
import UIKit let strOne = "Hello World !!" let strTwo = "Hello World !!" if strOne == strTwo { print ("Both strings are equal") }else{ print ("Strings are not equal") }
It will print :
Both strings are equal
Similarly for characters :
import UIKit let charOne = "A" let charTwo = "A" if charOne == charTwo { print ("Both characters are equal") }else{ print ("Characters are not equal") }
It will print the same :
Both characters are equal
How equality is determined :
Two String or Character values are equal only if their extended grapheme clusters are canonically equivalent. Means , in swift we can represent one character as a extended grapheme cluster which is one or more unicode scalar sequence. Even if two values are look same but they are not canonically equivalent, they will be considered as not equal. i.e. é can be denoted as “\u{E9}” or “\u{65}\u{301}“. Both are canonically equivalent. Similarly ‘A’ can be represent as “\u{41}” (english) or “\u{410}” (Russian) . They are visually same but not canonically equal.
import UIKit let charOne = "\u{E9}" let charTwo = "\u{65}\u{301}" let charThree = "\u{41}" let charFour = "\u{410}" print ("charOne ",charOne) print ("charTwo ",charTwo) if(charOne == charTwo){ print ("Above characters are equal ") }else{ print ("Above characters are not equal") } print ("charThree ",charThree) print ("charFour ",charFour) if(charThree == charFour){ print ("Above characters are equal ") }else{ print ("Above characters are not equal") }
It will give the following output :
charOne é charTwo é Above characters are equal charThree A charFour А Above characters are not equal | https://www.codevscolor.com/string-character-equality-swift-4 | CC-MAIN-2021-10 | refinedweb | 310 | 63.53 |
Error 4 The name 'ConfigurationManager' does not exist in the current context
Are you trying to use the class ConfigurationManager in your .NET desktop application and getting the error "Error 4 The name 'ConfigurationManager' does not exist in the current context"? Read this post to find how to resolve this error and how to use ConfigurationManager class in place of old ConfigurationSettings class.
I have been using the old ConfigurationSettings in many of my old .NET applications to read configuration data from the app.config file. I knew this class was outdated and VS.NET kept reminding me about replacing it with the new ConfigurationManager class. Since the old code worked well, I did not feel the need to replace it with the new class. So, I left all my old code as it is.
Today I was developing a Google+ API library for .NET and had to read few configuration from the config file. I thought I would use the new ConfigurationManager class to read the data from the app.config file instead of sticking to the old class.
I typed the class ConfigurationManager and expected the intellisense to help me how to proceed. Surprisingly, intellisense complained it could not recognize and it suggested me to generate a new class with that name. I double checked and confirmed I am using the right class name. Everything looked alright and still Visual Studio could not recognize the class.
A quick look at MSDN showed me the System assembly where this class is defined. I looked at the project references and found this assembly is not referenced in the project references. Adding a reference to System.configuration.dll solved this problem.
Are you also getting the error Error 4 The name 'ConfigurationManager' does not exist in the current context?
Try the following steps:
Once you add a reference to System.Configuration, you should be able to use ConfigurationManager class, which is part of the System.Configuration namespace. So, remember to import the namespace as shown below:
C# Example:
using System.Configuration;
VB.NET Example:
imports System.Configuration
Once you add a reference to the System.Configuration assemly and use the namespace as shown above, you should be able to use the ConfigurationManager without any error. Here is an example of how to use it to read configuration settings from app settings file.
Example:
string customerId = ConfigurationManager.AppSettings["CustomerId"];
Why micosoft dont provide ConfigurationManager class in windows project. but it is available by default in asp.net | https://www.dotnetspider.com/resources/43418-Error-The-name-ConfigurationManager-does-not.aspx | CC-MAIN-2022-05 | refinedweb | 415 | 50.33 |
Symbols, the newest JavaScript primitive, bring a few benefits to the language and are particularly useful when used as object properties. But, what can they do for us that strings cannot?
Before we explore symbols too much let’s first look at some JavaScript features which many developers might not be aware of.
There are essentially two types of values in JavaScript. The first type is primitives, and the second type is objects (which also includes functions). Primitive values include simple value types such as numbers (which includes everything from integers to floats to
Infinity to
NaN), booleans, strings,
undefined, and
null (note: even though
typeof null === 'object',
null is a still primitive value).
Primitive values are also immutable. They can’t be changed. Of course, a variable with a primitive assigned can be reassigned. For example, when you write the code
let x = 1; x++;, you've reassigned the variable
x. But, you haven't mutated the primitive numeric value of
1.
Some languages, such as C, have the concept of pass-by-reference and pass-by-value. JavaScript sort of has this concept too, though, it’s inferred based on the type of data being passed around. If you ever pass a value into a function, reassigning that value will not modify the value in the calling location. However, if you modify a non-primitive value, the modified value will also be modified where it has been called from.
Consider the following example:
function primitiveMutator(val) {
val = val + 1;
}
let x = 1;
primitiveMutator(x);
console.log(x); // 1
function objectMutator(val) {
val.prop = val.prop + 1;
}
let obj = { prop: 1 };
objectMutator(obj);
console.log(obj.prop); // 2
Primitive values (except for the mystical
NaN value) will always be exactly equal to another primitive with an equivalent value. Check it out here:
const first = "abc" + "def";
const second = "ab" + "cd" + "ef";
console.log(first === second); // true
However, constructing equivalent non-primitive values will not result in values which are exactly equal. We can see this happening here:
const obj1 = { name: "Intrinsic" };
const obj2 = { name: "Intrinsic" };
console.log(obj1 === obj2); // false
// Though, their .name properties ARE primitives:
console.log(obj1.name === obj2.name); // true
Objects play an elemental role in the JavaScript language. They’re used everywhere. They’re often used as collections of key/value pairs. However, this is a big limitation of using them in this manner: Until symbols existed, object keys could only be strings. If we ever attempt to use a non-string value as a key for an object, the value will be coerced to a string. We can see this feature here:
const obj = {};
obj.foo = 'foo';
obj['bar'] = 'bar';
obj[2] = 2;
obj[{}] = 'someobj';
console.log(obj);
// { '2': 2, foo: 'foo', bar: 'bar',
'[object Object]': 'someobj' }
Note: It’s slightly off topic, but the
Map data structure was created in part to allow for key/value storage in situations where a key is not a string.
Now that we know what a primitive value is, we’re finally ready to define what a symbol is. A symbol is a primitive which cannot be recreated. In this case a symbols is similar to an object as creating multiple instances will result in values which are not exactly equal. But, a symbol is also a primitive in that it cannot be mutated. Here is an example of symbol usage:
const s1 = Symbol();
const s2 = Symbol();
console.log(s1 === s2); // false
When instantiating a symbol there is an optional first argument where you can choose to provide it with a string. This value is intended to be used for debugging code, it otherwise doesn’t really affect the symbol itself.
const s1 = Symbol('debug');
const str = 'debug';
const s2 = Symbol('xxyy');
console.log(s1 === str); // false
console.log(s1 === s2); // false
console.log(s1); // Symbol(debug)
Symbols have another important use. They can be used as keys in objects! Here is an example of using a symbol as a key within an object:
const obj = {};
const sym = Symbol();
obj[sym] = 'foo';
obj.bar = 'bar';
console.log(obj); // { bar: 'bar' }
console.log(sym in obj); // true
console.log(obj[sym]); // foo
console.log(Object.keys(obj)); // ['bar']
Notice how they are not returned in the result of
Object.keys(). This is, again, for the purpose of backwards compatibility. Old code isn't aware of symbols and so this result shouldn't be returned from the ancient
Object.keys() method.
At first glance, this almost looks like symbols can be used to create private properties on an object! Many other programming languages have hidden properties in their classes and this omission has long been seen as a shortcoming of JavaScript.
Unfortunately, it is still possible for code which interacts with this object to access properties whose keys are symbols. This is even possible in situations where the calling code does not already have access to the symbol itself. As an example, the
Reflect.ownKeys() method is able to get a list of all keys on an object, both strings and symbols alike:
function tryToAddPrivate(o) {
o[Symbol('Pseudo Private')] = 42;
}
const obj = { prop: 'hello' };
tryToAddPrivate(obj);
console.log(Reflect.ownKeys(obj));
// [ 'prop', Symbol(Pseudo Private) ]
console.log(obj[Reflect.ownKeys(obj)[1]]); // 42
Note: There is currently work being done to tackle the issue of adding private properties to classes in JavaScript. The name of this feature is called Private Fields, and although this won’t benefit all objects, it will benefit objects which are class instances. Private Fields are available as of Chrome 74.
Symbols may not directly benefit JavaScript for providing private properties to objects. However, they are beneficial for another reason. They are useful in situations where disparate libraries want to add properties to objects without the risk of having name collisions.
Consider the situation where two different libraries want to attach some sort of metadata to an object. Perhaps they both want to set some sort of identifier on the object. By simply using the two character string
id as a key, there is a huge risk that multiple libraries will use the same key.
function lib1tag(obj) {
obj.id = 42;
}
function lib2tag(obj) {
obj.id = 369;
}
By making use of symbols, each library can generate their required symbols upon instantiation. Then the symbols can be checked on objects, and set to objects, whenever an object is encountered.
const library1property = Symbol('lib1');
function lib1tag(obj) {
obj[library1property] = 42;
}
const library2property = Symbol('lib2');
function lib2tag(obj) {
obj[library2property] = 369;
}
For this reason it would seem that symbols do benefit JavaScript.
However, you may be wondering, why can’t each library simply generate a random string, or use a specially namespaced string, upon instantiation?
const library1property = uuid(); // random approach
function lib1tag(obj) {
obj[library1property] = 42;
}
const library2property = 'LIB2-NAMESPACE-id'; // namespaced approach
function lib2tag(obj) {
obj[library2property] = 369;
}
Well, you’d be right. This approach is actually pretty similar to the approach with symbols. Unless two libraries would choose to use the same property name, then there wouldn’t be a risk of overlap.
At this point the astute reader would point out that the two approaches haven’t been entirely equal. Our property names with unique names still have a shortcoming: their keys are very easy to find, especially when code runs to either iterate the keys or to otherwise serialize the objects. Consider the following example:
const library2property = 'LIB2-NAMESPACE-id'; // namespaced
function lib2tag(obj) {
obj[library2property] = 369;
}
const user = {
name: 'Thomas Hunter II',
age: 32
};
lib2tag(user);
JSON.stringify(user);
// '{"name":"Thomas Hunter II","age":32,"LIB2-NAMESPACE-id":369}'
If we had used a symbol for a property name of the object then the JSON output would not contain its value. Why is that? Well, just because JavaScript gained support for symbols doesn’t mean that the JSON spec has changed! JSON only allows strings as keys and JavaScript won’t make any attempt to represent symbol properties in the final JSON payload.
We can easily rectify the issue where our library object strings are polluting the JSON output by making use of
Object.defineProperty():
const library2property = uuid(); // namespaced approach
function lib2tag(obj) {
Object.defineProperty(obj, library2property, {
enumerable: false,
value: 369
});
}
const user = {
name: 'Thomas Hunter II',
age: 32
};
lib2tag(user);
// '{"name":"Thomas Hunter II",
"age":32,"f468c902-26ed-4b2e-81d6-5775ae7eec5d":369}'
console.log(JSON.stringify(user));
console.log(user[library2property]); // 369
String keys which have been “hidden” by setting their
enumerable descriptor to false behave very similarly to symbol keys. Both are hidden by
Object.keys(), and both are revealed with
Reflect.ownKeys(), as seen in the following example:
const obj = {};
obj[Symbol()] = 1;
Object.defineProperty(obj, 'foo', {
enumberable: false,
value: 2
});
console.log(Object.keys(obj)); // []
console.log(Reflect.ownKeys(obj)); // [ 'foo', Symbol() ]
console.log(JSON.stringify(obj)); // {}
At this point we’ve nearly recreated symbols. Both our hidden string properties and symbols are hidden from serializers. Both properties can be extracted using the
Reflect.ownKeys() method and are therefor not actually private. Assuming we use some sort of namespace / random value for the string version of the property name then we've removed the risk of multiple libraries accidentally having a name collision.
But, there’s still just one tiny difference. Since strings are immutable, and symbols are always guaranteed to be unique, there is still the potential for someone to generate every single possible string combination and come up with a collision. Mathematically this means symbols do provide a benefit that we just can’t get from strings.
In Node.js, when inspecting an object (such as using
console.log()), if a method on the object named
inspect is encountered, that function is invoked and the output is used as the logged representation of the object. As you can imagine, this behavior isn't expected by everyone and the generically-named
inspect method often collides with objects created by users. There is now a symbol available for implementing this functionality and is available at
require('util').inspect.custom. The
inspect method is deprecated in Node.js v10 and entirely ignored in v11. Now no one will ever change the behavior of inspect by accident!
Here’s an interesting approach that we can use to simulate private properties on an object. This approach will make use of another JavaScript feature available to us today: proxies. A proxy essentially wraps an object and allows us to interpose on various interactions with that object.
A proxy offers many ways to intercept actions performed on an object. The one we’re interested in affects when an attempt at reading the keys of an object occurs. I’m not going to entirely explain how proxies work, so if you’d like to learn more, check out our other post: JavaScript Object Property Descriptors, Proxies, and Preventing Extension.
We can use a proxy to then lie about which properties are available on our object. In this case we’re going to craft a proxy which hides our two known hidden properties, one being the string
_favColor, and the other being the symbol assigned to
favBook:
It’s easy to come up with the
_favColor string: just read the source code of the library. Additionally, dynamic keys (e.g., the
uuid example from before) can be found via brute force. But without a direct reference to the symbol, no one can access the 'Metro 2033' value from the
proxy object.
Node.js Caveat: There is a feature in Node.js which breaks the privacy of proxies. This feature doesn’t exist in the JavaScript language itself and doesn’t apply in other situations, such as a web browser. It allows one to gain access to the underlying object when given a proxy. Here is an example of using this functionality to break the above private property example:
const [originalObject] = process
.binding('util')
.getProxyDetails(proxy);
const allKeys = Reflect.ownKeys(originalObject);
console.log(allKeys[3]); // Symbol(fav book)
We would now need to either modify the global
Reflect object, or modify the
util process binding, to prevent them from being used in a particular Node.js instance. But that's one heck of a rabbit hole. If you're interested in tumbling down such a rabbit hole, check out our other blog post: Protecting your JavaScript APIs. | https://www.tefter.io/bookmarks/83151/readable | CC-MAIN-2019-47 | refinedweb | 2,051 | 56.76 |
:
So how do you support this sort of scenario using the Entity Framework?
When designing the solution we need to remember 2 key points.
Essentially these two points are at odds with one another.
The solution is to clone Entities whenever we read from the Cache, this way attaching clones won't affect any other threads.
If this was a webforms solution we would probably want to write something like this:
var customer = new Customer{ Firstname = txtFirstname.Text, Surname = txtSurname.Text, Email = txtEmail.Text, Street = txtStreet.Text, City = txtCity.Text, State = statesCache.GetSingle( s => s.ID = Int32.Parse(ddState.SelectedValue) ), Zip = txtZip.Text } ctx.AddToCustomers(customer); ctx.SaveChanges();
But this has one big problem. When you Add the customer to the ObjectContext the cloned State is added too. If we do this the Entity Framework thinks it needs to insert the State into the database. Which we don't want to do.
So we have to tell the Entity Framework that the cloned State is already in the database by using AttachTo(...):
var state = statesCache.GetSingle( s => s.ID = Int32.Parse(ddState.SelectedValue) );
// See Tip 13 to avoid specifying the EntitySet // as a string ctx.AttachTo("States", s); Then we can go ahead and build the customer: var customer = new Customer{ Firstname = txtFirstname.Text, Surname = txtSurname.Text, Email = txtEmail.Text, Street = txtStreet.Text, City = txtCity.Text, State = state, Zip = txtZip.Text } ctx.SaveChanges();
If you are alert you may have noticed that I no longer call AddToCustomers(...).
Why? Well when you build a relationship to something that is already in the context (State = state), the customer gets added automatically.
Now, at the point SaveChanges() is called, only the Customer is saved to the database. The State isn't persisted at all, because the Entity Framework is convinced it hasn't changed.
Interestingly we can use the fact that the State isn't persisted to our advantage.
Because the key property of the State is all the Entity Framework actually needs to build the relationships, it doesn't matter if all the other properties are wrong, that key property is all we actually need to clone.
I.e. our cloning code can be very shallow:
public State Clone(State state) { return new State {ID = state.ID}; }
Or as a lambda something like this:
var cloner = (State s) => new State {ID = s.ID};
So long as we don't intend to modify the clone, this is all we actually need. Now we know what we want, it's pretty easy to write a very simple generic class that provides caching and 'clone on read' services:
public class CloningCache<T> where T : class { private List<T> _innerData; private Func<T, T> _cloner;
public CloningCache(IEnumerable<T> source, Func<T, T> cloner) { _innerData = source.ToList(); _cloner = cloner; } public T GetSingle(Func<T, bool> predicate) { lock (_innerData) { return _innerData .Where(predicate) .Select(s => _cloner(s)) .Single(); } } }
Notice that the GetSingle(...) method clones the results it finds.
And using this cloning cache is very simple:
var statesCache = new CloningCache<State>( ctx.States, (State s) => new State {ID = s.ID} );
The first parameter to the constructor is the data to cache (i.e. all the States in the database), and the second parameter is how to implement the cloning we need to safely use the cache across multiple ObjectContexts.
Once you've initialized this cache (probably in the Global.asax) you can dump it in a static variable for wherever you need to access this reference data.
Let me know if anything is unclear or you have any questions. | http://blogs.msdn.com/b/alexj/archive/2009/04/22/tip-14-caching-entity-framework-reference-data.aspx | CC-MAIN-2015-06 | refinedweb | 592 | 68.16 |
From: Ruediger Berlich (ruediger.berlich_at_[hidden])
Date: 2008-08-05 18:43:21
Hi John,
I get no warnings whatsoever for this code with gcc 4.3.1 on an OpenSUSE
11 / 64 bit box.
Best, Ruediger
John Maddock wrote:
> Folks I have a large amount of code that generates warnings of the kind:
>
> t.cpp:6: warning: shadowing built-in function `double
> std::acos(double)'
>
> Actually many thousands of these warnings when building with g++ -Wshadow
> :-(
>
> Normally I would just fix the code, but in this case I don't see how, I
> have code that looks like:
>
> #include <cmath>
>
> template <class T>
> T f(T v)
> {
> // We want acos found via ADL in case T is a class type:
> using std::acos;
> return acos(v);
> }
>
> int main()
> {
> f(2.0);
> return 0;
> }
>
> Which just on it's own produces:
>
> $ g++ -I. -Wall -Wshadow -c t.cpp 2>&1 | less
>
> t.cpp: In function `void f(T)':
> t.cpp:6: warning: shadowing built-in function `double std::acos(double)'
> t.cpp: In function `void f(T) [with T = double]':
> t.cpp:12: instantiated from here
> t.cpp:6: warning: shadowing built-in function `double std::acos(double)'
> t.cpp:12: instantiated from here
>
> But I can't see any other way to code this... certainly using a "using
> namespace std;" in place of the "using std::functionname;" is *not* a
> valid fix (it can lead to lookup ambiguities).
>
> Any ideas folks?
>
> Thanks, John.
>
> _______________________________________________
> Unsubscribe & other changes:
>
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2008/08/140715.php | CC-MAIN-2021-43 | refinedweb | 270 | 77.13 |
Mathematics
for
Managerial, Life and Social Sciences
University Preparation
ePrep Course
Mathematics for Managerial, Life and Social Sciences is one of the ten specially designed ePrep courses meant to help NSFs, NSmen and others to better prepare themselves for university studies, whether in a local university in Singapore, or in a university overseas.
Poor comprehension of mathematics is often listed as the major cause of poor academic performance in universities. This is true not only in the case of engineering and physical science students. It has been found to be true among the managerial, life and social science students as well.
This Mathematics for Managerial, Life and Social Sciences ePrep course is developed in collaboration with the publishers of the textbook, Applied Mathematics for Managerial, Life and Social Sciences by Soo T Tan. The textbook comes free with the course, together with lots of good learning materials provided by the publishers. You can see some samples materials below. After studying this course, you will have built a very strong foundation in mathematics that will enable you to deal with the mathematical requirement of your university studies.
In the course site, good learning materials on a few other subjects are also made available. Samples are also provided below. Together, they enable a student to be very well prepared for their university studies, in any discipline.
A retired NTU professor is acting as the tutor. He can be reached via email or WhatsApp messaging.
Audio: Intro to Mathematics for Managerial, Life and Social Sciences.
Main Course Contents
- FUNDAMENTALS OF ALGEBRA.
- FUNCTIONS AND THEIR GRAPHS.
- EXPONENTIAL AND LOGARITHMIC FUNCTIONS.
- MATHEMATICS OF FINANCE.
- SYSTEMS OF LINEAR EQUATIONS AND MATRICES
- LINEAR PROGRAMMING.
- SETS AND PROBABILITY.
- ADDITIONAL TOPICS IN PROBABILITY.
- THE DERIVATIVE.
- APPLICATIONS OF THE DERIVATIVE.
- INTEGRATION.
- CALCULUS OF SEVERAL VARIABLES.
The details of the topics are given here.
Learning materials for all chapters are provided, but for the purpose of certification, a student has to pass the tests for the first three chapters and any other three from the remaining nine chapters.
What You Get in this Course
I. Free Textbook
“Applied Mathematics for Managerial, Life and Social Sciences” is a very popular mathematics textbook for those doing managerial, life or social sciences degrees.
II. Free Consultation
A retired NTU professor is acting as the tutor. You can consult him via email or WhatsApp, even beyond the official course duration.
III. Materials Online
1 PowerPoint and video files.
2 Answers/solutions to all questions/problems in the textbook.
3 Online exercises.
4 Problems and solutions in files.
5 Bonus learning materials in geometry, trigonometry, more calculus, probability and statistics, as well as on other subjects such as physics, business finance, corporate finance, economics, engineering economy, biotechnology, life science, business ethics, engineering ethics, Python programming and psychology.
6 Samples of main course materials and bonus course materials are provided below.
IV. Digital Certificate
A digital certificate will be issued if you have successfully completed this ePrep course and passed all the tests at the end of each of the ten compulsory chapters.
Samples of Course Materials
1. Video Lesson (Logarithmic Function)
This short video lesson illustrates the relationship between base and exponent and the resulting number expressed in exponential form or logarithmic form.
2. Question and Answer (Linear Programming)
Question: A financier plans to invest up to $500,000 in two projects. Project A yields a return of 10% on the investment whereas project B yields a return of 15% on the investment. Because investment in project B is riskier than the investment in project A, the financier has decided that the investment in project B should not exceed 40% of the total investment. How much should she invest in each project in order to maximize the return on her investment?
Answer: Let x and y denote the amount (in thousands of dollars) to be invested in project A and project B, respectively. Since the amount available for investment is up to $500,000, we have
x + y ≤ 500
Next, the condition on the allocation of the funds implies that
y ≤ 0.4(x + y), –0.4x + 0.6y ≤ 0, or –2x + 3y ≤ 0
The linear programming problem at hand is
Maximize P = 0.1x + 0.15y subject to
x + y ≤ 500
–2x + 3y ≤ 0
x ≥ 0, y ≥ 0
Samples of Bonus Materials
1. Video Lesson on Business Finance (Intrinsic Value Vs Market Value)
This short video explains the differences between market price and intrinsic value of an asset such as stock and bond.
More on Business Finance ePrep Course
2. Video Lesson on Physics (Positions and Velocities)
This short video lesson illustrates how velocity-time graph can be derived from position-time graph.
More on Physics ePrep Course
3. Question and Solution on Probability & Statistics (Probability Distribution)
Question: Suppose that the unemployment rate in a given community is 7%. Four households are randomly selected to be interviewed. In each household, it is determined whether or not the primary wage earner is unemployed. If the 7% rate is correct, find the probability distribution for x, the number of primary wage earners who are unemployed.
Answer: Let U be the event that the primary wage earner is unemployed and let E be the event that the primary wage earner is employed. There are 16 simple events with unequal probabilities.
p(0) = P[E∩E∩E∩E] = (.93)4 = .7481
p(1) = 4P(E)3P(U) = 4 (.93)3 (.07) = .2252
p(2) = 6P(E)2P(U)2 = 6 (.93)2 (.07)2 = .0254
p(3) = 4P(E)P(U)3 = 4 (.93) (.07)3 = .0013
p(4) = P[U∩U∩U∩U] = (.07)4 = .000024
4. Video Lesson on Corporate Finance (Pricing of an Asset)
This video lesson discusses the fundamental idea of finance that the price of an asset such as stock, bond, or real estate investment depends on the present values of future cash flows, and how these present values can be computed.
More on Corporate Finance ePrep Course
Concepts in Finance
5. Cross-Word Puzzle on Biotechnology (Animal Cloning)
6. Worked Example on Engineering Economy (Project Evaluation)
Question: A small airline executive charter company needs to borrow $160,000 to purchase a prototype synthetic vision system for one of its business jets. The SVS is intended to improve the pilots’ situational awareness when visibility is impaired. The local (and only) banker makes this statement: “We can loan you $160,000 at a very favorable rate of 12% per year for a five-year loan. However, to secure this loan, you must agree to establish a checking account (with no interest) in which the minimum average balance is $32,000. In addition, your interest payments are due at the end of each year, and the principal will be repaid in a lump-sum amount at the end of year five.” What is the true effective annual interest rate being charged?
Answer. The cash-flow diagram from the banker’s viewpoint appears below. When solving for an unknown interest rate, it is good practice to draw a cashflow diagram prior to writing an equivalence relationship. Notice that P0 = $160,000 − $32,000 = $128,000. Because the bank is requiring the company to open an account worth $32,000, the bank is only $128,000 out of pocket. This same principal applies to F5 in that the company only needs to repay $128,000 since the $32,000 on deposit can be used to repay the original principal.
The interest rate (IRR) that establishes equivalence between positive and negative cash flows can now easily be computed:
P0 = F5(P/F, i ′ %, 5) + A(P/A, i ′ %, 5),
$128,000 = $128,000(P/F, i ′ %, 5) + $19,200(P/A, i ′ %, 5).
If we try i ′ = 15%, we discover that $128,000 = $128,000.
Therefore, the true effective interest rate is 15% per year.
7. Video Lesson on Life Science (Cloning)
This short video discusses the process called nucleus transfer of cloning the first mammal, the sheep Dolly.
More on Life Science Eprep Course
8. Python Programming (Calculating Hourly-Rated Wages)
Code:
def calcWeeklyWages(totalHours, hourlyWage):
if totalHours <= 40:
totalWages = hourlyWage*totalHours
else:
overtime = totalHours – 40
totalWages = hourlyWage*40 + (1.5*hourlyWage)*overtime
return totalWages
hours = float(input(‘Enter hours worked: ‘))
wage = float(input(‘Enter dollars paid per hour: ‘))
total = calcWeeklyWages(hours, wage)
print(‘Wages = $’,total)
Output:
Enter hours worked: 50
Enter dollars paid per hour: 12
Wages = $ 660.0
9. Economics (Costs in the Short Run and in the Long Run)
1. The division of total costs into fixed and variable costs will vary from firm to firm
- Some costs are fixed in the short run, but all are variable in the long run.
- For example, in the long run, a firm could choose the size of its factory.
- Once a factory is chosen, the firm must deal with the short-run costs associated with that plant size.
2. The long-run average-total-cost curve lies along the lowest points of the short-run average-total-cost curves because the firm has more flexibility in the long run to deal with changes in production.
3. The long-run average total cost curve is typically U-shaped, but is much flatter than a typical short-run average-total-cost curve.
4. The length of time for a firm to get to the long run will depend on the firm involved.
5. Definition of the short-run: the period of time in which some factors of production cannot be changed.
6. Definition of the long-run: the period of time in which all factors of production can be altered.
10. Discrete Mathematics (Recurrence Relation)
Question:
Solution:
Remarks
These are some samples of the bonus materials on other subjects and they illustrate how comprehensive and broad-base this Mathematics ePrep course is for preparing students for their university studies or for their careers. While not all the bonus course materials may be of interest to the students who take up this ePrep course on Mathematics, they can choose which of these bonus course materials are of use to them and disregard the rest.
Remember not to short-change yourself – do not go for any of those low-grade courses prepared by any “Tom-Dick-And-Harry” who self-claim to be an industry expert or mathematics expert, especially if you are preparing for academic university studies or career advancement! You do not need thousands of such half-baked courses. As you can see, with this single Mathematics prep course, you have a good collection of materials on many other subjects as well. You also get a hard copy textbook “Applied Mathematics for the Managerial, Life and Social Sciences” textbook.
Go only for a high-quality specially-designed academic course such as this Mathematics e_prep course for getting you a head start in university, or in your career.
Example Applications of Mathematics in Various Fields
Here are some examples of applications of mathematics:
Applications of Exponential and Logarithmic Functions to Solve Growth Problems
- level of absorption of drugs
- forensic science to determine time of death
- growth of population, tumor, bacteria
- spread of diseases
- effect of advertisement on sales
Applications of Linear Programming to Solve Optimization Problems
- Social program planning
- advertising
- investment-asset allocation
- agriculture-crop planning
- nutrition planning
Applications of Functions and Their Graphs
Deb Farace of Pepsico testified that she shared the mathematical model on how sale is impacted by weather with buyers and resulted in increase in sale because the buyers were able to better place buy orders according to demand due to different weather conditions.
Applications of Derivatives
Richard Mizak of Kroll Zollo Cooper testified that he used mathematical models to help distressed companies to improve their operations.
Who should take this ePrep course on Mathematics for Managerial, Life and Social Sciences?
Mathematics is a must-do course for almost all students as most students do not have a strong foundation in mathematics and weakness in mathematics is often the obstacle to proper learning and understanding the depth of a principle!
NTU PaCE offers two mathematics e_Prep courses. The Engineering Mathematics – Calculus is the course meant for those studying engineering, mathematics, or physical sciences and others who need a very strong mathematics foundation in their studies such as those doing data analytics, computer science, etc. While this Mathematics for Managerial, Life and Social Sciences e_Prep course is focused on those doing managerial, life, and social sciences, it is also very useful for those doing all other studies as well.
This e_prep course will not only provide the students with a very strong foundation in mathematics but also in many other subjects such as business finance, corporate finance, economics, engineering economy, life science, biotechnology, physics, mechanics, ethics, Python programming, discrete mathematics, and psychology.
Even for those not going to any university due to various reasons, this is an opportunity to enhance their mastery of the many mathematical concepts involved in the issues they face, and to prove that they are capable of completing a university-level course. A certificate may come in handy when applying for a job.
Everyone is welcome and there is no pre-requisite as there are introductory materials and also we will provide the necessary guidance for those who need it. | https://eprepcourses-sg.online/mathematics-eprep/ | CC-MAIN-2021-39 | refinedweb | 2,207 | 51.99 |
Py.
Background
First, this is a panorama of the scipy technology stack. NumPy is the foundation. It provides the data structure of multi-dimensional arrays and various computations that operate upon it. Further up, attentions should be paid to the following: scipy, which is mainly for various scientific computing operations; and pandas, with DataFrame as the core concept. It provides functions, such as processing and cleaning, for table-type data. On the next level, the classical library, scikit-learn, is one of the most well-known machine learning frameworks. At the top level is a variety of libraries for vertical fields, such as astropy, which is mainly oriented to the astronomical field, and biopython, which is oriented to the biological field.
From the scipy technology stack, you can see that numpy plays a key role, and a large number of upper-level libraries use the data structure and computation provided by numpy.
Real-world data is not as simple as the two-dimensional data, such as tables. Most of the time, we often have to deal with multi-dimensional data. For example, for common image processing, the data to be processed includes the number of pictures, the length and width of the pictures, and RGBA channels. This comprises the four-dimensional data. Such examples are too numerous to enumerate. With such multi-dimensional processing capabilities, we have the ability to deal with a variety of more complex or even scientific fields. Meanwhile, multi-dimensional data itself contains two-dimensional data, so we also have the ability to process table-type data.
In addition, if we need to explore the inner nature of the data, it is absolutely not sufficient to just perform statistics and other similar operations on the table data. We need to use deeper “mathematical” methods, such as matrix multiplication and Fourier transformations, to analyze the data at a deeper level. As numpy is a library for numerical computation, we think it, together with various upper-level libraries, is very suitable for meeting these needs.
Why Mars?
So, why do we need to work on the Mars project? Let’s take an example.
We try to use the Monte Carlo method to calculate pi. The method is actually very simple, which is to solve a particular problem using random numbers. As shown in the figure, we have a circle with a radius of 1 and a square with a side length of 2. We generate many random points, and then we can calculate the value of pi using the formula in the lower right corner, that is, 4 multiplied by the number of points falling in the circle divided by the total number of points. The more randomly generated points, the more accurate the calculated pi is.
This is very easily implemented using pure Python. We only need to traverse N times, generate X and Y points and calculate whether it falls within a circle. It takes more than 10 seconds to run 10 million points.
Cython is a common way to reduce the time required to execute the Python code. Cython defines a superset of the Python language, translates the language into c/c++, and then compiles it to speed up the execution. Here, we have added several types of variables, and you can see a 40% performance improvement over pure Python.
Cython has now become a standard configuration for Python projects, and the core third-party library of Python basically uses Cython to speed up Python code execution.
The data in this example is of one type, so we can use a specialized numerical computation library to speed up the execution of this task very quickly through vectorization. Numpy is the inevitable choice. With numpy, what we need is an array-oriented mindset that reduces loops. We first use numpy.random.uniform to generate a two-dimensional array of N 2, then data * 2 to square all the data in the array, and then sum(axis=1) to sum axis=1 (that is, in the row direction).
At this time, we obtain a vector of the length N, then we use numpy.sqrt to find the square of each value of the vector. If the result < 1, we obtain a boolean vector to determine whether each point falls into the circle or not. We can find out the total number of points by following a sum at the end. It may be difficult to get started with numpy at first, but after becoming more familiar with it, you should realize how convenient this method of writing can be. It is actually very intuitive.
As you can see, by using numpy, we have written simpler code, and the performance has been greatly improved, which is more than 10 times better than that of the pure Python.
Can the numpy code be optimized? The answer is yes. We use a library called numexpr to fuse several numpy operations into one operation to speed up the numpy execution.
As you can see, the performance of code optimized by numexpr is more than 25 times better than that of the pure Python code.
At this time, the code is already running quite fast. If we have a GPU, we can use hardware to accelerate the task execution.
A library called cupy is strongly recommended for this type of task, which provides an API that is consistent with numpy. By simply replacing the import, numpy code can be run on NVIDIA graphics card.
At this time, the performance has been greatly improved by more than 270 times. It is quite remarkable.
To improve the accuracy of the Monte Carlo method, we have increased the amount of computation by a factor of 1,000. What happens in this situation?
Yes, this is the memory overflow (OutOfMemory) we encounter from time to time. Worse still, in jupyter, OutOfMemory sometimes causes processes to be killed and even results in previous runs being lost.
The Monte Carlo method is relatively easy to handle. We simply divide the problem into 1,000 blocks, each solving 10 million entries of data, and then write a loop and make a sum. However, the whole computing time is over 12 minutes, which is too slow.
At this point, we can find that, during the entire operation, only one CPU is actually working, while other cores are not. So, how do we parallelize operations within numpy?
Some operations in numpy can be parallelized, such as using tensordot for matrix multiplication. Most other operations cannot use multiple cores. To parallelize operations within numpy, we can:
- Write tasks using multi-threads and multi-processes
- Use a distributed method
It is still very easy to rewrite the task of calculating pi by the Monte Carlo method into a implementation of multi-threads and multi-processes. We write a function to process 10 million data entries. Then, we submit this function 1,000 times through ThreadPoolExecutor and ProcessPoolExecutor of concurrent.futures respectively to be executed by multi-threads and multi-processes. We can see that the performance can be improved by 2 or 3 times.
However, calculating pi using the Monte Carlo method is very easily written in parallel, so we need to consider more complicated situations.
import numpy as npa = np.random.rand(100000, 100000)
(a.dot(a.T) - a).std()
We have created a matrix “a” of 100,000 * 100,000, and the input is about 75 GB. We multiply matrix “a” by the transposition of “a”, subtract “a” itself, and finally calculate the standard deviation. The input data of this task can’t easily be stuffed into memory, and the subsequent parallel writing operation is even more difficult.
This brings us to the question — what kind of framework do we need? Ideally, our framework should be able to satisfy the following requirements:
- It should provide familiar interfaces. Cupy, for example, can parallelize the code originally written in numpy by simply replacing the “import”.
- It should be scalable. Even if it is as small as a stand-alone machine, it can use multi-core parallelism. If it is as large as a large cluster, it can support the scale of thousands of machines to handle tasks together in a distributed manner.
- It should support the use of hardware, such as GPUs, to accelerate task execution.
- It should support various optimizations, such as fusion, and can use some libraries to accelerate the operation of fusion.
- Although we only perform in-memory computing, we do not want to run out of memory on a single machine or cluster, otherwise the task fails. We should dump temporarily unused data into storage, such as onto disks, to ensure that the entire computation can be completed even if insufficient memory is available.
What Is Mars and What Is It Capable of Doing?
Mars is a framework that intends to solve these type of problems. Currently, Mars includes tensors: the distributed multi-dimensional matrix computing.
The task scale for calculating pi using the Monte Carlo method with the size of 10 billion is 150 GB, which causes OOM. By using the Mars tensor API, you only need to replace
import numpy as np with
import mars.tensor as mt, and the subsequent computations are exactly the same. However, there is one difference. A Mars tensor needs to be triggered by
execute, which has the advantage of optimizing the entire intermediate process as much as possible, such as fusion. This method is not very helpful for debugging, but we will provide the eager mode in the future which can trigger the computation for each step, which is exactly the same as with numpy code.
As shown above, this computation time is equivalent to that of parallel writing, and the peak memory usage is just a little bit more than 1 GB. This shows that we can achieve full parallelism and save the memory usage through Mars tensors.
Currently, Mars has implemented 70% of the common numpy interfaces. For a complete list, see here. We have been working hard to provide more numpy and scipy interfaces. We have recently completed our support for Inverse Matrix computing.
Mars tensors also support GPUs and sparse matrices. eye is used to create a unit diagonal matrix, which only has a value of 1 on the diagonal and wastes storage if stored densely. Currently, Mars tensors only support two-dimensional sparse matrices.
How Does Mars Achieve Parallelization and Save on Memory Usage?
Like all the dataflow frameworks, Mars also has the concept of Computational Graphs. The difference is that Mars contains the concept of coarse-grained and fine-grained graphs. The client-written code generates a coarse-grained graph on the client. After it is submitted to the server, there is a tiling process and the coarse-grained graph is tiled into a fine-grained graph. Then we schedule the fine-grained graph to execute.
Here, the client-written code is expressed in memory as a coarse-grained graph composed of tensors and operands.
When the client calls the
execute method, the coarse-grained graph is serialized to the server. After deserialization, we tile the graph into a fine-grained graph. For a matrix of 10002000, assuming that the chunk size on each dimension is 500, then it is tiled into 24 for a total of 8 chunks.
Later, we perform tiling operations for each implemented operands, namely operators, tiling a coarse-grained graph into a fine-grained graph. At this time, we can see that if there are 8 cores in a stand-alone machine, then we can execute the entire fine-grained graph in parallel. In addition, given 12.5% of the memory size, we can complete the computation of the entire graph.
However, before we actually start the execution, we fuse the entire graph, which means that the fusion is optimized. When the three operations are actually executed, they are fused into one operator. For different execution targets, we use the fusing support of numexpr and cupy to fuse and execute CPU and GPU operations respectively.
The examples above are all tasks that can be easily executed in parallel. As we mentioned earlier, the fine-grained graph generated after tiling is actually very complex. In real-world computing scenarios, such tasks are actually a lot.
To fully schedule the execution of these complex fine-grained graphs, we must meet some basic principles to make the execution efficient enough.
First, the allocation of initial nodes is very important. For example, in the figure above, suppose we have two workers. If we allocate 1 and 3 to one worker, and 2 and 4 to another worker, then when 5 or 6 are scheduled, they need to trigger a remote data pull, resulting in greatly reduced execution efficiency. If we initially allocate 1 and 2 to one worker, and 3 and 4 to another worker, the execution is very efficient. The allocation of initial nodes has significant impact on the overall execution. Therefore, we need to have a global grasp of the whole fine-grained graph before we can achieve a better initial node allocation.
In addition, the policy of depth-first execution is also very important. Suppose we only have one worker at this time, and after executing 1 and 2, if we schedule 3, the memory allocated to 1 and 2 can not be released, because 5 has not been triggered yet. However, if we schedule 5 after executing 1 and 2, then the memory allocated to 1 and 2 can be released after the execution of 5, which saves the most memory during the entire execution.
Therefore, initial node allocation and depth-first execution are the two most basic principles. However, these two points alone are far from enough, because there are many challenging tasks in the overall execution scheduling for Mars. Also, this is an object we need to optimize for the long term.
Mars Distributed Framework
As mentioned above, Mars is essentially a scheduling system for fine-grained heterogeneous graphs. We schedule fine-grained operators to various machines. In actual execution, we call libraries, such as numpy, cupy, and numexpr. We have made full use of a mature and highly optimized stand-alone library, instead of reinventing the wheel in these fields.
During this process, we may encounter some difficulties:
- We use the master-slave architecture, so how can we avoid the master becoming a single point?
- How does workers avoid GIL (Global Interpreter Lock) restrictions of Python?
- The control logic of the master is very complicated. We can easily write highly coupled and long code. But how can we decouple the code?
Our solution is to use the actor model. The actor model defines a parallel mode. That is, everything is an actor. Each actor maintains an internal state, and they all hold a mailbox. Actors communicate with each other through messages. Messages received are placed in the mailbox. Actors retrieve messages from the mailboxes for processing, and one actor can only process one message at a time. An actor is the smallest parallel unit. An actor can only process one message at a time, so you do not need to worry about concurrency at all. Concurrency should be handled by the actor framework. Also, whether all actors are on the same machine or not becomes irrelevant in the actor model. The actor model naturally supports distributed systems provided that actors can complete message transmissions on different machines.
An actor is the smallest parallel unit. Therefore, when writing code, we can decompose the entire system into many actors, and each actor takes a single responsibility, which is somewhat similar to the idea of object-oriented programming, so that our code can be decoupled.
In addition, after the master is decoupled into actors, we can distribute these actors on different machines, so that the master is no longer a single point. At the same time, we have these actors allocated based on the consistent hash. If any scheduler crashes in the future, the actors can be reallocated and recreated based on the consistent hash to achieve the purpose of fault tolerance.
Finally, actors are run on multiple processes, and each process has many coroutines. This way, workers are not restricted by GIL.
JVM languages, such as Scala and Java, can use the actor framework, akka. For Python, there is no standard practice. We think that a lightweight actor framework should be able to meet our needs, and we don’t need some of the advanced functions provided by akka. Therefore, we have developed Mars actors, a lightweight actor framework. The entire distributed schedulers and workers of Mars are on the Mars actors layer.
This is the architecture diagram of our Mars actors. When we start the actor pool, sub-processes start several sub-processes on the basis of concurrency. The main process has a socket handler to accept messages delivered by remote socket connections, and a dispatcher object to distribute messages according to their destinations. All of the actors are created on sub-processes. When an actor receives a message for processing, we call the
Actor.on_receive(message) method through a coroutine.
For one actor to send a message to another actor, there are three situations to consider.
- If they are in the same process, then they can be called directly through a coroutine.
- If they are in different processes on a machine, then the message is serialized and sent to the dispatcher of the main process through the pipeline. The dispatcher obtains the process ID of the target by unlocking the binary header information and sends it to the corresponding sub-process through the corresponding pipeline. The sub-process just triggers the message of the corresponding actor through the coroutine for processing.
- If they are on different machines, the current sub-process sends serialized messages to the master process of the corresponding machine through the socket, and then the machine sends the messages to the corresponding sub-process through the dispatcher.
Coroutines are used as a parallel method in sub-processes, and the coroutines themselves have strong performance in IO processing, so the actor framework also has good IO performance.
The figure above shows how to calculate pi using the Monte Carlo method using only Mars actors. Two actors are defined here. One actor is the ChunkInside, which accepts the size of a chunk to calculate the number of points falling into the circle. The other actor is the PiCaculator, which accepts the total number of points to create a ChunkInside. In this example, 1,000 chunkinsides are directly created and then triggered for calculation by sending messages. Specifying address at
create_actor allows the actors to be allocated to different machines.
As shown in the figure, the performance of just using Mars actors is faster than that of multi-processes.
To sum up, by using Mars actors, we can easily write distributed code without GIL restrictions, which improves IO efficiency. In addition, code becomes easier to maintain due to the decoupling of actors.
Now, let’s take a look at the complete distributed execution process of Mars. As shown above, we have 1 client, 3 schedulers and 5 workers. The client creates a session and the session creates a SessionActor object on the server. The object is allocated to scheduler1 through the consistent hash. At this time, the client runs a tensor. First, SessionActor creates a GraphActor, which tiles a coarse-grained graph. If there are three nodes on the graph, three OperandActors are created and allocated to different schedulers respectively. Each OperandActor controls the submission of the operand, the supervision of the task status, and the release of memory. At this time, it is found that the OperandActors of 1 and 2 do not find dependencies, and there are sufficient cluster resources, then they submit the tasks to the corresponding worker for execution. After the execution is completed, they notify 3 of the task completion. The data is executed by different workers, so the data pulling operation is triggered first and then executed after the worker to execute the tasks is determined. If the client knows that the task is completed by polling GraphActor, the operation of pulling data to local is triggered. The entire task is completed.
We have made two benchmarks for Mars distribution. The first one is to add 1 to each element of 3.6 billion entries of data, and multiply it by 2. In the figure, the red cross is the execution time of numpy. As can be seen, the performance is several times better than that of numpy. The blue dotted line is the theoretical running time, and we can see the actual acceleration is very close to the theoretical time acceleration. In the second benchmark, we have increased the amount of data to 14.4 billion data entries. After adding 1 to each element and multiplying it by 2, we can see that the stand-alone numpy can not complete the task. At this time, we can also achieve a good acceleration ratio for this task.
What to Expect Next?
The source code for Mars has already been posted on Github, allowing more prople to participate in Mars’ development:.
As mentioned above, in the subsequent Mars development plan, we will support eager mode, allowing each step to trigger execution and improving the performance-insensitive task development and debug experience; We will support more numpy and scipy interfaces; and it is important that we provide a 100% pandas compatible interface. Based on Mars tensors, GPUs are also supported; We provide scikit-learn compatible machine learning support; We also provide the ability to schedule custom functions and classes on fine-grained graphs to enhance flexibility; Finally, our clients do not actually rely on Python and can serialize coarse-grained graphs in any language, so we can provide a multi-language client version, depending on the needs.
In short, open source is very important to us. We can’t implement the parallelization of the huge scipy technology stack alone. We need everyone interested to help us build it together.
Q&A
Finally, I would like to share some common questions and answers presented at the PyCon conferences. The general summary is as follows:
- Mars performs some specific computations, such as SVD (Singular Value Decomposition). Here is some test data from cooperation projects with clients. The input data is a matrix of 800,000,000 * 32 for SVD. After SVD is finished, the matrices are multiplied to be compared with the original matrix. The whole computation process uses 100 workers (8 cores) and takes 7 minutes to complete.
- When will Mars be open-sourced? A: It is already open-sourced:
- Will Mars be reverted to close-sourced at a later date? A: No
- Is Mars a static graph or a dynamic graph? A: Currently, it is a static graph. After the eager mode is done, it will be able to support the dynamic graph.
- Will Mars involve deep learning? A: Not presently
Reference: | https://alibaba-cloud.medium.com/pycon-china-2018-in-depth-analysis-of-mars-35e6f6f9c51?source=post_page-----35e6f6f9c51-------------------------------- | CC-MAIN-2021-17 | refinedweb | 3,829 | 55.44 |
This document forms part of the documentation set for the accompanying software and describes the usable interfaces exposed by snmp.dll and mib.dll. Snmp.dll is a C# class library for the .NET framework. It has been developed on the Windows platform and may be useful on others also. In contains two namespaces, X690 and Snmp. The X690 namespace contains an implementation of the Basic Encoding Rules (BER) of Abstract Syntax Notation 1 (ASN.1) as specified by international standards (ISOs code for this standard in X.690) and used within Snmp. Snmp is specified in an Internet Standard, and uses a logical Management Information Base (MIB). Implementation of at least the mib-2 (RFC1213) is mandatory for all computer systems connected to the Internet. Mib.dll is a C# class library that handles the translation of MIB object identifiers (OID) sch as 1.3.6.1.2.1.1.4.0 to readable names such as system.sysContact.0. It also collects the help strings from the system mib files (on windows systems these are in the system folder, usually c:\windows\system32.) It contains one namespace, RFC1157. Getting Started with Snmp.dll and Mib.dll The simplest possible call on the Snmp protocol is to GET a single MIB entry such as system.sysContact.0 from an agent (host) such as localhost with community public. The traditional snmputil.exe would do this using the command snmputil get ict-main-s.msroot.student.paisley.ac.uk public system.sysContact.0 To do this programmatically, proceed as follows: RFC1157.Mgmt mib = new RFC1157.Mgmt(); ManagerSession sess=new ManagerSession(localhost,public); ManagerItem mi=new ManagerItem(sess,mib.OID(mgmt.mib-2.system.sysContact.0)); Console.WriteLine(mi.Value.ToString()); This code is explained as follows. 1. You will need a an instance of Mgmt() to encode and decode IODs:new RFC1157.Mgmt()Store the returned value in a RFC1157.Mgmt variable, mib, say. This constructor takes some time at present, as it reads all the mib definitions it finds in the system folder. 2. Create a ManagerSession to the chosen agent and community by new ManagerSession(localhost,public) Store the returned value in a ManagerSession variable, sess, say. 3. Translate the given OID into a uint[]:uint[] oid = mib.OID(mgmt.mib-2.system.sysContact.0);The prefix mgmt.mib-2 is added to provide a uniform starting point for the numerous mibs you will find on your system. A typical Windows-2000 system has 20 mibs defined. All of the mibs begin iso.dod.internet. followed by such things as mgmt.mib-2. 4. Create a ManagerItem to obtain the result: new ManagerItem(sess,oid) Store the returned value in a ManagerItem variable, mi, say. 5. Now mi.Oid contains the actual OID returned (which can be converted to a string using mib.OID() again), and mi.Value the actual value object. You can call ToString on this value to obtain a readable version. You can also retrieve a SubTree by using the GET/.GETNEXT combination. The RFC1157 namespace Def Def corresponds to a single MIB node definition. Properties string name The readable name of this node uint[] path Read-only. The OID of this node Methods string[] Kids The children of this node Def this[] Two indexers of a Def are provided: by the OID component in binary or string form.
Mgmt Constructor new Mgmt() Create a new Mgmt instance, inirialised using all the mib definitions in the system folder. Properties Def def The internet node in the MIB (1.3.6.) Methods string OID(uint[]) Lookup a given OID and return the cooresponding readable version. uint[] OID(string) Lookup a given OID and return the corresponding binary version The Snmp namespace ManagerItem Constructor new ManagerItem(ManagerSession,uint[]) Create a new ManagerItem from the given ManagerSession and OID new ManagerItem(ManagerSession, X690.Universal) Used by ManagerSubTree to construct a list of items based on information returned by the agent.
Properties uint[] Oid Read-only. The Oid found in the MIB. X690.Universal Value Read-only. The Value returned in ASN-1 format.ManagerSession Constructor new ManagerSession(string agt,string com) Create a new ManagerSession for the given agent and community. The agent string is passed to DNS to resolve. Properties string agentAddress The string identifying the agent string agentCommunity The string identifying the community Methods (Advanced methods in italics) void Close() Close the connection with the agent X690.Universal[] Get ( params X690.Universal[]) Perform a GET according to the Snmp protocol, for the given variable bindings. bool GetNext(ref X690.Universal) Perform a GETNEXT for the given variable binding reference (which is replaced by the returned binding). The returned value indicates success or failure. X690.Universal PDU(SnmpType, params X690.Universal[]) Construct an SNMP PDU for the given list of variable bindings X690.Universal VarBind(uint[]) Construct a variable binding for a given oid.ManagerSubTree Constructor new ManagerSubTree(ManagerSession,uint[]) Create a new ManagerSubTree from the given ManagerSession and OID Properties int Length Read-only. The number of items found in the MIB. ManagerItem this[ix] Read-only. The list of ManagerItems recovered. If the ManagerSubTree is called ms, then ms[0],..,ms[ms.Length-1] are valid. Method void Refresh() Fetch the SubTree anew from the agent. Note that this may change the length and contents of the list of items completely.The remaining classes in this namespace are advanced. SnmpBER Implements the encoding rules for the additional ASN.1 types defined for SNMP. Subclasses X690.Universal. SnmpTag Implements the list of tags for the additional ASN.1 types defined for SNMP. Subclasses X690.BERTag. SnmpType Enumerates the ASN.1 tags defined for SNMP. The X690 namespace This is not a full implementation of the X690 standard, but appears to be sufficient for use here. BER The BER class defined protected methods ReadByte and WriteByte. It has one public bool member Encode, which specifies whether encoding or decoding is in progress. It is used as the base class for other classes in this namespace. BERTag A subclass of BER which specifies the rules for encoding the tag (type component) in an encoding. BERType Enumeration: the ASN.1 type classes Universal, Application, Context, and Private. BitSet A utility class for handling bits. The .NET Framework contains a BitArray which aims to do something similar. Integer Implements the universal integer class (arbitrary precision), and provides automatic conversion to various standard types in the .NET framework such as short, ulong etc. Real Implements the universal real class (arbitrary precision), and provides automatic conversion to various standard types in the .NET framework such as float, double. Universal A subclass of BER which implements the encoding of the standard Universal types, including Integer and Real. UniversalType Enumerates all the Universal types defined in X690.
©2015
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/uploadfile/malcolmcrowe/snmplib11232005011613am/snmplib.aspx | CC-MAIN-2015-27 | refinedweb | 1,142 | 52.36 |
In the beginning God created the heaven and the earth. And human beings. And human beings created digital kitchen timers, like this one.
And human beings said, Cool but a little dismal. Let's make a better one!
And God said, I heard your call, let there be Arduinos: and there was Arduinos all over the earth, and that was good.
And human beings took an Arduino and created a better kitchen timer, like this one.
And God saw the new kitchen timer and said: I saw your new kitchen timer and it looks awful, but it seems too much fun! And that is good. :-)What you need to have
Now that you know where all this came from, let's go deep into it.
All components I used came from the Arduino Starter Kit, including the small breadboard you see in the pics and in the video. Feel free to accomodate the project into a larger one, if you wish.
You'll need a power source too: while playing around, the PC' USB port and cable will be enough.What you need to do
First: please gather all needed components from the Starter Kit or your preferred component bin; if you don't have one, don't panic. There's plenty of on the Internet. You can find the component list below.
And, well, you'll need the code too. It's in its box, below again.How it works
Basically, like any other similar device you can buy for a buck at any store near you. But this is yours. And this will show you how those little gadgets actually work.
The keyword here is: current mode. The timer itself can run in only one out of four modes at a time:
- IDLE - the timer is awaiting for your input, showing the currently set time amount; this is also the initial mode after power up or reset.
- SETUP - you can enter this mode by long-pressing S4 (in the code this is called also "reset button"); here, by using S3 ("start stop button"), you can choose which value to change in order to set the elapsed time to be counted down later; finally, using S2 ("down button") and S1 ("up button") respectively, you can decrease or increase the choosen value (hours, minutes or seconds).
- RUNNING - You can enter this mode by pressing S3, while leaving it will require both S3 or S4 (which will lead you to IDLE mode).
- RINGING - When the desider amount of time is elapsed, this mode is automatically activated; you can leave it (i.e., make the little boy stop ringing) by pressing any switch.
First, we need to include the proper libraries:
#include <LiquidCrystal.h>
#include <TimeLib.h>
If you don't have them already, you'll need to download and install them:
- Paul Stoffregen's Time Library (please download the ZIP from the green "Clone or download" button)
- Arduino LiquidCrystal Library
Next, let's initialize that nice LCD module:
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
Please feel free to scramble the pins at your will in order to obtain a nice wiring layout: don't follow me in this, as I did a terrible wiring plan! :D For instance, you can reverse the latter four pins in the above statement in order to avoid the yellow wires crossing you can see in the schematic below (obviously, you'll have to adjust the button pin constants accordingly, see below). Play, have fun! The life with Arduinos starts right after that copy/paste!
The next 51 code lines contain the static variables declaration and initialization. Please feel free to browse them, their crystal-clear names and some scattered comments will guide you understanding the whole thing.
The setup() function carries out the usual preliminary steps you've seen gazillions of times in any Arduino sketch out there and so far. The only notable statement is the first, which will set the initial LCD display cursor's position. Because, yes: this module requires you to setup a position along its rows and cols and then to "print" something, which will appear starting from that position.
Now let's move to the loop() function.
First of all, let's discover the switch statuses. In order to achieve this, the following code block is used for nearly each of them:
/*
* Start/Stop button management
*/
startStopButtonPressed = false;
startStopButtonState = digitalRead(startStopButtonPin);
if(startStopButtonState != startStopButtonPrevState)
{
startStopButtonPressed = startStopButtonState == HIGH;
startStopButtonPrevState = startStopButtonState;
}
A digitalRead is issued against the related pin and the result is compared to a previously readed value: if something has changed, the new value is stored for future reference and the bool "xxxButtonPressed" static variable is set to true if the button is pressed.
Looking at the circuit diagram below, you'll notice that each input pin is forced to LOW by a 10k resistor unless the corresponding switch is pressed and the pin itself is directly connected to +5V. A fairly classic scenario, uh?
Previously, I said "nearly each of them" because there's one button that acts in a different way than the others: S4. Its code block is capable of detecting the aforementioned long press in order to enter SETUP mode.
Next comes the mode management block switch: each case looks at the button state triggers ("xxxButtonPressed") and redirects the flow toward the proper new state, or performs the proper action.
case MODE_IDLE:
if(resetButtonPressed)
{
Reset();
}
if(resetButtonLongPressed)
{
currentMode = MODE_SETUP;
}
if(startStopButtonPressed)
{
currentMode = currentMode == MODE_IDLE ? MODE_RUNNING : MODE_IDLE;
if(currentMode == MODE_RUNNING)
{
// STARTING TIMER!
startTime = now();
}
}
break;
The previous code snippet shows how the IDLE mode is managed, and it's pretty self-explanatory. Another example shows how any button press while ringing will stop it:
case MODE_RINGING:
if(resetButtonPressed || startStopButtonPressed || downButtonPressed || upButtonPressed)
{
currentMode = MODE_IDLE;
}
break;
Looks easy, isn't it? :-) It is.
The next block - "Time management" - performs the actual time difference calculation, triggers the RINGING mode and actually rings the buzz when it's time to do so.
The last block - "LCD management" - manages the LCD display for each mode by printing the proper strings at their proper locations.
That's it.Wrap up and action!
Now that this little puppy has no more secrets to you, let's see it in action. Thanks for watching, and have fun! | https://www.hackster.io/i-and-myself/arduino-kitchen-timer-db8ba6 | CC-MAIN-2021-39 | refinedweb | 1,045 | 61.77 |
When we think of passing an open descriptor from one process to another, we normally think of either
A child sharing all the open descriptors with the parent after a call to fork
All descriptors normally remaining open when exec is called
In the first example, the process opens a descriptor, calls fork, and then the parent closes the descriptor, letting the child handle the descriptor. This passes an open descriptor from the parent to the child. But, we would also like the ability for the child to open a descriptor and pass it back to the parent.
Current Unix systems provide a way to pass any open descriptor from one process to any other process. That is, there is no need for the processes to be related, such as a parent and its child. The technique requires us to first establish a Unix domain socket between the two processes and then use sendmsg to send a special message across the Unix domain socket. This message is handled specially by the kernel, passing the open descriptor from the sender to the receiver. steps involved in passing a descriptor between two processes are then as follows:
Create a Unix domain socket, either a stream socket or a datagram socket.
If the goal is to fork a child and have the child open the descriptor and pass the descriptor back to the parent, the parent can call socketpair to create a stream pipe that can be used to exchange the descriptor.
If the processes are unrelated, the server must create a Unix domain stream socket and bind a pathname to it, allowing the client to connect to that socket. The client can then send a request to the server to open some descriptor and the server can pass back the descriptor across the Unix domain socket. Alternately, a Unix domain datagram socket can also be used between the client and server, but there is little advantage in doing this, and the possibility exists for a datagram to be discarded. We will use a stream socket between the client and server in an example presented later in this section.
One process opens a descriptor by calling any of the Unix functions that returns a descriptor: open, pipe, mkfifo, socket, or accept, for example. Any type of descriptor can be passed from one process to another, which is why we call the technique "descriptor passing" and not "file descriptor passing."
The sending process builds a msghdr structure (Section 14.5) containing the descriptor to be passed. POSIX specifies that the descriptor be sent as ancillary data (the msg_control member of the msghdr structure, Section 14.6), but older implementations use the msg_accrights member. The sending process calls sendmsg to send the descriptor across the Unix domain socket from Step 1. At this point, we say that the descriptor is "in flight." Even if the sending process closes the descriptor after calling sendmsg, but before the receiving process calls recvmsg (in the next step), the descriptor remains open for the receiving process. Sending a descriptor increments the descriptor's reference count by one.
The receiving process calls recvmsg to receive the descriptor on the Unix domain socket from Step 1. It is normal for the descriptor number in the receiving process to differ from the descriptor number in the sending process. Passing a descriptor is not passing a descriptor number, but involves creating a new descriptor in the receiving process that refers to the same file table entry within the kernel as the descriptor that was sent by the sending process.
The client and server must have some application protocol so that the receiver of the descriptor knows when to expect it. If the receiver calls recvmsg without allocating room to receive the descriptor, and a descriptor was passed and is ready to be read, the descriptor that was being passed is closed (p. 518 of TCPv2). Also, the MSG_PEEK flag should be avoided with recvmsg if a descriptor is expected, as the result is unpredictable.
We now provide an example of descriptor passing. We will write a program named mycat that takes a pathname as a command-line argument, opens the file, and copies it to standard output. But instead of calling the normal Unix open function, we call our own function named my_open. This function creates a stream pipe and calls fork and exec to initiate another program that opens the desired file. This program must then pass the open descriptor back to the parent across the stream pipe.
Figure shows the first step: our mycat program after creating a stream pipe by calling socketpair. We designate the two descriptors returned by socketpair as [0] and [1].
The process then calls fork and the child calls exec to execute the openfile program. The parent closes the [1] descriptor and the child closes the [0] descriptor. (There is no difference in either end of the stream pipe; the child could close [1] and the parent could close [0].) This gives us the arrangement shown in Figure.
The parent must pass three pieces of information to the openfile program: (i) the pathname of the file to open, (ii) the open mode (read-only, read–write, or write-only), and (iii) the descriptor number corresponding to its end of the stream pipe (what we show as [1]). We choose to pass these three items as command-line arguments in the call to exec. An alternative method is to send these three items as data across the stream pipe. The openfile program sends back the open descriptor across the stream pipe and terminates. The exit status of the program tells the parent whether the file could be opened, and if not, what type of error occurred.
The advantage in executing another program to open the file is that the program could be a "set-user-ID" binary, which executes with root privileges, allowing it to open files that we normally do not have permission to open. This program could extend the concept of normal Unix permissions (user, group, and other) to any form of access checking it desires.
We begin with the mycat program, shown in Figure.
unixdomain/mycat.c
1 #include "unp.h"
2 int my_open(const char *, int);
3 int
4 main(int argc, char **argv)
5 {
6 int fd, n;
7 char buff[BUFFSIZE];
8 if (argc != 2)
9 err_quit("usage: mycat <pathname>");
10 if ( (fd = my_open(argv[1], O_RDONLY)) < 0)
11 err_sys("cannot open %s", argv[1]);
12 while ( (n = Read(fd, buff, BUFFSIZE)) > 0)
13 Write(STDOUT_FILENO, buff, n);
14 exit(0);
15 }
If we replace the call to my_open with a call to open, this simple program just copies a file to standard output.
The function my_open, shown in Figure, is intended to look like the normal Unix open function to its caller. It takes two arguments, a pathname and an open mode (such as O_RDONLY to mean read-only), opens the file, and returns a descriptor.
8 socketpair creates a stream pipe. Two descriptors are returned: sockfd[0] and sockfd[1]. This is the state we show in Figure.
9–16 fork is called, and the child then closes one end of the stream pipe. The descriptor number of the other end of the stream pipe is formatted into the argsockfd array and the open mode is formatted into the argmode array. We call snprintf because the arguments to exec must be character strings. The openfile program is executed. The execl function should not return unless it encounters an error. On success, the main function of the openfile program starts executing.
17–22 The parent closes the other end of the stream pipe and calls waitpid to wait for the child to terminate. The termination status of the child is returned in the variable status, and we first verify that the program terminated normally (i.e., it was not terminated by a signal). The WEXITSTATUS macro then converts the termination status into the exit status, whose value will be between 0 and 255. We will see shortly that if the openfile program encounters an error opening the requested file, it terminates with the corresponding errno value as its exit status.
unixdomain/myopen.c
1 #include "unp.h"
2 int
3 my_open(const char *pathname, int mode)
4 {
5 int fd, sockfd[2], status;
6 pid_t childpid;
7 char c, argsockfd[10], argmode[10];
8 Socketpair(AF_LOCAL, SOCK_STREAM, 0, sockfd);
9 if ( (childpid = Fork()) == 0) { /* child process */
10 Close(sockfd[0]);
11 snprintf(argsockfd, sizeof(argsockfd), "%d", sockfd[1]);
12 snprintf(argmode, sizeof(argmode), "%d", mode);
13 execl("./openfile", "openfile", argsockfd, pathname, argmode,
14 (char *) NULL);
15 err_sys("execl error");
16 }
17 /* parent process - wait for the child to terminate */
18 Close(sockfd[1]); /* close the end we don't use */
19 Waitpid(childpid, &status, 0);
20 if (WIFEXITED(status) == 0)
21 err_quit("child did not terminate");
22 if ( (status = WEXITSTATUS(status)) == 0)
23 Read_fd(sockfd[0], &c, 1, &fd);
24 else {
25 errno = status; /* set errno value from child's status */
26 fd = -1;
27 }
28 Close(sockfd[0]);
29 return (fd);
30 }
23 Our function read_fd, shown next, receives the descriptor on the stream pipe. In addition to the descriptor, we read one byte of data, but do nothing with it.
When sending and receiving a descriptor across a stream pipe, we always send at least one byte of data, even if the receiver does nothing with the data. Otherwise, the receiver cannot tell whether a return value of 0 from read_fd means "no data (but possibly a descriptor)" or "end-of-file."
Figure shows the read_fd function, which calls recvmsg to receive data and a descriptor on a Unix domain socket. The first three arguments to this function are the same as for the read function, with a fourth argument being a pointer to an integer that will contain the received descriptor on return.
9–26 This function must deal with two versions of recvmsg: those with the msg_control member and those with the msg_accrights member. Our config.h header (Figure D.2) defines the constant HAVE_MSGHDR_MSG_CONTROL if the msg_control version is supported.
10–13 The msg_control buffer must be suitably aligned for a cmsghdr structure. Simply allocating a char array is inadequate. Here we declare a union of a cmsghdr structure with the character array, which guarantees that the array is suitably aligned. Another technique is to call malloc, but that would require freeing the memory before the function returns.
27–45 recvmsg is called. If ancillary data is returned, the format is as shown in Figure. We verify that the length, level, and type are correct, then fetch the newly created descriptor and return it through the caller's recvfd pointer. CMSG_DATA returns the pointer to the cmsg_data member of the ancillary data object as an unsigned char pointer. We cast this to an int pointer and fetch the integer descriptor that is pointed to.
lib/read_fd.c
1 #include "unp.h"
2 ssize_t
3 read_fd(int fd, void *ptr, size_t nbytes, int *recvfd)
4 {
5 struct msghdr msg;
6 struct iovec iov[1];
7 ssize_t n;
8 #ifdef HAVE_MSGHDR_MSG_CONTROL
9 union {
10 struct cmsghdr cm;
11 char control[CMSG_SPACE(sizeof (int))];
12 } control_un;
13 struct cmsghdr *cmptr;
14 msg.msg_control = control_un.control;
15 msg.msg_controllen = sizeof(control_un.control);
16 #else
17 int newfd;
18 msg.msg_accrights = (caddr_t) & newfd;
19 msg.msg_accrightslen = sizeof(int);
20 #endif
21 msg.msg_name = NULL;
22 msg.msg_namelen = 0;
23 iov[0].iov_base = ptr;
24 iov[0].iov_len = nbytes;
25 msg.msg_iov = iov;
26 msg.msg_iovlen = 1;
27 if ( (n = recvmsg(fd, &msg, 0)) <= 0)
28 return (n);
29 #ifdef HAVE_MSGHDR_MSG_CONTROL
30 if ( (cmptr = CMSG_FIRSTHDR(&msg)) != NULL &&
31 cmptr->cmsg_len == CMSG_LEN(sizeof(int))) {
32 if (cmptr->cmsg_level != SOL_SOCKET)
33 err_quit("control level != SOL_SOCKET");
34 if (cmptr->cmsg_type != SCM_RIGHTS)
35 err_quit("control type != SCM_RIGHTS");
36 *recvfd = *((int *) CMSG_DATA(cmptr));
37 } else
38 *recvfd = -1; /* descriptor was not passed */
39 #else
40 if (msg.msg_accrightslen == sizeof(int))
41 *recvfd = newfd;
42 else
43 *recvfd = -1; /* descriptor was not passed */
44 #endif
45 return (n);
46 }
If the older msg_accrights member is supported, the length should be the size of an integer and the newly created descriptor is returned through the caller's recvfd pointer.
Figure shows the openfile program. It takes the three command-line arguments that must be passed and calls the normal open function.
unixdomain/openfile.c
1 #include "unp.h"
2 int
3 main(int argc, char **argv)
4 {
5 int fd;
6 if (argc != 4)
7 err_quit("openfile <sockfd#> <filename> <mode>");
8 if ( (fd = open(argv[2], atoi(argv[3]))) < 0)
9 exit((errno > 0) ? errno : 255);
10 if (write_fd(atoi(argv[1]), "", 1, fd) < 0)
11 exit((errno > 0) ? errno : 255);
12 exit(0);
13 }
7–12 Since two of the three command-line arguments were formatted into character strings by my_open, two are converted back into integers using atoi.
9–10 The file is opened by calling open. If an error is encountered, the errno value corresponding to the open error is returned as the exit status of the process.
11–12 The descriptor is passed back by write_fd, which we show next. This process then terminates. But, recall that earlier in the chapter, we said that it was acceptable for the sending process to close the descriptor that was passed (which happens when we call exit), because the kernel knows that the descriptor is in flight, and keeps it open for the receiving process.
The exit status must be between 0 and 255. The highest errno value is around 150. An alternate technique that doesn't require the errno values to be less than 256 would be to pass back an error indication as normal data in the call to sendmsg.
Figure shows the final function, write_fd, which calls sendmsg to send a descriptor (and optional data, which we do not use) across a Unix domain socket.
lib/write_fd.c
1 #include "unp.h"
2 ssize_t
3 write_fd(int fd, void *ptr, size_t nbytes, int sendfd)
4 {
5 struct msghdr msg;
6 struct iovec iov[1];
7 #ifdef HAVE_MSGHDR_MSG_CONTROL
8 union {
9 struct cmsghdr cm;
10 char control[CMSG_SPACE(sizeof(int))];
11 } control_un;
12 struct cmsghdr *cmptr;
13 msg.msg_control = control_un.control;
14 msg.msg_controllen = sizeof(control_un.control);
15 cmptr = CMSG_FIRSTHDR(&msg);
16 cmptr->cmsg_len = CMSG_LEN(sizeof(int));
17 cmptr->cmsg_level = SOL_SOCKET;
18 cmptr->cmsg_type = SCM_RIGHTS;
19 *((int *) CMSG_DATA(cmptr)) = sendfd;
20 #else
21 msg.msg_accrights = (caddr_t) & sendfd;
22 msg.msg_accrightslen = sizeof(int);
23 #endif
24 msg.msg_name = NULL;
25 msg.msg_namelen = 0;
26 iov[0].iov_base = ptr;
27 iov[0].iov_len = nbytes;
28 msg.msg_iov = iov;
29 msg.msg_iovlen = 1;
30 return (sendmsg(fd, &msg, 0));
31 }
As with read_fd, this function must deal with either ancillary data or older access rights. In either case, the msghdr structure is initialized and then sendmsg is called.
We will show an example of descriptor passing in Section 28.7 that involves unrelated processes. Additionally, we will show an example in Section 30.9 that involves related processes. We will use the read_fd and write_fd functions we just | http://codeidol.com/unix/unix-network-programming/Unix-Domain-Protocols/Passing-Descriptors/ | CC-MAIN-2014-42 | refinedweb | 2,530 | 62.78 |
Abstraction
In software Engineering, abstraction means the act of hiding irrelevant classes from users to reduce complexity. This means that users can create some functions without worrying about how they work. They know what the function does but they don't need to know the logic behind it. Let's look at our smartphones for instance we can use the camera to take pictures, but we know don't the operations that are happening in the background while taking the picture.
In Python, we can achieve abstraction by using abstract classes and interfaces. You may be wondering, what is an abstract class? It is any class that contains at least one abstract method and cannot be instantiated, but can be subclassed. An abstract class is useful when designing large functions and it also provides large interfaces for the implementation of components. Let's create an abstraction in python using the abc module.
Code
from abc import ABC, abstractmethod class Shape(ABC): # abstract method @abstractmethod def sides(self): print('This is the main class') def normal(self): print('This is a normal class') class Triangle(Shape): def sides(self): print(' Triangle has 3 sides') class Square(Shape): def sides(self): print(' Square has 4 sides') t = Triangle() t.sides() t.normal() s = Square() s.sides() s.normal()
Output
Triangle has 3 sides This is a normal class Square has 4 sides This is a normal class
Explaination
We created the abstract base class Shape using the ABC module that was import from Python, and also created an abstract method sides(). The Triangle() and Square() classes inherited the base class and have their own side() methods. The were created and the sides() methods was invoked, The sides() methods, hidden definitions were activated when the user creates the Triangle() and Square() objects and invoked the sides() method. The sides() method defined in the Shape() class in never invoked, while the normal() method is invoked because it is not an abstract method.
Conclusion
Abstraction is important whenever we want to hide core functionalities from users. An abstract class can contain both a normal method and an abstract method. We noticed that abstract method in the abstract class are never activated while the normal method was activated.
Top comments (0) | https://dev.to/isaiaholadapo/abstraction-in-python-4bi8 | CC-MAIN-2022-40 | refinedweb | 375 | 60.85 |
As we know, if we press the F8(Play) keyboard button, iTunes or Music .app opened in default on macOS. Some Swift classes are available for preventing this keyboard shortcut but they are not compatible with C# & Xamarin. Fore example, Spotify macOS app have this ability. If you press play button on UI once, it takes the control over iTunes and handles the "Play" button key event.
I have this code block. However, macOS cannot fires the code block because of iTunes. The other keys like letters and numbers working correctly:
private NSEvent KeyboardEventHandler(NSEvent theEvent) { ushort keyCode = theEvent.KeyCode; if(keyCode== 100) //rbKeyF8 100 is play button. { switch (isplaying) { case true: StopMusic(); break; case false: PlayMusic(); break; } } // NSAlert a = new NSAlert() //{ // AlertStyle = NSAlertStyle.Informational, // InformativeText = "You press the " + keyCode + " button :)", // MessageText = "Hi" //}; //a.RunModal(); return theEvent; }
How can we do it with C# ? Thanks.
Register a new subclass
[Register ("App")] public class App : NSApplication { public App (System.IntPtr p) : base (p) { } public override void SendEvent (NSEvent theEvent) { base.SendEvent (theEvent); } }
In Info.plist change the PrincipalClass
<key>NSPrincipalClass</key> <string>App</string>
Answers
It looks like you need to subclass NSApplication and look at sendEvent
Apple does not make this easy
I read the Stackoverflow question before. But I have a stuck thought in my mind which is critical:
How can I create a NSApplication subclass ? In existing classes or should I create a new class ? Not my application is a NSApplication ? How will two NSApplication work at the same time?
Thanks.
Register a new subclass
In Info.plist change the PrincipalClass
Thanks ChrisHamons, it solved my problem. Now my app responds the play button press. However, iTunes opened after the my app's respond. How can I prevent it to open ?
I don't believe you can mark the event as "handled" give how Cocoa handles NSEvents.
I'm not sure what APIs exist beyond AppleScript for talking to iTunes. | https://forums.xamarin.com/discussion/174323/prevent-itunes-music-app-start-after-pressing-f8-play-button-with-c-xamarin | CC-MAIN-2021-25 | refinedweb | 322 | 68.47 |
Playing with Apache Beam on Azure — Part 1: A Basic Example
A basic exploration of Apache beam with Azure, using it to connect to storage and transform and move data into Cosmos DB.
Something I’ve been working on recently is using Apache Beam in Azure to move and transform data. The documentation on this is sparse to the point of being non-existent so hopefully I’ll be able to fill some gaps with what I discovered successfully implementing a small Apache Beam pipeline running on Azure, moving data from Azure Storage to Cosmos DB (Mongo DB API).
Apache Beam
First off, what is Apache Beam? Well basically it’s a framework for developing data processing jobs for streaming or batch data, which can then run on a variety of underlying engines or “runners”. So using one of the Beam SDKs you can develop a job to take, say, JSON messages from some source, do some transformations on them and push them to a downstream sink, and then you have the option to run those jobs on a number of distributed processing engines, for instance:
- Apache Spark
- Apache Flink
- Apache Samza
- Google Cloud Dataflow
There is also a Direct Runner which runs things standalone on a single machine, and is typically used for development and testing etc. There’s plenty more information on Apache Beam here: Learn about Beam (apache.org)
In this example I’ll use the Direct Runner to validate we can connect to things in Azure and do stuff with that data. In a later post I’ll talk about setting up Beam to run on other engines like Flink on AKS (maybe).
Connectors and Azure
Which does bring me to the subject of connectors. As with any data processing tooling, the key question is — will it be able to connect to my data sources? If you can’t fetch your data or put it where you want it, this limits the value of any tool. The list of “default” connectors for Apache Beam is here: Built-in I/O Transforms (apache.org)
As you can see — the default list has a lot of gaps where Azure is concerned. If you look at the list of supported filesystem interfaces for instance there’s support for AWS and GCP storage but not Azure storage.
However that’s not the end of the story, as there exist SDK extensions to allow reading of Azure Storage accounts, they’re just not very well documented.
Similarly on the Cosmos DB end there is no connector currently for the Core API, but in my example I was using the Cosmos DB Mongo API, which worked perfectly using the Beam Mongo DB connector.
Let’s Just Get On With The Example
First off, I’m going to assume you’re familiar with a few things to follow this example:
- Java programming
- The Maven build automation tool
- Navigating around the Azure portal and basic management of Azure Storage and Cosmos DB accounts.
In this simple example I’m going to create a pipeline using the Java SDK for Apache Beam that:
- Creates a connection to an Azure blob storage account
- Reads a sample.csv file from that account
- Converts each row in the CSV file to a JSON document
- Writes the JSON documents to Cosmos DB using the Mongo API
I’m not going to show you how to create the storage / Cosmos accounts, or upload CSV files to the storage account, there’s plenty of documentation covering that sort of thing.
For now we’ll use Apache Beam’s Direct Runner, which essentially means it will run stand alone on your dev machine. Where Azure is concerned you can also tell Beam to run this on Databricks or HD Insight using the Spark Runner, or Apache Flink on AKS using the Flink runner. I’ve had this running on Flink on AKS too, and how I did that I may cover in an additional post.
Before we start in on the code, let’s get the dependencies out of the way. I used Maven for this example, and we need to add dependencies for
- The Beam Java SDK
- The Beam Direct Runner for Java (the “runtime” for Beam)
- The Beam Java SDK for Azure IO
- The Beam Java SDK for Mongo (to write to Cosmos with)
Here’s a snippet from the pom.xml:
<dependencies>
<!-- Adds a dependency on the Beam SDK. -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-core</artifactId>
<version>2.27.0</version>
</dependency> <dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-direct-java</artifactId>
<version>2.27.0</version>
<scope>runtime</scope>
</dependency> <dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-azure</artifactId>
<version>2.27.0</version>
</dependency> <dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-mongodb</artifactId>
<version>2.27.0</version>
</dependency><!-- other dependencies here... --></dependencies>
At the time of writing I was using Beam version 2.27.0. There are probably later versions available now. Probably you should define the Beam version as a Maven variable so you only have to change it in one place, if you care about that sort of thing (Please Note: You will not get good programming practice advice from this blog post, or indeed any of my posts).
Now, here’s an overview of the main class that runs the example. As you can see it’s very basic:
All the activity revolves around the “Pipeline” artefact. The “pipeline” in Apache Beam is the backbone of the process — it defines the series of transformations that you want to carry out on your data.
So the first thing we do is create a PipelineOptions object (actually it’s the second thing. The first thing is needlessly printing “Simple Beam on Azure Example”). This object, as the name suggests, is where we set all the options for the pipeline we’re going to create. Here is where we set up the parameters for connecting to Azure Storage, which is different from the way it works with some of the “native” connectors that Apache Beam has (as we’ll see).
Then to set up the connection to Azure we transform the options object into a BlobstoreOptions object in order to set the connection string parameter on it:
PipelineOptions options = PipelineOptionsFactory.create();options.as(BlobstoreOptions.class).setAzureConnectionString("<BLOB STORE CONNECTION STRING>");
The connection string is the usual Azure storage connection string that you can obtain by looking at the Access Key settings for the storage account in the Azure Portal.
An interesting thing to note is that you can use the options.as(<CLASS>) method more than once on the same pipeline options object, applying different kinds of options types to combine multiple different options together.
Once those options are set, it’s time to create our main Pipeline object, passing the options to it as we do so, like this:
Pipeline p = Pipeline.create(options);
Then it’s time to use function chaining to define the series of transformations that the pipeline is going to carry out:
p.apply(TextIO.read().from("azfs://ACCOUNT/CONTAINER/FOLDER/file.csv"))
.apply(MapElements.via(new ConvertStringToDocument()))
.apply(MongoDbIO.write()
.withUri("mongodb://<MONGO API CONNECTION STRING>")
.withDatabase("beamtest")
.withCollection("people"));
This sequence of actions does the following things:
- First we use the TextIO.read() method to read our sample CSV file (using the azfs:// path syntax) from the blob storage account we set up in the pipeline options. The TextIO (apache.org) class encapsulates a bunch of basic IO transforms that relate to text files, reading/writing etc. Note that the file path has to include the storage account name and container name. Note Also that we don’t have to specifically tell it to open the connection. It does that automagically.
- Next we use the MapElements.via(..) call to do a simple transformation from the CSV row string to the JSON format we want. Essentially I pass into the call a function (ConvertStringToDocument) which implements a certain interface, and this function when it’s passed a String containing a row from the CSV file, it spits out a JSON document in the format we want. I’ll show the code for this further down, but you can see examples of this approach in the MapElements (apache.org) documentation.
- Now that we have JSON documents in the pipeline, it’s time to write them to Cosmos. At the time of writing there isn’t a Core (SQL) API connector for Cosmos in Apache Beam. So I cheat a little here and use the Mongo connector to write to Cosmos’ Mongo API. The MongoDbIO class is one of the out of the box connectors and you can see how we build the connection object, passing the connection string, and details of the database and collection. The connection string can be obtained from the Cosmos account details in the Azure Portal, in a similar way to the storage connection string above — see Connect a MongoDB application to Azure Cosmos DB | Microsoft Docs.
At this point we haven’t actually processed anything, we’ve just defined what the pipeline will try and do when we run it.
As a little aside, I want to quickly look at the code for the MapElements transformation — a really simple way to take some data thing and turn it into some other data thing.
You need to be mindful of what types your transformations are passing into the pipeline, or expecting to get from the pipeline. For instance here the TextIO.read() from blob reads the file line by line and we get strings back as a result. The MongoDbIO.write() method though (see MongoDbIO.Write (apache.org)) expects to see org.bson.Document objects passed to it. If we try and just pass it CSV strings, or even string representations of JSON, it will simply fall over in a heap of Exceptions. Hence the ConvertStringToDocument class I wrote takes a CSV string, and shoves it into a Document object.
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.bson.Document;public class ConvertStringToDocument extends SimpleFunction<String, Document> { @Override
public Document apply(String input) { try { String[] elements = input.split(",");
Document document = new Document();
document.put("name", elements[0]);
document.put("gender", elements[1]);
document.put("city", elements[2]);
return document; } catch (Exception e) {
throw new RuntimeException(e);
}
}
}
Again, this is a really basic ugly example, but it demonstrates the principle. The function extends SimpleFunction and overrides the apply() method to take my CSV row String, pull out the first three columns and create a basic org.bson.Document document out of them. So in my case I have a simple list of people, gender and location, and this:
kenny,male,glasgow
becomes this:
{
"name" : "kenny",
"gender" : "male",
"city" : "glasgow"
}
In an org.bson.Document object that the Mongo IO write method is happy to deal with.
Finally, all that’s left to do back in the main App class is run the pipeline we’ve defined:
p.run().waitUntilFinish();
This just kicks off the pipeline and then waits in synchronous fashion until it’s finished.
Running the example is just a case of running the main class in App just like you would any other basic “Hello World” type example. All being well you should see after a few seconds your CSV file contents happily written to Cosmos.
If you haven’t and are now instead looking at a big Java stack trace, well… there are plenty of things that could cause issues here. Did you open the Azure Storage firewall to allow your client to connect? Did you remember to put the name of your storage account in the AZFS file path? Really, at this stage you’re on your own.
Some Closing Thoughts
The example above really just shows how to connect to a couple of basic Azure services with Apache Beam, which I wrote up partly for my own benefit because the documentation isn’t there.
For this we only used the Direct Runner, which executed on your local machine doesn’t provide a lot of scale for transformations and of course has to pass data over the internet to the Azure services. The next step would be to run this stuff on Azure itself. You could run this on a VM in Azure still using the Direct Runner, but to get true scale you really want to take advantage of the runners that can distribute and parallelise the transformations — the Apache Spark runner for instance, or the Apache Flink runner.
After I got this running I next went and deployed this on an Apache Flink cluster, running on Azure Kubernetes Service (AKS) which hugely increased the scale of the processing possible. But that’s the subject for another blog post. | https://the9090rule.medium.com/apache-beam-on-azure-part-1-a-basic-example-cbcf72969575?source=post_internal_links---------7---------------------------- | CC-MAIN-2021-31 | refinedweb | 2,142 | 60.24 |
Introducing developer mode for the Agent
The Datadog Agent is deployed on a lot of machines, so its performance is very important. As you would imagine, we carefully profile the Agent’s code for efficiency and speed before each release.
Because the Agent is open source, it benefits from contributions made by developers all over the world, which is great. What’s not as great is that until now there was no easy and consistent way for the community to profile their Agent code before submitting a pull request. This led to unnecessarily long GitHub conversations with contributors while we pinned down and resolved inefficiencies. That’s why, as of the most recent release (version 5.4), the Agent ships with profiling tools baked in. We call the new functionality “developer mode.”
Who is this for?
Anyone actively working on or contributing to the Datadog Agent code will find the new developer mode to be an essential tool. Whether modifying the core Agent or creating a custom Agent Check, you will be able to see the impact your code changes have on performance.
Which metrics are supported?
A wide variety of metrics are available, but here are a few of the most important ones:
- CPU usage
- Memory consumption
- Threads in use
- Network connections open
- Total time to run configured checks
Profile individual Agent Checks
Let’s say you just wrote your own Check. Before submitting the pull request, you can (and should) run:
python agent.py check <check_name> --profile
This command will run the specified Agent Check just one time, and then print collected metrics and profiling information (run time, memory use, etc.) to stdout. Once your Check looks good, you may then want to turn on full developer mode and profile everything.
Profile everything with developer mode
To enable developer mode for the Agent itself as well as all Agent Checks, open your datadog.conf and add the following line:
developer_mode: yes
After saving the changes to datadog.conf, be sure to restart the Agent.
Once enabled, developer mode will begin collecting all Agent statistics.
You can also enable developer mode with the addition of the
--profile command line flag:
python agent.py start --profile
Without any additional configuration, the profiling metrics collected in developer mode are available in Datadog under the
datadog.agent.* namespace.
Additionally, since developer mode is built on top of the popular Python profiling library psutil (version 2.1.1), any psutil method supported by your environment is available. You can also report these additional metrics by editing the agen_etrics.yaml file, located in the conf.d directory. Please refer to the documentation on the Datadog Agent Project Wiki for more information on configuring agen_etrics.
Digging into collector.log
Because data collected while developer mode is enabled is sent directly to Datadog, you may never need to open the collector.log. Nonetheless, some example excerpts from collector.log are included below.
Memory leak checks
This block shows memory usage before and after a disk check.
2015-06-22 16:25:05 Eastern Daylight Time | INFO | checks(__init__.pyc:692) | disk Memory Before (RSS): 18685952 Memory After (RSS): 18722816 Difference (RSS): 36864 Memory Before (VMS): 2533859328 Memory After (VMS): 2534907904 Difference (VMS): 1048576
Collected stats
Agent stats include memory use, I/O, and so on.
2015-06-22 16:25:05 Eastern Daylight Time | INFO | checks.collector( collector.pyc:507) | AGENT STATS: [ ( 'datadog.agent.collector.memory_info.rss', 1435004705, 28442624, { 'hostname': 'vagelitab', 'type': 'gauge'}), ( 'datadog.agent.collector.io_counters.write_bytes', 1435004705, 608.1111111111111, { 'hostname': 'vagelitab', 'type': 'gauge'}) … ]
Top function calls
The log captures the top 20 function calls, as ranked by cumulative time.
2015-06-22 16:25:05 Eastern Daylight Time | DEBUG | collector(profile.pyc:37) | 2236475 function calls (2220860 primitive calls) in 383.244 seconds Ordered by: cumulative time List reduced from 930 to 20 due to restriction <20> Ncalls tottime percall cumtime percall filename:lineno(function) 20 299.986 14.999 299.986 14.999 {time.sleep} 21 0.051 0.002 83.260 3.965 checks\collector.pyc:249(run) 147 0.004 0.000 68.352 0.465 wmi.pyc:801(query) 147 0.154 0.001 68.348 0.465 wmi.pyc:1005(query) …
Where can I learn more?
Documentation on using developer mode is available at the Datadog Agent Project Wiki. A full list of process-level methods supported by psutil can be found at pythonhosted.org. | https://www.datadoghq.com/blog/agent-developer-mode/ | CC-MAIN-2017-30 | refinedweb | 736 | 57.57 |
Go Time – Episode #63
Changelog Takeover — K8s and Virtual Kubelet
with Erk St. Martin and Brian Ketelsen
Guests
Adam and Jerod jumped in as hosts for an experiment in quantum podcasting, letting Erik and Brian play guests to talk about Virtual Kubelet, building OSS at Microsoft, BBQ (of course), and other interesting projects and news.
Featuring
Sponsors
Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform.
Linode – Our cloud server of choice. Get one of the fastest, most efficient SSD cloud servers for only $5/mo. Use the code
changelog2017 to get 4 months free!
Rollbar – Our error monitoring partner. Rollbar provides real-time error monitoring, alerting, and analytics to help us resolve production errors in minutes. To start deploying with confidence - head to rollbar.com/changelog
GoCD – GoCD is an on-premise open source continuous delivery server created by ThoughtWorks that lets you automate and streamline your build-test-release cycle for reliable, continuous delivery of your product.
Notes & Links
Virtual Kubelet (an Introduction)
Virtual Kubelet (the project)
The Changelog Transcripts
Interesting Go Projects and News
Free Software Friday!
Each week on the show we give a shout out to an open source project or community (or maintainer) that’s made an impact in our day to day developer lives.
Erik - Metaparticle
Transcript
So we have a short time span here.
We do, we have very short time. Let’s do it the right way, I guess… Do we need to give anyone the breakdown?
We need a spiel.
This is a crossover show, so Erik is still gonna introduce it like normal, but then we’re gonna kind of interview you guys, right? [unintelligible 00:01:04.07]
Yup, [unintelligible 00:01:06.10]
Okay, so I’ll still do the intro?
I was actually thinking about, what if I did the intro and act like I wasn’t Jerod – or, actually, act like I wasn’t Erik, and I said that this actually wasn’t GoTime, but it might be, and then Jerod, you say “I don’t think it’s really the Changelog either. Which one is it?” What do you think?
I think you guys should do the whole host thing for the whole show, right? Intro and all, we’re taking over.
Yeah, GoTime Takeover, I like that.
GoTime Takeover.
GoTime Takeover.
Well, we don’t do an intro though.
No, we really don’t.
Our intro is in the post, so we just start talking on our show.
Technically, we do. This could be the show.
This is the show.
Which show is it, though? Is it GoTime, or is it the Changelog?
It’s both. Simultaneously both show, and…
I’m so confused…
…and we don’t know where we’re at.
We should start at the beginning and work our way up.
We should introduce Erik and Brian, OR we should introduce Jerod and Adam.
It’s quantum podcasting…
Yeah, quantum podcasting.
Well, we’re here today to talk about Virtual Kubelet, which is something Erik and Brian, you are both super excited about, and Adam and I are both super ignorant about, and so I’m about to get schooled–
Speak for yourself, man…
Oh, you know about this?
I know all about it, I was part of the hack team.
Alright, Adam, tell us all about it.
My name isn’t on the list; why is my name not there? Anyways, I’m kidding around.
I like taking credit for your work.
That’s right. You were right, Jerod, I’m ignorant. Go ahead.
Well, since we’ve established that, help us out here, guys. Help us understand Virtual Kubelet - what it is, who built it… The whole spiel, and then we’ll pour into all sorts of questions and side conversations around it, but why don’t you two give us the rundown?
So I guess to kind of fully understand it, how familiar are you with Kubernetes itself?
So we’ve done shows on Kubernetes, which means we’ve had smart people teach us about it, and we haven’t actually used it IRL, or anything, so it’s very much academic and somewhat transient knowledge that floats in and out between my ears; I don’t know if that speaks for you, Adam, but very generic knowledge, no practical use of it, so a general rundown would be nice, too.
Okay, so from a high level, Kubernetes is an orchestration platform for containers, but really it’s more than that. To fully understand how it works, you can think about - there’s an API server as part of the control plane for Kubernetes. The control plane are all the components that Kubernetes kind of handles for you, and then you have your nodes with your logic.
[00:04:04.11] So you submit a spec for a resource that you would like, whether that’s a service, or you’re trying to run a pod, which is really just a group of containers, and the API server kind of recognizes that as desired state, whether you’re creating one or updating it or deleting it, and then there’s other components like controllers and things like that that run within the system that are just constantly trying to reconcile the differences.
So you just submitted a pod, I see a pod in the API, but I don’t see a pod running on any nodes, so I need to assign this to the node and the scheduler runs, and things like that.
Who decides what a reconciliation looks like?
That would be the job of the controller. There’s different controllers - there’s a controller manager process that runs that kind of encompasses some of those, but in some cases, like with the operator pattern, like Prometheus and things, it has its own controller. And the controller’s job is just to kind of look at what’s in the API and watch it and monitor the thing that it controls and try to reconcile the differences. In the case of a pod, first the scheduler kind of jumps in and assigns a node through kind of looking at what else is running and available resources… But yeah, each resource type kind of works the same way - you’re just kind of inserting it, and some process or another within the system is monitoring that and then trying to reconcile the differences. It’s just kind of like a big reconciliation loop.
The Kubelet is actually the agent that runs on all your worker nodes, and it looks at the things that the scheduler has assigned to it, and looks at what’s running in Docker and then reconciles the differences. It sees a pod in the API that it doesn’t have running, it starts it; if it sees something running that is no longer in Kubernetes’ API that’s assigned to it, it deletes it, and that’s sort of just rinse and repeat, that’s how the process works.
The Kubelet has a bunch of other jobs, too. I actually wrote a blog post today that kind of points out some of that stuff; it looks at the pod and it tries to fetch the images from the image repo, it attaches volumes to the containers, it handles the kind of networking, setting up the interfaces and dropping them in the container… So it’s kind of the workhorse for each node.
What’s the point of it? From my understanding, it’s supposed to allow outside systems to call into the Kubernetes cluster?
For the Kubelet, or the Virtual Kubelet?
The Virtual Kubelet. What’s the point?
Here’s kind of where we get into the Virtual Kubelet. The Virtual Kubelet is just a process, but it behaves the way the Kubelet does. So it just runs somewhere in your cluster as an application, but it can access the Kubernetes API and adds a node resource to the cluster; so it just kind of posts a spec saying “Hey, here’s a node”, and Kubernetes thinks that that’s a node, which means that the scheduler starts assigning work to it. So the Virtual Kubelet just sits here and monitors the API for any pods or things like that that could assign to it. And then rather than kind of interacting with a physical host, we created this provider interface, which just really – you implement a few methods, like Create, Update, Delete, the Virtual Kubelet kind of does that reconciliation loop and populates the environment variables and volumes and things like that from your secrets and maps, so that you kind of have to do minimal work outside of implementing how a pod gets deployed. That’s your job.
[00:07:56.26] The Virtual Kubelet kind of runs and calls into the provider at lifecycle events, like “Hey, we know that this pod is in Kubernetes, and we’ve asked you about the pods that are running (or whatever the equivalent is in the provider interface), what’s running there, and we know that you don’t have this. Please deploy it.” It works in that way. Or “Hey, this is no longer here” or “We’ve received a Delete event on this pod, and you’re still running it. Please tear it down.”
So let me see if I’m tracking this here, Erik. So a Kubelet normally runs in the context of a node, and it speaks to the API server and vice versa, representing that node, so to speak. And then it kind of manages or handles that node’s specific context. What is a node usually? Is it like a network endpoint? Is it like an IP address on a network? Is it a virtual machine? What does a node represent?
So a node is usually - in the Kubernetes context it’s either a physical or virtual machine.
It’s a server.
Okay, very good. That clears that up. So then a Virtual Kubelet is basically saying, “Hi, I’m a Kubelet and I have a node, and I can answer all the same regular API calls that an API server would expect a Kubelet to respond to”, only it’s not really any of those things; it’s just faking it.
Exactly. Now, it could be; you could use your Virtual Kubelet to run Docker containers just like Kubernetes does, but you could also build a provider for the Kubelet that did completely different things. One of the first providers we shipped was the ACI provider that lets us use Azure container instances to start work from a Kubernetes cluster without actually having a live node. Azure container instances are ephemeral; you start one and it goes away when it’s gone, and you don’t need to restart it.
And I think part of the confusion we’ve had in conversations is the fact that like it’s a node but it’s not a node, and people wonder, “Is this a process that runs on a host instead of the Kubelet?”, or things like that. I think to kind of fully understand that, you just think about a node in Kubernetes sense is just an entry in the API server. You just add yourself, and you’re like “Yes, there is–”
It’s just something that gets registered into the server. It’s like a line item in the database, or something; it just knows about it.
Exactly, and then the rest of the system reacts based on that, like “Okay, now I need to collect metrics from this”, or “Now the scheduler is allowed to schedule things to this.” Or “I know that I assigned this pod to this node, so when a kubectl exec comes in, I know I need to forward that request to that Kubelet”, because it’s responsible for that container that you’re trying to get in. So it’s really just an entry, and then from there you’re interacting with the API.
It’s super interesting because of the use cases. We talked a bit about that, and I think that’s the part that confuses people the most. You’re like, “Okay, so you’re like masquerading as a node, but why?” Brian pointed out ACI… I don’t know how familiar you are with Azure Container Instances, but…
I know they’re ephemeral, because Brian just told me. [laughter]
It’s true, they go away. [laughter]
So the best way to kind of think about Azure Container Instances is – they’re called a container group in the context of Azure Container Instances, but it’s basically like pods as a service. So you’re not really thinking about Kubernetes and the whole cluster and some of the other resource types that exist, you’re just like “Here’s my group of containers that kind of share a namespace. Just deploy it. And I want a public IP.” You’re only trying to run this one pod, or something like that.
[00:12:13.01] There’s no kind of service discovery and all of these things, and it makes it really interesting for people who just only have a couple things to deploy, or for quick workloads, like you just have workers and jobs and things like that that are running, that are fairly isolated, but you’re only paying per second while these things run… And this is kind of where the power of Virtual Kubelet comes in, because now you can kind of have this node that exists in your cluster, with endless capacity. It could just kind of burst out in parallel, and you could run 100 workers on ACI pay-per-second, and then when they’re done, they’re done. And you don’t have to have the spare capacity in your cluster to support all these batch jobs, or CI/CD or things like that. They just kind of run out there in ACI and come back, but as far as your infrastructure is concerned, you’re just treating it the same way as your normal cluster, except maybe having some node selectors and things on there, saying like “I would like these types of jobs to run out in ACI.”
It’s like a temp agency.
Yeah, there you go. I was gonna ask permission to play the cynic here for a moment, because…
Sure.
So the cynic might say, “Okay, this is a hack so that you can run ACI with Kubernetes, and that’s very much trying to get us to just use ACI.” Are there other uses? Is this what it’s for? Does it go above and beyond, or is that the goal, and now the goal’s accomplished, and now we should go try it with ACI?
No, we created it with kind of like the modular back-end; we wanna encourage other people to implement these. We’ve got companies like Hyper.sh jumping on to build a connector to their systems… So we’d like to see this expand out more. Would we love you to use this with ACI? Absolutely. But I think it’s more important than that, because we’ve got kind of like the Kubernetes landscape going on, but serverless is also catching on, and I think that this type of Virtual Kubelet scenario is a really awesome bridge in between the two, where you have these workloads that are really intermittent, whether that’s a spike in traffic, or a batch job, or just CI/CD, right? Think about a commit-heavy day in CI/CD and how long you might have to wait for your commit to run through CI/CD, because you only have one virtual machine dedicated to that, so you only run five in parallel, or whatever you have that configured for.
This – you don’t even actually have to have a VM for your CI/CD, right? It doesn’t matter whether there’s one commit or 20, they just fan out in parallel and you just kind of pay per second while they’re running, and when they’re done, they’re done. And in a lot of cases, it may actually be cheaper for you to do that, because you’re not paying for all that idle time.
To the note of the agnostics to this, you’ve got in this diagram in this post you mentioned ACI (Azure Container Instances), AWS, and then as you mentioned just before, Hyper.sh. Now, you’ve got those in your examples there, but this is also – you know, Microsoft developers were a part of putting this together, but it’s not under the Microsoft org on GitHub. Can you talk about why that is?
[00:15:57.04] I think that we made a concerted decision to give this all of the reality of being a community project, as opposed to “This is a Microsoft thing, so you can run ACI.” We want this to be a tool that people can use with ACI, but with anything else, too. We’ve already had discussions with other major cloud providers that we can’t name that are jumping on board to play, too. So it’s a community thing, and we didn’t want the big Microsoft badge on top of it. We’re happy to take the credit for building it, because it’s a really cool thing, but at the end of the day we want everybody to be able to use it, and people to jump in and contribute.
Can you talk about what the world was like before this Virtual Kubelet? I’m imagining that often we produce projects like this or solutions that were at some point before duct tape, sort of a band-aid. Was this possible prior to Virtual Kubelet that people do this before, and how did they actually achieve these goals?
So adding to Brian’s point about the community aspect, I think that we’re trying to evolve our own products and make them more usable and offer things to help customers solve problems, and I think things like Virtual Kubelet definitely do that, but I think more importantly though is the advancement of the community and the technology, and we’re – you know, Kubernetes is still so new when we’re trying to figure out innovative ways to use it and run it in different scenarios, for different workloads, and how to do that efficiently. So I think this is valuable internally to Microsoft, but we could also see the value to the broader community, and I think that’s why we decided that this should be done completely in the open.
Now, as far as “Did things like this exist?”, not to my knowledge. A few months ago Brendan Burns and a couple of other people put together a prototype of something like this to connect ACI to Kubernetes, kind of proved out the concept, and we kind of decided to take that and turn it into a much more fledged-out product, with more features, and a community effort.
I think there’s some stuff for doing serverless with Kubernetes; correct me if I’m wrong, Brian, but I can’t remember the name of the project… There’s one out there. But I think we saw this as kind of more of – so serverless I think was containers; you have the warm-up time of the container and stuff, and I don’t know whether we’re quite there yet, but definitely the batch and CI/CD jobs and bursting out into a cloud provider - I think that that’s the main appeal in the core use cases we’re focusing on first.
So James Levato in the chat would like to know, “Does this mean that a Virtual Kubelet will support PowerShell?” Can one of you all answer that question for him?
I’m not sure what that would mean, because the Virtual Kubelet really is just an application that runs and behaves like it’s the Kubelet on a node…
Inside of Kubernetes, yeah.
And Kubernetes supports Windows workloads already. So if you deploy a Windows workload on a Kubernetes cluster that has Windows servers on it, then you can already do PowerShell.
Yeah, and we do have the ability to pass in a flag when the Virtual Kubelet starts up, to tell it that it should behave as if it’s a Windows node. So you could definitely do that to grow your Windows workloads out into ACI or any other provider as those start getting implemented.
[00:19:53.03] James, hopefully that answers your question. If not, reformat it and ask it another way and they will address it, if we have a more full understanding. Just going back to the Microsoft thing, I’d like to introduce a little bit of a meta-conversation, because it’s something that Adam and I think about, and I’m sure you all have thought about, in just trying to navigate life with a job and also - I’ll just, for lack of a better term, call a personal brand, or like the person that you are… And both of you have recently joined Microsoft as employees, and you do a lot of your public speaking in that context, you’re doing open source in that context, whether on the job, off the job… How did you deal with putting on and taking off the “Microsoft hat” and the way that that signals to your friends and followers online and whatnot?
You go first, Erik. That’s a tough question.
I think actually part of the appeal to this job, and a lot of the discussions early on was that we were to be ourselves; we were to be genuine and altruistic. There’s not really this push from executives or marketing for Brian and I to run around and shout from the rooftops, like “Everybody use our stuff!” We get a lot of opportunity to contribute in things, but they just want us to be us. If I’m excited about a product, I’ll talk about it, and if I’m not crazy about it, I won’t talk about it.
One of the interesting things though is that we get the opportunity to use a lot of this stuff, right? Things we didn’t have time to play with when this wasn’t our job. AKS, which is our managed Kubernetes instance - we got to play with that before it was announced to the world, and we got to offer really good feedback to the product teams about things that we thought the community would want or need, or questions that we have. That’s super appealing, because you kind of get to – Brian and I are more members of the community; we’re advocates, but we advocate on behalf of the community to the product teams and documentation teams. We’re deeply ingrained in these communities, and this is what we think that they would want, or these are the problems they’re facing.
So it goes both ways. Brian, we’d love to hear your thoughts on that.
Yeah, Erik covered quite a bit of it, and I would just reiterate that the main focus of the conversations when we started was – or at least when I started; I wasn’t there when Erik had his conversations, but the main focus of them was just how much they wanted us to be ourselves and continue to be ourselves, and not put on the Microsoft marketing hat. So all of the things that I represent when I’m talking, or online, on Twitter, blog posts or whatever - they’re honest and not sponsored. They’re things that I’ve discovered which are fun, or things that I’m doing which are interesting, and some of that is because Microsoft has allowed me the freedom to go play with things that I just didn’t have time to play with before… But I’m not going to talk to the public about Microsoft products I don’t enjoy. Instead, I’ll turn around and talk to those product teams and say, “You know, the people that I know in the Go or Kubernetes community would probably enjoy this particular thing a lot more if it did X, Y and Z.” That’s a really nice place to be in, because the people internally at Microsoft are hungry for that kind of data, and really want to build products that everybody loves, and it allows me to keep a good conscience about the things that I’m talking about online.
[00:23:54.29] And I think it’s hard too, because evangelism kind of got a bad name for so many years. It was kind of, “Buy people with good names and have them talk about your stuff”, and I think people kind of feel dirty when they hear that, and that’s why there’s the whole advocacy thing. So I think it just takes time for people to understand the difference.
I think different companies do advocacy differently, too. I know Google has a very similar advocacy program like we do, where it’s more about being genuine to the community and helping the product teams evolve products or create new product offerings that solve problems that you’re aware of in your community.
When you think about developing products, you wanna create good things that people use. You often get detached from the people who are using it, you are too busy building.
So Erik, when we were at KubeCon you mentioned to me this project, and sort of the back-story on how it came together was I guess being in Austin for a week or so prior to the actual conference, and you were sort of already there for a couple of weeks. Can you kind of talk about maybe the early process of organizing that and maybe whatever the back-story might be to kicking off this project?
Yeah, I didn’t organize it per se; we had talked about rewriting this in Go, because a lot of the people who were working on similar projects in Kubernetes itself, that was the language it was written in, and then it sort of evolved into this “Well, wouldn’t it be cool if…?” We didn’t really dictate what the back-end was, we just kind of provided this project where you can kind of invent what the node actually represents.
So yeah, we were all scheduled as a team to go out to Austin, and talking with Ria, the PM on the project, and Robbie from ACI team - like, “Let’s just get everybody to go out early and hack on it”, and I think it might have been Brian Liston, who is Brian and I’s manager, I think it might have been his idea.
We kind of all got together for a week, and it was actually – even internally to Microsoft, it was a pretty big deal, because what was it, Brian, like eight different teams were involved?
Yeah.
So yeah, we had some CDAs (cloud developer advocates), we had some (I think it’s called) customer solutions engineers… I forget what CSE means. We had some people from the ACI teams, and people from the Azure Container Service team, we had people from the CLI team who built out stuff where there’s now a command within the AZ tool that Azure provides to just install it for you… We had people working on CI for it, we had people working on the actual implementation… it was just super cool to see this big group of people from different teams and even organizations within the company just kind of like all jumping in and making it happen.
[00:28:00.00] It was one of those things – we started working on it as we all had time, weeks leading up to KubeCon, but it really didn’t kick off and start development until that week there, and it was just awesome to watch it get to the point where it’s at in one week.
That’s interesting to hear that you were working on it prior to it; it would make sense, but I wasn’t really sure where the context began. Who’s idea was it? Was it a meeting and someone was like, “Hey, we’ve got this problem…”? How did the idea get formed, who was leading that?
I’m actually not sure. Brian, do you know? I know Brendan Burns was the first person to spike out a prototype of connecting these two. In that case, it was just really the call to the ACI connector; its job literally was just to bridge Kubernetes with ACI. I’m not really sure who had the idea… My assumption is it was Brendan, but it could have been somebody else.
As far as turning it into a modular open source project, I don’t really know either. We got together to talk about porting it to Go and fixing a couple of issues and adding some needed features, and then I think it was just kind of like this collaborative brainstorm of “Well, we could do this, and we could do that… We could make it an interface that could be implemented”, and it just sort of evolved organically through these discussions. Those things are usually hard to remember.
I just scrolled back through Slack and it was Erik’s idea to turn it into a Go interface that anybody could implement so that any provider would work. So Erik, once again, is being shy and humble, but it was absolutely his idea to turn this into more than just the ACI connector and turn it into something big.
Nice! Can you recall that, Erik, or are you just being humble? [laughter]
No, I honestly can’t recall it.
He’s such a team player!
A lot of people kicked out ideas and it’s really hard to remember where the ideas came from.
Why do you think you felt that way? If you can’t really recall it maybe you can’t remember this part, but what do you think motivated you to feel so community-oriented? [laughter]
Well, I think Microsoft is community-oriented too, right? It was meant to be open source from the beginning when we started building it. As far as other people implementing that stuff, it’s really interesting because what IP are you really protecting? Look at all of us who came together in a week and kind of got to where we’re at… So if you hoard it to yourself and not to everybody else, how long would it really take them to make something that’s similar…? So what’s the point?
Well, it was also self-evident too at KubeCon, just how much the community had grown, and it was all because of the original idea, which was to not keep Kubernetes a Google thing and make it more of a community thing, and then ultimately be donated to the CNCF Cloud Foundation (Cloud Computing Foundation), to have that as like an underlying DNA… It was self-evident at that conference, so I would imagine that being there, once you got there and just kind of seeing how the community has grown, that that’s the way things should operate in this community.
It’s always a juggle, right? Because on one hand you have to have your IP, you have products and you want to evolve those, and you wanna kind of keep stuff to yourself, so that you have these value-adds over competitors. From a business perspective it’s totally understandable, but I think on the other side of it, all of the cloud giants and things like that see the value of working together to evolve the space.
[00:32:00.28] From my perspective - and don’t know if this is Microsoft’s view, but this is definitely mine… Competing for customers is kind of a losing game. I don’t think if we offered Netflix free services forever that we could ever get them to convert over, right? So the idea of trying to compete directly and steal customers - I think that you’re putting in a lot more effort for little reward. Now, building abstractions, Virtual Kubelet, building things like Helm and Brigade and things like that that help make the cloud, and things like Kubernetes and containers more approachable to a broader audience - now you’re creating more customers for everybody, right? Because there’s more people that have not adopted the cloud than there are people there, and it makes far more sense for us to keep helping make it more approachable than it does to sit here and try to compete feature for feature, or hoard our knowledge in projects and stuff like that.
Well, speaking of people, let’s give some credit to those who are part of the team, but as a by-product of that, can you talk about – something you said earlier was you all hacked on it prior to the conference, but the idea was spawned to go ahead of time and sort of time-box some collected effort. It seemed a little bit like tunnel vision to focus on it, and out on the other end came this prototypical project, in time for the conference. Can you give some credit to the team that was involved and mention some names, but then also talk about what it was like to meet up ahead of time, where you met…? What were some of the circumstances you were in to make sure that you were all very productive?
Yeah, I mean – because we were all in different teams with our own priorities, and the CDAs travel a lot, and speak, and are creating content, the engineers on the product teams are busy with their own features and stuff, it was one of those, like, jumping in and out as people had time… So it made a lot more sense, I think – I give Brian and Ria a lot of credit for coming up with the idea of like “Let’s get everybody there, under the same roof, for one week.” It’s much easier to focus on it when that’s literally what you’re there for.
I’m trying to determine the scope of this project in terms of like surface area, as something that comes together so quickly, and I found things interesting on GitHub, I tried to look at the dependencies, and it said there weren’t any, but there’s a vendor directory and there’s a bunch of stuff in there, and then I ran a clock on it, and there’s like 1.8 million lines of code… So you guys definitely had some type of dependencies. But maybe help us out with understanding – you mentioned the effort that went into this. I was thinking a lot of it is the idea, the design, conceptual… How much code was cranking, and who gets the props on that stuff?
I think it was roughly – well, the end result, because a lot of code was created and then deleted… [laughter] So it’s much harder to tell exactly how many, but I think the end result if you exclude vendor stuff is around 4,000 lines of credit. I can mention a few names… I hope I cover everybody, but they’re all in the blog post - Brian definitely contributed, myself, Jessie Frazelle… I’m gonna butcher some names: Julien Stroheker, Neil Petersen, Ria Bhatia, Rita Zhang, Robbie Zhang and Sertac Ozercan. I wish I knew the last names here; you always know people by first name, but… I think that’s everybody, and I’m sorry if I left anybody out. It was crazy, and heads down coding, and…
[00:36:01.06] Let’s talk a little bit more about the possibilities now, because now you have this thing, you have this new opportunity, which is you can load up this virtual Kubelet inside of Kubernetes and basically be a facade for all these other things behind it… First ACI, and then also this Hyper.sh, which I’m just learning is an on-demand container, per-second billing, another provider… You list out a few things in the post, and you mentioned CI as one of them earlier in our conversation, but what are some other uses? I know serverless is a possibility, but potentially some drawbacks there, you have batch jobs… Open up into those and tell us why people might want to do this.
I’ve got a really good one. Kubernetes itself is very much a container-focused, container-oriented workflow, but the Kubelet really doesn’t care what it’s starting. So it’s entirely possible to register a Virtual Kubelet on your Mac, and as the workload, give it the name of a Bash script or some executable to run, and have that be the thing that gets executed when Kubernetes tells it to. So you could do this in a container-free environment, and you would lose all of the benefits of containers, but it’s easily possible to do something really crazy like that.
Yeah, and another one I list as a possibility in the post is virtual machines. So the Virtual Kubelet doesn’t care, right? Kubernetes only cares that this node exists, and gives it work. It doesn’t really care how it deploys it, things like that. The Virtual Kubelet - same thing; it just calls out and says, “I need you to create this pod or delete this pod.” So you could have your provider provision a virtual machine and then run that pod inside the virtual machine in complete isolation, like if you were running a multi-tenant environment.
So there’s all these creative things, and I’m really interested to hear other things people come up with, but I think the primary focus for at least phase one of rolling this out to be production-ready would probably be more along the lines of like your batch and CI/CD type stuff, where your core cluster where you have your provision VMs that are just on 24/7, they’re kind of set up at a capacity to handle your normal workload, with some headroom, and things like that… But then allow you to run your batch work that may be really intensive, or it takes a long time only running a single instance of; you could run as many as you want in parallel. And batch, CI/CD - think about it the same way. But those can run out in this virtual node that’s ACI, and you’re only paying for the time that they’re running. And they don’t have to be run serially, they can be run completely in parallel and you’re paying the same amount of money, it doesn’t really matter. But then you’re not paying for this idle resources running, so that you have leftover capacity for when your batch job runs at 3 AM, or whatever.
One thing you mentioned about serverless which definitely piqued my interest when I saw it is that you may have issues with warm-up time, because basically the containers need to spin up and spin down. Can you expand on that, and tell me why that’s different than a Lambda, or… I’m sure Azure has a serverless thing - what’s Azure’s called?
Azure Functions.
Azure Functions, thank you.
I’ll leave this to Brian to describe, because I’m newer to the serverless world, so I think he would have a much better explanation than me.
[00:39:46.23] First of all, I have to find that amusing, if I’m the resident expert on services
… [laughter] Because that’s just hilarious. But on the serverless side, when you’re running a function, you generally are executing code live in some sort of environment, but if you were to use ACI or some other Kubernetes-inspired thing to do that, then you’d have to download a container from a container registry, a Docker container… And the time that it takes to download that container could impact your startup time, which would make your serverless function slower on the first run, or on the first run on each node, since that Docker container would be cached for the second runs. So there definitely would be an impact in startup time with a container versus not a container.
Since you’re the expert, Brian, how do they do it on Azure Functions and AWS Lambda? Surely, they have to spin up something on-demand as well in order to get the environment ready for you.
There’s two answers to that. Both Azure Functions and Lambda allow code execution in a sanitized environment, but it’s not a container environment. So you’re just executing a function. It fires up Node.js and runs your Javascript thing, but that’s not in a container. I can’t answer for Lambda, but I know Azure Functions allows you to run a Docker container too, so if you’re using the dockerized workflow for either one of those, you’re already paying that price in startup time, but if you’re not, then it would be a big difference.
Especially in the Kubernetes environment, too… It can be even more slowed if the container that needs to run hasn’t run on that node before, because then the image has to be pulled, and then depending on how large the image is, you have to wait for that. That’s even how Kubernetes works, it’s an eventually consistent system. I use a declarative API to say “This is my intent, this is the desired state”, and then it evolves there. There’s no guarantee that that’s instant, the second that Kubernetes tells me “Yay, I accepted your new pod.” It doesn’t mean it’s running yet, and it could take who knows how long, depending on whether it needs to pull images, and things like that.
So the other thing that – you’re obviously very excited about this, but you wanna see what other people can come up with. So these are just a few potential use cases; it’s not quite production-ready yet, or it’s on its way to becoming production-ready? What would be like a call-to-action for people? …beyond use cases - maybe providers, people writing interfaces, people trying it. What do you want from the community at large at this point, with regard to Virtual Kubelet?
I’d like to see people actually using it in some real use cases, and start fixing things that come up. Like we said, this was an effort where we all came together and hacked on it for a week, a little time leading into it, so it’s very much still in its prototype phase. For the most part it works, but I imagine there’s some rough edges and there’s different areas that we still need to solve for. But yeah, mostly trying it out, reporting bugs… I’d love to hear use cases people think of, or different providers…
It’s working its way towards production. We’ve got some people using it internally and playing with it, so we’ve been fixing things that come up.
I think one of the first providers that we’ll see that has a generalized business use case is like a Jenkins worker, where you run Jenkins Master, or whatever they’re calling the Jenkins Master thing now - you run that, and then it spins up Virtual Kubelet instances to do each one of the tests or the suite of deployment tasks, and then they go away. I think CI is probably going to be the earliest use case for something like this, but I also agree with Erik, we’re gonna see some interesting stuff, too.
[00:44:10.25] I’m really surprised by the number of people who see the vision. I knew for us and Hyper.sh who had forked our original connector that Brendan Burns had written - I knew those people would get it, like “Oh yeah, we can work on it together.” But the number of people who saw the vision of like “Oh cool, now we can Kubernetes and it doesn’t actually have to be backed by a physical node…”, and use some of this on-demand infrastructure as part of your normal cluster - it was actually really cool to see that, and to see one of the keynote speakers mention that… It was just like, “Whoa…!”
Yeah, one of the things that I’d like to see, and I would write if I had any time, is something like Xen Hypervisor adapter for Virtual Kubelet. Xen has an API - not a complicated one even - and it would be relatively painless to stand up a Xen node and use the Virtual Kubelet to run workloads inside Xen virtual machines, easily. That’s another use case that would be really straightforward with Virtual Kubelet.
[00:45:30.18] to [00:46:09.14]
Alright, so this is a hybrid show with Changelog and the GoTime.fm crew. In the GoTime Podcast we like to bring up interesting news and interesting projects that have come across our news desks over the course of the week, so we’re gonna kick that off now. Lots of interesting things have happened since the last time we gave out news. Probably the biggest is the Go 1.10 Beta 1 release. Lots of things changed there behind the scenes, not a lot changed that’s visible though, which is kind of nice… As per the Go usual. Erik did you have any favorite feature of Go 1.10 that you wanted to hit?
I mean, with every Go release there’s always performance improvements, and I know that there was some stuff in there about lowering allocation latency and improving the garbage collector, but a lot of the stuff that I saw that was really cool was surrounding testing. It now supports caching your test results. If it knows that none of the code behind it has been changed, it just runs and then produces the output of the last run and shows that it’s cached. So that should make consistently running your unit tests (your whole suite) much faster. It also runs govet before it does the tests, which is super cool.
It’s interesting you mentioned the cache test results… That’s actually a bonus side effect of the compiler changes that they made. The -a flag that we have in the previous versions of Go, that would force you to recompile everything - so if you did gotest-a or gobuild-a, they would recompile all the things underneath the covers… That’s no longer supported, it’s no longer needed, because the compiler now knows based on the contents of the file whether they’ve changed, and it doesn’t use file timestamps; I think I have that the right way.
[00:48:11.29] So now it will only compile the things that are absolutely necessary to compile, and that benefit will be mainly in compile times, but it also comes across in terms of tests, too. So we don’t have to rerun tests that have already run successfully with the exact same code… So I’m looking forward to increased speed for compile times; that will be fun, as always.
Another exciting thing is if you’re not watching the Gopher Academy Blog, we started our annual Advent Series, and there’s a whole bunch of good articles in there already, like writing a Kubernetes-ready service from zero, there’s a gRPC one in there in Go, Brian wrote one about repeatable and isolated development environments for Go, Damian Gryski wrote one on minimal perfect hash functions… So there’s a bunch of good ones in there already. That’s not like all the ones I say are good, these are just the ones that I can think of off the top of my head, and there’s still a couple weeks left, so definitely follow that if you’re not already… We’ll drop a link in the show notes.
Yeah, it’s a really good series this year, lots of really great articles. So I came across something that should inspire the hackers in all of us. Everybody who has anything close to a modern car has that obd2 port underneath the dash, and I’ve always wanted to play with it, interface my–
Me too…
…and just do something super hacky and fun and awesome. Well, somebody on GitHub released a Go interface to the obd2 system, and they called it elmODB. So I’m assuming that Elmo is like the Sesame Street Elmo, but it’s elmODB, and that’s at GitHub.com–
elmOBD.
elmOBD.
It’s a database, or what’s going on…?
It’s OBD.
Oh, OBD, not DB.
I said ODB. I said that wrong.
The old dirty database.
Yeah, it’s the Go adapter to that. In theory, you could bring a laptop into the car and really start hacking into stuff, and I intend to do that at some point really soon, because that just sounds fun.
Is there any way, Brian, that you could somehow hook your car up to your Go-based barbecue system, and maybe when you rev the motor, or something, it barbecues better? I don’t know, I’m just spitballing here… What can you do?
These are good questions that I should probably–
Barbecue is better. I like that.
Barbecue is better…
Barbecue is better somehow… I don’t know. That’s a good question, I don’t know the answer to that. I can’t think of an immediate application, but that doesn’t mean that one doesn’t exist.
Real quick, Brian, for the Changelog side of the listeners who haven’t heard about your barbecue system - probably most of the GoTime listeners have, but maybe there are new ones who haven’t… Can you just tell us about this? Because it’s so awesome.
Sure. It’s a Raspberry Pi setup that Erik and I have been building for just a little over a year. It includes some hardware pieces, electronic pieces that control the air flow into a fire-driven barbecue… So a real old-school barbecue, with a fire pit. We use a Raspberry Pi that has a relay; the relay turns on or off a fan which feeds the air into the fire pit, which either dampens or increases the fire temperature. Then there are temperature sensors that determine the temperature of the smokebox, so we know whether or not we need to increase the temperature of the fire or just let it smolder for a while.
[00:52:04.19] The whole thing feeds MQTT data off to a Grafana dashboard, so we’ve got gorgeous graphs that show us how hot the food is, how hot the firebox is… It’s just a great, big IOT barbecue blast.
That’s beautiful. Does the chart go online somewhere, so people can remotely participate in your cooking sessions?
It’s funny you should ask that… [laughter]
So it is live. You got it? I’m loading it up right now…
You won’t see any data there right now because nobody’s barbecuing, but if one of us were barbecuing, you’d be able to pick which of the two grills, on the top of the screen where it says Home, pick either Brian or Erik’s, and you could see the feeds from our barbecues.
Well, what are you guys waiting for? I wanna see these charts move. Run out there and start barbecuing something. [laughter]
We’ve got a job, man… [laughter] I can barbecue every day.
Aren’t you usually barbecuing on Thursdays, though?
Thursday is a pretty big day for barbecuing, yes, but tonight we’re going to get a Christmas tree and stuff that’s gonna take me away from the house, so… No Q today.
Our OBD2 thing - is this where we insert the legal disclaimer that we are not responsible for you damaging your car?
Yes, that’s probably a really good place for that.
I had no idea this port actually even existed. I mean, I know there’s ports, but I didn’t know there was this certain port. I haven’t even considered the idea of plugging something into it and port-scanning it or finding ways to hack it.
That just does metrics though, right? There’s no write ability…
No, you can [unintelligible 00:54:05.21]
Can you?
Yes, you can.
Yeah, so if you go to a mechanic, or you go to Autozone or Advanced Auto or wherever and you have a diagnostic light on, that’s what they’re connecting their little machine to to tell you what the code means.
So it kind of connects to the CAN bus that goes throughout the car where all the messages from the internal computers kind of share… So yeah, you do have the ability to sometimes change stuff. You can definitely pick up the speed of the car, and RPMs and things like that through that port.
Remove the governor…
Well yeah, how much you can change really depends on the car manufacturer. Some manufacturers have a decently secure system, and some are wide open. I mean, you could literally do things like turn on the turn signals from your computer.
What’s security like though? How secure do they make this thing? I don’t know anybody who’s hacking cars…
There’s people who have done it.
If they use TLS for encryption, then it would be really difficult to send messages to systems that require the encryption bits. But if they don’t use any encryption, then you just need to know what message to send, because it’s a giant bus. So you send a message out on the bus, and anybody who cares about it will do something. That’s why some actions that you perform while you’re driving cause other things to happen - you turn on the turn signal, but it turns off the left front headlight because the left turn signal is on… You’ve seen that in the new cars - that’s all bus-driven.
I wonder if there’s a database or an index out there of, like you had said, cars or trucks or vehicles that use or don’t use TLS for encryption; that way it might give you a leg up, like “Oh, I have a Ford Explorer. I can hack that.”
[00:56:04.21] I’m sure there is.
Yeah, there’s a lot of people who have reverse-engineered some of the messages on the canned bus, and things like that; there’s lots of people apparently tearing apart their cars and reverse-engineering them… Surprisingly.
Like, “Hey honey, I just bricked the truck. It’s no longer [unintelligible 00:56:19.26] Now it doesn’t move.”
[laughs] “We’re gonna need a new truck.”
And the manufacturer’s like “What are you doing with the –” what is it, ODB2 port?
Yeah, I said it wrong the first couple times and now it’s gonna stick.
Does it mean something, is it short for something, OBD?
Yeah, it’s On-Board Diagnostics.
Okay, that makes sense… And it’s version two, I’m assuming.
Yeah.
It just reminds me of a Changelog we did back in the summer, Adam, with Tim Mecklem, who reverse-engineered the blood glucose monitor for diabetics with Elixir, and basically was able to build interfaces into that to get the data off, and then eventually to run the – what’s it called, the insulin–
The loop.
Yeah, loop was the term that I remember, but… Anyways, just thinking about reverse-engineering things, and devices that should have encrypted communications between parts that don’t.
It kind of reminds me too of the movie The Martian. There was one point where Johanna– I forget the girl’s name now that I think about it, but she had to be tasked with hacking the computer to essentially override the ability for NASA to course-correct, essentially… And it kind of reminds me of that. She hacks into the code really quickly and determines that because it’s not a secure type of thing, it’s just meant to be a nice-to-have, not a need-to-have, they never really intended to put security on it, because they never considered that there would be mutiny, but of course, anytime you have a ship - or in this case spaceship…
Still a ship.
It’s still ship, but it’s a spaceship; you’ve gotta prefix it… That the crew may go against the will of its originator, which is NASA.
Well, let’s not get Adam too far on The Martian…
Yeah, don’t get me into the movies, man… But that’s fun stuff to think about, to have – everybody typically has a vehicle outside their house, and most people listening to either The Changelog or GoTime would be the type of people that would go out and find a way to hack this thing. I think it’s pretty interesting to think about all the listeners somehow breaking in or interestingly hacking their vehicle.
Or maybe - just maybe - they’re running Fuzzers against bbq.live, trying to ruin Brian’s dinner. [laughter]
Bring it!
Luckily, that is only push [unintelligible 00:59:01.23] there’s nothing on bbq.live that actually pushes down to the controller. It’s only metrics. If you find a way, I will be impressed.
Any more fun news to cover?
There was the Joy Compiler…
Oh, gosh…!
Yeah, it’s not complete enough, so in terms of completeness, Gopher.js is close to 100% or at 100%. Joy, I think they claim roughly 80% complete; there’s several things that don’t compile from Go to Javascript yet… So it’s not quite there.
[00:59:57.24] Honestly, it was one of those things that I’m glad they did it, because it’s awesome, but I wondered why they didn’t spend the time on changing something in Gopher.js if that was the – if there was something missing in Gopher.js, but…
I always wonder what happens there whenever you have a fork or a very paralleled project that’s got similar motives, similar goals, and they intend – or they go on their own, essentially. It’s confusing sometimes.
What usually happens is people get confused and they create a third option. [laughs]
Right.
Yeah. Rails and Merb, and… [laughs]
There’s a section on the website that says “How does Joy compare to Gopher.js”, so he does answer some of these things.
Oh, awesome.
So you can read that, we’ll link to it in the show notes, but the overall thing is they’re two different approaches to the same goal, so apparently just wanting to take a different angle at a similar end, which I think is worthwhile.
Touché then.
Yeah, it’s totally cool.
I think it’s worth mentioning too the design of this page. I mean, going back to some things we tend to – we’ve just had a conversation which is a future episode of the Changelog, it’s just like this intention behind your design… This page does instill some joy into me. And for those going to mat.tm/joy, which is the URL to go to to check it out, it says The Joy Compiler, and it’s beautiful clouds, vanilla skies, and an air balloon.
Pretty pastel colors, yeah. It’s very joyous. I would agree with that.
Outside of that, we can kick off #FreeSoftwareFriday, too.
Let’s!
Let’s do it!
I’ll go first. At KubeCon, Brendan Burns, who works at Microsoft and is one of the co-creators or Kubernetes, announced this new effort he created, which is called Metaparticle (Metaparticle.io) This is extremely interesting… Basically, what it is is this idea that through annotations in code or actually almost like a DSL within the language, just basically libraries that you can include, that you wouldn’t have to be familiar with a Docker file and a Kubernetes spec, and whatever you’re writing your stuff, and maintain properties like what port it’s bound to, and then the container and making sure it exposes it, and making sure the pod spec has that in there, and then making sure that the service that load balances between the instances of it also have that, and there’s kind of like this disconnect where if you change things…
And it’s just a lot for people to understand four or five languages to be able to build an application and deploy it to the cloud… So there’s this experiment of this grand vision of what would it be like if it was just part of writing code, like it was a library within your code, and when you compiled it, it just knew how to containerize itself and deploy it… And it’s really worth a look, and I’m interested to see these abstractions.
I think Kubernetes is an awesome abstraction over infrastructure, but I think we still haven’t got to “What’s the abstraction over that, that makes it just seamless to build an application and have it deploy?” … for most use cases anyway.
Yeah, and this reminds me deeply of a Twitter conversation I had maybe a month or two ago where I said something similar to this… You know, what kind of abstractions are we gonna build on top of Kubernetes? What are we gonna build on top of distributed systems? And somebody that I remember respecting said something to the effect of “No, there will be no more abstractions. We’ve made all of the abstractions we can, and we’re not gonna make any more on top of the stuff that we have. This is it.” And I thought, “Well, that is just the most close-minded thing I’ve ever heard.” Of course we’re gonna abstract more; if we didn’t abstract more, we’d all be writing Assembly language. We’ll always continue to grow like that, and I think Metaparticle is a great step in that direction of really putting the complexity of distributed systems aside and just allowing you to code intent.
[01:04:09.16] I forget exactly how Brendan worded it, but it was something along the lines of he wants to empower developers to build systems they wouldn’t normally build. Learning distributed systems is a challenge; it’s more things – as developers, we’re having to know and learn and understand a lot more things just to participate in the current way things are done.
I don’t know who coined this, but having conversations with Joseph Jacks, he talks about it… When you think about this, it’s like a pendulum - first we swing up and out, and that’s kind of like what we’re doing with some of the Kubernetes stuff, right? And then next we’re kind of down and in, where it’s sort of like embedded within the language, and Metaparticle and things like that are partly kind of like the down and inside of that…
I don’t think we’re gonna be done abstracting until we’ve recreated the holodeck from Star Trek. [laughter] At that point – that’s a good abstraction and we can just take a break after that and enjoy the fruits of our labor. But until then, more abstraction.
I want almost Matrix style; I just want to think it and for it to be, right?
Exactly. Who would like to go next?
I will. I’ve got an interesting terminal emulator that I’ve found… It’s at github.com/eugeny/terminus, and it’s yet another Electron app that you can install on Windows, Mac or Linux. I’m using it on Windows because it’s actually a really nice Linux feeling terminal emulator, which is something that’s missing in the Windows world. So it’s a really good emulator for that Linux feel, but on Windows.
Well, I’ll go next. A project that I love and I’m thankful for, and one that probably everybody has heard of, but still worth all the shoutouts, because Jack Likić’s Semantic-UI is a beautiful system akin to a bootstrap or a foundation, but one that just really speaks to both my design sensibilities, and really just the way that you use it once you get used to the semantics of it. It just allows for very quickly cranking out admins and prototypes and stuff like that, in a way that’s saved me lots of time, and also made me look not too bad with clients and whatnot over the years…
So if you don’t know about Semantic-UI - you probably do though, because it’s one of those 100,000 stars on GitHub type of projects - check that out, and thanks Jack for all the work you’ve put into it; I know he has. He’s been on the show a few times, and he has a ton of people bugging him all the time about bugs and fixes and improvements, and it’s like a huge, massive undertaking and a huge boon to the open source community, so… Check out Semantic-UI.
Surprisingly, I had not heard of that. I’ve been disconnected from the frontend space, so…
Well, there you go. Adam?
Is it my turn?
It is your turn.
Well, it’s a little meta here… I’m gonna mention our transcripts, because it was a participant in Hacktoberfest, and then also 24 pull requests… So if you go to github.com/thechangelog/transcripts, we have all of our episode transcripts in markdown format, open source, meaning that not only can you read them as a markdown file if you wanted to, but you can contribute to them. So that means that if you wanna help clean up “unintelligible”, which is super easy to find just by literally searching the repository for “unintelligible”, and you wanna listen to episodes and hack… You can easily contribute to open source by fixing those kinds of things.
[01:08:14.28] I love that they’re open source, because that was a dream of mine, and Jerod, you made it a reality, which I think, you know, it pays in spades when you don’t really consider the impact of it, but like rewinding… You know, if we didn’t do it like this, we would miss out on community. And chris48s and many others have submitted pull requests to improve these transcripts, and I think it’s phenomenal. We’ve got 28 closed pull requests. None of them by me, and none of them by Jerod, you know what I mean?
Does that mean people actually listen to this stuff?
Yeah! I’m gonna just say a few names… We’ve got Jared Dillard, Sharang (it’s some usernames, of course), Shari Hunt, chris48s, Dotan Dimet, Matt Warren… These were all obviously usernames. ShurcooL… Which was a self-correction; that was a GoTime episode. ComodoHacker, beardicus, merikan… Many others. caseyw, listener of GoTime, here in chat obviously; this time I’m not sure, but he usually is. A couple others. PeterMortensen…
The point is that we ship these shows, we transcript them so that they’re accessible to anybody, as best we can - not only in audio format, but also text format. We have a human behind the scenes, Alexander, who helps us make sure that every single episode we produce is transcribed to make it accessible, but he’s not perfect, and the community can step in and help, and we appreciate it.
You know, I think even outside accessibility it’s nice for discoverability, right?
Reading along… Cmd+F…
It doesn’t hurt for SEO for sure, but I’ll tell you where it really helps… It’s on the off-chance - and this happens once in a while - that somebody submits one of our shows to Hacker News, which is just the loveliest group of hackers in the world, every single time somebody would say “TLDL” (Too Long, Didn’t Listen). They’re like, “Why aren’t there transcripts?” They’ve always complained, “I wanna just read this, I don’t wanna listen. It takes too long.” And finally, finally we can hear silence, as there’s a transcript right there for you, and there’s nothing to complain about. That’s my own personal enjoyment.
They can complain about the content, finally.
And on that note, I’ve gotta go in like one; it’s a tight close to this show… But Erik, it wouldn’t be a GoTime or a Changelog if you didn’t take us out…
[unintelligible 01:10:54.29]
What do you normally say? You normally say “Thank you, everybody, for…”
[01:11:00.04] Well, thank Brian and I for being on the show… [laughter]
Oh, yes, of course…
…who is an awesome panelist on GoTime, and unfortunately not here today, but we’re thankful for her.
Good job, Jerod.
She’s a pillar in the Go community and the open source community. She’s been a huge part of our community for a long time. Carlisia, we love you, and we miss you today.
And we hope that she feels better.
Okay, now you can take us out.
Okay. So thank you, everybody, for being on the show. I love the fact that Jerod and Adam came in and took over, so it was kind of fun, especially getting to talk about something that Brian and I have worked on recently… Huge thank you to all of our listeners; you keep the show going. Definitely share the show with friends and co-workers.
You can find us at GoTime.fm, or @GoTimeFM on Twitter. If you wanna be on the show, have suggestions for topics, hit us up on GitHub.com/gotimefm/ping… And I think I’ve covered everything. We’ve got a short holiday break, so we may skip a couple of episodes for the holidays, but we’ll see you in a couple weeks.
See you, everybody!
Thank you!
Bye!
Bye, everybody!
Our transcripts are open source on GitHub. Improvements are welcome. 💚 | https://changelog.com/gotime/63 | CC-MAIN-2019-35 | refinedweb | 11,942 | 74.42 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.